
feat(mma): add fp16@fp16->fp32 mma and unit tests #101

Open
wants to merge 1 commit into main

Conversation


@liyanc liyanc commented Mar 6, 2025

feat(mma): add half-precision MMAs for automotive devices and training

Add FP16 variants of the matrix multiply operations, which benefit non-Hopper
devices such as NVIDIA Orin (sm_87) and Ada Lovelace (sm_89) automotive and
edge devices. Backward passes during training can also benefit, since these
variants offer higher precision than BF16 when needed.

Key changes:

  • Add `mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32` interfaces (a PTX-level sketch follows this list)
  • Add `mma_` interfaces for `rt_base<half, ...>` register tiles
  • Add half-precision MMA implementations for all matrix operation patterns
    (AB, ABt, AtB, AtBt)
  • Add corresponding unit tests to verify correctness
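
For reference, the m16n8k16 FP16-in / FP32-accumulate shape maps to a single warp-wide PTX instruction. A minimal inline-asm sketch is shown below; the wrapper name and fragment layout here are illustrative, not this PR's actual interface:

```cuda
#include <cuda_fp16.h>
#include <cstdint>

// Sketch: one m16n8k16 f16*f16 -> f32 MMA issued cooperatively by a full warp.
// Per thread: A (16x16, row-major) holds 4 packed half2 registers,
// B (16x8, col-major) holds 2 packed half2 registers,
// C/D (16x8, f32 accumulators) hold 4 float registers.
__device__ inline void mma_m16n8k16_f16f32(float (&d)[4],
                                           const uint32_t (&a)[4],
                                           const uint32_t (&b)[2],
                                           const float (&c)[4]) {
    asm volatile(
        "mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 "
        "{%0, %1, %2, %3}, {%4, %5, %6, %7}, {%8, %9}, {%10, %11, %12, %13};\n"
        : "=f"(d[0]), "=f"(d[1]), "=f"(d[2]), "=f"(d[3])
        : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
          "r"(b[0]), "r"(b[1]),
          "f"(c[0]), "f"(c[1]), "f"(c[2]), "f"(c[3]));
}
```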

Tested on NVIDIA A100, Ada, and H100 platforms.

Add MMA with half-precision (FP16) inputs and FP32 accumulators for
`mma.sync.aligned` instructions, including:
 - mma_AB_base
 - mma_ABt_base
 - mma_AtB_base
 - mma_AtBt_base
Add unit tests to ensure correctness (a reference-check sketch follows this list):
 - test_mma_AB_half
 - test_mma_ABt_half
 - test_mma_AtB_half
 - test_mma_AtBt_half
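
The tests can compare the device result against a plain FP32 reference of the same FP16 × FP16 → FP32 product. A hedged host-side sketch follows; the function names and tolerance are illustrative, not the PR's actual test harness:

```cuda
#include <cuda_fp16.h>
#include <cmath>
#include <vector>

// Host reference: C = A @ B with FP16 inputs and FP32 accumulation,
// matching the A-row-major / B-row-major semantics of mma_AB.
std::vector<float> reference_mma_AB(const std::vector<__half>& A,
                                    const std::vector<__half>& B,
                                    int M, int N, int K) {
    std::vector<float> C(static_cast<size_t>(M) * N, 0.0f);
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += __half2float(A[m * K + k]) * __half2float(B[k * N + n]);
            C[m * N + n] = acc;
        }
    return C;
}

// Loose absolute tolerance, since the hardware MMA may order/round
// intermediate sums differently than this naive loop.
bool allclose(const std::vector<float>& got, const std::vector<float>& want,
              float atol = 1e-2f) {
    for (size_t i = 0; i < got.size(); ++i)
        if (std::fabs(got[i] - want[i]) > atol) return false;
    return true;
}
```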