[FEATURE] Int8 kernels for Sparsity #29

@vkkhare

Description

Describe the feature request
Since quantization techniques are orthogonal to sparsity, we should be able to stack the two and leverage the benefits of both.

Describe the solution you'd like

We already have dtype-templated kernels in CUDA, which we need to replicate for the CPU and for vector instruction sets such as AVX, e.g.:

```cpp
template <>
__global__ void sparse_mlp_combined_cuda_kernel<float>(...)
```

Metadata

Assignees: no one assigned
Labels: enhancement (New feature or request)
Status: Planning
Milestone: no milestone
Development: no branches or pull requests