
Add accuracy benchmarks for llama 3 #31

Description

@vkkhare

After training predictors on Llama 3 outputs, we need to evaluate accuracy (see the rough sweep sketch after this list):

  • On MMLU and AIME (further benchmarks to be added later)
  • Under int8 quantization, fp16, and fp32
  • With and without explicit LLM sparsification methods such as ProSparse
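
A minimal sketch of how the precision sweep could be wired up, assuming a Hugging Face checkpoint ID and a hypothetical `run_benchmark(model, tokenizer, task)` helper that wraps the actual MMLU/AIME harness; the loading options are standard `transformers` / `bitsandbytes` kwargs, and the same sweep would be repeated with and without the sparsified (e.g. ProSparse-style) model variant:

```python
# Rough sketch of the accuracy sweep, not the final harness.
# Hypothetical pieces (to be replaced with the real setup):
#   - MODEL_ID: placeholder checkpoint; swap in the Llama 3 model with predictors attached
#   - run_benchmark(): stub standing in for the real MMLU/AIME evaluation code
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # placeholder
TASKS = ["mmlu", "aime"]

# Precision configurations to compare: int8 quantization, fp16, fp32.
PRECISIONS = {
    "int8": dict(quantization_config=BitsAndBytesConfig(load_in_8bit=True)),
    "fp16": dict(torch_dtype=torch.float16),
    "fp32": dict(torch_dtype=torch.float32),
}

def run_benchmark(model, tokenizer, task: str) -> float:
    """Hypothetical stub; plug in the actual MMLU/AIME evaluation here."""
    raise NotImplementedError

def sweep() -> dict:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    results = {}
    for name, load_kwargs in PRECISIONS.items():
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_ID, device_map="auto", **load_kwargs
        )
        for task in TASKS:
            results[(name, task)] = run_benchmark(model, tokenizer, task)
        del model  # free GPU memory before loading the next precision config
        torch.cuda.empty_cache()
    return results

if __name__ == "__main__":
    for (precision, task), score in sweep().items():
        print(f"{task} @ {precision}: {score:.3f}")
```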

Metadata

Labels: none
Type: none
Projects: none
Status: In Progress
Milestone: none
