
loubnabnl:

Run generations for HumanEval problems using Megatron (sequential generation in greedy mode):

# deploy the text generation server on a GPU node (adapt paths, ports, etc.)
bash examples/run_text_generation_starcoder.sh
# run requests against the server (on the same node) and save the generations
# so they can be evaluated with bigcode-evaluation-harness
python /fsx/loubna/bigcode_2/code/pr/Megatron-LM/tools/run_requests_humaneval.py
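The request script itself is not shown here, so the sketch below is a hypothetical, simplified version of what it does: load the HumanEval prompts, send one greedy request per problem to the Megatron text generation server, and dump the completions as a JSON list of lists, the format bigcode-evaluation-harness can load with --load_generations_path. The server URL and port, the token budget, and the exact request/response fields are assumptions based on Megatron-LM's text generation REST API and should be checked against the actual server setup.

# Minimal sketch, not the actual run_requests_humaneval.py.
# Assumes the Megatron text generation server is reachable at SERVER_URL and
# accepts PUT requests with a JSON body of prompts, as in Megatron-LM's REST API.
import json

import requests
from datasets import load_dataset

SERVER_URL = "http://localhost:5000/api"  # assumed host/port of the deployed server

problems = load_dataset("openai_humaneval", split="test")
generations = []

for problem in problems:
    prompt = problem["prompt"]
    # Sequential, greedy generation: one prompt per request, sampling effectively disabled.
    response = requests.put(
        SERVER_URL,
        json={
            "prompts": [prompt],
            "tokens_to_generate": 512,  # assumed budget; adjust as needed
            "temperature": 1.0,
            "top_k": 1,  # greedy decoding via top-k = 1 (assumed to be supported)
        },
        headers={"Content-Type": "application/json"},
    )
    completion = response.json()["text"][0]
    # The harness expects a list of candidate solutions per problem.
    generations.append([completion])

with open("generations_humaneval.json", "w") as f:
    json.dump(generations, f)

The saved file can then be scored offline on any machine by pointing bigcode-evaluation-harness at it, so the GPU node only has to run the generation step.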
