# Qwen2.5-Coder-1.5B-Instruct-Reasoning

This repository provides everything you need to perform Supervised Fine-Tuning (SFT) of the Qwen2.5-Coder-1.5B-Instruct model, or any of its larger Qwen2.5-Coder variants (7B, 14B, 32B), using the nvidia/OpenCodeReasoning dataset.
> **Note:** If you would like to contribute to this repository, please first read the CONTRIBUTING file under the Documentation section.
## Table of Contents

- Prerequisites
- Installation
- Demo
- Inference
- File Structure
- Documentation
- License
- Links
- Team
- Contact
- Citation
## Prerequisites

- 1× NVIDIA GeForce RTX 3060 (minimum 12 GB VRAM)
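As a rough, back-of-the-envelope check on what fits in 12 GB for a 1.5B-parameter model, the sketch below estimates memory from generic bytes-per-parameter rules of thumb (fp16 weights, Adam optimizer). These figures are assumptions for illustration, not values taken from this repository's actual training configuration in `src/config/config.yaml`:

```python
# Back-of-the-envelope GPU memory estimate for a 1.5B-parameter model.
# NOTE: the bytes-per-parameter figures are generic rules of thumb
# (fp16 weights, Adam optimizer), not values from this repository.

PARAMS = 1.5e9  # approximate parameter count of Qwen2.5-Coder-1.5B
GIB = 1024 ** 3

# fp16 weights alone: 2 bytes per parameter.
weights_gib = PARAMS * 2 / GIB

# Naive full fine-tuning with Adam: fp16 weights + fp16 gradients
# + fp32 master weights + two fp32 optimizer moments ~= 16 bytes/param.
full_ft_gib = PARAMS * 16 / GIB

print(f"fp16 weights:     {weights_gib:.1f} GiB")
print(f"full SFT (rough): {full_ft_gib:.1f} GiB")
```

By this estimate the fp16 weights alone fit comfortably, but naive full fine-tuning would not, which is why single-GPU SFT setups on a 12 GB card typically rely on parameter-efficient or memory-optimized techniques; the larger variants (7B+) need correspondingly more memory or multiple GPUs.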
## Installation

Update system packages:

```bash
sudo apt update -y && sudo apt upgrade -y
```

Clone the repository and set up the environment:

```bash
git clone https://github.com/bunyaminergen/Qwen2.5-Coder-1.5B-Instruct-Reasoning
cd Qwen2.5-Coder-1.5B-Instruct-Reasoning
conda env create -f environment.yaml
conda activate Qwen2.5-Coder-1.5B-Instruct-Reasoning
pip install flash-attn --no-build-isolation
```
Run the main script:

```bash
python main.py
```
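For reference, SFT on an instruct model such as Qwen2.5-Coder typically serializes each training example into the model's ChatML chat template before tokenization. The sketch below builds such a prompt by hand with purely hypothetical field names (`question`, `solution`); the repository's actual preprocessing lives in `src/utils/data/manager.py` and may differ:

```python
# Minimal sketch: serializing one (question, solution) pair into the
# ChatML format used by Qwen chat models. Field names are hypothetical;
# see src/utils/data/manager.py for the repository's real preprocessing.

def to_chatml(question: str, solution: str) -> str:
    """Render a single SFT example as a ChatML conversation string."""
    return (
        "<|im_start|>user\n"
        f"{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
        f"{solution}<|im_end|>\n"
    )

example = to_chatml(
    question="Write a function that reverses a string.",
    solution="def reverse(s):\n    return s[::-1]",
)
print(example)
```

In practice this formatting is usually delegated to the tokenizer's built-in chat template rather than done by hand; the manual version above just makes the wire format visible.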
## File Structure

```text
.
├── .docs
│   └── documentation
│       ├── CONFIG.md
│       ├── CONTRIBUTING.md
│       └── RESOURCES.md
├── environment.yaml
├── .github
│   └── CODEOWNERS
├── .gitignore
├── LICENSE
├── main.py
├── README.md
├── requirements.txt
└── src
    ├── config
    │   └── config.yaml
    ├── model
    │   └── core.py
    ├── process
    │   └── train.py
    └── utils
        ├── common
        │   └── helpers.py
        ├── data
        │   └── manager.py
        ├── log
        │   └── manager.py
        └── type
            └── schema.py

13 directories, 17 files
```
## Citation

```bibtex
@software{Qwen2.5-Coder-1.5B-Instruct-Reasoning,
  author = {Bunyamin Ergen},
  title  = {{Qwen2.5-Coder-1.5B-Instruct-Reasoning}},
  year   = {2025},
  month  = {04},
  url    = {https://github.com/bunyaminergen/Qwen2.5-Coder-1.5B-Instruct-Reasoning},
}
```