Commit 8ae6159

Author: khadijeh.alibabaei
Commit message: add hyperparameter optimization script using Optuna to YOLOv8
1 parent: c65c776

File tree

5 files changed: +534 −1 lines changed

README.md

Lines changed: 6 additions & 1 deletion
```diff
@@ -36,7 +36,8 @@ apt install -y libglib2.0-0
 │   ├── README.md            <- Instructions on how to integrate your model with DEEPaaS.
 │   ├── __init__.py          <- Makes <your-model-source> a Python module
 │   ├── ...                  <- Other source code files
-│   └── config.py            <- Module to define CONSTANTS used across the AI-model python package
+│   └── config.py            <- Module to define CONSTANTS used across the AI-model python package
+│   └── hpo_yolov8           <- Hyperparameter Optimization using Optuna + Hydra + MLflow
 
 ├── api                      <- API subpackage for the integration with DEEP API
 │   ├── __init__.py          <- Makes api a Python module, includes API interface methods
@@ -248,3 +249,7 @@ You can utilize the Swagger interface to upload your images or videos and obtain
 - A video with bounding boxes delineating objects of interest throughout.
 - A JSON string accompanying each frame, supplying bounding box coordinates, object names within the boxes, and confidence scores for the detected objects.
 
+# Hyperparameter Optimization using Optuna + Hydra + MLflow
+
+Please refer to the `README.md` inside the `yolov8_api/hpo_yolov8` directory to see how you can use these tools to automatically optimize YOLOv8 hyperparameters from the command line.
+
```

yolov8_api/hpo_yolov8/README.md

Lines changed: 124 additions & 0 deletions
# 🚀 YOLOv8 Hyperparameter Optimization with Hydra, Optuna & MLflow

This repository provides a framework to **train and optimize YOLOv8 models** using **Optuna**, **Hydra**, and **MLflow**. It is designed to streamline the process of configuring, training, validating, and evaluating object detection models with automated tracking and logging.

## 📁 Project Structure

```bash
├── configs
│   └── basic_train_params_pretrained.yaml   # Hydra config with model, training, and logging params
├── hpo.py                                   # Main script for training and hyperparameter optimization
├── requirements.txt                         # Python dependencies
```

## ⚙️ Features

- 🔧 Hyperparameter optimization with **Optuna**
- 🧠 Config management via **Hydra**
- 📈 Metrics & artifact logging using **MLflow**
- 📦 Supports training/validation with **Ultralytics YOLOv8**

---

## 🧰 Installation

```bash
pip install -r requirements.txt
```

## 🧾 Configuration

In `configs/basic_train_params_pretrained.yaml` you can adjust the following settings.

### 🔧 Defaults & Sweeper

You do not need to change this part; it tells Hydra to use the Optuna sweeper plugin.

```yaml
defaults:
  - override hydra/sweeper: optuna
```

### 📁 MLflow Settings

You can change the name of your MLflow experiment here.

```yaml
mlflow_project: hpo_yolov8_kitti                          # Name of the MLflow experiment
mlflow_parent: basic_train_params_loss_30epochs_640imgsz  # The name of the parent run in MLflow
```
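These two values are plain strings consumed by the training script; conceptually (an assumption about `hpo.py`, which this README does not reproduce) they map onto MLflow calls like:

```python
import mlflow

mlflow.set_experiment("hpo_yolov8_kitti")  # mlflow_project selects (or creates) the experiment
with mlflow.start_run(run_name="basic_train_params_loss_30epochs_640imgsz"):  # mlflow_parent
    with mlflow.start_run(run_name="trial_0", nested=True):  # one nested run per Optuna trial
        mlflow.log_metric("fitness", 0.42)  # placeholder value
```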
### ⚙️ Hydra + Optuna Sweep Configuration

Here you can change the Hydra and Optuna sweep settings.

```yaml
hydra:
  sweep:
    dir: tmp_multirun  # Directory where multi-run outputs are stored
  sweeper:
    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper  # Optuna sweeper plugin
    sampler:
      seed: 815  # Seed for reproducibility
    direction: maximize  # Objective direction (e.g. maximize validation fitness)
    n_trials: 30  # Number of trials to run
    n_jobs: 1  # Number of jobs to run in parallel
    params:  # Hyperparameters to optimize; you can add any others here
      train_params.optimizer: choice(SGD, Adam, AdamW)
      train_params.lr0: interval(0.001, 0.2)
      train_params.momentum: interval(0.6, 0.999)
      train_params.weight_decay: interval(0.00001, 0.001)
      train_params.box: interval(0.0, 10.0)
      train_params.cls: interval(0.0, 10.0)
      train_params.dfl: interval(0.0, 10.0)
```
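For intuition, `choice(...)` and `interval(...)` are the sweeper plugin's notation for Optuna search distributions; the search space above corresponds roughly to the following hand-written Optuna code (an illustration, not part of this commit):

```python
import optuna


def sample_params(trial: optuna.Trial) -> dict:
    """Roughly the search space that the Hydra sweep config above describes."""
    return {
        "optimizer": trial.suggest_categorical("optimizer", ["SGD", "Adam", "AdamW"]),
        "lr0": trial.suggest_float("lr0", 0.001, 0.2),
        "momentum": trial.suggest_float("momentum", 0.6, 0.999),
        "weight_decay": trial.suggest_float("weight_decay", 0.00001, 0.001),
        "box": trial.suggest_float("box", 0.0, 10.0),
        "cls": trial.suggest_float("cls", 0.0, 10.0),
        "dfl": trial.suggest_float("dfl", 0.0, 10.0),
    }
```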
### 🏋️‍♂️ Training Parameters

Here you can select the YOLO model version you want and the path to the `dataset.yaml` configuration.

```yaml
train_params:
  model: yolov8m.pt       # Pretrained model to use (YOLOv8-m)
  epochs: 40              # Number of epochs to train
  patience: 10            # Early stopping patience
  box: 7.5                # Box loss gain
  cls: 0.5                # Class loss gain
  dfl: 1.5                # Distribution Focal Loss gain
  optimizer: 'auto'       # Optimizer (overwritten by the sweep)
  cos_lr: False           # Use cosine learning rate schedule
  lr0: 0.01               # Initial learning rate
  momentum: 0.937         # Momentum (used with SGD)
  weight_decay: 0.0005    # Weight decay
  data: &data path/to/data/kitti.yaml  # Path to dataset config
  batch: &batch 16        # Batch size
  imgsz: &imgsz 640       # Image size
  save: True              # Save model checkpoints
  cache: True             # Cache images for faster training
  device: &device 0       # GPU device
  workers: 8              # Number of data loading workers
  rect: &rect True        # Use rectangular training batches
  plots: &plots True      # Save training plots
```
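Everything under `train_params` corresponds to keyword arguments of Ultralytics' `YOLO.train()`; per trial, the sweeper overwrites only the seven keys listed under `params`, and the rest keep these defaults. Written out by hand with the defaults above (a minimal sketch, not code from this commit), a single training call would look like:

```python
from ultralytics import YOLO

# Load the pretrained checkpoint, then train with the config's default values.
model = YOLO("yolov8m.pt")
results = model.train(
    data="path/to/data/kitti.yaml", epochs=40, patience=10,
    box=7.5, cls=0.5, dfl=1.5,
    optimizer="auto", cos_lr=False, lr0=0.01, momentum=0.937, weight_decay=0.0005,
    batch=16, imgsz=640, device=0, workers=8,
    save=True, cache=True, rect=True, plots=True,
)
```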
### 🧪 Validation Parameters

```yaml
val_test_params:
  data: *data      # Use same dataset as training
  imgsz: *imgsz    # Same image size
  batch: *batch    # Same batch size
  device: *device  # Same GPU
  plots: *plots    # Generate plots
  rect: *rect      # Use rectangular validation batches
```
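The YAML aliases (`*data`, `*imgsz`, …) reuse the anchors defined in `train_params`, so validation is guaranteed to see the same dataset, image size, batch size, and device as training. Assuming `hpo.py` validates via Ultralytics' `model.val()` (not shown in this commit), that amounts to:

```python
# Validate the model just trained, reusing the training settings.
metrics = model.val(
    data="path/to/data/kitti.yaml", imgsz=640, batch=16,
    device=0, plots=True, rect=True,
)
print(metrics.box.map50)  # e.g. inspect mAP@0.5 on the validation split
```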
## 🏃 How to Run

To launch training and start the Optuna hyperparameter optimization:

```bash
python3 hpo.py --multirun
```

✅ This will:

- Run training and validation across **30 trials**
- Optimize the defined hyperparameters using **Optuna**
- Log all **metrics**, **configs**, and **artifacts** to **MLflow**
- Store outputs in the **`tmp_multirun/`** directory
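The commit does not reproduce `hpo.py` in this README, but a minimal sketch of how such a script typically glues Hydra, Optuna, and MLflow together looks like this (the structure and names are assumptions, not the actual implementation):

```python
import hydra
import mlflow
from omegaconf import DictConfig, OmegaConf
from ultralytics import YOLO


@hydra.main(config_path="configs", config_name="basic_train_params_pretrained", version_base=None)
def run_trial(cfg: DictConfig) -> float:
    """Train and validate once; the Optuna sweeper maximizes the returned value."""
    params = OmegaConf.to_container(cfg.train_params, resolve=True)
    weights = params.pop("model")  # 'model' names the checkpoint, not a train() kwarg here

    mlflow.set_experiment(cfg.mlflow_project)
    with mlflow.start_run(run_name=cfg.mlflow_parent):
        mlflow.log_params(params)          # record this trial's hyperparameters
        model = YOLO(weights)
        model.train(**params)              # train with the sampled hyperparameters
        metrics = model.val(**OmegaConf.to_container(cfg.val_test_params, resolve=True))
        fitness = float(metrics.fitness)   # Ultralytics' weighted mAP summary
        mlflow.log_metric("fitness", fitness)
        return fitness                     # with direction: maximize, Optuna maximizes this


if __name__ == "__main__":
    run_trial()
```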
## 📚 References

- [Ultralytics YOLOv8 Documentation](https://docs.ultralytics.com/)
- [Hydra – Elegant Configuration Management](https://hydra.cc/)
- [Hydra Optuna Sweeper Plugin](https://github.com/facebookresearch/hydra/tree/main/plugins/hydra_optuna_sweeper)
- [Optuna – Hyperparameter Optimization Framework](https://optuna.org/)
- [MLflow – Open Source Experiment Tracking](https://mlflow.org/)
yolov8_api/hpo_yolov8/configs/basic_train_params_pretrained.yaml
Lines changed: 57 additions & 0 deletions
```yaml
defaults:
  - override hydra/sweeper: optuna

mlflow_project: hpo_yolov8_kitti
mlflow_parent: basic_train_params_loss_30epochs_640imgsz

hydra:
  sweep:
    dir: tmp_multirun
  sweeper:
    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
    sampler:
      seed: 815
    direction: maximize
    n_trials: 30
    n_jobs: 1
    params:
      train_params.optimizer: choice(SGD, Adam, AdamW)
      train_params.lr0: interval(0.001, 0.2)
      train_params.momentum: interval(0.6, 0.999)
      train_params.weight_decay: interval(0.00001, 0.001)
      train_params.box: interval(0.0, 10.0)
      train_params.cls: interval(0.0, 10.0)
      train_params.dfl: interval(0.0, 10.0)

train_params:
  model: yolov8m.pt
  epochs: 40
  patience: 10
  # loss params
  box: 7.5
  cls: 0.5
  dfl: 1.5
  # train params
  optimizer: 'auto'
  cos_lr: False
  lr0: 0.01
  momentum: 0.937
  weight_decay: 0.0005
  # default params
  data: &data configs/kitti.yaml
  batch: &batch 16
  imgsz: &imgsz 640
  save: True
  cache: True
  device: &device 0
  workers: 8
  rect: &rect True
  plots: &plots True

val_test_params:
  # default params
  data: *data
  imgsz: *imgsz
  batch: *batch
  device: *device
  plots: *plots
  rect: *rect
```
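A quick way to sanity-check edits to this file, including that the YAML anchors (`&data`, `*data`, …) resolve as intended, is to load it with OmegaConf, the library Hydra uses under the hood:

```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/basic_train_params_pretrained.yaml")
# The *data alias resolves at parse time, so both keys point at the same file.
assert cfg.train_params.data == cfg.val_test_params.data
print(OmegaConf.to_yaml(cfg))  # dump the fully parsed config
```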
