# 🚀 YOLOv8 Hyperparameter Optimization with Hydra, Optuna & MLflow

This repository provides a framework to **train and optimize YOLOv8 models** using **Optuna**, **Hydra**, and **MLflow**. It is designed to streamline the process of configuring, training, validating, and evaluating object detection models with automated tracking and logging.

## 📁 Project Structure
```bash
├── configs
│   └── basic_train_params_pretrained.yaml  # Hydra config with model, training, and logging params
├── hpo.py                                  # Main script for training and hyperparameter optimization
├── requirements.txt                        # Python dependencies
```
## ⚙️ Features

- 🔧 Hyperparameter optimization with **Optuna**
- 🧠 Config management via **Hydra**
- 📈 Metrics & artifact logging using **MLflow**
- 📦 Supports training/validation with **Ultralytics YOLOv8**

---
## 🧰 Installation
```bash
pip install -r requirements.txt
```
## 🧾 Configuration
In `configs/basic_train_params_pretrained.yaml` you can adjust the following settings.
### 🔧 Defaults & Sweeper
You do not need to change this part; it activates the Optuna sweeper plugin.
```yaml
defaults:
  - override hydra/sweeper: optuna
```
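The sweeper ships as a separate Hydra plugin. If it is not already covered by `requirements.txt`, it can be installed directly from PyPI:

```bash
pip install hydra-optuna-sweeper
```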
### 📁 MLflow Settings
You can change the name of your MLflow experiment here.

```yaml
mlflow_project: hpo_yolov8_kitti                          # Name of the MLflow experiment
mlflow_parent: basic_train_params_loss_30epochs_640imgsz # Name of the parent run in MLflow
```
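Logged runs can be inspected in the MLflow UI. Assuming the default local `mlruns/` tracking store in the working directory:

```bash
mlflow ui   # then open http://127.0.0.1:5000 in a browser
```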
### ⚙️ Hydra + Optuna Sweep Configuration
Here you can change the Hydra sweep configuration.

```yaml
hydra:
  sweep:
    dir: tmp_multirun    # Directory where multi-run outputs are stored
  sweeper:
    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper  # Optuna sweeper plugin
    sampler:
      seed: 815          # Seed for reproducibility
    direction: maximize  # Objective direction (e.g. maximize validation fitness)
    n_trials: 30         # Number of trials to run
    n_jobs: 1            # Number of jobs to run in parallel
    params:              # Hyperparameters to optimize; add any others you need here
      train_params.optimizer: choice(SGD, Adam, AdamW)
      train_params.lr0: interval(0.001, 0.2)
      train_params.momentum: interval(0.6, 0.999)
      train_params.weight_decay: interval(0.00001, 0.001)
      train_params.box: interval(0.0, 10.0)
      train_params.cls: interval(0.0, 10.0)
      train_params.dfl: interval(0.0, 10.0)
```
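Any of these sweep settings can also be overridden at launch time without editing the YAML, e.g. for a short smoke-test sweep (search spaces use the sweeper's `choice(...)` / `interval(...)` grammar):

```bash
python3 hpo.py --multirun \
  hydra.sweeper.n_trials=5 \
  'train_params.lr0=interval(0.0005,0.05)'
```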

### 🏋️‍♂️ Training Parameters
Here you can select the version of the YOLO model you want and the path to the `dataset.yaml` configuration.
```yaml
train_params:
  model: yolov8m.pt    # Pretrained model to use (YOLOv8-m)
  epochs: 40           # Number of epochs to train
  patience: 10         # Early stopping patience
  box: 7.5             # Box loss gain
  cls: 0.5             # Class loss gain
  dfl: 1.5             # Distribution Focal Loss gain
  optimizer: 'auto'    # Optimizer (overwritten by the sweep)
  cos_lr: False        # Use cosine learning rate schedule
  lr0: 0.01            # Initial learning rate
  momentum: 0.937      # Momentum (used with SGD)
  weight_decay: 0.0005 # Weight decay
  data: &data path/to/data/kitti.yaml # Path to dataset config
  batch: &batch 16     # Batch size
  imgsz: &imgsz 640    # Image size
  save: True           # Save model checkpoints
  cache: True          # Cache images for faster training
  device: &device 0    # GPU device
  workers: 8           # Number of data loading workers
  rect: &rect True     # Use rectangular training batches
  plots: &plots True   # Save training plots
```
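The `&data`, `&batch`, `&imgsz`, `&device`, `&rect`, and `&plots` markers are YAML anchors; the validation section below reuses their values via `*` aliases so both stages stay in sync. The referenced `kitti.yaml` follows the standard Ultralytics dataset format; a purely illustrative sketch (paths and class names are placeholders, not taken from this repo):

```yaml
# kitti.yaml -- illustrative placeholder; adapt paths and class names to your setup
path: /datasets/kitti # Dataset root directory
train: images/train   # Training images (relative to 'path')
val: images/val       # Validation images (relative to 'path')
names:                # Class index -> class name mapping
  0: Car
  1: Pedestrian
  2: Cyclist
```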
### 🧪 Validation Parameters
```yaml
val_test_params:
  data: *data     # Use same dataset as training
  imgsz: *imgsz   # Same image size
  batch: *batch   # Same batch size
  device: *device # Same GPU
  plots: *plots   # Generate plots
  rect: *rect     # Use rectangular validation batches
```
## ▶️ How to Run
To launch training and start the Optuna hyperparameter optimization:

```bash
python3 hpo.py --multirun
```
✅ This will:

- Run training and validation across **30 trials**
- Optimize the defined hyperparameters using **Optuna**
- Log all **metrics**, **configs**, and **artifacts** to **MLflow**
- Store outputs in the **`tmp_multirun/`** directory
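For orientation, here is a minimal sketch of how a Hydra-decorated objective such as `hpo.py` is typically wired up, assuming it returns the validation fitness that Optuna maximizes. The actual script in this repo may differ in structure and naming:

```python
# Illustrative sketch only -- the real hpo.py may be organized differently.
import hydra
import mlflow
from omegaconf import DictConfig, OmegaConf
from ultralytics import YOLO


@hydra.main(config_path="configs", config_name="basic_train_params_pretrained", version_base=None)
def main(cfg: DictConfig) -> float:
    mlflow.set_experiment(cfg.mlflow_project)  # experiment name from the config
    train_args = OmegaConf.to_container(cfg.train_params, resolve=True)
    val_args = OmegaConf.to_container(cfg.val_test_params, resolve=True)

    with mlflow.start_run(run_name=cfg.mlflow_parent):
        mlflow.log_params(train_args)             # record the trial's hyperparameters
        model = YOLO(train_args.pop("model"))     # load the pretrained checkpoint
        model.train(**train_args)                 # train with the sampled parameters
        metrics = model.val(**val_args)           # validate with the shared settings
        mlflow.log_metric("fitness", metrics.fitness)
        return metrics.fitness                    # the value Optuna maximizes


if __name__ == "__main__":
    main()
```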

## 📚 References

- [Ultralytics YOLOv8 Documentation](https://docs.ultralytics.com/)
- [Hydra – Elegant Configuration Management](https://hydra.cc/)
- [Hydra Optuna Sweeper Plugin](https://github.com/facebookresearch/hydra/tree/main/plugins/hydra_optuna_sweeper)
- [Optuna – Hyperparameter Optimization Framework](https://optuna.org/)
- [MLflow – Open Source Experiment Tracking](https://mlflow.org/)