Commit e9cf9bb ("Update README.md"), parent a64deea

1 file changed: README.md, 10 additions and 10 deletions
@@ -4,10 +4,10 @@
 
 Ultralytics YOLOv8 represents the forefront of object detection models, incorporating advancements from prior YOLO iterations while introducing novel features to enhance performance and versatility. YOLOv8 prioritizes speed, precision, and user-friendliness, positioning itself as an exceptional solution across diverse tasks such as object detection, oriented bounding box detection, tracking, instance segmentation, and image classification. Its refined architecture and innovations make it an ideal choice for cutting-edge applications in computer vision.
 
-# Adding DeepaaS API into the existing codebase
+# 🔌 Integrating DeepaaS API with YOLOv8
 In this repository, we have integrated a DeepaaS API into Ultralytics YOLOv8, enabling seamless use of this pipeline. The DeepaaS API enhances the functionality and accessibility of the code, making it easier for users to leverage and interact with the pipeline efficiently.
 
-# Install the API
+# 🛠️ Install the API
 To launch the API, first install the package, and then run DeepaaS:
 ``` bash
 git clone --depth 1 https://codebase.helmholtz.cloud/m-team/ai/ai4os-yolov8-torch.git
@@ -24,7 +24,7 @@ apt install -y libgl1
 apt install -y libglib2.0-0
 ```
 
-## Project structure
+## 📂 Project structure
 
 ```
 ├── Jenkinsfile        <- Describes basic Jenkins CI/CD pipeline
@@ -84,7 +84,7 @@ apt install -y libglib2.0-0
 └── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
 ```
 
-# Environment variables settings
+# ⚙️ Environment variables settings
 In `./api/config.py` you can configure several environment variables:
 
 - `DATA_PATH`: Path definition for the data folder; the default is './data'.
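As a minimal sketch of how such settings are typically read (this is not the actual `./api/config.py`; the `DATA_PATH` default of `./data` comes from the list above, while the `det` default for the task type is an assumption for illustration):

```python
import os

# Minimal sketch, not the repository's real config module:
# read settings from environment variables, falling back to defaults.
DATA_PATH = os.getenv("DATA_PATH", "./data")  # documented default
YOLOV8_DEFAULT_TASK_TYPE = os.getenv("YOLOV8_DEFAULT_TASK_TYPE", "det")  # assumed default

print(DATA_PATH, YOLOV8_DEFAULT_TASK_TYPE)
```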
@@ -93,7 +93,7 @@ apt install -y libglib2.0-0
 - `YOLOV8_DEFAULT_TASK_TYPE`: Specify the default task related to your work among detection (det), segmentation (seg), and classification (cls).
 - `YOLOV8_DEFAULT_WEIGHTS`: Define default timestamped weights for your trained models to be used during prediction. If the user specifies no timestamp during prediction, the first model in `YOLOV8_DEFAULT_WEIGHTS` will be used. If it is set to None, YOLOv8n trained on COCO/ImageNet will be used. Format them as timestamp1, timestamp2, timestamp3, ...
 
-# Track your experiments with Mlfow
+# 📊 Track Your Experiments with MLflow
 If you want to use MLflow to track and log your experiments, you should first set the following environment variables:
 - `MLFLOW_TRACKING_URI`
 - `MLFLOW_TRACKING_USERNAME`
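These variables can be set in the environment before training starts; a minimal Python sketch (the server URI and username below are placeholders, not real endpoints or credentials):

```python
import os

# Placeholder values; point these at your own MLflow server and account.
os.environ["MLFLOW_TRACKING_URI"] = "https://mlflow.example.org"
os.environ["MLFLOW_TRACKING_USERNAME"] = "your-username"
```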
@@ -109,7 +109,7 @@ optional options:
 - Then you should set the argument `Enable_MLFLOW` to `True` when executing the training.
 
 
-# Dataset Preparation
+# 📁 Dataset Preparation
 - Detection (det), oriented bounding box detection (obb), and segmentation (seg) tasks:
 
 - To train the YOLOv8 model, your annotations should be saved in YOLO format (.txt). Please organize your data in the following structure:
@@ -202,7 +202,7 @@ data/
 ai4os-yolov8-torch/yolov8_api/seg_coco_json_to_yolo.py  # for segmentation
 ai4os-yolov8-torch/yolov8_api/preprocess_ann.py  # for detection
 ```
-# Available Models
+# 📦 Available Models
 
 The Ultralytics YOLOv8 model can be used to train multiple tasks, including classification, detection, and segmentation.
 To train the model for your project, select one of the task_type options in the training arguments, and the corresponding model will be loaded and trained.
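As an illustration of how a task_type choice could map to a pretrained checkpoint name, here is a hypothetical helper (not code from this repository; it relies only on the published Ultralytics naming pattern `yolov8n.pt`, `yolov8n-seg.pt`, `yolov8n-cls.pt`):

```python
# Hypothetical helper: map a task_type (det/seg/cls) to an Ultralytics
# checkpoint name, e.g. yolov8n.pt, yolov8n-seg.pt, yolov8n-cls.pt.
SUFFIX = {"det": "", "seg": "-seg", "cls": "-cls"}

def default_weights(task_type: str, size: str = "n") -> str:
    if task_type not in SUFFIX:
        raise ValueError(f"unknown task_type: {task_type}")
    return f"yolov8{size}{SUFFIX[task_type]}.pt"

print(default_weights("seg"))  # yolov8n-seg.pt
```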
@@ -223,7 +223,7 @@ for each task, you can select the model arguments among the following options:
 `yolov8X.yaml` builds a model from scratch, and
 `yolov8X.pt` loads a pretrained model (recommended for training).
 
-# Launching the API
+# 🚀 Launching the API
 
 To train the model, run:
 ```
@@ -235,7 +235,7 @@ Then, open the Swagger interface, change the hyperparameters in the train sectio
 
 ><span style="color:Blue">**Note:**</span> Augmentation Settings:
 Among the training arguments, there are options related to augmentation, such as flipping and scaling. The default values automatically activate some of these options during training. If you want to disable augmentation entirely or partially, review the default values and adjust them to deactivate the desired augmentations.
-# Inference Methods
+# 🔍 Inference Methods
 
 You can utilize the Swagger interface to upload your images or videos and obtain the following outputs:
 
@@ -249,7 +249,7 @@ You can utilize the Swagger interface to upload your images or videos and obtain
 - A video with bounding boxes delineating objects of interest throughout.
 - A JSON string accompanying each frame, supplying bounding box coordinates, object names within the boxes, and confidence scores for the detected objects.
 
-# Hyperparameter Optimization using Optuna + Hydra + MLflow
+# Hyperparameter Optimization using Optuna + Hydra + MLflow
 
 Please refer to the `README.md` inside the `yolov8_api/hpo_yolov8` directory to see how you can use these tools to automatically optimize YOLOv8 hyperparameters from the command line.
 
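To give a concrete picture of consuming the per-frame JSON described above, here is a hypothetical parsing sketch; the field names (`detections`, `name`, `confidence`, `box`) are assumptions for illustration, not the API's documented schema, so check the actual Swagger response for the real keys:

```python
import json

# Hypothetical per-frame payload; field names are assumed, not the
# API's documented schema.
frame = json.loads(
    '{"detections": [{"name": "person", "confidence": 0.91,'
    ' "box": [50.0, 40.0, 120.0, 200.0]}]}'
)

for det in frame["detections"]:
    x1, y1, x2, y2 = det["box"]
    print(f'{det["name"]}: {det["confidence"]:.2f} at ({x1}, {y1})-({x2}, {y2})')
```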
