Ultralytics YOLOv8 represents the forefront of object detection models, incorporating advancements from prior YOLO iterations while introducing novel features to enhance performance and versatility. YOLOv8 prioritizes speed, precision, and user-friendliness, positioning itself as an exceptional solution across diverse tasks such as object detection, oriented bounding box detection, tracking, instance segmentation, and image classification. Its refined architecture and innovations make it an ideal choice for cutting-edge applications in the field of computer vision.
# 🔌 Integrating DeepaaS API with YOLOv8
In this repository, we have integrated the DeepaaS API into Ultralytics YOLOv8, enabling seamless use of this pipeline. The DeepaaS API enhances the functionality and accessibility of the code, making it easier for users to interact with the pipeline efficiently.
# 🛠️ Install the API
To launch the API, first install the package, then run DeepaaS:
```
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
```
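A typical sequence looks like the following (this assumes the standard `deepaas-run` entry point from the DEEPaaS framework and an editable install from the repository root; adapt to your setup):

```shell
# Install the package (from the repository root) and its dependencies.
pip install -e .

# Start the DeepaaS server; the Swagger UI is then typically served on port 5000.
deepaas-run --listen-ip 0.0.0.0
```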
# ⚙️ Environment variables settings
In `./api/config.py` you can configure several environment variables:
- `DATA_PATH`: Path definition for the data folder; the default is `./data`.
- `YOLOV8_DEFAULT_TASK_TYPE`: Specify the default task type among detection (`det`), segmentation (`seg`), and classification (`cls`).
- `YOLOV8_DEFAULT_WEIGHTS`: Define default timestamped weights for your trained models to be used during prediction. If the user specifies no timestamp during prediction, the first model in `YOLOV8_DEFAULT_WEIGHTS` is used. If it is set to `None`, YOLOv8n trained on COCO/ImageNet is used. Format: `timestamp1, timestamp2, timestamp3, ...`
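A minimal sketch of how these variables might be consumed (the parsing logic below is an assumption for illustration, not the actual contents of `./api/config.py`):

```python
import os

# Defaults follow the documentation above.
DATA_PATH = os.getenv("DATA_PATH", "./data")
YOLOV8_DEFAULT_TASK_TYPE = os.getenv("YOLOV8_DEFAULT_TASK_TYPE", "det")

# Comma-separated timestamps; None (or unset) falls back to the stock
# YOLOv8n weights trained on COCO/ImageNet.
_raw = os.getenv("YOLOV8_DEFAULT_WEIGHTS", "None")
YOLOV8_DEFAULT_WEIGHTS = (
    None if _raw == "None" else [w.strip() for w in _raw.split(",") if w.strip()]
)
```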
# 📊 Track Your Experiments with MLflow
If you want to use MLflow to track and log your experiments, first set the following environment variables:
- `MLFLOW_TRACKING_URI`
- `MLFLOW_TRACKING_USERNAME`
- Then set the argument `Enable_MLFLOW` to `True` when executing the training.
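For example (the URI and username below are placeholders, not real endpoints):

```python
import os

# Point MLflow at your tracking server before training starts.
os.environ["MLFLOW_TRACKING_URI"] = "https://mlflow.example.org"  # placeholder
os.environ["MLFLOW_TRACKING_USERNAME"] = "your-username"          # placeholder

# ...and enable MLflow logging in the training call.
train_kwargs = {"Enable_MLFLOW": True}
```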
The Ultralytics YOLOv8 model can be trained on multiple tasks, including classification, detection, and segmentation.
To train the model for your project, select one of the `task_type` options in the training arguments, and the corresponding model will be loaded and trained.
For each task, you can select the model arguments among the following options:
- `yolov8X.yaml` builds a model from scratch.
- `yolov8X.pt` loads a pretrained model (recommended for training).
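As an illustration, the mapping between `task_type` and the default model files could be sketched like this (a hypothetical helper; the weight names follow the upstream Ultralytics naming convention):

```python
def default_model(task_type: str, pretrained: bool = True) -> str:
    """Return the YOLOv8 model file for a given task_type.

    det -> yolov8n.*, seg -> yolov8n-seg.*, cls -> yolov8n-cls.*
    """
    suffix = {"det": "", "seg": "-seg", "cls": "-cls"}
    if task_type not in suffix:
        raise ValueError(f"unknown task_type: {task_type!r}")
    # .pt loads pretrained weights (recommended); .yaml builds from scratch.
    ext = ".pt" if pretrained else ".yaml"
    return f"yolov8n{suffix[task_type]}{ext}"
```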
# 🚀 Launching the API
To train the model, launch the API as described in the installation section above.
Then, open the Swagger interface and change the hyperparameters in the train section.
Among the training arguments, there are options related to augmentation, such as flipping, scaling, etc. The default values are set to automatically activate some of these options during training. If you want to disable augmentation entirely or partially, please review the default values and adjust them accordingly to deactivate the desired augmentations.
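For instance, a sketch of overrides that switch the common geometric augmentations off (the argument names follow the Ultralytics hyperparameter set; verify the defaults against your installed version):

```python
# Setting these hyperparameters to 0.0 disables the corresponding augmentation.
no_augmentation = {
    "fliplr": 0.0,  # horizontal flip probability (upstream default: 0.5)
    "flipud": 0.0,  # vertical flip probability (upstream default: 0.0)
    "scale": 0.0,   # image scale jitter (upstream default: 0.5)
    "mosaic": 0.0,  # mosaic augmentation (upstream default: 1.0)
}
```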
# 🔍 Inference Methods
You can utilize the Swagger interface to upload your images or videos and obtain the following outputs:
- A video with bounding boxes delineating objects of interest throughout.
- A JSON string accompanying each frame, supplying bounding box coordinates, object names within the boxes, and confidence scores for the detected objects.
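The exact JSON schema is not specified here; as an illustration, a per-frame payload carrying the fields listed above (box coordinates, object names, confidence scores; all field names below are assumptions) could be consumed like this:

```python
import json

# Hypothetical per-frame payload; the real field names may differ.
frame_json = """
{
  "frame": 17,
  "detections": [
    {"box": [34, 50, 120, 210], "name": "person", "confidence": 0.91}
  ]
}
"""

frame = json.loads(frame_json)
for det in frame["detections"]:
    x1, y1, x2, y2 = det["box"]
    print(f'{det["name"]} ({det["confidence"]:.2f}) at ({x1},{y1})-({x2},{y2})')
```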
# ⚡ Hyperparameter Optimization using Optuna + Hydra + MLflow
Please refer to the `README.md` inside the `yolov8_api/hpo_yolov8` directory to see how you can use these tools to automatically optimize YOLOv8 hyperparameters from the command line.