To run the pipeline, follow the instructions below outside of the Docker instance.
* Initiate the VDMS inference endpoint.
```
numactl --physcpubind=0 docker run --net=host -d vuiseng9/intellabs-vdms:demo-191220
```
* Initiate the Video-Streamer service.
For a Docker environment, please go to the [docker section](#docker).
#### Setup
Go to the directory where you cloned the repo and follow the commands below to install the required software.
```
cd video-streamer
```
##### 1. Video and AI Setup
1. Edit `install.sh` for the `mesa-libGL` install.
In `install.sh`, the default command `sudo yum install -y mesa-libGL` is for CentOS. For Ubuntu, change it as follows:
```
#sudo yum install -y mesa-libGL
sudo apt install libgl1-mesa-glx
```
2. Run the following install script.
Create the conda environment `vdms-test`:
```
conda create -n vdms-test python=3.8
```
Activate `vdms-test`, then run the install script:
```
conda activate vdms-test
./install.sh
```
By default, this installs `intel-tensorflow-avx512`. If you need to run the workflow with a specific TensorFlow build, update it in `requirements.txt`.
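For example, swapping in a specific build might look like this in `requirements.txt` (a sketch; the replacement package and version pin shown here are illustrative, not taken from the repo):

```
# intel-tensorflow-avx512    # default package installed by install.sh
tensorflow==2.11.0           # hypothetical replacement pin
```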
##### 2. VDMS Database Setup
* The VDMS database instance runs in Docker.
Pull the Docker image:
```
docker pull vuiseng9/intellabs-vdms:demo-191220
```
### Configuration
1. Edit `config/pipeline-settings` for pipeline settings.
Modify the parameters `gst_plugin_dir` and `video_path` to match your GStreamer plugin directory and input video path.
For example, with `test.mp4` in the `dataset` folder and GStreamer installed in `/home/test_usr/miniconda3/envs/vdms-test`, the settings would point at those paths, as sketched below.
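A minimal sketch of the corresponding entries in `config/pipeline-settings` (the exact `key = value` syntax and the `lib/gstreamer-1.0` subdirectory are assumptions; check the file shipped in the repo for the real format):

```
gst_plugin_dir = /home/test_usr/miniconda3/envs/vdms-test/lib/gstreamer-1.0   # assumed plugin subdirectory
video_path = dataset/test.mp4
```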
2. Edit `config/settings.yaml` for inference settings.
- Choose `data_type` from `FP32`, `AMPBF16`, and `INT8` for inference. `INT8` is recommended for better performance.
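For instance, selecting INT8 inference would be a one-line change (a sketch, assuming the same `key : "value"` style as the parallelism settings shown in step 3 below):

```
data_type : "INT8"
```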
3. CPU optimization settings are found in two files:

* 3.1) `config/pipeline-settings`

`cores_per_pipeline` controls the number of CPU cores used by the whole pipeline.

* 3.2) `config/settings.yaml`

```
inter_op_parallelism : "2" # number of threads used by independent non-blocking operations in TensorFlow
intra_op_parallelism : "4" # maximum number of threads in the pool used to parallelize a single operation
```
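For reference, these two knobs correspond to TensorFlow's standard threading controls. A minimal sketch using the public `tf.config.threading` API (the repo itself may apply them differently, e.g. through its settings loader):

```
import tensorflow as tf

# Mirror the values from config/settings.yaml shown above.
tf.config.threading.set_inter_op_parallelism_threads(2)  # threads for independent, non-blocking ops
tf.config.threading.set_intra_op_parallelism_threads(4)  # max threads a single op may use
```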
### How to run
1. Initiate the VDMS inference endpoint.
```
numactl --physcpubind=0 --membind=1 docker run --net=host -d vuiseng9/intellabs-vdms:demo-191220
```
Here `--physcpubind` pins execution to specific CPU cores and `--membind` restricts memory allocation to the given NUMA node; adjust both values to match your machine's topology.
2. Start the video AI workflow.
```
./run.sh 1
```
`run.sh` accepts a single input parameter that defines how many separate instances of the GStreamer pipeline to run. Each OpenMP thread from a given instance is pinned to a physical CPU core. For example, when running four pipelines with `OMP_NUM_THREADS=4`, configured in `config/pipeline-settings`:
```
cores_per_pipeline = 4
```
the four pipelines are pinned as follows:
|*Pipeline*|*Cores*|*Memory*|
| ---- | ---- | ---- |
|1| 0-3| Local |
|2| 4-7| Local |
|3| 8-11| Local |
|4|12-15| Local |
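With `cores_per_pipeline = 4`, launching the four pipelines shown above uses the same single-parameter interface:

```
./run.sh 4
```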
It is very important that the pipelines do not overlap NUMA domains or any other hardware non-uniformity. These values must be updated for each CPU architecture to get optimum performance.
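To check your machine's NUMA layout before picking core ranges, the standard Linux utilities can help (both are stock tools, not part of this repo):

```
numactl --hardware    # lists NUMA nodes with their CPUs and memory
lscpu | grep -i numa  # quick summary of NUMA node count and CPU ranges
```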
For launching the workload using a single instance, use the following command:
```
./run.sh 1
```
[Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)
# Support
Video Streamer tracks both bugs and enhancement requests using GitHub. We welcome input; however, before filing a request, please search the GitHub issue database.