This repository was archived by the owner on Feb 22, 2024. It is now read-only.

Commit 82c7890
Author: tylertitsworth
Commit message: update Video Streamer DevCatalog for Demo
1 parent: 89ff306

File tree: 1 file changed (+68, −21 lines)

analytics/tensorflow/ssd_resnet34/inference/DEVCATALOG.md

@@ -22,7 +22,8 @@ Video streamer data flow
 #### Download the repo
 Clone the [Main Repository](https://github.com/intel/video-streamer) into your working directory.
 ```
-git clone https://github.com/intel/video-streamer .
+git clone https://github.com/intel/video-streamer
+cd video-streamer
 git checkout v1.0.0
 ```
 #### Download the Video File and Models
@@ -54,12 +55,12 @@ export DOCKER_RUN_ENVS="-e ftp_proxy=${ftp_proxy} \
 -e NO_PROXY=${NO_PROXY} -e socks_proxy=${socks_proxy} \
 -e SOCKS_PROXY=${SOCKS_PROXY}"
 ```
-To run the pipeline, follow below instructions outside of docker instance.
+To run the pipeline, follow the instructions below outside of the docker instance.
 
 * Initiate the VDMS inference endpoint.
 
 ```
-numactl --physcpubind=52-55 --membind=1 docker run --net=host -d vuiseng9/intellabs-vdms:demo-191220
+numactl --physcpubind=0 docker run --net=host -d vuiseng9/intellabs-vdms:demo-191220
 ```
 
 * Initiate the Video-Streamer service.
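As an aside, the old command pinned the endpoint to cores 52-55 via `--physcpubind`, while the updated one binds a single core. A minimal sketch (variable names are ours, not from the repo) of how such a core-range string maps to a core count:

```shell
# Count the cores covered by a numactl --physcpubind range like "52-55".
# (Illustrative helper only; not part of the video-streamer repo.)
range="52-55"
first=${range%-*}              # text before the dash: 52
last=${range#*-}               # text after the dash: 55
ncores=$((last - first + 1))
echo "$range covers $ncores cores"
```

With `range="52-55"` this prints `52-55 covers 4 cores`.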
@@ -84,52 +85,98 @@ For docker environment, please go to [docker session](#docker).
 #### Setup
 Go to the directory where you cloned the repo and follow the commands below to install the required software.
 
+```
+cd video-streamer
+```
+##### 1. Video and AI Setup
+1. Edit `install.sh` for the `mesa-libGL` install.
+   In `install.sh`, the default command `sudo yum install -y mesa-libGL` is for CentOS. For Ubuntu, change it as follows:
+```
+#sudo yum install -y mesa-libGL
+sudo apt install libgl1-mesa-glx
+```
+
+2. Run the install script.
+
+   Create the conda environment `vdms-test`:
 ```
 conda create -n vdms-test python=3.8
+```
+   Activate `vdms-test`, then run the install:
+```
 conda activate vdms-test
 ./install.sh
 ```
 
 By default, this installs intel-tensorflow-avx512. If you need to run the workflow with a specific TensorFlow build, update it in `requirements.txt`.
 
+##### 2. VDMS Database Setup
+The VDMS database instance runs in docker. Pull the docker image:
+```
+docker pull vuiseng9/intellabs-vdms:demo-191220
+```
+
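The manual `install.sh` edit above can instead be guarded by a distro check; a minimal sketch (our own, assuming `/etc/debian_version` is an acceptable Ubuntu/Debian marker — the repo's `install.sh` expects a manual edit):

```shell
# Pick the mesa GL install command based on the distro family.
# (Illustrative sketch; not part of the video-streamer repo.)
if [ -f /etc/debian_version ]; then
  gl_install="sudo apt install libgl1-mesa-glx"   # Ubuntu/Debian
else
  gl_install="sudo yum install -y mesa-libGL"     # CentOS
fi
echo "would run: $gl_install"
```

The command is only echoed here; in practice it would replace the hard-coded line in `install.sh`.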
 ### Configuration
 
-* config/pipeline-settings for pipeline setting
+1. Edit `config/pipeline-settings` for the pipeline settings.
 
 Modify the parameters `gst_plugin_dir` and `video_path` to fit your GStreamer plugin directory and input video path.
 
-For example, we have `test.mp4` in `dataset` folder and gstreamer installed in `/home/test_usr/miniconda3/envs/vdms-test`. So we set as following:
 ```
-video_path=dataset/test.mp4
-gst_plugin_dir=/home/test_usr/miniconda3/envs/vdms-test/lib/gstreamer-1.0
+mv classroom.mp4 dataset/classroom.mp4
 ```
-* config/settings.yaml for inference setting
-Customize to choose FP32, AMPBF16 or INT8 for inference.
 
-CPU Optimization settings are found in two files:
+For example, with
+- `classroom.mp4` in the `dataset` folder, and
+- GStreamer installed in `/home/test_usr/miniconda3/envs/vdms-test`,
+the settings are:
+```
+video_path=dataset/classroom.mp4
+gst_plugin_dir=/home/test_usr/miniconda3/envs/vdms-test/lib/gstreamer-1.0
+```
+2. Edit `config/settings.yaml` for the inference settings.
+   Set `data_type` to `FP32`, `AMPBF16` or `INT8`; `INT8` is recommended for better performance.
 
-`config/pipeline-settings`
-1. cores_per_pipeline
-This controls the number of CPU cores to run in the whole pipeline.
+3. CPU optimization settings are found in two files:
 
-`config/settings.yaml`
-1. inter_op_parallelism : "2"
-2. intra_op_parallelism : "4"
+   3.1) `config/pipeline-settings`
+   - `cores_per_pipeline` controls the number of CPU cores used by the whole pipeline.
 
-This controls TensorFlow thread settings.
-* inter_op_parallelism: the number of threads used by independent non-blocking operations in TensorFlow.
-* intra_op_parallelism: execution of an individual operation can be parallelized on a pool of threads in TensorFlow. `intra_op_parallelism` controls the maximum thread number of the pool.
+   3.2) `config/settings.yaml`
+```
+inter_op_parallelism : "2"  # number of threads used by independent non-blocking TensorFlow operations
+intra_op_parallelism : "4"  # maximum size of the thread pool used to parallelize a single operation
+```
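Typos in either path fail only at pipeline start, so it can help to validate them first. A minimal sketch (the helper is ours, not part of the repo; the paths are the example values above):

```shell
# Sanity-check the paths named in config/pipeline-settings before a run.
# (Illustrative helper only; not part of the video-streamer repo.)
check_path() {
  # $1 = path, $2 = setting name
  if [ -e "$1" ]; then
    echo "ok: $2"
  else
    echo "missing: $2 ($1)"
  fi
}

check_path "dataset/classroom.mp4" "video_path"
check_path "/home/test_usr/miniconda3/envs/vdms-test/lib/gstreamer-1.0" "gst_plugin_dir"
```

Each line reports `ok:` or `missing:` for the corresponding setting.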
 
 ### How to run
 
+1. Initiate the VDMS inference endpoint:
+
+```
+numactl --physcpubind=0 --membind=1 docker run --net=host -d vuiseng9/intellabs-vdms:demo-191220
+```
+
+2. Start the video AI workflow:
+
+```
+./run.sh 1
+```
+
 `run.sh` accepts a single input parameter which defines how many separate instances of the gstreamer pipeline to run. Each OpenMP thread from a given instance is pinned to a physical CPU core, e.g. when running four pipelines with OMP_NUM_THREADS=4,
+as configured in `config/pipeline-settings` by:
+
+```
+cores_per_pipeline = 4
+```
+
 |*Pipeline*|*Cores*|*Memory*|
 | ---- | ---- | ---- |
 |1| 0-3| Local |
 |2| 4-7| Local |
 |3| 8-11| Local |
 |4|12-15| Local |
 
-It is very important that the pipelines don't overlap numa domains or any other hardware non-uniformity. These values must be updated for each core architecture to get optimum performance.
+It is very important that the pipelines don't overlap NUMA domains or any other hardware non-uniformity. These values must be updated for each CPU architecture to get optimum performance.
 
 For launching the workload using a single instance, use the following command:
 ```
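The pinning scheme in the table above follows directly from `cores_per_pipeline`; a small sketch (variable names are ours, not from `run.sh`) reproduces the mapping:

```shell
# Reproduce the pipeline-to-core mapping from the table above, for
# cores_per_pipeline=4 and four pipeline instances (./run.sh 4).
cores_per_pipeline=4
num_pipelines=4
i=0
while [ "$i" -lt "$num_pipelines" ]; do
  start=$((i * cores_per_pipeline))        # first core of this instance
  end=$((start + cores_per_pipeline - 1))  # last core of this instance
  echo "pipeline $((i + 1)): cores ${start}-${end}"
  i=$((i + 1))
done
```

This prints `pipeline 1: cores 0-3` through `pipeline 4: cores 12-15`, matching the table; on multi-socket machines the ranges must additionally stay within one NUMA domain, per the note above.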
@@ -153,4 +200,4 @@ The hardware below is recommended for use with this reference implementation.
 [Intel® AI Analytics Toolkit (AI Kit)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html)
 
 # Support
-Video Streamer tracks both bugs and enhancement requests using Github. We welcome input, however, before filing a request, please make sure you do the following: Search the Github issue database.
+Video Streamer tracks both bugs and enhancement requests using GitHub. We welcome input; however, before filing a request, please first search the GitHub issue database.
