
Commit 3b3ca75

refine docs
1 parent ba409b3 commit 3b3ca75

2 files changed (+22 -10 lines)

deploy/slim/readme.md

Lines changed: 8 additions & 2 deletions
@@ -69,10 +69,16 @@ TODO
 Taking the PP-TSM model as an example, after generating the `inference model`, run offline quantization as follows:

 ```bash
-# First enter the deploy/slim directory
+# Download and extract a small amount of data to calibrate the offline quantization
+pushd ./data/k400
+wget -nc https://videotag.bj.bcebos.com/Data/k400_rawframes_small.tar
+tar -xf k400_rawframes_small.tar
+popd
+
+# Then enter the deploy/slim directory
 cd deploy/slim

-# Then execute the offline quantization command
+# Execute the offline quantization command
 python3.7 quant_post_static.py \
     -c ../../configs/recognition/pptsm/pptsm_k400_frames_uniform_quantization.yaml \
     --use_gpu=True
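Conceptually, offline (post-training static) quantization like the step above runs a small calibration set through the model to choose scaling factors, then maps float32 tensors to int8. The following is a minimal sketch of symmetric linear quantization for illustration only; PaddleSlim's `quant_post_static` uses more sophisticated calibration strategies than a plain max-abs scale.

```python
# Illustrative sketch of symmetric linear int8 quantization, the idea
# behind calibration-based offline quantization. Not PaddleSlim's
# actual implementation.

def calibrate_scale(samples):
    """Pick a scale so the largest calibration value maps to 127."""
    max_abs = max(abs(v) for v in samples)
    return max_abs / 127.0 if max_abs else 1.0

def quantize(values, scale):
    """Map float values to int8 integers clipped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Map int8 integers back to approximate float values."""
    return [q * scale for q in qvalues]

if __name__ == "__main__":
    calib = [0.02, -1.27, 0.8, 0.31]        # tiny calibration batch
    scale = calibrate_scale(calib)           # 1.27 / 127, about 0.01
    q = quantize([0.5, -0.25, 1.0], scale)   # -> [50, -25, 100]
    print(q)
    print(dequantize(q, scale))              # approximately the inputs
```

The accuracy loss mentioned in these docs comes from the rounding and clipping in `quantize`: values that calibration never saw may clip, and nearby floats collapse onto the same integer.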

deploy/slim/readme_en.md

Lines changed: 14 additions & 8 deletions
@@ -28,15 +28,15 @@ Model compression mainly includes five steps:

 ```bash
 python3.7 -m pip install paddleslim -i https://pypi.tuna.tsinghua.edu.cn/simple
-````
+```

 * If you want the latest features of PaddleSlim, you can install it from source:

 ```bash
 git clone https://github.com/PaddlePaddle/PaddleSlim.git
 cd PaddleSlim
 python3.7 setup.py install
-````
+```

 ### 2. Prepare the trained model

@@ -48,7 +48,7 @@ Go to the PaddleVideo root directory

 ```bash
 cd PaddleVideo
-````
+```

 The offline quantization code is located in `deploy/slim/quant_post_static.py`.

@@ -68,14 +68,20 @@ Generally speaking, the offline quantization model has more accuracy loss.
 Taking the PP-TSM model as an example, after generating the `inference model`, the offline quantization operation is as follows:

 ```bash
-# First enter the deploy/slim directory
+# download a small amount of data for calibration
+pushd ./data/k400
+wget -nc https://videotag.bj.bcebos.com/Data/k400_rawframes_small.tar
+tar -xf k400_rawframes_small.tar
+popd
+
+# then switch to deploy/slim
 cd deploy/slim

-# Then execute the offline quantization command
+# execute the quantization script
 python3.7 quant_post_static.py \
     -c ../../configs/recognition/pptsm/pptsm_k400_frames_uniform_quantization.yaml \
     --use_gpu=True
-````
+```

 All quantization environment parameters except `use_gpu` are configured in the `pptsm_k400_frames_uniform_quantization.yaml` file.
 `inference_model_dir` is the directory of the `inference model` exported in the previous step, and `quant_output_dir` is the output directory for the quantized model.
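For orientation, the two keys named above might appear in the yaml file roughly as follows. This is a hypothetical fragment: only the key names `inference_model_dir` and `quant_output_dir` are confirmed by the text, and the paths shown are illustrative.

```yaml
# Hypothetical fragment of pptsm_k400_frames_uniform_quantization.yaml;
# the paths are illustrative, not taken from the real config.
inference_model_dir: ./inference/ppTSM           # inference model from the export step
quant_output_dir: ./inference/ppTSM/quant_model  # where the quantized model is written
```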
@@ -96,14 +102,14 @@ python3.7 tools/predict.py \
     --params_file ./inference/ppTSM/quant_model/__params__ \
     --use_gpu=True \
     --use_tensorrt=False
-````
+```

 The output is as follows:
 ```bash
 Current video file: data/example.avi
 top-1 class: 5
 top-1 score: 0.9997928738594055
-````
+```
 #### 3.2 Model pruning
 TODO

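The `top-1 class` / `top-1 score` lines in the predict output above are simply the argmax over the model's per-class probabilities. A minimal sketch of that post-processing step, using made-up probability values rather than real model output:

```python
# Sketch: derive "top-1 class" / "top-1 score" lines, in the format
# printed by tools/predict.py above, from a vector of class
# probabilities. The probability values below are made up.

def top1(scores):
    """Return (index, value) of the highest-scoring class."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, scores[best]

if __name__ == "__main__":
    scores = [0.0001, 0.0000, 0.0001, 0.0000, 0.0000, 0.9998]
    cls, score = top1(scores)
    print(f"top-1 class: {cls}")    # top-1 class: 5
    print(f"top-1 score: {score}")  # top-1 score: 0.9998
```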
