
Commit 1c59abe

docs: add en doc
1 parent 991a2a3 commit 1c59abe

File tree

2 files changed (+62, -3 lines)


deploy/paddle2onnx/readme_en.md

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
# Paddle2ONNX: Converting To ONNX and Deployment

This section introduces how to convert the Paddle Inference model ResNet50_vd to an ONNX model and deploy it with the ONNX Runtime inference engine.

## 1. Installation

First, install Paddle2ONNX and ONNX Runtime. Paddle2ONNX is a toolkit for converting Paddle Inference models to the ONNX format. Please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/README_en.md) for more information.

- Paddle2ONNX Installation

```
python3.7 -m pip install paddle2onnx
```

- ONNX Runtime Installation

```
python3.7 -m pip install onnxruntime
```
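After installing, a quick sanity check (a minimal sketch, not part of the original guide) can confirm that both packages are importable before attempting a conversion:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None

# Both should print True once the installation steps above have run.
for package in ("paddle2onnx", "onnxruntime"):
    print(f"{package} installed: {is_installed(package)}")
```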
## 2. Converting to ONNX

Download the Paddle Inference model ResNet50_vd:

```
cd deploy
mkdir models && cd models
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
cd ..
```
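Before converting, it may help to confirm that the extracted directory contains the files paddle2onnx expects; `missing_files` below is a hypothetical helper using only the standard library:

```python
from pathlib import Path

# The two files paddle2onnx reads from a Paddle Inference model directory.
REQUIRED_FILES = ("inference.pdmodel", "inference.pdiparams")

def missing_files(model_dir: str) -> list:
    """Return the required model files that are absent from `model_dir`."""
    root = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

# An empty list means the directory is ready for conversion.
print(missing_files("./models/ResNet50_vd_infer"))
```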
Convert it to an ONNX model:

```
paddle2onnx --model_dir=./models/ResNet50_vd_infer/ \
--model_filename=inference.pdmodel \
--params_filename=inference.pdiparams \
--save_file=./models/ResNet50_vd_infer/inference.onnx \
--opset_version=10 \
--enable_onnx_checker=True
```
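If you drive the conversion from a script rather than the shell, the same command can be assembled as an argument list; `build_paddle2onnx_cmd` is a hypothetical helper mirroring the flags above, not part of the paddle2onnx API:

```python
import subprocess

def build_paddle2onnx_cmd(model_dir: str, save_file: str, opset_version: int = 10) -> list:
    """Assemble the paddle2onnx command shown above as an argument list."""
    return [
        "paddle2onnx",
        f"--model_dir={model_dir}",
        "--model_filename=inference.pdmodel",
        "--params_filename=inference.pdiparams",
        f"--save_file={save_file}",
        f"--opset_version={opset_version}",
        "--enable_onnx_checker=True",
    ]

cmd = build_paddle2onnx_cmd(
    "./models/ResNet50_vd_infer/",
    "./models/ResNet50_vd_infer/inference.onnx",
)
# subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
print(" ".join(cmd))
```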
After running the above command, the converted ONNX model will be saved in `./models/ResNet50_vd_infer/`.

## 3. Deployment

Run inference with the ONNX model using the command below:

```
python3.7 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.use_onnx=True \
-o Global.use_gpu=False \
-o Global.inference_model_dir=./models/ResNet50_vd_infer
```
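The `-o` flags override keys in the YAML config by dotted path. The sketch below is a rough illustration of that idea, not PaddleClas's actual override implementation:

```python
def apply_override(config: dict, assignment: str) -> dict:
    """Apply one 'a.b.c=value' style override to a nested config dict."""
    dotted, _, raw = assignment.partition("=")
    # Coerce the boolean literals the way CLI overrides usually do.
    value = {"True": True, "False": False}.get(raw, raw)
    keys = dotted.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return config

cfg = {"Global": {"use_gpu": True}}
apply_override(cfg, "Global.use_onnx=True")
apply_override(cfg, "Global.use_gpu=False")
print(cfg)  # {'Global': {'use_gpu': False, 'use_onnx': True}}
```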
The prediction results:

```
ILSVRC2012_val_00000010.jpeg: class id(s): [153, 204, 229, 332, 155], score(s): [0.69, 0.10, 0.02, 0.01, 0.01], label_name(s): ['Maltese dog, Maltese terrier, Maltese', 'Lhasa, Lhasa apso', 'Old English sheepdog, bobtail', 'Angora, Angora rabbit', 'Shih-Tzu']
```
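If you post-process this output, each result line can be parsed into structured fields; `parse_prediction` is a hypothetical helper, shown on an abbreviated copy of the line above:

```python
import re

def parse_prediction(line: str) -> dict:
    """Parse one predict_cls.py output line into file name, class ids, and scores."""
    file_name, _, rest = line.partition(":")
    ids = [int(x) for x in
           re.search(r"class id\(s\): \[([^\]]*)\]", rest).group(1).split(", ")]
    scores = [float(x) for x in
              re.search(r"score\(s\): \[([^\]]*)\]", rest).group(1).split(", ")]
    return {"file": file_name.strip(), "class_ids": ids, "scores": scores}

line = ("ILSVRC2012_val_00000010.jpeg: class id(s): [153, 204, 229, 332, 155], "
        "score(s): [0.69, 0.10, 0.02, 0.01, 0.01], label_name(s): ['Maltese dog']")
result = parse_prediction(line)
print(result["class_ids"][0], result["scores"][0])  # 153 0.69
```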

docs/en/PULC/PULC_person_exists_en.md

Lines changed: 3 additions & 3 deletions
@@ -32,7 +32,7 @@
 - [6.3 Deployment with C++](#6.3)
 - [6.4 Deployment as Service](#6.4)
 - [6.5 Deployment on Mobile](#6.5)
-- [6.6 To ONNX and Deployment](#6.6)
+- [6.6 Converting To ONNX and Deployment](#6.6)


 <a name="1"></a>
@@ -274,7 +274,7 @@ The results:

 ### 4.1 SKL-UGI Knowledge Distillation

-SKL-UGI is a simple but effective knowledge distillation algorithm proposed by PaddleClas. Please refer to [SKL-UGI Knowledge Distillation](../advanced_tutorials/ssld_en.md) for more details.
+SKL-UGI is a simple but effective knowledge distillation algorithm proposed by PaddleClas. Please refer to [SKL-UGI Knowledge Distillation](../advanced_tutorials/distillation/distillation_en.md) for more details.

 <a name="4.1.1"></a>

@@ -450,7 +450,7 @@ PaddleClas provides an example of how to deploy on mobile by Paddle-Lite. Please

 <a name="6.6"></a>

-### 6.6 To ONNX and Deployment
+### 6.6 Converting To ONNX and Deployment

 Paddle2ONNX supports converting Paddle Inference models to ONNX models, which can then be deployed on various inference engines such as TensorRT, OpenVINO, MNN/TNN, NCNN, and so on. For details about Paddle2ONNX, please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
