Commit 87a320c

update code
1 parent 0dc06c2 commit 87a320c

2 files changed: 2 additions & 6 deletions


deploy/python_serving/readme.md

Lines changed: 1 addition & 4 deletions
````diff
@@ -64,10 +64,7 @@ python3.7 -m pip install paddle-serving-server-gpu==0.7.0.post112 # GPU with CU
 unzip ppTSM.zip
 popd
 ```
-
-- We provide the converted [PP-TSM inference model](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM.zip)
-
-- Use paddle_serving_client to convert the downloaded inference model into a model format that is easy to deploy on the Server:
+- Use paddle_serving_client to convert the converted inference model into a model format that is easy to deploy on the Server:
 ```bash
 python3.7 -m paddle_serving_client.convert \
     --dirname inference/ppTSM \
````

deploy/python_serving/readme_en.md

Lines changed: 1 addition & 2 deletions
````diff
@@ -65,8 +65,7 @@ When using PaddleServing for service deployment, you need to convert the saved i
 popd
 ```
 
-- We provide the converted [PP-TSM inference model](https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM.zip)
-- Use paddle_serving_client to convert the downloaded inference model into a model format that is easy for server deployment:
+- Use paddle_serving_client to convert the converted inference model into a model format that is easy for server deployment:
 ```bash
 python3.7 -m paddle_serving_client.convert \
     --dirname inference/ppTSM \
````
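Both diffs truncate the `paddle_serving_client.convert` command after `--dirname`. For context, a sketch of what a complete invocation typically looks like, assuming the model was extracted to `inference/ppTSM` and that its files are named `ppTSM.pdmodel` / `ppTSM.pdiparams` and the output directories `ppTSM_serving_server` / `ppTSM_serving_client` — these values are assumptions for illustration, not taken from the diff:

```shell
# Hedged sketch: full conversion step (the diff only shows the first flag).
# Requires paddle-serving-client to be installed, per the readme's pip step.
python3.7 -m paddle_serving_client.convert \
    --dirname inference/ppTSM \
    --model_filename ppTSM.pdmodel \
    --params_filename ppTSM.pdiparams \
    --serving_server ./ppTSM_serving_server/ \
    --serving_client ./ppTSM_serving_client/
```

The `--serving_server` directory holds the model and configuration loaded by the Serving server process; the `--serving_client` directory holds the client-side configuration used when sending prediction requests.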
