deploy/cpp_infer/readme_en.md (+13 −4)
@@ -101,12 +101,12 @@ There are two ways to obtain the Paddle prediction library, which will be described
#### 1.2.1 Download and install directly
- * [Paddle prediction library official website](https://paddleinference.paddlepaddle.org.cn/v2.1/user_guides/download_lib.html) provides Linux prediction libraries built for different CUDA versions; you can check and **select the appropriate prediction library version** on the official website (it is recommended to select a prediction library with paddle version >= 2.0.1).
+ * [Paddle prediction library official website](https://paddleinference.paddlepaddle.org.cn/v2.2/user_guides/download_lib.html) provides Linux prediction libraries built for different CUDA versions; you can check and **select the appropriate prediction library version** on the official website (it is recommended to select a prediction library with paddle version >= 2.0.1).
* Download and get a `paddle_inference.tgz` compressed package, then unzip it into a folder with the following command (taking gcc 8.2 as the machine environment, as an example):
* After entering the Paddle directory, the compilation method is as follows.
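The download-and-unpack step from 1.2.1 can be sketched as below. The URL is a placeholder (copy the exact link for your CUDA/gcc combination from the download page), and the exact contents of `version.txt` vary by build:

```shell
# 1. Fetch the package — substitute the exact link for your environment
#    from the official download page (placeholder URL shown):
# wget -O paddle_inference.tgz "https://paddleinference.paddlepaddle.org.cn/<your-build>/paddle_inference.tgz"

# 2. Unpack it; the resulting paddle_inference/ directory contains the
#    headers and libraries, plus a version.txt describing the build
#    (gcc, CUDA/cuDNN, TensorRT versions).
tar -xzf paddle_inference.tgz
cat paddle_inference/version.txt
```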
@@ -217,7 +217,16 @@ Operation mode:
Among them, `mode` is a required parameter indicating the selected function; its value range is ['rec'], which means **video recognition** (more functions will be added in succession).
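The documented value range of `mode` can be mirrored in shell as below; the real validation happens inside the C++ tool, so this is only a sketch of the rule the text states:

```shell
# `mode` is required and currently only 'rec' (video recognition) is valid.
MODE="rec"
case "$MODE" in
  rec) echo "video recognition" ;;
  *)   echo "unsupported mode: $MODE" >&2; exit 1 ;;
esac
```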
+ Note: If you want to enable the TensorRT optimization option during prediction, you need to run the following command to set the relevant paths of TensorRT.
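The command the added note refers to is not shown in this excerpt; a typical way to make the TensorRT shared libraries visible to the loader looks like the following (the install path is an assumption, adjust it to your machine):

```shell
# Assumed install location — point this at your actual TensorRT directory.
export TENSORRT_DIR=/usr/local/TensorRT-7.2.3.4
# Prepend the TensorRT lib directory so the runtime linker can find it.
export LD_LIBRARY_PATH=${TENSORRT_DIR}/lib:${LD_LIBRARY_PATH:-}
```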