
Commit 8cfe40f

Merge branch 'PaddlePaddle:main' into main
2 parents 40fc8bb + 6dce3e0 commit 8cfe40f

37 files changed (+3392, -2759 lines)

README_en.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ In addition to providing an outstanding model library, PaddleOCR 3.0 also offers
## 📣 Recent updates
🔥🔥2025.05.20: Official Release of **PaddleOCR v3.0**, including:
- **PP-OCRv5**: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.
-1. 🌐 Single-model support for **five** text types - Seamlessly process **Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English** and **Japanse** within a single model.
+1. 🌐 Single-model support for **five** text types - Seamlessly process **Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English** and **Japanese** within a single model.
2. ✍️ Improved **handwriting recognition**: Significantly better at complex cursive scripts and non-standard handwriting.
3. 🎯 **13-point accuracy gain** over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+# Inference Based on the Python or C++ Prediction Engine
+
+Since the 2.x branch, inference based on the Python or C++ prediction engine has been an important feature. It allows users to load OCR-related models and run inference without installing the wheel package. For specific usage, please refer to the following documents:
+
+* [Inference based on the Python prediction engine](../../version2.x/legacy/python_infer.md)
+* [Inference based on the C++ prediction engine](../../version2.x/legacy/cpp_infer.md)
+* [Supported model list](../../version2.x/legacy/model_list_2.x.md)

docs/version3.x/deployment/serving.en.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ The command-line options related to serving are as follows:
</tr>
<tr>
<td><code>--device</code></td>
-<td>Deployment device for the pipeline. Defaults to <code>cpu</code> (if GPU is unavailable) or <code>gpu</code> (if GPU is available).</td>
+<td>Deployment device for the pipeline. By default, a GPU will be used if available; otherwise, a CPU will be used.</td>
</tr>
<tr>
<td><code>--host</code></td>

docs/version3.x/deployment/serving.md

Lines changed: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
</tr>
<tr>
<td><code>--device</code></td>
-<td>Deployment device for the pipeline. Defaults to <code>cpu</code> (if no GPU is available) or <code>gpu</code> (if a GPU is available).</td>
+<td>Deployment device for the pipeline. By default, the GPU is used when available; otherwise, the CPU is used.</td>
</tr>
<tr>
<td><code>--host</code></td>

docs/version3.x/model_list.md

Lines changed: 10 additions & 10 deletions
@@ -127,21 +127,21 @@ PaddleOCR ships with multiple built-in pipelines; each pipeline contains several modules, and each
</tr>
<tr>
<td>PP-OCRv5_server_rec</td>
-<td>-</td>
-<td>- / -</td>
-<td>- / -</td>
-<td>206 M</td>
+<td>86.38</td>
+<td>8.45/2.36</td>
+<td>122.69/122.69</td>
+<td>81 M</td>
<td><a href="https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/configs/modules/text_recognition/PP-OCRv5_server_rec.yaml">PP-OCRv5_server_rec.yaml</a></td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_server_rec_infer.tar">Inference Model</a>/<a href="">Training Model</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams">Training Model</a></td>
</tr>
<tr>
<td>PP-OCRv5_mobile_rec</td>
-<td>-</td>
-<td>- / -</td>
-<td>- / -</td>
-<td>137 M</td>
+<td>81.29</td>
+<td>1.46/5.43</td>
+<td>5.32/91.79</td>
+<td>16 M</td>
<td><a href="https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/configs/modules/text_recognition/PP-OCRv5_mobile_rec.yaml">PP-OCRv5_mobile_rec.yaml</a></td>
-<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_mobile_rec_infer.tar">Inference Model</a>/<a href="">Training Model</a></td>
+<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_mobile_rec_pretrained.pdparams">Training Model</a></td>
</tr>
<tr>
<td>PP-OCRv4_server_rec_doc</td>
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Module Overview
+
+A module is the smallest unit that implements basic functionality. Modules typically use a single model to accomplish specific tasks, such as text detection, image classification, and other basic functions. As fundamental building blocks, modules provide the necessary functional support for more complex application scenarios. This design approach allows users to flexibly select and combine different modules according to their needs, thereby simplifying the development process and enhancing development flexibility and efficiency.
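To make the single-module pattern described above concrete, here is a minimal Python sketch of driving one module on its own. The `TextDetection` class name, the model name, and the input path are illustrative assumptions rather than something this commit defines.

```python
# Minimal sketch of the single-module pattern (class, model, and paths are assumptions).
from paddleocr import TextDetection  # assumed module-level class exported by the wheel

model = TextDetection(model_name="PP-OCRv5_mobile_det")  # one module, one model, one task
output = model.predict("sample_image.png", batch_size=1)  # hypothetical input image

for res in output:
    res.print()  # inspect the prediction for this image
```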
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Module Overview
+
+A module is the smallest unit that implements basic functionality. A module typically uses a single model to accomplish a specific task, such as text detection or image classification. As fundamental building blocks, modules provide the necessary functional support for more complex application scenarios. This design allows users to flexibly select and combine different modules as needed, simplifying the development process and improving development flexibility and efficiency.

docs/version3.x/module_usage/table_cells_detection.en.md

Lines changed: 3 additions & 3 deletions
@@ -81,7 +81,7 @@ The Table Cell Detection Module is a key component of the table recognition task
## III. Quick Start

-> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.md).
+> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.en.md).

You can quickly experience it with one command:

@@ -187,8 +187,8 @@ The relevant methods, parameters, etc., are described as follows:
<td><code>float/dict</code></td>
<td>
<ul>
-<li><b>float</b>, e.g., 0.2, indicates filtering out all bounding boxes with confidence lower than 0.2</li>
-<li><b>dictionary</b>, where the key is of type <b>int</b> representing <code>cls_id</code>, and the value is of type <b>float</b> representing the threshold. For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> applies a threshold of 0.45 for category cls_id 0, 0.48 for category cls_id 1, and 0.4 for category cls_id 7</li>
+<li><b>float</b>, e.g., 0.3, indicates filtering out all bounding boxes with confidence lower than 0.3</li>
+<li><b>dictionary</b>, where the key is of type <b>int</b> representing <code>cls_id</code>, and the value is of type <b>float</b> representing the threshold. For example, <code>{0: 0.3}</code> applies a threshold of 0.3 for category cls_id 0</li>
</ul>
</td>
<td>None</td>
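As a companion to the `threshold` description above, the following sketch shows both accepted forms being passed to a module's `predict()` call. The `TableCellsDetection` class name, the model name, and the image path are assumptions for illustration; only the float/dict semantics come from the table above.

```python
# Sketch of the two threshold forms documented above (class, model, and paths are assumptions).
from paddleocr import TableCellsDetection

model = TableCellsDetection(model_name="RT-DETR-L_wired_table_cell_det")  # assumed model name

# A single float: the same confidence cutoff is applied to every category.
output = model.predict("table_cells_sample.jpg", threshold=0.3)

# A dict keyed by cls_id: per-category cutoffs (here only category 0 is constrained).
output = model.predict("table_cells_sample.jpg", threshold={0: 0.3})

for res in output:
    res.print()
```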

docs/version3.x/module_usage/table_cells_detection.md

Lines changed: 3 additions & 3 deletions
@@ -236,11 +236,11 @@ for res in output:
<td><code>float/dict</code></td>
<td>
<ul>
-<li><b>float</b>, e.g., 0.2, meaning all boxes with a score lower than 0.2 are filtered out</li>
-<li><b>dict</b>, whose keys are of type <b>int</b> representing <code>cls_id</code> and whose values are <b>float</b> thresholds. For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> applies a threshold of 0.45 to the class with cls_id 0, 0.48 to the class with cls_id 1, and 0.4 to the class with cls_id 7</li>
+<li><b>float</b>, e.g., 0.3, meaning all boxes with a score lower than 0.3 are filtered out</li>
+<li><b>dict</b>, whose keys are of type <b>int</b> representing <code>cls_id</code> and whose values are <b>float</b> thresholds. For example, <code>{0: 0.3}</code> applies a threshold of 0.3 to the class with cls_id 0</li>
</ul>
</td>
-<td></td>
+<td>None</td>
</tr>
</table>

docs/version3.x/module_usage/table_classification.en.md

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ The Table Classification Module is a key component in computer vision systems, r
## 3. Quick Start

-> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.md).
+> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.en.md).

You can quickly experience it with one command:

docs/version3.x/module_usage/table_classification.md

Lines changed: 1 addition & 1 deletion
@@ -272,6 +272,6 @@ for res in output:
## IV. Secondary Development

-Since PaddleOCR does not directly provide training for the table classification module, if you need to train a table classification model, please refer to the [PaddleX table classification module secondary development](https://paddlepaddle.github.io/PaddleX/latest/module_usage/tutorials/ocr_modules/table_classification.html#_4) section. The trained model can be seamlessly integrated into the PaddleOCR API for inference.
+Since PaddleOCR does not directly provide training for the table classification module, if you need to train a table classification model, please refer to the [PaddleX table classification module secondary development](https://paddlepaddle.github.io/PaddleX/latest/module_usage/tutorials/ocr_modules/table_classification.html#_5) section. The trained model can be seamlessly integrated into the PaddleOCR API for inference.
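The paragraph above notes that a model trained through PaddleX can be integrated back into the PaddleOCR API. A rough sketch of what that integration might look like is given below; the `TableClassification` class name, the model name, and the exported-weights directory are assumptions, not something defined by this commit.

```python
# Hypothetical sketch: plugging a custom-trained table classification model into the API.
# The class name, model name, and model_dir path below are assumptions for illustration.
from paddleocr import TableClassification

model = TableClassification(
    model_name="PP-LCNet_x1_0_table_cls",   # assumed built-in model name
    model_dir="./my_table_cls_infer/",      # hypothetical directory exported after PaddleX training
)
output = model.predict("table_sample.jpg", batch_size=1)

for res in output:
    res.print()  # e.g. a wired vs. wireless table classification result
```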
## V. FAQ

docs/version3.x/module_usage/table_structure_recognition.en.md

Lines changed: 32 additions & 19 deletions
@@ -6,7 +6,7 @@ comments: true
## 1. Overview

-Table structure recognition is an important component of table recognition systems, capable of converting non-editable table images into editable table formats (such as HTML). The goal of table structure recognition is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The table structure recognition module usually outputs HTML or Latex code for the table area, which is then passed as input to the table content recognition module for further processing.
+Table structure recognition is an important component of table recognition systems, capable of converting non-editable table images into editable table formats (such as HTML). The goal of table structure recognition is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The table structure recognition module usually outputs HTML code for the table area, which is then passed as input to the table recognition pipeline for further processing.

## 2. Supported Model List

@@ -56,7 +56,7 @@ Table structure recognition is an important component of table recognition syste
<ul>
<li><b>Performance Test Environment</b>
<ul>
-<li><strong>Test Dataset:</strong> High-difficulty Chinese table recognition dataset built internally by PaddleX.</li>
+<li><strong>Test Dataset:</strong> High-difficulty Chinese table recognition dataset.</li>
<li><strong>Hardware Configuration:</strong>
<ul>
<li>GPU: NVIDIA Tesla T4</li>
@@ -147,7 +147,7 @@ Descriptions of related methods and parameters are as follows:
<td><code>model_name</code></td>
<td>Model name</td>
<td><code>str</code></td>
-<td>All model names supported by PaddleX</td>
+<td>All model names</td>
<td>None</td>
</tr>
<tr>
@@ -180,7 +180,7 @@ Descriptions of related methods and parameters are as follows:
</tr>
</table>

-* Among them, `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX are used by default. On this basis, if `model_dir` is specified, the user's custom model is used.
+* Among them, `model_name` must be specified. If `model_dir` is specified, the user's custom model is used.

* Call the `predict()` method of the table structure recognition model for inference prediction, which returns a result list. In addition, this module also provides the `predict_iter()` method. The two are completely consistent in parameter acceptance and result return. The difference is that `predict_iter()` returns a `generator`, which can process and obtain prediction results step by step, suitable for handling large datasets or scenarios where you want to save memory. You can choose to use either method according to your actual needs. The `predict()` method has parameters `input` and `batch_size`, described as follows:
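For readers of the `predict()` / `predict_iter()` description above, here is a small sketch contrasting the two calls. The `TableStructureRecognition` class name and the file paths are assumptions for illustration; the list-versus-generator behaviour follows the paragraph above.

```python
# Sketch contrasting predict() and predict_iter() (class name and paths are assumptions).
from paddleocr import TableStructureRecognition

model = TableStructureRecognition(model_name="SLANet_plus")

# predict(): returns the full list of results in one call.
results = model.predict("table_0001.jpg", batch_size=1)
for res in results:
    res.print()

# predict_iter(): returns a generator, so large inputs can be consumed one at a time
# without holding every result in memory.
for res in model.predict_iter(["table_0001.jpg", "table_0002.jpg"], batch_size=1):
    res.print()
```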

@@ -290,7 +290,7 @@ Descriptions of related methods and parameters are as follows:
## 4. Secondary Development

-If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training `SLANet` is used as an example, and for other models, just replace the corresponding configuration file. First, you need to prepare a dataset for table structure recognition, which can be prepared with reference to the format of the [table structure recognition demo data](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar). Once ready, you can train and export the model as follows. After exporting, you can quickly integrate the model into the above API. Here, the table structure recognition demo data is used as an example. Before training the model, please make sure you have installed the dependencies required by PaddleOCR according to the [installation documentation](../installation.en.md).
+If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training `SLANet_plus` is used as an example, and for other models, just replace the corresponding configuration file. First, you need to prepare a dataset for table structure recognition, which can be prepared with reference to the format of the [table structure recognition demo data](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar). Once ready, you can train and export the model as follows. After exporting, you can quickly integrate the model into the above API. Here, the table structure recognition demo data is used as an example. Before training the model, please make sure you have installed the dependencies required by PaddleOCR according to the [installation documentation](../installation.en.md).

## 4.1 Dataset and Pretrained Model Preparation
@@ -306,24 +306,35 @@ tar -xf table_rec_dataset_examples.tar
### 4.1.2 Download Pretrained Model

```shell
-# Download SLANet pretrained model
-wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/SLANet_pretrained.pdparams
+# Download SLANet_plus pretrained model
+wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/SLANet_plus_pretrained.pdparams
```
### 4.2 Model Training

-PaddleOCR is modularized. When training the `SLANet` recognition model, you need to use the [configuration file](https://github.com/PaddlePaddle/PaddleOCR/blob/main/configs/table/SLANet.yml) of `SLANet`.
+PaddleOCR is modularized. When training the `SLANet_plus` recognition model, you need to use the [configuration file](https://github.com/PaddlePaddle/PaddleOCR/blob/main/configs/table/SLANet_plus.yml) of `SLANet_plus`.

The training commands are as follows:

```bash
# Single card training (default training method)
-python3 tools/train.py -c configs/table/SLANet.yml \
-    -o Global.pretrained_model=./SLANet_pretrained.pdparams
+python3 tools/train.py -c configs/table/SLANet_plus.yml \
+    -o Global.pretrained_model=./SLANet_plus_pretrained.pdparams \
+       Train.dataset.data_dir=./table_rec_dataset_examples \
+       Train.dataset.label_file_list='[./table_rec_dataset_examples/train.txt]' \
+       Eval.dataset.data_dir=./table_rec_dataset_examples \
+       Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'

# Multi-card training, specify card numbers via --gpus parameter
-python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/table/SLANet.yml \
-    -o Global.pretrained_model=./SLANet_pretrained.pdparams
+python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
+    -c configs/table/SLANet_plus.yml \
+    -o Global.pretrained_model=./SLANet_plus_pretrained.pdparams \
+       Train.dataset.data_dir=./table_rec_dataset_examples \
+       Train.dataset.label_file_list='[./table_rec_dataset_examples/train.txt]' \
+       Eval.dataset.data_dir=./table_rec_dataset_examples \
+       Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'
```
328339

329340

@@ -334,21 +345,23 @@ You can evaluate the trained weights, such as `output/xxx/xxx.pdparams`, using t
```bash
# Note to set the path of pretrained_model to the local path. If you use the model saved by your own training, please modify the path and file name to {path/to/weights}/{model_name}.
# Demo test set evaluation
-python3 tools/eval.py -c configs/table/SLANet.yml -o \
-    Global.pretrained_model=output/xxx/xxx.pdparams
+python3 tools/eval.py -c configs/table/SLANet_plus.yml -o \
+    Global.pretrained_model=output/xxx/xxx.pdparams \
+    Eval.dataset.data_dir=./table_rec_dataset_examples \
+    Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'
```
### 4.4 Model Export

```bash
-python3 tools/export_model.py -c configs/table/SLANet.yml -o \
-    Global.pretrained_model=output/xxx/xxx.pdparams \
-    save_inference_dir="./SLANet_infer/"
+python3 tools/export_model.py -c configs/table/SLANet_plus.yml -o \
+    Global.pretrained_model=output/xxx/xxx.pdparams \
+    Global.save_inference_dir="./SLANet_plus_infer/"
```

-After exporting the model, the static graph model will be stored in `./SLANet_infer/` in the current directory. In this directory, you will see the following files:
+After exporting the model, the static graph model will be stored in `./SLANet_plus_infer/` in the current directory. In this directory, you will see the following files:
```
-./SLANet_infer/
+./SLANet_plus_infer/
├── inference.json
├── inference.pdiparams
├── inference.yml
