README_en.md (1 addition, 1 deletion)
@@ -45,7 +45,7 @@ In addition to providing an outstanding model library, PaddleOCR 3.0 also offers
## 📣 Recent updates
🔥🔥2025.05.20: Official Release of **PaddleOCR v3.0**, including:
- **PP-OCRv5**: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.
-1. 🌐 Single-model support for **five** text types - Seamlessly process **Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English** and **Japanse** within a single model.
+1. 🌐 Single-model support for **five** text types - Seamlessly process **Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English** and **Japanese** within a single model.
2. ✍️ Improved **handwriting recognition**: Significantly better at complex cursive scripts and non-standard handwriting.
3. 🎯 **13-point accuracy gain** over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.
A module is the smallest unit that implements basic functionality. Modules typically use a single model to accomplish specific tasks, such as text detection, image classification, and other basic functions. As fundamental building blocks, modules provide the necessary functional support for more complex application scenarios. This design approach allows users to flexibly select and combine different modules according to their needs, thereby simplifying the development process and enhancing development flexibility and efficiency.
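To make this modular design concrete, the sketch below shows one module used standalone through the PaddleOCR 3.x Python API; every module follows the same create-by-name, call-`predict()` pattern, so modules can be swapped or combined freely. The class name, model name, and result methods here are assumptions based on the module-usage docs touched by this diff, not an authoritative reproduction of the library's interface.

```python
# Minimal sketch: a single module (text detection) used on its own.
# Class name, model name, and result methods are assumptions; verify
# against the docs of your installed PaddleOCR 3.x version.
from paddleocr import TextDetection

# A module wraps a single model, selected by name.
model = TextDetection(model_name="PP-OCRv5_server_det")

# Other modules expose the same predict() interface, which is what makes
# them easy to select and combine for more complex pipelines.
for res in model.predict("sample.png", batch_size=1):
    res.print()                              # inspect the raw prediction
    res.save_to_img(save_path="./output/")   # visualized result
    res.save_to_json(save_path="./output/")  # structured result
```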
docs/version3.x/module_usage/table_cells_detection.en.md (3 additions, 3 deletions)
@@ -81,7 +81,7 @@ The Table Cell Detection Module is a key component of the table recognition task
## III. Quick Start
-> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.md).
+> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.en.md).
You can quickly experience it with one command:
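The one-line command itself is not shown in this hunk. As a rough Python-API equivalent, a quick start for the table cell detection module might look like the sketch below; the class name, model name, and result methods are assumptions drawn from the surrounding module docs rather than a verified reproduction.

```python
# Hedged quick-start sketch for table cell detection
# (model name "RT-DETR-L_wired_table_cell_det" is an assumption).
from paddleocr import TableCellsDetection

model = TableCellsDetection(model_name="RT-DETR-L_wired_table_cell_det")
output = model.predict("table_image.jpg", batch_size=1)

for res in output:
    res.print()                              # detected cell boxes and scores
    res.save_to_img(save_path="./output/")   # image with boxes drawn
    res.save_to_json(save_path="./output/")  # machine-readable result
```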
@@ -187,8 +187,8 @@ The relevant methods, parameters, etc., are described as follows:
<td><code>float/dict</code></td>
<td>
<ul>
-<li><b>float</b>, e.g., 0.2, indicates filtering out all bounding boxes with confidence lower than 0.2</li>
-<li><b>dictionary</b>, where the key is of type <b>int</b> representing <code>cls_id</code>, and the value is of type <b>float</b> representing the threshold. For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> applies a threshold of 0.45 for category cls_id 0, 0.48 for category cls_id 1, and 0.4 for category cls_id 7</li>
+<li><b>float</b>, e.g., 0.3, indicates filtering out all bounding boxes with confidence lower than 0.3</li>
+<li><b>dictionary</b>, where the key is of type <b>int</b> representing <code>cls_id</code>, and the value is of type <b>float</b> representing the threshold. For example, <code>{0: 0.3}</code> applies a threshold of 0.3 for category cls_id 0</li>
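To make the two accepted forms of `threshold` concrete, here is a small sketch of how they might be passed to the cell detection module; the class and model names mirror the documentation above but are assumptions, the values are illustrative only, and the fallback behavior for unlisted classes is likewise assumed.

```python
from paddleocr import TableCellsDetection

# Model name is an assumption taken from the module's model list.
model = TableCellsDetection(model_name="RT-DETR-L_wired_table_cell_det")

# Form 1: a single float applies the same confidence cut-off to every class.
results_float = model.predict("table.jpg", threshold=0.3)

# Form 2: a dict maps cls_id -> threshold for per-class cut-offs.
# (Assumption: classes not listed in the dict keep the default threshold.)
results_dict = model.predict("table.jpg", threshold={0: 0.3})
```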
docs/version3.x/module_usage/table_classification.en.md (1 addition, 1 deletion)
@@ -74,7 +74,7 @@ The Table Classification Module is a key component in computer vision systems, r
## 3. Quick Start
-> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.md).
+> ❗ Before starting quickly, please first install the PaddleOCR wheel package. For details, please refer to the [installation tutorial](../installation.en.md).
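After installing the wheel package, a quick-start call for the table classification module might look like the sketch below; the class name `TableClassification` and the model name are assumptions based on the PaddleOCR 3.x module docs, so check them against your installed version.

```python
# Hedged sketch: table classification quick start
# (class and model names are assumptions).
from paddleocr import TableClassification

model = TableClassification(model_name="PP-LCNet_x1_0_table_cls")

# The module classifies a cropped table image (e.g., wired vs. wireless).
for res in model.predict("cropped_table.jpg", batch_size=1):
    res.print()                              # predicted label and score
    res.save_to_json(save_path="./output/")  # structured result
```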
docs/version3.x/module_usage/table_structure_recognition.en.md (32 additions, 19 deletions)
@@ -6,7 +6,7 @@ comments: true
## 1. Overview
-Table structure recognition is an important component of table recognition systems, capable of converting non-editable table images into editable table formats (such as HTML). The goal of table structure recognition is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The table structure recognition module usually outputs HTML or Latex code for the table area, which is then passed as input to the table content recognition module for further processing.
+Table structure recognition is an important component of table recognition systems, capable of converting non-editable table images into editable table formats (such as HTML). The goal of table structure recognition is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The table structure recognition module usually outputs HTML code for the table area, which is then passed as input to the table recognition pipeline for further processing.
## 2. Supported Model List
@@ -56,7 +56,7 @@ Table structure recognition is an important component of table recognition syste
<ul>
<li><b>Performance Test Environment</b>
<ul>
-<li><strong>Test Dataset:</strong> High-difficulty Chinese table recognition dataset built internally by PaddleX.</li>
+<li><strong>Test Dataset:</strong> High-difficulty Chinese table recognition dataset.</li>
<li><strong>Hardware Configuration:</strong>
<ul>
<li>GPU: NVIDIA Tesla T4</li>
@@ -147,7 +147,7 @@ Descriptions of related methods and parameters are as follows:
<td><code>model_name</code></td>
<td>Model name</td>
<td><code>str</code></td>
-<td>All model names supported by PaddleX</td>
+<td>All model names</td>
<td>None</td>
</tr>
<tr>
@@ -180,7 +180,7 @@ Descriptions of related methods and parameters are as follows:
</tr>
</table>
-* Among them, `model_name` must be specified. After specifying `model_name`, the built-in model parameters of PaddleX are used by default. On this basis, if`model_dir` is specified, the user's custom model is used.
+* Among them, `model_name` must be specified. If `model_dir` is specified, the user's custom model is used.
* Call the `predict()` method of the table structure recognition model for inference; it returns a list of results. This module also provides a `predict_iter()` method, which accepts the same parameters and returns the same results, except that `predict_iter()` returns a `generator`, so predictions can be processed and retrieved one at a time, which is useful for large datasets or for saving memory. You can choose either method according to your actual needs. The `predict()` method has parameters `input` and `batch_size`, described as follows:
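A short sketch of the two inference entry points described above, using the table structure recognition module. The `predict()`/`predict_iter()` distinction follows the text itself; the class name, model name, and result methods are assumptions based on the module docs in this diff.

```python
from paddleocr import TableStructureRecognition

# Model name "SLANet_plus" is an assumption taken from this module's model list.
model = TableStructureRecognition(model_name="SLANet_plus")

# predict() materializes the whole result list at once.
results = model.predict(input="table.jpg", batch_size=1)
for res in results:
    res.print()

# predict_iter() accepts the same arguments but returns a generator,
# so large inputs can be consumed one prediction at a time.
for res in model.predict_iter(input=["table_1.jpg", "table_2.jpg"], batch_size=1):
    res.save_to_json(save_path="./output/")
```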
@@ -290,7 +290,7 @@ Descriptions of related methods and parameters are as follows:
## 4. Secondary Development
-If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training `SLANet` is used as an example, and for other models, just replace the corresponding configuration file. First, you need to prepare a dataset for table structure recognition, which can be prepared with reference to the format of the [table structure recognition demo data](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar). Once ready, you can train and export the model as follows. After exporting, you can quickly integrate the model into the above API. Here, the table structure recognition demo data is used as an example. Before training the model, please make sure you have installed the dependencies required by PaddleOCR according to the [installation documentation](../installation.en.md).
+If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training `SLANet_plus` is used as an example, and for other models, just replace the corresponding configuration file. First, you need to prepare a dataset for table structure recognition, which can be prepared with reference to the format of the [table structure recognition demo data](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar). Once ready, you can train and export the model as follows. After exporting, you can quickly integrate the model into the above API. Here, the table structure recognition demo data is used as an example. Before training the model, please make sure you have installed the dependencies required by PaddleOCR according to the [installation documentation](../installation.en.md).
## 4.1 Dataset and Pretrained Model Preparation
@@ -306,24 +306,35 @@ tar -xf table_rec_dataset_examples.tar
-PaddleOCR is modularized. When training the `SLANet` recognition model, you need to use the [configuration file](https://github.com/PaddlePaddle/PaddleOCR/blob/main/configs/table/SLANet.yml) of `SLANet`.
+PaddleOCR is modularized. When training the `SLANet_plus` recognition model, you need to use the [configuration file](https://github.com/PaddlePaddle/PaddleOCR/blob/main/configs/table/SLANet_plus.yml) of `SLANet_plus`.
@@ -334,21 +345,23 @@ You can evaluate the trained weights, such as `output/xxx/xxx.pdparams`, using t
```bash
# Note to set the path of pretrained_model to the local path. If you use the model saved by your own training, please modify the path and file name to {path/to/weights}/{model_name}.
```
-After exporting the model, the static graph model will be stored in `./SLANet_infer/` in the current directory. In this directory, you will see the following files:
+After exporting the model, the static graph model will be stored in `./SLANet_plus_infer/` in the current directory. In this directory, you will see the following files:
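Once the static graph model has been exported to `./SLANet_plus_infer/`, plugging it back into the API described above might look like the sketch below. The `model_dir` parameter is the one documented earlier in this diff; the class name, the exact loading behavior, and the result methods are assumptions.

```python
from paddleocr import TableStructureRecognition

# Point model_dir at the exported static graph weights; model_name still
# selects the model configuration (see the parameter table above).
model = TableStructureRecognition(
    model_name="SLANet_plus",
    model_dir="./SLANet_plus_infer",
)

for res in model.predict("table.jpg", batch_size=1):
    res.print()
    res.save_to_json(save_path="./output/")
```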