Commit b4a85c1: add table_cells_detection.en.md (#15186)
---
comments: true
---

# Table Cell Detection Module Usage Tutorial

## I. Overview

The Table Cell Detection Module is a key component of the table recognition task, responsible for locating and marking each cell region in table images. The performance of this module directly affects the accuracy and efficiency of the entire table recognition process. The module typically outputs bounding boxes for the cell regions, which are then passed as input to the table recognition pipeline for further processing.

## II. Supported Model List
<table>
<tr>
<th>Model</th><th>Model Download Link</th>
<th>mAP (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Storage Size (M)</th>
<th>Description</th>
</tr>
<tr>
<td>RT-DETR-L_wired_table_cell_det</td>
<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/RT-DETR-L_wired_table_cell_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_wired_table_cell_det_pretrained.pdparams">Training Model</a></td>
<td rowspan="2">82.7</td>
<td rowspan="2">35.00 / 10.45</td>
<td rowspan="2">495.51 / 495.51</td>
<td rowspan="2">124M</td>
<td rowspan="2">RT-DETR is the first real-time end-to-end object detection model. Using RT-DETR-L as the base model, the Baidu PaddlePaddle Vision team pre-trained it on a self-built table cell detection dataset, achieving good performance in detecting both wired and wireless table cells.</td>
</tr>
<tr>
<td>RT-DETR-L_wireless_table_cell_det</td>
<td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/RT-DETR-L_wireless_table_cell_det_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/RT-DETR-L_wireless_table_cell_det_pretrained.pdparams">Training Model</a></td>
</tr>
</table>

<strong>Test Environment Description:</strong>

<ul>
<li><b>Performance Test Environment</b>
<ul>
<li><strong>Test Dataset:</strong> Internal evaluation set built by PaddleX.</li>
<li><strong>Hardware Configuration:</strong>
<ul>
<li>GPU: NVIDIA Tesla T4</li>
<li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li>
<li>Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
</ul>
</li>
</ul>
</li>
<li><b>Inference Mode Explanation</b></li>
</ul>

<table border="1">
<thead>
<tr>
<th>Mode</th>
<th>GPU Configuration</th>
<th>CPU Configuration</th>
<th>Acceleration Technology Combination</th>
</tr>
</thead>
<tbody>
<tr>
<td>Regular Mode</td>
<td>FP32 Precision / No TRT Acceleration</td>
<td>FP32 Precision / 8 Threads</td>
<td>PaddleInference</td>
</tr>
<tr>
<td>High-Performance Mode</td>
<td>Optimal combination of pre-selected precision type and acceleration strategy</td>
<td>FP32 Precision / 8 Threads</td>
<td>Optimal pre-selected backend (Paddle/OpenVINO/TRT, etc.)</td>
</tr>
</tbody>
</table>

## III. Quick Start

> ❗ Before getting started, please install the PaddleOCR wheel package. For details, refer to the [installation tutorial](../ppocr/installation.md).

You can quickly try out the module with a single command:

```bash
paddleocr table_cells_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg
```
91+
92+
You can also integrate model inference from the table cell detection module into your project. Before running the following code, please download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg) locally.
93+
94+
```python
95+
from paddleocr import TableCellsDetection
96+
model = TableCellsDetection(model_name="RT-DETR-L_wired_table_cell_det")
97+
output = model.predict("table_recognition.jpg", threshold=0.3, batch_size=1)
98+
for res in output:
99+
res.print(json_format=False)
100+
res.save_to_img("./output/")
101+
res.save_to_json("./output/res.json")
102+
```
103+
104+
After running, the result obtained is:
105+
106+
```
107+
{'res': {'input_path': 'table_recognition.jpg', 'page_index': None, 'boxes': [{'cls_id': 0, 'label': 'cell', 'score': 0.9698355197906494, 'coordinate': [2.3011515, 0, 546.29926, 30.530712]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9690820574760437, 'coordinate': [212.37508, 64.62493, 403.58868, 95.61413]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9668057560920715, 'coordinate': [212.46791, 30.311079, 403.7182, 64.62613]}, {'cls_id': 0, 'label': 'cell', 'score': 0.966505229473114, 'coordinate': [403.56082, 64.62544, 546.83215, 95.66117]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9662341475486755, 'coordinate': [109.48873, 64.66485, 212.5177, 95.631294]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9654079079627991, 'coordinate': [212.39197, 95.63037, 403.60852, 126.78792]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9653300642967224, 'coordinate': [2.2320926, 64.62229, 109.600494, 95.59732]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9639787673950195, 'coordinate': [403.5752, 30.562355, 546.98975, 64.61531]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9636150002479553, 'coordinate': [2.1537683, 30.410172, 109.568306, 64.62762]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9631900191307068, 'coordinate': [2.0534437, 95.57448, 109.57601, 126.71458]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9631181359291077, 'coordinate': [403.65976, 95.68139, 546.84766, 126.713394]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9614537358283997, 'coordinate': [109.56504, 30.391184, 212.65425, 64.6444]}, {'cls_id': 0, 'label': 'cell', 'score': 0.9607433080673218, 'coordinate': [109.525795, 95.62622, 212.44917, 126.8258]}]}}
108+
```
109+
110+
The parameter meanings are as follows:
111+
112+
- `input_path`: Path of the input image to be predicted
113+
- `page_index`: If the input is a PDF file, it indicates which page of the PDF it is; otherwise, it is `None`
114+
- `boxes`: Predicted bounding box information, a list of dictionaries. Each dictionary represents a detected object and contains the following information:
115+
- `cls_id`: Class ID, an integer
116+
- `label`: Class label, a string
117+
- `score`: Confidence of the bounding box, a float
118+
- `coordinate`: Coordinates of the bounding box, a list of floats in the format <code>[xmin, ymin, xmax, ymax]</code>
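Because each entry in `boxes` is a plain dictionary, the output is easy to post-process directly. The following is a minimal, self-contained sketch, not part of the PaddleOCR API, that groups detected cells into table rows by their vertical position; the sample boxes are abbreviated from the output above:

```python
# Sample boxes, abbreviated from the prediction output shown above.
boxes = [
    {"label": "cell", "score": 0.97, "coordinate": [2.3, 0.0, 546.3, 30.5]},
    {"label": "cell", "score": 0.97, "coordinate": [212.4, 64.6, 403.6, 95.6]},
    {"label": "cell", "score": 0.97, "coordinate": [212.5, 30.3, 403.7, 64.6]},
    {"label": "cell", "score": 0.97, "coordinate": [109.5, 64.7, 212.5, 95.6]},
]

def group_rows(boxes, tol=10.0):
    """Cluster boxes whose ymin values differ by less than `tol` into rows."""
    rows = []
    for box in sorted(boxes, key=lambda b: b["coordinate"][1]):
        ymin = box["coordinate"][1]
        # Compare against the first box of the most recent row.
        if rows and abs(rows[-1][0]["coordinate"][1] - ymin) < tol:
            rows[-1].append(box)
        else:
            rows.append([box])
    # Sort cells within each row from left to right by xmin.
    for row in rows:
        row.sort(key=lambda b: b["coordinate"][0])
    return rows

rows = group_rows(boxes)
print([len(row) for row in rows])  # → [1, 1, 2]
```

The same grouping can be applied to the full `boxes` list from a real prediction, for example as a first step toward reconstructing the table layout.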
The visualized image is as follows:

<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/table_cells_detection/01.jpg">

The relevant methods and parameters are described as follows:

* `TableCellsDetection` instantiates the table cell detection model (using `RT-DETR-L_wired_table_cell_det` as an example here), with the following parameters:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td><code>model_name</code></td>
<td>Model Name</td>
<td><code>str</code></td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td><code>model_dir</code></td>
<td>Model Storage Path</td>
<td><code>str</code></td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td><code>device</code></td>
<td>Model Inference Device</td>
<td><code>str</code></td>
<td>Supports specifying a GPU card number, such as <code>gpu:0</code>, a card number for other hardware, such as <code>npu:0</code>, or the CPU as <code>cpu</code>.</td>
<td><code>gpu:0</code></td>
</tr>
<tr>
<td><code>use_hpip</code></td>
<td>Whether to enable the high-performance inference plugin</td>
<td><code>bool</code></td>
<td>None</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>hpi_config</code></td>
<td>High-Performance Inference Configuration</td>
<td><code>dict</code> | <code>None</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>img_size</code></td>
<td>Input image size; if not specified, the PaddleX official model configuration is used by default</td>
<td><code>int/list</code></td>
<td>
<ul>
<li><b>int</b>, e.g., 640, resizes the input image to 640x640</li>
<li><b>list</b>, e.g., [640, 512], resizes the input image to a width of 640 and a height of 512</li>
</ul>
</td>
<td>None</td>
</tr>
<tr>
<td><code>threshold</code></td>
<td>Threshold for filtering out low-confidence prediction results; if not specified, the PaddleX official model configuration is used by default. In table cell detection tasks, appropriately lowering the threshold may help achieve more accurate results</td>
<td><code>float/dict</code></td>
<td>
<ul>
<li><b>float</b>, e.g., 0.2, filters out all bounding boxes with confidence lower than 0.2</li>
<li><b>dict</b>, where each key is an <b>int</b> representing a <code>cls_id</code> and each value is a <b>float</b> threshold. For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> applies a threshold of 0.45 to class <code>cls_id</code> 0, 0.48 to class <code>cls_id</code> 2, and 0.4 to class <code>cls_id</code> 7</li>
</ul>
</td>
<td>None</td>
</tr>
</table>
* Among them, `model_name` must be specified. After `model_name` is specified, the model parameters built into PaddleX are used by default; if `model_dir` is also specified, the user-defined model is used instead.

* Call the `predict()` method of the table cell detection model for inference. This method returns a list of results. The module also provides a `predict_iter()` method, which accepts the same parameters and returns the same results, but returns a `generator` that processes and yields predictions step by step, making it suitable for large datasets or memory-conscious scenarios. Choose whichever method fits your needs. The `predict()` method takes the parameters `input`, `batch_size`, and `threshold`, explained as follows:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td><code>input</code></td>
<td>Data to be predicted; supports multiple input types</td>
<td><code>Python Var</code>/<code>str</code>/<code>list</code></td>
<td>
<ul>
<li><b>Python variable</b>, such as <code>numpy.ndarray</code> image data</li>
<li><b>File path</b>, such as the local path of an image file: <code>/root/data/img.jpg</code></li>
<li><b>URL link</b>, such as the network URL of an image file: <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg">Example</a></li>
<li><b>Local directory</b>, containing the data files to be predicted, such as <code>/root/data/</code></li>
<li><b>List</b>, whose elements are of the above types, such as <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li>
</ul>
</td>
<td>None</td>
</tr>
<tr>
<td><code>batch_size</code></td>
<td>Batch Size</td>
<td><code>int</code></td>
<td>Any positive integer</td>
<td>1</td>
</tr>
<tr>
<td><code>threshold</code></td>
<td>Threshold for filtering out low-confidence prediction results; if not specified, the <code>threshold</code> set when the model was instantiated is used, and if that was not set either, the PaddleX official model configuration is used</td>
<td><code>float/dict</code></td>
<td>
<ul>
<li><b>float</b>, e.g., 0.2, filters out all bounding boxes with confidence lower than 0.2</li>
<li><b>dict</b>, where each key is an <b>int</b> representing a <code>cls_id</code> and each value is a <b>float</b> threshold. For example, <code>{0: 0.45, 2: 0.48, 7: 0.4}</code> applies a threshold of 0.45 to class <code>cls_id</code> 0, 0.48 to class <code>cls_id</code> 2, and 0.4 to class <code>cls_id</code> 7</li>
</ul>
</td>
<td>None</td>
</tr>
</table>
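To make the float-or-dict semantics of `threshold` concrete, here is an illustrative, self-contained sketch of the filtering rule described above. It mirrors the documented behavior rather than PaddleOCR's internal implementation, and how classes absent from the dict are handled is an assumption:

```python
def filter_boxes(boxes, threshold):
    """Keep boxes whose score meets the threshold.

    `threshold` is either a single float applied to every class, or a dict
    mapping cls_id -> per-class threshold. Classes absent from the dict are
    kept unfiltered here (an assumption; the library may behave differently).
    """
    kept = []
    for box in boxes:
        if isinstance(threshold, dict):
            limit = threshold.get(box["cls_id"])
        else:
            limit = threshold
        if limit is None or box["score"] >= limit:
            kept.append(box)
    return kept

boxes = [
    {"cls_id": 0, "score": 0.50},
    {"cls_id": 0, "score": 0.40},
    {"cls_id": 2, "score": 0.47},
]

print(len(filter_boxes(boxes, 0.45)))                # → 2 (global threshold)
print(len(filter_boxes(boxes, {0: 0.45, 2: 0.48})))  # → 1 (per-class thresholds)
```

Note how the per-class dict drops the `cls_id` 2 box that the global threshold of 0.45 would have kept.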
* Process the prediction results. The prediction result for each sample is a corresponding Result object, which supports printing, saving as an image, and saving as a `json` file:

<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Parameter Description</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td rowspan="3"><code>print()</code></td>
<td rowspan="3">Print the result to the terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format the output content with <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are kept; effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td rowspan="3"><code>save_to_json()</code></td>
<td rowspan="3">Save the result as a file in <code>json</code> format</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>The path to save the file. When a directory is specified, the saved file is named consistently with the input file.</td>
<td>None</td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are kept; effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>save_to_img()</code></td>
<td>Save the result as a file in image format</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>The path to save the file. When a directory is specified, the saved file is named consistently with the input file.</td>
<td>None</td>
</tr>
</table>

* In addition, the prediction results and the visualized image can also be obtained through the following attributes:

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><code>json</code></td>
<td>Get the prediction result in <code>json</code> format</td>
</tr>
<tr>
<td><code>img</code></td>
<td>Get the visualized image</td>
</tr>
</table>
## IV. Secondary Development

Since PaddleOCR does not directly provide training for the table cell detection module, if you need to train a table cell detection model, refer to the [PaddleX Table Cell Detection Module Secondary Development](https://paddlepaddle.github.io/PaddleX/latest/module_usage/tutorials/ocr_modules/table_cells_detection.html#_4) section. The trained model can be seamlessly integrated into the PaddleOCR API for inference.

## V. FAQ
