---
comments: true
---

# Table Classification Module Usage Tutorial

## 1. Overview

The Table Classification Module is a key component of a table recognition system. It takes a table image as input and, using a deep learning classifier, assigns it to one of a set of predefined categories based on the image's characteristics and content, such as wired and wireless tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition process, and its classification results are passed on to downstream table recognition pipelines.

## 2. Supported Model List

<table>
<tr>
<th>Model</th><th>Model Download Links</th>
<th>Top-1 Acc (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Storage Size (MB)</th>
</tr>
<tr>
<td>PP-LCNet_x1_0_table_cls</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-LCNet_x1_0_table_cls_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-LCNet_x1_0_table_cls_pretrained.pdparams">Training Model</a></td>
<td>94.2</td>
<td>2.35 / 0.47</td>
<td>4.03 / 1.35</td>
<td>6.6</td>
</tr>
</table>

<strong>Test Environment Description:</strong>

<ul>
    <li><strong>Performance Test Environment</strong>
        <ul>
            <li><strong>Test Dataset:</strong> Internal evaluation dataset built by PaddleX.</li>
            <li><strong>Hardware Configuration:</strong>
                <ul>
                    <li>GPU: NVIDIA Tesla T4</li>
                    <li>CPU: Intel Xeon Gold 6271C @ 2.60 GHz</li>
                    <li>Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
                </ul>
            </li>
        </ul>
    </li>
    <li><strong>Inference Mode Explanation</strong></li>
</ul>

<table border="1">
    <thead>
        <tr>
            <th>Mode</th>
            <th>GPU Configuration</th>
            <th>CPU Configuration</th>
            <th>Acceleration Technology Combination</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Regular Mode</td>
            <td>FP32 Precision / No TRT Acceleration</td>
            <td>FP32 Precision / 8 Threads</td>
            <td>Paddle Inference</td>
        </tr>
        <tr>
            <td>High-Performance Mode</td>
            <td>Optimal combination of precision type and acceleration strategy selected in advance</td>
            <td>FP32 Precision / 8 Threads</td>
            <td>Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.)</td>
        </tr>
    </tbody>
</table>

## 3. Quick Start

> ❗ Before you start, please install the PaddleOCR wheel package. For details, refer to the [installation tutorial](../ppocr/installation.md).

You can quickly experience the module with a single command:

```bash
paddleocr table_classification -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg
```

You can also integrate the table classification module's model inference into your own project. Before running the following code, please download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg) to your local machine.

```python
from paddleocr import TableClassification

# Instantiate the table classification model and run inference on the sample image
model = TableClassification(model_name="PP-LCNet_x1_0_table_cls")
output = model.predict("table_recognition.jpg", batch_size=1)

# Print each prediction and save it as a JSON file
for res in output:
    res.print(json_format=False)
    res.save_to_json("./output/res.json")
```

After running, the result obtained is:

```
{'res': {'input_path': 'table_recognition.jpg', 'page_index': None, 'class_ids': array([0, 1], dtype=int32), 'scores': array([0.84421, 0.15579], dtype=float32), 'label_names': ['wired_table', 'wireless_table']}}
```

The meanings of the result fields are as follows:

- `input_path`: Path of the input image
- `page_index`: If the input is a PDF file, this indicates which page of the PDF it is; otherwise, it is `None`
- `class_ids`: Class IDs of the prediction results
- `scores`: Confidence scores of the prediction results
- `label_names`: Class names of the prediction results

The visualized image is as follows:

<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/table_classification/01.jpg">

The relevant methods and parameters are described as follows:

* `TableClassification` instantiates the table classification model (using `PP-LCNet_x1_0_table_cls` as an example). The parameters are described below:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td><code>model_name</code></td>
<td>Model name</td>
<td><code>str</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>model_dir</code></td>
<td>Model storage path</td>
<td><code>str</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>device</code></td>
<td>Device used for model inference</td>
<td><code>str</code></td>
<td>Supports specifying a particular GPU card, such as <code>gpu:0</code>, a card of other hardware, such as <code>npu:0</code>, or <code>cpu</code> for CPU inference.</td>
<td><code>gpu:0</code></td>
</tr>
<tr>
<td><code>use_hpip</code></td>
<td>Whether to enable the high-performance inference plugin</td>
<td><code>bool</code></td>
<td>None</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>hpi_config</code></td>
<td>High-performance inference configuration</td>
<td><code>dict</code> | <code>None</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
</table>

* `model_name` must be specified. After `model_name` is specified, the default model parameters built into PaddleX are used; when `model_dir` is specified, the user-defined model is used instead.

* Call the `predict()` method of the table classification model to run inference; it returns a list of results. The module also provides a `predict_iter()` method, which accepts the same parameters and returns the same results, except that `predict_iter()` returns a `generator` that yields predictions one at a time, which is useful for large datasets or when memory needs to be conserved (see the sketch after the parameter table below). Choose whichever method suits your needs. The `predict()` method accepts the parameters `input` and `batch_size`, described below:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td><code>input</code></td>
<td>Data to be predicted; multiple input types are supported</td>
<td><code>Python Var</code>/<code>str</code>/<code>list</code></td>
<td>
<ul>
  <li><b>Python variable</b>, such as a <code>numpy.ndarray</code> holding image data</li>
  <li><b>File path</b>, such as the local path of an image file: <code>/root/data/img.jpg</code></li>
  <li><b>URL</b>, such as the network URL of an image file: <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg">example</a></li>
  <li><b>Local directory</b> containing the data files to be predicted, such as <code>/root/data/</code></li>
  <li><b>List</b> whose elements are any of the above types, such as <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li>
</ul>
</td>
<td>None</td>
</tr>
<tr>
<td><code>batch_size</code></td>
<td>Batch size</td>
<td><code>int</code></td>
<td>Any positive integer</td>
<td>1</td>
</tr>
</table>

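The following is a minimal sketch of streaming results with `predict_iter()`; the directory path is a placeholder, and `model` is the instance created earlier:

```python
# Minimal sketch (placeholder directory): stream predictions one by one instead of
# materializing the whole result list, which keeps memory usage low on large datasets.
for res in model.predict_iter("/root/data/", batch_size=4):
    res.print()
```
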
* Process the prediction results. The prediction result for each sample is a corresponding Result object, which supports operations such as printing and saving as a `json` file:

<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Parameter Description</th>
<th>Default Value</th>
</tr>
</thead>
<tr>
<td rowspan="3"><code>print()</code></td>
<td rowspan="3">Print the result to the terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format the output content with <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are retained; effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td rowspan="3"><code>save_to_json()</code></td>
<td rowspan="3">Save the result as a JSON-format file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Path to save the file. When a directory is given, the saved file name is consistent with that of the input file.</td>
<td>None</td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are retained; effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
</table>

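As a small illustration of the parameters documented above (with `res` being a result object from the earlier loop and the output directory a placeholder):

```python
# Minimal sketch (placeholder directory): when a directory is passed, the JSON file
# name follows the input image name; indent and ensure_ascii match the table above.
res.save_to_json(save_path="./output/", indent=4, ensure_ascii=False)
```
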
* In addition, the result provides attributes for obtaining the visualized image and the prediction results, as follows:

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><code>json</code></td>
<td>Get the prediction result in <code>json</code> format</td>
</tr>
<tr>
<td><code>img</code></td>
<td>Get the visualized image</td>
</tr>
</table>

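For example, the `json` attribute can be used to read the top-1 prediction programmatically. This is a minimal sketch that assumes the attribute mirrors the `{'res': {...}}` dictionary structure shown in the output above:

```python
# Minimal sketch: read the top-1 table type from the prediction, assuming res.json
# mirrors the {'res': {...}} structure printed earlier.
pred = res.json["res"]
best_label, best_score = max(zip(pred["label_names"], pred["scores"]), key=lambda x: x[1])
print(f"Table type: {best_label} (score: {float(best_score):.4f})")
```
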
For more information on using PaddleX's single-model inference API, refer to the [PaddleX Single Model Python Script Usage Instructions](../../instructions/model_python_API.md).

## 4. Secondary Development

PaddleOCR does not directly provide training for the table classification module. If you need to train a table classification model, refer to the [PaddleX Table Classification Module Secondary Development](https://paddlepaddle.github.io/PaddleX/latest/module_usage/tutorials/ocr_modules/table_classification.html#_4) documentation. The trained model can be seamlessly integrated into the PaddleOCR API for inference.

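For instance, once training in PaddleX has produced an inference model, it can be loaded through the `model_dir` parameter described earlier; the directory path below is a placeholder for your own exported model:

```python
from paddleocr import TableClassification

# Minimal sketch (placeholder path): run inference with a fine-tuned model exported
# from PaddleX instead of the official pretrained weights.
model = TableClassification(
    model_name="PP-LCNet_x1_0_table_cls",
    model_dir="./finetuned_table_cls_inference",  # hypothetical exported model directory
)
for res in model.predict("table_recognition.jpg", batch_size=1):
    res.print()
```
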
## 5. FAQ