Commit 1788b86: add ocr docs (#15173)

* add ocr docs
* fix sth

1 parent 97b454b

File tree: 5 files changed, +1171 -14 lines

Lines changed: 255 additions & 0 deletions
---
comments: true
---

# Tutorial on Using the Document Visual Language Model Module

## I. Overview

Document visual language models are a cutting-edge multimodal technology that addresses the limitations of traditional document processing. Traditional methods are often restricted to documents in specific formats or predefined categories, whereas document visual language models fuse visual and linguistic information to understand and handle diverse document content. By combining computer vision and natural language processing, these models can recognize images, text, and the relationships between them, and even understand the semantics of complex layout structures. This makes document processing more intelligent and flexible, with stronger generalization, and it shows broad application prospects in automated office work, information extraction, and other fields.

## II. Supported Model List

<table>
<tr>
<th>Model</th><th>Model Download Link</th>
<th>Model Storage Size (GB)</th>
<th>Total Score</th>
<th>Description</th>
</tr>
<tr>
<td>PP-DocBee-2B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee-2B_infer.tar">Inference Model</a></td>
<td>4.2</td>
<td>765</td>
<td rowspan="2">PP-DocBee is a multimodal large model developed by the PaddlePaddle team that focuses on document understanding and performs excellently on Chinese document understanding tasks. It is fine-tuned on nearly 5 million multimodal document-understanding samples, covering general VQA, OCR, charts, text-rich documents, mathematics and complex reasoning, synthetic data, and pure text, with different training-data ratios. On several authoritative English document-understanding benchmarks in academia, PP-DocBee has essentially reached SOTA among models of the same parameter scale. On internal Chinese business-scenario metrics, PP-DocBee also outperforms popular open-source and closed-source models.</td>
</tr>
<tr>
<td>PP-DocBee-7B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee-7B_infer.tar">Inference Model</a></td>
<td>15.8</td>
<td>-</td>
</tr>
<tr>
<td>PP-DocBee2-3B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee2-3B_infer.tar">Inference Model</a></td>
<td>7.6</td>
<td>852</td>
<td>PP-DocBee2 is a multimodal large model developed by the PaddlePaddle team that further optimizes the base model on top of PP-DocBee and introduces a new data-optimization scheme to improve data quality. Using only 470,000 samples generated by a self-developed data-synthesis strategy, PP-DocBee2 performs even better on Chinese document understanding tasks. On internal Chinese business-scenario metrics, it improves on PP-DocBee by about 11.4% and also outperforms popular open-source and closed-source models of the same scale.</td>
</tr>
</table>

<b>Note: The total scores above are test results on an internal evaluation set. All images in the set have a resolution (height, width) of (1680, 1204); it contains 1196 entries covering scenarios such as financial reports, laws and regulations, scientific papers, manuals, humanities papers, contracts, and research reports. There are no plans to release it publicly.</b>

## III. Quick Start

> ❗ Before getting started, please install the PaddleOCR wheel package. For details, refer to the [Installation Guide](../ppocr/installation.md).

You can quickly experience it with one line of command:

```bash
paddleocr doc_vlm -i "{'image': 'https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png', 'query': '识别这份表格的内容, 以markdown格式输出'}"
```

You can also integrate the model inference of the document visual language model module into your own project. Before running the following code, please download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png) locally.

```python
from paddleocr import DocVLM

model = DocVLM(model_name="PP-DocBee2-3B")
results = model.predict(
    input={"image": "medal_table.png", "query": "识别这份表格的内容, 以markdown格式输出"},
    batch_size=1
)
for res in results:
    res.print()
    res.save_to_json("./output/res.json")
```

After running, the result is:

```bash
{'res': {'image': 'medal_table.png', 'query': '识别这份表格的内容, 以markdown格式输出', 'result': '| 名次 | 国家/地区 | 金牌 | 银牌 | 铜牌 | 奖牌总数 |\n| --- | --- | --- | --- | --- | --- |\n| 1 | 中国(CHN) | 48 | 22 | 30 | 100 |\n| 2 | 美国(USA) | 36 | 39 | 37 | 112 |\n| 3 | 俄罗斯(RUS) | 24 | 13 | 23 | 60 |\n| 4 | 英国(GBR) | 19 | 13 | 19 | 51 |\n| 5 | 德国(GER) | 16 | 11 | 14 | 41 |\n| 6 | 澳大利亚(AUS) | 14 | 15 | 17 | 46 |\n| 7 | 韩国(KOR) | 13 | 11 | 8 | 32 |\n| 8 | 日本(JPN) | 9 | 8 | 8 | 25 |\n| 9 | 意大利(ITA) | 8 | 9 | 10 | 27 |\n| 10 | 法国(FRA) | 7 | 16 | 20 | 43 |\n| 11 | 荷兰(NED) | 7 | 5 | 4 | 16 |\n| 12 | 乌克兰(UKR) | 7 | 4 | 11 | 22 |\n| 13 | 肯尼亚(KEN) | 6 | 4 | 6 | 16 |\n| 14 | 西班牙(ESP) | 5 | 11 | 3 | 19 |\n| 15 | 牙买加(JAM) | 5 | 4 | 2 | 11 |\n'}}
```

The meanings of the result fields are as follows:

- `image`: Path of the input image to be predicted
- `query`: Input text of the prediction request
- `result`: The model's prediction result
The visualization of the prediction result is as follows:

```markdown
| 名次 | 国家/地区 | 金牌 | 银牌 | 铜牌 | 奖牌总数 |
| --- | --- | --- | --- | --- | --- |
| 1 | 中国(CHN) | 48 | 22 | 30 | 100 |
| 2 | 美国(USA) | 36 | 39 | 37 | 112 |
| 3 | 俄罗斯(RUS) | 24 | 13 | 23 | 60 |
| 4 | 英国(GBR) | 19 | 13 | 19 | 51 |
| 5 | 德国(GER) | 16 | 11 | 14 | 41 |
| 6 | 澳大利亚(AUS) | 14 | 15 | 17 | 46 |
| 7 | 韩国(KOR) | 13 | 11 | 8 | 32 |
| 8 | 日本(JPN) | 9 | 8 | 8 | 25 |
| 9 | 意大利(ITA) | 8 | 9 | 10 | 27 |
| 10 | 法国(FRA) | 7 | 16 | 20 | 43 |
| 11 | 荷兰(NED) | 7 | 5 | 4 | 16 |
| 12 | 乌克兰(UKR) | 7 | 4 | 11 | 22 |
| 13 | 肯尼亚(KEN) | 6 | 4 | 6 | 16 |
| 14 | 西班牙(ESP) | 5 | 11 | 3 | 19 |
| 15 | 牙买加(JAM) | 5 | 4 | 2 | 11 |
```

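The `result` field is returned as a single markdown string. If your application needs structured data, you can split it into rows and cells yourself. A minimal, model-agnostic sketch in plain Python (no PaddleOCR dependency; the helper name is our own, not part of the library):

```python
def parse_markdown_table(md: str) -> list[list[str]]:
    """Parse a simple pipe-delimited markdown table into a list of rows."""
    rows = []
    for line in md.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip any non-table lines the model may emit
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Skip the header separator row (| --- | --- |): every cell is dashes/colons.
        if all(c and set(c) <= {"-", ":", " "} for c in cells):
            continue
        rows.append(cells)
    return rows

# A fragment of the model output shown above
md = (
    "| 名次 | 国家/地区 | 金牌 | 银牌 | 铜牌 | 奖牌总数 |\n"
    "| --- | --- | --- | --- | --- | --- |\n"
    "| 1 | 中国(CHN) | 48 | 22 | 30 | 100 |\n"
    "| 2 | 美国(USA) | 36 | 39 | 37 | 112 |\n"
)
table = parse_markdown_table(md)
print(table[0])  # header row
print(table[1])  # first data row
```

This works for the simple tables PP-DocBee emits here; tables with escaped pipes or multi-line cells would need a real markdown parser.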
Explanations of the related methods and parameters are as follows:

* `DocVLM` instantiates the document visual language model (using `PP-DocBee-2B` as an example), with the following parameters:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td><code>model_name</code></td>
<td>Model name</td>
<td><code>str</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>model_dir</code></td>
<td>Model storage path</td>
<td><code>str</code></td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td><code>device</code></td>
<td>Model inference device</td>
<td><code>str</code></td>
<td>Supports specifying a particular GPU card number, e.g. "gpu:0", a card number for other hardware, e.g. "npu:0", or the CPU, e.g. "cpu".</td>
<td><code>gpu:0</code></td>
</tr>
<tr>
<td><code>use_hpip</code></td>
<td>Whether to enable the high-performance inference plugin. Currently not supported.</td>
<td><code>bool</code></td>
<td>None</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>hpi_config</code></td>
<td>High-performance inference configuration. Currently not supported.</td>
<td><code>dict</code> | <code>None</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
</table>

* `model_name` must be specified. Once `model_name` is specified, the built-in PaddleX model parameters are used by default; if `model_dir` is also specified, the user-defined model is used instead.

* Call the `predict()` method of the document visual language model for inference. It returns a list of results. The module also provides a `predict_iter()` method, which accepts exactly the same parameters and returns the same results, except that it returns a `generator`, processing and yielding predictions incrementally; this suits large datasets or memory-sensitive scenarios. Choose whichever fits your needs. The parameters of `predict()` are `input` and `batch_size`, described below:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td><code>input</code></td>
<td>Data to be predicted</td>
<td><code>dict</code></td>
<td>
<code>Dict</code>; since multimodal models have different input requirements, the exact format depends on the specific model. Specifically:
<li>the PP-DocBee series expects <code>{'image': image_path, 'query': query_text}</code></li>
</td>
<td>None</td>
</tr>
<tr>
<td><code>batch_size</code></td>
<td>Batch size</td>
<td><code>int</code></td>
<td>Integer</td>
<td>1</td>
</tr>
</table>
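The difference between `predict()` and `predict_iter()` is eager versus lazy evaluation. The sketch below illustrates the consumption pattern with a hypothetical stub standing in for `DocVLM`, so it runs without PaddleOCR or model weights; with PaddleOCR installed you would call `DocVLM(...).predict_iter()` instead. Note that the real module takes a single dict per call (see the table above); the list-of-inputs handling here is simplified for illustration only.

```python
from collections.abc import Iterator

class StubDocVLM:
    """Hypothetical stand-in mimicking the predict()/predict_iter() contract."""

    def predict(self, inputs, batch_size=1):
        # Materializes every result in memory at once, like predict().
        return list(self.predict_iter(inputs, batch_size))

    def predict_iter(self, inputs, batch_size=1) -> Iterator[dict]:
        # Yields results one at a time, so only one lives in memory at once.
        for item in inputs:
            yield {"res": {"image": item["image"], "query": item["query"], "result": "..."}}

model = StubDocVLM()
inputs = [{"image": f"page_{i}.png", "query": "识别这份表格的内容"} for i in range(3)]

results_list = model.predict(inputs)       # a fully built list
results_gen = model.predict_iter(inputs)   # a lazy generator

print(type(results_list).__name__)  # list
first = next(results_gen)           # results are produced on demand
print(first["res"]["image"])        # page_0.png
```

For a large batch of documents, the generator form lets you save or discard each result before the next is computed, keeping peak memory roughly constant.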
* Process the prediction results. The prediction result for each sample is a corresponding Result object, which supports printing and saving as a `json` file:

<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Description</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td rowspan="3"><code>print()</code></td>
<td rowspan="3">Print results to the terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format the output content using <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the output <code>JSON</code> data for readability; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether non-<code>ASCII</code> characters are escaped to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; <code>False</code> keeps the original characters. Effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td rowspan="3"><code>save_to_json()</code></td>
<td rowspan="3">Save the result as a JSON file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Path of the file to save. When it is a directory, the saved file is named consistently with the input file type.</td>
<td>None</td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to beautify the output <code>JSON</code> data for readability; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether non-<code>ASCII</code> characters are escaped to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; <code>False</code> keeps the original characters. Effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
</table>
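The `indent` and `ensure_ascii` options described above behave like their counterparts in Python's standard `json` module. A stdlib-only sketch of what `ensure_ascii` changes for Chinese output (the payload here is made up for illustration):

```python
import json

# A made-up payload resembling a prediction result with Chinese text.
payload = {"query": "识别这份表格的内容", "result": "| 名次 | 国家/地区 |"}

escaped = json.dumps(payload, indent=4, ensure_ascii=True)   # non-ASCII -> \uXXXX
raw = json.dumps(payload, indent=4, ensure_ascii=False)      # characters kept as-is

print("识别" in raw)      # True
print("识别" in escaped)  # False
```

Both forms decode back to the same dict; `ensure_ascii=False` simply produces files that are human-readable in a text editor, which is why it is the default here.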
* Additionally, prediction results can be obtained through attributes, as follows:

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td><code>json</code></td>
<td>Get the prediction result in <code>json</code> format</td>
</tr>
</table>

## IV. Secondary Development

This module does not currently support fine-tuning; only inference integration is supported. Support for fine-tuning this module is planned for the future.

## V. FAQ

docs/version3.x/module_usage/doc_vlm.md

Lines changed: 21 additions & 6 deletions

````diff
@@ -15,19 +15,31 @@ comments: true
 <tr>
 <th>Model</th><th>Model Download Link</th>
 <th>Model Storage Size (GB)</th>
+<th>Total Score</th>
 <th>Description</th>
 </tr>
 <tr>
 <td>PP-DocBee-2B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee-2B_infer.tar">Inference Model</a></td>
 <td>4.2</td>
+<td>765</td>
 <td rowspan="2">PP-DocBee is a multimodal large model developed by the PaddlePaddle team that focuses on document understanding and performs excellently on Chinese document understanding tasks. It is fine-tuned on nearly 5 million multimodal document-understanding samples, covering general VQA, OCR, charts, text-rich documents, mathematics and complex reasoning, synthetic data, and pure text, with different training-data ratios. On several authoritative English document-understanding benchmarks in academia, PP-DocBee has essentially reached SOTA among models of the same parameter scale. On internal Chinese business-scenario metrics, PP-DocBee also outperforms popular open-source and closed-source models.</td>
 </tr>
 <tr>
 <td>PP-DocBee-7B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee-7B_infer.tar">Inference Model</a></td>
 <td>15.8</td>
+<td>-</td>
+</tr>
+<tr>
+<td>PP-DocBee2-3B</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-DocBee2-3B_infer.tar">Inference Model</a></td>
+<td>7.6</td>
+<td>852</td>
+<td>PP-DocBee2 is a multimodal large model developed by the PaddlePaddle team that further optimizes the base model on top of PP-DocBee and introduces a new data-optimization scheme to improve data quality. Using only 470,000 samples generated by a self-developed data-synthesis strategy, PP-DocBee2 performs even better on Chinese document understanding tasks. On internal Chinese business-scenario metrics, it improves on PP-DocBee by about 11.4% and also outperforms popular open-source and closed-source models of the same scale.</td>
 </tr>
 </table>
 
+<b>Note: The total scores above are test results on an internal evaluation set. All images in the set have a resolution (height, width) of (1680, 1204); it contains 1196 entries covering scenarios such as financial reports, laws and regulations, scientific papers, manuals, humanities papers, contracts, and research reports. There are no plans to release it publicly.</b>
+
 
 ## III. Quick Start
 
@@ -36,16 +48,16 @@ comments: true
 You can quickly experience it with one line of command:
 
 ```bash
-paddleocr doc_vlm -i "{'image': 'https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png', 'query': '识别这份表格的内容'}"
+paddleocr doc_vlm -i "{'image': 'https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png', 'query': '识别这份表格的内容, 以markdown格式输出'}"
 ```
 
 You can also integrate the model inference of the document visual language model module into your own project. Before running the following code, please download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/medal_table.png) locally.
 
 ```python
 from paddleocr import DocVLM
-model = DocVLM(model_name="PP-DocBee-2B")
+model = DocVLM(model_name="PP-DocBee2-3B")
 results = model.predict(
-    input={"image": "medal_table.png", "query": "识别这份表格的内容"},
+    input={"image": "medal_table.png", "query": "识别这份表格的内容, 以markdown格式输出"},
     batch_size=1
 )
 for res in results:
@@ -56,7 +68,7 @@ for res in results:
 After running, the result is:
 
 ```bash
-{'res': {'image': 'medal_table.png', 'query': '识别这份表格的内容', 'result': '| 名次 | 国家/地区 | 金牌 | 银牌 | 铜牌 | 奖牌总数 |\n| --- | --- | --- | --- | --- | --- |\n| 1 | 中国(CHN) | 48 | 22 | 30 | 100 |\n| 2 | 美国(USA) | 36 | 39 | 37 | 112 |\n| 3 | 俄罗斯(RUS) | 24 | 13 | 23 | 60 |\n| 4 | 英国(GBR) | 19 | 13 | 19 | 51 |\n| 5 | 德国(GER) | 16 | 11 | 14 | 41 |\n| 6 | 澳大利亚(AUS) | 14 | 15 | 17 | 46 |\n| 7 | 韩国(KOR) | 13 | 11 | 8 | 32 |\n| 8 | 日本(JPN) | 9 | 8 | 8 | 25 |\n| 9 | 意大利(ITA) | 8 | 9 | 10 | 27 |\n| 10 | 法国(FRA) | 7 | 16 | 20 | 43 |\n| 11 | 荷兰(NED) | 7 | 5 | 4 | 16 |\n| 12 | 乌克兰(UKR) | 7 | 4 | 11 | 22 |\n| 13 | 肯尼亚(KEN) | 6 | 4 | 6 | 16 |\n| 14 | 西班牙(ESP) | 5 | 11 | 3 | 19 |\n| 15 | 牙买加(JAM) | 5 | 4 | 2 | 11 |\n'}}
+{'res': {'image': 'medal_table.png', 'query': '识别这份表格的内容, 以markdown格式输出', 'result': '| 名次 | 国家/地区 | 金牌 | 银牌 | 铜牌 | 奖牌总数 |\n| --- | --- | --- | --- | --- | --- |\n| 1 | 中国(CHN) | 48 | 22 | 30 | 100 |\n| 2 | 美国(USA) | 36 | 39 | 37 | 112 |\n| 3 | 俄罗斯(RUS) | 24 | 13 | 23 | 60 |\n| 4 | 英国(GBR) | 19 | 13 | 19 | 51 |\n| 5 | 德国(GER) | 16 | 11 | 14 | 41 |\n| 6 | 澳大利亚(AUS) | 14 | 15 | 17 | 46 |\n| 7 | 韩国(KOR) | 13 | 11 | 8 | 32 |\n| 8 | 日本(JPN) | 9 | 8 | 8 | 25 |\n| 9 | 意大利(ITA) | 8 | 9 | 10 | 27 |\n| 10 | 法国(FRA) | 7 | 16 | 20 | 43 |\n| 11 | 荷兰(NED) | 7 | 5 | 4 | 16 |\n| 12 | 乌克兰(UKR) | 7 | 4 | 11 | 22 |\n| 13 | 肯尼亚(KEN) | 6 | 4 | 6 | 16 |\n| 14 | 西班牙(ESP) | 5 | 11 | 3 | 19 |\n| 15 | 牙买加(JAM) | 5 | 4 | 2 | 11 |\n'}}
 ```
 The meanings of the result fields are as follows:
 - `image`: Path of the input image to be predicted
@@ -155,15 +167,16 @@ for res in results:
 <td>Data to be predicted</td>
 <td><code>dict</code></td>
 <td>
-<code>Dict</code>; the format depends on the specific model, e.g. the PP-DocBee series input is {'image': image_path, 'query': query_text}
+<code>Dict</code>; since multimodal models have different input requirements, the exact format depends on the specific model. Specifically:
+<li>the PP-DocBee series expects <code>{'image': image_path, 'query': query_text}</code></li>
 </td>
 <td>None</td>
 </tr>
 <tr>
 <td><code>batch_size</code></td>
 <td>Batch size</td>
 <td><code>int</code></td>
-<td>Integer (currently only 1 is supported)</td>
+<td>Integer</td>
 <td>1</td>
 </tr>
 </table>
@@ -241,3 +254,5 @@ for res in results:
 ## IV. Secondary Development
 
 This module does not currently support fine-tuning; only inference integration is supported. Support for fine-tuning this module is planned for the future.
+
+## V. FAQ
````
