-# Paddle-Mobile
+[中文版](./README_cn.md)
+
+# Paddle Lite

[Build Status](https://travis-ci.org/PaddlePaddle/paddle-mobile)
[Documentation](https://github.com/PaddlePaddle/paddle-mobile/tree/develop/doc)
[License](LICENSE)
+<!-- [Release](https://github.com/PaddlePaddle/Paddle-Mobile/releases) -->

-<!--[Release](https://github.com/PaddlePaddle/Paddle-Mobile/releases)
-[License](LICENSE)-->
-
-Welcome to the Paddle-Mobile GitHub project. Paddle-Mobile is a project under PaddlePaddle as well as a deep learning framework for embedded platforms.
-
-Welcome to the Paddle-Mobile GitHub project. Paddle-Mobile is a project under the PaddlePaddle organization and a deep learning framework dedicated to embedded platforms.
-
-## Features
-
-- high performance in support of ARM CPU
-- support for Mali GPU
-- support for Adreno GPU
-- support for GPU Metal on Apple devices
-- support for ZU5, ZU9 and other FPGA-based development boards
-- support for Raspberry Pi and other ARM-Linux development boards
-
-## Features
-
-- High-performance support for ARM CPU
-- Support for Mali GPU
-- Support for Adreno GPU
-- Support for the GPU Metal implementation on Apple devices
-- Support for ZU5, ZU9 and other FPGA development boards
-- Support for Raspberry Pi and other ARM-Linux development boards
-
-
33 |
| -## Demo |
34 |
| -- [ANDROID](https://github.com/xiebaiyuan/paddle-mobile-demo) |
35 |
| - |
36 |
| -### 原Domo目录 |
37 |
| - |
38 |
| -[https://github.com/PaddlePaddle/paddle-mobile/tree/develop/demo](https://github.com/PaddlePaddle/paddle-mobile/tree/develop/demo) |
39 |
| - |
40 |
| -## Documentation |
41 |
| - |
42 |
| -### Documentation of design |
+Paddle Lite is an updated version of Paddle-Mobile, an open source deep learning framework designed to make it easy to perform inference on mobile devices. It is compatible with PaddlePaddle and with pre-trained models from other sources.

-If you want to know more details about the design of paddle-mobile, please refer to the link below. There are many earlier designs and discussions in the [issues](https://github.com/PaddlePaddle/paddle-mobile/issues).
+For tutorials, please see the [PaddleLite Wiki](https://github.com/PaddlePaddle/paddle-mobile/wiki).
-[Design documentation link](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/design_doc.md)
+## Key Features

-### Development documentation
+### Light Weight
-Development documentation mainly covers building, running and related tasks. As a developer, you can use it together with the contribution documents.
-* [iOS](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_ios.md)
-* [Android_CPU](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_android.md)
-* [Android_GPU](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_android_GPU.md)
-* [FPGA](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_fpga.md)
-* [ARM_LINUX](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_arm_linux.md)
+On mobile devices, the execution module can be deployed without any third-party libraries, because the execution module and the analysis module are decoupled.
-### How to contribute your documents
-- [Tutorial on contributing documents](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/CONTRIBUTING.md)
-- The main procedure for contributing code is covered in the document above. If you run into other problems during the process, please file an [issue](https://github.com/PaddlePaddle/paddle-mobile/issues). We will deal with it as quickly as possible.
+On ARMv7, the dynamic libraries provided by Paddle Lite take up only 800 KB, and on ARMv8 they take up 1.3 MB, covering 80 operators and 85 kernels.
-## Documentation
+Paddle Lite enables immediate inference without extra optimization.
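+
+To illustrate what such a deployment might look like, below is a minimal C++ sketch of loading an already-optimized model and running inference with the mobile (execution-only) API. The class and method names (`MobileConfig`, `CreatePaddlePredictor`, the `paddle_api.h` header) and the model path are assumptions for illustration only; please check the Wiki above for the exact interface.
+
+```cpp
+#include <iostream>
+
+#include "paddle_api.h"  // Paddle Lite C++ API header (name assumed)
+
+using namespace paddle::lite_api;  // NOLINT
+
+int main() {
+  // Configure the light-weight runtime with an already-optimized model;
+  // only the execution module is needed on the device.
+  MobileConfig config;
+  config.set_model_dir("./mobilenet_v1_opt");  // hypothetical model directory
+
+  // Create a predictor from the config.
+  auto predictor = CreatePaddlePredictor<MobileConfig>(config);
+
+  // Fill the input tensor (for example, a 1 x 3 x 224 x 224 image).
+  auto input = predictor->GetInput(0);
+  input->Resize({1, 3, 224, 224});
+  float* data = input->mutable_data<float>();
+  for (int i = 0; i < 1 * 3 * 224 * 224; ++i) data[i] = 1.0f;
+
+  // Run inference and read the first output value.
+  predictor->Run();
+  auto output = predictor->GetOutput(0);
+  std::cout << "output[0] = " << output->data<float>()[0] << std::endl;
+  return 0;
+}
+```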
-### Design documentation
+### High Performance

-The paddle-mobile design documentation is at the link below if you want to learn more. Many of the early designs and discussions can be found in the [issues](https://github.com/PaddlePaddle/paddle-mobile/issues).
-[Design documentation link](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/design_doc.md)
+Paddle Lite enables device-optimized kernels, maximizing ARM CPU performance.
-### Development documentation
+It also supports INT8 quantization with the [PaddleSlim model compression tools](https://github.com/PaddlePaddle/models/tree/v1.5/PaddleSlim), reducing model size while improving performance.

-The development documentation mainly covers building, running and related topics. As a developer, you can use it together with the contribution documents.
-* [iOS](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_ios.md)
-* [Android_CPU](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_android.md)
-* [Android_GPU](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_android_GPU.md)
-* [FPGA](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_fpga.md)
-* [ARM_LINUX](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/doc/development_arm_linux.md)
+On Huawei NPU and FPGA, the performance is also boosted.
-### Contribution documentation
-- [Contribution documentation link](https://github.com/PaddlePaddle/paddle-mobile/blob/develop/CONTRIBUTING.md)
-- The document above covers the main workflow for contributing code. If you run into other problems in practice, please file an [issue](https://github.com/PaddlePaddle/paddle-mobile/issues). We will deal with it as soon as we see it.
+### High Compatibility
-## Acquisition of Models
-At present, Paddle-Mobile only supports models trained with Paddle fluid. If you have models of other kinds, they can run normally after conversion.
-### 1. Use Paddle Fluid directly to train
-This is the most reliable approach and the recommended one.
-### 2. Convert a Caffe model to a Paddle Fluid model
-[caffe2fluid](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/caffe2fluid)
-### 3. ONNX
-ONNX stands for Open Neural Network Exchange. The project aims to enable full interoperability among different neural network development frameworks.
+Hardware compatibility: Paddle Lite supports a diversity of hardware, including ARM CPU, Mali GPU, Adreno GPU, Huawei NPU and FPGA. In the near future, we will also support AI chips from Cambricon and Bitmain.
-Besides directly using fluid models trained by PaddlePaddle, you can also obtain certain Paddle fluid models through ONNX conversion.
+Model compatibility: The operators of Paddle Lite are fully compatible with those of PaddlePaddle. The accuracy and performance of 18 models (mostly CV and OCR models) and 85 operators have been validated, and more models will be supported in the future.
-At present, Baidu is also working on ONNX support. The related conversion project can be found here:
-[https://github.com/PaddlePaddle/paddle-onnx](https://github.com/PaddlePaddle/paddle-onnx)
+Framework compatibility: In addition to models trained on PaddlePaddle, those trained on Caffe and TensorFlow can also be converted for use with Paddle Lite via [X2Paddle](https://github.com/PaddlePaddle/X2Paddle). Support for models in ONNX format is planned as well.
-### 4. Download some test models and test images
-[http://mms-graph.bj.bcebos.com/paddle-mobile%2FmodelsAndImages.zip](http://mms-graph.bj.bcebos.com/paddle-mobile%2FmodelsAndImages.zip)
+## Architecture
-- Test input data can be generated with the tools under `tools/python/imagetools`.
+Paddle Lite is designed to support a wide range of hardware and devices; it enables mixed execution of a single model on multiple devices, optimization across multiple phases, and light-weight deployment on devices.

+![Architecture](https://github.com/Superjomn/_tmp_images/raw/master/images/paddle-lite-architecture.png)
-## Obtaining models
-At present, Paddle-Mobile only supports models trained with Paddle fluid. If you have models of other kinds, they must be converted before they can run.
-### 1. Train directly with Paddle Fluid
-This is the most reliable approach and the recommended one.
-### 2. Convert a Caffe model to a Paddle Fluid model
-[caffe2fluid](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/caffe2fluid)
-### 3. ONNX
-ONNX stands for Open Neural Network Exchange. The goal of the project is to let different neural network development frameworks interoperate.
+As shown in the figure above, the analysis phase includes a machine IR module and enables optimizations such as op fusion and redundant computation pruning. The execution phase only involves kernel execution, so it can be deployed on its own for a maximally light-weight deployment.
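+
+As a rough sketch of how this split plays out in practice: the analysis phase can be run once, offline, to produce an optimized model, and only the execution phase is shipped on the device. The names below (`CxxConfig`, `set_valid_places`, `SaveOptimizedModel`, `LiteModelType::kNaiveBuffer`) and the model paths are assumptions for illustration only; consult the Wiki for the exact interface.
+
+```cpp
+#include "paddle_api.h"  // Paddle Lite C++ API header (name assumed)
+
+using namespace paddle::lite_api;  // NOLINT
+
+// Offline (analysis phase): load the original model, run optimization passes
+// such as op fusion, and save a slimmed-down model for the target hardware.
+void OptimizeOffline() {
+  CxxConfig config;
+  config.set_model_dir("./mobilenet_v1");  // hypothetical original model
+  config.set_valid_places({Place{TARGET(kARM), PRECISION(kFloat)}});
+  auto predictor = CreatePaddlePredictor<CxxConfig>(config);
+  predictor->SaveOptimizedModel("./mobilenet_v1_opt",
+                                LiteModelType::kNaiveBuffer);
+}
+
+// On device (execution phase): only kernel execution is involved, so the
+// deployment stays light-weight; see the inference sketch under "Light Weight".
+void RunOnDevice() {
+  MobileConfig config;
+  config.set_model_dir("./mobilenet_v1_opt");
+  auto predictor = CreatePaddlePredictor<MobileConfig>(config);
+  predictor->Run();  // inputs would be filled exactly as in the earlier sketch
+}
+```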
-Besides training fluid models directly with PaddlePaddle, you can also obtain certain Paddle fluid models through ONNX conversion.
+## Key Info about the Update

-At present, Baidu is also working on ONNX support. The related conversion project is here:
-[https://github.com/PaddlePaddle/paddle-onnx](https://github.com/PaddlePaddle/paddle-onnx)
+The earlier Paddle-Mobile was designed to be compatible with PaddlePaddle and multiple hardware platforms, including ARM CPU, Mali GPU, Adreno GPU, FPGA, ARM-Linux and Apple's GPU Metal. Within Baidu, Inc., many product lines have been using Paddle-Mobile. For more details, please see [mobile/README](mobile/README).
-### 4. Download some test models and test images
-[http://mms-graph.bj.bcebos.com/paddle-mobile%2FmodelsAndImages.zip](http://mms-graph.bj.bcebos.com/paddle-mobile%2FmodelsAndImages.zip)
+As an update of Paddle-Mobile, Paddle Lite has incorporated many of the older capabilities into the [new architecture](https://github.com/PaddlePaddle/paddle-mobile/tree/develop/lite). For the time being, the code of Paddle-Mobile will be kept under the `mobile/` directory until the transfer to Paddle Lite is complete.

-- Test input data can be generated with the scripts under `tools/python/imagetools` in this repository.
+For inference with Apple's GPU Metal and on the web front end, please see `./metal` and `./web`. These two modules will continue to be developed and maintained.
-## Communication
-- [GitHub Issues](https://github.com/PaddlePaddle/Paddle/issues): bug reports, feature requests, install issues, usage issues, etc.
-- QQ discussion group: 696965088 (Paddle-Mobile).
-- [Forums](http://ai.baidu.com/forum/topic/list/168?pageNo=1): discuss implementations, research, etc.
+## Special Thanks
-## Communication and Feedback
-- You are welcome to submit questions, reports and suggestions through [GitHub Issues](https://github.com/PaddlePaddle/Paddle/issues)
-- QQ group: 696965088 (Paddle-Mobile)
-- [Forum](http://ai.baidu.com/forum/topic/list/168): you are welcome to share problems and experiences with PaddlePaddle on the PaddlePaddle forum and help keep it a friendly community
+Paddle Lite has referenced the following open-source projects:
-## Old version Mobile-Deep-Learning
-The original MDL (Mobile-Deep-Learning) project has been moved to [Mobile-Deep-Learning](https://github.com/allonli/mobile-deep-learning).
+- [ARM compute library](https://github.com/ARM-software/ComputeLibrary)
+- [Anakin](https://github.com/PaddlePaddle/Anakin). The optimizations from Anakin have been incorporated into Paddle Lite, so there will be no further updates to Anakin. As another high-performance inference project under PaddlePaddle, Anakin has been forward-looking and helpful in the making of Paddle Lite.
-## Old version Mobile-Deep-Learning
-The original MDL (Mobile-Deep-Learning) project has been moved to [Mobile-Deep-Learning](https://github.com/allonli/mobile-deep-learning).
+## Feedback and Community Support

-## Copyright and License
-[Apache-2.0 license](LICENSE).
+- Questions, reports, and suggestions are welcome through GitHub Issues!
+- Forum: opinions and questions are welcome at the [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)!
+- QQ group chat: 696965088