
add Electromobile_In_Elevator_Detection #864

Open · wants to merge 11 commits into master
@@ -0,0 +1,171 @@
# Electromobile (Electric Bike) Detection in Elevators

## Contents

* [Project Description](#project-description)
* [Installation](#installation)
* [Data Preparation](#data-preparation)
* [Model Selection](#model-selection)
* [Model Training](#model-training)
* [Model Export](#model-export)
* [Recognition Model Preparation](#recognition-model-preparation)
* [Starting the Recognition Service](#starting-the-recognition-service)

<a name="project-description"></a>

## 1 Project Description

Electric bikes (electromobiles) brought into buildings and homes are a well-known fire hazard. This project builds a model that detects electric bikes entering elevators, aiming to reduce this kind of safety risk at the source. Because a detector for motorcycle-like objects can produce false detections on its own, a detection-plus-recognition approach is used to achieve more accurate identification. The project uses the picodet model from the PaddlePaddle object detection suite PaddleDetection together with the lightweight general recognition model from the image recognition suite PaddleClas.

![demo](docs/images/result_5.png)

Note: to run the code online on AI Studio (free GPU resources), see [the end-to-end project for electric bike detection in elevators](https://aistudio.baidu.com/aistudio/projectdetail/3497217?channelType=0&channel=0).

<a name="installation"></a>

## 2 Installation

##### Requirements

* PaddlePaddle = 2.2.2
* Python >= 3.5

<a name="data-preparation"></a>

## 3 Data Preparation

The dataset used to train the picodet model is in VOC format (annotated with labelimg). It contains 21,903 images taken inside elevators, split into 17,522 training images and 4,381 test images, with 14,715 motorcycle boxes, 23,058 person boxes, and 3,750 bicycle boxes. Since picodet is trained on COCO-format data, the VOC annotations need to be converted to COCO format. About the VOC dataset: the labelimg annotation tool generates one XML annotation file per original image, and these files form the raw VOC-format dataset. The XML layout is shown below; each `object` element corresponds to one annotated box, its `name` field gives the object class, and the `bndbox` field holds the rectangle coordinates of the box (top-left and bottom-right corners).

![label_img](docs/images/label_img.png)

![xml_content](docs/images/xml_content.png)
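
As a quick check of this structure, the class name and box coordinates can be read from one of the annotation files with Python's standard library. This is only a minimal sketch; the file path below is an example taken from the directory layout shown later.

```python
import xml.etree.ElementTree as ET

# Example annotation file; any file from the Annotations directory will do.
xml_path = "picodet_motorcycle/Annotations/1595214506200933-1604535322-[]-motorcycle.xml"

root = ET.parse(xml_path).getroot()
for obj in root.iter("object"):
    name = obj.find("name").text                     # object class, e.g. "motorcycle"
    box = obj.find("bndbox")
    xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)  # top-left corner
    xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)  # bottom-right corner
    print(name, (xmin, ymin, xmax, ymax))
```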


About organizing the VOC dataset: after annotation is finished, the dataset is organized so that each image is matched with its XML file, and the corresponding training and test splits are then generated (see the split sketch after the directory layout below).

```
├── classify_voc.py
├── picodet_motorcycle
│   ├── Annotations
│   │   ├── 1595214506200933-1604535322-[]-motorcycle.xml
│   │   ├── 1595214506200933-1604542813-[]-motorcycle.xml
│   │   ├── 1595214506200933-1604559538-[]-motorcycle.xml
│   │   ...
│   ├── ImageSets
│   │   └── Main
│   │       ├── test.txt
│   │       ├── train.txt
│   │       ├── trainval.txt
│   │       └── val.txt
│   └── JPEGImages
│       ├── 1595214506200933-1604535322-[]-motorcycle.jpg
│       ├── 1595214506200933-1604542813-[]-motorcycle.jpg
│       ├── 1595214506200933-1604559538-[]-motorcycle.jpg
│       ...
├── picodet_motorcycle.zip
├── prepare_voc_data.py
├── test.txt
└── trainval.txt
```
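
The split lists under `ImageSets/Main` can be generated with a short script. The sketch below only illustrates the idea (a random split at a fixed ratio); it is not the repository's `prepare_voc_data.py` or `classify_voc.py`, and the paths are placeholders.

```python
import os
import random

# Placeholder paths; point these at the unpacked picodet_motorcycle dataset.
dataset_root = "picodet_motorcycle"
anno_dir = os.path.join(dataset_root, "Annotations")
split_dir = os.path.join(dataset_root, "ImageSets", "Main")
os.makedirs(split_dir, exist_ok=True)

# One XML file per image; the sample id is the file name without its extension.
samples = [f[:-4] for f in os.listdir(anno_dir) if f.endswith(".xml")]
random.seed(0)
random.shuffle(samples)

# Roughly 80/20 trainval/test, mirroring the 17,522 / 4,381 split described above.
num_test = len(samples) // 5
test, trainval = samples[:num_test], samples[num_test:]

for file_name, subset in (("trainval.txt", trainval), ("test.txt", test)):
    with open(os.path.join(split_dir, file_name), "w") as f:
        f.write("\n".join(subset) + "\n")
```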

VOC dataset: [download link](https://aistudio.baidu.com/aistudio/datasetdetail/128282)
Recognition (gallery) dataset: [download link](https://aistudio.baidu.com/aistudio/datasetdetail/128448)

Convert the VOC dataset to COCO format with the x2coco.py conversion script that ships with PaddleDetection.
The commands below are an example; adjust the paths to your environment.
```
python x2coco.py --dataset_type voc --voc_anno_dir /home/aistudio/data/data128282/ --voc_anno_list /home/aistudio/data/data128282/trainval.txt --voc_label_list /home/aistudio/data/data128282/label_list.txt --voc_out_name voc_train.json
python x2coco.py --dataset_type voc --voc_anno_dir /home/aistudio/data/data128282/ --voc_anno_list /home/aistudio/data/data128282/test.txt --voc_label_list /home/aistudio/data/data128282/label_list.txt --voc_out_name voc_test.json
mv voc_test.json /home/aistudio/data/data128282/
mv voc_train.json /home/aistudio/data/data128282/

```
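
After the conversion it is worth sanity-checking the generated COCO files. A minimal sketch, assuming the JSON files were moved to the paths used in the commands above:

```python
import json

# Paths from the x2coco.py / mv commands above; adjust if your layout differs.
for name in ("voc_train.json", "voc_test.json"):
    with open("/home/aistudio/data/data128282/" + name) as f:
        coco = json.load(f)
    print(name,
          "images:", len(coco["images"]),
          "annotations:", len(coco["annotations"]),
          "categories:", [c["name"] for c in coco["categories"]])
```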
<a name="model-selection"></a>

## 4 Model Selection

This project uses PP-PicoDet, the new series of lightweight models introduced in PaddleDetection.

PP-PicoDet has the following characteristics:

- Higher mAP: the first model with under 1M parameters to exceed 30 mAP (0.5:0.95) at a 416 input resolution.
- Faster inference: up to 150 FPS on ARM CPU.
- Deployment friendly: supports the PaddleLite/MNN/NCNN/OpenVINO inference libraries, supports export to ONNX, and provides C++/Python/Android demos.
- Advanced algorithms: improves on existing SOTA algorithms, including ESNet, CSP-PAN, SimOTA, and more.


<a name="model-training"></a>

## 5 Model Training


First, install the dependencies:
```
cd code/train/
pip install pycocotools
pip install faiss-gpu
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Then install the Paddle Serving packages, which are needed later to export and deploy the serving models:
```
pip install paddle-serving-app==0.6.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-client==0.6.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install paddle-serving-server-gpu==0.6.3.post102 -i https://pypi.tuna.tsinghua.edu.cn/simple
```

<a name="model-export"></a>

## 6 Model Export


Export the trained model as a serving model:
```
cd code/train/
python export_model.py --export_serving_model=true -c picodet_lcnet_1_5x_416_coco.yml --output_dir=./output_inference/
```

```
cd code/train/output_inference/picodet_lcnet_1_5x_416_coco/
mv serving_server/ code/picodet_lcnet_1_5x_416_coco/
```

Start the service:
```
cd /home/aistudio/work/code/picodet_lcnet_1_5x_416_coco/
python3 web_service.py
```
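
Once `web_service.py` is running, the service can be queried over HTTP. The sketch below shows one possible client; the URL, port, and payload schema are assumptions based on typical Paddle Serving pipeline services, so check `web_service.py` and its configuration for the actual values.

```python
import base64
import json

import requests

# Placeholder endpoint; the real port and route are defined by web_service.py.
url = "http://127.0.0.1:9292/prediction"

with open("test.jpg", "rb") as f:                       # any elevator image to test with
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Typical Paddle Serving pipeline payload: parallel "key" and "value" lists.
payload = {"key": ["image"], "value": [image_b64]}
response = requests.post(url, data=json.dumps(payload))
print(response.json())
```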

The detection result is shown below:

![infer_result](docs/images/infer_result.png)

<a name="recognition-model-preparation"></a>

## 7 Recognition Model Preparation

Once the detection model has been tested, the electric-bike-in-elevator detection feature can already be put to use. To further improve accuracy and reduce false detections, an additional verification step is added on top of detection. This project uses general_PPLCNet_x2_5_lite_v1.0_infer, the lightweight general recognition model from PaddleClas image recognition.

First, download the model from PaddlePaddle, extract it, and convert it into a serving model:
```
cd code/
wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
cd models
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ --model_filename inference.pdmodel --params_filename inference.pdiparams --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
cp -r ./general_PPLCNet_x2_5_lite_v1.0_serving ../general_PPLCNet_x2_5_lite_v1.0/
```

After extracting the dataset, adjust the paths in make_label.py accordingly, then build the retrieval index:
```
cd code
python make_label.py
python python/build_gallery.py -c build_gallery/build_general.yaml -o IndexProcess.data_file="./index_label.txt" -o IndexProcess.index_dir="index_result"
mv index_result/ general_PPLCNet_x2_5_lite_v1.0/
```
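
For reference, `build_gallery.py` (included in this PR) expects `IndexProcess.data_file` to contain one gallery image per line, with the relative image path and its label separated by the configured delimiter (tab by default). The sketch below writes such a file with purely illustrative paths and labels; the real entries are produced by `make_label.py` from the unpacked dataset.

```python
# Illustrative entries only; real paths and labels come from make_label.py.
entries = [
    ("gallery/electromobile/0001.jpg", "electromobile"),
    ("gallery/motorcycle/0001.jpg", "motorcycle"),
    ("gallery/bicycle/0001.jpg", "bicycle"),
]

with open("index_label.txt", "w", encoding="utf-8") as f:
    for image_path, label in entries:
        # build_gallery.py splits each line on the delimiter and needs at least
        # two fields: the image path (joined with IndexProcess.image_root) and the label.
        f.write(f"{image_path}\t{label}\n")
```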

<a name="starting-the-recognition-service"></a>

## 8 Starting the Recognition Service
```
cd /home/aistudio/work/code/general_PPLCNet_x2_5_lite_v1.0/
python recognition_web_service_onlyrec.py
```

The inference result on a real-world scene is shown below:

![index_infer_result](docs/images/index_infer_result.png)
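
For a rough picture of what the recognition service does with the index, the gallery lookup can also be reproduced offline against the `vector.index` and `id_map.pkl` files written by `build_gallery.py`. This is only a sketch with a random query vector, not the code in `recognition_web_service_onlyrec.py`; in the real service the query embedding comes from the PP-LCNet recognition model, and the index directory path may differ.

```python
import os
import pickle

import faiss
import numpy as np

# Directory produced by build_gallery.py and moved under the recognition model above.
index_dir = "general_PPLCNet_x2_5_lite_v1.0/index_result"

# Load the faiss index and the id -> "image_path<TAB>label" mapping.
index = faiss.read_index(os.path.join(index_dir, "vector.index"))
with open(os.path.join(index_dir, "id_map.pkl"), "rb") as f:
    id_map = pickle.load(f)

# A random vector stands in for the embedding of a detected crop.
query_feat = np.random.rand(1, index.d).astype("float32")

scores, ids = index.search(query_feat, 5)   # top-5 nearest gallery entries
for score, idx in zip(scores[0], ids[0]):
    print(float(score), id_map[int(idx)])
```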
@@ -0,0 +1,213 @@
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys

__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))

import cv2
import faiss
import numpy as np
from tqdm import tqdm
import pickle
from predict_rec import RecPredictor

from utils import logger
from utils import config


def split_datafile(data_file, image_root, delimiter="\t"):
'''
    data_file: file containing image paths and label info, separated by the delimiter
    image_root: root directory of the images
    delimiter: separator between the fields of each line (tab by default)
'''
gallery_images = []
gallery_docs = []
with open(data_file, 'r', encoding='utf-8') as f:
lines = f.readlines()
for _, ori_line in enumerate(lines):
line = ori_line.strip().split(delimiter)
text_num = len(line)
            assert text_num >= 2, f"line({ori_line}) must be split into at least 2 parts, but got {text_num}"
image_file = os.path.join(image_root, line[0])

gallery_images.append(image_file)
gallery_docs.append(ori_line.strip())

return gallery_images, gallery_docs


class GalleryBuilder(object):
def __init__(self, config):

self.config = config
self.rec_predictor = RecPredictor(config)
assert 'IndexProcess' in config.keys(), "Index config not found ... "
self.build(config['IndexProcess'])

def build(self, config):
'''
build index from scratch
'''
operation_method = config.get("index_operation", "new").lower()

gallery_images, gallery_docs = split_datafile(
config['data_file'], config['image_root'], config['delimiter'])

        # when removing data from the index, there is no need to extract features
if operation_method != "remove":
gallery_features = self._extract_features(gallery_images, config)
assert operation_method in [
"new", "remove", "append"
], "Only append, remove and new operation are supported"

# vector.index: faiss index file
# id_map.pkl: use this file to map id to image_doc
if operation_method in ["remove", "append"]:
# if remove or append, vector.index and id_map.pkl must exist
            assert os.path.exists(
                os.path.join(config["index_dir"], "vector.index")
            ), "The vector.index does not exist in {} when 'index_operation' is not None".format(
                config["index_dir"])
            assert os.path.exists(
                os.path.join(config["index_dir"], "id_map.pkl")
            ), "The id_map.pkl does not exist in {} when 'index_operation' is not None".format(
                config["index_dir"])
index = faiss.read_index(
os.path.join(config["index_dir"], "vector.index"))
with open(os.path.join(config["index_dir"], "id_map.pkl"),
'rb') as fd:
ids = pickle.load(fd)
            assert index.ntotal == len(ids.keys(
            )), "data number in the index is not equal to that in id_map"
else:
if not os.path.exists(config["index_dir"]):
os.makedirs(config["index_dir"], exist_ok=True)
index_method = config.get("index_method", "HNSW32")

            # for the IVF method, compute the number of IVF lists automatically
if index_method == "IVF":
index_method = index_method + str(
min(int(len(gallery_images) // 8), 65536)) + ",Flat"

# for binary index, add B at head of index_method
if config["dist_type"] == "hamming":
index_method = "B" + index_method

            # distance metric type
dist_type = faiss.METRIC_INNER_PRODUCT if config[
"dist_type"] == "IP" else faiss.METRIC_L2

            # build the index
if config["dist_type"] == "hamming":
index = faiss.index_binary_factory(config["embedding_size"],
index_method)
else:
index = faiss.index_factory(config["embedding_size"],
index_method, dist_type)
index = faiss.IndexIDMap2(index)
ids = {}

if config["index_method"] == "HNSW32":
logger.warning(
"The HNSW32 method dose not support 'remove' operation")

if operation_method != "remove":
# calculate id for new data
start_id = max(ids.keys()) + 1 if ids else 0
ids_now = (
np.arange(0, len(gallery_images)) + start_id).astype(np.int64)

# only train when new index file
if operation_method == "new":
if config["dist_type"] == "hamming":
index.add(gallery_features)
else:
index.train(gallery_features)

if not config["dist_type"] == "hamming":
index.add_with_ids(gallery_features, ids_now)

for i, d in zip(list(ids_now), gallery_docs):
ids[i] = d
else:
if config["index_method"] == "HNSW32":
raise RuntimeError(
"The index_method: HNSW32 dose not support 'remove' operation"
)
# remove ids in id_map, remove index data in faiss index
remove_ids = list(
filter(lambda k: ids.get(k) in gallery_docs, ids.keys()))
remove_ids = np.asarray(remove_ids)
index.remove_ids(remove_ids)
for k in remove_ids:
del ids[k]

# store faiss index file and id_map file
if config["dist_type"] == "hamming":
faiss.write_index_binary(
index, os.path.join(config["index_dir"], "vector.index"))
else:
faiss.write_index(
index, os.path.join(config["index_dir"], "vector.index"))

with open(os.path.join(config["index_dir"], "id_map.pkl"), 'wb') as fd:
pickle.dump(ids, fd)

def _extract_features(self, gallery_images, config):
# extract gallery features
if config["dist_type"] == "hamming":
gallery_features = np.zeros(
[len(gallery_images), config['embedding_size'] // 8],
dtype=np.uint8)
else:
gallery_features = np.zeros(
[len(gallery_images), config['embedding_size']],
dtype=np.float32)

        # construct image batches and run inference
batch_size = config.get("batch_size", 32)
batch_img = []
for i, image_file in enumerate(tqdm(gallery_images)):
img = cv2.imread(image_file)
if img is None:
logger.error("img empty, please check {}".format(image_file))
exit()
img = img[:, :, ::-1]
batch_img.append(img)

if (i + 1) % batch_size == 0:
rec_feat = self.rec_predictor.predict(batch_img)
gallery_features[i - batch_size + 1:i + 1, :] = rec_feat
batch_img = []

if len(batch_img) > 0:
rec_feat = self.rec_predictor.predict(batch_img)
gallery_features[-len(batch_img):, :] = rec_feat
batch_img = []

return gallery_features


def main(config):
GalleryBuilder(config)
return


if __name__ == "__main__":
args = config.parse_args()
config = config.get_config(args.config, overrides=args.override, show=True)
main(config)