diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/README.md b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/README.md
new file mode 100644
index 000000000..3cadea3bc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/README.md
@@ -0,0 +1,171 @@
+# Electric Moped Detection in Elevators
+
+## Contents
+
+* [Project Overview](#1-project-overview)
+* [Installation](#2-installation)
+* [Data Preparation](#3-data-preparation)
+* [Model Selection](#4-model-selection)
+* [Model Training](#5-model-training)
+* [Model Export](#6-model-export)
+* [Building the Retrieval Gallery](#7-building-the-retrieval-gallery)
+* [Deploying the Retrieval Service](#8-deploying-the-retrieval-service)
+
+
+
+## 1 Project Overview
+
+In recent years, fires caused by electric mopeds taken into buildings and homes have become all too common. This project provides a detection model that spots electric mopeds entering elevators, aiming to reduce such incidents at the source. Because an indoor motorcycle detector can produce false positives, an additional image-retrieval step is used for more precise recognition. The case is built on the PicoDet model from the PaddleDetection object detection suite and the lightweight general-purpose recognition model from the PaddleClas image recognition suite.
+
+
+
+Note: to run the code online on AI Studio, see [the end-to-end elevator electric moped detection project](https://aistudio.baidu.com/aistudio/projectdetail/3497217?channelType=0&channel=0) (use a GPU environment).
+## 2 Installation
+
+##### Requirements
+
+* PaddlePaddle = 2.2.2
+* Python >= 3.5
+
+
+
+## 3 Data Preparation
+
+The PicoDet dataset for this case is in VOC format (annotated with labelImg) and contains 21,903 images of everyday elevator scenes, split into 17,522 training and 4,381 test images, with 14,715 motorcycle boxes, 23,058 person boxes, and 3,750 bicycle boxes. Since PicoDet consumes COCO-format data, the VOC annotations must be converted to COCO.
+
+Generating the VOC dataset: the labelImg annotation tool is used to produce an XML annotation file for each raw image, which forms the initial VOC-format dataset. The XML layout is shown in the figure below: each `object` element represents one annotated instance, its `name` gives the class label, and its `bndbox` holds the box coordinates (top-left and bottom-right corners).
+
+![label_img](docs/images/label_img.png)
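+
+For reference, a minimal annotation in this format looks like the following (file name, image size, and coordinates are illustrative):
+
+```
+<annotation>
+    <filename>1595214506200933-1604535322-[]-motorcycle.jpg</filename>
+    <size>
+        <width>1280</width>
+        <height>720</height>
+        <depth>3</depth>
+    </size>
+    <object>
+        <name>motorcycle</name>
+        <bndbox>
+            <xmin>415</xmin>
+            <ymin>260</ymin>
+            <xmax>790</xmax>
+            <ymax>705</ymax>
+        </bndbox>
+    </object>
+</annotation>
+```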
+
+
+
+
+Generating the dataset split: once annotation is complete, the next step is to pair each image with its XML file and split the pairs proportionally into training and test sets. The resulting directory layout is:
+
+```
+├── classify_voc.py
+├── picodet_motorcycle
+│ ├── Annotations
+│ │ ├── 1595214506200933-1604535322-[]-motorcycle.xml
+│ │ ├── 1595214506200933-1604542813-[]-motorcycle.xml
+│ │ ├── 1595214506200933-1604559538-[]-motorcycle.xml
+│   │   ├── ...
+│ ├── ImageSets
+│ │ └── Main
+│ │ ├── test.txt
+│ │ ├── train.txt
+│ │ ├── trainval.txt
+│ │ └── val.txt
+│ └── JPEGImages
+│ ├── 1595214506200933-1604535322-[]-motorcycle.jpg
+│ ├── 1595214506200933-1604542813-[]-motorcycle.jpg
+│ ├── 1595214506200933-1604559538-[]-motorcycle.jpg
+│       ├── ...
+├── picodet_motorcycle.zip
+├── prepare_voc_data.py
+├── test.txt
+└── trainval.txt
+```
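+
+`prepare_voc_data.py` in the listing above performs this split. Its contents are not reproduced in this README, but a minimal sketch of the idea is shown below (the 80/20 ratio and the output line format are assumptions; check the script itself):
+
+```
+import os
+import random
+
+# Collect the annotated sample stems and shuffle them (80/20 split assumed).
+xml_dir = os.path.join("picodet_motorcycle", "Annotations")
+stems = [f[:-4] for f in os.listdir(xml_dir) if f.endswith(".xml")]
+random.shuffle(stems)
+split = int(len(stems) * 0.8)
+
+# One sample per line; the exact format expected downstream may differ.
+with open("trainval.txt", "w") as f:
+    f.writelines(stem + "\n" for stem in stems[:split])
+with open("test.txt", "w") as f:
+    f.writelines(stem + "\n" for stem in stems[split:])
+```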
+
+VOC dataset: [download](https://aistudio.baidu.com/aistudio/datasetdetail/128282)
+Retrieval gallery dataset: [download](https://aistudio.baidu.com/aistudio/datasetdetail/128448)
+Convert the VOC-format dataset to COCO format (using the conversion script bundled with PaddleDetection).
+The commands below are only examples; adjust the paths for your environment:
+```
+python x2coco.py --dataset_type voc --voc_anno_dir /home/aistudio/data/data128282/ --voc_anno_list /home/aistudio/data/data128282/trainval.txt --voc_label_list /home/aistudio/data/data128282/label_list.txt --voc_out_name voc_train.json
+python x2coco.py --dataset_type voc --voc_anno_dir /home/aistudio/data/data128282/ --voc_anno_list /home/aistudio/data/data128282/test.txt --voc_label_list /home/aistudio/data/data128282/label_list.txt --voc_out_name voc_test.json
+mv voc_test.json /home/aistudio/data/data128282/
+mv voc_train.json /home/aistudio/data/data128282/
+
+```
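+
+`--voc_label_list` points to a plain-text file with one class name per line. For this dataset it would contain the three annotated classes (order is illustrative):
+
+```
+person
+motorcycle
+bicycle
+```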
+
+
+## 4 Model Selection
+
+This case uses PP-PicoDet, a new series of lightweight models introduced in PaddleDetection.
+
+PP-PicoDet models have the following features:
+
+ - Higher mAP: the first model with under 1M parameters to exceed 30 mAP(0.5:0.95) (with 416-pixel input).
+ - Faster prediction: up to 150 FPS on ARM CPUs.
+ - Deployment friendly: supports inference libraries such as Paddle Lite, MNN, NCNN, and OpenVINO; supports export to ONNX; provides C++/Python/Android demos.
+ - Advanced algorithms: improves on existing SOTA algorithms with innovations including ESNet, CSP-PAN, and SimOTA.
+
+
+
+
+## 5 Model Training
+
+
+First install the dependencies:
+```
+cd code/train/
+pip install pycocotools
+pip install faiss-gpu
+pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+```
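+
+Training itself is not spelled out in this case; assuming `code/train/` ships a `train.py` alongside the `export_model.py` used below (as in a standard PaddleDetection setup), a typical run would look like:
+
+```
+cd code/train/
+python train.py -c picodet_lcnet_1_5x_416_coco.yml --eval
+```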
+
+Preparation for exporting as a Serving model:
+```
+pip install paddle-serving-app==0.6.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
+pip install paddle-serving-client==0.6.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
+pip install paddle-serving-server-gpu==0.6.3.post102 -i https://pypi.tuna.tsinghua.edu.cn/simple
+```
+
+
+
+## 6 Model Export
+
+
+Export the trained model as a Serving model:
+```
+cd code/train/
+python export_model.py --export_serving_model=true -c picodet_lcnet_1_5x_416_coco.yml --output_dir=./output_inference/
+```
+
+```
+cd code/train/output_inference/picodet_lcnet_1_5x_416_coco/
+mv serving_server/ code/picodet_lcnet_1_5x_416_coco/
+```
+
+Start the detection service:
+```
+cd /home/aistudio/work/code/picodet_lcnet_1_5x_416_coco/
+python3 web_service.py
+```
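+
+Once the service is up it can be queried over HTTP. Below is a minimal client sketch; it reuses the `{"key": ["image"], "value": [...]}` payload format that appears elsewhere in this case, while the URL, port, and endpoint name are assumptions to be checked against `web_service.py`:
+
+```
+import base64
+import json
+
+import cv2
+import requests
+
+# Read a test image and base64-encode it, as the serving pipeline expects.
+img = cv2.imread("test.jpg")
+data = base64.b64encode(cv2.imencode(".jpg", img)[1].tobytes()).decode("utf8")
+payload = {"key": ["image"], "value": [data]}
+
+# Endpoint is an assumption; check web_service.py for the real name and port.
+r = requests.post("http://127.0.0.1:18080/picodet/prediction",
+                  data=json.dumps(payload), timeout=5)
+print(r.json())
+```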
+
+The response to a request looks like the figure below:
+
+
+
+
+
+## 7 Building the Retrieval Gallery
+
+Once the detection model is deployed, electric moped detection is ready for use. To raise overall accuracy and reduce false positives, however, an additional retrieval stage is added. We use general_PPLCNet_x2_5_lite_v1.0_infer, the lightweight general-purpose recognition model from the image recognition component of PaddleClas.
+
+First download and extract the model from the official Paddle server, then export it as a Serving model:
+```
+cd code/
+wget -P models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
+cd models
+tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
+python3 -m paddle_serving_client.convert --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ --model_filename inference.pdmodel --params_filename inference.pdiparams --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
+cp -r ./general_PPLCNet_x2_5_lite_v1.0_serving ../general_PPLCNet_x2_5_lite_v1.0/
+```
+
+After extracting the gallery dataset, adjust the paths in make_label.py, then build the index:
+```
+cd code
+python make_label.py
+python python/build_gallery.py -c build_gallery/build_general.yaml -o IndexProcess.data_file="./index_label.txt" -o IndexProcess.index_dir="index_result"
+mv index_result/ general_PPLCNet_x2_5_lite_v1.0/
+```
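+
+`make_label.py` is expected to write `index_label.txt`, which `build_gallery.py` parses via `split_datafile` as tab-separated lines of image path and label, for example (paths and labels are illustrative):
+
+```
+gallery/motorcycle/0001.jpg	motorcycle
+gallery/bicycle/0001.jpg	bicycle
+```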
+
+
+
+## 8 Deploying the Retrieval Service
+```
+cd /home/aistudio/work/code/general_PPLCNet_x2_5_lite_v1.0/
+python recognition_web_service_onlyrec.py
+```
+
+In a real-world scene, the response to a request is shown in the figure below.
+
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_gallery.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_gallery.py
new file mode 100644
index 000000000..5a7d82fb5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_gallery.py
@@ -0,0 +1,213 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import faiss
+import numpy as np
+from tqdm import tqdm
+import pickle
+from predict_rec import RecPredictor
+
+from utils import logger
+from utils import config
+
+
+def split_datafile(data_file, image_root, delimiter="\t"):
+    '''
+    data_file: text file in which each line holds an image path and its label, separated by `delimiter`
+    image_root: root directory of the gallery images
+    delimiter: field separator between path and label
+    '''
+ gallery_images = []
+ gallery_docs = []
+ with open(data_file, 'r', encoding='utf-8') as f:
+ lines = f.readlines()
+ for _, ori_line in enumerate(lines):
+ line = ori_line.strip().split(delimiter)
+ text_num = len(line)
+            assert text_num >= 2, f"line({ori_line}) must be split into at least 2 parts, but got {text_num}"
+ image_file = os.path.join(image_root, line[0])
+
+ gallery_images.append(image_file)
+ gallery_docs.append(ori_line.strip())
+
+ return gallery_images, gallery_docs
+
+
+class GalleryBuilder(object):
+ def __init__(self, config):
+
+ self.config = config
+ self.rec_predictor = RecPredictor(config)
+ assert 'IndexProcess' in config.keys(), "Index config not found ... "
+ self.build(config['IndexProcess'])
+
+ def build(self, config):
+ '''
+ build index from scratch
+ '''
+ operation_method = config.get("index_operation", "new").lower()
+
+ gallery_images, gallery_docs = split_datafile(
+ config['data_file'], config['image_root'], config['delimiter'])
+
+        # when removing data from the index, feature extraction is not needed
+ if operation_method != "remove":
+ gallery_features = self._extract_features(gallery_images, config)
+ assert operation_method in [
+ "new", "remove", "append"
+ ], "Only append, remove and new operation are supported"
+
+ # vector.index: faiss index file
+ # id_map.pkl: use this file to map id to image_doc
+ if operation_method in ["remove", "append"]:
+ # if remove or append, vector.index and id_map.pkl must exist
+            assert os.path.exists(
+                os.path.join(config["index_dir"], "vector.index")
+            ), "The vector.index does not exist in {} when 'index_operation' is not None".format(
+                config["index_dir"])
+            assert os.path.exists(
+                os.path.join(config["index_dir"], "id_map.pkl")
+            ), "The id_map.pkl does not exist in {} when 'index_operation' is not None".format(
+                config["index_dir"])
+ index = faiss.read_index(
+ os.path.join(config["index_dir"], "vector.index"))
+ with open(os.path.join(config["index_dir"], "id_map.pkl"),
+ 'rb') as fd:
+ ids = pickle.load(fd)
+ assert index.ntotal == len(ids.keys(
+            )), "data number in index is not equal to that in id_map"
+ else:
+ if not os.path.exists(config["index_dir"]):
+ os.makedirs(config["index_dir"], exist_ok=True)
+ index_method = config.get("index_method", "HNSW32")
+
+            # for the IVF method, calculate the number of inverted lists automatically
+ if index_method == "IVF":
+ index_method = index_method + str(
+ min(int(len(gallery_images) // 8), 65536)) + ",Flat"
+
+ # for binary index, add B at head of index_method
+ if config["dist_type"] == "hamming":
+ index_method = "B" + index_method
+
+            # distance type
+ dist_type = faiss.METRIC_INNER_PRODUCT if config[
+ "dist_type"] == "IP" else faiss.METRIC_L2
+
+            # build the index
+ if config["dist_type"] == "hamming":
+ index = faiss.index_binary_factory(config["embedding_size"],
+ index_method)
+ else:
+ index = faiss.index_factory(config["embedding_size"],
+ index_method, dist_type)
+ index = faiss.IndexIDMap2(index)
+ ids = {}
+
+ if config["index_method"] == "HNSW32":
+ logger.warning(
+                "The HNSW32 method does not support the 'remove' operation")
+
+ if operation_method != "remove":
+ # calculate id for new data
+ start_id = max(ids.keys()) + 1 if ids else 0
+ ids_now = (
+ np.arange(0, len(gallery_images)) + start_id).astype(np.int64)
+
+ # only train when new index file
+ if operation_method == "new":
+ if config["dist_type"] == "hamming":
+ index.add(gallery_features)
+ else:
+ index.train(gallery_features)
+
+ if not config["dist_type"] == "hamming":
+ index.add_with_ids(gallery_features, ids_now)
+
+ for i, d in zip(list(ids_now), gallery_docs):
+ ids[i] = d
+ else:
+ if config["index_method"] == "HNSW32":
+ raise RuntimeError(
+                    "The index_method: HNSW32 does not support the 'remove' operation"
+ )
+ # remove ids in id_map, remove index data in faiss index
+ remove_ids = list(
+ filter(lambda k: ids.get(k) in gallery_docs, ids.keys()))
+ remove_ids = np.asarray(remove_ids)
+ index.remove_ids(remove_ids)
+ for k in remove_ids:
+ del ids[k]
+
+ # store faiss index file and id_map file
+ if config["dist_type"] == "hamming":
+ faiss.write_index_binary(
+ index, os.path.join(config["index_dir"], "vector.index"))
+ else:
+ faiss.write_index(
+ index, os.path.join(config["index_dir"], "vector.index"))
+
+ with open(os.path.join(config["index_dir"], "id_map.pkl"), 'wb') as fd:
+ pickle.dump(ids, fd)
+
+ def _extract_features(self, gallery_images, config):
+ # extract gallery features
+ if config["dist_type"] == "hamming":
+ gallery_features = np.zeros(
+ [len(gallery_images), config['embedding_size'] // 8],
+ dtype=np.uint8)
+ else:
+ gallery_features = np.zeros(
+ [len(gallery_images), config['embedding_size']],
+ dtype=np.float32)
+
+        # construct image batches and run inference
+ batch_size = config.get("batch_size", 32)
+ batch_img = []
+ for i, image_file in enumerate(tqdm(gallery_images)):
+ img = cv2.imread(image_file)
+ if img is None:
+ logger.error("img empty, please check {}".format(image_file))
+ exit()
+ img = img[:, :, ::-1]
+ batch_img.append(img)
+
+ if (i + 1) % batch_size == 0:
+ rec_feat = self.rec_predictor.predict(batch_img)
+ gallery_features[i - batch_size + 1:i + 1, :] = rec_feat
+ batch_img = []
+
+ if len(batch_img) > 0:
+ rec_feat = self.rec_predictor.predict(batch_img)
+ gallery_features[-len(batch_img):, :] = rec_feat
+ batch_img = []
+
+ return gallery_features
+
+
+def main(config):
+ GalleryBuilder(config)
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_general.yaml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_general.yaml
new file mode 100644
index 000000000..5b83ea4d4
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/build_general.yaml
@@ -0,0 +1,36 @@
+Global:
+ rec_inference_model_dir: "./models/general_PPLCNet_x2_5_lite_v1.0_infer"
+ batch_size: 32
+ use_gpu: False
+ enable_mkldnn: True
+ cpu_num_threads: 10
+ enable_benchmark: True
+ use_fp16: False
+ ir_optim: True
+ use_tensorrt: False
+ gpu_mem: 8000
+ enable_profile: False
+
+RecPreProcess:
+ transform_ops:
+ - ResizeImage:
+ size: 224
+ - NormalizeImage:
+ scale: 0.00392157
+ mean: [0.485, 0.456, 0.406]
+ std: [0.229, 0.224, 0.225]
+ order: ''
+ - ToCHWImage:
+
+RecPostProcess: null
+
+# indexing engine config
+IndexProcess:
+ index_method: "HNSW32" # supported: HNSW32, IVF, Flat
+ image_root: ""
+ index_dir: "./images/index"
+ data_file: "./images/motorcyclebike_label_all_02.txt"
+ index_operation: "new" # suported: "append", "remove", "new"
+ delimiter: "\t"
+ dist_type: "IP"
+ embedding_size: 512
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/det_preprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/det_preprocess.py
new file mode 100644
index 000000000..65db32dc3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/det_preprocess.py
@@ -0,0 +1,216 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import numpy as np
+
+
+def decode_image(im_file, im_info):
+ """read rgb image
+ Args:
+ im_file (str|np.ndarray): input can be image path or np.ndarray
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ if isinstance(im_file, str):
+ with open(im_file, 'rb') as f:
+ im_read = f.read()
+ data = np.frombuffer(im_read, dtype='uint8')
+ im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+ else:
+ im = im_file
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ im_info['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ return im, im_info
+
+
+class DetResize(object):
+ """resize image by target_size and max_size
+ Args:
+ target_size (int): the target size of image
+ keep_ratio (bool): whether keep_ratio or not, default true
+ interp (int): method of resize
+ """
+
+ def __init__(
+ self,
+ target_size,
+ keep_ratio=True,
+ interp=cv2.INTER_LINEAR, ):
+ if isinstance(target_size, int):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+ self.keep_ratio = keep_ratio
+ self.interp = interp
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ assert len(self.target_size) == 2
+ assert self.target_size[0] > 0 and self.target_size[1] > 0
+ im_channel = im.shape[2]
+ im_scale_y, im_scale_x = self.generate_scale(im)
+ # set image_shape
+ im_info['input_shape'][1] = int(im_scale_y * im.shape[0])
+ im_info['input_shape'][2] = int(im_scale_x * im.shape[1])
+ im = cv2.resize(
+ im,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ im_info['im_shape'] = np.array(im.shape[:2]).astype('float32')
+ im_info['scale_factor'] = np.array(
+ [im_scale_y, im_scale_x]).astype('float32')
+ return im, im_info
+
+ def generate_scale(self, im):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ Returns:
+ im_scale_x: the resize ratio of X
+ im_scale_y: the resize ratio of Y
+ """
+ origin_shape = im.shape[:2]
+ im_c = im.shape[2]
+ if self.keep_ratio:
+ im_size_min = np.min(origin_shape)
+ im_size_max = np.max(origin_shape)
+ target_size_min = np.min(self.target_size)
+ target_size_max = np.max(self.target_size)
+ im_scale = float(target_size_min) / float(im_size_min)
+ if np.round(im_scale * im_size_max) > target_size_max:
+ im_scale = float(target_size_max) / float(im_size_max)
+ im_scale_x = im_scale
+ im_scale_y = im_scale
+ else:
+ resize_h, resize_w = self.target_size
+ im_scale_y = resize_h / float(origin_shape[0])
+ im_scale_x = resize_w / float(origin_shape[1])
+ return im_scale_y, im_scale_x
+
+
+class DetNormalizeImage(object):
+ """normalize image
+ Args:
+ mean (list): im - mean
+ std (list): im / std
+ is_scale (bool): whether need im / 255
+ is_channel_first (bool): if True: image shape is CHW, else: HWC
+ """
+
+ def __init__(self, mean, std, is_scale=True):
+ self.mean = mean
+ self.std = std
+ self.is_scale = is_scale
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ im = im.astype(np.float32, copy=False)
+ mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
+ std = np.array(self.std)[np.newaxis, np.newaxis, :]
+ if self.is_scale:
+ im = im / 255.0
+ im -= mean
+ im /= std
+ return im, im_info
+
+
+class DetPermute(object):
+ """permute image
+ Args:
+ to_bgr (bool): whether convert RGB to BGR
+ channel_first (bool): whether convert HWC to CHW
+ """
+
+ def __init__(self, ):
+ super().__init__()
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+        # convert HWC to CHW
+        im = im.transpose((2, 0, 1)).copy()
+ return im, im_info
+
+
+class DetPadStride(object):
+ """ padding image for model with FPN , instead PadBatch(pad_to_stride, pad_gt) in original config
+ Args:
+ stride (bool): model with FPN need image shape % stride == 0
+ """
+
+ def __init__(self, stride=0):
+ self.coarsest_stride = stride
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ coarsest_stride = self.coarsest_stride
+ if coarsest_stride <= 0:
+ return im, im_info
+ im_c, im_h, im_w = im.shape
+ pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride)
+ pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride)
+ padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32)
+ padding_im[:, :im_h, :im_w] = im
+ return padding_im, im_info
+
+
+def det_preprocess(im, im_info, preprocess_ops):
+ for operator in preprocess_ops:
+        im, im_info = operator(im, im_info)
+ return im, im_info
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/postprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/postprocess.py
new file mode 100644
index 000000000..d26cbaa9a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/postprocess.py
@@ -0,0 +1,161 @@
+# copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import copy
+import shutil
+from functools import partial
+import importlib
+import numpy as np
+import paddle
+import paddle.nn.functional as F
+
+
+def build_postprocess(config):
+ if config is None:
+ return None
+
+ mod = importlib.import_module(__name__)
+ config = copy.deepcopy(config)
+
+ main_indicator = config.pop(
+ "main_indicator") if "main_indicator" in config else None
+ main_indicator = main_indicator if main_indicator else ""
+
+ func_list = []
+ for func in config:
+ func_list.append(getattr(mod, func)(**config[func]))
+ return PostProcesser(func_list, main_indicator)
+
+
+class PostProcesser(object):
+ def __init__(self, func_list, main_indicator="Topk"):
+ self.func_list = func_list
+ self.main_indicator = main_indicator
+
+ def __call__(self, x, image_file=None):
+ rtn = None
+ for func in self.func_list:
+ tmp = func(x, image_file)
+ if type(func).__name__ in self.main_indicator:
+ rtn = tmp
+ return rtn
+
+
+class Topk(object):
+ def __init__(self, topk=1, class_id_map_file=None):
+ assert isinstance(topk, (int, ))
+ self.class_id_map = self.parse_class_id_map(class_id_map_file)
+ self.topk = topk
+
+ def parse_class_id_map(self, class_id_map_file):
+ if class_id_map_file is None:
+ return None
+
+ if not os.path.exists(class_id_map_file):
+ print(
+ "Warning: If want to use your own label_dict, please input legal path!\nOtherwise label_names will be empty!"
+ )
+ return None
+
+ try:
+ class_id_map = {}
+ with open(class_id_map_file, "r") as fin:
+ lines = fin.readlines()
+ for line in lines:
+ partition = line.split("\n")[0].partition(" ")
+ class_id_map[int(partition[0])] = str(partition[-1])
+ except Exception as ex:
+ print(ex)
+ class_id_map = None
+ return class_id_map
+
+ def __call__(self, x, file_names=None, multilabel=False):
+ if file_names is not None:
+ assert x.shape[0] == len(file_names)
+ y = []
+ for idx, probs in enumerate(x):
+ index = probs.argsort(axis=0)[-self.topk:][::-1].astype(
+ "int32") if not multilabel else np.where(
+ probs >= 0.5)[0].astype("int32")
+ clas_id_list = []
+ score_list = []
+ label_name_list = []
+ for i in index:
+ clas_id_list.append(i.item())
+ score_list.append(probs[i].item())
+ if self.class_id_map is not None:
+ label_name_list.append(self.class_id_map[i.item()])
+ result = {
+ "class_ids": clas_id_list,
+ "scores": np.around(
+ score_list, decimals=5).tolist(),
+ }
+ if file_names is not None:
+ result["file_name"] = file_names[idx]
+ if label_name_list is not None:
+ result["label_names"] = label_name_list
+ y.append(result)
+ return y
+
+
+class MultiLabelTopk(Topk):
+ def __init__(self, topk=1, class_id_map_file=None):
+        super().__init__(topk, class_id_map_file)
+
+ def __call__(self, x, file_names=None):
+ return super().__call__(x, file_names, multilabel=True)
+
+
+class SavePreLabel(object):
+ def __init__(self, save_dir):
+ if save_dir is None:
+ raise Exception(
+ "Please specify save_dir if SavePreLabel specified.")
+ self.save_dir = partial(os.path.join, save_dir)
+
+ def __call__(self, x, file_names=None):
+ if file_names is None:
+ return
+ assert x.shape[0] == len(file_names)
+ for idx, probs in enumerate(x):
+ index = probs.argsort(axis=0)[-1].astype("int32")
+ self.save(index, file_names[idx])
+
+ def save(self, id, image_file):
+ output_dir = self.save_dir(str(id))
+ os.makedirs(output_dir, exist_ok=True)
+ shutil.copy(image_file, output_dir)
+
+
+class Binarize(object):
+ def __init__(self, method="round"):
+ self.method = method
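+        # bit weights for packing each group of 8 binary dims into one byte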
+ self.unit = np.array([[128, 64, 32, 16, 8, 4, 2, 1]]).T
+
+ def __call__(self, x, file_names=None):
+ if self.method == "round":
+ x = np.round(x + 1).astype("uint8") - 1
+
+ if self.method == "sign":
+ x = ((np.sign(x) + 1) / 2).astype("uint8")
+
+ embedding_size = x.shape[1]
+        assert embedding_size % 8 == 0, "The binary index only supports vectors whose size is a multiple of 8"
+
+ byte = np.zeros([x.shape[0], embedding_size // 8], dtype=np.uint8)
+ for i in range(embedding_size // 8):
+ byte[:, i:i + 1] = np.dot(x[:, i * 8:(i + 1) * 8], self.unit)
+
+ return byte
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_cls.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_cls.py
new file mode 100644
index 000000000..cdeb32e48
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_cls.py
@@ -0,0 +1,140 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import numpy as np
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from python.preprocess import create_operators
+from python.postprocess import build_postprocess
+
+
+class ClsPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"])
+
+ self.preprocess_ops = []
+ self.postprocess = None
+ if "PreProcess" in config:
+ if "transform_ops" in config["PreProcess"]:
+ self.preprocess_ops = create_operators(config["PreProcess"][
+ "transform_ops"])
+ if "PostProcess" in config:
+ self.postprocess = build_postprocess(config["PostProcess"])
+
+        # benchmark hooks for the whole_chain project that tests each Paddle repo
+ self.benchmark = config["Global"].get("benchmark", False)
+ if self.benchmark:
+ import auto_log
+ pid = os.getpid()
+ self.auto_logger = auto_log.AutoLogger(
+ model_name=config["Global"].get("model_name", "cls"),
+ model_precision='fp16'
+ if config["Global"]["use_fp16"] else 'fp32',
+ batch_size=config["Global"].get("batch_size", 1),
+ data_shape=[3, 224, 224],
+ save_path=config["Global"].get("save_log_path",
+ "./auto_log.log"),
+ inference_config=self.config,
+ pids=pid,
+ process_name=None,
+ gpu_ids=None,
+ time_keys=[
+ 'preprocess_time', 'inference_time', 'postprocess_time'
+ ],
+ warmup=2)
+
+ def predict(self, images):
+ input_names = self.paddle_predictor.get_input_names()
+ input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
+
+ output_names = self.paddle_predictor.get_output_names()
+ output_tensor = self.paddle_predictor.get_output_handle(output_names[
+ 0])
+ if self.benchmark:
+ self.auto_logger.times.start()
+ if not isinstance(images, (list, )):
+ images = [images]
+ for idx in range(len(images)):
+ for ops in self.preprocess_ops:
+ images[idx] = ops(images[idx])
+ image = np.array(images)
+ if self.benchmark:
+ self.auto_logger.times.stamp()
+
+ input_tensor.copy_from_cpu(image)
+ self.paddle_predictor.run()
+ batch_output = output_tensor.copy_to_cpu()
+ if self.benchmark:
+ self.auto_logger.times.stamp()
+ if self.postprocess is not None:
+ batch_output = self.postprocess(batch_output)
+ if self.benchmark:
+ self.auto_logger.times.end(stamp=True)
+ return batch_output
+
+
+def main(config):
+ cls_predictor = ClsPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ batch_imgs = []
+ batch_names = []
+ cnt = 0
+ for idx, img_path in enumerate(image_list):
+ img = cv2.imread(img_path)
+ if img is None:
+ logger.warning(
+ "Image file failed to read and has been skipped. The path: {}".
+ format(img_path))
+ else:
+ img = img[:, :, ::-1]
+ batch_imgs.append(img)
+ img_name = os.path.basename(img_path)
+ batch_names.append(img_name)
+ cnt += 1
+
+ if cnt % config["Global"]["batch_size"] == 0 or (idx + 1
+ ) == len(image_list):
+ if len(batch_imgs) == 0:
+ continue
+ batch_results = cls_predictor.predict(batch_imgs)
+ for number, result_dict in enumerate(batch_results):
+ filename = batch_names[number]
+ clas_ids = result_dict["class_ids"]
+ scores_str = "[{}]".format(", ".join("{:.2f}".format(
+ r) for r in result_dict["scores"]))
+ label_names = result_dict["label_names"]
+ print("{}:\tclass id(s): {}, score(s): {}, label_name(s): {}".
+ format(filename, clas_ids, scores_str, label_names))
+ batch_imgs = []
+ batch_names = []
+ if cls_predictor.benchmark:
+ cls_predictor.auto_logger.report()
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_det.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_det.py
new file mode 100644
index 000000000..0b9c25a5a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_det.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from det_preprocess import det_preprocess
+from preprocess import create_operators
+from utils.draw_bbox import draw_bbox_results
+
+import os
+import argparse
+import time
+import yaml
+import ast
+from functools import reduce
+import cv2
+import numpy as np
+import paddle
+import requests
+import base64
+import json
+
+
+class DetPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"],
+ config["Global"]["det_inference_model_dir"])
+
+ self.preprocess_ops = create_operators(config["DetPreProcess"][
+ "transform_ops"])
+ self.config = config
+
+ def preprocess(self, img):
+        im_info = {
+            'scale_factor': np.array(
+                [1., 1.], dtype=np.float32),
+            'im_shape': np.array(
+                img.shape[:2], dtype=np.float32),
+            'input_shape': self.config["Global"]["image_shape"],
+        }
+
+ im, im_info = det_preprocess(img, im_info, self.preprocess_ops)
+ inputs = self.create_inputs(im, im_info)
+ return inputs
+
+ def create_inputs(self, im, im_info):
+ """generate input for different model type
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ model_arch (str): model type
+ Returns:
+ inputs (dict): input of model
+ """
+ inputs = {}
+ inputs['image'] = np.array((im, )).astype('float32')
+ inputs['im_shape'] = np.array(
+ (im_info['im_shape'], )).astype('float32')
+ inputs['scale_factor'] = np.array(
+ (im_info['scale_factor'], )).astype('float32')
+ return inputs
+
+ def parse_det_results(self, pred, threshold, label_list):
+ max_det_results = self.config["Global"]["max_det_results"]
+ keep_indexes = pred[:, 1].argsort()[::-1][:max_det_results]
+ results = []
+ for idx in keep_indexes:
+ single_res = pred[idx]
+ class_id = int(single_res[0])
+ score = single_res[1]
+ bbox = single_res[2:]
+ if score < threshold:
+ continue
+ label_name = label_list[class_id]
+ results.append({
+ "bbox": bbox,
+ "rec_docs": "background",
+ "rec_scores": score,
+ })
+ return results
+
+ def predict(self, image, threshold=0.5, run_benchmark=False):
+ '''
+ Args:
+            image (str/np.ndarray): path of image / np.ndarray read by cv2
+            threshold (float): score threshold for predicted boxes
+        Returns:
+            results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of boxes,
+                            matrix element:[class, score, x_min, y_min, x_max, y_max]
+ MaskRCNN's results include 'masks': np.ndarray:
+ shape: [N, im_h, im_w]
+ '''
+        inputs = self.preprocess(image)
+        np_boxes = None
+        input_names = self.paddle_predictor.get_input_names()
+        for i in range(len(input_names)):
+            input_tensor = self.paddle_predictor.get_input_handle(input_names[
+                i])
+            input_tensor.copy_from_cpu(inputs[input_names[i]])
+        t1 = time.time()
+        self.paddle_predictor.run()
+ output_names = self.paddle_predictor.get_output_names()
+ boxes_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+
+ np_boxes = boxes_tensor.copy_to_cpu()
+ t2 = time.time()
+
+ print("Inference: {} ms per batch image".format((t2 - t1) * 1000.0))
+
+ # do not perform postprocess in benchmark mode
+ results = []
+ if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
+            print('[WARNING] No object detected.')
+ results = np.array([])
+ else:
+ results = np_boxes
+
+ results = self.parse_det_results(results,
+ self.config["Global"]["threshold"],
+ self.config["Global"]["labe_list"])
+ return results
+
+
+def main(config):
+ det_predictor = DetPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ assert config["Global"]["batch_size"] == 1
+ for idx, image_file in enumerate(image_list):
+ img = cv2.imread(image_file)[:, :, ::-1]
+ output = det_predictor.predict(img)
+ print(output)
+ draw_bbox_results(img, output, image_file)
+
+ return image_file,output
+
+def cv2_to_base64_img(img):
+ data = cv2.imencode('.jpg', img)[1]
+    return base64.b64encode(data.tobytes()).decode('utf8')
+
+def solve_output(output,image_file):
+ print(image_file)
+ img = cv2.imread(image_file)
+
+ for bbox in output:
+ left,top,right,bottom = int(bbox["bbox"][0]),int(bbox["bbox"][1]),int(bbox["bbox"][2]),int(bbox["bbox"][3])
+ print(left,top,right,bottom)
+ img_crop = img[top:bottom,left:right]
+ url = "http://123.157.241.94:36807/ppyolo_mbv3/prediction"
+ img2 = {"key": ["image"], "value": [cv2_to_base64_img(img_crop)]}
+ r = requests.post(url=url,data=json.dumps(img2), timeout=5)
+ r = r.json()
+ print(r)
+ result = eval(r['value'][0])[0]
+ cv2.putText(img,str(round(float(result["scores"][0]),2)),(left,top+30), cv2.FONT_HERSHEY_SIMPLEX,1.2,(0,255,0),2)
+ cv2.putText(img,str(result["label_names"][0]),(left,top+60), cv2.FONT_HERSHEY_SIMPLEX,1.2,(0,255,0),2)
+ cv2.rectangle(img,(left ,top),(right,bottom), (0, 0, 255), 2)
+ cv2.imwrite("./output/ppyolo_result" + image_file[image_file.rfind("/"):],img)
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ image_file,output = main(config)
+ #solve_output(output,image_file)
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_rec.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_rec.py
new file mode 100644
index 000000000..d41c513f8
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_rec.py
@@ -0,0 +1,105 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import numpy as np
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from preprocess import create_operators
+from postprocess import build_postprocess
+
+
+class RecPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"],
+ config["Global"]["rec_inference_model_dir"])
+ self.preprocess_ops = create_operators(config["RecPreProcess"][
+ "transform_ops"])
+ self.postprocess = build_postprocess(config["RecPostProcess"])
+
+ def predict(self, images, feature_normalize=True):
+ input_names = self.paddle_predictor.get_input_names()
+ input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
+
+ output_names = self.paddle_predictor.get_output_names()
+ output_tensor = self.paddle_predictor.get_output_handle(output_names[
+ 0])
+
+ if not isinstance(images, (list, )):
+ images = [images]
+ for idx in range(len(images)):
+ for ops in self.preprocess_ops:
+ images[idx] = ops(images[idx])
+ image = np.array(images)
+
+ input_tensor.copy_from_cpu(image)
+ self.paddle_predictor.run()
+ batch_output = output_tensor.copy_to_cpu()
+
+ if feature_normalize:
+ feas_norm = np.sqrt(
+ np.sum(np.square(batch_output), axis=1, keepdims=True))
+ batch_output = np.divide(batch_output, feas_norm)
+
+ if self.postprocess is not None:
+ batch_output = self.postprocess(batch_output)
+ return batch_output
+
+
+def main(config):
+ rec_predictor = RecPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ batch_imgs = []
+ batch_names = []
+ cnt = 0
+ for idx, img_path in enumerate(image_list):
+ img = cv2.imread(img_path)
+ if img is None:
+ logger.warning(
+ "Image file failed to read and has been skipped. The path: {}".
+ format(img_path))
+ else:
+ img = img[:, :, ::-1]
+ batch_imgs.append(img)
+ img_name = os.path.basename(img_path)
+ batch_names.append(img_name)
+ cnt += 1
+
+ if cnt % config["Global"]["batch_size"] == 0 or (idx + 1) == len(image_list):
+ if len(batch_imgs) == 0:
+ continue
+
+ batch_results = rec_predictor.predict(batch_imgs)
+ for number, result_dict in enumerate(batch_results):
+ filename = batch_names[number]
+ print("{}:\t{}".format(filename, result_dict))
+ batch_imgs = []
+ batch_names = []
+
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_system.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_system.py
new file mode 100644
index 000000000..fb2d66a53
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/predict_system.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import copy
+import cv2
+import numpy as np
+import faiss
+import pickle
+
+from python.predict_rec import RecPredictor
+from python.predict_det import DetPredictor
+
+from utils import logger
+from utils import config
+from utils.get_image_list import get_image_list
+from utils.draw_bbox import draw_bbox_results
+
+
+class SystemPredictor(object):
+ def __init__(self, config):
+
+ self.config = config
+ self.rec_predictor = RecPredictor(config)
+ self.det_predictor = DetPredictor(config)
+
+ assert 'IndexProcess' in config.keys(), "Index config not found ... "
+ self.return_k = self.config['IndexProcess']['return_k']
+
+ index_dir = self.config["IndexProcess"]["index_dir"]
+ assert os.path.exists(os.path.join(
+ index_dir, "vector.index")), "vector.index not found ..."
+ assert os.path.exists(os.path.join(
+ index_dir, "id_map.pkl")), "id_map.pkl not found ... "
+
+ if config['IndexProcess'].get("binary_index", False):
+ self.Searcher = faiss.read_index_binary(
+ os.path.join(index_dir, "vector.index"))
+ else:
+ self.Searcher = faiss.read_index(
+ os.path.join(index_dir, "vector.index"))
+
+ with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
+ self.id_map = pickle.load(fd)
+
+ def append_self(self, results, shape):
+ results.append({
+ "class_id": 0,
+ "score": 1.0,
+ "bbox":
+ np.array([0, 0, shape[1], shape[0]]), # xmin, ymin, xmax, ymax
+ "label_name": "foreground",
+ })
+ return results
+
+ def nms_to_rec_results(self, results, thresh=0.1):
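+        # greedy NMS over retrieval results: repeatedly keep the highest-score
+        # box and drop any remaining box whose IoU with it exceeds `thresh`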
+ filtered_results = []
+ x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
+ y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
+ x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
+ y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
+ scores = np.array([r["rec_scores"] for r in results])
+
+ areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ order = scores.argsort()[::-1]
+ while order.size > 0:
+ i = order[0]
+ xx1 = np.maximum(x1[i], x1[order[1:]])
+ yy1 = np.maximum(y1[i], y1[order[1:]])
+ xx2 = np.minimum(x2[i], x2[order[1:]])
+ yy2 = np.minimum(y2[i], y2[order[1:]])
+
+ w = np.maximum(0.0, xx2 - xx1 + 1)
+ h = np.maximum(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (areas[i] + areas[order[1:]] - inter)
+ inds = np.where(ovr <= thresh)[0]
+ order = order[inds + 1]
+ filtered_results.append(results[i])
+
+ return filtered_results
+
+ def predict(self, img):
+ output = []
+ # st1: get all detection results
+ results = self.det_predictor.predict(img)
+
+ # st2: add the whole image for recognition to improve recall
+ results = self.append_self(results, img.shape)
+
+ # st3: recognition process, use score_thres to ensure accuracy
+ for result in results:
+ preds = {}
+ xmin, ymin, xmax, ymax = result["bbox"].astype("int")
+ crop_img = img[ymin:ymax, xmin:xmax, :].copy()
+ rec_results = self.rec_predictor.predict(crop_img)
+ preds["bbox"] = [xmin, ymin, xmax, ymax]
+ scores, docs = self.Searcher.search(rec_results, self.return_k)
+
+            # only the top-1 search result is kept in the final output
+ if scores[0][0] >= self.config["IndexProcess"]["score_thres"]:
+ preds["rec_docs"] = self.id_map[docs[0][0]].split()[1]
+ preds["rec_scores"] = scores[0][0]
+ output.append(preds)
+
+        # st4: apply nms to the final results to avoid duplicate detections
+ output = self.nms_to_rec_results(
+ output, self.config["Global"]["rec_nms_thresold"])
+
+ return output
+
+
+def main(config):
+ system_predictor = SystemPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ assert config["Global"]["batch_size"] == 1
+ for idx, image_file in enumerate(image_list):
+ img = cv2.imread(image_file)[:, :, ::-1]
+ output = system_predictor.predict(img)
+ print(image_file)
+ draw_bbox_results(img, output, image_file)
+ print(output)
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/preprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/preprocess.py
new file mode 100644
index 000000000..c4b6bca30
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/build_gallery/preprocess.py
@@ -0,0 +1,337 @@
+"""
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from functools import partial
+import six
+import math
+import random
+import cv2
+import numpy as np
+import importlib
+from PIL import Image
+
+from det_preprocess import DetNormalizeImage, DetPadStride, DetPermute, DetResize
+# logger is used by UnifiedResize's unknown-backend warning below; the sibling
+# modules in this directory import it the same way.
+from utils import logger
+
+
+def create_operators(params):
+ """
+ create operators based on the config
+
+ Args:
+ params(list): a dict list, used to create some operators
+ """
+ assert isinstance(params, list), ('operator config should be a list')
+ mod = importlib.import_module(__name__)
+ ops = []
+ for operator in params:
+ assert isinstance(operator,
+ dict) and len(operator) == 1, "yaml format error"
+ op_name = list(operator)[0]
+ param = {} if operator[op_name] is None else operator[op_name]
+ op = getattr(mod, op_name)(**param)
+ ops.append(op)
+
+ return ops
+
+
+class UnifiedResize(object):
+ def __init__(self, interpolation=None, backend="cv2"):
+ _cv2_interp_from_str = {
+ 'nearest': cv2.INTER_NEAREST,
+ 'bilinear': cv2.INTER_LINEAR,
+ 'area': cv2.INTER_AREA,
+ 'bicubic': cv2.INTER_CUBIC,
+ 'lanczos': cv2.INTER_LANCZOS4
+ }
+ _pil_interp_from_str = {
+ 'nearest': Image.NEAREST,
+ 'bilinear': Image.BILINEAR,
+ 'bicubic': Image.BICUBIC,
+ 'box': Image.BOX,
+ 'lanczos': Image.LANCZOS,
+ 'hamming': Image.HAMMING
+ }
+
+ def _pil_resize(src, size, resample):
+ pil_img = Image.fromarray(src)
+ pil_img = pil_img.resize(size, resample)
+ return np.asarray(pil_img)
+
+ if backend.lower() == "cv2":
+ if isinstance(interpolation, str):
+ interpolation = _cv2_interp_from_str[interpolation.lower()]
+ # compatible with opencv < version 4.4.0
+ elif interpolation is None:
+ interpolation = cv2.INTER_LINEAR
+ self.resize_func = partial(cv2.resize, interpolation=interpolation)
+ elif backend.lower() == "pil":
+ if isinstance(interpolation, str):
+ interpolation = _pil_interp_from_str[interpolation.lower()]
+ self.resize_func = partial(_pil_resize, resample=interpolation)
+ else:
+            logger.warning(
+                f"The backend of Resize only supports \"cv2\" or \"PIL\". \"{backend}\" is unavailable. Using \"cv2\" instead."
+            )
+ self.resize_func = cv2.resize
+
+ def __call__(self, src, size):
+ return self.resize_func(src, size)
+
+
+class OperatorParamError(ValueError):
+ """ OperatorParamError
+ """
+ pass
+
+
+class DecodeImage(object):
+ """ decode image """
+
+ def __init__(self, to_rgb=True, to_np=False, channel_first=False):
+ self.to_rgb = to_rgb
+        self.to_np = to_np  # to numpy (accepted for config compatibility; unused below)
+        self.channel_first = channel_first  # if True, transpose HWC to CHW
+
+ def __call__(self, img):
+ if six.PY2:
+ assert type(img) is str and len(
+ img) > 0, "invalid input 'img' in DecodeImage"
+ else:
+ assert type(img) is bytes and len(
+ img) > 0, "invalid input 'img' in DecodeImage"
+ data = np.frombuffer(img, dtype='uint8')
+ img = cv2.imdecode(data, 1)
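+        # cv2.imdecode returns BGR; the slice below flips channels to RGB.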
+ if self.to_rgb:
+ assert img.shape[2] == 3, 'invalid shape of image[%s]' % (
+ img.shape)
+ img = img[:, :, ::-1]
+
+ if self.channel_first:
+ img = img.transpose((2, 0, 1))
+
+ return img
+
+
+class ResizeImage(object):
+ """ resize image """
+
+ def __init__(self,
+ size=None,
+ resize_short=None,
+ interpolation=None,
+ backend="cv2"):
+ if resize_short is not None and resize_short > 0:
+ self.resize_short = resize_short
+ self.w = None
+ self.h = None
+ elif size is not None:
+ self.resize_short = None
+ self.w = size if type(size) is int else size[0]
+ self.h = size if type(size) is int else size[1]
+ else:
+            raise OperatorParamError("invalid params for ResizeImage: "
+                                     "both 'size' and 'resize_short' are None")
+
+ self._resize_func = UnifiedResize(
+ interpolation=interpolation, backend=backend)
+
+ def __call__(self, img):
+ img_h, img_w = img.shape[:2]
+ if self.resize_short is not None:
+ percent = float(self.resize_short) / min(img_w, img_h)
+ w = int(round(img_w * percent))
+ h = int(round(img_h * percent))
+ else:
+ w = self.w
+ h = self.h
+ return self._resize_func(img, (w, h))
+
+
+class CropImage(object):
+ """ crop image """
+
+ def __init__(self, size):
+ if type(size) is int:
+ self.size = (size, size)
+ else:
+ self.size = size # (h, w)
+
+ def __call__(self, img):
+ w, h = self.size
+ img_h, img_w = img.shape[:2]
+
+ if img_h < h or img_w < w:
+            raise Exception(
+                f"The crop size({h}, {w}) of CropImage must not be greater than the image size({img_h}, {img_w}). Please check the original image size and the size of ResizeImage if used."
+            )
+
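+        # Center crop: take the centered (h, w) window from the image.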
+ w_start = (img_w - w) // 2
+ h_start = (img_h - h) // 2
+
+ w_end = w_start + w
+ h_end = h_start + h
+ return img[h_start:h_end, w_start:w_end, :]
+
+
+class RandCropImage(object):
+ """ random crop image """
+
+ def __init__(self,
+ size,
+ scale=None,
+ ratio=None,
+ interpolation=None,
+ backend="cv2"):
+ if type(size) is int:
+ self.size = (size, size) # (h, w)
+ else:
+ self.size = size
+
+ self.scale = [0.08, 1.0] if scale is None else scale
+ self.ratio = [3. / 4., 4. / 3.] if ratio is None else ratio
+
+ self._resize_func = UnifiedResize(
+ interpolation=interpolation, backend=backend)
+
+ def __call__(self, img):
+ size = self.size
+ scale = self.scale
+ ratio = self.ratio
+
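+        # Sample an aspect ratio from `ratio` and a crop area that is a random
+        # fraction (drawn from `scale`, clipped by `bound` so the crop fits
+        # inside the image) of the source area; then crop and resize to `size`.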
+ aspect_ratio = math.sqrt(random.uniform(*ratio))
+ w = 1. * aspect_ratio
+ h = 1. / aspect_ratio
+
+ img_h, img_w = img.shape[:2]
+
+ bound = min((float(img_w) / img_h) / (w**2),
+ (float(img_h) / img_w) / (h**2))
+ scale_max = min(scale[1], bound)
+ scale_min = min(scale[0], bound)
+
+ target_area = img_w * img_h * random.uniform(scale_min, scale_max)
+ target_size = math.sqrt(target_area)
+ w = int(target_size * w)
+ h = int(target_size * h)
+
+ i = random.randint(0, img_w - w)
+ j = random.randint(0, img_h - h)
+
+ img = img[j:j + h, i:i + w, :]
+
+ return self._resize_func(img, size)
+
+
+class RandFlipImage(object):
+ """ random flip image
+ flip_code:
+ 1: Flipped Horizontally
+ 0: Flipped Vertically
+ -1: Flipped Horizontally & Vertically
+ """
+
+ def __init__(self, flip_code=1):
+ assert flip_code in [-1, 0, 1
+ ], "flip_code should be a value in [-1, 0, 1]"
+ self.flip_code = flip_code
+
+ def __call__(self, img):
+ if random.randint(0, 1) == 1:
+ return cv2.flip(img, self.flip_code)
+ else:
+ return img
+
+
+class AutoAugment(object):
+    def __init__(self):
+        # NOTE: ImageNetPolicy is not defined in this file; it must be
+        # provided by PaddleClas' auto-augment ops for this operator to work.
+        self.policy = ImageNetPolicy()
+
+    def __call__(self, img):
+        from PIL import Image
+        img = np.ascontiguousarray(img)
+        img = Image.fromarray(img)
+        img = self.policy(img)
+        img = np.asarray(img)
+        return img
+
+
+class NormalizeImage(object):
+    """ normalize image: subtract the mean and divide by the std
+    """
+
+ def __init__(self,
+ scale=None,
+ mean=None,
+ std=None,
+ order='chw',
+ output_fp16=False,
+ channel_num=3):
+ if isinstance(scale, str):
+ scale = eval(scale)
+ assert channel_num in [
+ 3, 4
+ ], "channel number of input image should be set to 3 or 4."
+ self.channel_num = channel_num
+ self.output_dtype = 'float16' if output_fp16 else 'float32'
+ self.scale = np.float32(scale if scale is not None else 1.0 / 255.0)
+ self.order = order
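+        # Defaults below are the standard ImageNet channel statistics (RGB).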
+ mean = mean if mean is not None else [0.485, 0.456, 0.406]
+ std = std if std is not None else [0.229, 0.224, 0.225]
+
+ shape = (3, 1, 1) if self.order == 'chw' else (1, 1, 3)
+ self.mean = np.array(mean).reshape(shape).astype('float32')
+ self.std = np.array(std).reshape(shape).astype('float32')
+
+ def __call__(self, img):
+ from PIL import Image
+ if isinstance(img, Image.Image):
+ img = np.array(img)
+
+ assert isinstance(img,
+ np.ndarray), "invalid input 'img' in NormalizeImage"
+
+ img = (img.astype('float32') * self.scale - self.mean) / self.std
+
+ if self.channel_num == 4:
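+            # Pad a zero-filled fourth channel; some inference backends
+            # expect 4-channel input layouts.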
+ img_h = img.shape[1] if self.order == 'chw' else img.shape[0]
+ img_w = img.shape[2] if self.order == 'chw' else img.shape[1]
+ pad_zeros = np.zeros(
+ (1, img_h, img_w)) if self.order == 'chw' else np.zeros(
+ (img_h, img_w, 1))
+ img = (np.concatenate(
+ (img, pad_zeros), axis=0)
+ if self.order == 'chw' else np.concatenate(
+ (img, pad_zeros), axis=2))
+ return img.astype(self.output_dtype)
+
+
+class ToCHWImage(object):
+ """ convert hwc image to chw image
+ """
+
+ def __init__(self):
+ pass
+
+ def __call__(self, img):
+ from PIL import Image
+ if isinstance(img, Image.Image):
+ img = np.array(img)
+
+ return img.transpose((2, 0, 1))
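+
+
+if __name__ == "__main__":
+    # Minimal usage sketch, not part of the original pipeline: "demo.jpg" is a
+    # hypothetical file name and the sizes are illustrative, not taken from
+    # the project's configs. It chains the operators via create_operators the
+    # way a gallery image is preprocessed before feature extraction.
+    ops = create_operators([
+        {"DecodeImage": {"to_rgb": True}},
+        {"ResizeImage": {"resize_short": 256}},
+        {"CropImage": {"size": 224}},
+        {"NormalizeImage": {"scale": "1.0/255.0", "order": ""}},
+        {"ToCHWImage": None},
+    ])
+    with open("demo.jpg", "rb") as f:
+        img = f.read()
+    for op in ops:
+        img = op(img)
+    print(img.shape, img.dtype)  # expected: (3, 224, 224) float32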
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log
new file mode 100644
index 000000000..7cbcaf186
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log
@@ -0,0 +1,2252 @@
+WARNING 2022-02-17 11:39:15,975 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:39:15,995 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:39:15,995 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:39:15,995 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:39:15,995 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:39:15,995 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:39:15,995 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:39:15,995 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:39:16,008 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:39:16,009 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:39:16,009 [dag.py:662] rec
+INFO 2022-02-17 11:39:16,009 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:39:16,009 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:39:16,009 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:39:16,009 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:39:16,009 [dag.py:686] - rec
+INFO 2022-02-17 11:39:16,009 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:39:16,009 [dag.py:684] [rec]
+INFO 2022-02-17 11:39:16,009 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:39:16,075 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:39:16,084 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:39:16,085 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:39:16,089 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:39:16,090 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:39:16,092 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:39:16,099 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:39:16,099 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:39:16,099 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:39:17,129 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:39:19,011 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:39:25,739 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069165.7397573
+INFO 2022-02-17 11:39:25,740 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069165.7409468
+INFO 2022-02-17 11:39:25,741 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2022-02-17 11:39:28,512 [operator.py:1109] (data_id=0 log_id=0) [rec|0] Failed to postprocess: postprocess() takes 4 positional arguments but 5 were given
+Traceback (most recent call last):
+ File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1105, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: postprocess() takes 4 positional arguments but 5 were given
+ERROR 2022-02-17 11:39:28,515 [dag.py:410] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [rec|0] Failed to postprocess: postprocess() takes 4 positional arguments but 5 were given
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:40:11,401 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:40:11,401 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:40:11,401 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:40:11,401 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:40:11,402 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:40:11,402 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:40:11,402 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:40:11,414 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:40:11,414 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:40:11,415 [dag.py:662] rec
+INFO 2022-02-17 11:40:11,415 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:40:11,415 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:40:11,415 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:40:11,415 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:40:11,415 [dag.py:686] - rec
+INFO 2022-02-17 11:40:11,415 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:40:11,415 [dag.py:684] [rec]
+INFO 2022-02-17 11:40:11,415 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:40:11,430 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:40:11,439 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:40:11,440 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:40:11,468 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:40:11,469 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:40:11,471 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:40:11,479 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:40:11,480 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:40:11,480 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:40:12,540 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:40:14,355 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:40:26,642 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069226.6426268
+INFO 2022-02-17 11:40:26,643 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069226.643837
+INFO 2022-02-17 11:40:26,644 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 11:40:29,377 [dag.py:405] (data_id=0 log_id=0) Succ predict
+INFO 2022-02-17 11:41:41,337 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069301.3372242
+INFO 2022-02-17 11:41:41,337 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069301.3376215
+INFO 2022-02-17 11:41:41,337 [dag.py:369] (data_id=1 log_id=0) Succ Generate ID
+INFO 2022-02-17 11:41:41,351 [dag.py:405] (data_id=1 log_id=0) Succ predict
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:43:51,578 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:43:51,579 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:43:51,579 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:43:51,579 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:43:51,579 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:43:51,579 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:43:51,579 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:43:51,591 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:43:51,592 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:43:51,592 [dag.py:662] rec
+INFO 2022-02-17 11:43:51,592 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:43:51,592 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:43:51,592 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:43:51,592 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:43:51,593 [dag.py:686] - rec
+INFO 2022-02-17 11:43:51,593 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:43:51,593 [dag.py:684] [rec]
+INFO 2022-02-17 11:43:51,593 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:43:51,608 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:43:51,617 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:43:51,618 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:43:51,622 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:43:51,623 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:43:51,625 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:43:51,632 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:43:51,632 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:43:51,632 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:43:52,682 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:43:54,499 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:44:01,419 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069441.419735
+INFO 2022-02-17 11:44:01,421 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069441.4209738
+INFO 2022-02-17 11:44:01,421 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 11:44:04,142 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:45:00,971 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:45:00,972 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:45:00,983 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:45:00,983 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:45:00,984 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:45:00,984 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:45:00,984 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:45:00,984 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:45:00,984 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:45:00,996 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:45:00,997 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:45:00,997 [dag.py:662] rec
+INFO 2022-02-17 11:45:00,997 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:45:00,997 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:45:00,997 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:45:00,997 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:45:00,997 [dag.py:686] - rec
+INFO 2022-02-17 11:45:00,997 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:45:00,997 [dag.py:684] [rec]
+INFO 2022-02-17 11:45:00,997 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:45:01,012 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:45:01,022 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:45:01,022 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:45:01,026 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:45:01,027 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:45:01,029 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:45:01,036 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:45:01,037 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:45:01,037 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:45:02,091 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:45:03,902 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:45:06,447 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069506.4474986
+INFO 2022-02-17 11:45:06,448 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069506.4486632
+INFO 2022-02-17 11:45:06,449 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 11:45:09,237 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:49:47,902 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:49:47,903 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:49:47,978 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:49:47,979 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:49:47,979 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:49:47,979 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:49:47,979 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:49:47,979 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:49:47,979 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:49:47,991 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:49:47,992 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:49:47,992 [dag.py:662] rec
+INFO 2022-02-17 11:49:47,992 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:49:47,992 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:49:47,992 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:49:47,992 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:49:47,992 [dag.py:686] - rec
+INFO 2022-02-17 11:49:47,992 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:49:47,992 [dag.py:684] [rec]
+INFO 2022-02-17 11:49:47,992 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:49:48,007 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:49:48,017 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:49:48,017 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:49:48,021 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:49:48,022 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:49:48,024 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:49:48,030 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:49:48,030 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:49:48,030 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:49:49,065 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:49:50,923 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:49:51,725 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069791.7250278
+INFO 2022-02-17 11:49:51,726 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069791.7262204
+INFO 2022-02-17 11:49:51,726 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+WARNING 2022-02-17 11:50:16,675 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:50:16,676 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:50:16,677 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:50:16,693 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:50:16,694 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:50:16,694 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:50:16,694 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:50:16,694 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:50:16,694 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:50:16,694 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:50:16,706 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:50:16,707 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:50:16,707 [dag.py:662] rec
+INFO 2022-02-17 11:50:16,707 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:50:16,707 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:50:16,707 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:50:16,707 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:50:16,707 [dag.py:686] - rec
+INFO 2022-02-17 11:50:16,707 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:50:16,707 [dag.py:684] [rec]
+INFO 2022-02-17 11:50:16,707 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:50:16,783 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:50:16,792 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:50:16,793 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:50:16,797 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:50:16,798 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:50:16,800 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:50:16,807 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:50:16,807 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:50:16,807 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:50:17,851 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:50:19,691 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:50:25,635 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069825.635539
+INFO 2022-02-17 11:50:25,636 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069825.6367557
+INFO 2022-02-17 11:50:25,637 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:51:34,991 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:51:34,992 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 11:51:35,002 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 11:51:35,003 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:51:35,003 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 11:51:35,003 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 11:51:35,003 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 11:51:35,003 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 11:51:35,003 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 11:51:35,014 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 11:51:35,015 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 11:51:35,015 [dag.py:662] rec
+INFO 2022-02-17 11:51:35,015 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 11:51:35,015 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 11:51:35,015 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 11:51:35,015 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 11:51:35,015 [dag.py:686] - rec
+INFO 2022-02-17 11:51:35,015 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 11:51:35,015 [dag.py:684] [rec]
+INFO 2022-02-17 11:51:35,015 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 11:51:35,030 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 11:51:35,040 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 11:51:35,040 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 11:51:35,043 [dag.py:832] [DAG] start
+INFO 2022-02-17 11:51:35,044 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 11:51:35,046 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 11:51:35,052 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 11:51:35,053 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 11:51:35,053 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 11:51:36,063 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 11:51:37,907 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 11:51:40,088 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645069900.0879674
+INFO 2022-02-17 11:51:40,089 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645069900.0891907
+INFO 2022-02-17 11:51:40,089 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 11:51:42,801 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 13:48:08,096 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 13:48:08,168 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 13:48:08,168 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 13:48:08,168 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 13:48:08,168 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 13:48:08,168 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 13:48:08,169 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 13:48:08,170 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 13:48:08,170 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 13:48:08,181 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 13:48:08,182 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 13:48:08,182 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 13:48:08,182 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 13:48:08,182 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 13:48:08,182 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 13:48:08,182 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 13:48:08,195 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 13:48:08,196 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 13:48:08,196 [dag.py:662] rec
+INFO 2022-02-17 13:48:08,196 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 13:48:08,196 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 13:48:08,196 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 13:48:08,196 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 13:48:08,196 [dag.py:686] - rec
+INFO 2022-02-17 13:48:08,196 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 13:48:08,196 [dag.py:684] [rec]
+INFO 2022-02-17 13:48:08,196 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 13:48:08,213 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 13:48:08,222 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 13:48:08,223 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 13:48:08,227 [dag.py:832] [DAG] start
+INFO 2022-02-17 13:48:08,228 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 13:48:08,230 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 13:48:08,275 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 13:48:08,276 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 13:48:08,276 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 13:48:09,338 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 13:48:11,286 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645076891.2866788
+INFO 2022-02-17 13:48:11,287 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645076891.2878144
+INFO 2022-02-17 13:48:11,288 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 13:48:12,468 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 13:48:15,399 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:00:34,470 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:00:34,471 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:00:34,472 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:00:34,483 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 14:00:34,483 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:00:34,483 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 14:00:34,483 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 14:00:34,483 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 14:00:34,483 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 14:00:34,483 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 14:00:34,495 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 14:00:34,496 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 14:00:34,496 [dag.py:662] rec
+INFO 2022-02-17 14:00:34,496 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 14:00:34,496 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 14:00:34,496 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 14:00:34,496 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 14:00:34,497 [dag.py:686] - rec
+INFO 2022-02-17 14:00:34,497 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 14:00:34,497 [dag.py:684] [rec]
+INFO 2022-02-17 14:00:34,497 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 14:00:34,512 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 14:00:34,522 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 14:00:34,522 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 14:00:34,526 [dag.py:832] [DAG] start
+INFO 2022-02-17 14:00:34,527 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 14:00:34,529 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 14:00:34,536 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:00:34,536 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 14:00:34,536 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 14:00:35,542 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 14:00:38,036 [operator.py:1317] [rec|0] Succ init
+WARNING 2022-02-17 14:00:53,138 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:00:53,139 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:00:53,140 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:00:53,140 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:00:53,140 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:00:53,140 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:00:56,641 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645077656.6412687
+INFO 2022-02-17 14:00:56,642 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645077656.642419
+INFO 2022-02-17 14:00:56,642 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 14:00:59,372 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 14:09:35,769 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:09:35,770 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:09:35,771 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:09:35,783 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 2, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 14:09:35,783 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:09:35,783 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 14:09:35,783 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 14:09:35,783 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":2,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 14:09:35,783 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 14:09:35,783 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 14:09:35,797 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 14:09:35,798 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 14:09:35,798 [dag.py:662] rec
+INFO 2022-02-17 14:09:35,798 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 14:09:35,798 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 14:09:35,798 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 14:09:35,798 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 14:09:35,798 [dag.py:686] - rec
+INFO 2022-02-17 14:09:35,798 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 14:09:35,799 [dag.py:684] [rec]
+INFO 2022-02-17 14:09:35,799 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 14:09:35,816 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 14:09:35,826 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 14:09:35,827 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 14:09:35,832 [dag.py:832] [DAG] start
+INFO 2022-02-17 14:09:35,833 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 14:09:35,870 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 14:09:35,876 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:09:35,877 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 14:09:35,877 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 14:09:36,950 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 14:09:39,525 [operator.py:1317] [rec|0] Succ init
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:10:28,771 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:10:28,772 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:10:28,773 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:10:28,784 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 1, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 14:10:28,784 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:10:28,784 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 14:10:28,784 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 14:10:28,784 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":1,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 14:10:28,784 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 14:10:28,784 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 14:10:28,796 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 14:10:28,797 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 14:10:28,797 [dag.py:662] rec
+INFO 2022-02-17 14:10:28,797 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 14:10:28,797 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 14:10:28,797 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 14:10:28,797 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 14:10:28,797 [dag.py:686] - rec
+INFO 2022-02-17 14:10:28,798 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 14:10:28,798 [dag.py:684] [rec]
+INFO 2022-02-17 14:10:28,798 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 14:10:28,813 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 14:10:28,822 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 14:10:28,823 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 14:10:28,827 [dag.py:832] [DAG] start
+INFO 2022-02-17 14:10:28,827 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 14:10:28,870 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 14:10:28,873 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:10:28,873 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 14:10:28,873 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 14:10:29,826 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:False, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 14:10:32,335 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 14:10:33,070 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645078233.070603
+INFO 2022-02-17 14:10:33,071 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645078233.0718045
+INFO 2022-02-17 14:10:33,072 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 14:10:35,794 [dag.py:405] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-17 14:14:29,769 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:14:29,770 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:14:29,771 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:14:29,783 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 1, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 14:14:29,783 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:14:29,783 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 14:14:29,783 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 14:14:29,783 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":1,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 14:14:29,783 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 14:14:29,783 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 14:14:29,795 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 14:14:29,796 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 14:14:29,796 [dag.py:662] rec
+INFO 2022-02-17 14:14:29,796 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 14:14:29,796 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 14:14:29,796 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 14:14:29,796 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 14:14:29,796 [dag.py:686] - rec
+INFO 2022-02-17 14:14:29,796 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 14:14:29,796 [dag.py:684] [rec]
+INFO 2022-02-17 14:14:29,796 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 14:14:29,812 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 14:14:29,822 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 14:14:29,823 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 14:14:29,827 [dag.py:832] [DAG] start
+INFO 2022-02-17 14:14:29,827 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 14:14:29,829 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 14:14:29,837 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:14:29,837 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 14:14:29,837 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 14:14:30,832 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:False, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 14:14:33,327 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 14:14:39,181 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645078479.1810365
+INFO 2022-02-17 14:14:39,182 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645078479.1821644
+INFO 2022-02-17 14:14:39,182 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 14:14:41,917 [dag.py:405] (data_id=0 log_id=0) Succ predict
+INFO 2022-02-17 14:26:17,673 [loader.py:54] Loading faiss with AVX2 support.
+INFO 2022-02-17 14:26:17,673 [loader.py:58] Could not load library with AVX2 support due to:
+ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
+INFO 2022-02-17 14:26:17,673 [loader.py:64] Loading faiss.
+INFO 2022-02-17 14:26:17,693 [loader.py:66] Successfully loaded faiss.
+WARNING 2022-02-17 14:26:17,697 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:26:17,697 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-17 14:26:17,711 [operator.py:181] local_service_conf: {'model_config': './general_PPLCNet_x2_5_lite_v1.0_serving', 'device_type': 1, 'devices': '0', 'client_type': 'local_predictor', 'fetch_list': ['feature'], 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-17 14:26:17,711 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['feature'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:26:17,712 [operator.py:285] rec
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['feature']
+ client_config: ./general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-17 14:26:17,712 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-17 14:26:17,712 [pipeline_server.py:218]
+{
+ "worker_num":1,
+ "http_port":9315,
+ "rpc_port":9314,
+ "dag":{
+ "is_thread_op":false,
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "tracer":{
+ "interval_s":-1
+ },
+ "channel_recv_frist_arrive":false
+ },
+ "op":{
+ "rec":{
+ "concurrency":1,
+ "local_service_conf":{
+ "model_config":"./general_PPLCNet_x2_5_lite_v1.0_serving",
+ "device_type":1,
+ "devices":"0",
+ "client_type":"local_predictor",
+ "fetch_list":[
+ "feature"
+ ],
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "build_dag_each_worker":false
+}
+INFO 2022-02-17 14:26:17,712 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-17 14:26:17,712 [operator.py:308] Op(rec) use local rpc service at port: []
+INFO 2022-02-17 14:26:17,726 [dag.py:496] [DAG] Succ init
+INFO 2022-02-17 14:26:17,727 [dag.py:659] ================= USED OP =================
+INFO 2022-02-17 14:26:17,727 [dag.py:662] rec
+INFO 2022-02-17 14:26:17,727 [dag.py:663] -------------------------------------------
+INFO 2022-02-17 14:26:17,727 [dag.py:680] ================== DAG ====================
+INFO 2022-02-17 14:26:17,727 [dag.py:682] (VIEW 0)
+INFO 2022-02-17 14:26:17,727 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-17 14:26:17,727 [dag.py:686] - rec
+INFO 2022-02-17 14:26:17,727 [dag.py:682] (VIEW 1)
+INFO 2022-02-17 14:26:17,727 [dag.py:684] [rec]
+INFO 2022-02-17 14:26:17,727 [dag.py:687] -------------------------------------------
+INFO 2022-02-17 14:26:17,743 [dag.py:730] op:rec add input channel.
+INFO 2022-02-17 14:26:17,757 [dag.py:759] last op:rec add output channel
+INFO 2022-02-17 14:26:17,758 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-17 14:26:17,764 [dag.py:832] [DAG] start
+INFO 2022-02-17 14:26:17,765 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-17 14:26:17,767 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-17 14:26:17,777 [local_service_handler.py:172] Models(./general_PPLCNet_x2_5_lite_v1.0_serving) will be launched by device gpu. use_gpu:True, use_trt:False, use_lite:False, use_xpu:False, device_type:1, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-17 14:26:17,777 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-17 14:26:17,777 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-17 14:26:18,830 [local_predict.py:153] LocalPredictor load_model_config params: model_path:./general_PPLCNet_x2_5_lite_v1.0_serving, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:False, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-17 14:26:20,559 [operator.py:1317] [rec|0] Succ init
+INFO 2022-02-17 14:26:37,796 [pipeline_server.py:56] (log_id=0) inference request name:recognition self.name:recognition time:1645079197.7964764
+INFO 2022-02-17 14:26:37,797 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:recognition, method:prediction, time:1645079197.7977042
+INFO 2022-02-17 14:26:37,798 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-17 14:26:39,784 [dag.py:405] (data_id=0 log_id=0) Succ predict
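
The `pipeline.log` above records several restarts of the recognition pipeline server: each block prints the effective configuration (HTTP port 9315, RPC port 9314, a single `rec` op serving `general_PPLCNet_x2_5_lite_v1.0_serving` and fetching the `feature` tensor), builds the DAG, loads the model through `LocalPredictor`, and then answers one `recognition` request ("Succ predict"). Note also that the later restarts switch `device_type` from 2 to 1, i.e. from GPU with TensorRT (`use_trt:True`) to plain GPU (`use_trt:False`). As a minimal sketch of how such a request can be sent, below is a hypothetical RPC client in the style of the PaddleClas pipeline examples; `PipelineClient` and its `connect`/`predict` calls are the real paddle-serving pipeline API, but the feed key `"image"`, the fetch key `"result"`, and the file path `test.jpg` are assumptions that must match this sample's server-side op implementation:

```
import base64

from paddle_serving_server.pipeline import PipelineClient

# Connect to the RPC port printed in the PIPELINE SERVER config above.
client = PipelineClient()
client.connect(['127.0.0.1:9314'])

# Send one base64-encoded image; "test.jpg" is a placeholder path.
with open("test.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf8")

# Feed/fetch keys are assumptions modeled on PaddleClas pipeline clients;
# the actual keys depend on this sample's RequestOp/ResponseOp.
ret = client.predict(feed_dict={"image": image}, fetch=["result"])
print(ret)
```
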
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log.wf b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log.wf
new file mode 100644
index 000000000..3c85322cf
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.log.wf
@@ -0,0 +1,466 @@
+WARNING 2022-02-17 11:39:15,975 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:39:15,976 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:39:15,977 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2022-02-17 11:39:28,512 [operator.py:1109] (data_id=0 log_id=0) [rec|0] Failed to postprocess: postprocess() takes 4 positional arguments but 5 were given
+Traceback (most recent call last):
+ File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1105, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: postprocess() takes 4 positional arguments but 5 were given
+ERROR 2022-02-17 11:39:28,515 [dag.py:410] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [rec|0] Failed to postprocess: postprocess() takes 4 positional arguments but 5 were given
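
The error above is a signature mismatch between the serving framework and the custom op: `operator.py` calls `postprocess(input_dicts, fetch_dict, data_id, log_id)`, while the op apparently still overrides the older three-argument form, so the call supplies one positional argument too many. A minimal sketch of the newer signature follows; the class name `RecOp` and the pass-through body are illustrative, not this sample's actual code:

```
from paddle_serving_server.pipeline import Op

class RecOp(Op):
    # Newer pipeline versions pass data_id in addition to log_id; an override
    # declared as postprocess(self, input_dicts, fetch_dict, log_id) raises
    # the "takes 4 positional arguments but 5 were given" error logged above.
    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        # Return (result_dict, error_code, error_info); (None, "") means success.
        return fetch_dict, None, ""
```
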
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:40:11,389 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 11:40:11,390 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:43:51,478 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 11:43:51,479 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:26:17,697 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-17 14:26:17,697 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] tracer not set, use default: {}
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] interval_s not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-17 14:26:17,698 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.tracer b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/PipelineServingLogs/pipeline.tracer
new file mode 100644
index 000000000..e69de29bb
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/ProcessInfo.json b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/ProcessInfo.json
new file mode 100644
index 000000000..4e88c39e5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/ProcessInfo.json
@@ -0,0 +1 @@
+[{"pid": 827, "port": [9314, 9315], "model": "pipline", "start_time": 1645079177.7113242}]
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/config_onlyrec.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/config_onlyrec.yml
new file mode 100644
index 000000000..c04957537
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/config_onlyrec.yml
@@ -0,0 +1,33 @@
+#worker_num: maximum concurrency. When build_dag_each_worker=True, the framework starts worker_num processes, each building its own grpcServer and DAG
+##When build_dag_each_worker=False, the framework sets max_workers of the main thread's grpc thread pool to worker_num
+worker_num: 1
+
+#HTTP port. rpc_port and http_port must not both be empty; when rpc_port is usable and http_port is empty, no http_port is generated automatically
+http_port: 9315
+rpc_port: 9314
+
+dag:
+    #op resource type: True for the thread model, False for the process model
+    is_thread_op: False
+op:
+    rec:
+        #concurrency: thread-level when is_thread_op=True, otherwise process-level
+        concurrency: 1
+
+        #when the op config has no server_endpoints, the local service config is read from local_service_conf
+        local_service_conf:
+
+            #model path
+            model_config: ./general_PPLCNet_x2_5_lite_v1.0_serving
+
+            #compute device type: if unset, decided by devices (CPU/GPU); 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
+            device_type: 1
+
+            #device IDs: CPU prediction when devices is "" or unset; GPU prediction when devices is e.g. "0" or "0,1,2", listing the GPU cards to use
+            devices: "0" # "0,1"
+
+            #client type: brpc, grpc or local_predictor; local_predictor runs prediction in-process without starting a Serving service
+            client_type: local_predictor
+
+            #fetch result list, keyed by the alias_name of fetch_var in client_config
+            fetch_list: ["feature"]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/pipeline_http_client.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/pipeline_http_client.py
new file mode 100644
index 000000000..2c1421e29
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/pipeline_http_client.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# from paddle_serving_server.pipeline import PipelineClient
+import numpy as np
+import requests
+import json
+import cv2
+import base64
+import os
+from time import time
+import threading
+
+
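+# Time a single POST to the pipeline service; i identifies the calling thread when run via threading.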
+def demo(url, data, i):
+    begin_time = time()
+    r = requests.post(url=url, data=json.dumps(data))
+    end_time = time()
+    run_time = end_time - begin_time
+    print('thread %d time %f' % (i, run_time))
+    print(r.json())
+
+
+def cv2_to_base64(image):
+ return base64.b64encode(image).decode('utf8')
+
+url = "http://127.0.0.1:9315/recognition/prediction"
+with open(os.path.join(".", "test.jpg"), 'rb') as file:
+ image_data1 = file.read()
+image = cv2_to_base64(image_data1)
+
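+# The pipeline expects parallel "key"/"value" lists; here one base64-encoded image is sent under key "image".
+# Uncomment the threading lines below to issue concurrent requests instead.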
+for i in range(1):
+ print(i)
+ data = {"key": ["image"], "value": [image]}
+ r = requests.post(url=url, data=json.dumps(data))
+ print(r.json())
+ #t = threading.Thread(target=demo, args=(url,data,i,))
+ #t.start()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/recognition_web_service_onlyrec.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/recognition_web_service_onlyrec.py
new file mode 100644
index 000000000..0e5484d23
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/recognition_web_service_onlyrec.py
@@ -0,0 +1,220 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from paddle_serving_server.web_service import WebService, Op
+import logging
+import numpy as np
+import sys
+import cv2
+from paddle_serving_app.reader import *
+import base64
+import os
+import faiss
+import pickle
+import json
+
+
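+# DetOp: PicoDet pre/postprocessing for the detection stage. It is not wired into the
+# rec-only service at the bottom of this file, but is kept for the full det+rec pipeline.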
+class DetOp(Op):
+ def init_op(self):
+        # Normalize with ImageNet mean/std and convert HWC -> CHW; resizing happens separately in preprocess().
+        self.img_preprocess = Sequential([
+            Div(255.0),
+            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
+            Transpose((2, 0, 1))
+        ])
+
+ self.img_postprocess = RCNNPostprocess("label_list.txt", "output")
+ self.threshold = 0.3
+ self.max_det_results = 5
+
+    def Deresize(self, im, im_scale_x, im_scale_y):
+        # Resize by the per-axis scale factors computed in generate_scale().
+        im = cv2.resize(
+            im,
+            None,
+            None,
+            fx=im_scale_x,
+            fy=im_scale_y,
+            interpolation=2)
+        return im
+
+ def generate_scale(self, im):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ Returns:
+ im_scale_x: the resize ratio of X
+ im_scale_y: the resize ratio of Y
+ """
+ target_size = [416, 416]
+ origin_shape = im.shape[:2]
+ resize_h, resize_w = target_size
+ im_scale_y = resize_h / float(origin_shape[0])
+ im_scale_x = resize_w / float(origin_shape[1])
+ return im_scale_y, im_scale_x
+
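+    # Decode each base64 input, rescale it to 416x416, and build the image/im_shape/scale_factor feed dict for the detector.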
+ def preprocess(self, input_dicts, data_id, log_id):
+ (_, input_dict), = input_dicts.items()
+ imgs = []
+ raw_imgs = []
+ for key in input_dict.keys():
+ data = base64.b64decode(input_dict[key].encode('utf8'))
+ raw_imgs.append(data)
+            data = np.frombuffer(data, np.uint8)
+ raw_im = cv2.imdecode(data, cv2.IMREAD_COLOR)[:, :, ::-1]
+ im_scale_y, im_scale_x = self.generate_scale(raw_im)
+ raw_im = self.Deresize(raw_im, im_scale_x, im_scale_y)
+ im = self.img_preprocess(raw_im)
+ imgs.append({
+ "image": im[np.newaxis, :],
+ "im_shape":
+ np.array(list(im.shape[1:])).reshape(-1)[np.newaxis, :],
+ "scale_factor":
+ np.array([[im_scale_y, im_scale_x]]).astype('float32'),
+ })
+ self.raw_img = raw_imgs
+
+ feed_dict = {
+ "image": np.concatenate(
+ [x["image"] for x in imgs], axis=0),
+ "im_shape": np.concatenate(
+ [x["im_shape"] for x in imgs], axis=0),
+ "scale_factor": np.concatenate(
+ [x["scale_factor"] for x in imgs], axis=0)
+ }
+ #print("feed_dict",feed_dict)
+ return feed_dict, False, None, ""
+
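+    # Sort detections by score, keep at most max_det_results above the threshold,
+    # and convert [x, y, w, h] boxes to [x1, y1, x2, y2].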
+ def postprocess(self, input_dicts, fetch_dict, log_id):
+ boxes = self.img_postprocess(fetch_dict, visualize=False)
+ #print("boxes",boxes)
+ boxes.sort(key=lambda x: x["score"], reverse=True)
+ boxes = filter(lambda x: x["score"] >= self.threshold,
+ boxes[:self.max_det_results])
+ boxes = list(boxes)
+ for i in range(len(boxes)):
+ boxes[i]["bbox"][2] += boxes[i]["bbox"][0] - 1
+ boxes[i]["bbox"][3] += boxes[i]["bbox"][1] - 1
+ result = json.dumps(boxes)
+ res_dict = {"bbox_result": result, "image": self.raw_img}
+ return res_dict, None, ""
+
+
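+# RecOp: extracts a feature with the PP-LCNet rec model and looks it up in the faiss gallery index.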
+class RecOp(Op):
+ def init_op(self):
+ self.seq = Sequential([
+ BGR2RGB(), Resize((224, 224)), Div(255),
+ Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225],
+ False), Transpose((2, 0, 1))
+ ])
+
+ index_dir = "./index_result/"
+ assert os.path.exists(os.path.join(
+ index_dir, "vector.index")), "vector.index not found ..."
+ assert os.path.exists(os.path.join(
+ index_dir, "id_map.pkl")), "id_map.pkl not found ... "
+
+ self.searcher = faiss.read_index(
+ os.path.join(index_dir, "vector.index"))
+
+ with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
+ self.id_map = pickle.load(fd)
+
+        self.rec_nms_threshold = 0.05
+        self.rec_score_thres = 0.5
+        self.feature_normalize = True
+        self.return_k = 5
+
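+    # Decode the base64 image, apply the 224x224 recognition transforms, and feed a batch of one to the rec model.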
+ def preprocess(self, input_dicts, data_id, log_id):
+ (_, input_dict), = input_dicts.items()
+
+ raw_img = input_dict["image"]
+ raw_img = base64.b64decode(raw_img)
+ data = np.frombuffer(raw_img, np.uint8)
+ origin_img = cv2.imdecode(data, cv2.IMREAD_COLOR)
+ #construct batch images for rec
+ imgs = []
+ img = self.seq(origin_img)
+ imgs.append(img[np.newaxis, :].copy())
+
+ input_imgs = np.concatenate(imgs, axis=0)
+ return {"x": input_imgs}, False, None, ""
+
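+    # Greedy NMS over retrieval results: repeatedly keep the highest-scoring box and
+    # drop boxes whose IoU with it exceeds thresh.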
+ def nms_to_rec_results(self, results, thresh=0.1):
+ #print("results",results)
+ filtered_results = []
+ x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
+ y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
+ x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
+ y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
+ scores = np.array([r["rec_scores"] for r in results])
+
+ areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ order = scores.argsort()[::-1]
+ while order.size > 0:
+ i = order[0]
+ xx1 = np.maximum(x1[i], x1[order[1:]])
+ yy1 = np.maximum(y1[i], y1[order[1:]])
+ xx2 = np.minimum(x2[i], x2[order[1:]])
+ yy2 = np.minimum(y2[i], y2[order[1:]])
+
+ w = np.maximum(0.0, xx2 - xx1 + 1)
+ h = np.maximum(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (areas[i] + areas[order[1:]] - inter)
+ inds = np.where(ovr <= thresh)[0]
+ order = order[inds + 1]
+ filtered_results.append(results[i])
+ return filtered_results
+
+    def postprocess(self, input_dicts, fetch_dict, log_id, data_id=0):
+ batch_features = fetch_dict["feature"]
+
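+        # L2-normalize the features so inner-product search in the faiss index corresponds to cosine similarity.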
+ if self.feature_normalize:
+ feas_norm = np.sqrt(
+ np.sum(np.square(batch_features), axis=1, keepdims=True))
+ batch_features = np.divide(batch_features, feas_norm)
+
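+        # Retrieve the top return_k nearest gallery entries for each query feature from the faiss index.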
+ scores, docs = self.searcher.search(batch_features, self.return_k)
+ results = []
+ for i in range(scores.shape[0]):
+ pred = {}
+ if scores[i][0] >= self.rec_score_thres:
+ pred["rec_docs"] = self.id_map[docs[i][0]].split()[1]
+ pred["rec_scores"] = scores[i][0]
+ results.append(pred)
+
+ #do nms
+        #results = self.nms_to_rec_results(results, self.rec_nms_threshold)
+ return {"result": str(results)}, None, ""
+
+
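+# Single-op pipeline: HTTP requests go straight to RecOp; detection is disabled in this "onlyrec" variant.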
+class RecognitionService(WebService):
+ def get_pipeline_response(self, read_op):
+ #det_op = DetOp(name="det", input_ops=[read_op])
+ rec_op = RecOp(name="rec", input_ops=[read_op])
+ return rec_op
+
+
+product_recog_service = RecognitionService(name="recognition")
+product_recog_service.prepare_pipeline_config("config_onlyrec.yml")
+product_recog_service.run_service()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/test.jpg b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/test.jpg
new file mode 100644
index 000000000..c7c19da9a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/general_PPLCNet_x2_5_lite_v1.0/test.jpg differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/index_label.txt b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/index_label.txt
new file mode 100644
index 000000000..336b98473
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/index_label.txt
@@ -0,0 +1,144 @@
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15114.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15010.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15038.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15042.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15111.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15127.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15028.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10036.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15033.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15109.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10033.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15047.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15023.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15068.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15107.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15074.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15035.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15063.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15043.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15015.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15124.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10011.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15100.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15106.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15108.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15050.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15113.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15016.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15037.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15110.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15018.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15030.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15014.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15060.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15069.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15133.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15041.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15141.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15020.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15104.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15022.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10032.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15034.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15053.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15058.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15051.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15139.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15119.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15017.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15009.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15131.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15101.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15054.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15064.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15130.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15011.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15120.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15048.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15032.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15052.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10017.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15123.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15073.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15055.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15062.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10024.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15117.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15125.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15070.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15046.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15049.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15142.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15129.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15029.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10012.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15126.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15045.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15007.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15057.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15105.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15024.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15121.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15036.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15140.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15039.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15102.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15027.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15008.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15025.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15115.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15021.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15132.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15071.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15031.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15026.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15044.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15118.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15116.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15112.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15135.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15061.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15067.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15137.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15013.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10019.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15066.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15072.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10030.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15056.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15059.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15122.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/0-10001.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15012.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/摩托车/15103.jpg 摩托车
+/home/aistudio/data/data128448/index_motorcycle/人/0-10528.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10504.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10501.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10195.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10377.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10259.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10285.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10201.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10182.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/人/0-10522.jpg 人
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10112.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10015.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10013.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10073.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10020.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10071.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10078.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10055.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10072.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/其他杂物/0-10014.jpg 其他杂物
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10549.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10381.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10193.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10290.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10685.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/自行车/0-10180.jpg 自行车
+/home/aistudio/data/data128448/index_motorcycle/鑷杞/0-11043.jpg 鑷杞
+/home/aistudio/data/data128448/index_motorcycle/鑷杞/0-10461.jpg 鑷杞
+/home/aistudio/data/data128448/index_motorcycle/鑷杞/0-10777.jpg 鑷杞
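The listing above is a slice of the retrieval gallery's label file: one image per line in the form `<path> <label>`, with a single space as separator. The labels are the Chinese class names (摩托车 motorcycle, 人 person, 其他杂物 other objects, 自行车 bicycle); the raw diff rendered them as GBK/UTF-8 mojibake, repaired here. A minimal parsing sketch, assuming this single-space convention and a UTF-8 file named index_label.txt:

```python
# Sketch: read the "path label" gallery list shown above.
# The file name and separator are assumptions based on the listing.
def read_index_labels(label_file="index_label.txt"):
    samples = []
    with open(label_file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Paths here contain no spaces, so split on the last one.
            path, label = line.rsplit(" ", 1)
            samples.append((path, label))
    return samples
```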
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/make_label.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/make_label.py
new file mode 100644
index 000000000..f6a563cd7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/make_label.py
@@ -0,0 +1,11 @@
+import os
+root_path = "/home/aistudio/data/data128448/index_motorcycle/"  # gallery root; each sub-directory name is a class label
+dirs = os.listdir(root_path)
+dir_dict = {"person": "人", "motorcycle": "电瓶车/摩托车", "bicycle": "自行车", "others": "其他"}  # English-to-Chinese reference; unused below
+with open("index_label.txt", "w") as f:  # write one "image_path label" line per gallery image
+    for dir_name in dirs:
+        path = root_path + dir_name + "/"
+        print(path)
+        filenames = os.listdir(path)
+        for filename in filenames:
+            f.write(path + filename + " " + dir_name + "\n")
\ No newline at end of file
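make_label.py regenerates index_label.txt by walking the gallery directories; note that dir_dict is only an English-to-Chinese reference and is never consulted, since the labels written out are the directory names themselves. A hedged sanity check one might run after the script, counting entries per class in the generated file:

```python
# Sketch: tally gallery entries per class label in index_label.txt.
from collections import Counter

with open("index_label.txt", encoding="utf-8") as f:
    counts = Counter(
        line.rsplit(" ", 1)[1].strip() for line in f if line.strip()
    )
print(counts)  # maps each label (e.g. 摩托车, 人) to its number of gallery images
```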
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams
new file mode 100644
index 000000000..014e7c221
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams.info b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams.info
new file mode 100644
index 000000000..4b645bfe1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdiparams.info differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdmodel b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdmodel
new file mode 100644
index 000000000..a444c2daa
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/models/general_PPLCNet_x2_5_lite_v1.0_infer/inference.pdmodel differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log
new file mode 100644
index 000000000..3a6db0710
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log
@@ -0,0 +1,8074 @@
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 02:45:16,606 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 02:45:16,607 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 02:45:16,607 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 02:45:16,607 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 02:45:16,607 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18082,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9998,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 02:45:16,607 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 02:45:16,607 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 02:45:16,631 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 02:45:16,632 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 02:45:16,632 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 02:45:16,632 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 02:45:16,677 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 02:45:16,681 [dag.py:816] [DAG] start
+INFO 2021-12-29 02:45:16,682 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 02:45:16,688 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 02:45:16,710 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 02:45:16,710 [operator.py:1163] Init cuda env in process 0
+INFO 2021-12-29 02:45:16,710 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 02:45:17,939 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 02:45:19,138 [operator.py:1174] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 02:45:44,185 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 02:45:44,187 [operator.py:1422] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 02:45:44,188 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 02:45:45,873 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 76, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 02:45:45,877 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
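This same traceback recurs on every restart recorded below: paddle_serving_app's bbox postprocess indexes `fetch_map['save_infer_model/scale_0.tmp_1.lod']`, but the fetch map returned for this detector carries only the raw output tensor, so the lookup raises KeyError. A hedged workaround sketch (not the repo's fix) that synthesizes the trivial single-image LoD offsets before handing the map to the postprocessor:

```python
import numpy as np

# Fetch name copied from the log above.
FETCH_NAME = "save_infer_model/scale_0.tmp_1"

def ensure_lod(fetch_dict):
    """If the fetch map lacks the '.lod' offsets that the bbox
    postprocess expects, add [0, N] for a single-image batch.
    Sketch only; the real LoD should come from the server."""
    lod_key = FETCH_NAME + ".lod"
    if lod_key not in fetch_dict:
        fetch_dict[lod_key] = np.array([0, len(fetch_dict[FETCH_NAME])])
    return fetch_dict
```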
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 03:07:14,512 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 03:07:14,513 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:07:14,513 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 03:07:14,513 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 03:07:14,513 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18082,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9998,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 03:07:14,513 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 03:07:14,513 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 03:07:14,538 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 03:07:14,539 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 03:07:14,539 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 03:07:14,539 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 03:07:14,585 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 03:07:14,589 [dag.py:816] [DAG] start
+INFO 2021-12-29 03:07:14,589 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 03:07:14,595 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 03:07:14,617 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:07:14,617 [operator.py:1163] Init cuda env in process 0
+INFO 2021-12-29 03:07:14,618 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 03:07:15,847 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 03:07:17,038 [operator.py:1174] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 03:07:20,880 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 03:07:20,882 [operator.py:1422] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 03:07:20,882 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 03:07:22,696 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:07:22,700 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 03:10:13,372 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 03:10:13,375 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 03:10:13,375 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:10:13,375 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 03:10:13,375 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 03:10:13,375 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18082,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9998,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 03:10:13,375 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 03:10:13,375 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 03:10:13,393 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 03:10:13,393 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 03:10:13,394 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 03:10:13,394 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 03:10:13,436 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 03:10:13,439 [dag.py:816] [DAG] start
+INFO 2021-12-29 03:10:13,440 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 03:10:13,445 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 03:10:13,468 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:10:13,469 [operator.py:1163] Init cuda env in process 0
+INFO 2021-12-29 03:10:13,469 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 03:10:14,628 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 03:10:15,826 [operator.py:1174] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 03:10:19,409 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 03:10:19,411 [operator.py:1422] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 03:10:19,411 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 03:10:21,260 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 78, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:10:21,264 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:11:47,325 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 03:11:47,325 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 03:11:47,325 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:11:47,325 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 03:11:47,325 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 03:11:47,326 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18082,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9998,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 03:11:47,326 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 03:11:47,326 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 03:11:47,348 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 03:11:47,349 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 03:11:47,349 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 03:11:47,349 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 03:11:47,395 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 03:11:47,398 [dag.py:816] [DAG] start
+INFO 2021-12-29 03:11:47,399 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 03:11:47,403 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 03:11:47,431 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 03:11:47,432 [operator.py:1163] Init cuda env in process 0
+INFO 2021-12-29 03:11:47,432 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 03:11:48,697 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 03:11:49,910 [operator.py:1174] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 03:11:53,938 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 03:11:53,939 [operator.py:1422] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 03:11:53,940 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 03:11:55,757 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 78, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:11:55,761 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 05:37:04,891 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 05:37:04,892 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:37:04,892 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 05:37:04,892 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 05:37:04,892 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 05:37:04,892 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 05:37:04,892 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 05:37:04,915 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 05:37:04,916 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 05:37:04,916 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 05:37:04,916 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 05:37:04,962 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 05:37:04,967 [dag.py:816] [DAG] start
+INFO 2021-12-29 05:37:04,968 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 05:37:04,974 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 05:37:04,992 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:37:04,993 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 05:37:04,993 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 05:37:06,170 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 05:37:07,358 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 05:37:14,620 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 05:37:14,621 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 05:37:14,621 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 05:37:16,537 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:37:16,542 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:40:11,811 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:40:11,811 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 05:40:11,811 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 05:40:11,811 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:40:11,811 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 05:40:11,811 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 05:40:11,812 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 05:40:11,812 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 05:40:11,812 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 05:40:11,837 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 05:40:11,838 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 05:40:11,839 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 05:40:11,839 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 05:40:11,880 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 05:40:11,884 [dag.py:816] [DAG] start
+INFO 2021-12-29 05:40:11,885 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 05:40:11,890 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 05:40:11,909 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:40:11,910 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 05:40:11,910 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 05:40:13,297 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 05:40:14,485 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 05:40:16,831 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 05:40:16,832 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 05:40:16,834 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 05:40:18,654 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:40:18,658 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 05:42:11,545 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 05:42:11,546 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:42:11,546 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 05:42:11,546 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 05:42:11,546 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 05:42:11,546 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 05:42:11,546 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 05:42:11,569 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 05:42:11,569 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 05:42:11,570 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 05:42:11,570 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 05:42:11,615 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 05:42:11,618 [dag.py:816] [DAG] start
+INFO 2021-12-29 05:42:11,619 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 05:42:11,626 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 05:42:11,650 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 05:42:11,651 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 05:42:11,651 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 05:42:12,841 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 05:42:14,032 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 05:42:17,467 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 05:42:17,469 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 05:42:17,470 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 05:42:19,333 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:42:19,340 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
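+
+The failure above comes from `_get_bbox_result` in paddle_serving_app, which indexes `fetch_map[fetch_name + '.lod']`; the local predictor returned the picodet output without a LoD entry. A minimal workaround sketch, assuming the fetch var name shown in the log (the helper name `ensure_lod` is hypothetical):
+
+```
+import numpy as np
+
+def ensure_lod(fetch_dict, fetch_name="save_infer_model/scale_0.tmp_1"):
+    # _get_bbox_result looks up fetch_map[fetch_name + '.lod']; if the
+    # predictor returned no LoD, synthesize one spanning all rows so a
+    # single-image request can still be postprocessed.
+    lod_key = fetch_name + ".lod"
+    if lod_key not in fetch_dict:
+        fetch_dict[lod_key] = np.array([0, fetch_dict[fetch_name].shape[0]],
+                                       dtype=np.int32)
+    return fetch_dict
+```
+
+Calling this on `fetch_dict` at the start of `postprocess` in web_service.py would let `self.img_postprocess(fetch_dict, visualize=False)` proceed.
+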
+CRITICAL 2021-12-29 06:08:56,841 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'yaml' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 33, in init_op
+ yml_conf = yaml.safe_load(f)
+NameError: name 'yaml' is not defined
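+
+This `NameError` is a missing import: `init_op` calls `yaml.safe_load(f)` without importing the module. Adding the import at the top of web_service.py fixes it (the config filename below is hypothetical):
+
+```
+import yaml
+
+# init_op() can then parse the exported inference config:
+with open("infer_cfg.yml") as f:  # hypothetical path
+    yml_conf = yaml.safe_load(f)
+```
+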
+CRITICAL 2021-12-29 06:12:11,443 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 40, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
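+
+Here `eval(op_type)(**new_op_info)` instantiates a preprocess op whose `__init__` does not accept the `interp` key carried by the exported picodet config. One tolerant sketch filters kwargs against the op's signature before instantiating (`build_op` is a hypothetical helper, not part of paddle_serving_app):
+
+```
+import inspect
+
+def build_op(op_cls, op_info):
+    # Keep only kwargs that op_cls.__init__ actually declares, so config
+    # keys such as 'interp' cannot raise TypeError.
+    accepted = set(inspect.signature(op_cls.__init__).parameters) - {"self"}
+    return op_cls(**{k: v for k, v in op_info.items() if k in accepted})
+```
+
+Simply popping the offending key first (`new_op_info.pop('interp', None)`) achieves the same for this config.
+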
+CRITICAL 2021-12-29 06:20:00,415 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 45, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
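+
+The last traceback is a typo: the 06:12 runs appended to `self.preprocess_ops`, while this edit references a bare `preprocess_ops` that was never defined. A consistent `init_op` sketch (the `"Preprocess"` config key and file path are assumptions):
+
+```
+import yaml
+
+def init_op(self):
+    with open("infer_cfg.yml") as f:          # hypothetical path
+        yml_conf = yaml.safe_load(f)
+    self.preprocess_ops = []                  # instance attribute, as before
+    for op_info in yml_conf["Preprocess"]:    # config key assumed
+        new_op_info = op_info.copy()
+        op_type = new_op_info.pop("type")
+        self.preprocess_ops.append(eval(op_type)(**new_op_info))
+```
+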
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 06:21:28,632 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 06:21:28,632 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:21:28,632 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 06:21:28,632 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 06:21:28,632 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 06:21:28,632 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 06:21:28,632 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 06:21:28,657 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 06:21:28,658 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 06:21:28,658 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 06:21:28,658 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 06:21:28,702 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 06:21:28,705 [dag.py:816] [DAG] start
+INFO 2021-12-29 06:21:28,706 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 06:21:28,711 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 06:21:28,728 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:21:28,728 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 06:21:28,729 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 06:21:29,937 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+CRITICAL 2021-12-29 06:21:31,123 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 46, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
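+
+The `NameError` above comes from `init_op` in `web_service.py`: the code appends to `preprocess_ops` before any such list has been created. A minimal sketch of the fix is shown below — the append line follows the traceback verbatim, while the function wrapper, the `preprocess_infos` argument, and the op class names are assumptions for illustration:
+
+```
+def build_preprocess_ops(preprocess_infos):
+    # the list must exist before the append in the traceback runs
+    preprocess_ops = []
+    for op_info in preprocess_infos:
+        new_op_info = op_info.copy()
+        op_type = new_op_info.pop('type')
+        # op classes named in the config (e.g. Resize, NormalizeImage)
+        # are assumed to be imported into this module's namespace
+        preprocess_ops.append(eval(op_type)(**new_op_info))
+    return preprocess_ops
+```
+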
+WARNING 2021-12-29 06:32:31,020 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:32:31,021 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:32:31,022 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 06:32:31,023 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 06:32:31,023 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:32:31,023 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 06:32:31,023 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 06:32:31,023 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 06:32:31,023 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 06:32:31,024 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 06:32:31,048 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 06:32:31,049 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 06:32:31,049 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 06:32:31,049 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 06:32:31,091 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 06:32:31,095 [dag.py:816] [DAG] start
+INFO 2021-12-29 06:32:31,096 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 06:32:31,102 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 06:32:31,127 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:32:31,128 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 06:32:31,128 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 06:32:32,329 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 06:32:33,520 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 06:40:25,060 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:40:25,061 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:40:25,062 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:40:25,105 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 54, in preprocess
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+UnboundLocalError: local variable 'im_info' referenced before assignment
+ERROR 2021-12-29 06:40:25,111 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
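+
+This `UnboundLocalError` is the same class of bug: `preprocess` in `web_service.py` assigns into `im_info` before the dict is created. A minimal sketch of the fix follows — the `im_shape` assignment is taken from the traceback, while the helper name and the default `scale_factor` field are assumptions:
+
+```
+import numpy as np
+
+def make_im_info(im):
+    # im_info must be initialized before fields are assigned,
+    # otherwise Python raises the UnboundLocalError seen above
+    im_info = {
+        'scale_factor': np.array([1.0, 1.0], dtype=np.float32),  # assumed default
+    }
+    im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+    return im_info
+```
+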
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:42:03,381 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:42:03,382 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 06:42:03,383 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 06:42:03,383 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:42:03,383 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 06:42:03,383 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 06:42:03,383 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 06:42:03,383 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 06:42:03,384 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 06:42:03,408 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 06:42:03,409 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 06:42:03,409 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 06:42:03,410 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 06:42:03,452 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 06:42:03,456 [dag.py:816] [DAG] start
+INFO 2021-12-29 06:42:03,456 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 06:42:03,464 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 06:42:03,483 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:42:03,484 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 06:42:03,484 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 06:42:04,712 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 06:42:05,914 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 06:42:13,067 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:42:13,069 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:42:13,070 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:42:13,106 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 54, in preprocess
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+UnboundLocalError: local variable 'im_info' referenced before assignment
+ERROR 2021-12-29 06:42:13,112 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+WARNING 2021-12-29 06:42:44,174 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:42:44,174 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:42:44,174 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:42:44,174 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:42:44,174 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:42:44,175 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:42:44,176 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:42:44,176 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:42:44,176 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 06:42:44,176 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 06:42:44,176 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:42:44,177 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 06:42:44,177 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 06:42:44,177 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 06:42:44,177 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 06:42:44,177 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 06:42:44,201 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 06:42:44,201 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 06:42:44,202 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 06:42:44,202 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 06:42:44,248 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 06:42:44,252 [dag.py:816] [DAG] start
+INFO 2021-12-29 06:42:44,253 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 06:42:44,259 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 06:42:44,278 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:42:44,279 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 06:42:44,279 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 06:42:45,483 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 06:42:46,679 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 06:42:47,152 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:42:47,153 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:42:47,154 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:42:47,195 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 55, in preprocess
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+UnboundLocalError: local variable 'im_info' referenced before assignment
+ERROR 2021-12-29 06:42:47,200 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:44:34,233 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:44:34,234 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:44:34,235 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 06:44:34,235 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 06:44:34,235 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:44:34,235 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 06:44:34,235 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 06:44:34,235 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 06:44:34,236 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 06:44:34,236 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 06:44:34,258 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 06:44:34,259 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 06:44:34,259 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 06:44:34,259 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 06:44:34,301 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 06:44:34,306 [dag.py:816] [DAG] start
+INFO 2021-12-29 06:44:34,307 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 06:44:34,313 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 06:44:34,329 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 06:44:34,330 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 06:44:34,330 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 06:44:35,557 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 06:44:36,750 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 06:44:43,663 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:44:43,664 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:44:43,665 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:44:43,710 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 55, in preprocess
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+UnboundLocalError: local variable 'im_info' referenced before assignment
+ERROR 2021-12-29 06:44:43,715 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
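+
+Note: this first failure is the `UnboundLocalError` at web_service.py line 55: `preprocess` assigns into `im_info` before the dict exists. A minimal sketch of the likely fix, initializing the dict first (the helper name and the `scale_factor` entry are illustrative, not the repo's exact code):
+
+```
+import numpy as np
+
+def build_im_info(im, target_size=640):
+    # Create the dict before the first assignment; referencing im_info
+    # without this line is what raised the UnboundLocalError above.
+    im_info = {}
+    im_info["im_shape"] = np.array(im.shape[:2], dtype=np.float32)
+    # Illustrative: PaddleDetection-style preprocess usually also records
+    # the resize scale so postprocess can map boxes back to the original image.
+    im_info["scale_factor"] = np.array(
+        [target_size / im.shape[0], target_size / im.shape[1]],
+        dtype=np.float32)
+    return im_info
+```
+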
+WARNING 2021-12-29 06:46:19,030 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:46:19,030 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:46:19,030 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:46:19,030 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:46:19,031 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:46:19,032 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:46:19,032 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:46:19,032 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:46:19,032 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+[... 06:46 restart: config dump and DAG init identical to the 06:44 block above; omitted ...]
+INFO 2021-12-29 06:46:25,488 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:46:25,490 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:46:25,490 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:46:25,581 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 71, in preprocess
+ im = self.img_preprocess(im)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 492, in __call__
+ img = t(img)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 642, in __call__
+ return F.normalize(img, self.mean, self.std, self.channel_first)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/functional.py", line 33, in normalize
+ img -= img_mean
+ValueError: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
+ERROR 2021-12-29 06:46:25,587 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
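+
+Note: the broadcast failure means `img_preprocess` transposed the image to CHW (3,640,640) before `Normalize` applied an HWC-shaped (1,1,3) mean/std. Either pass `channel_first=True` to the normalize op or normalize while the array is still HWC. A self-contained sketch of the second option (function name and mean/std values are illustrative):
+
+```
+import numpy as np
+
+def normalize_then_transpose(img_hwc,
+                             mean=(0.485, 0.456, 0.406),
+                             std=(0.229, 0.224, 0.225)):
+    # Normalize while the layout is still HWC, so the (1,1,3) mean/std
+    # broadcast cleanly; (3,H,W) minus (1,1,3) is the error logged above.
+    img = img_hwc.astype(np.float32)
+    img -= np.array(mean, dtype=np.float32).reshape(1, 1, 3)
+    img /= np.array(std, dtype=np.float32).reshape(1, 1, 3)
+    return img.transpose(2, 0, 1)  # HWC -> CHW, only after normalizing
+```
+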
+[... 06:51 restart: identical startup warnings, config dump, and DAG init omitted ...]
+INFO 2021-12-29 06:51:06,257 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 06:51:06,258 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 06:51:06,259 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 06:51:07,885 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 89, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 06:51:07,889 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
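+
+Note: the KeyError shows `fetch_dict` carries no `save_infer_model/scale_0.tmp_1.lod` entry, while `img_postprocess` assumes LoD-style detection output. If the exported model returns a plain (N, 6) box tensor of [class_id, score, x1, y1, x2, y2] (the usual PaddleDetection export layout; assumed here), the tensor can be decoded directly instead of going through the LoD helper. A sketch (function name and threshold are illustrative):
+
+```
+import numpy as np
+
+def parse_bbox_tensor(fetch_dict,
+                      fetch_name="save_infer_model/scale_0.tmp_1",
+                      score_thresh=0.5):
+    # Decode a non-LoD detection output instead of reading fetch_name + '.lod'.
+    boxes = np.array(fetch_dict[fetch_name]).reshape(-1, 6)
+    results = []
+    for cls_id, score, x1, y1, x2, y2 in boxes:
+        if score < score_thresh:
+            continue
+        results.append({"category_id": int(cls_id),
+                        "score": float(score),
+                        "bbox": [float(x1), float(y1), float(x2), float(y2)]})
+    return results
+```
+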
+[... four further restarts between 06:51:56 and 07:13 repeated the identical startup log; the 06:52, 06:54, and 07:13 runs each failed with the same postprocess KeyError: 'save_infer_model/scale_0.tmp_1.lod' ...]
+[... 07:16 restart: identical startup warnings, config dump, and DAG init omitted ...]
+INFO 2021-12-29 07:16:32,190 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 07:16:32,192 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 07:16:32,193 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 07:16:32,263 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: name 'im_shape' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 68, in preprocess
+ "im_shape": im_info[im_shape],#np.array(list(im.shape[1:])).reshape(-1)[np.newaxis,:],
+NameError: name 'im_shape' is not defined
+ERROR 2021-12-29 07:16:32,267 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: name 'im_shape' is not defined
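+
+Note: this one is a quoting slip visible in the traceback itself: `im_info[im_shape]` looks up a bare name instead of the string key. A minimal demonstration of the one-character-pair fix:
+
+```
+import numpy as np
+
+im_info = {"im_shape": np.array([640.0, 640.0], dtype=np.float32)}
+
+# Broken: im_info[im_shape] raises NameError, since im_shape is an undefined name.
+# Fixed: quote the key so it is a dict lookup by string.
+feed = {"im_shape": im_info["im_shape"]}
+```
+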
+[... 07:17 restart: identical startup log; the run again failed with the same postprocess KeyError: 'save_infer_model/scale_0.tmp_1.lod' ...]
+[... 07:19 restart: identical startup warnings and config dump omitted ...]
+INFO 2021-12-29 07:19:06,144 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 07:19:06,167 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 07:19:06,168 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 07:19:06,168 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 07:19:06,168 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 07:19:06,213 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 07:19:06,217 [dag.py:816] [DAG] start
+INFO 2021-12-29 07:19:06,219 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 07:19:06,224 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 07:19:06,242 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 07:19:06,242 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 07:19:06,243 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 07:19:07,427 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 07:19:08,615 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 07:19:11,136 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 07:19:11,137 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 07:19:11,137 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 07:19:12,763 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 87, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 07:19:12,767 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 07:40:25,237 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 07:40:25,237 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:40:25,237 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 07:40:25,237 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 07:40:25,238 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 07:40:25,239 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 07:40:25,239 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 07:40:25,239 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+WARNING 2021-12-29 07:42:11,629 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 07:42:11,629 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:42:11,629 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 07:42:11,630 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 07:42:11,631 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 07:42:11,631 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 07:42:11,631 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 07:42:11,631 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 07:42:11,631 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 07:42:11,631 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 07:42:11,632 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 07:42:11,632 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 07:42:11,632 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 07:42:11,632 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 07:42:11,634 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 07:42:11,659 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 07:42:11,660 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 07:42:11,660 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 07:42:11,660 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 07:42:11,707 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 07:42:11,711 [dag.py:816] [DAG] start
+INFO 2021-12-29 07:42:11,713 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 07:42:11,719 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 07:42:11,742 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 07:42:11,742 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 07:42:11,743 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 07:42:12,964 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 07:42:14,178 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 07:42:24,007 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 07:42:24,009 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 07:42:24,009 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 07:42:25,683 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 87, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 07:42:25,687 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 07:49:58,432 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 07:49:58,433 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 07:49:58,434 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-29 07:49:58,435 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-29 07:49:58,435 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 07:49:58,435 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-29 07:49:58,435 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-29 07:49:58,435 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-29 07:49:58,435 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-29 07:49:58,435 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-29 07:49:58,458 [dag.py:493] [DAG] Succ init
+INFO 2021-12-29 07:49:58,459 [dag.py:651] ================= USED OP =================
+INFO 2021-12-29 07:49:58,459 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-29 07:49:58,460 [dag.py:655] -------------------------------------------
+INFO 2021-12-29 07:49:58,505 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-29 07:49:58,509 [dag.py:816] [DAG] start
+INFO 2021-12-29 07:49:58,509 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-29 07:49:58,515 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-29 07:49:58,539 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-29 07:49:58,540 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-29 07:49:58,540 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-29 07:49:59,775 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-29 07:50:00,998 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-29 07:50:03,927 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-29 07:50:03,928 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-29 07:50:03,930 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-29 07:50:05,801 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_4.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 87, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_4.tmp_1.lod'
+ERROR 2021-12-29 07:50:05,806 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_4.tmp_1.lod'
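+
+Note the pattern: enlarging fetch_list only changes which key the error names. The reader postprocess always appends `.lod` to a fetch name (image_reader.py:346), so with dense picodet outputs the lookup fails no matter how many `scale_*.tmp_1` tensors are fetched; the runs below (four and then eight fetch targets) confirm this. A quick check, assuming fetch_dict keys simply mirror fetch_list:
+
+```
+fetch_list = ["save_infer_model/scale_%d.tmp_1" % i for i in range(5)]
+fetch_dict = {name: None for name in fetch_list}  # dense tensors, no LoD keys
+print(any(name + ".lod" in fetch_dict for name in fetch_list))  # False
+```
+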
+WARNING 2021-12-30 06:53:16,236 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 06:53:16,237 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 06:53:16,238 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 06:53:16,239 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 06:53:16,239 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 06:53:16,239 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 06:53:16,239 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 06:53:16,239 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 06:53:16,239 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 06:53:16,239 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 06:53:16,265 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 06:53:16,266 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 06:53:16,266 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 06:53:16,266 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 06:53:16,310 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 06:53:16,315 [dag.py:816] [DAG] start
+INFO 2021-12-30 06:53:16,316 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 06:53:16,323 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 06:53:16,341 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 06:53:16,341 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 06:53:16,342 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 06:53:17,563 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 06:53:18,787 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 06:53:26,765 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 06:53:26,766 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 06:53:26,766 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 06:53:28,471 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_3.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 81, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_3.tmp_1.lod'
+ERROR 2021-12-30 06:53:28,476 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_3.tmp_1.lod'
+WARNING 2021-12-30 07:57:06,805 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 07:57:06,805 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 07:57:06,805 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 07:57:06,805 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 07:57:06,805 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 07:57:06,806 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 07:57:06,807 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 07:57:06,807 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 07:57:06,807 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 07:57:06,807 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 07:57:06,807 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 07:57:06,807 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 07:57:06,808 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 07:57:06,808 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 07:57:06,808 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 07:57:06,831 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 07:57:06,832 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 07:57:06,832 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 07:57:06,832 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 07:57:06,878 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 07:57:06,882 [dag.py:816] [DAG] start
+INFO 2021-12-30 07:57:06,883 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 07:57:06,890 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 07:57:06,909 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 07:57:06,910 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 07:57:06,910 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 07:57:08,117 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 07:57:09,361 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 07:57:13,486 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 07:57:13,487 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 07:57:13,487 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 07:57:15,249 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 81, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 07:57:15,253 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:12:07,133 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:12:07,134 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:12:07,135 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:12:07,135 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:12:07,135 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:12:07,135 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:12:07,135 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:12:07,135 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:12:07,136 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:12:07,160 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:12:07,161 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:12:07,161 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:12:07,161 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:12:07,206 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:12:07,210 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:12:07,211 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:12:07,218 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:12:07,237 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:12:07,237 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:12:07,237 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:12:08,426 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:12:09,638 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:12:13,041 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:12:13,042 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:12:13,043 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:12:14,700 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 82, in postprocess
+ np_score_list.append(fetch_dict[out_idx])
+KeyError: 0
+ERROR 2021-12-30 08:12:14,707 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
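+
+The error has now moved into a PicoDet-style postprocess: `fetch_dict` is keyed by output tensor names (strings), so indexing it with the integer 0 raises `KeyError: 0`. A minimal sketch of collecting the eight head outputs by name (the split into four score maps followed by four box maps is an assumption about this export, taken from the fetch_list above):
+
+```
+out_names = ["save_infer_model/scale_%d.tmp_1" % i for i in range(8)]
+np_score_list = [fetch_dict[name] for name in out_names[:4]]  # per-level scores
+np_boxes_list = [fetch_dict[name] for name in out_names[4:]]  # per-level boxes
+```
+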
+WARNING 2021-12-30 08:13:45,672 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:13:45,673 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:13:45,674 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:13:45,675 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:13:45,675 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:13:45,675 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:13:45,675 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:13:45,675 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:13:45,675 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:13:45,675 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:13:45,700 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:13:45,701 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:13:45,701 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:13:45,701 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:13:45,748 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:13:45,751 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:13:45,752 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:13:45,758 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:13:45,779 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:13:45,780 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:13:45,781 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:13:46,997 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:13:48,252 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:13:50,562 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:13:50,562 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:13:50,563 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:13:52,217 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 83, in postprocess
+ np_score_list.append(fetch_dict[out_idx])
+KeyError: 0
+ERROR 2021-12-30 08:13:52,220 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:17:15,481 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:17:15,482 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:17:15,483 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:17:15,483 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:17:15,483 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:17:15,483 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:17:15,483 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:17:15,483 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:17:15,484 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:17:15,508 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:17:15,509 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:17:15,509 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:17:15,509 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:17:15,556 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:17:15,560 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:17:15,560 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:17:15,567 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:17:15,589 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:17:15,589 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:17:15,590 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:17:16,807 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:17:18,022 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:17:20,506 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:17:20,508 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:17:20,509 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:17:22,212 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 83, in postprocess
+ np_score_list.append(fetch_dict[i])
+KeyError: 0
+ERROR 2021-12-30 08:17:22,217 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
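+
+The `KeyError: 0` above is raised because `postprocess` in web_service.py indexes `fetch_dict` with an integer: `fetch_dict` maps fetch tensor names (strings) to numpy arrays, so `fetch_dict[0]` matches nothing. Below is a minimal sketch of name-based access, assuming the eight `save_infer_model/scale_*.tmp_1` names from the fetch_list logged above (`split_outputs` and the dummy data are illustrative, not part of the service code):
+
+```python
+import numpy as np
+
+# The eight head outputs listed in fetch_list above.
+fetch_names = ["save_infer_model/scale_%d.tmp_1" % i for i in range(8)]
+
+def split_outputs(fetch_dict):
+    """Collect predictor outputs by tensor name.
+
+    fetch_dict is keyed by tensor name (str); integer indexing such as
+    fetch_dict[0] raises KeyError: 0, the failure in the traceback above.
+    """
+    return [fetch_dict[name] for name in fetch_names]
+
+# Toy usage with dummy arrays standing in for real predictor outputs:
+dummy = {name: np.zeros((1, 4)) for name in fetch_names}
+outputs = split_outputs(dummy)
+```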
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:31:54,774 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:31:54,774 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:31:54,774 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:31:54,775 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:31:54,775 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:31:54,775 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:31:54,775 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:31:54,799 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:31:54,800 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:31:54,800 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:31:54,800 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:31:54,846 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:31:54,851 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:31:54,852 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:31:54,857 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:31:54,877 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:31:54,877 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:31:54,877 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:31:56,080 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:31:57,306 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:31:59,636 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:31:59,638 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:31:59,638 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:32:01,331 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 91, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:32:01,335 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
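+
+The failure mode changes here: `KeyError: 'save_infer_model/scale_7.tmp_1.lod'` comes from `_get_bbox_result` in paddle_serving_app, which looks up a companion `<fetch_name>.lod` entry describing variable-length box outputs. The dense multi-scale tensors fetched above carry no such entry, so the generic detection postprocess cannot be applied to them as-is. A hedged sketch of a LoD lookup with a fallback follows (the `get_lod` helper and its single-segment fallback are assumptions, not PaddleServing API):
+
+```python
+import numpy as np
+
+def get_lod(fetch_map, fetch_name):
+    """Return the LoD for fetch_name, falling back to a single segment.
+
+    paddle_serving_app's _get_bbox_result reads fetch_map[fetch_name + '.lod']
+    unconditionally; when the exported model emits plain dense tensors, that
+    key is absent and the lookup raises the KeyError seen above. Treating the
+    whole array as one sequence is one possible fallback.
+    """
+    lod_key = fetch_name + ".lod"
+    if lod_key in fetch_map:
+        return [fetch_map[lod_key]]
+    return [[0, len(fetch_map[fetch_name])]]
+
+# Toy usage: a dense output with no '.lod' companion entry.
+fetch_map = {"save_infer_model/scale_7.tmp_1": np.zeros((6, 8))}
+print(get_lod(fetch_map, "save_infer_model/scale_7.tmp_1"))  # [[0, 6]]
+```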
+WARNING 2021-12-30 08:34:21,409 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:34:21,409 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:34:21,410 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:34:21,411 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:34:21,411 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:34:21,412 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:34:21,412 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:34:21,412 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:34:21,412 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:34:21,412 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:34:21,412 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:34:21,437 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:34:21,438 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:34:21,438 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:34:21,438 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:34:21,482 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:34:21,486 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:34:21,487 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:34:21,493 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:34:21,522 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:34:21,522 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:34:21,523 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:34:22,753 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:34:23,983 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:34:27,034 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:34:27,036 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:34:27,036 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:34:28,756 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 96, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:34:28,761 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+WARNING 2021-12-30 08:43:32,266 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:43:32,266 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:43:32,267 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:43:32,267 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:43:32,267 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:43:32,267 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:43:32,268 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:43:32,268 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:43:32,268 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:43:32,268 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:43:32,268 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:43:32,269 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:43:32,269 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:43:32,269 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:43:32,269 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:43:32,269 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:43:32,270 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:43:32,270 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:43:32,270 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:43:32,271 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:43:32,271 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:43:32,271 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:43:32,271 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:43:32,272 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:43:32,301 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:43:32,302 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:43:32,302 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:43:32,302 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:43:32,343 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:43:32,348 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:43:32,349 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:43:32,355 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:43:32,375 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:43:32,376 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:43:32,376 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:43:33,601 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:43:34,822 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:43:44,530 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:43:44,532 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:43:44,532 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:43:46,213 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 111, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:43:46,218 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+WARNING 2021-12-30 08:49:32,281 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:49:32,281 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:49:32,282 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:49:32,282 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:49:32,282 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:49:32,282 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:49:32,282 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:49:32,283 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:49:32,283 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:49:32,283 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:49:32,283 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:49:32,283 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:49:32,284 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:49:32,284 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:49:32,284 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:49:32,284 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:49:32,285 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:49:32,286 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:49:32,286 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:49:32,287 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:49:32,287 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:49:32,287 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:49:32,288 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:49:32,288 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:49:32,317 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:49:32,318 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:49:32,318 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:49:32,318 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:49:32,355 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:49:32,359 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:49:32,360 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:49:32,365 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:49:32,387 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:49:32,388 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:49:32,388 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:49:33,652 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:49:34,901 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:49:38,204 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:49:38,206 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:49:38,206 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:49:39,879 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 113, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:49:39,884 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+WARNING 2021-12-30 08:52:42,378 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:52:42,379 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:52:42,379 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:52:42,379 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:52:42,379 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:52:42,380 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:52:42,380 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:52:42,380 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:52:42,380 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:52:42,381 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:52:42,381 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:52:42,381 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:52:42,381 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:52:42,381 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:52:42,382 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:52:42,382 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:52:42,382 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2021-12-30 08:52:42,383 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2021-12-30 08:52:42,384 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:52:42,384 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2021-12-30 08:52:42,384 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2021-12-30 08:52:42,385 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":18083,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2021-12-30 08:52:42,385 [pipeline_server.py:212] -------------------------------------------
+INFO 2021-12-30 08:52:42,386 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2021-12-30 08:52:42,408 [dag.py:493] [DAG] Succ init
+INFO 2021-12-30 08:52:42,408 [dag.py:651] ================= USED OP =================
+INFO 2021-12-30 08:52:42,408 [dag.py:654] ppyolo_mbv3
+INFO 2021-12-30 08:52:42,409 [dag.py:655] -------------------------------------------
+INFO 2021-12-30 08:52:42,444 [dag.py:784] [DAG] Succ build DAG
+INFO 2021-12-30 08:52:42,448 [dag.py:816] [DAG] start
+INFO 2021-12-30 08:52:42,449 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2021-12-30 08:52:42,454 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2021-12-30 08:52:42,478 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2021-12-30 08:52:42,479 [operator.py:1167] Init cuda env in process 0
+INFO 2021-12-30 08:52:42,479 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2021-12-30 08:52:43,723 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 08:52:44,962 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+INFO 2021-12-30 08:52:48,301 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 08:52:48,303 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 08:52:48,304 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 08:52:50,006 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 110, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:52:50,011 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
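+
+The KeyError above comes from the bbox decoder in paddle_serving_app (image_reader.py line 346), which reads fetch_map[fetch_name + '.lod']; the eight save_infer_model/scale_*.tmp_1 outputs fetched by this service are dense tensors carrying no LoD information, so the '.lod' key never exists. A minimal defensive sketch, assuming img_postprocess is the RCNNPostprocess-style callable that web_service.py wraps:
+
+```
+def safe_postprocess(fetch_dict, img_postprocess):
+    # Hypothetical guard, not the repo's code: image_reader.py builds boxes
+    # from fetch_map[fetch_name + '.lod'], which these dense outputs lack.
+    if not any(k.endswith(".lod") for k in fetch_dict):
+        # No LoD info: report output shapes instead of calling the LoD-based
+        # decoder, avoiding KeyError: 'save_infer_model/scale_7.tmp_1.lod'.
+        return {"bbox_result": str({k: v.shape for k, v in fetch_dict.items()})}
+    return {"bbox_result": str(img_postprocess(fetch_dict, visualize=False))}
+```
+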
+[... 2021-12-30 08:59:43 to 09:03:11: three server restarts with the same configuration, the last changing "http_port" from 18083 to 2009; duplicate startup logs omitted ...]
+INFO 2021-12-30 09:03:20,138 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:03:20,139 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:03:20,139 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:03:22,048 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 102, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 09:03:22,053 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+[... 09:07:27 to 09:08:06: three more restarts ("http_port" 18083 at 09:07:27, 2009 afterwards); a request at 09:07:50 hit the same KeyError 'save_infer_model/scale_7.tmp_1.lod' at web_service.py line 102; duplicate logs omitted ...]
+INFO 2021-12-30 09:08:09,866 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:08:09,868 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:08:09,868 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:08:11,524 [operator.py:1000] (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
+ERROR 2021-12-30 09:08:11,527 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
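+
+This failure is the pipeline contract itself (the 'funticon' typo is in the framework's own message): an op's postprocess must return a dict, whose items are serialized into the response, so returning a list, a string, or None triggers this error. A minimal conforming sketch:
+
+```
+def postprocess_result(fetch_dict):
+    # dict in, dict out: the pipeline serializes each key/value of the
+    # returned dict into the response; any other return type is rejected.
+    return {"bbox_result": str(fetch_dict)}
+```
+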
+[... 09:19:39: server restarted; duplicate startup log omitted ...]
+INFO 2021-12-30 09:19:45,115 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:19:45,116 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:19:45,117 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:19:46,754 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'res_dict' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 101, in postprocess
+ res_dict[b] = {}
+NameError: name 'res_dict' is not defined
+ERROR 2021-12-30 09:19:46,757 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'res_dict' is not defined
+[... 09:20:11: server restarted; duplicate startup log omitted ...]
+INFO 2021-12-30 09:20:13,094 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:20:13,095 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:20:13,095 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 09:20:13,763 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+ERROR 2021-12-30 09:20:15,417 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 102, in postprocess
+ res_dict[b] = {}
+IndexError: list assignment index out of range
+ERROR 2021-12-30 09:20:15,421 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
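+
+The `IndexError` above is unambiguous: line 102 of `web_service.py` assigns `res_dict[b] = {}` while `res_dict` is a list, and list index assignment fails once `b` reaches the list length. A minimal sketch of the fix (names such as `num_outputs` are hypothetical, for illustration only):
+
+```
+# res_dict = []              # list: res_dict[b] = {} raises IndexError for b >= len(res_dict)
+res_dict = {}                # dict: assignment works for any key
+num_outputs = 8              # hypothetical: one entry per fetched output tensor
+for b in range(num_outputs):
+    res_dict[b] = {}
+```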
+[... trimmed: server restart at 09:21:19; CONF warnings, PIPELINE SERVER config, and DAG/predictor startup identical to the 09:20:11 block above ...]
+INFO 2021-12-30 09:21:22,662 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:21:22,663 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:21:22,666 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:21:24,326 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 103, in postprocess
+ res_dict[b] = {}
+IndexError: list assignment index out of range
+ERROR 2021-12-30 09:21:24,330 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
+[... trimmed: server restart at 09:23:31; startup output identical to the block above ...]
+INFO 2021-12-30 09:23:35,932 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:23:35,933 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:23:35,934 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:23:37,722 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'a' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 101, in postprocess
+ for b in range(a.ndim):
+NameError: name 'a' is not defined
+ERROR 2021-12-30 09:23:37,726 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'a' is not defined
+[... trimmed: server restart at 09:24:02; startup output identical to the block above ...]
+INFO 2021-12-30 09:24:04,079 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:24:04,080 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:24:04,081 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 09:24:04,185 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2021-12-30 09:24:05,401 [operator.py:1178] [ppyolo_mbv3|0] Succ init
+ERROR 2021-12-30 09:24:07,046 [operator.py:1000] (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
+ERROR 2021-12-30 09:24:07,049 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
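+
+This message ("output of postprocess funticon must be dict type" is the framework's own wording, typo included) means `postprocess` returned something other than a dict. A minimal sketch of a helper that wraps the fetched tensors into a dict (the helper name is hypothetical):
+
+```
+import numpy as np
+
+def make_postprocess_output(fetch_dict):
+    # The pipeline rejects any postprocess return value that is not a dict,
+    # so wrap the fetched tensors into one, with plain-string values.
+    return {name: str(np.array(value).tolist()) for name, value in fetch_dict.items()}
+```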
+[... trimmed: server restart at 09:25:54; startup output identical to the block above ...]
+INFO 2021-12-30 09:25:56,651 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:25:56,652 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:25:56,652 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 09:25:58,333 [dag.py:404] (data_id=0 log_id=0) Succ predict
+ERROR 2021-12-30 09:25:58,334 [operator.py:1487] (logid=0) Failed to pack RPC response package:
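+
+Here the prediction itself succeeded ("Succ predict") but the RPC response could not be packed. One common cause, and a plausible one given the fetch list above, is returning raw numpy arrays in the result dict; encoding the values as JSON strings before returning avoids it (sketch only, helper name hypothetical):
+
+```
+import json
+import numpy as np
+
+def to_serializable(res_dict):
+    # Raw numpy arrays cannot be packed into the RPC response package;
+    # convert every value to a JSON string first.
+    return {k: json.dumps(np.array(v).tolist()) for k, v in res_dict.items()}
+```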
+[... trimmed: server restart at 09:39:07; startup output identical to the block above ...]
+INFO 2021-12-30 09:39:25,749 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:39:25,750 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:39:25,751 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 09:39:27,547 [dag.py:404] (data_id=0 log_id=0) Succ predict
+INFO 2021-12-30 09:40:00,135 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:40:00,136 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:40:00,136 [dag.py:368] (data_id=1 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:40:00,249 [operator.py:973] (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: __call__() takes 3 positional arguments but 4 were given
+ERROR 2021-12-30 09:40:00,253 [dag.py:409] (data_id=1 log_id=0) Failed to predict: (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
+[... trimmed: the same TypeError repeated for data_id=2 (09:40:48) and data_id=3 (09:40:53) ...]
+[... trimmed: server restart at 09:42:01; startup output identical to the block above ...]
+INFO 2021-12-30 09:42:04,956 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:42:04,958 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:42:04,958 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:42:06,612 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 09:42:06,616 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+[... trimmed: the same TypeError repeated for data_id=1 (09:42:19) ...]
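+
+Both of the preceding TypeErrors are signature mismatches. Taken together they indicate that this paddle-serving-server release calls `postprocess` with exactly three arguments (`input_dicts`, `fetch_dict`, and the `log_id` from `logid_dict.get(data_id)` in the traceback): a definition that adds a `data_id` parameter raises "missing 1 required positional argument: 'log_id'", while routing the call through a callable whose `__call__` accepts fewer parameters raises "takes 3 positional arguments but 4 were given". A minimal sketch of a matching Op, assuming the import layout of the Serving pipeline examples (the class name is hypothetical):
+
+```
+from paddle_serving_server.web_service import Op
+
+class DetectorOp(Op):
+    def postprocess(self, input_dicts, fetch_dict, log_id):
+        # This 0.6.x release passes exactly these three arguments;
+        # newer releases add a data_id parameter before log_id.
+        # The return value must be a dict of serializable values.
+        return {"res": str(fetch_dict)}
+```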
+[... trimmed: server restart at 09:42:23; startup output identical to the block above ...]
+INFO 2021-12-30 09:42:26,701 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:42:26,702 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:42:26,702 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 09:42:28,405 [dag.py:404] (data_id=0 log_id=0) Succ predict
+INFO 2021-12-30 09:43:10,878 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 09:43:10,879 [operator.py:1426] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 09:43:10,879 [dag.py:368] (data_id=1 log_id=0) Succ Generate ID
+ERROR 2021-12-30 09:43:10,985 [operator.py:973] (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: __call__() takes 3 positional arguments but 4 were given
+ERROR 2021-12-30 09:43:10,988 [dag.py:409] (data_id=1 log_id=0) Failed to predict: (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
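+
+Note on the failure above: the pipeline framework invokes the Op hook as `op.postprocess(input_dicts, fetch_dict, log_id)`. The `__call__() takes 3 positional arguments but 4 were given` message indicates that the name `postprocess` was bound to a callable expecting only two inputs (judging from the later tracebacks, the PicoDet box decoder), so whatever is bound there must accept the framework's three arguments. A minimal sketch of a compatible override, assuming the paddle-serving 0.6.x pipeline `Op` API, where the hook returns a dict of printable results (the class name is illustrative; the server registers the op as `ppyolo_mbv3`):
+
+```
+from paddle_serving_server.web_service import Op
+
+
+class PPYoloMbv3Op(Op):
+    def postprocess(self, input_dicts, fetch_dict, log_id):
+        # Accept the trailing log_id the framework passes; a two-parameter
+        # signature raises the TypeError seen in the log above.
+        return {k: str(v) for k, v in fetch_dict.items()}
+```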
+INFO 2021-12-30 11:05:40,431 [pipeline_server.py:51] (log_id=0) inference request name:recognition self.name:ppyolo_mbv3
+ERROR 2021-12-30 11:05:40,432 [pipeline_server.py:55] (log_id=0) name dismatch error. request.name:recognition,server.name=ppyolo_mbv3
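+
+The `name dismatch error` above is a routing problem rather than a model problem: the pipeline HTTP service exposes one endpoint per op, and the op name in the request URL must match the op registered on the server (`ppyolo_mbv3` here, on `http_port` 2009 per the config dump). A request addressed to a `recognition` op is rejected. A minimal client sketch under these assumptions (host and image path are illustrative; the key/value JSON payload follows the pipeline HTTP convention):
+
+```
+import base64
+
+import requests
+
+# The op name in the URL must match the server-side op name ("ppyolo_mbv3").
+url = "http://127.0.0.1:2009/ppyolo_mbv3/prediction"
+with open("test.jpg", "rb") as f:
+    image = base64.b64encode(f.read()).decode("utf8")
+
+resp = requests.post(url, json={"key": ["image"], "value": [image]})
+print(resp.json())
+```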
+INFO 2021-12-30 11:37:52,375 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 11:37:52,377 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 11:37:52,377 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+ERROR 2021-12-30 11:37:54,033 [operator.py:976] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 972, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 92, in postprocess
+ np_boxes, np_boxes_num = self.postprocess(np_score_list, np_boxes_list)
+TypeError: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 11:37:54,039 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
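+
+This second failure is self-shadowing: at `web_service.py` line 92 the Op calls `self.postprocess(np_score_list, np_boxes_list)`, intending the PicoDet box decoder, but `self.postprocess` now resolves to the Op's own framework hook, which requires `log_id`. Judging from a later traceback (which shows `self.post_process = PicoDetPostProcess(`) and the successful requests that follow, the fix is to keep the decoder under a name distinct from the hook. A sketch under that assumption — `PicoDetPostProcess` is taken from the traceback and assumed importable in web_service.py; its constructor arguments and the split of the eight fetched tensors into four score and four box heads are illustrative, as neither is shown in the log:
+
+```
+from paddle_serving_server.web_service import Op
+
+
+class PPYoloMbv3Op(Op):
+    def init_op(self):
+        # Bind the decoder to its own attribute so it does not shadow the
+        # Op.postprocess hook; constructor args are not shown in the log.
+        self.picodet_decode = PicoDetPostProcess(...)
+
+    def postprocess(self, input_dicts, fetch_dict, log_id):
+        # Assumed layout: first four fetch tensors are scores, last four boxes.
+        outs = [fetch_dict[k] for k in sorted(fetch_dict)]
+        np_score_list, np_boxes_list = outs[:4], outs[4:]
+        np_boxes, np_boxes_num = self.picodet_decode(np_score_list, np_boxes_list)
+        return {"boxes": str(np_boxes), "boxes_num": str(np_boxes_num)}
+```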
+INFO 2021-12-30 11:40:08,937 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 11:40:08,939 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 11:40:08,939 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2021-12-30 11:40:10,596 [dag.py:404] (data_id=0 log_id=0) Succ predict
+INFO 2021-12-30 11:40:13,169 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 11:40:13,169 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 11:40:13,170 [dag.py:368] (data_id=1 log_id=0) Succ Generate ID
+INFO 2021-12-30 11:40:13,272 [dag.py:404] (data_id=1 log_id=0) Succ predict
+INFO 2021-12-30 11:40:59,103 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2021-12-30 11:40:59,104 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2021-12-30 11:40:59,105 [dag.py:368] (data_id=2 log_id=0) Succ Generate ID
+INFO 2021-12-30 11:40:59,221 [dag.py:404] (data_id=2 log_id=0) Succ predict
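+
+The PIPELINE SERVER dump above fixes the service endpoints: `http_port` 2009, `rpc_port` 9999, and a single op named `ppyolo_mbv3`. Under Paddle Serving's pipeline URL convention, HTTP requests go to `http://<ip>:<http_port>/<op_name>/prediction`. A minimal client sketch (assuming the op's preprocess reads one base64-encoded image under the key `image`, which depends on this project's web_service.py):
+```python
+import base64
+import json
+
+import requests  # assumed available: pip install requests
+
+with open("test.jpg", "rb") as f:  # hypothetical sample image
+    image = base64.b64encode(f.read()).decode("utf8")
+
+url = "http://127.0.0.1:2009/ppyolo_mbv3/prediction"  # port and op name from the dump above
+data = {"key": ["image"], "value": [image]}
+resp = requests.post(url=url, data=json.dumps(data))
+print(resp.json())
+```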
+WARNING 2022-02-14 09:24:33,065 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-14 09:24:33,072 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2022-02-14 09:24:33,072 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-14 09:24:33,072 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2022-02-14 09:24:33,072 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-14 09:24:33,072 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-14 09:24:33,073 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2022-02-14 09:24:33,074 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2022-02-14 09:24:33,074 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-14 09:24:33,074 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-14 09:24:33,074 [operator.py:163] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-14 09:24:33,074 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-14 09:24:33,075 [operator.py:267] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-14 09:24:33,075 [pipeline_server.py:204] ============= PIPELINE SERVER =============
+INFO 2022-02-14 09:24:33,075 [pipeline_server.py:207]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0
+ },
+ "http_port":2009,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2022-02-14 09:24:33,075 [pipeline_server.py:212] -------------------------------------------
+INFO 2022-02-14 09:24:33,075 [operator.py:290] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2022-02-14 09:24:33,096 [dag.py:493] [DAG] Succ init
+INFO 2022-02-14 09:24:33,096 [dag.py:651] ================= USED OP =================
+INFO 2022-02-14 09:24:33,097 [dag.py:654] ppyolo_mbv3
+INFO 2022-02-14 09:24:33,097 [dag.py:655] -------------------------------------------
+INFO 2022-02-14 09:24:33,137 [dag.py:784] [DAG] Succ build DAG
+INFO 2022-02-14 09:24:33,141 [dag.py:816] [DAG] start
+INFO 2022-02-14 09:24:33,142 [dag.py:181] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-14 09:24:33,147 [pipeline_server.py:47] [PipelineServicer] succ init
+INFO 2022-02-14 09:24:33,174 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-14 09:24:33,175 [operator.py:1170] Init cuda env in process 0
+INFO 2022-02-14 09:24:33,175 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-14 09:24:34,409 [local_predict.py:115] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+CRITICAL 2022-02-14 09:24:35,602 [operator.py:1179] [ppyolo_mbv3|0] failed to init op: [Errno 2] No such file or directory: 'label_list.txt'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1174, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1363, in _initialize
+ self.init_op()
+ File "web_service.py", line 30, in init_op
+ self.img_postprocess = RCNNPostprocess("label_list.txt", "output")
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 288, in __init__
+ with open(label_file) as fin:
+FileNotFoundError: [Errno 2] No such file or directory: 'label_list.txt'
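+
+The traceback shows `RCNNPostprocess` opening `label_list.txt` as a relative path, so it is resolved against the process's current working directory; launching the service from any other directory raises this FileNotFoundError. Either start the server from the directory holding label_list.txt or resolve an absolute path, as in this sketch of init_op (resolving next to web_service.py is an assumption about where the file lives):
+```python
+import os
+
+from paddle_serving_app.reader import RCNNPostprocess
+
+def init_op(self):
+    # Anchor the label file to this script's directory instead of the CWD.
+    here = os.path.dirname(os.path.abspath(__file__))
+    self.img_postprocess = RCNNPostprocess(os.path.join(here, "label_list.txt"), "output")
+```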
+INFO 2022-02-14 09:24:57,166 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2022-02-14 09:24:57,168 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2022-02-14 09:24:57,169 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-14 09:26:05,574 [pipeline_server.py:51] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3
+INFO 2022-02-14 09:26:05,576 [operator.py:1429] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction
+INFO 2022-02-14 09:26:05,577 [dag.py:368] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-14 09:26:06,239 [operator.py:1181] [ppyolo_mbv3|0] Succ init
+INFO 2022-02-14 09:26:07,900 [dag.py:404] (data_id=0 log_id=0) Succ predict
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
+INFO 2022-02-16 16:56:51,846 [operator.py:181] local_service_conf: {'client_type': 'local_predictor', 'device_type': 2, 'devices': '0', 'fetch_list': ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], 'model_config': 'serving_server/', 'workdir': '', 'thread_num': 2, 'mem_optim': True, 'ir_optim': False, 'precision': 'fp32', 'use_calib': False, 'use_mkldnn': False, 'mkldnn_cache_capacity': 0}
+INFO 2022-02-16 16:56:51,847 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1'], precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-16 16:56:51,847 [operator.py:285] ppyolo_mbv3
+ input_ops: @DAGExecutor,
+ server_endpoints: None
+ fetch_list: ['save_infer_model/scale_0.tmp_1', 'save_infer_model/scale_1.tmp_1', 'save_infer_model/scale_2.tmp_1', 'save_infer_model/scale_3.tmp_1', 'save_infer_model/scale_4.tmp_1', 'save_infer_model/scale_5.tmp_1', 'save_infer_model/scale_6.tmp_1', 'save_infer_model/scale_7.tmp_1']
+ client_config: serving_server/serving_server_conf.prototxt
+ concurrency: 1,
+ timeout(s): -1,
+ retry: 1,
+ batch_size: 1,
+ auto_batching_timeout(s): None
+INFO 2022-02-16 16:56:51,847 [pipeline_server.py:215] ============= PIPELINE SERVER =============
+INFO 2022-02-16 16:56:51,847 [pipeline_server.py:218]
+{
+ "dag":{
+ "is_thread_op":false,
+ "tracer":{
+ "interval_s":30
+ },
+ "retry":1,
+ "client_type":"brpc",
+ "use_profile":false,
+ "channel_size":0,
+ "channel_recv_frist_arrive":false
+ },
+ "http_port":2009,
+ "op":{
+ "ppyolo_mbv3":{
+ "concurrency":1,
+ "local_service_conf":{
+ "client_type":"local_predictor",
+ "device_type":2,
+ "devices":"0",
+ "fetch_list":[
+ "save_infer_model/scale_0.tmp_1",
+ "save_infer_model/scale_1.tmp_1",
+ "save_infer_model/scale_2.tmp_1",
+ "save_infer_model/scale_3.tmp_1",
+ "save_infer_model/scale_4.tmp_1",
+ "save_infer_model/scale_5.tmp_1",
+ "save_infer_model/scale_6.tmp_1",
+ "save_infer_model/scale_7.tmp_1"
+ ],
+ "model_config":"serving_server/",
+ "workdir":"",
+ "thread_num":2,
+ "mem_optim":true,
+ "ir_optim":false,
+ "precision":"fp32",
+ "use_calib":false,
+ "use_mkldnn":false,
+ "mkldnn_cache_capacity":0
+ },
+ "timeout":-1,
+ "retry":1,
+ "batch_size":1,
+ "auto_batching_timeout":-1
+ }
+ },
+ "rpc_port":9999,
+ "worker_num":20,
+ "build_dag_each_worker":false
+}
+INFO 2022-02-16 16:56:51,847 [pipeline_server.py:223] -------------------------------------------
+INFO 2022-02-16 16:56:51,847 [operator.py:308] Op(ppyolo_mbv3) use local rpc service at port: []
+INFO 2022-02-16 16:56:51,869 [dag.py:496] [DAG] Succ init
+INFO 2022-02-16 16:56:51,869 [dag.py:659] ================= USED OP =================
+INFO 2022-02-16 16:56:51,869 [dag.py:662] ppyolo_mbv3
+INFO 2022-02-16 16:56:51,870 [dag.py:663] -------------------------------------------
+INFO 2022-02-16 16:56:51,870 [dag.py:680] ================== DAG ====================
+INFO 2022-02-16 16:56:51,870 [dag.py:682] (VIEW 0)
+INFO 2022-02-16 16:56:51,870 [dag.py:684] [@DAGExecutor]
+INFO 2022-02-16 16:56:51,870 [dag.py:686] - ppyolo_mbv3
+INFO 2022-02-16 16:56:51,870 [dag.py:682] (VIEW 1)
+INFO 2022-02-16 16:56:51,870 [dag.py:684] [ppyolo_mbv3]
+INFO 2022-02-16 16:56:51,870 [dag.py:687] -------------------------------------------
+INFO 2022-02-16 16:56:51,885 [dag.py:730] op:ppyolo_mbv3 add input channel.
+INFO 2022-02-16 16:56:51,895 [dag.py:759] last op:ppyolo_mbv3 add output channel
+INFO 2022-02-16 16:56:51,895 [dag.py:800] [DAG] Succ build DAG
+INFO 2022-02-16 16:56:51,899 [dag.py:832] [DAG] start
+INFO 2022-02-16 16:56:51,899 [dag.py:182] [DAG] set in channel succ, name [@DAGExecutor]
+INFO 2022-02-16 16:56:51,905 [pipeline_server.py:51] [PipelineServicer] succ init
+INFO 2022-02-16 16:56:51,911 [local_service_handler.py:172] Models(serving_server/) will be launched by device gpu. use_gpu:True, use_trt:True, use_lite:False, use_xpu:False, device_type:2, devices:[0], mem_optim:True, ir_optim:False, use_profile:False, thread_num:2, client_type:local_predictor, fetch_names:None, precision:fp32, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None
+INFO 2022-02-16 16:56:51,912 [operator.py:1306] Init cuda env in process 0
+INFO 2022-02-16 16:56:51,912 [local_service_handler.py:208] GET_CLIENT : concurrency_idx=0, device_num=1
+INFO 2022-02-16 16:56:52,885 [local_predict.py:153] LocalPredictor load_model_config params: model_path:serving_server/, use_gpu:True, gpu_id:0, use_profile:False, thread_num:2, mem_optim:True, ir_optim:False, use_trt:True, use_lite:False, use_xpu:False, precision:fp32, use_calib:False, use_mkldnn:False, mkldnn_cache_capacity:0, mkldnn_op_list:None, mkldnn_bf16_op_list:None, use_feed_fetch_ops:False,
+INFO 2022-02-16 16:56:55,000 [operator.py:1317] [ppyolo_mbv3|0] Succ init
+INFO 2022-02-16 17:05:47,771 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002347.7711458
+INFO 2022-02-16 17:05:47,772 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002347.7724555
+INFO 2022-02-16 17:05:47,772 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:05:50,421 [dag.py:405] (data_id=0 log_id=0) Succ predict
+INFO 2022-02-16 17:05:50,447 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002350.4476814
+INFO 2022-02-16 17:05:50,448 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002350.448209
+INFO 2022-02-16 17:05:50,448 [dag.py:369] (data_id=1 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:05:50,527 [dag.py:405] (data_id=1 log_id=0) Succ predict
+INFO 2022-02-16 17:07:00,293 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002420.2930179
+INFO 2022-02-16 17:07:00,293 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002420.2936485
+INFO 2022-02-16 17:07:00,293 [dag.py:369] (data_id=2 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:07:00,362 [dag.py:405] (data_id=2 log_id=0) Succ predict
+INFO 2022-02-16 17:10:00,414 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002600.4143305
+INFO 2022-02-16 17:10:00,415 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002600.4155881
+INFO 2022-02-16 17:10:00,416 [dag.py:369] (data_id=0 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:10:02,981 [dag.py:405] (data_id=0 log_id=0) Succ predict
+INFO 2022-02-16 17:14:52,096 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002892.0961268
+INFO 2022-02-16 17:14:52,096 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002892.096749
+INFO 2022-02-16 17:14:52,097 [dag.py:369] (data_id=1 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:14:52,172 [dag.py:405] (data_id=1 log_id=0) Succ predict
+INFO 2022-02-16 17:15:06,391 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645002906.390965
+INFO 2022-02-16 17:15:06,391 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645002906.3916032
+INFO 2022-02-16 17:15:06,391 [dag.py:369] (data_id=2 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:15:06,459 [dag.py:405] (data_id=2 log_id=0) Succ predict
+INFO 2022-02-16 17:21:25,074 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003285.0745604
+INFO 2022-02-16 17:21:25,075 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003285.075034
+INFO 2022-02-16 17:21:25,075 [dag.py:369] (data_id=3 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:21:25,141 [dag.py:405] (data_id=3 log_id=0) Succ predict
+INFO 2022-02-16 17:21:53,963 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003313.9639173
+INFO 2022-02-16 17:21:53,964 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003313.9646337
+INFO 2022-02-16 17:21:53,964 [dag.py:369] (data_id=4 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:21:54,033 [dag.py:405] (data_id=4 log_id=0) Succ predict
+INFO 2022-02-16 17:22:47,816 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003367.8165774
+INFO 2022-02-16 17:22:47,817 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003367.8172152
+INFO 2022-02-16 17:22:47,817 [dag.py:369] (data_id=5 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:22:47,886 [dag.py:405] (data_id=5 log_id=0) Succ predict
+INFO 2022-02-16 17:23:03,882 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003383.882905
+INFO 2022-02-16 17:23:03,883 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003383.8835778
+INFO 2022-02-16 17:23:03,883 [dag.py:369] (data_id=6 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:23:03,951 [dag.py:405] (data_id=6 log_id=0) Succ predict
+INFO 2022-02-16 17:23:30,593 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003410.5935636
+INFO 2022-02-16 17:23:30,594 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003410.5942206
+INFO 2022-02-16 17:23:30,594 [dag.py:369] (data_id=7 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:23:30,662 [dag.py:405] (data_id=7 log_id=0) Succ predict
+INFO 2022-02-16 17:24:12,780 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003452.7805502
+INFO 2022-02-16 17:24:12,781 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003452.7814457
+INFO 2022-02-16 17:24:12,781 [dag.py:369] (data_id=8 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:24:12,853 [dag.py:405] (data_id=8 log_id=0) Succ predict
+INFO 2022-02-16 17:24:23,797 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003463.7971623
+INFO 2022-02-16 17:24:23,797 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003463.7978337
+INFO 2022-02-16 17:24:23,798 [dag.py:369] (data_id=9 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:24:23,867 [dag.py:405] (data_id=9 log_id=0) Succ predict
+INFO 2022-02-16 17:24:43,980 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003483.9801416
+INFO 2022-02-16 17:24:43,980 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003483.9806335
+INFO 2022-02-16 17:24:43,980 [dag.py:369] (data_id=10 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:24:44,049 [dag.py:405] (data_id=10 log_id=0) Succ predict
+INFO 2022-02-16 17:24:54,159 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003494.15903
+INFO 2022-02-16 17:24:54,159 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003494.1595104
+INFO 2022-02-16 17:24:54,159 [dag.py:369] (data_id=11 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:24:54,226 [dag.py:405] (data_id=11 log_id=0) Succ predict
+INFO 2022-02-16 17:25:12,386 [pipeline_server.py:56] (log_id=0) inference request name:ppyolo_mbv3 self.name:ppyolo_mbv3 time:1645003512.3861694
+INFO 2022-02-16 17:25:12,386 [operator.py:1723] RequestOp unpack one request. log_id:0, clientip: name:ppyolo_mbv3, method:prediction, time:1645003512.3868122
+INFO 2022-02-16 17:25:12,387 [dag.py:369] (data_id=12 log_id=0) Succ Generate ID
+INFO 2022-02-16 17:25:12,455 [dag.py:405] (data_id=12 log_id=0) Succ predict
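
This `pipeline.log` records a clean run: the server initializes the DAG, loads the model under TensorRT, and then serves a dozen requests, with the first request paying the TensorRT warm-up cost (~2.5 s) and each later predict completing in roughly 70 ms. A minimal client sketch that would generate requests like the ones logged is shown below; the `http_port` 2009 and the op name `ppyolo_mbv3` are taken from the config dump in this log, while the host and image path are illustrative assumptions.

```
import base64
import json

import requests

# Assumed endpoint: port and op name come from the config dump above;
# host and image filename are placeholders.
url = "http://127.0.0.1:2009/ppyolo_mbv3/prediction"

with open("elevator_frame.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode("utf8")

data = {"key": ["image"], "value": [image]}
resp = requests.post(url=url, data=json.dumps(data))
print(resp.json())  # on success, the value field carries the bbox_result string
```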
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log.wf b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log.wf
new file mode 100644
index 000000000..377d4cf33
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.log.wf
@@ -0,0 +1,2000 @@
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 02:45:16,604 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 02:45:16,605 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 02:45:16,606 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 02:45:45,873 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 76, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 02:45:45,877 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
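
The first recorded failure mode is a postprocess `KeyError`: `paddle_serving_app`'s generic `_get_bbox_result` helper indexes `fetch_map[fetch_name + '.lod']`, but the exported model returns plain dense tensors (`scale_0` … `scale_7`) with no `.lod` entries, so the off-the-shelf `img_postprocess` cannot be reused as-is in `web_service.py`. A minimal sketch of the failing access and a defensive variant follows; the fetch name is taken from the fetch_list above, and the stand-in array is illustrative.

```
import numpy as np

fetch_name = "save_infer_model/scale_0.tmp_1"        # from the fetch_list in this log
fetch_map = {fetch_name: np.zeros((1, 80, 52, 52))}  # stand-in dense output, no LoD entry

# Failing pattern from image_reader.py:
#   lod = [fetch_map[fetch_name + '.lod']]   -> KeyError
# Defensive variant that makes the mismatch explicit:
lod = fetch_map.get(fetch_name + ".lod")
if lod is None:
    # Dense multi-head outputs need a model-specific decode step instead
    # of the LoD-based bbox reader.
    print("no LoD entry; a custom postprocess for the dense outputs is required")
```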
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:07:14,510 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:07:14,511 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:07:14,512 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 03:07:22,696 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:07:22,700 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 03:10:13,372 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:10:13,373 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:10:13,374 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 03:10:21,260 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 78, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:10:21,264 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 03:11:47,323 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 03:11:47,324 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 03:11:47,325 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 03:11:55,757 [operator.py:969] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 965, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 78, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 429, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 03:11:55,761 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:35:58,321 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:35:58,322 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:35:58,323 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:37:04,889 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:37:04,890 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:37:04,891 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 05:37:16,537 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:37:16,542 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:40:11,809 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:40:11,810 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:40:11,811 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:40:11,811 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 05:40:18,654 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:40:18,658 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:42:11,543 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 05:42:11,544 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 05:42:11,545 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-29 05:42:19,333 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 77, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 05:42:19,340 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+WARNING 2021-12-29 06:08:54,355 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:08:54,355 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:08:54,355 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:08:54,356 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:08:54,357 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:08:56,841 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'yaml' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 33, in init_op
+ yml_conf = yaml.safe_load(f)
+NameError: name 'yaml' is not defined
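
This CRITICAL failure reads straight off the traceback: `init_op` in `web_service.py` calls `yaml.safe_load(f)` without importing `yaml`. A minimal sketch of the corrected pattern is below; only the import and the `safe_load` call are taken from the traceback, and the config filename is an assumption.

```
import yaml  # the import missing from the failing web_service.py


def load_infer_config(path="infer_cfg.yml"):  # illustrative filename
    with open(path) as f:
        return yaml.safe_load(f)  # the call at web_service.py line 33 in the traceback
```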
+WARNING 2021-12-29 06:10:19,803 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:10:19,803 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:10:19,803 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:10:19,803 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:10:19,804 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:10:19,805 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:10:19,805 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:10:19,805 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:10:19,805 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:10:22,339 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'yaml' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 33, in init_op
+ yml_conf = yaml.safe_load(f)
+NameError: name 'yaml' is not defined
+WARNING 2021-12-29 06:12:08,931 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:12:08,931 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:12:08,931 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:12:08,932 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:12:08,933 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:12:08,933 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:12:08,933 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:12:08,933 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:12:08,933 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:12:11,443 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 40, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
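
The next several restarts fail while building the preprocess pipeline: the YAML op config carries an `interp` key that the serving-side preprocess class instantiated via `eval(op_type)(**new_op_info)` does not accept. One hedged way out, sketched below, is to drop keys the target class rejects before instantiating it; the key list and config layout are assumptions, while `eval(op_type)` mirrors the construction pattern in the traceback.

```
def build_preprocess_op(op_type, op_info, drop_keys=("interp",)):
    """Instantiate a preprocess op, dropping kwargs its __init__ rejects.

    Sketch only: 'interp' is the key the TypeError above complains about,
    and op_type must name a class in scope, as in the failing web_service.py.
    """
    kwargs = {k: v for k, v in op_info.items() if k not in drop_keys}
    return eval(op_type)(**kwargs)
```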
+WARNING 2021-12-29 06:12:47,188 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:12:47,188 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:12:47,189 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:12:47,190 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:12:49,708 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 41, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
+WARNING 2021-12-29 06:15:10,463 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:15:10,464 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:15:10,465 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:15:12,951 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 43, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
+WARNING 2021-12-29 06:17:36,321 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:17:36,321 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:17:36,322 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:17:36,323 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:17:38,816 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 44, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
+WARNING 2021-12-29 06:18:17,409 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:18:17,409 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:18:17,409 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:18:17,410 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:18:17,411 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:18:17,411 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:18:17,411 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:18:17,411 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:18:17,411 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:18:19,908 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: __init__() got an unexpected keyword argument 'interp'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 45, in init_op
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+TypeError: __init__() got an unexpected keyword argument 'interp'
+WARNING 2021-12-29 06:19:57,871 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:19:57,872 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:19:57,873 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:20:00,415 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 45, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
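
The remaining init failures are a plain unbound name: the loop now appends to `preprocess_ops`, but earlier runs in this log appended to `self.preprocess_ops`, and nothing binds a local `preprocess_ops` before the loop. A sketch that binds the list first is below; the `Preprocess` config layout and the example op names are assumptions, while the append line matches the traceback.

```
def build_preprocess_pipeline(yml_conf):
    preprocess_ops = []                      # bind the name before appending to it
    for op_info in yml_conf["Preprocess"]:   # assumed config layout
        new_op_info = op_info.copy()
        op_type = new_op_info.pop("type")    # e.g. "Resize", "NormalizeImage"
        preprocess_ops.append(eval(op_type)(**new_op_info))  # pattern from the traceback
    return preprocess_ops
```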
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:21:28,629 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:21:28,630 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:21:28,631 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:21:31,123 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 46, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
+WARNING 2021-12-29 06:25:12,051 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:25:12,051 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:25:12,052 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:25:12,053 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:25:14,695 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 46, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
+WARNING 2021-12-29 06:25:29,445 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-29 06:25:29,445 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:25:29,445 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-29 06:25:29,445 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-29 06:25:29,445 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-29 06:25:29,446 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-29 06:25:29,447 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-29 06:25:29,447 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-29 06:25:29,447 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+CRITICAL 2021-12-29 06:25:31,935 [operator.py:1176] [ppyolo_mbv3|0] failed to init op: name 'preprocess_ops' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1171, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1360, in _initialize
+ self.init_op()
+ File "web_service.py", line 47, in init_op
+ preprocess_ops.append(eval(op_type)(**new_op_info))
+NameError: name 'preprocess_ops' is not defined
+ERROR 2021-12-29 06:40:25,105 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 54, in preprocess
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+UnboundLocalError: local variable 'im_info' referenced before assignment
+ERROR 2021-12-29 06:40:25,111 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: local variable 'im_info' referenced before assignment
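+
+This UnboundLocalError means `preprocess` assigns into `im_info` before the
+dict is created. A hedged sketch of the fix; the `scale_factor` entry is an
+assumption based on the usual PicoDet feed vars and its value is illustrative:
+
+```
+import numpy as np
+
+def build_im_info(im):
+    im_info = {}  # create the dict before assigning keys into it
+    im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+    im_info['scale_factor'] = np.array([1.0, 1.0], dtype=np.float32)
+    return im_info
+```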
+ERROR 2021-12-29 06:46:25,581 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 71, in preprocess
+ im = self.img_preprocess(im)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 492, in __call__
+ img = t(img)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 642, in __call__
+ return F.normalize(img, self.mean, self.std, self.channel_first)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/functional.py", line 33, in normalize
+ img -= img_mean
+ValueError: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
+ERROR 2021-12-29 06:46:25,587 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: operands could not be broadcast together with shapes (3,640,640) (1,1,3) (3,640,640)
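+
+The shapes in this ValueError say the image was already transposed to CHW
+(3, 640, 640) when `Normalize` tried to subtract an HWC-shaped (1, 1, 3) mean.
+With this reader, `Normalize` (built with `channel_first=False`) must run while
+the image is still HWC, and `Transpose` must come last. A sketch under those
+assumptions; the mean/std values are illustrative:
+
+```
+from paddle_serving_app.reader import (Sequential, BGR2RGB, Div, Normalize,
+                                       Resize, Transpose)
+
+img_preprocess = Sequential([
+    BGR2RGB(),
+    Div(255.0),
+    Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),  # HWC
+    Resize((640, 640)),
+    Transpose((2, 0, 1)),  # to CHW only after normalization
+])
+```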
+ERROR 2021-12-29 06:51:07,885 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 89, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 430, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_0.tmp_1.lod'
+ERROR 2021-12-29 06:51:07,889 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_0.tmp_1.lod'
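+
+`_get_bbox_result` looks up `fetch_name + '.lod'`, which only exists when the
+exported model returns a variable-length (LoD) bbox tensor, i.e. when NMS is
+fused into the saved inference model. A model exported without that output has
+no `.lod` entry in the fetch map, so the generic detection postprocessor fails
+here. A defensive sketch (hypothetical helper, not library code):
+
+```
+def get_bbox_lod(fetch_map, fetch_name):
+    lod_key = fetch_name + '.lod'
+    if lod_key not in fetch_map:
+        raise ValueError(
+            'fetch var %r carries no LoD info; the model was likely '
+            'exported without fused NMS' % fetch_name)
+    return [fetch_map[lod_key]]
+```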
+ERROR 2021-12-29 07:16:32,263 [operator.py:695] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: name 'im_shape' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 678, in _run_preprocess
+ parsed_data, data_id, logid_dict.get(data_id))
+ File "web_service.py", line 68, in preprocess
+ "im_shape": im_info[im_shape],#np.array(list(im.shape[1:])).reshape(-1)[np.newaxis,:],
+NameError: name 'im_shape' is not defined
+ERROR 2021-12-29 07:16:32,267 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to preprocess: name 'im_shape' is not defined
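+
+Here `im_info[im_shape]` indexes with a bare name instead of the string key
+`'im_shape'`; the commented-out expression on the same line shows the intent.
+The fix is simply quoting the key (the `scale_factor` feed var is an assumed
+companion entry):
+
+```
+feed = {
+    "image": im,
+    "im_shape": im_info['im_shape'],  # string key, not a bare name
+    "scale_factor": im_info['scale_factor'],
+}
+```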
+ERROR 2021-12-29 07:50:05,801 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_4.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 87, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_4.tmp_1.lod'
+ERROR 2021-12-29 07:50:05,806 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_4.tmp_1.lod'
+ERROR 2021-12-30 06:53:28,471 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_3.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 81, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_3.tmp_1.lod'
+ERROR 2021-12-30 06:53:28,476 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_3.tmp_1.lod'
+ERROR 2021-12-30 07:57:15,249 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 81, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 07:57:15,253 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
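+
+The fetch var name keeps changing across re-exports (`scale_0`, `scale_4`,
+`scale_3`, `scale_7`), but the KeyError is the same each time: none of the
+exported heads carries LoD info. Rather than hard-coding a name, it is safer
+to read the names out of the fetch map at runtime; a hypothetical sketch:
+
+```
+def output_var_names(fetch_dict):
+    # Keep only real output tensors; '.lod' entries, when present,
+    # are bookkeeping added by the serving client.
+    return sorted(k for k in fetch_dict if not k.endswith('.lod'))
+```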
+ERROR 2021-12-30 08:12:14,700 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 82, in postprocess
+ np_score_list.append(fetch_dict[out_idx])
+KeyError: 0
+ERROR 2021-12-30 08:12:14,707 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
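+
+`fetch_dict` maps output variable names (strings) to arrays, so indexing it
+with an integer (`fetch_dict[out_idx]`) raises `KeyError: 0`. For a multi-head
+model like PicoDet, collect the score and box heads by name instead; the
+half-scores/half-boxes ordering below is an illustrative assumption:
+
+```
+names = sorted(fetch_dict.keys())
+num_outs = len(names) // 2  # assumed: half score heads, half box heads
+np_score_list = [fetch_dict[n] for n in names[:num_outs]]
+np_boxes_list = [fetch_dict[n] for n in names[num_outs:]]
+```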
+WARNING 2021-12-30 08:20:13,195 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:20:13,196 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:20:13,197 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-30 08:20:20,892 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 84, in postprocess
+ np_score_list.append(fetch_dict[i])
+KeyError: 0
+ERROR 2021-12-30 08:20:20,896 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 0
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] client_type not set, use default: brpc
+WARNING 2021-12-30 08:31:54,772 [pipeline_server.py:496] [CONF] use_profile not set, use default: False
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] channel_size not set, use default: 0
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] timeout not set, use default: -1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] retry not set, use default: 1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] batch_size not set, use default: 1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] workdir not set, use default:
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] thread_num not set, use default: 2
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] mem_optim not set, use default: True
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] ir_optim not set, use default: False
+WARNING 2021-12-30 08:31:54,773 [pipeline_server.py:496] [CONF] precision not set, use default: fp32
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] use_calib not set, use default: False
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] use_mkldnn not set, use default: False
+WARNING 2021-12-30 08:31:54,774 [pipeline_server.py:496] [CONF] mkldnn_cache_capacity not set, use default: 0
+ERROR 2021-12-30 08:32:01,331 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 91, in postprocess
+ res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 431, in __call__
+ self.clsid2catid)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 346, in _get_bbox_result
+ lod = [fetch_map[fetch_name + '.lod']]
+KeyError: 'save_infer_model/scale_7.tmp_1.lod'
+ERROR 2021-12-30 08:32:01,335 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: 'save_infer_model/scale_7.tmp_1.lod'
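+
+The `KeyError: 'save_infer_model/scale_7.tmp_1.lod'` is raised inside `paddle_serving_app`'s image postprocessor, which expects a `<fetch_name>.lod` entry alongside each bbox output (the traceback shows it reading `fetch_map[fetch_name + '.lod']`). A model whose detection head returns plain fixed-shape tensors provides no such entry, so that postprocessor cannot be applied directly and the boxes have to be decoded from the raw arrays instead. A hedged sketch (the 6-column box layout `[class_id, score, x1, y1, x2, y2]` is an assumption to be checked against the exported model):
+
+```
+def postprocess(self, input_dicts, fetch_dict, log_id):
+    results = []
+    for name, arr in fetch_dict.items():
+        # Assumed layout: each row is [class_id, score, x1, y1, x2, y2].
+        if arr.ndim == 2 and arr.shape[1] == 6:
+            results.extend(arr.tolist())
+    return {"bbox_result": str(results)}
+```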
+ERROR 2021-12-30 09:08:11,524 [operator.py:1000] (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
+ERROR 2021-12-30 09:08:11,527 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (log_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: output of postprocess funticon must be dict type, but get
+ERROR 2021-12-30 09:19:46,754 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'res_dict' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 101, in postprocess
+ res_dict[b] = {}
+NameError: name 'res_dict' is not defined
+ERROR 2021-12-30 09:19:46,757 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'res_dict' is not defined
+ERROR 2021-12-30 09:20:15,417 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 102, in postprocess
+ res_dict[b] = {}
+IndexError: list assignment index out of range
+ERROR 2021-12-30 09:20:15,421 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: list assignment index out of range
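+
+Both the `NameError` and the `IndexError` above point at the result container: `res_dict` is assigned by key before it exists, and once created as a list, `res_dict[b] = {}` fails for any index beyond its length. Initializing it as an empty dict avoids both; a minimal sketch, with `boxes` standing in for whatever array is being split up:
+
+```
+import numpy as np
+
+boxes = np.zeros((1, 6))  # stand-in for the detector output being split up
+res_dict = {}             # a dict accepts arbitrary keys, unlike list index assignment
+for b in range(boxes.ndim):
+    res_dict[b] = {}
+```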
+ERROR 2021-12-30 09:23:37,722 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'a' is not defined
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 101, in postprocess
+ for b in range(a.ndim):
+NameError: name 'a' is not defined
+ERROR 2021-12-30 09:23:37,726 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: name 'a' is not defined
+ERROR 2021-12-30 09:25:58,334 [operator.py:1487] (logid=0) Failed to pack RPC response package:
+ERROR 2021-12-30 09:40:00,249 [operator.py:973] (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: __call__() takes 3 positional arguments but 4 were given
+ERROR 2021-12-30 09:40:00,253 [dag.py:409] (data_id=1 log_id=0) Failed to predict: (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: __call__() takes 3 positional arguments but 4 were given
+ERROR 2021-12-30 09:42:06,612 [operator.py:973] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 969, in _run_postprocess
+ logid_dict.get(data_id))
+TypeError: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 09:42:06,616 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
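+
+The two `TypeError`s bracket the same problem from opposite sides: this paddle-serving-server version calls `postprocess` with exactly three arguments (the traceback shows the call ending with `logid_dict.get(data_id)`), so defining it with only `(self, input_dicts, fetch_dict)` receives one argument too many, while adding both `data_id` and `log_id` leaves `log_id` unfilled. For the 0.6.x server used here the override should therefore look like the sketch below (newer releases add a `data_id` parameter, so match the installed version), and, as the earlier "must be dict type" error shows, it must return a dict:
+
+```
+def postprocess(self, input_dicts, fetch_dict, log_id):
+    # Signature expected by the paddle-serving-server 0.6.x pipeline Op;
+    # the return value must be a dict.
+    return {"res": str(fetch_dict)}
+```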
+ERROR 2021-12-30 11:05:40,432 [pipeline_server.py:55] (log_id=0) name dismatch error. request.name:recognition,server.name=ppyolo_mbv3
+ERROR 2021-12-30 11:37:54,033 [operator.py:976] (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 972, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 92, in postprocess
+ np_boxes, np_boxes_num = self.postprocess(np_score_list, np_boxes_list)
+TypeError: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 11:37:54,039 [dag.py:409] (data_id=0 log_id=0) Failed to predict: (data_id=0 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 11:40:01,379 [operator.py:976] (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 972, in _run_postprocess
+ logid_dict.get(data_id))
+ File "web_service.py", line 92, in postprocess
+ self.post_process = PicoDetPostProcess(
+TypeError: postprocess() missing 1 required positional argument: 'log_id'
+ERROR 2021-12-30 11:40:01,383 [dag.py:409] (data_id=1 log_id=0) Failed to predict: (data_id=1 log_id=0) [ppyolo_mbv3|0] Failed to postprocess: postprocess() missing 1 required positional argument: 'log_id'
+CRITICAL 2022-02-14 09:24:35,602 [operator.py:1179] [ppyolo_mbv3|0] failed to init op: [Errno 2] No such file or directory: 'label_list.txt'
+Traceback (most recent call last):
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1174, in _run
+ profiler = self._initialize(is_thread_op, concurrency_idx)
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_server/pipeline/operator.py", line 1363, in _initialize
+ self.init_op()
+ File "web_service.py", line 30, in init_op
+ self.img_postprocess = RCNNPostprocess("label_list.txt", "output")
+ File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddle_serving_app/reader/image_reader.py", line 288, in __init__
+ with open(label_file) as fin:
+FileNotFoundError: [Errno 2] No such file or directory: 'label_list.txt'
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] build_dag_each_worker not set, use default: False
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] client_type not set, use default: brpc
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] use_profile not set, use default: False
+WARNING 2022-02-16 16:56:51,836 [pipeline_server.py:509] [CONF] channel_size not set, use default: 0
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] channel_recv_frist_arrive not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] timeout not set, use default: -1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] retry not set, use default: 1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] batch_size not set, use default: 1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] auto_batching_timeout not set, use default: -1
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] workdir not set, use default:
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] thread_num not set, use default: 2
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] mem_optim not set, use default: True
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] ir_optim not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] precision not set, use default: fp32
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] use_calib not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] use_mkldnn not set, use default: False
+WARNING 2022-02-16 16:56:51,837 [pipeline_server.py:509] [CONF] mkldnn_cache_capacity not set, use default: 0
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.tracer b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.tracer
new file mode 100644
index 000000000..e32c459a2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/PipelineServingLogs/pipeline.tracer
@@ -0,0 +1,6853 @@
+2021-12-29 02:45:16,713 ==================== TRACER ======================
+2021-12-29 02:45:16,715 Channel (server worker num[20]):
+2021-12-29 02:45:16,717 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 02:45:16,718 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 02:45:46,748 ==================== TRACER ======================
+2021-12-29 02:45:46,749 DAGExecutor:
+2021-12-29 02:45:46,749 Query count[1]
+2021-12-29 02:45:46,749 QPS[0.03333333333333333 q/s]
+2021-12-29 02:45:46,750 Succ[0.0]
+2021-12-29 02:45:46,750 Error req[0]
+2021-12-29 02:45:46,750 Latency:
+2021-12-29 02:45:46,750 ave[1691.297 ms]
+2021-12-29 02:45:46,750 .50[1691.297 ms]
+2021-12-29 02:45:46,751 .60[1691.297 ms]
+2021-12-29 02:45:46,751 .70[1691.297 ms]
+2021-12-29 02:45:46,751 .80[1691.297 ms]
+2021-12-29 02:45:46,751 .90[1691.297 ms]
+2021-12-29 02:45:46,751 .95[1691.297 ms]
+2021-12-29 02:45:46,752 .99[1691.297 ms]
+2021-12-29 02:45:46,752 Channel (server worker num[20]):
+2021-12-29 02:45:46,753 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 02:45:46,753 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:07:44,656 ==================== TRACER ======================
+2021-12-29 03:07:44,657 DAGExecutor:
+2021-12-29 03:07:44,657 Query count[1]
+2021-12-29 03:07:44,657 QPS[0.03333333333333333 q/s]
+2021-12-29 03:07:44,657 Succ[0.0]
+2021-12-29 03:07:44,658 Error req[0]
+2021-12-29 03:07:44,658 Latency:
+2021-12-29 03:07:44,658 ave[1819.424 ms]
+2021-12-29 03:07:44,658 .50[1819.424 ms]
+2021-12-29 03:07:44,658 .60[1819.424 ms]
+2021-12-29 03:07:44,659 .70[1819.424 ms]
+2021-12-29 03:07:44,659 .80[1819.424 ms]
+2021-12-29 03:07:44,659 .90[1819.424 ms]
+2021-12-29 03:07:44,659 .95[1819.424 ms]
+2021-12-29 03:07:44,659 .99[1819.424 ms]
+2021-12-29 03:07:44,659 Channel (server worker num[20]):
+2021-12-29 03:07:44,660 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:07:44,661 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:10:43,506 ==================== TRACER ======================
+2021-12-29 03:10:43,508 DAGExecutor:
+2021-12-29 03:10:43,508 Query count[1]
+2021-12-29 03:10:43,508 QPS[0.03333333333333333 q/s]
+2021-12-29 03:10:43,508 Succ[0.0]
+2021-12-29 03:10:43,508 Error req[0]
+2021-12-29 03:10:43,509 Latency:
+2021-12-29 03:10:43,509 ave[1855.084 ms]
+2021-12-29 03:10:43,509 .50[1855.084 ms]
+2021-12-29 03:10:43,509 .60[1855.084 ms]
+2021-12-29 03:10:43,509 .70[1855.084 ms]
+2021-12-29 03:10:43,509 .80[1855.084 ms]
+2021-12-29 03:10:43,510 .90[1855.084 ms]
+2021-12-29 03:10:43,510 .95[1855.084 ms]
+2021-12-29 03:10:43,510 .99[1855.084 ms]
+2021-12-29 03:10:43,510 Channel (server worker num[20]):
+2021-12-29 03:10:43,511 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:10:43,512 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:12:17,457 ==================== TRACER ======================
+2021-12-29 03:12:17,458 DAGExecutor:
+2021-12-29 03:12:17,459 Query count[1]
+2021-12-29 03:12:17,459 QPS[0.03333333333333333 q/s]
+2021-12-29 03:12:17,459 Succ[0.0]
+2021-12-29 03:12:17,459 Error req[0]
+2021-12-29 03:12:17,459 Latency:
+2021-12-29 03:12:17,460 ave[1822.881 ms]
+2021-12-29 03:12:17,460 .50[1822.881 ms]
+2021-12-29 03:12:17,460 .60[1822.881 ms]
+2021-12-29 03:12:17,460 .70[1822.881 ms]
+2021-12-29 03:12:17,460 .80[1822.881 ms]
+2021-12-29 03:12:17,461 .90[1822.881 ms]
+2021-12-29 03:12:17,461 .95[1822.881 ms]
+2021-12-29 03:12:17,461 .99[1822.881 ms]
+2021-12-29 03:12:17,461 Channel (server worker num[20]):
+2021-12-29 03:12:17,462 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:12:17,463 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:29:18,502 Channel (server worker num[20]):
+2021-12-29 03:29:18,503 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:29:18,504 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:29:48,534 ==================== TRACER ======================
+2021-12-29 03:29:48,535 Channel (server worker num[20]):
+2021-12-29 03:29:48,536 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:29:48,536 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:30:18,567 ==================== TRACER ======================
+2021-12-29 03:30:18,568 Channel (server worker num[20]):
+2021-12-29 03:30:18,568 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:30:18,569 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:30:48,600 ==================== TRACER ======================
+2021-12-29 03:30:48,600 Channel (server worker num[20]):
+2021-12-29 03:30:48,601 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:30:48,602 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:31:18,632 ==================== TRACER ======================
+2021-12-29 03:31:18,633 Channel (server worker num[20]):
+2021-12-29 03:31:18,634 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:31:18,634 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:31:48,665 ==================== TRACER ======================
+2021-12-29 03:31:48,665 Channel (server worker num[20]):
+2021-12-29 03:31:48,666 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:31:48,667 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:32:18,697 ==================== TRACER ======================
+2021-12-29 03:32:18,698 Channel (server worker num[20]):
+2021-12-29 03:32:18,699 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:32:18,700 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:32:48,730 ==================== TRACER ======================
+2021-12-29 03:32:48,731 Channel (server worker num[20]):
+2021-12-29 03:32:48,731 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:32:48,732 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:33:18,762 ==================== TRACER ======================
+2021-12-29 03:33:18,763 Channel (server worker num[20]):
+2021-12-29 03:33:18,764 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:33:18,765 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:33:48,776 ==================== TRACER ======================
+2021-12-29 03:33:48,777 Channel (server worker num[20]):
+2021-12-29 03:33:48,778 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:33:48,779 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:34:18,809 ==================== TRACER ======================
+2021-12-29 03:34:18,810 Channel (server worker num[20]):
+2021-12-29 03:34:18,810 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:34:18,811 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:34:48,842 ==================== TRACER ======================
+2021-12-29 03:34:48,842 Channel (server worker num[20]):
+2021-12-29 03:34:48,843 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:34:48,844 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:35:18,874 ==================== TRACER ======================
+2021-12-29 03:35:18,875 Channel (server worker num[20]):
+2021-12-29 03:35:18,876 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:35:18,877 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:35:48,907 ==================== TRACER ======================
+2021-12-29 03:35:48,908 Channel (server worker num[20]):
+2021-12-29 03:35:48,909 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:35:48,909 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:36:18,940 ==================== TRACER ======================
+2021-12-29 03:36:18,941 Channel (server worker num[20]):
+2021-12-29 03:36:18,941 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:36:18,942 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:36:48,965 ==================== TRACER ======================
+2021-12-29 03:36:48,965 Channel (server worker num[20]):
+2021-12-29 03:36:48,966 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:36:48,967 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:37:18,997 ==================== TRACER ======================
+2021-12-29 03:37:18,998 Channel (server worker num[20]):
+2021-12-29 03:37:18,999 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:37:19,000 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:37:49,030 ==================== TRACER ======================
+2021-12-29 03:37:49,031 Channel (server worker num[20]):
+2021-12-29 03:37:49,031 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:37:49,032 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:38:19,035 ==================== TRACER ======================
+2021-12-29 03:38:19,036 Channel (server worker num[20]):
+2021-12-29 03:38:19,037 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:38:19,037 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:38:49,068 ==================== TRACER ======================
+2021-12-29 03:38:49,069 Channel (server worker num[20]):
+2021-12-29 03:38:49,069 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:38:49,070 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:39:19,100 ==================== TRACER ======================
+2021-12-29 03:39:19,101 Channel (server worker num[20]):
+2021-12-29 03:39:19,102 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:39:19,103 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:39:49,133 ==================== TRACER ======================
+2021-12-29 03:39:49,134 Channel (server worker num[20]):
+2021-12-29 03:39:49,135 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:39:49,135 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:40:19,166 ==================== TRACER ======================
+2021-12-29 03:40:19,166 Channel (server worker num[20]):
+2021-12-29 03:40:19,167 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:40:19,168 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:40:49,198 ==================== TRACER ======================
+2021-12-29 03:40:49,199 Channel (server worker num[20]):
+2021-12-29 03:40:49,200 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:40:49,201 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:41:19,231 ==================== TRACER ======================
+2021-12-29 03:41:19,232 Channel (server worker num[20]):
+2021-12-29 03:41:19,233 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:41:19,233 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:41:49,264 ==================== TRACER ======================
+2021-12-29 03:41:49,264 Channel (server worker num[20]):
+2021-12-29 03:41:49,265 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:41:49,266 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:42:19,296 ==================== TRACER ======================
+2021-12-29 03:42:19,297 Channel (server worker num[20]):
+2021-12-29 03:42:19,298 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:42:19,299 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:42:49,329 ==================== TRACER ======================
+2021-12-29 03:42:49,330 Channel (server worker num[20]):
+2021-12-29 03:42:49,331 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:42:49,332 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:43:19,362 ==================== TRACER ======================
+2021-12-29 03:43:19,363 Channel (server worker num[20]):
+2021-12-29 03:43:19,364 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:43:19,364 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:43:49,395 ==================== TRACER ======================
+2021-12-29 03:43:49,395 Channel (server worker num[20]):
+2021-12-29 03:43:49,396 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:43:49,397 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:44:19,427 ==================== TRACER ======================
+2021-12-29 03:44:19,428 Channel (server worker num[20]):
+2021-12-29 03:44:19,429 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:44:19,430 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:44:49,460 ==================== TRACER ======================
+2021-12-29 03:44:49,461 Channel (server worker num[20]):
+2021-12-29 03:44:49,462 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:44:49,462 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:45:19,467 ==================== TRACER ======================
+2021-12-29 03:45:19,468 Channel (server worker num[20]):
+2021-12-29 03:45:19,469 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:45:19,470 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:45:49,500 ==================== TRACER ======================
+2021-12-29 03:45:49,501 Channel (server worker num[20]):
+2021-12-29 03:45:49,502 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:45:49,502 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:46:19,533 ==================== TRACER ======================
+2021-12-29 03:46:19,533 Channel (server worker num[20]):
+2021-12-29 03:46:19,534 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:46:19,535 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:46:49,565 ==================== TRACER ======================
+2021-12-29 03:46:49,566 Channel (server worker num[20]):
+2021-12-29 03:46:49,567 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:46:49,567 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:47:19,589 ==================== TRACER ======================
+2021-12-29 03:47:19,590 Channel (server worker num[20]):
+2021-12-29 03:47:19,591 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:47:19,592 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:47:49,609 ==================== TRACER ======================
+2021-12-29 03:47:49,610 Channel (server worker num[20]):
+2021-12-29 03:47:49,610 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:47:49,611 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:48:19,641 ==================== TRACER ======================
+2021-12-29 03:48:19,642 Channel (server worker num[20]):
+2021-12-29 03:48:19,643 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:48:19,644 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:48:49,674 ==================== TRACER ======================
+2021-12-29 03:48:49,675 Channel (server worker num[20]):
+2021-12-29 03:48:49,676 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:48:49,677 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:49:19,707 ==================== TRACER ======================
+2021-12-29 03:49:19,708 Channel (server worker num[20]):
+2021-12-29 03:49:19,709 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:49:19,709 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:49:49,740 ==================== TRACER ======================
+2021-12-29 03:49:49,740 Channel (server worker num[20]):
+2021-12-29 03:49:49,741 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:49:49,742 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:50:19,772 ==================== TRACER ======================
+2021-12-29 03:50:19,773 Channel (server worker num[20]):
+2021-12-29 03:50:19,774 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:50:19,775 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:50:49,805 ==================== TRACER ======================
+2021-12-29 03:50:49,806 Channel (server worker num[20]):
+2021-12-29 03:50:49,806 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:50:49,807 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:51:19,837 ==================== TRACER ======================
+2021-12-29 03:51:19,838 Channel (server worker num[20]):
+2021-12-29 03:51:19,839 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:51:19,840 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:51:49,870 ==================== TRACER ======================
+2021-12-29 03:51:49,871 Channel (server worker num[20]):
+2021-12-29 03:51:49,872 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:51:49,872 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:52:19,880 ==================== TRACER ======================
+2021-12-29 03:52:19,881 Channel (server worker num[20]):
+2021-12-29 03:52:19,882 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:52:19,882 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:52:49,913 ==================== TRACER ======================
+2021-12-29 03:52:49,913 Channel (server worker num[20]):
+2021-12-29 03:52:49,914 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:52:49,915 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:53:19,945 ==================== TRACER ======================
+2021-12-29 03:53:19,946 Channel (server worker num[20]):
+2021-12-29 03:53:19,947 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:53:19,947 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:53:49,978 ==================== TRACER ======================
+2021-12-29 03:53:49,979 Channel (server worker num[20]):
+2021-12-29 03:53:49,980 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:53:49,980 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:54:20,011 ==================== TRACER ======================
+2021-12-29 03:54:20,011 Channel (server worker num[20]):
+2021-12-29 03:54:20,012 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:54:20,013 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:54:50,043 ==================== TRACER ======================
+2021-12-29 03:54:50,044 Channel (server worker num[20]):
+2021-12-29 03:54:50,045 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:54:50,046 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:55:20,076 ==================== TRACER ======================
+2021-12-29 03:55:20,077 Channel (server worker num[20]):
+2021-12-29 03:55:20,078 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:55:20,078 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:55:50,109 ==================== TRACER ======================
+2021-12-29 03:55:50,110 Channel (server worker num[20]):
+2021-12-29 03:55:50,110 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:55:50,111 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:56:20,141 ==================== TRACER ======================
+2021-12-29 03:56:20,142 Channel (server worker num[20]):
+2021-12-29 03:56:20,143 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:56:20,144 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:56:50,174 ==================== TRACER ======================
+2021-12-29 03:56:50,175 Channel (server worker num[20]):
+2021-12-29 03:56:50,175 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:56:50,176 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:57:20,206 ==================== TRACER ======================
+2021-12-29 03:57:20,207 Channel (server worker num[20]):
+2021-12-29 03:57:20,208 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:57:20,209 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:57:50,239 ==================== TRACER ======================
+2021-12-29 03:57:50,240 Channel (server worker num[20]):
+2021-12-29 03:57:50,241 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:57:50,241 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:58:20,272 ==================== TRACER ======================
+2021-12-29 03:58:20,273 Channel (server worker num[20]):
+2021-12-29 03:58:20,273 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:58:20,274 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:58:50,304 ==================== TRACER ======================
+2021-12-29 03:58:50,305 Channel (server worker num[20]):
+2021-12-29 03:58:50,306 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:58:50,307 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:59:20,337 ==================== TRACER ======================
+2021-12-29 03:59:20,338 Channel (server worker num[20]):
+2021-12-29 03:59:20,339 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:59:20,340 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 03:59:50,370 ==================== TRACER ======================
+2021-12-29 03:59:50,371 Channel (server worker num[20]):
+2021-12-29 03:59:50,372 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 03:59:50,372 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:00:20,403 ==================== TRACER ======================
+2021-12-29 04:00:20,404 Channel (server worker num[20]):
+2021-12-29 04:00:20,404 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:00:20,405 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:00:50,435 ==================== TRACER ======================
+2021-12-29 04:00:50,436 Channel (server worker num[20]):
+2021-12-29 04:00:50,437 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:00:50,438 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:01:20,468 ==================== TRACER ======================
+2021-12-29 04:01:20,469 Channel (server worker num[20]):
+2021-12-29 04:01:20,470 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:01:20,471 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:01:50,501 ==================== TRACER ======================
+2021-12-29 04:01:50,502 Channel (server worker num[20]):
+2021-12-29 04:01:50,503 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:01:50,503 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:02:20,534 ==================== TRACER ======================
+2021-12-29 04:02:20,535 Channel (server worker num[20]):
+2021-12-29 04:02:20,535 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:02:20,536 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:02:50,567 ==================== TRACER ======================
+2021-12-29 04:02:50,567 Channel (server worker num[20]):
+2021-12-29 04:02:50,568 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:02:50,569 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:03:20,596 ==================== TRACER ======================
+2021-12-29 04:03:20,597 Channel (server worker num[20]):
+2021-12-29 04:03:20,598 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:03:20,598 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:03:50,629 ==================== TRACER ======================
+2021-12-29 04:03:50,630 Channel (server worker num[20]):
+2021-12-29 04:03:50,631 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:03:50,631 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:04:20,660 ==================== TRACER ======================
+2021-12-29 04:04:20,661 Channel (server worker num[20]):
+2021-12-29 04:04:20,662 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:04:20,662 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:04:50,693 ==================== TRACER ======================
+2021-12-29 04:04:50,693 Channel (server worker num[20]):
+2021-12-29 04:04:50,694 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:04:50,695 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:05:20,725 ==================== TRACER ======================
+2021-12-29 04:05:20,726 Channel (server worker num[20]):
+2021-12-29 04:05:20,727 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:05:20,728 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:05:50,758 ==================== TRACER ======================
+2021-12-29 04:05:50,759 Channel (server worker num[20]):
+2021-12-29 04:05:50,760 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:05:50,761 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:06:20,791 ==================== TRACER ======================
+2021-12-29 04:06:20,792 Channel (server worker num[20]):
+2021-12-29 04:06:20,793 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:06:20,793 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:06:50,824 ==================== TRACER ======================
+2021-12-29 04:06:50,825 Channel (server worker num[20]):
+2021-12-29 04:06:50,826 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:06:50,826 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:07:20,857 ==================== TRACER ======================
+2021-12-29 04:07:20,858 Channel (server worker num[20]):
+2021-12-29 04:07:20,858 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:07:20,859 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:07:50,889 ==================== TRACER ======================
+2021-12-29 04:07:50,890 Channel (server worker num[20]):
+2021-12-29 04:07:50,891 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:07:50,892 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:08:20,908 ==================== TRACER ======================
+2021-12-29 04:08:20,909 Channel (server worker num[20]):
+2021-12-29 04:08:20,910 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:08:20,911 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:08:50,941 ==================== TRACER ======================
+2021-12-29 04:08:50,942 Channel (server worker num[20]):
+2021-12-29 04:08:50,943 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:08:50,943 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:09:20,974 ==================== TRACER ======================
+2021-12-29 04:09:20,974 Channel (server worker num[20]):
+2021-12-29 04:09:20,975 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:09:20,976 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:09:51,006 ==================== TRACER ======================
+2021-12-29 04:09:51,007 Channel (server worker num[20]):
+2021-12-29 04:09:51,008 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:09:51,009 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:10:21,039 ==================== TRACER ======================
+2021-12-29 04:10:21,040 Channel (server worker num[20]):
+2021-12-29 04:10:21,041 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:10:21,042 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:10:51,072 ==================== TRACER ======================
+2021-12-29 04:10:51,073 Channel (server worker num[20]):
+2021-12-29 04:10:51,074 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:10:51,074 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:11:21,105 ==================== TRACER ======================
+2021-12-29 04:11:21,106 Channel (server worker num[20]):
+2021-12-29 04:11:21,107 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:11:21,107 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:11:51,138 ==================== TRACER ======================
+2021-12-29 04:11:51,138 Channel (server worker num[20]):
+2021-12-29 04:11:51,139 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:11:51,140 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:12:21,170 ==================== TRACER ======================
+2021-12-29 04:12:21,171 Channel (server worker num[20]):
+2021-12-29 04:12:21,172 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:12:21,173 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:12:51,200 ==================== TRACER ======================
+2021-12-29 04:12:51,201 Channel (server worker num[20]):
+2021-12-29 04:12:51,202 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:12:51,202 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:13:21,233 ==================== TRACER ======================
+2021-12-29 04:13:21,234 Channel (server worker num[20]):
+2021-12-29 04:13:21,235 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:13:21,235 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:13:51,266 ==================== TRACER ======================
+2021-12-29 04:13:51,266 Channel (server worker num[20]):
+2021-12-29 04:13:51,267 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:13:51,268 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:14:21,298 ==================== TRACER ======================
+2021-12-29 04:14:21,299 Channel (server worker num[20]):
+2021-12-29 04:14:21,300 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:14:21,301 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:14:51,331 ==================== TRACER ======================
+2021-12-29 04:14:51,332 Channel (server worker num[20]):
+2021-12-29 04:14:51,333 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:14:51,334 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:15:21,364 ==================== TRACER ======================
+2021-12-29 04:15:21,365 Channel (server worker num[20]):
+2021-12-29 04:15:21,366 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:15:21,366 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:15:51,397 ==================== TRACER ======================
+2021-12-29 04:15:51,398 Channel (server worker num[20]):
+2021-12-29 04:15:51,398 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:15:51,399 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:16:21,430 ==================== TRACER ======================
+2021-12-29 04:16:21,430 Channel (server worker num[20]):
+2021-12-29 04:16:21,431 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:16:21,432 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:16:51,462 ==================== TRACER ======================
+2021-12-29 04:16:51,463 Channel (server worker num[20]):
+2021-12-29 04:16:51,464 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:16:51,465 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:17:21,491 ==================== TRACER ======================
+2021-12-29 04:17:21,492 Channel (server worker num[20]):
+2021-12-29 04:17:21,493 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:17:21,494 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:17:51,524 ==================== TRACER ======================
+2021-12-29 04:17:51,525 Channel (server worker num[20]):
+2021-12-29 04:17:51,526 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:17:51,527 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:18:21,557 ==================== TRACER ======================
+2021-12-29 04:18:21,558 Channel (server worker num[20]):
+2021-12-29 04:18:21,559 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:18:21,559 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:18:51,590 ==================== TRACER ======================
+2021-12-29 04:18:51,591 Channel (server worker num[20]):
+2021-12-29 04:18:51,592 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:18:51,592 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:19:21,623 ==================== TRACER ======================
+2021-12-29 04:19:21,624 Channel (server worker num[20]):
+2021-12-29 04:19:21,624 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:19:21,625 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:19:51,655 ==================== TRACER ======================
+2021-12-29 04:19:51,656 Channel (server worker num[20]):
+2021-12-29 04:19:51,657 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:19:51,658 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:20:21,688 ==================== TRACER ======================
+2021-12-29 04:20:21,689 Channel (server worker num[20]):
+2021-12-29 04:20:21,690 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:20:21,691 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:20:51,721 ==================== TRACER ======================
+2021-12-29 04:20:51,722 Channel (server worker num[20]):
+2021-12-29 04:20:51,723 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:20:51,724 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:21:21,754 ==================== TRACER ======================
+2021-12-29 04:21:21,755 Channel (server worker num[20]):
+2021-12-29 04:21:21,756 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:21:21,756 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:21:51,787 ==================== TRACER ======================
+2021-12-29 04:21:51,788 Channel (server worker num[20]):
+2021-12-29 04:21:51,788 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:21:51,789 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:22:21,819 ==================== TRACER ======================
+2021-12-29 04:22:21,820 Channel (server worker num[20]):
+2021-12-29 04:22:21,821 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:22:21,822 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:22:51,852 ==================== TRACER ======================
+2021-12-29 04:22:51,853 Channel (server worker num[20]):
+2021-12-29 04:22:51,854 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:22:51,855 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:23:21,858 ==================== TRACER ======================
+2021-12-29 04:23:21,859 Channel (server worker num[20]):
+2021-12-29 04:23:21,859 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:23:21,860 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:23:51,891 ==================== TRACER ======================
+2021-12-29 04:23:51,891 Channel (server worker num[20]):
+2021-12-29 04:23:51,892 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:23:51,893 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:24:21,923 ==================== TRACER ======================
+2021-12-29 04:24:21,924 Channel (server worker num[20]):
+2021-12-29 04:24:21,925 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:24:21,926 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:24:51,952 ==================== TRACER ======================
+2021-12-29 04:24:51,953 Channel (server worker num[20]):
+2021-12-29 04:24:51,953 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:24:51,954 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:25:21,984 ==================== TRACER ======================
+2021-12-29 04:25:21,985 Channel (server worker num[20]):
+2021-12-29 04:25:21,986 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:25:21,987 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:25:52,017 ==================== TRACER ======================
+2021-12-29 04:25:52,018 Channel (server worker num[20]):
+2021-12-29 04:25:52,019 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:25:52,019 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:26:22,050 ==================== TRACER ======================
+2021-12-29 04:26:22,051 Channel (server worker num[20]):
+2021-12-29 04:26:22,051 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:26:22,052 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:26:52,082 ==================== TRACER ======================
+2021-12-29 04:26:52,083 Channel (server worker num[20]):
+2021-12-29 04:26:52,084 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:26:52,085 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:27:22,115 ==================== TRACER ======================
+2021-12-29 04:27:22,116 Channel (server worker num[20]):
+2021-12-29 04:27:22,117 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:27:22,118 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:27:52,148 ==================== TRACER ======================
+2021-12-29 04:27:52,149 Channel (server worker num[20]):
+2021-12-29 04:27:52,150 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:27:52,151 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:28:22,181 ==================== TRACER ======================
+2021-12-29 04:28:22,182 Channel (server worker num[20]):
+2021-12-29 04:28:22,183 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:28:22,183 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:28:52,214 ==================== TRACER ======================
+2021-12-29 04:28:52,215 Channel (server worker num[20]):
+2021-12-29 04:28:52,215 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:28:52,216 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:29:22,247 ==================== TRACER ======================
+2021-12-29 04:29:22,247 Channel (server worker num[20]):
+2021-12-29 04:29:22,248 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:29:22,249 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:29:52,279 ==================== TRACER ======================
+2021-12-29 04:29:52,280 Channel (server worker num[20]):
+2021-12-29 04:29:52,281 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:29:52,282 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:30:22,291 ==================== TRACER ======================
+2021-12-29 04:30:22,292 Channel (server worker num[20]):
+2021-12-29 04:30:22,293 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:30:22,293 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:30:52,324 ==================== TRACER ======================
+2021-12-29 04:30:52,325 Channel (server worker num[20]):
+2021-12-29 04:30:52,326 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:30:52,326 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:31:22,357 ==================== TRACER ======================
+2021-12-29 04:31:22,357 Channel (server worker num[20]):
+2021-12-29 04:31:22,358 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:31:22,359 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:31:52,388 ==================== TRACER ======================
+2021-12-29 04:31:52,389 Channel (server worker num[20]):
+2021-12-29 04:31:52,390 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:31:52,391 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:32:22,421 ==================== TRACER ======================
+2021-12-29 04:32:22,422 Channel (server worker num[20]):
+2021-12-29 04:32:22,423 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:32:22,423 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:32:52,437 ==================== TRACER ======================
+2021-12-29 04:32:52,438 Channel (server worker num[20]):
+2021-12-29 04:32:52,439 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:32:52,440 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:33:22,470 ==================== TRACER ======================
+2021-12-29 04:33:22,471 Channel (server worker num[20]):
+2021-12-29 04:33:22,472 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:33:22,472 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:33:52,503 ==================== TRACER ======================
+2021-12-29 04:33:52,504 Channel (server worker num[20]):
+2021-12-29 04:33:52,504 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:33:52,505 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:34:22,536 ==================== TRACER ======================
+2021-12-29 04:34:22,536 Channel (server worker num[20]):
+2021-12-29 04:34:22,537 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:34:22,538 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:34:52,552 ==================== TRACER ======================
+2021-12-29 04:34:52,553 Channel (server worker num[20]):
+2021-12-29 04:34:52,554 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:34:52,555 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:35:22,585 ==================== TRACER ======================
+2021-12-29 04:35:22,586 Channel (server worker num[20]):
+2021-12-29 04:35:22,587 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:35:22,587 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:35:52,596 ==================== TRACER ======================
+2021-12-29 04:35:52,597 Channel (server worker num[20]):
+2021-12-29 04:35:52,598 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:35:52,599 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:36:22,629 ==================== TRACER ======================
+2021-12-29 04:36:22,630 Channel (server worker num[20]):
+2021-12-29 04:36:22,631 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:36:22,632 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:36:52,662 ==================== TRACER ======================
+2021-12-29 04:36:52,663 Channel (server worker num[20]):
+2021-12-29 04:36:52,664 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:36:52,664 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:37:22,695 ==================== TRACER ======================
+2021-12-29 04:37:22,696 Channel (server worker num[20]):
+2021-12-29 04:37:22,696 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:37:22,697 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:37:52,728 ==================== TRACER ======================
+2021-12-29 04:37:52,728 Channel (server worker num[20]):
+2021-12-29 04:37:52,729 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:37:52,730 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:38:22,760 ==================== TRACER ======================
+2021-12-29 04:38:22,761 Channel (server worker num[20]):
+2021-12-29 04:38:22,762 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:38:22,763 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:38:52,793 ==================== TRACER ======================
+2021-12-29 04:38:52,794 Channel (server worker num[20]):
+2021-12-29 04:38:52,795 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:38:52,796 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:39:22,826 ==================== TRACER ======================
+2021-12-29 04:39:22,827 Channel (server worker num[20]):
+2021-12-29 04:39:22,828 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:39:22,828 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:39:52,832 ==================== TRACER ======================
+2021-12-29 04:39:52,833 Channel (server worker num[20]):
+2021-12-29 04:39:52,834 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:39:52,834 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:40:22,849 ==================== TRACER ======================
+2021-12-29 04:40:22,850 Channel (server worker num[20]):
+2021-12-29 04:40:22,851 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:40:22,851 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:40:52,876 ==================== TRACER ======================
+2021-12-29 04:40:52,877 Channel (server worker num[20]):
+2021-12-29 04:40:52,878 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:40:52,878 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:41:22,908 ==================== TRACER ======================
+2021-12-29 04:41:22,909 Channel (server worker num[20]):
+2021-12-29 04:41:22,910 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:41:22,911 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:41:52,916 ==================== TRACER ======================
+2021-12-29 04:41:52,917 Channel (server worker num[20]):
+2021-12-29 04:41:52,918 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:41:52,919 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:42:22,949 ==================== TRACER ======================
+2021-12-29 04:42:22,950 Channel (server worker num[20]):
+2021-12-29 04:42:22,951 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:42:22,952 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:42:52,982 ==================== TRACER ======================
+2021-12-29 04:42:52,983 Channel (server worker num[20]):
+2021-12-29 04:42:52,984 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:42:52,985 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:43:23,014 ==================== TRACER ======================
+2021-12-29 04:43:23,014 Channel (server worker num[20]):
+2021-12-29 04:43:23,015 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:43:23,016 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:43:53,044 ==================== TRACER ======================
+2021-12-29 04:43:53,045 Channel (server worker num[20]):
+2021-12-29 04:43:53,046 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:43:53,047 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:44:23,077 ==================== TRACER ======================
+2021-12-29 04:44:23,078 Channel (server worker num[20]):
+2021-12-29 04:44:23,079 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:44:23,079 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:44:53,110 ==================== TRACER ======================
+2021-12-29 04:44:53,111 Channel (server worker num[20]):
+2021-12-29 04:44:53,111 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:44:53,112 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:45:23,142 ==================== TRACER ======================
+2021-12-29 04:45:23,143 Channel (server worker num[20]):
+2021-12-29 04:45:23,144 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:45:23,145 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:45:53,175 ==================== TRACER ======================
+2021-12-29 04:45:53,176 Channel (server worker num[20]):
+2021-12-29 04:45:53,177 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:45:53,178 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:46:23,208 ==================== TRACER ======================
+2021-12-29 04:46:23,209 Channel (server worker num[20]):
+2021-12-29 04:46:23,210 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:46:23,211 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:46:53,241 ==================== TRACER ======================
+2021-12-29 04:46:53,242 Channel (server worker num[20]):
+2021-12-29 04:46:53,243 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:46:53,244 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:47:23,274 ==================== TRACER ======================
+2021-12-29 04:47:23,275 Channel (server worker num[20]):
+2021-12-29 04:47:23,276 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:47:23,277 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:47:53,307 ==================== TRACER ======================
+2021-12-29 04:47:53,308 Channel (server worker num[20]):
+2021-12-29 04:47:53,309 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:47:53,309 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:48:23,340 ==================== TRACER ======================
+2021-12-29 04:48:23,341 Channel (server worker num[20]):
+2021-12-29 04:48:23,342 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:48:23,342 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:48:53,373 ==================== TRACER ======================
+2021-12-29 04:48:53,373 Channel (server worker num[20]):
+2021-12-29 04:48:53,374 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:48:53,375 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:49:23,405 ==================== TRACER ======================
+2021-12-29 04:49:23,406 Channel (server worker num[20]):
+2021-12-29 04:49:23,407 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:49:23,408 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:49:53,438 ==================== TRACER ======================
+2021-12-29 04:49:53,439 Channel (server worker num[20]):
+2021-12-29 04:49:53,440 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:49:53,440 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:50:23,471 ==================== TRACER ======================
+2021-12-29 04:50:23,472 Channel (server worker num[20]):
+2021-12-29 04:50:23,473 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:50:23,473 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:50:53,504 ==================== TRACER ======================
+2021-12-29 04:50:53,505 Channel (server worker num[20]):
+2021-12-29 04:50:53,505 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:50:53,506 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:51:23,536 ==================== TRACER ======================
+2021-12-29 04:51:23,537 Channel (server worker num[20]):
+2021-12-29 04:51:23,538 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:51:23,539 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 04:51:53,569 ==================== TRACER ======================
+2021-12-29 04:51:53,570 Channel (server worker num[20]):
+2021-12-29 04:51:53,571 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 04:51:53,572 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:37:35,034 ==================== TRACER ======================
+2021-12-29 05:37:35,035 DAGExecutor:
+2021-12-29 05:37:35,036 Query count[1]
+2021-12-29 05:37:35,036 QPS[0.03333333333333333 q/s]
+2021-12-29 05:37:35,036 Succ[0.0]
+2021-12-29 05:37:35,036 Error req[0]
+2021-12-29 05:37:35,037 Latency:
+2021-12-29 05:37:35,037 ave[1922.774 ms]
+2021-12-29 05:37:35,037 .50[1922.774 ms]
+2021-12-29 05:37:35,037 .60[1922.774 ms]
+2021-12-29 05:37:35,037 .70[1922.774 ms]
+2021-12-29 05:37:35,037 .80[1922.774 ms]
+2021-12-29 05:37:35,038 .90[1922.774 ms]
+2021-12-29 05:37:35,038 .95[1922.774 ms]
+2021-12-29 05:37:35,038 .99[1922.774 ms]
+2021-12-29 05:37:35,038 Channel (server worker num[20]):
+2021-12-29 05:37:35,039 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:37:35,040 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:40:41,953 ==================== TRACER ======================
+2021-12-29 05:40:41,954 DAGExecutor:
+2021-12-29 05:40:41,954 Query count[1]
+2021-12-29 05:40:41,954 QPS[0.03333333333333333 q/s]
+2021-12-29 05:40:41,955 Succ[0.0]
+2021-12-29 05:40:41,955 Error req[0]
+2021-12-29 05:40:41,955 Latency:
+2021-12-29 05:40:41,955 ave[1827.423 ms]
+2021-12-29 05:40:41,955 .50[1827.423 ms]
+2021-12-29 05:40:41,956 .60[1827.423 ms]
+2021-12-29 05:40:41,956 .70[1827.423 ms]
+2021-12-29 05:40:41,956 .80[1827.423 ms]
+2021-12-29 05:40:41,956 .90[1827.423 ms]
+2021-12-29 05:40:41,956 .95[1827.423 ms]
+2021-12-29 05:40:41,957 .99[1827.423 ms]
+2021-12-29 05:40:41,957 Channel (server worker num[20]):
+2021-12-29 05:40:41,957 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:40:41,958 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:42:41,691 ==================== TRACER ======================
+2021-12-29 05:42:41,692 DAGExecutor:
+2021-12-29 05:42:41,692 Query count[1]
+2021-12-29 05:42:41,693 QPS[0.03333333333333333 q/s]
+2021-12-29 05:42:41,693 Succ[0.0]
+2021-12-29 05:42:41,693 Error req[0]
+2021-12-29 05:42:41,693 Latency:
+2021-12-29 05:42:41,693 ave[1872.395 ms]
+2021-12-29 05:42:41,694 .50[1872.395 ms]
+2021-12-29 05:42:41,694 .60[1872.395 ms]
+2021-12-29 05:42:41,694 .70[1872.395 ms]
+2021-12-29 05:42:41,694 .80[1872.395 ms]
+2021-12-29 05:42:41,694 .90[1872.395 ms]
+2021-12-29 05:42:41,695 .95[1872.395 ms]
+2021-12-29 05:42:41,695 .99[1872.395 ms]
+2021-12-29 05:42:41,695 Channel (server worker num[20]):
+2021-12-29 05:42:41,696 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:42:41,697 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:42:56,904 ==================== TRACER ======================
+2021-12-29 05:42:56,905 Channel (server worker num[20]):
+2021-12-29 05:42:56,906 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:42:56,906 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:43:11,727 ==================== TRACER ======================
+2021-12-29 05:43:11,728 Channel (server worker num[20]):
+2021-12-29 05:43:11,729 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:43:11,730 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:43:26,937 ==================== TRACER ======================
+2021-12-29 05:43:26,938 Channel (server worker num[20]):
+2021-12-29 05:43:26,939 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:43:26,939 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:43:41,760 ==================== TRACER ======================
+2021-12-29 05:43:41,761 Channel (server worker num[20]):
+2021-12-29 05:43:41,761 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:43:41,762 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:43:56,943 ==================== TRACER ======================
+2021-12-29 05:43:56,944 Channel (server worker num[20]):
+2021-12-29 05:43:56,944 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:43:56,945 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:44:11,792 ==================== TRACER ======================
+2021-12-29 05:44:11,793 Channel (server worker num[20]):
+2021-12-29 05:44:11,794 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:44:11,795 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:44:26,976 ==================== TRACER ======================
+2021-12-29 05:44:26,976 Channel (server worker num[20]):
+2021-12-29 05:44:26,977 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:44:26,978 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:44:41,800 ==================== TRACER ======================
+2021-12-29 05:44:41,801 Channel (server worker num[20]):
+2021-12-29 05:44:41,802 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:44:41,803 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:44:57,008 ==================== TRACER ======================
+2021-12-29 05:44:57,009 Channel (server worker num[20]):
+2021-12-29 05:44:57,010 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:44:57,011 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:45:11,833 ==================== TRACER ======================
+2021-12-29 05:45:11,834 Channel (server worker num[20]):
+2021-12-29 05:45:11,835 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:45:11,835 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:45:27,041 ==================== TRACER ======================
+2021-12-29 05:45:27,042 Channel (server worker num[20]):
+2021-12-29 05:45:27,043 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:45:27,043 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:45:41,866 ==================== TRACER ======================
+2021-12-29 05:45:41,867 Channel (server worker num[20]):
+2021-12-29 05:45:41,867 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:45:41,868 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:45:57,074 ==================== TRACER ======================
+2021-12-29 05:45:57,075 Channel (server worker num[20]):
+2021-12-29 05:45:57,075 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:45:57,076 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:46:11,898 ==================== TRACER ======================
+2021-12-29 05:46:11,899 Channel (server worker num[20]):
+2021-12-29 05:46:11,900 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:46:11,901 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:46:27,092 ==================== TRACER ======================
+2021-12-29 05:46:27,093 Channel (server worker num[20]):
+2021-12-29 05:46:27,094 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:46:27,095 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:46:41,931 ==================== TRACER ======================
+2021-12-29 05:46:41,932 Channel (server worker num[20]):
+2021-12-29 05:46:41,933 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:46:41,934 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:46:57,108 ==================== TRACER ======================
+2021-12-29 05:46:57,109 Channel (server worker num[20]):
+2021-12-29 05:46:57,110 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:46:57,111 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:47:11,964 ==================== TRACER ======================
+2021-12-29 05:47:11,965 Channel (server worker num[20]):
+2021-12-29 05:47:11,966 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:47:11,967 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:47:27,141 ==================== TRACER ======================
+2021-12-29 05:47:27,142 Channel (server worker num[20]):
+2021-12-29 05:47:27,143 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:47:27,143 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:47:41,974 ==================== TRACER ======================
+2021-12-29 05:47:41,975 Channel (server worker num[20]):
+2021-12-29 05:47:41,976 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:47:41,977 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:47:57,174 ==================== TRACER ======================
+2021-12-29 05:47:57,174 Channel (server worker num[20]):
+2021-12-29 05:47:57,175 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:47:57,176 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:48:12,007 ==================== TRACER ======================
+2021-12-29 05:48:12,008 Channel (server worker num[20]):
+2021-12-29 05:48:12,009 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:48:12,010 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:48:27,206 ==================== TRACER ======================
+2021-12-29 05:48:27,207 Channel (server worker num[20]):
+2021-12-29 05:48:27,208 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:48:27,209 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:48:42,040 ==================== TRACER ======================
+2021-12-29 05:48:42,041 Channel (server worker num[20]):
+2021-12-29 05:48:42,042 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:48:42,042 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:48:57,239 ==================== TRACER ======================
+2021-12-29 05:48:57,240 Channel (server worker num[20]):
+2021-12-29 05:48:57,241 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:48:57,242 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:49:12,073 ==================== TRACER ======================
+2021-12-29 05:49:12,074 Channel (server worker num[20]):
+2021-12-29 05:49:12,074 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:49:12,075 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:49:27,272 ==================== TRACER ======================
+2021-12-29 05:49:27,273 Channel (server worker num[20]):
+2021-12-29 05:49:27,274 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:49:27,274 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:49:42,105 ==================== TRACER ======================
+2021-12-29 05:49:42,106 Channel (server worker num[20]):
+2021-12-29 05:49:42,107 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:49:42,108 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:49:57,305 ==================== TRACER ======================
+2021-12-29 05:49:57,305 Channel (server worker num[20]):
+2021-12-29 05:49:57,306 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:49:57,307 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:50:12,138 ==================== TRACER ======================
+2021-12-29 05:50:12,139 Channel (server worker num[20]):
+2021-12-29 05:50:12,140 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:50:12,141 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:50:27,337 ==================== TRACER ======================
+2021-12-29 05:50:27,338 Channel (server worker num[20]):
+2021-12-29 05:50:27,339 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:50:27,340 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:50:42,171 ==================== TRACER ======================
+2021-12-29 05:50:42,172 Channel (server worker num[20]):
+2021-12-29 05:50:42,173 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:50:42,174 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:50:57,370 ==================== TRACER ======================
+2021-12-29 05:50:57,371 Channel (server worker num[20]):
+2021-12-29 05:50:57,372 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:50:57,373 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:51:12,204 ==================== TRACER ======================
+2021-12-29 05:51:12,205 Channel (server worker num[20]):
+2021-12-29 05:51:12,206 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:51:12,206 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:51:27,381 ==================== TRACER ======================
+2021-12-29 05:51:27,382 Channel (server worker num[20]):
+2021-12-29 05:51:27,383 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:51:27,383 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:51:42,237 ==================== TRACER ======================
+2021-12-29 05:51:42,238 Channel (server worker num[20]):
+2021-12-29 05:51:42,238 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:51:42,239 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:51:57,414 ==================== TRACER ======================
+2021-12-29 05:51:57,415 Channel (server worker num[20]):
+2021-12-29 05:51:57,415 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:51:57,416 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:52:12,254 ==================== TRACER ======================
+2021-12-29 05:52:12,255 Channel (server worker num[20]):
+2021-12-29 05:52:12,256 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:52:12,256 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:52:27,446 ==================== TRACER ======================
+2021-12-29 05:52:27,447 Channel (server worker num[20]):
+2021-12-29 05:52:27,448 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:52:27,449 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:52:42,287 ==================== TRACER ======================
+2021-12-29 05:52:42,288 Channel (server worker num[20]):
+2021-12-29 05:52:42,288 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:52:42,289 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:52:57,479 ==================== TRACER ======================
+2021-12-29 05:52:57,480 Channel (server worker num[20]):
+2021-12-29 05:52:57,481 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:52:57,482 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:53:12,319 ==================== TRACER ======================
+2021-12-29 05:53:12,320 Channel (server worker num[20]):
+2021-12-29 05:53:12,321 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:53:12,322 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:53:27,512 ==================== TRACER ======================
+2021-12-29 05:53:27,513 Channel (server worker num[20]):
+2021-12-29 05:53:27,514 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:53:27,514 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:53:42,352 ==================== TRACER ======================
+2021-12-29 05:53:42,353 Channel (server worker num[20]):
+2021-12-29 05:53:42,354 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:53:42,355 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:53:57,520 ==================== TRACER ======================
+2021-12-29 05:53:57,521 Channel (server worker num[20]):
+2021-12-29 05:53:57,521 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:53:57,522 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:54:12,369 ==================== TRACER ======================
+2021-12-29 05:54:12,370 Channel (server worker num[20]):
+2021-12-29 05:54:12,371 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:54:12,371 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:54:27,552 ==================== TRACER ======================
+2021-12-29 05:54:27,553 Channel (server worker num[20]):
+2021-12-29 05:54:27,554 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:54:27,555 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:54:42,400 ==================== TRACER ======================
+2021-12-29 05:54:42,401 Channel (server worker num[20]):
+2021-12-29 05:54:42,402 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:54:42,403 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:54:57,585 ==================== TRACER ======================
+2021-12-29 05:54:57,586 Channel (server worker num[20]):
+2021-12-29 05:54:57,587 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:54:57,588 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:55:12,433 ==================== TRACER ======================
+2021-12-29 05:55:12,434 Channel (server worker num[20]):
+2021-12-29 05:55:12,435 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:55:12,435 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:55:27,618 ==================== TRACER ======================
+2021-12-29 05:55:27,619 Channel (server worker num[20]):
+2021-12-29 05:55:27,620 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:55:27,621 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:55:42,466 ==================== TRACER ======================
+2021-12-29 05:55:42,467 Channel (server worker num[20]):
+2021-12-29 05:55:42,467 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:55:42,468 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:55:57,648 ==================== TRACER ======================
+2021-12-29 05:55:57,649 Channel (server worker num[20]):
+2021-12-29 05:55:57,650 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:55:57,650 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:56:12,498 ==================== TRACER ======================
+2021-12-29 05:56:12,499 Channel (server worker num[20]):
+2021-12-29 05:56:12,500 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:56:12,501 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:56:27,680 ==================== TRACER ======================
+2021-12-29 05:56:27,681 Channel (server worker num[20]):
+2021-12-29 05:56:27,683 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:56:27,683 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:56:42,531 ==================== TRACER ======================
+2021-12-29 05:56:42,532 Channel (server worker num[20]):
+2021-12-29 05:56:42,533 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:56:42,534 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:56:57,714 ==================== TRACER ======================
+2021-12-29 05:56:57,715 Channel (server worker num[20]):
+2021-12-29 05:56:57,716 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:56:57,716 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:57:12,564 ==================== TRACER ======================
+2021-12-29 05:57:12,565 Channel (server worker num[20]):
+2021-12-29 05:57:12,566 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:57:12,567 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:57:27,747 ==================== TRACER ======================
+2021-12-29 05:57:27,748 Channel (server worker num[20]):
+2021-12-29 05:57:27,748 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:57:27,749 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:57:42,585 ==================== TRACER ======================
+2021-12-29 05:57:42,586 Channel (server worker num[20]):
+2021-12-29 05:57:42,586 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:57:42,587 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:57:57,779 ==================== TRACER ======================
+2021-12-29 05:57:57,780 Channel (server worker num[20]):
+2021-12-29 05:57:57,781 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:57:57,782 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:58:12,617 ==================== TRACER ======================
+2021-12-29 05:58:12,618 Channel (server worker num[20]):
+2021-12-29 05:58:12,619 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:58:12,620 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:58:27,812 ==================== TRACER ======================
+2021-12-29 05:58:27,813 Channel (server worker num[20]):
+2021-12-29 05:58:27,814 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:58:27,815 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:58:42,650 ==================== TRACER ======================
+2021-12-29 05:58:42,651 Channel (server worker num[20]):
+2021-12-29 05:58:42,652 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:58:42,653 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:58:57,815 ==================== TRACER ======================
+2021-12-29 05:58:57,816 Channel (server worker num[20]):
+2021-12-29 05:58:57,817 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:58:57,818 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:59:12,683 ==================== TRACER ======================
+2021-12-29 05:59:12,684 Channel (server worker num[20]):
+2021-12-29 05:59:12,685 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:59:12,686 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:59:27,848 ==================== TRACER ======================
+2021-12-29 05:59:27,849 Channel (server worker num[20]):
+2021-12-29 05:59:27,850 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:59:27,850 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:59:42,716 ==================== TRACER ======================
+2021-12-29 05:59:42,717 Channel (server worker num[20]):
+2021-12-29 05:59:42,718 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:59:42,718 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 05:59:57,880 ==================== TRACER ======================
+2021-12-29 05:59:57,881 Channel (server worker num[20]):
+2021-12-29 05:59:57,882 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 05:59:57,882 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... identical TRACER channel-status blocks repeated every 15-30 s from 06:00:12 to 06:40:30 omitted; chl0 occasionally reports size[1/0] while a request is queued ...]
+2021-12-29 06:40:49,398 ==================== TRACER ======================
+2021-12-29 06:40:49,399 DAGExecutor:
+2021-12-29 06:40:49,400 Query count[1]
+2021-12-29 06:40:49,400 QPS[0.03333333333333333 q/s]
+2021-12-29 06:40:49,400 Succ[0.0]
+2021-12-29 06:40:49,400 Error req[0]
+2021-12-29 06:40:49,401 Latency:
+2021-12-29 06:40:49,401 ave[50.848 ms]
+2021-12-29 06:40:49,401 .50[50.848 ms]
+2021-12-29 06:40:49,401 .60[50.848 ms]
+2021-12-29 06:40:49,401 .70[50.848 ms]
+2021-12-29 06:40:49,402 .80[50.848 ms]
+2021-12-29 06:40:49,402 .90[50.848 ms]
+2021-12-29 06:40:49,402 .95[50.848 ms]
+2021-12-29 06:40:49,402 .99[50.848 ms]
+2021-12-29 06:40:49,402 Channel (server worker num[20]):
+2021-12-29 06:40:49,403 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:40:49,404 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... idle TRACER channel-status blocks from 06:41:00 to 06:42:30 omitted ...]
+2021-12-29 06:42:33,522 ==================== TRACER ======================
+2021-12-29 06:42:33,523 DAGExecutor:
+2021-12-29 06:42:33,524 Query count[1]
+2021-12-29 06:42:33,524 QPS[0.03333333333333333 q/s]
+2021-12-29 06:42:33,524 Succ[0.0]
+2021-12-29 06:42:33,524 Error req[0]
+2021-12-29 06:42:33,525 Latency:
+2021-12-29 06:42:33,525 ave[44.165 ms]
+2021-12-29 06:42:33,525 .50[44.165 ms]
+2021-12-29 06:42:33,525 .60[44.165 ms]
+2021-12-29 06:42:33,525 .70[44.165 ms]
+2021-12-29 06:42:33,526 .80[44.165 ms]
+2021-12-29 06:42:33,526 .90[44.165 ms]
+2021-12-29 06:42:33,526 .95[44.165 ms]
+2021-12-29 06:42:33,526 .99[44.165 ms]
+2021-12-29 06:42:33,526 Channel (server worker num[20]):
+2021-12-29 06:42:33,527 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:42:33,528 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... idle TRACER channel-status blocks from 06:42:44 to 06:43:00 omitted ...]
+2021-12-29 06:43:14,300 ==================== TRACER ======================
+2021-12-29 06:43:14,301 DAGExecutor:
+2021-12-29 06:43:14,302 Query count[1]
+2021-12-29 06:43:14,302 QPS[0.03333333333333333 q/s]
+2021-12-29 06:43:14,302 Succ[0.0]
+2021-12-29 06:43:14,302 Error req[0]
+2021-12-29 06:43:14,302 Latency:
+2021-12-29 06:43:14,303 ave[47.774 ms]
+2021-12-29 06:43:14,303 .50[47.774 ms]
+2021-12-29 06:43:14,303 .60[47.774 ms]
+2021-12-29 06:43:14,303 .70[47.774 ms]
+2021-12-29 06:43:14,303 .80[47.774 ms]
+2021-12-29 06:43:14,303 .90[47.774 ms]
+2021-12-29 06:43:14,304 .95[47.774 ms]
+2021-12-29 06:43:14,304 .99[47.774 ms]
+2021-12-29 06:43:14,304 Channel (server worker num[20]):
+2021-12-29 06:43:14,305 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:43:14,306 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... idle TRACER channel-status blocks from 06:43:30 to 06:45:00 omitted ...]
+2021-12-29 06:45:04,370 ==================== TRACER ======================
+2021-12-29 06:45:04,371 DAGExecutor:
+2021-12-29 06:45:04,372 Query count[1]
+2021-12-29 06:45:04,372 QPS[0.03333333333333333 q/s]
+2021-12-29 06:45:04,372 Succ[0.0]
+2021-12-29 06:45:04,372 Error req[0]
+2021-12-29 06:45:04,373 Latency:
+2021-12-29 06:45:04,373 ave[52.253 ms]
+2021-12-29 06:45:04,373 .50[52.253 ms]
+2021-12-29 06:45:04,373 .60[52.253 ms]
+2021-12-29 06:45:04,373 .70[52.253 ms]
+2021-12-29 06:45:04,373 .80[52.253 ms]
+2021-12-29 06:45:04,374 .90[52.253 ms]
+2021-12-29 06:45:04,374 .95[52.253 ms]
+2021-12-29 06:45:04,374 .99[52.253 ms]
+2021-12-29 06:45:04,374 Channel (server worker num[20]):
+2021-12-29 06:45:04,375 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:45:04,376 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... idle TRACER channel-status blocks from 06:45:30 to 06:46:30 omitted ...]
+2021-12-29 06:46:49,186 ==================== TRACER ======================
+2021-12-29 06:46:49,187 DAGExecutor:
+2021-12-29 06:46:49,187 Query count[1]
+2021-12-29 06:46:49,188 QPS[0.03333333333333333 q/s]
+2021-12-29 06:46:49,188 Succ[0.0]
+2021-12-29 06:46:49,188 Error req[0]
+2021-12-29 06:46:49,188 Latency:
+2021-12-29 06:46:49,188 ave[98.181 ms]
+2021-12-29 06:46:49,189 .50[98.181 ms]
+2021-12-29 06:46:49,189 .60[98.181 ms]
+2021-12-29 06:46:49,189 .70[98.181 ms]
+2021-12-29 06:46:49,189 .80[98.181 ms]
+2021-12-29 06:46:49,189 .90[98.181 ms]
+2021-12-29 06:46:49,190 .95[98.181 ms]
+2021-12-29 06:46:49,190 .99[98.181 ms]
+2021-12-29 06:46:49,190 Channel (server worker num[20]):
+2021-12-29 06:46:49,191 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:46:49,191 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+[... idle TRACER channel-status blocks from 06:47:00 to 06:51:30 omitted ...]
+2021-12-29 06:51:31,212 ==================== TRACER ======================
+2021-12-29 06:51:31,213 DAGExecutor:
+2021-12-29 06:51:31,213 Query count[1]
+2021-12-29 06:51:31,214 QPS[0.03333333333333333 q/s]
+2021-12-29 06:51:31,214 Succ[0.0]
+2021-12-29 06:51:31,214 Error req[0]
+2021-12-29 06:51:31,214 Latency:
+2021-12-29 06:51:31,214 ave[1632.321 ms]
+2021-12-29 06:51:31,215 .50[1632.321 ms]
+2021-12-29 06:51:31,215 .60[1632.321 ms]
+2021-12-29 06:51:31,215 .70[1632.321 ms]
+2021-12-29 06:51:31,215 .80[1632.321 ms]
+2021-12-29 06:51:31,215 .90[1632.321 ms]
+2021-12-29 06:51:31,216 .95[1632.321 ms]
+2021-12-29 06:51:31,216 .99[1632.321 ms]
+2021-12-29 06:51:31,216 Channel (server worker num[20]):
+2021-12-29 06:51:31,217 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:51:31,217 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (4 repeated idle TRACER channel-status blocks, 06:51:57–06:52:30, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 06:52:44,008 ==================== TRACER ======================
+2021-12-29 06:52:44,009 DAGExecutor:
+2021-12-29 06:52:44,009 Query count[1]
+2021-12-29 06:52:44,009 QPS[0.03333333333333333 q/s]
+2021-12-29 06:52:44,009 Succ[0.0]
+2021-12-29 06:52:44,010 Error req[0]
+2021-12-29 06:52:44,010 Latency:
+2021-12-29 06:52:44,010 ave[1637.304 ms]
+2021-12-29 06:52:44,010 .50[1637.304 ms]
+2021-12-29 06:52:44,010 .60[1637.304 ms]
+2021-12-29 06:52:44,011 .70[1637.304 ms]
+2021-12-29 06:52:44,011 .80[1637.304 ms]
+2021-12-29 06:52:44,011 .90[1637.304 ms]
+2021-12-29 06:52:44,011 .95[1637.304 ms]
+2021-12-29 06:52:44,011 .99[1637.304 ms]
+2021-12-29 06:52:44,012 Channel (server worker num[20]):
+2021-12-29 06:52:44,012 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:52:44,013 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (6 repeated idle TRACER channel-status blocks, 06:53:00–06:54:00, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 06:54:30,768 ==================== TRACER ======================
+2021-12-29 06:54:30,769 DAGExecutor:
+2021-12-29 06:54:30,769 Query count[1]
+2021-12-29 06:54:30,769 QPS[0.03333333333333333 q/s]
+2021-12-29 06:54:30,770 Succ[0.0]
+2021-12-29 06:54:30,770 Error req[0]
+2021-12-29 06:54:30,770 Latency:
+2021-12-29 06:54:30,770 ave[1731.024 ms]
+2021-12-29 06:54:30,770 .50[1731.024 ms]
+2021-12-29 06:54:30,770 .60[1731.024 ms]
+2021-12-29 06:54:30,771 .70[1731.024 ms]
+2021-12-29 06:54:30,771 .80[1731.024 ms]
+2021-12-29 06:54:30,771 .90[1731.024 ms]
+2021-12-29 06:54:30,771 .95[1731.024 ms]
+2021-12-29 06:54:30,771 .99[1731.024 ms]
+2021-12-29 06:54:30,772 Channel (server worker num[20]):
+2021-12-29 06:54:30,772 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 06:54:30,773 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (77 repeated idle TRACER channel-status blocks, 06:54:30–07:13:32, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 07:13:42,327 ==================== TRACER ======================
+2021-12-29 07:13:42,329 DAGExecutor:
+2021-12-29 07:13:42,329 Query count[1]
+2021-12-29 07:13:42,329 QPS[0.03333333333333333 q/s]
+2021-12-29 07:13:42,329 Succ[0.0]
+2021-12-29 07:13:42,329 Error req[0]
+2021-12-29 07:13:42,330 Latency:
+2021-12-29 07:13:42,330 ave[1661.686 ms]
+2021-12-29 07:13:42,330 .50[1661.686 ms]
+2021-12-29 07:13:42,330 .60[1661.686 ms]
+2021-12-29 07:13:42,330 .70[1661.686 ms]
+2021-12-29 07:13:42,331 .80[1661.686 ms]
+2021-12-29 07:13:42,331 .90[1661.686 ms]
+2021-12-29 07:13:42,331 .95[1661.686 ms]
+2021-12-29 07:13:42,331 .99[1661.686 ms]
+2021-12-29 07:13:42,331 Channel (server worker num[20]):
+2021-12-29 07:13:42,332 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:13:42,333 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (12 repeated idle TRACER channel-status blocks, 07:14:02–07:16:32, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 07:16:57,960 ==================== TRACER ======================
+2021-12-29 07:16:57,961 DAGExecutor:
+2021-12-29 07:16:57,962 Query count[1]
+2021-12-29 07:16:57,962 QPS[0.03333333333333333 q/s]
+2021-12-29 07:16:57,962 Succ[0.0]
+2021-12-29 07:16:57,962 Error req[0]
+2021-12-29 07:16:57,962 Latency:
+2021-12-29 07:16:57,962 ave[76.653 ms]
+2021-12-29 07:16:57,963 .50[76.653 ms]
+2021-12-29 07:16:57,963 .60[76.653 ms]
+2021-12-29 07:16:57,963 .70[76.653 ms]
+2021-12-29 07:16:57,963 .80[76.653 ms]
+2021-12-29 07:16:57,963 .90[76.653 ms]
+2021-12-29 07:16:57,963 .95[76.653 ms]
+2021-12-29 07:16:57,964 .99[76.653 ms]
+2021-12-29 07:16:57,964 Channel (server worker num[20]):
+2021-12-29 07:16:57,965 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:16:57,965 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (5 repeated idle TRACER channel-status blocks, 07:17:02–07:18:02, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 07:18:03,656 ==================== TRACER ======================
+2021-12-29 07:18:03,658 DAGExecutor:
+2021-12-29 07:18:03,658 Query count[1]
+2021-12-29 07:18:03,658 QPS[0.03333333333333333 q/s]
+2021-12-29 07:18:03,658 Succ[0.0]
+2021-12-29 07:18:03,658 Error req[0]
+2021-12-29 07:18:03,659 Latency:
+2021-12-29 07:18:03,659 ave[1646.997 ms]
+2021-12-29 07:18:03,659 .50[1646.997 ms]
+2021-12-29 07:18:03,659 .60[1646.997 ms]
+2021-12-29 07:18:03,659 .70[1646.997 ms]
+2021-12-29 07:18:03,659 .80[1646.997 ms]
+2021-12-29 07:18:03,660 .90[1646.997 ms]
+2021-12-29 07:18:03,660 .95[1646.997 ms]
+2021-12-29 07:18:03,660 .99[1646.997 ms]
+2021-12-29 07:18:03,660 Channel (server worker num[20]):
+2021-12-29 07:18:03,661 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:18:03,662 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (6 repeated idle TRACER channel-status blocks, 07:18:32–07:19:32, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 07:19:36,278 ==================== TRACER ======================
+2021-12-29 07:19:36,279 DAGExecutor:
+2021-12-29 07:19:36,279 Query count[1]
+2021-12-29 07:19:36,279 QPS[0.03333333333333333 q/s]
+2021-12-29 07:19:36,280 Succ[0.0]
+2021-12-29 07:19:36,280 Error req[0]
+2021-12-29 07:19:36,280 Latency:
+2021-12-29 07:19:36,280 ave[1630.707 ms]
+2021-12-29 07:19:36,280 .50[1630.707 ms]
+2021-12-29 07:19:36,280 .60[1630.707 ms]
+2021-12-29 07:19:36,281 .70[1630.707 ms]
+2021-12-29 07:19:36,281 .80[1630.707 ms]
+2021-12-29 07:19:36,281 .90[1630.707 ms]
+2021-12-29 07:19:36,281 .95[1630.707 ms]
+2021-12-29 07:19:36,281 .99[1630.707 ms]
+2021-12-29 07:19:36,282 Channel (server worker num[20]):
+2021-12-29 07:19:36,282 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:19:36,283 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+... (59 repeated idle TRACER channel-status blocks, 07:20:02–07:34:33, omitted; chl0/chl1 stayed at size[0/0]) ...
+2021-12-29 07:34:37,185 ==================== TRACER ======================
+2021-12-29 07:34:37,186 Channel (server worker num[20]):
+2021-12-29 07:34:37,186 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:34:37,187 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:35:03,366 ==================== TRACER ======================
+2021-12-29 07:35:03,367 Channel (server worker num[20]):
+2021-12-29 07:35:03,368 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:35:03,369 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:35:07,218 ==================== TRACER ======================
+2021-12-29 07:35:07,218 Channel (server worker num[20]):
+2021-12-29 07:35:07,221 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:35:07,222 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:35:33,378 ==================== TRACER ======================
+2021-12-29 07:35:33,379 Channel (server worker num[20]):
+2021-12-29 07:35:33,380 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:35:33,381 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:35:37,252 ==================== TRACER ======================
+2021-12-29 07:35:37,253 Channel (server worker num[20]):
+2021-12-29 07:35:37,254 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:35:37,255 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:36:03,411 ==================== TRACER ======================
+2021-12-29 07:36:03,412 Channel (server worker num[20]):
+2021-12-29 07:36:03,413 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:36:03,413 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:36:07,257 ==================== TRACER ======================
+2021-12-29 07:36:07,257 Channel (server worker num[20]):
+2021-12-29 07:36:07,260 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:36:07,261 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:36:33,444 ==================== TRACER ======================
+2021-12-29 07:36:33,445 Channel (server worker num[20]):
+2021-12-29 07:36:33,445 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:36:33,446 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:36:37,291 ==================== TRACER ======================
+2021-12-29 07:36:37,292 Channel (server worker num[20]):
+2021-12-29 07:36:37,293 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:36:37,294 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:37:03,477 ==================== TRACER ======================
+2021-12-29 07:37:03,477 Channel (server worker num[20]):
+2021-12-29 07:37:03,478 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:37:03,479 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:37:07,324 ==================== TRACER ======================
+2021-12-29 07:37:07,325 Channel (server worker num[20]):
+2021-12-29 07:37:07,326 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:37:07,326 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:37:33,509 ==================== TRACER ======================
+2021-12-29 07:37:33,510 Channel (server worker num[20]):
+2021-12-29 07:37:33,511 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:37:33,512 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:37:37,357 ==================== TRACER ======================
+2021-12-29 07:37:37,358 Channel (server worker num[20]):
+2021-12-29 07:37:37,358 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:37:37,359 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:38:03,542 ==================== TRACER ======================
+2021-12-29 07:38:03,543 Channel (server worker num[20]):
+2021-12-29 07:38:03,544 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:38:03,545 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:38:07,365 ==================== TRACER ======================
+2021-12-29 07:38:07,365 Channel (server worker num[20]):
+2021-12-29 07:38:07,366 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:38:07,367 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:38:33,565 ==================== TRACER ======================
+2021-12-29 07:38:33,566 Channel (server worker num[20]):
+2021-12-29 07:38:33,567 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:38:33,568 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:38:37,397 ==================== TRACER ======================
+2021-12-29 07:38:37,398 Channel (server worker num[20]):
+2021-12-29 07:38:37,399 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:38:37,400 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:39:03,592 ==================== TRACER ======================
+2021-12-29 07:39:03,593 Channel (server worker num[20]):
+2021-12-29 07:39:03,594 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:39:03,595 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:39:07,430 ==================== TRACER ======================
+2021-12-29 07:39:07,431 Channel (server worker num[20]):
+2021-12-29 07:39:07,432 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:39:07,432 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:39:33,625 ==================== TRACER ======================
+2021-12-29 07:39:33,626 Channel (server worker num[20]):
+2021-12-29 07:39:33,627 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:39:33,628 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:39:37,463 ==================== TRACER ======================
+2021-12-29 07:39:37,464 Channel (server worker num[20]):
+2021-12-29 07:39:37,465 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:39:37,465 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:40:03,658 ==================== TRACER ======================
+2021-12-29 07:40:03,659 Channel (server worker num[20]):
+2021-12-29 07:40:03,660 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:40:03,661 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:40:07,475 ==================== TRACER ======================
+2021-12-29 07:40:07,475 Channel (server worker num[20]):
+2021-12-29 07:40:07,476 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:40:07,477 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:40:33,680 ==================== TRACER ======================
+2021-12-29 07:40:33,681 Channel (server worker num[20]):
+2021-12-29 07:40:33,681 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:40:33,682 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:40:37,507 ==================== TRACER ======================
+2021-12-29 07:40:37,508 Channel (server worker num[20]):
+2021-12-29 07:40:37,509 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:40:37,510 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:41:03,713 ==================== TRACER ======================
+2021-12-29 07:41:03,714 Channel (server worker num[20]):
+2021-12-29 07:41:03,714 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:41:03,715 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:41:07,516 ==================== TRACER ======================
+2021-12-29 07:41:07,517 Channel (server worker num[20]):
+2021-12-29 07:41:07,518 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:41:07,518 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:41:33,744 ==================== TRACER ======================
+2021-12-29 07:41:33,745 Channel (server worker num[20]):
+2021-12-29 07:41:33,746 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:41:33,746 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:41:37,549 ==================== TRACER ======================
+2021-12-29 07:41:37,550 Channel (server worker num[20]):
+2021-12-29 07:41:37,550 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:41:37,551 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:42:11,746 ==================== TRACER ======================
+2021-12-29 07:42:11,748 Channel (server worker num[20]):
+2021-12-29 07:42:11,754 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:42:11,755 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:42:41,785 ==================== TRACER ======================
+2021-12-29 07:42:41,787 DAGExecutor:
+2021-12-29 07:42:41,787 Query count[1]
+2021-12-29 07:42:41,787 QPS[0.03333333333333333 q/s]
+2021-12-29 07:42:41,787 Succ[0.0]
+2021-12-29 07:42:41,787 Error req[0]
+2021-12-29 07:42:41,788 Latency:
+2021-12-29 07:42:41,788 ave[1679.686 ms]
+2021-12-29 07:42:41,788 .50[1679.686 ms]
+2021-12-29 07:42:41,788 .60[1679.686 ms]
+2021-12-29 07:42:41,788 .70[1679.686 ms]
+2021-12-29 07:42:41,788 .80[1679.686 ms]
+2021-12-29 07:42:41,789 .90[1679.686 ms]
+2021-12-29 07:42:41,789 .95[1679.686 ms]
+2021-12-29 07:42:41,789 .99[1679.686 ms]
+2021-12-29 07:42:41,789 Channel (server worker num[20]):
+2021-12-29 07:42:41,790 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:42:41,791 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-29 07:50:28,581 ==================== TRACER ======================
+2021-12-29 07:50:28,582 DAGExecutor:
+2021-12-29 07:50:28,582 Query count[1]
+2021-12-29 07:50:28,583 QPS[0.03333333333333333 q/s]
+2021-12-29 07:50:28,583 Succ[0.0]
+2021-12-29 07:50:28,583 Error req[0]
+2021-12-29 07:50:28,583 Latency:
+2021-12-29 07:50:28,583 ave[1878.876 ms]
+2021-12-29 07:50:28,583 .50[1878.876 ms]
+2021-12-29 07:50:28,584 .60[1878.876 ms]
+2021-12-29 07:50:28,584 .70[1878.876 ms]
+2021-12-29 07:50:28,584 .80[1878.876 ms]
+2021-12-29 07:50:28,584 .90[1878.876 ms]
+2021-12-29 07:50:28,584 .95[1878.876 ms]
+2021-12-29 07:50:28,585 .99[1878.876 ms]
+2021-12-29 07:50:28,585 Channel (server worker num[20]):
+2021-12-29 07:50:28,586 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-29 07:50:28,586 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 06:53:46,384 ==================== TRACER ======================
+2021-12-30 06:53:46,385 DAGExecutor:
+2021-12-30 06:53:46,386 Query count[1]
+2021-12-30 06:53:46,386 QPS[0.03333333333333333 q/s]
+2021-12-30 06:53:46,386 Succ[0.0]
+2021-12-30 06:53:46,386 Error req[0]
+2021-12-30 06:53:46,386 Latency:
+2021-12-30 06:53:46,387 ave[1711.484 ms]
+2021-12-30 06:53:46,387 .50[1711.484 ms]
+2021-12-30 06:53:46,387 .60[1711.484 ms]
+2021-12-30 06:53:46,387 .70[1711.484 ms]
+2021-12-30 06:53:46,387 .80[1711.484 ms]
+2021-12-30 06:53:46,387 .90[1711.484 ms]
+2021-12-30 06:53:46,388 .95[1711.484 ms]
+2021-12-30 06:53:46,388 .99[1711.484 ms]
+2021-12-30 06:53:46,388 Channel (server worker num[20]):
+2021-12-30 06:53:46,389 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 06:53:46,389 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 07:57:36,951 ==================== TRACER ======================
+2021-12-30 07:57:36,953 DAGExecutor:
+2021-12-30 07:57:36,953 Query count[1]
+2021-12-30 07:57:36,953 QPS[0.03333333333333333 q/s]
+2021-12-30 07:57:36,953 Succ[0.0]
+2021-12-30 07:57:36,953 Error req[0]
+2021-12-30 07:57:36,954 Latency:
+2021-12-30 07:57:36,954 ave[1767.715 ms]
+2021-12-30 07:57:36,954 .50[1767.715 ms]
+2021-12-30 07:57:36,954 .60[1767.715 ms]
+2021-12-30 07:57:36,954 .70[1767.715 ms]
+2021-12-30 07:57:36,954 .80[1767.715 ms]
+2021-12-30 07:57:36,955 .90[1767.715 ms]
+2021-12-30 07:57:36,955 .95[1767.715 ms]
+2021-12-30 07:57:36,955 .99[1767.715 ms]
+2021-12-30 07:57:36,955 Channel (server worker num[20]):
+2021-12-30 07:57:36,956 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 07:57:36,957 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:12:37,278 ==================== TRACER ======================
+2021-12-30 08:12:37,280 DAGExecutor:
+2021-12-30 08:12:37,280 Query count[1]
+2021-12-30 08:12:37,280 QPS[0.03333333333333333 q/s]
+2021-12-30 08:12:37,280 Succ[0.0]
+2021-12-30 08:12:37,281 Error req[0]
+2021-12-30 08:12:37,281 Latency:
+2021-12-30 08:12:37,281 ave[1666.015 ms]
+2021-12-30 08:12:37,281 .50[1666.015 ms]
+2021-12-30 08:12:37,281 .60[1666.015 ms]
+2021-12-30 08:12:37,281 .70[1666.015 ms]
+2021-12-30 08:12:37,282 .80[1666.015 ms]
+2021-12-30 08:12:37,282 .90[1666.015 ms]
+2021-12-30 08:12:37,282 .95[1666.015 ms]
+2021-12-30 08:12:37,282 .99[1666.015 ms]
+2021-12-30 08:12:37,282 Channel (server worker num[20]):
+2021-12-30 08:12:37,283 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:12:37,284 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:14:15,823 ==================== TRACER ======================
+2021-12-30 08:14:15,824 DAGExecutor:
+2021-12-30 08:14:15,825 Query count[1]
+2021-12-30 08:14:15,825 QPS[0.03333333333333333 q/s]
+2021-12-30 08:14:15,825 Succ[0.0]
+2021-12-30 08:14:15,825 Error req[0]
+2021-12-30 08:14:15,825 Latency:
+2021-12-30 08:14:15,826 ave[1659.131 ms]
+2021-12-30 08:14:15,826 .50[1659.131 ms]
+2021-12-30 08:14:15,826 .60[1659.131 ms]
+2021-12-30 08:14:15,826 .70[1659.131 ms]
+2021-12-30 08:14:15,826 .80[1659.131 ms]
+2021-12-30 08:14:15,826 .90[1659.131 ms]
+2021-12-30 08:14:15,827 .95[1659.131 ms]
+2021-12-30 08:14:15,827 .99[1659.131 ms]
+2021-12-30 08:14:15,827 Channel (server worker num[20]):
+2021-12-30 08:14:15,828 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:14:15,828 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:18:01,049 ==================== TRACER ======================
+2021-12-30 08:18:01,050 DAGExecutor:
+2021-12-30 08:18:01,050 Query count[1]
+2021-12-30 08:18:01,050 QPS[0.03333333333333333 q/s]
+2021-12-30 08:18:01,050 Succ[0.0]
+2021-12-30 08:18:01,051 Error req[0]
+2021-12-30 08:18:01,051 Latency:
+2021-12-30 08:18:01,051 ave[1718.021 ms]
+2021-12-30 08:18:01,051 .50[1718.021 ms]
+2021-12-30 08:18:01,051 .60[1718.021 ms]
+2021-12-30 08:18:01,051 .70[1718.021 ms]
+2021-12-30 08:18:01,052 .80[1718.021 ms]
+2021-12-30 08:18:01,052 .90[1718.021 ms]
+2021-12-30 08:18:01,052 .95[1718.021 ms]
+2021-12-30 08:18:01,052 .99[1718.021 ms]
+2021-12-30 08:18:01,052 Channel (server worker num[20]):
+2021-12-30 08:18:01,053 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:18:01,054 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:20:43,337 ==================== TRACER ======================
+2021-12-30 08:20:43,339 DAGExecutor:
+2021-12-30 08:20:43,339 Query count[1]
+2021-12-30 08:20:43,339 QPS[0.03333333333333333 q/s]
+2021-12-30 08:20:43,339 Succ[0.0]
+2021-12-30 08:20:43,339 Error req[0]
+2021-12-30 08:20:43,340 Latency:
+2021-12-30 08:20:43,340 ave[1664.843 ms]
+2021-12-30 08:20:43,340 .50[1664.843 ms]
+2021-12-30 08:20:43,340 .60[1664.843 ms]
+2021-12-30 08:20:43,340 .70[1664.843 ms]
+2021-12-30 08:20:43,341 .80[1664.843 ms]
+2021-12-30 08:20:43,341 .90[1664.843 ms]
+2021-12-30 08:20:43,341 .95[1664.843 ms]
+2021-12-30 08:20:43,341 .99[1664.843 ms]
+2021-12-30 08:20:43,341 Channel (server worker num[20]):
+2021-12-30 08:20:43,342 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:20:43,343 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:32:24,900 ==================== TRACER ======================
+2021-12-30 08:32:24,901 DAGExecutor:
+2021-12-30 08:32:24,902 Query count[1]
+2021-12-30 08:32:24,902 QPS[0.03333333333333333 q/s]
+2021-12-30 08:32:24,902 Succ[0.0]
+2021-12-30 08:32:24,902 Error req[0]
+2021-12-30 08:32:24,902 Latency:
+2021-12-30 08:32:24,903 ave[1698.932 ms]
+2021-12-30 08:32:24,903 .50[1698.932 ms]
+2021-12-30 08:32:24,903 .60[1698.932 ms]
+2021-12-30 08:32:24,903 .70[1698.932 ms]
+2021-12-30 08:32:24,903 .80[1698.932 ms]
+2021-12-30 08:32:24,903 .90[1698.932 ms]
+2021-12-30 08:32:24,904 .95[1698.932 ms]
+2021-12-30 08:32:24,904 .99[1698.932 ms]
+2021-12-30 08:32:24,904 Channel (server worker num[20]):
+2021-12-30 08:32:24,905 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:32:24,906 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:34:51,564 ==================== TRACER ======================
+2021-12-30 08:34:51,565 DAGExecutor:
+2021-12-30 08:34:51,566 Query count[1]
+2021-12-30 08:34:51,566 QPS[0.03333333333333333 q/s]
+2021-12-30 08:34:51,566 Succ[0.0]
+2021-12-30 08:34:51,566 Error req[0]
+2021-12-30 08:34:51,566 Latency:
+2021-12-30 08:34:51,566 ave[1726.27 ms]
+2021-12-30 08:34:51,567 .50[1726.27 ms]
+2021-12-30 08:34:51,567 .60[1726.27 ms]
+2021-12-30 08:34:51,567 .70[1726.27 ms]
+2021-12-30 08:34:51,567 .80[1726.27 ms]
+2021-12-30 08:34:51,567 .90[1726.27 ms]
+2021-12-30 08:34:51,567 .95[1726.27 ms]
+2021-12-30 08:34:51,568 .99[1726.27 ms]
+2021-12-30 08:34:51,568 Channel (server worker num[20]):
+2021-12-30 08:34:51,569 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:34:51,569 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:44:02,417 ==================== TRACER ======================
+2021-12-30 08:44:02,418 DAGExecutor:
+2021-12-30 08:44:02,419 Query count[1]
+2021-12-30 08:44:02,419 QPS[0.03333333333333333 q/s]
+2021-12-30 08:44:02,419 Succ[0.0]
+2021-12-30 08:44:02,419 Error req[0]
+2021-12-30 08:44:02,419 Latency:
+2021-12-30 08:44:02,419 ave[1687.363 ms]
+2021-12-30 08:44:02,420 .50[1687.363 ms]
+2021-12-30 08:44:02,420 .60[1687.363 ms]
+2021-12-30 08:44:02,420 .70[1687.363 ms]
+2021-12-30 08:44:02,420 .80[1687.363 ms]
+2021-12-30 08:44:02,420 .90[1687.363 ms]
+2021-12-30 08:44:02,420 .95[1687.363 ms]
+2021-12-30 08:44:02,421 .99[1687.363 ms]
+2021-12-30 08:44:02,421 Channel (server worker num[20]):
+2021-12-30 08:44:02,422 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:44:02,422 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:50:02,431 ==================== TRACER ======================
+2021-12-30 08:50:02,432 DAGExecutor:
+2021-12-30 08:50:02,433 Query count[1]
+2021-12-30 08:50:02,433 QPS[0.03333333333333333 q/s]
+2021-12-30 08:50:02,433 Succ[0.0]
+2021-12-30 08:50:02,433 Error req[0]
+2021-12-30 08:50:02,433 Latency:
+2021-12-30 08:50:02,433 ave[1679.848 ms]
+2021-12-30 08:50:02,434 .50[1679.848 ms]
+2021-12-30 08:50:02,434 .60[1679.848 ms]
+2021-12-30 08:50:02,434 .70[1679.848 ms]
+2021-12-30 08:50:02,434 .80[1679.848 ms]
+2021-12-30 08:50:02,434 .90[1679.848 ms]
+2021-12-30 08:50:02,434 .95[1679.848 ms]
+2021-12-30 08:50:02,435 .99[1679.848 ms]
+2021-12-30 08:50:02,435 Channel (server worker num[20]):
+2021-12-30 08:50:02,436 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:50:02,436 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 08:53:12,517 ==================== TRACER ======================
+2021-12-30 08:53:12,518 DAGExecutor:
+2021-12-30 08:53:12,518 Query count[1]
+2021-12-30 08:53:12,518 QPS[0.03333333333333333 q/s]
+2021-12-30 08:53:12,519 Succ[0.0]
+2021-12-30 08:53:12,519 Error req[0]
+2021-12-30 08:53:12,519 Latency:
+2021-12-30 08:53:12,519 ave[1709.757 ms]
+2021-12-30 08:53:12,519 .50[1709.757 ms]
+2021-12-30 08:53:12,519 .60[1709.757 ms]
+2021-12-30 08:53:12,520 .70[1709.757 ms]
+2021-12-30 08:53:12,520 .80[1709.757 ms]
+2021-12-30 08:53:12,520 .90[1709.757 ms]
+2021-12-30 08:53:12,520 .95[1709.757 ms]
+2021-12-30 08:53:12,520 .99[1709.757 ms]
+2021-12-30 08:53:12,520 Channel (server worker num[20]):
+2021-12-30 08:53:12,521 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 08:53:12,522 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:03:42,055 ==================== TRACER ======================
+2021-12-30 09:03:42,056 DAGExecutor:
+2021-12-30 09:03:42,056 Query count[1]
+2021-12-30 09:03:42,056 QPS[0.03333333333333333 q/s]
+2021-12-30 09:03:42,057 Succ[0.0]
+2021-12-30 09:03:42,057 Error req[0]
+2021-12-30 09:03:42,057 Latency:
+2021-12-30 09:03:42,057 ave[1915.476 ms]
+2021-12-30 09:03:42,057 .50[1915.476 ms]
+2021-12-30 09:03:42,057 .60[1915.476 ms]
+2021-12-30 09:03:42,058 .70[1915.476 ms]
+2021-12-30 09:03:42,058 .80[1915.476 ms]
+2021-12-30 09:03:42,058 .90[1915.476 ms]
+2021-12-30 09:03:42,058 .95[1915.476 ms]
+2021-12-30 09:03:42,058 .99[1915.476 ms]
+2021-12-30 09:03:42,058 Channel (server worker num[20]):
+2021-12-30 09:03:42,059 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:03:42,060 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:08:36,651 ==================== TRACER ======================
+2021-12-30 09:08:36,652 DAGExecutor:
+2021-12-30 09:08:36,652 Query count[1]
+2021-12-30 09:08:36,652 QPS[0.03333333333333333 q/s]
+2021-12-30 09:08:36,653 Succ[0.0]
+2021-12-30 09:08:36,653 Error req[0]
+2021-12-30 09:08:36,653 Latency:
+2021-12-30 09:08:36,653 ave[1661.09 ms]
+2021-12-30 09:08:36,653 .50[1661.09 ms]
+2021-12-30 09:08:36,654 .60[1661.09 ms]
+2021-12-30 09:08:36,654 .70[1661.09 ms]
+2021-12-30 09:08:36,654 .80[1661.09 ms]
+2021-12-30 09:08:36,654 .90[1661.09 ms]
+2021-12-30 09:08:36,654 .95[1661.09 ms]
+2021-12-30 09:08:36,654 .99[1661.09 ms]
+2021-12-30 09:08:36,655 Channel (server worker num[20]):
+2021-12-30 09:08:36,655 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:08:36,656 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:20:41,320 ==================== TRACER ======================
+2021-12-30 09:20:41,321 DAGExecutor:
+2021-12-30 09:20:41,321 Query count[1]
+2021-12-30 09:20:41,321 QPS[0.03333333333333333 q/s]
+2021-12-30 09:20:41,322 Succ[0.0]
+2021-12-30 09:20:41,322 Error req[0]
+2021-12-30 09:20:41,322 Latency:
+2021-12-30 09:20:41,322 ave[2327.767 ms]
+2021-12-30 09:20:41,322 .50[2327.767 ms]
+2021-12-30 09:20:41,323 .60[2327.767 ms]
+2021-12-30 09:20:41,323 .70[2327.767 ms]
+2021-12-30 09:20:41,323 .80[2327.767 ms]
+2021-12-30 09:20:41,323 .90[2327.767 ms]
+2021-12-30 09:20:41,323 .95[2327.767 ms]
+2021-12-30 09:20:41,323 .99[2327.767 ms]
+2021-12-30 09:20:41,324 Channel (server worker num[20]):
+2021-12-30 09:20:41,324 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:20:41,325 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:21:49,668 ==================== TRACER ======================
+2021-12-30 09:21:49,669 DAGExecutor:
+2021-12-30 09:21:49,669 Query count[1]
+2021-12-30 09:21:49,670 QPS[0.03333333333333333 q/s]
+2021-12-30 09:21:49,670 Succ[0.0]
+2021-12-30 09:21:49,670 Error req[0]
+2021-12-30 09:21:49,670 Latency:
+2021-12-30 09:21:49,670 ave[1667.456 ms]
+2021-12-30 09:21:49,670 .50[1667.456 ms]
+2021-12-30 09:21:49,671 .60[1667.456 ms]
+2021-12-30 09:21:49,671 .70[1667.456 ms]
+2021-12-30 09:21:49,671 .80[1667.456 ms]
+2021-12-30 09:21:49,671 .90[1667.456 ms]
+2021-12-30 09:21:49,671 .95[1667.456 ms]
+2021-12-30 09:21:49,671 .99[1667.456 ms]
+2021-12-30 09:21:49,672 Channel (server worker num[20]):
+2021-12-30 09:21:49,672 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:21:49,673 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:24:32,995 ==================== TRACER ======================
+2021-12-30 09:24:32,996 DAGExecutor:
+2021-12-30 09:24:32,997 Query count[1]
+2021-12-30 09:24:32,997 QPS[0.03333333333333333 q/s]
+2021-12-30 09:24:32,997 Succ[0.0]
+2021-12-30 09:24:32,997 Error req[0]
+2021-12-30 09:24:32,997 Latency:
+2021-12-30 09:24:32,997 ave[2969.908 ms]
+2021-12-30 09:24:32,998 .50[2969.908 ms]
+2021-12-30 09:24:32,998 .60[2969.908 ms]
+2021-12-30 09:24:32,998 .70[2969.908 ms]
+2021-12-30 09:24:32,998 .80[2969.908 ms]
+2021-12-30 09:24:32,998 .90[2969.908 ms]
+2021-12-30 09:24:32,998 .95[2969.908 ms]
+2021-12-30 09:24:32,999 .99[2969.908 ms]
+2021-12-30 09:24:32,999 Channel (server worker num[20]):
+2021-12-30 09:24:33,000 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:24:33,000 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:26:24,168 ==================== TRACER ======================
+2021-12-30 09:26:24,170 Op(ppyolo_mbv3):
+2021-12-30 09:26:24,170 in[11.296 ms]
+2021-12-30 09:26:24,170 prep[49.388 ms]
+2021-12-30 09:26:24,170 midp[1611.812 ms]
+2021-12-30 09:26:24,170 postp[11.047 ms]
+2021-12-30 09:26:24,171 out[2.017 ms]
+2021-12-30 09:26:24,171 idle[0.007898265264956517]
+2021-12-30 09:26:24,171 DAGExecutor:
+2021-12-30 09:26:24,171 Query count[1]
+2021-12-30 09:26:24,171 QPS[0.03333333333333333 q/s]
+2021-12-30 09:26:24,171 Succ[1.0]
+2021-12-30 09:26:24,172 Error req[]
+2021-12-30 09:26:24,172 Latency:
+2021-12-30 09:26:24,172 ave[1682.576 ms]
+2021-12-30 09:26:24,172 .50[1682.576 ms]
+2021-12-30 09:26:24,172 .60[1682.576 ms]
+2021-12-30 09:26:24,173 .70[1682.576 ms]
+2021-12-30 09:26:24,173 .80[1682.576 ms]
+2021-12-30 09:26:24,173 .90[1682.576 ms]
+2021-12-30 09:26:24,173 .95[1682.576 ms]
+2021-12-30 09:26:24,173 .99[1682.576 ms]
+2021-12-30 09:26:24,173 Channel (server worker num[20]):
+2021-12-30 09:26:24,174 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:26:24,175 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:39:37,287 ==================== TRACER ======================
+2021-12-30 09:39:37,288 Op(ppyolo_mbv3):
+2021-12-30 09:39:37,289 in[16052.636 ms]
+2021-12-30 09:39:37,289 prep[52.574 ms]
+2021-12-30 09:39:37,289 midp[1722.923 ms]
+2021-12-30 09:39:37,289 postp[11.854 ms]
+2021-12-30 09:39:37,289 out[1.654 ms]
+2021-12-30 09:39:37,289 idle[0.8998213785379944]
+2021-12-30 09:39:37,290 DAGExecutor:
+2021-12-30 09:39:37,290 Query count[1]
+2021-12-30 09:39:37,290 QPS[0.03333333333333333 q/s]
+2021-12-30 09:39:37,290 Succ[1.0]
+2021-12-30 09:39:37,290 Error req[]
+2021-12-30 09:39:37,291 Latency:
+2021-12-30 09:39:37,291 ave[1797.892 ms]
+2021-12-30 09:39:37,291 .50[1797.892 ms]
+2021-12-30 09:39:37,291 .60[1797.892 ms]
+2021-12-30 09:39:37,291 .70[1797.892 ms]
+2021-12-30 09:39:37,291 .80[1797.892 ms]
+2021-12-30 09:39:37,292 .90[1797.892 ms]
+2021-12-30 09:39:37,292 .95[1797.892 ms]
+2021-12-30 09:39:37,292 .99[1797.892 ms]
+2021-12-30 09:39:37,292 Channel (server worker num[20]):
+2021-12-30 09:39:37,293 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:39:37,294 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:40:07,324 ==================== TRACER ======================
+2021-12-30 09:40:07,325 DAGExecutor:
+2021-12-30 09:40:07,325 Query count[1]
+2021-12-30 09:40:07,325 QPS[0.03333333333333333 q/s]
+2021-12-30 09:40:07,326 Succ[0.0]
+2021-12-30 09:40:07,326 Error req[1]
+2021-12-30 09:40:07,326 Latency:
+2021-12-30 09:40:07,326 ave[118.163 ms]
+2021-12-30 09:40:07,326 .50[118.163 ms]
+2021-12-30 09:40:07,326 .60[118.163 ms]
+2021-12-30 09:40:07,327 .70[118.163 ms]
+2021-12-30 09:40:07,327 .80[118.163 ms]
+2021-12-30 09:40:07,327 .90[118.163 ms]
+2021-12-30 09:40:07,327 .95[118.163 ms]
+2021-12-30 09:40:07,327 .99[118.163 ms]
+2021-12-30 09:40:07,327 Channel (server worker num[20]):
+2021-12-30 09:40:07,328 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:40:07,329 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:40:37,359 ==================== TRACER ======================
+2021-12-30 09:40:37,360 Channel (server worker num[20]):
+2021-12-30 09:40:37,361 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:40:37,362 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:41:07,392 ==================== TRACER ======================
+2021-12-30 09:41:07,393 DAGExecutor:
+2021-12-30 09:41:07,394 Query count[2]
+2021-12-30 09:41:07,394 QPS[0.06666666666666667 q/s]
+2021-12-30 09:41:07,394 Succ[0.0]
+2021-12-30 09:41:07,394 Error req[2, 3]
+2021-12-30 09:41:07,394 Latency:
+2021-12-30 09:41:07,395 ave[91.7105 ms]
+2021-12-30 09:41:07,395 .50[110.376 ms]
+2021-12-30 09:41:07,395 .60[110.376 ms]
+2021-12-30 09:41:07,395 .70[110.376 ms]
+2021-12-30 09:41:07,395 .80[110.376 ms]
+2021-12-30 09:41:07,395 .90[110.376 ms]
+2021-12-30 09:41:07,396 .95[110.376 ms]
+2021-12-30 09:41:07,396 .99[110.376 ms]
+2021-12-30 09:41:07,396 Channel (server worker num[20]):
+2021-12-30 09:41:07,397 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:41:07,397 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:41:37,428 ==================== TRACER ======================
+2021-12-30 09:41:37,429 Channel (server worker num[20]):
+2021-12-30 09:41:37,430 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:41:37,430 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:42:54,114 ==================== TRACER ======================
+2021-12-30 09:42:54,116 Op(ppyolo_mbv3):
+2021-12-30 09:42:54,116 in[238.366 ms]
+2021-12-30 09:42:54,116 prep[78.16 ms]
+2021-12-30 09:42:54,116 midp[1605.08 ms]
+2021-12-30 09:42:54,117 postp[10.166 ms]
+2021-12-30 09:42:54,117 out[1.317 ms]
+2021-12-30 09:42:54,117 idle[0.12398963524183315]
+2021-12-30 09:42:54,117 DAGExecutor:
+2021-12-30 09:42:54,117 Query count[1]
+2021-12-30 09:42:54,117 QPS[0.03333333333333333 q/s]
+2021-12-30 09:42:54,118 Succ[1.0]
+2021-12-30 09:42:54,118 Error req[]
+2021-12-30 09:42:54,118 Latency:
+2021-12-30 09:42:54,118 ave[1705.072 ms]
+2021-12-30 09:42:54,118 .50[1705.072 ms]
+2021-12-30 09:42:54,119 .60[1705.072 ms]
+2021-12-30 09:42:54,119 .70[1705.072 ms]
+2021-12-30 09:42:54,119 .80[1705.072 ms]
+2021-12-30 09:42:54,119 .90[1705.072 ms]
+2021-12-30 09:42:54,119 .95[1705.072 ms]
+2021-12-30 09:42:54,119 .99[1705.072 ms]
+2021-12-30 09:42:54,119 Channel (server worker num[20]):
+2021-12-30 09:42:54,120 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:42:54,121 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:43:24,151 ==================== TRACER ======================
+2021-12-30 09:43:24,153 DAGExecutor:
+2021-12-30 09:43:24,153 Query count[2]
+2021-12-30 09:43:24,153 QPS[0.06666666666666667 q/s]
+2021-12-30 09:43:24,153 Succ[0.0]
+2021-12-30 09:43:24,153 Error req[1, 2]
+2021-12-30 09:43:24,154 Latency:
+2021-12-30 09:43:24,154 ave[111.03649999999999 ms]
+2021-12-30 09:43:24,154 .50[111.139 ms]
+2021-12-30 09:43:24,154 .60[111.139 ms]
+2021-12-30 09:43:24,154 .70[111.139 ms]
+2021-12-30 09:43:24,154 .80[111.139 ms]
+2021-12-30 09:43:24,155 .90[111.139 ms]
+2021-12-30 09:43:24,155 .95[111.139 ms]
+2021-12-30 09:43:24,155 .99[111.139 ms]
+2021-12-30 09:43:24,155 Channel (server worker num[20]):
+2021-12-30 09:43:24,156 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:43:24,156 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:43:54,187 ==================== TRACER ======================
+2021-12-30 09:43:54,188 Channel (server worker num[20]):
+2021-12-30 09:43:54,188 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:43:54,189 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:45:24,968 ==================== TRACER ======================
+2021-12-30 09:45:24,970 Op(ppyolo_mbv3):
+2021-12-30 09:45:24,970 in[1609.535 ms]
+2021-12-30 09:45:24,970 prep[67.209 ms]
+2021-12-30 09:45:24,970 midp[1766.049 ms]
+2021-12-30 09:45:24,970 postp[11.493 ms]
+2021-12-30 09:45:24,971 out[1.567 ms]
+2021-12-30 09:45:24,971 idle[0.4661951767045647]
+2021-12-30 09:45:24,971 DAGExecutor:
+2021-12-30 09:45:24,971 Query count[2]
+2021-12-30 09:45:24,971 QPS[0.06666666666666667 q/s]
+2021-12-30 09:45:24,972 Succ[0.5]
+2021-12-30 09:45:24,972 Error req[1]
+2021-12-30 09:45:24,972 Latency:
+2021-12-30 09:45:24,972 ave[984.508 ms]
+2021-12-30 09:45:24,972 .50[1859.535 ms]
+2021-12-30 09:45:24,972 .60[1859.535 ms]
+2021-12-30 09:45:24,973 .70[1859.535 ms]
+2021-12-30 09:45:24,973 .80[1859.535 ms]
+2021-12-30 09:45:24,973 .90[1859.535 ms]
+2021-12-30 09:45:24,973 .95[1859.535 ms]
+2021-12-30 09:45:24,973 .99[1859.535 ms]
+2021-12-30 09:45:24,973 Channel (server worker num[20]):
+2021-12-30 09:45:24,974 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:45:24,975 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:45:55,005 ==================== TRACER ======================
+2021-12-30 09:45:55,006 Channel (server worker num[20]):
+2021-12-30 09:45:55,007 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:45:55,007 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:46:47,838 ==================== TRACER ======================
+2021-12-30 09:46:47,840 Op(ppyolo_mbv3):
+2021-12-30 09:46:47,840 in[842.57 ms]
+2021-12-30 09:46:47,840 prep[79.406 ms]
+2021-12-30 09:46:47,841 midp[1591.784 ms]
+2021-12-30 09:46:47,841 postp[10.156 ms]
+2021-12-30 09:46:47,841 out[1.137 ms]
+2021-12-30 09:46:47,841 idle[0.33413437262504997]
+2021-12-30 09:46:47,841 DAGExecutor:
+2021-12-30 09:46:47,841 Query count[2]
+2021-12-30 09:46:47,842 QPS[0.06666666666666667 q/s]
+2021-12-30 09:46:47,842 Succ[0.5]
+2021-12-30 09:46:47,842 Error req[1]
+2021-12-30 09:46:47,842 Latency:
+2021-12-30 09:46:47,842 ave[910.8975 ms]
+2021-12-30 09:46:47,842 .50[1695.613 ms]
+2021-12-30 09:46:47,843 .60[1695.613 ms]
+2021-12-30 09:46:47,843 .70[1695.613 ms]
+2021-12-30 09:46:47,843 .80[1695.613 ms]
+2021-12-30 09:46:47,843 .90[1695.613 ms]
+2021-12-30 09:46:47,843 .95[1695.613 ms]
+2021-12-30 09:46:47,843 .99[1695.613 ms]
+2021-12-30 09:46:47,844 Channel (server worker num[20]):
+2021-12-30 09:46:47,844 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:46:47,845 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 09:47:17,876 ==================== TRACER ======================
+2021-12-30 09:47:17,877 Channel (server worker num[20]):
+2021-12-30 09:47:17,877 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 09:47:17,878 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:05:52,732 ==================== TRACER ======================
+2021-12-30 11:05:52,734 DAGExecutor:
+2021-12-30 11:05:52,734 Query count[1]
+2021-12-30 11:05:52,734 QPS[0.03333333333333333 q/s]
+2021-12-30 11:05:52,734 Succ[0.0]
+2021-12-30 11:05:52,734 Error req[2]
+2021-12-30 11:05:52,735 Latency:
+2021-12-30 11:05:52,735 ave[111.508 ms]
+2021-12-30 11:05:52,735 .50[111.508 ms]
+2021-12-30 11:05:52,735 .60[111.508 ms]
+2021-12-30 11:05:52,735 .70[111.508 ms]
+2021-12-30 11:05:52,736 .80[111.508 ms]
+2021-12-30 11:05:52,736 .90[111.508 ms]
+2021-12-30 11:05:52,736 .95[111.508 ms]
+2021-12-30 11:05:52,736 .99[111.508 ms]
+2021-12-30 11:05:52,736 Channel (server worker num[20]):
+2021-12-30 11:05:52,737 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:05:52,738 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:05:54,677 ==================== TRACER ======================
+2021-12-30 11:05:54,679 Channel (server worker num[20]):
+2021-12-30 11:05:54,681 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:05:54,682 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:06:24,712 ==================== TRACER ======================
+2021-12-30 11:06:24,714 Op(ppyolo_mbv3):
+2021-12-30 11:06:24,714 in[3.782 ms]
+2021-12-30 11:06:24,714 prep[61.954 ms]
+2021-12-30 11:06:24,715 midp[1570.405 ms]
+2021-12-30 11:06:24,715 postp[10.504 ms]
+2021-12-30 11:06:24,715 out[1.288 ms]
+2021-12-30 11:06:24,715 idle[0.003076581390141553]
+2021-12-30 11:06:24,715 DAGExecutor:
+2021-12-30 11:06:24,715 Query count[1]
+2021-12-30 11:06:24,716 QPS[0.03333333333333333 q/s]
+2021-12-30 11:06:24,716 Succ[1.0]
+2021-12-30 11:06:24,716 Error req[]
+2021-12-30 11:06:24,716 Latency:
+2021-12-30 11:06:24,716 ave[2632.994 ms]
+2021-12-30 11:06:24,716 .50[2632.994 ms]
+2021-12-30 11:06:24,717 .60[2632.994 ms]
+2021-12-30 11:06:24,717 .70[2632.994 ms]
+2021-12-30 11:06:24,717 .80[2632.994 ms]
+2021-12-30 11:06:24,717 .90[2632.994 ms]
+2021-12-30 11:06:24,717 .95[2632.994 ms]
+2021-12-30 11:06:24,718 .99[2632.994 ms]
+2021-12-30 11:06:24,718 Channel (server worker num[20]):
+2021-12-30 11:06:24,718 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:06:24,719 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:06:54,750 ==================== TRACER ======================
+2021-12-30 11:06:54,750 Channel (server worker num[20]):
+2021-12-30 11:06:54,751 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:06:54,752 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:07:24,782 ==================== TRACER ======================
+2021-12-30 11:07:24,783 Channel (server worker num[20]):
+2021-12-30 11:07:24,784 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:07:24,785 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:07:54,815 ==================== TRACER ======================
+2021-12-30 11:07:54,816 Channel (server worker num[20]):
+2021-12-30 11:07:54,817 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:07:54,818 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:08:24,848 ==================== TRACER ======================
+2021-12-30 11:08:24,849 Channel (server worker num[20]):
+2021-12-30 11:08:24,850 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:08:24,851 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:08:54,881 ==================== TRACER ======================
+2021-12-30 11:08:54,882 Channel (server worker num[20]):
+2021-12-30 11:08:54,883 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:08:54,883 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:09:24,914 ==================== TRACER ======================
+2021-12-30 11:09:24,915 Channel (server worker num[20]):
+2021-12-30 11:09:24,915 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:09:24,916 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:09:54,946 ==================== TRACER ======================
+2021-12-30 11:09:54,947 Channel (server worker num[20]):
+2021-12-30 11:09:54,948 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:09:54,949 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:10:24,979 ==================== TRACER ======================
+2021-12-30 11:10:24,980 Channel (server worker num[20]):
+2021-12-30 11:10:24,981 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:10:24,982 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:10:55,012 ==================== TRACER ======================
+2021-12-30 11:10:55,013 Channel (server worker num[20]):
+2021-12-30 11:10:55,014 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:10:55,015 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:11:25,045 ==================== TRACER ======================
+2021-12-30 11:11:25,046 Channel (server worker num[20]):
+2021-12-30 11:11:25,047 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:11:25,047 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:11:55,078 ==================== TRACER ======================
+2021-12-30 11:11:55,079 Channel (server worker num[20]):
+2021-12-30 11:11:55,080 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:11:55,080 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:12:25,111 ==================== TRACER ======================
+2021-12-30 11:12:25,112 Channel (server worker num[20]):
+2021-12-30 11:12:25,112 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:12:25,113 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:12:55,144 ==================== TRACER ======================
+2021-12-30 11:12:55,144 Channel (server worker num[20]):
+2021-12-30 11:12:55,145 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:12:55,146 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:13:25,176 ==================== TRACER ======================
+2021-12-30 11:13:25,177 Channel (server worker num[20]):
+2021-12-30 11:13:25,178 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:13:25,179 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:13:55,209 ==================== TRACER ======================
+2021-12-30 11:13:55,210 Channel (server worker num[20]):
+2021-12-30 11:13:55,211 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:13:55,212 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:14:25,242 ==================== TRACER ======================
+2021-12-30 11:14:25,243 Channel (server worker num[20]):
+2021-12-30 11:14:25,244 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:14:25,244 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:14:55,275 ==================== TRACER ======================
+2021-12-30 11:14:55,276 Channel (server worker num[20]):
+2021-12-30 11:14:55,276 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:14:55,277 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:15:25,308 ==================== TRACER ======================
+2021-12-30 11:15:25,308 Channel (server worker num[20]):
+2021-12-30 11:15:25,309 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:15:25,310 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:15:55,340 ==================== TRACER ======================
+2021-12-30 11:15:55,341 DAGExecutor:
+2021-12-30 11:15:55,343 Query count[1]
+2021-12-30 11:15:55,343 QPS[0.03333333333333333 q/s]
+2021-12-30 11:15:55,343 Succ[0.0]
+2021-12-30 11:15:55,344 Error req[1]
+2021-12-30 11:15:55,344 Latency:
+2021-12-30 11:15:55,344 ave[115.746 ms]
+2021-12-30 11:15:55,344 .50[115.746 ms]
+2021-12-30 11:15:55,344 .60[115.746 ms]
+2021-12-30 11:15:55,344 .70[115.746 ms]
+2021-12-30 11:15:55,345 .80[115.746 ms]
+2021-12-30 11:15:55,345 .90[115.746 ms]
+2021-12-30 11:15:55,345 .95[115.746 ms]
+2021-12-30 11:15:55,345 .99[115.746 ms]
+2021-12-30 11:15:55,345 Channel (server worker num[20]):
+2021-12-30 11:15:55,346 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:15:55,347 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:16:25,377 ==================== TRACER ======================
+2021-12-30 11:16:25,378 Channel (server worker num[20]):
+2021-12-30 11:16:25,379 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:16:25,380 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:16:55,410 ==================== TRACER ======================
+2021-12-30 11:16:55,411 Channel (server worker num[20]):
+2021-12-30 11:16:55,412 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:16:55,412 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:17:25,443 ==================== TRACER ======================
+2021-12-30 11:17:25,444 Channel (server worker num[20]):
+2021-12-30 11:17:25,444 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:17:25,445 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:17:55,476 ==================== TRACER ======================
+2021-12-30 11:17:55,476 Channel (server worker num[20]):
+2021-12-30 11:17:55,477 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:17:55,478 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:18:25,508 ==================== TRACER ======================
+2021-12-30 11:18:25,509 Channel (server worker num[20]):
+2021-12-30 11:18:25,510 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:18:25,511 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:18:55,513 ==================== TRACER ======================
+2021-12-30 11:18:55,514 Channel (server worker num[20]):
+2021-12-30 11:18:55,515 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:18:55,515 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:19:25,546 ==================== TRACER ======================
+2021-12-30 11:19:25,546 Channel (server worker num[20]):
+2021-12-30 11:19:25,547 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:19:25,548 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:19:55,578 ==================== TRACER ======================
+2021-12-30 11:19:55,579 Channel (server worker num[20]):
+2021-12-30 11:19:55,580 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:19:55,581 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:20:25,611 ==================== TRACER ======================
+2021-12-30 11:20:25,612 Channel (server worker num[20]):
+2021-12-30 11:20:25,613 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:20:25,614 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:20:55,644 ==================== TRACER ======================
+2021-12-30 11:20:55,645 Channel (server worker num[20]):
+2021-12-30 11:20:55,646 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:20:55,647 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:21:25,664 ==================== TRACER ======================
+2021-12-30 11:21:25,665 Channel (server worker num[20]):
+2021-12-30 11:21:25,666 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:21:25,667 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:21:55,697 ==================== TRACER ======================
+2021-12-30 11:21:55,698 Channel (server worker num[20]):
+2021-12-30 11:21:55,699 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:21:55,699 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:22:25,730 ==================== TRACER ======================
+2021-12-30 11:22:25,731 Channel (server worker num[20]):
+2021-12-30 11:22:25,731 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:22:25,732 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:22:55,762 ==================== TRACER ======================
+2021-12-30 11:22:55,763 Channel (server worker num[20]):
+2021-12-30 11:22:55,764 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:22:55,765 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:23:25,768 ==================== TRACER ======================
+2021-12-30 11:23:25,769 Channel (server worker num[20]):
+2021-12-30 11:23:25,770 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:23:25,771 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:23:55,801 ==================== TRACER ======================
+2021-12-30 11:23:55,802 Channel (server worker num[20]):
+2021-12-30 11:23:55,803 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:23:55,803 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:24:24,537 ==================== TRACER ======================
+2021-12-30 11:24:24,538 Channel (server worker num[20]):
+2021-12-30 11:24:24,540 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:24:24,540 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:24:54,571 ==================== TRACER ======================
+2021-12-30 11:24:54,572 Op(ppyolo_mbv3):
+2021-12-30 11:24:54,573 in[1233.596 ms]
+2021-12-30 11:24:54,573 prep[79.632 ms]
+2021-12-30 11:24:54,573 midp[1828.419 ms]
+2021-12-30 11:24:54,573 postp[25.206 ms]
+2021-12-30 11:24:54,573 out[1.554 ms]
+2021-12-30 11:24:54,574 idle[0.38983312434292694]
+2021-12-30 11:24:54,574 DAGExecutor:
+2021-12-30 11:24:54,574 Query count[2]
+2021-12-30 11:24:54,574 QPS[0.06666666666666667 q/s]
+2021-12-30 11:24:54,574 Succ[0.5]
+2021-12-30 11:24:54,574 Error req[1]
+2021-12-30 11:24:54,575 Latency:
+2021-12-30 11:24:54,575 ave[1038.919 ms]
+2021-12-30 11:24:54,575 .50[1947.202 ms]
+2021-12-30 11:24:54,575 .60[1947.202 ms]
+2021-12-30 11:24:54,575 .70[1947.202 ms]
+2021-12-30 11:24:54,575 .80[1947.202 ms]
+2021-12-30 11:24:54,576 .90[1947.202 ms]
+2021-12-30 11:24:54,576 .95[1947.202 ms]
+2021-12-30 11:24:54,576 .99[1947.202 ms]
+2021-12-30 11:24:54,576 Channel (server worker num[20]):
+2021-12-30 11:24:54,577 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:24:54,578 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:25:24,608 ==================== TRACER ======================
+2021-12-30 11:25:24,609 Channel (server worker num[20]):
+2021-12-30 11:25:24,610 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:25:24,610 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:37:49,731 ==================== TRACER ======================
+2021-12-30 11:37:49,733 Channel (server worker num[20]):
+2021-12-30 11:37:49,735 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:37:49,736 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:38:09,368 ==================== TRACER ======================
+2021-12-30 11:38:09,370 Channel (server worker num[20]):
+2021-12-30 11:38:09,373 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:38:09,373 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:38:39,404 ==================== TRACER ======================
+2021-12-30 11:38:39,405 DAGExecutor:
+2021-12-30 11:38:39,405 Query count[1]
+2021-12-30 11:38:39,405 QPS[0.03333333333333333 q/s]
+2021-12-30 11:38:39,406 Succ[0.0]
+2021-12-30 11:38:39,406 Error req[0]
+2021-12-30 11:38:39,406 Latency:
+2021-12-30 11:38:39,406 ave[2798.136 ms]
+2021-12-30 11:38:39,406 .50[2798.136 ms]
+2021-12-30 11:38:39,406 .60[2798.136 ms]
+2021-12-30 11:38:39,407 .70[2798.136 ms]
+2021-12-30 11:38:39,407 .80[2798.136 ms]
+2021-12-30 11:38:39,407 .90[2798.136 ms]
+2021-12-30 11:38:39,407 .95[2798.136 ms]
+2021-12-30 11:38:39,407 .99[2798.136 ms]
+2021-12-30 11:38:39,407 Channel (server worker num[20]):
+2021-12-30 11:38:39,408 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:38:39,409 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:39:09,439 ==================== TRACER ======================
+2021-12-30 11:39:09,440 Channel (server worker num[20]):
+2021-12-30 11:39:09,441 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:39:09,442 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:39:39,472 ==================== TRACER ======================
+2021-12-30 11:39:39,473 Channel (server worker num[20]):
+2021-12-30 11:39:39,474 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:39:39,474 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:40:05,674 ==================== TRACER ======================
+2021-12-30 11:40:05,677 Channel (server worker num[20]):
+2021-12-30 11:40:05,679 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:40:05,679 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:40:35,710 ==================== TRACER ======================
+2021-12-30 11:40:35,712 Op(ppyolo_mbv3):
+2021-12-30 11:40:35,712 in[1703.2785 ms]
+2021-12-30 11:40:35,712 prep[66.81 ms]
+2021-12-30 11:40:35,713 midp[792.723 ms]
+2021-12-30 11:40:35,713 postp[10.012 ms]
+2021-12-30 11:40:35,713 out[1.1055 ms]
+2021-12-30 11:40:35,713 idle[0.6621721111965404]
+2021-12-30 11:40:35,713 DAGExecutor:
+2021-12-30 11:40:35,713 Query count[2]
+2021-12-30 11:40:35,714 QPS[0.06666666666666667 q/s]
+2021-12-30 11:40:35,714 Succ[1.0]
+2021-12-30 11:40:35,714 Error req[]
+2021-12-30 11:40:35,714 Latency:
+2021-12-30 11:40:35,714 ave[880.886 ms]
+2021-12-30 11:40:35,714 .50[1658.566 ms]
+2021-12-30 11:40:35,715 .60[1658.566 ms]
+2021-12-30 11:40:35,715 .70[1658.566 ms]
+2021-12-30 11:40:35,715 .80[1658.566 ms]
+2021-12-30 11:40:35,715 .90[1658.566 ms]
+2021-12-30 11:40:35,715 .95[1658.566 ms]
+2021-12-30 11:40:35,715 .99[1658.566 ms]
+2021-12-30 11:40:35,716 Channel (server worker num[20]):
+2021-12-30 11:40:35,716 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:40:35,717 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:41:05,746 ==================== TRACER ======================
+2021-12-30 11:41:05,747 Op(ppyolo_mbv3):
+2021-12-30 11:41:05,748 in[45842.715 ms]
+2021-12-30 11:41:05,748 prep[76.967 ms]
+2021-12-30 11:41:05,748 midp[18.287 ms]
+2021-12-30 11:41:05,748 postp[9.692 ms]
+2021-12-30 11:41:05,748 out[1.296 ms]
+2021-12-30 11:41:05,749 idle[0.9977160308557167]
+2021-12-30 11:41:05,749 DAGExecutor:
+2021-12-30 11:41:05,749 Query count[1]
+2021-12-30 11:41:05,749 QPS[0.03333333333333333 q/s]
+2021-12-30 11:41:05,749 Succ[1.0]
+2021-12-30 11:41:05,750 Error req[]
+2021-12-30 11:41:05,750 Latency:
+2021-12-30 11:41:05,750 ave[118.137 ms]
+2021-12-30 11:41:05,750 .50[118.137 ms]
+2021-12-30 11:41:05,750 .60[118.137 ms]
+2021-12-30 11:41:05,750 .70[118.137 ms]
+2021-12-30 11:41:05,751 .80[118.137 ms]
+2021-12-30 11:41:05,751 .90[118.137 ms]
+2021-12-30 11:41:05,751 .95[118.137 ms]
+2021-12-30 11:41:05,751 .99[118.137 ms]
+2021-12-30 11:41:05,751 Channel (server worker num[20]):
+2021-12-30 11:41:05,752 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:41:05,753 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:41:27,033 ==================== TRACER ======================
+2021-12-30 11:41:27,035 Channel (server worker num[20]):
+2021-12-30 11:41:27,037 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:41:27,038 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:41:57,060 ==================== TRACER ======================
+2021-12-30 11:41:57,062 Op(ppyolo_mbv3):
+2021-12-30 11:41:57,062 in[1008.0535 ms]
+2021-12-30 11:41:57,063 prep[61.519 ms]
+2021-12-30 11:41:57,063 midp[796.019 ms]
+2021-12-30 11:41:57,063 postp[10.4375 ms]
+2021-12-30 11:41:57,063 out[1.295 ms]
+2021-12-30 11:41:57,063 idle[0.537652797279532]
+2021-12-30 11:41:57,063 DAGExecutor:
+2021-12-30 11:41:57,064 Query count[2]
+2021-12-30 11:41:57,064 QPS[0.06666666666666667 q/s]
+2021-12-30 11:41:57,064 Succ[1.0]
+2021-12-30 11:41:57,064 Error req[]
+2021-12-30 11:41:57,064 Latency:
+2021-12-30 11:41:57,065 ave[1179.855 ms]
+2021-12-30 11:41:57,065 .50[2258.924 ms]
+2021-12-30 11:41:57,065 .60[2258.924 ms]
+2021-12-30 11:41:57,065 .70[2258.924 ms]
+2021-12-30 11:41:57,065 .80[2258.924 ms]
+2021-12-30 11:41:57,065 .90[2258.924 ms]
+2021-12-30 11:41:57,066 .95[2258.924 ms]
+2021-12-30 11:41:57,066 .99[2258.924 ms]
+2021-12-30 11:41:57,066 Channel (server worker num[20]):
+2021-12-30 11:41:57,067 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:41:57,067 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:42:27,098 ==================== TRACER ======================
+2021-12-30 11:42:27,099 Channel (server worker num[20]):
+2021-12-30 11:42:27,099 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:42:27,100 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:42:57,131 ==================== TRACER ======================
+2021-12-30 11:42:57,131 Channel (server worker num[20]):
+2021-12-30 11:42:57,132 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:42:57,133 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:43:27,163 ==================== TRACER ======================
+2021-12-30 11:43:27,164 Channel (server worker num[20]):
+2021-12-30 11:43:27,165 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:43:27,166 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:43:57,196 ==================== TRACER ======================
+2021-12-30 11:43:57,197 Channel (server worker num[20]):
+2021-12-30 11:43:57,198 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:43:57,199 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:44:27,229 ==================== TRACER ======================
+2021-12-30 11:44:27,230 Channel (server worker num[20]):
+2021-12-30 11:44:27,231 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:44:27,231 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:44:57,262 ==================== TRACER ======================
+2021-12-30 11:44:57,263 Channel (server worker num[20]):
+2021-12-30 11:44:57,264 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:44:57,264 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:45:27,295 ==================== TRACER ======================
+2021-12-30 11:45:27,296 Channel (server worker num[20]):
+2021-12-30 11:45:27,296 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:45:27,297 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:45:57,328 ==================== TRACER ======================
+2021-12-30 11:45:57,328 Channel (server worker num[20]):
+2021-12-30 11:45:57,329 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:45:57,330 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:46:27,360 ==================== TRACER ======================
+2021-12-30 11:46:27,361 Channel (server worker num[20]):
+2021-12-30 11:46:27,362 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:46:27,363 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:46:57,393 ==================== TRACER ======================
+2021-12-30 11:46:57,394 Channel (server worker num[20]):
+2021-12-30 11:46:57,395 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:46:57,396 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:47:27,426 ==================== TRACER ======================
+2021-12-30 11:47:27,427 Channel (server worker num[20]):
+2021-12-30 11:47:27,428 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:47:27,428 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:47:57,459 ==================== TRACER ======================
+2021-12-30 11:47:57,460 Channel (server worker num[20]):
+2021-12-30 11:47:57,460 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:47:57,461 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:48:27,492 ==================== TRACER ======================
+2021-12-30 11:48:27,492 Channel (server worker num[20]):
+2021-12-30 11:48:27,493 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:48:27,494 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:48:57,524 ==================== TRACER ======================
+2021-12-30 11:48:57,525 Channel (server worker num[20]):
+2021-12-30 11:48:57,526 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:48:57,527 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:49:27,557 ==================== TRACER ======================
+2021-12-30 11:49:27,558 Channel (server worker num[20]):
+2021-12-30 11:49:27,559 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:49:27,560 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:49:57,590 ==================== TRACER ======================
+2021-12-30 11:49:57,591 Channel (server worker num[20]):
+2021-12-30 11:49:57,592 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:49:57,593 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2021-12-30 11:50:27,623 ==================== TRACER ======================
+2021-12-30 11:50:27,624 Channel (server worker num[20]):
+2021-12-30 11:50:27,625 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2021-12-30 11:50:27,625 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-14 09:24:33,177 ==================== TRACER ======================
+2022-02-14 09:24:33,179 Channel (server worker num[20]):
+2022-02-14 09:24:33,182 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-14 09:24:33,182 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-14 09:24:42,822 ==================== TRACER ======================
+2022-02-14 09:24:42,824 Channel (server worker num[20]):
+2022-02-14 09:24:42,827 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-14 09:24:42,827 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-14 09:26:03,784 ==================== TRACER ======================
+2022-02-14 09:26:03,786 Channel (server worker num[20]):
+2022-02-14 09:26:03,789 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-14 09:26:03,789 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 16:56:51,916 ==================== TRACER ======================
+2022-02-16 16:56:51,917 Channel (server worker num[20]):
+2022-02-16 16:56:51,918 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 16:56:51,918 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:05:23,231 ==================== TRACER ======================
+2022-02-16 17:05:23,232 Channel (server worker num[20]):
+2022-02-16 17:05:23,233 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:05:23,234 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:05:53,260 ==================== TRACER ======================
+2022-02-16 17:05:53,261 Op(ppyolo_mbv3):
+2022-02-16 17:05:53,261 in[10785.568 ms]
+2022-02-16 17:05:53,261 prep[48.4655 ms]
+2022-02-16 17:05:53,261 midp[1298.1825 ms]
+2022-02-16 17:05:53,261 postp[9.903 ms]
+2022-02-16 17:05:53,261 out[0.8555 ms]
+2022-02-16 17:05:53,261 idle[0.888285114985624]
+2022-02-16 17:05:53,261 DAGExecutor:
+2022-02-16 17:05:53,261 Query count[2]
+2022-02-16 17:05:53,262 QPS[0.06666666666666667 q/s]
+2022-02-16 17:05:53,262 Succ[1.0]
+2022-02-16 17:05:53,262 Error req[]
+2022-02-16 17:05:53,262 Latency:
+2022-02-16 17:05:53,262 ave[1365.0625 ms]
+2022-02-16 17:05:53,262 .50[2649.873 ms]
+2022-02-16 17:05:53,262 .60[2649.873 ms]
+2022-02-16 17:05:53,262 .70[2649.873 ms]
+2022-02-16 17:05:53,262 .80[2649.873 ms]
+2022-02-16 17:05:53,262 .90[2649.873 ms]
+2022-02-16 17:05:53,262 .95[2649.873 ms]
+2022-02-16 17:05:53,262 .99[2649.873 ms]
+2022-02-16 17:05:53,262 Channel (server worker num[20]):
+2022-02-16 17:05:53,262 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:05:53,263 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:06:23,264 ==================== TRACER ======================
+2022-02-16 17:06:23,265 Channel (server worker num[20]):
+2022-02-16 17:06:23,265 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:06:23,265 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:06:53,294 ==================== TRACER ======================
+2022-02-16 17:06:53,294 Channel (server worker num[20]):
+2022-02-16 17:06:53,295 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:06:53,295 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:07:23,303 ==================== TRACER ======================
+2022-02-16 17:07:23,304 Op(ppyolo_mbv3):
+2022-02-16 17:07:23,304 in[69773.431 ms]
+2022-02-16 17:07:23,304 prep[41.191 ms]
+2022-02-16 17:07:23,304 midp[11.872 ms]
+2022-02-16 17:07:23,305 postp[9.658 ms]
+2022-02-16 17:07:23,305 out[0.679 ms]
+2022-02-16 17:07:23,305 idle[0.9991018922379225]
+2022-02-16 17:07:23,305 DAGExecutor:
+2022-02-16 17:07:23,305 Query count[1]
+2022-02-16 17:07:23,305 QPS[0.03333333333333333 q/s]
+2022-02-16 17:07:23,305 Succ[1.0]
+2022-02-16 17:07:23,305 Error req[]
+2022-02-16 17:07:23,305 Latency:
+2022-02-16 17:07:23,305 ave[69.518 ms]
+2022-02-16 17:07:23,305 .50[69.518 ms]
+2022-02-16 17:07:23,305 .60[69.518 ms]
+2022-02-16 17:07:23,305 .70[69.518 ms]
+2022-02-16 17:07:23,305 .80[69.518 ms]
+2022-02-16 17:07:23,305 .90[69.518 ms]
+2022-02-16 17:07:23,305 .95[69.518 ms]
+2022-02-16 17:07:23,305 .99[69.518 ms]
+2022-02-16 17:07:23,305 Channel (server worker num[20]):
+2022-02-16 17:07:23,306 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:07:23,306 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:09:22,123 ==================== TRACER ======================
+2022-02-16 17:09:22,124 Channel (server worker num[20]):
+2022-02-16 17:09:22,125 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:09:22,126 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:09:52,156 ==================== TRACER ======================
+2022-02-16 17:09:52,157 Channel (server worker num[20]):
+2022-02-16 17:09:52,157 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:09:52,157 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:10:22,184 ==================== TRACER ======================
+2022-02-16 17:10:22,185 Op(ppyolo_mbv3):
+2022-02-16 17:10:22,185 in[35440.544 ms]
+2022-02-16 17:10:22,185 prep[42.793 ms]
+2022-02-16 17:10:22,185 midp[2504.427 ms]
+2022-02-16 17:10:22,185 postp[10.631 ms]
+2022-02-16 17:10:22,185 out[0.959 ms]
+2022-02-16 17:10:22,186 idle[0.9326869872577308]
+2022-02-16 17:10:22,186 DAGExecutor:
+2022-02-16 17:10:22,186 Query count[1]
+2022-02-16 17:10:22,186 QPS[0.03333333333333333 q/s]
+2022-02-16 17:10:22,186 Succ[1.0]
+2022-02-16 17:10:22,186 Error req[]
+2022-02-16 17:10:22,186 Latency:
+2022-02-16 17:10:22,186 ave[2566.559 ms]
+2022-02-16 17:10:22,186 .50[2566.559 ms]
+2022-02-16 17:10:22,186 .60[2566.559 ms]
+2022-02-16 17:10:22,186 .70[2566.559 ms]
+2022-02-16 17:10:22,186 .80[2566.559 ms]
+2022-02-16 17:10:22,186 .90[2566.559 ms]
+2022-02-16 17:10:22,186 .95[2566.559 ms]
+2022-02-16 17:10:22,186 .99[2566.559 ms]
+2022-02-16 17:10:22,186 Channel (server worker num[20]):
+2022-02-16 17:10:22,187 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:10:22,187 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:10:52,217 ==================== TRACER ======================
+2022-02-16 17:10:52,218 Channel (server worker num[20]):
+2022-02-16 17:10:52,219 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:10:52,219 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:11:22,249 ==================== TRACER ======================
+2022-02-16 17:11:22,250 Channel (server worker num[20]):
+2022-02-16 17:11:22,250 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:11:22,250 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:11:52,264 ==================== TRACER ======================
+2022-02-16 17:11:52,265 Channel (server worker num[20]):
+2022-02-16 17:11:52,265 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:11:52,265 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:12:22,292 ==================== TRACER ======================
+2022-02-16 17:12:22,292 Channel (server worker num[20]):
+2022-02-16 17:12:22,293 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:12:22,293 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:12:52,319 ==================== TRACER ======================
+2022-02-16 17:12:52,320 Channel (server worker num[20]):
+2022-02-16 17:12:52,321 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:12:52,321 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:13:22,323 ==================== TRACER ======================
+2022-02-16 17:13:22,324 Channel (server worker num[20]):
+2022-02-16 17:13:22,324 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:13:22,324 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:13:52,354 ==================== TRACER ======================
+2022-02-16 17:13:52,355 Channel (server worker num[20]):
+2022-02-16 17:13:52,355 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:13:52,356 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:14:22,373 ==================== TRACER ======================
+2022-02-16 17:14:22,374 Channel (server worker num[20]):
+2022-02-16 17:14:22,374 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:14:22,374 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:14:52,404 ==================== TRACER ======================
+2022-02-16 17:14:52,405 Op(ppyolo_mbv3):
+2022-02-16 17:14:52,405 in[289118.996 ms]
+2022-02-16 17:14:52,405 prep[46.16 ms]
+2022-02-16 17:14:52,405 midp[11.854 ms]
+2022-02-16 17:14:52,405 postp[9.602 ms]
+2022-02-16 17:14:52,405 out[0.799 ms]
+2022-02-16 17:14:52,405 idle[0.9997661862258589]
+2022-02-16 17:14:52,405 DAGExecutor:
+2022-02-16 17:14:52,405 Query count[1]
+2022-02-16 17:14:52,406 QPS[0.03333333333333333 q/s]
+2022-02-16 17:14:52,406 Succ[1.0]
+2022-02-16 17:14:52,406 Error req[]
+2022-02-16 17:14:52,406 Latency:
+2022-02-16 17:14:52,406 ave[76.35 ms]
+2022-02-16 17:14:52,406 .50[76.35 ms]
+2022-02-16 17:14:52,406 .60[76.35 ms]
+2022-02-16 17:14:52,406 .70[76.35 ms]
+2022-02-16 17:14:52,406 .80[76.35 ms]
+2022-02-16 17:14:52,406 .90[76.35 ms]
+2022-02-16 17:14:52,406 .95[76.35 ms]
+2022-02-16 17:14:52,406 .99[76.35 ms]
+2022-02-16 17:14:52,406 Channel (server worker num[20]):
+2022-02-16 17:14:52,406 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:14:52,407 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:15:22,428 ==================== TRACER ======================
+2022-02-16 17:15:22,429 Op(ppyolo_mbv3):
+2022-02-16 17:15:22,429 in[14226.853 ms]
+2022-02-16 17:15:22,430 prep[40.298 ms]
+2022-02-16 17:15:22,430 midp[11.486 ms]
+2022-02-16 17:15:22,430 postp[9.523 ms]
+2022-02-16 17:15:22,430 out[0.712 ms]
+2022-02-16 17:15:22,430 idle[0.9957094583813194]
+2022-02-16 17:15:22,430 DAGExecutor:
+2022-02-16 17:15:22,430 Query count[1]
+2022-02-16 17:15:22,430 QPS[0.03333333333333333 q/s]
+2022-02-16 17:15:22,430 Succ[1.0]
+2022-02-16 17:15:22,430 Error req[]
+2022-02-16 17:15:22,430 Latency:
+2022-02-16 17:15:22,430 ave[68.343 ms]
+2022-02-16 17:15:22,430 .50[68.343 ms]
+2022-02-16 17:15:22,430 .60[68.343 ms]
+2022-02-16 17:15:22,430 .70[68.343 ms]
+2022-02-16 17:15:22,430 .80[68.343 ms]
+2022-02-16 17:15:22,430 .90[68.343 ms]
+2022-02-16 17:15:22,430 .95[68.343 ms]
+2022-02-16 17:15:22,430 .99[68.343 ms]
+2022-02-16 17:15:22,430 Channel (server worker num[20]):
+2022-02-16 17:15:22,431 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:15:22,431 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:15:52,452 ==================== TRACER ======================
+2022-02-16 17:15:52,452 Channel (server worker num[20]):
+2022-02-16 17:15:52,453 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:15:52,453 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:16:22,472 ==================== TRACER ======================
+2022-02-16 17:16:22,473 Channel (server worker num[20]):
+2022-02-16 17:16:22,473 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:16:22,474 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:16:52,504 ==================== TRACER ======================
+2022-02-16 17:16:52,504 Channel (server worker num[20]):
+2022-02-16 17:16:52,505 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:16:52,505 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:17:22,529 ==================== TRACER ======================
+2022-02-16 17:17:22,530 Channel (server worker num[20]):
+2022-02-16 17:17:22,530 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:17:22,530 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:17:52,560 ==================== TRACER ======================
+2022-02-16 17:17:52,561 Channel (server worker num[20]):
+2022-02-16 17:17:52,562 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:17:52,562 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:18:22,592 ==================== TRACER ======================
+2022-02-16 17:18:22,593 Channel (server worker num[20]):
+2022-02-16 17:18:22,593 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:18:22,593 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:18:52,624 ==================== TRACER ======================
+2022-02-16 17:18:52,624 Channel (server worker num[20]):
+2022-02-16 17:18:52,625 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:18:52,625 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:19:22,655 ==================== TRACER ======================
+2022-02-16 17:19:22,656 Channel (server worker num[20]):
+2022-02-16 17:19:22,656 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:19:22,656 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:19:52,679 ==================== TRACER ======================
+2022-02-16 17:19:52,679 Channel (server worker num[20]):
+2022-02-16 17:19:52,680 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:19:52,680 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:20:22,710 ==================== TRACER ======================
+2022-02-16 17:20:22,711 Channel (server worker num[20]):
+2022-02-16 17:20:22,711 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:20:22,711 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:20:52,740 ==================== TRACER ======================
+2022-02-16 17:20:52,741 Channel (server worker num[20]):
+2022-02-16 17:20:52,741 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:20:52,741 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:21:22,771 ==================== TRACER ======================
+2022-02-16 17:21:22,772 Channel (server worker num[20]):
+2022-02-16 17:21:22,772 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:21:22,773 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:21:52,795 ==================== TRACER ======================
+2022-02-16 17:21:52,795 Op(ppyolo_mbv3):
+2022-02-16 17:21:52,796 in[378619.8 ms]
+2022-02-16 17:21:52,796 prep[40.567 ms]
+2022-02-16 17:21:52,796 midp[11.664 ms]
+2022-02-16 17:21:52,796 postp[9.171 ms]
+2022-02-16 17:21:52,796 out[0.723 ms]
+2022-02-16 17:21:52,796 idle[0.9998378533646675]
+2022-02-16 17:21:52,796 DAGExecutor:
+2022-02-16 17:21:52,796 Query count[1]
+2022-02-16 17:21:52,796 QPS[0.03333333333333333 q/s]
+2022-02-16 17:21:52,796 Succ[1.0]
+2022-02-16 17:21:52,796 Error req[]
+2022-02-16 17:21:52,796 Latency:
+2022-02-16 17:21:52,796 ave[67.215 ms]
+2022-02-16 17:21:52,796 .50[67.215 ms]
+2022-02-16 17:21:52,796 .60[67.215 ms]
+2022-02-16 17:21:52,796 .70[67.215 ms]
+2022-02-16 17:21:52,796 .80[67.215 ms]
+2022-02-16 17:21:52,796 .90[67.215 ms]
+2022-02-16 17:21:52,796 .95[67.215 ms]
+2022-02-16 17:21:52,796 .99[67.215 ms]
+2022-02-16 17:21:52,796 Channel (server worker num[20]):
+2022-02-16 17:21:52,797 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:21:52,797 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:22:22,827 ==================== TRACER ======================
+2022-02-16 17:22:22,828 Op(ppyolo_mbv3):
+2022-02-16 17:22:22,828 in[28828.316 ms]
+2022-02-16 17:22:22,828 prep[40.625 ms]
+2022-02-16 17:22:22,828 midp[11.704 ms]
+2022-02-16 17:22:22,828 postp[9.511 ms]
+2022-02-16 17:22:22,828 out[0.838 ms]
+2022-02-16 17:22:22,828 idle[0.9978595405890154]
+2022-02-16 17:22:22,828 DAGExecutor:
+2022-02-16 17:22:22,828 Query count[1]
+2022-02-16 17:22:22,828 QPS[0.03333333333333333 q/s]
+2022-02-16 17:22:22,828 Succ[1.0]
+2022-02-16 17:22:22,828 Error req[]
+2022-02-16 17:22:22,828 Latency:
+2022-02-16 17:22:22,829 ave[69.462 ms]
+2022-02-16 17:22:22,829 .50[69.462 ms]
+2022-02-16 17:22:22,829 .60[69.462 ms]
+2022-02-16 17:22:22,829 .70[69.462 ms]
+2022-02-16 17:22:22,829 .80[69.462 ms]
+2022-02-16 17:22:22,829 .90[69.462 ms]
+2022-02-16 17:22:22,829 .95[69.462 ms]
+2022-02-16 17:22:22,829 .99[69.462 ms]
+2022-02-16 17:22:22,829 Channel (server worker num[20]):
+2022-02-16 17:22:22,829 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:22:22,829 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:22:52,860 ==================== TRACER ======================
+2022-02-16 17:22:52,860 Op(ppyolo_mbv3):
+2022-02-16 17:22:52,860 in[53789.073 ms]
+2022-02-16 17:22:52,861 prep[41.018 ms]
+2022-02-16 17:22:52,861 midp[11.874 ms]
+2022-02-16 17:22:52,861 postp[9.638 ms]
+2022-02-16 17:22:52,861 out[0.767 ms]
+2022-02-16 17:22:52,861 idle[0.9988388626164456]
+2022-02-16 17:22:52,861 DAGExecutor:
+2022-02-16 17:22:52,861 Query count[1]
+2022-02-16 17:22:52,861 QPS[0.03333333333333333 q/s]
+2022-02-16 17:22:52,861 Succ[1.0]
+2022-02-16 17:22:52,861 Error req[]
+2022-02-16 17:22:52,861 Latency:
+2022-02-16 17:22:52,861 ave[69.775 ms]
+2022-02-16 17:22:52,861 .50[69.775 ms]
+2022-02-16 17:22:52,861 .60[69.775 ms]
+2022-02-16 17:22:52,861 .70[69.775 ms]
+2022-02-16 17:22:52,861 .80[69.775 ms]
+2022-02-16 17:22:52,861 .90[69.775 ms]
+2022-02-16 17:22:52,861 .95[69.775 ms]
+2022-02-16 17:22:52,861 .99[69.775 ms]
+2022-02-16 17:22:52,861 Channel (server worker num[20]):
+2022-02-16 17:22:52,862 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:22:52,862 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:23:22,869 ==================== TRACER ======================
+2022-02-16 17:23:22,870 Op(ppyolo_mbv3):
+2022-02-16 17:23:22,870 in[16002.513 ms]
+2022-02-16 17:23:22,870 prep[40.564 ms]
+2022-02-16 17:23:22,870 midp[11.608 ms]
+2022-02-16 17:23:22,870 postp[9.549 ms]
+2022-02-16 17:23:22,870 out[0.796 ms]
+2022-02-16 17:23:22,870 idle[0.9961580526149033]
+2022-02-16 17:23:22,870 DAGExecutor:
+2022-02-16 17:23:22,870 Query count[1]
+2022-02-16 17:23:22,871 QPS[0.03333333333333333 q/s]
+2022-02-16 17:23:22,871 Succ[1.0]
+2022-02-16 17:23:22,871 Error req[]
+2022-02-16 17:23:22,871 Latency:
+2022-02-16 17:23:22,871 ave[68.978 ms]
+2022-02-16 17:23:22,871 .50[68.978 ms]
+2022-02-16 17:23:22,871 .60[68.978 ms]
+2022-02-16 17:23:22,871 .70[68.978 ms]
+2022-02-16 17:23:22,871 .80[68.978 ms]
+2022-02-16 17:23:22,871 .90[68.978 ms]
+2022-02-16 17:23:22,871 .95[68.978 ms]
+2022-02-16 17:23:22,871 .99[68.978 ms]
+2022-02-16 17:23:22,871 Channel (server worker num[20]):
+2022-02-16 17:23:22,871 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:23:22,872 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:23:52,895 ==================== TRACER ======================
+2022-02-16 17:23:52,895 Op(ppyolo_mbv3):
+2022-02-16 17:23:52,896 in[26647.47 ms]
+2022-02-16 17:23:52,896 prep[41.165 ms]
+2022-02-16 17:23:52,896 midp[11.559 ms]
+2022-02-16 17:23:52,896 postp[9.357 ms]
+2022-02-16 17:23:52,896 out[0.736 ms]
+2022-02-16 17:23:52,896 idle[0.9976757643974399]
+2022-02-16 17:23:52,896 DAGExecutor:
+2022-02-16 17:23:52,896 Query count[1]
+2022-02-16 17:23:52,896 QPS[0.03333333333333333 q/s]
+2022-02-16 17:23:52,896 Succ[1.0]
+2022-02-16 17:23:52,896 Error req[]
+2022-02-16 17:23:52,896 Latency:
+2022-02-16 17:23:52,896 ave[69.043 ms]
+2022-02-16 17:23:52,896 .50[69.043 ms]
+2022-02-16 17:23:52,896 .60[69.043 ms]
+2022-02-16 17:23:52,896 .70[69.043 ms]
+2022-02-16 17:23:52,896 .80[69.043 ms]
+2022-02-16 17:23:52,896 .90[69.043 ms]
+2022-02-16 17:23:52,896 .95[69.043 ms]
+2022-02-16 17:23:52,896 .99[69.043 ms]
+2022-02-16 17:23:52,896 Channel (server worker num[20]):
+2022-02-16 17:23:52,897 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:23:52,897 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:24:22,925 ==================== TRACER ======================
+2022-02-16 17:24:22,926 Op(ppyolo_mbv3):
+2022-02-16 17:24:22,926 in[42124.841 ms]
+2022-02-16 17:24:22,926 prep[42.925 ms]
+2022-02-16 17:24:22,926 midp[12.13 ms]
+2022-02-16 17:24:22,926 postp[9.767 ms]
+2022-02-16 17:24:22,926 out[0.895 ms]
+2022-02-16 17:24:22,926 idle[0.9984635898866282]
+2022-02-16 17:24:22,926 DAGExecutor:
+2022-02-16 17:24:22,926 Query count[1]
+2022-02-16 17:24:22,926 QPS[0.03333333333333333 q/s]
+2022-02-16 17:24:22,926 Succ[1.0]
+2022-02-16 17:24:22,926 Error req[]
+2022-02-16 17:24:22,926 Latency:
+2022-02-16 17:24:22,927 ave[73.248 ms]
+2022-02-16 17:24:22,927 .50[73.248 ms]
+2022-02-16 17:24:22,927 .60[73.248 ms]
+2022-02-16 17:24:22,927 .70[73.248 ms]
+2022-02-16 17:24:22,927 .80[73.248 ms]
+2022-02-16 17:24:22,927 .90[73.248 ms]
+2022-02-16 17:24:22,927 .95[73.248 ms]
+2022-02-16 17:24:22,927 .99[73.248 ms]
+2022-02-16 17:24:22,927 Channel (server worker num[20]):
+2022-02-16 17:24:22,927 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:24:22,927 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:24:52,956 ==================== TRACER ======================
+2022-02-16 17:24:52,957 Op(ppyolo_mbv3):
+2022-02-16 17:24:52,957 in[15533.448 ms]
+2022-02-16 17:24:52,957 prep[42.1205 ms]
+2022-02-16 17:24:52,957 midp[11.651 ms]
+2022-02-16 17:24:52,957 postp[9.449 ms]
+2022-02-16 17:24:52,957 out[0.808 ms]
+2022-02-16 17:24:52,957 idle[0.9959467481807073]
+2022-02-16 17:24:52,957 DAGExecutor:
+2022-02-16 17:24:52,957 Query count[2]
+2022-02-16 17:24:52,957 QPS[0.06666666666666667 q/s]
+2022-02-16 17:24:52,957 Succ[1.0]
+2022-02-16 17:24:52,957 Error req[]
+2022-02-16 17:24:52,957 Latency:
+2022-02-16 17:24:52,957 ave[69.7875 ms]
+2022-02-16 17:24:52,957 .50[69.917 ms]
+2022-02-16 17:24:52,957 .60[69.917 ms]
+2022-02-16 17:24:52,958 .70[69.917 ms]
+2022-02-16 17:24:52,958 .80[69.917 ms]
+2022-02-16 17:24:52,958 .90[69.917 ms]
+2022-02-16 17:24:52,958 .95[69.917 ms]
+2022-02-16 17:24:52,958 .99[69.917 ms]
+2022-02-16 17:24:52,958 Channel (server worker num[20]):
+2022-02-16 17:24:52,958 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:24:52,958 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:25:22,988 ==================== TRACER ======================
+2022-02-16 17:25:22,989 Op(ppyolo_mbv3):
+2022-02-16 17:25:22,989 in[14139.5015 ms]
+2022-02-16 17:25:22,990 prep[40.245 ms]
+2022-02-16 17:25:22,990 midp[11.6935 ms]
+2022-02-16 17:25:22,990 postp[9.9075 ms]
+2022-02-16 17:25:22,990 out[0.7675 ms]
+2022-02-16 17:25:22,990 idle[0.9956452964928111]
+2022-02-16 17:25:22,990 DAGExecutor:
+2022-02-16 17:25:22,990 Query count[2]
+2022-02-16 17:25:22,990 QPS[0.06666666666666667 q/s]
+2022-02-16 17:25:22,990 Succ[1.0]
+2022-02-16 17:25:22,990 Error req[]
+2022-02-16 17:25:22,990 Latency:
+2022-02-16 17:25:22,990 ave[68.187 ms]
+2022-02-16 17:25:22,990 .50[68.877 ms]
+2022-02-16 17:25:22,990 .60[68.877 ms]
+2022-02-16 17:25:22,990 .70[68.877 ms]
+2022-02-16 17:25:22,990 .80[68.877 ms]
+2022-02-16 17:25:22,990 .90[68.877 ms]
+2022-02-16 17:25:22,990 .95[68.877 ms]
+2022-02-16 17:25:22,990 .99[68.877 ms]
+2022-02-16 17:25:22,990 Channel (server worker num[20]):
+2022-02-16 17:25:22,991 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:25:22,991 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:25:53,021 ==================== TRACER ======================
+2022-02-16 17:25:53,022 Channel (server worker num[20]):
+2022-02-16 17:25:53,022 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:25:53,022 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:26:23,052 ==================== TRACER ======================
+2022-02-16 17:26:23,053 Channel (server worker num[20]):
+2022-02-16 17:26:23,054 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:26:23,054 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:26:53,066 ==================== TRACER ======================
+2022-02-16 17:26:53,067 Channel (server worker num[20]):
+2022-02-16 17:26:53,068 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:26:53,068 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:27:23,098 ==================== TRACER ======================
+2022-02-16 17:27:23,099 Channel (server worker num[20]):
+2022-02-16 17:27:23,099 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:27:23,099 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:27:53,128 ==================== TRACER ======================
+2022-02-16 17:27:53,129 Channel (server worker num[20]):
+2022-02-16 17:27:53,129 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:27:53,129 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:28:23,147 ==================== TRACER ======================
+2022-02-16 17:28:23,147 Channel (server worker num[20]):
+2022-02-16 17:28:23,148 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:28:23,148 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:28:53,154 ==================== TRACER ======================
+2022-02-16 17:28:53,155 Channel (server worker num[20]):
+2022-02-16 17:28:53,156 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:28:53,156 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:29:23,157 ==================== TRACER ======================
+2022-02-16 17:29:23,157 Channel (server worker num[20]):
+2022-02-16 17:29:23,158 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:29:23,158 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:29:53,180 ==================== TRACER ======================
+2022-02-16 17:29:53,180 Channel (server worker num[20]):
+2022-02-16 17:29:53,181 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:29:53,181 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:30:23,211 ==================== TRACER ======================
+2022-02-16 17:30:23,212 Channel (server worker num[20]):
+2022-02-16 17:30:23,212 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:30:23,213 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:30:53,221 ==================== TRACER ======================
+2022-02-16 17:30:53,222 Channel (server worker num[20]):
+2022-02-16 17:30:53,222 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:30:53,222 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:31:23,252 ==================== TRACER ======================
+2022-02-16 17:31:23,253 Channel (server worker num[20]):
+2022-02-16 17:31:23,253 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:31:23,254 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:31:53,284 ==================== TRACER ======================
+2022-02-16 17:31:53,285 Channel (server worker num[20]):
+2022-02-16 17:31:53,285 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:31:53,285 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:32:23,291 ==================== TRACER ======================
+2022-02-16 17:32:23,291 Channel (server worker num[20]):
+2022-02-16 17:32:23,292 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:32:23,292 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:32:53,322 ==================== TRACER ======================
+2022-02-16 17:32:53,323 Channel (server worker num[20]):
+2022-02-16 17:32:53,327 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:32:53,327 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:33:23,341 ==================== TRACER ======================
+2022-02-16 17:33:23,342 Channel (server worker num[20]):
+2022-02-16 17:33:23,342 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:33:23,342 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:33:53,361 ==================== TRACER ======================
+2022-02-16 17:33:53,361 Channel (server worker num[20]):
+2022-02-16 17:33:53,362 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:33:53,362 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:34:23,365 ==================== TRACER ======================
+2022-02-16 17:34:23,365 Channel (server worker num[20]):
+2022-02-16 17:34:23,366 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:34:23,366 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
+2022-02-16 17:34:53,368 ==================== TRACER ======================
+2022-02-16 17:34:53,369 Channel (server worker num[20]):
+2022-02-16 17:34:53,369 chl0(In: ['@DAGExecutor'], Out: ['ppyolo_mbv3']) size[0/0]
+2022-02-16 17:34:53,370 chl1(In: ['ppyolo_mbv3'], Out: ['@DAGExecutor']) size[0/0]
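The tracer dump above is the serving pipeline's self-monitoring output, emitted once per `interval_s` (30 s, per the `config.yml` below). Each `Op(ppyolo_mbv3)` block breaks the interval's requests into `in` (time waiting in the input channel), `prep` (preprocess), `midp` (model inference), `postp` (postprocess) and `out` (output channel), plus an `idle` ratio; each `DAGExecutor` block reports query count, QPS, success rate and latency percentiles. The pattern visible here is typical: the first request after a (re)start pays warm-up cost (`midp` between roughly 0.8 s and 2.5 s), after which inference settles around 12 ms. Below is a minimal sketch for summarizing such a log; the regexes and the `PipelineServingLogs/pipeline.tracer` path are assumptions based on the format shown here, not an official parser.

```python
import re

# Pull per-interval inference times (midp) and success rates out of a
# Pipeline Serving tracer log with the format committed above.
MIDP_RE = re.compile(r"midp\[([\d.]+) ms\]")
SUCC_RE = re.compile(r"Succ\[([\d.]+)\]")

def summarize_tracer(path):
    midp, succ = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if m := MIDP_RE.search(line):
                midp.append(float(m.group(1)))
            if m := SUCC_RE.search(line):
                succ.append(float(m.group(1)))
    if midp:
        print(f"midp: n={len(midp)}, avg={sum(midp)/len(midp):.1f} ms, "
              f"max={max(midp):.1f} ms")
    if succ:
        print(f"success rate: avg={sum(succ)/len(succ):.2f}")

summarize_tracer("PipelineServingLogs/pipeline.tracer")
```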
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/ProcessInfo.json b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/ProcessInfo.json
new file mode 100644
index 000000000..300ca0031
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/ProcessInfo.json
@@ -0,0 +1 @@
+[{"pid": 8611, "port": [9999, 2009], "model": "pipline", "start_time": 1645002562.0576003}]
\ No newline at end of file
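`ProcessInfo.json` is a one-line record of the serving process that was launched for this model: its PID, the RPC and HTTP ports (9999 and 2009, matching `rpc_port`/`http_port` in the `config.yml` below; the `pipline` spelling is in the recorded data itself), and a Unix start timestamp. A hedged sketch of reading the record and probing whether that process is still alive — the `os.kill(pid, 0)` liveness check is an illustration, not something this repo ships:

```python
import json
import os

# Read the serving-process record committed next to the model config.
with open("ProcessInfo.json", encoding="utf-8") as f:
    records = json.load(f)

for rec in records:
    pid = rec["pid"]
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends no signal
        state = "running"
    except ProcessLookupError:
        state = "not running"
    except PermissionError:
        state = "running (owned by another user)"
    print(f"pid={pid} ports={rec['port']} model={rec['model']}: {state}")
```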
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/picodet_postprocess.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/picodet_postprocess.cpython-37.pyc
new file mode 100644
index 000000000..668e81847
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/picodet_postprocess.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/preprocess.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/preprocess.cpython-37.pyc
new file mode 100644
index 000000000..d6993735d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/__pycache__/preprocess.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/config.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/config.yml
new file mode 100644
index 000000000..2eea1e1d0
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/config.yml
@@ -0,0 +1,25 @@
+dag:
+ is_thread_op: false
+ tracer:
+ interval_s: 30
+http_port: 2009
+op:
+ ppyolo_mbv3:
+ concurrency: 1
+
+ local_service_conf:
+ client_type: local_predictor
+ device_type: 2
+ devices: '0'
+ fetch_list:
+ - save_infer_model/scale_0.tmp_1
+ - save_infer_model/scale_1.tmp_1
+ - save_infer_model/scale_2.tmp_1
+ - save_infer_model/scale_3.tmp_1
+ - save_infer_model/scale_4.tmp_1
+ - save_infer_model/scale_5.tmp_1
+ - save_infer_model/scale_6.tmp_1
+ - save_infer_model/scale_7.tmp_1
+ model_config: serving_server/
+rpc_port: 9999
+worker_num: 20
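
Note: this Pipeline Serving config wires a single op named `ppyolo_mbv3`: HTTP on port 2009, RPC on 9999, 20 worker threads, and a `fetch_list` of eight output tensors (the service code below splits them four and four into score heads and box heads, one pair per stride in `fpn_stride` of `infer_cfg.yml`). The HTTP endpoint is derived from the op name as `/<op_name>/prediction`. A minimal smoke test, assuming the service is already running and GNU `base64 -w0` is available:

```
curl -s http://127.0.0.1:2009/ppyolo_mbv3/prediction \
     -d "{\"key\": [\"image\"], \"value\": [\"$(base64 -w0 test.jpg)\"]}"
```
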
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/infer_cfg.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/infer_cfg.yml
new file mode 100644
index 000000000..e29f9298f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/infer_cfg.yml
@@ -0,0 +1,118 @@
+mode: fluid
+draw_threshold: 0.5
+metric: COCO
+use_dynamic_shape: false
+arch: PicoDet
+min_subgraph_size: 3
+Preprocess:
+- interp: 2
+ keep_ratio: false
+ target_size:
+ - 640
+ - 640
+ type: Resize
+- is_scale: true
+ mean:
+ - 0.485
+ - 0.456
+ - 0.406
+ std:
+ - 0.229
+ - 0.224
+ - 0.225
+ type: NormalizeImage
+- type: Permute
+- stride: 32
+ type: PadStride
+label_list:
+- person
+- bicycle
+- car
+- motorcycle
+- airplane
+- bus
+- train
+- truck
+- boat
+- traffic light
+- fire hydrant
+- stop sign
+- parking meter
+- bench
+- bird
+- cat
+- dog
+- horse
+- sheep
+- cow
+- elephant
+- bear
+- zebra
+- giraffe
+- backpack
+- umbrella
+- handbag
+- tie
+- suitcase
+- frisbee
+- skis
+- snowboard
+- sports ball
+- kite
+- baseball bat
+- baseball glove
+- skateboard
+- surfboard
+- tennis racket
+- bottle
+- wine glass
+- cup
+- fork
+- knife
+- spoon
+- bowl
+- banana
+- apple
+- sandwich
+- orange
+- broccoli
+- carrot
+- hot dog
+- pizza
+- donut
+- cake
+- chair
+- couch
+- potted plant
+- bed
+- dining table
+- toilet
+- tv
+- laptop
+- mouse
+- remote
+- keyboard
+- cell phone
+- microwave
+- oven
+- toaster
+- sink
+- refrigerator
+- book
+- clock
+- vase
+- scissors
+- teddy bear
+- hair drier
+- toothbrush
+NMS:
+ keep_top_k: 100
+ name: MultiClassNMS
+ nms_threshold: 0.5
+ nms_top_k: 1000
+ score_threshold: 0.3
+fpn_stride:
+- 8
+- 16
+- 32
+- 64
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/picodet_postprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/picodet_postprocess.py
new file mode 100644
index 000000000..7b8159f4e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/picodet_postprocess.py
@@ -0,0 +1,228 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+from scipy.special import softmax
+
+
+def hard_nms(box_scores, iou_threshold, top_k=-1, candidate_size=200):
+ """
+ Args:
+ box_scores (N, 5): boxes in corner-form and probabilities.
+ iou_threshold: intersection over union threshold.
+ top_k: keep top_k results. If k <= 0, keep all the results.
+ candidate_size: only consider the candidates with the highest scores.
+ Returns:
+ picked: a list of indexes of the kept boxes
+ """
+ scores = box_scores[:, -1]
+ boxes = box_scores[:, :-1]
+ picked = []
+ indexes = np.argsort(scores)
+ indexes = indexes[-candidate_size:]
+ while len(indexes) > 0:
+ current = indexes[-1]
+ picked.append(current)
+ if 0 < top_k == len(picked) or len(indexes) == 1:
+ break
+ current_box = boxes[current, :]
+ indexes = indexes[:-1]
+ rest_boxes = boxes[indexes, :]
+ iou = iou_of(
+ rest_boxes,
+ np.expand_dims(
+ current_box, axis=0), )
+ indexes = indexes[iou <= iou_threshold]
+
+ return box_scores[picked, :]
+
+
+def iou_of(boxes0, boxes1, eps=1e-5):
+ """Return intersection-over-union (Jaccard index) of boxes.
+ Args:
+ boxes0 (N, 4): ground truth boxes.
+ boxes1 (N or 1, 4): predicted boxes.
+ eps: a small number to avoid 0 as denominator.
+ Returns:
+ iou (N): IoU values.
+ """
+ overlap_left_top = np.maximum(boxes0[..., :2], boxes1[..., :2])
+ overlap_right_bottom = np.minimum(boxes0[..., 2:], boxes1[..., 2:])
+
+ overlap_area = area_of(overlap_left_top, overlap_right_bottom)
+ area0 = area_of(boxes0[..., :2], boxes0[..., 2:])
+ area1 = area_of(boxes1[..., :2], boxes1[..., 2:])
+ return overlap_area / (area0 + area1 - overlap_area + eps)
+
+
+def area_of(left_top, right_bottom):
+ """Compute the areas of rectangles given two corners.
+ Args:
+ left_top (N, 2): left top corner.
+ right_bottom (N, 2): right bottom corner.
+ Returns:
+ area (N): return the area.
+ """
+ hw = np.clip(right_bottom - left_top, 0.0, None)
+ return hw[..., 0] * hw[..., 1]
+
+
+class PicoDetPostProcess(object):
+ """
+ Args:
+        input_shape (tuple): network input image size (h, w)
+        ori_shape (np.ndarray): original image shapes, before padding
+        scale_factor (np.ndarray): per-image scale factors
+        strides (list): downsample strides of the detection heads
+        score_threshold (float): threshold to filter out low-score boxes
+ """
+
+ def __init__(self,
+ input_shape,
+ ori_shape,
+ scale_factor,
+ strides=[8, 16, 32, 64],
+ score_threshold=0.4,
+ nms_threshold=0.5,
+ nms_top_k=1000,
+ keep_top_k=100):
+ self.ori_shape = ori_shape
+ self.input_shape = input_shape
+ self.scale_factor = scale_factor
+ self.strides = strides
+ self.score_threshold = score_threshold
+ self.nms_threshold = nms_threshold
+ self.nms_top_k = nms_top_k
+ self.keep_top_k = keep_top_k
+
+ def warp_boxes(self, boxes, ori_shape):
+ """Apply transform to boxes
+ """
+ width, height = ori_shape[1], ori_shape[0]
+ n = len(boxes)
+ if n:
+ # warp points
+ xy = np.ones((n * 4, 3))
+ xy[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(
+ n * 4, 2) # x1y1, x2y2, x1y2, x2y1
+ # xy = xy @ M.T # transform
+ xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8) # rescale
+ # create new boxes
+ x = xy[:, [0, 2, 4, 6]]
+ y = xy[:, [1, 3, 5, 7]]
+ xy = np.concatenate(
+ (x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
+ # clip boxes
+ xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width)
+ xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height)
+ return xy.astype(np.float32)
+ else:
+ return boxes
+
+ def __call__(self, scores, raw_boxes):
+ batch_size = raw_boxes[0].shape[0]
+ reg_max = int(raw_boxes[0].shape[-1] / 4 - 1)
+ out_boxes_num = []
+ out_boxes_list = []
+ for batch_id in range(batch_size):
+ # generate centers
+ decode_boxes = []
+ select_scores = []
+ for stride, box_distribute, score in zip(self.strides, raw_boxes,
+ scores):
+ box_distribute = box_distribute[batch_id]
+ score = score[batch_id]
+ # centers
+ fm_h = self.input_shape[0] / stride
+ fm_w = self.input_shape[1] / stride
+ h_range = np.arange(fm_h)
+ w_range = np.arange(fm_w)
+ ww, hh = np.meshgrid(w_range, h_range)
+ ct_row = (hh.flatten() + 0.5) * stride
+ ct_col = (ww.flatten() + 0.5) * stride
+ center = np.stack((ct_col, ct_row, ct_col, ct_row), axis=1)
+
+ # box distribution to distance
+ reg_range = np.arange(reg_max + 1)
+ box_distance = box_distribute.reshape((-1, reg_max + 1))
+ box_distance = softmax(box_distance, axis=1)
+ box_distance = box_distance * np.expand_dims(reg_range, axis=0)
+ box_distance = np.sum(box_distance, axis=1).reshape((-1, 4))
+ box_distance = box_distance * stride
+
+ # top K candidate
+ topk_idx = np.argsort(score.max(axis=1))[::-1]
+ topk_idx = topk_idx[:self.nms_top_k]
+ center = center[topk_idx]
+ score = score[topk_idx]
+ box_distance = box_distance[topk_idx]
+
+ # decode box
+ decode_box = center + [-1, -1, 1, 1] * box_distance
+
+ select_scores.append(score)
+ decode_boxes.append(decode_box)
+
+ # nms
+ bboxes = np.concatenate(decode_boxes, axis=0)
+ confidences = np.concatenate(select_scores, axis=0)
+ picked_box_probs = []
+ picked_labels = []
+ for class_index in range(0, confidences.shape[1]):
+ probs = confidences[:, class_index]
+ mask = probs > self.score_threshold
+ probs = probs[mask]
+ if probs.shape[0] == 0:
+ continue
+ subset_boxes = bboxes[mask, :]
+ box_probs = np.concatenate(
+ [subset_boxes, probs.reshape(-1, 1)], axis=1)
+ box_probs = hard_nms(
+ box_probs,
+ iou_threshold=self.nms_threshold,
+ top_k=self.keep_top_k, )
+ picked_box_probs.append(box_probs)
+ picked_labels.extend([class_index] * box_probs.shape[0])
+
+ if len(picked_box_probs) == 0:
+                # keep 6 columns (label, score, x1, y1, x2, y2) so the final
+                # concatenation with non-empty batches does not fail
+                out_boxes_list.append(np.empty((0, 6)))
+ out_boxes_num.append(0)
+
+ else:
+ picked_box_probs = np.concatenate(picked_box_probs)
+
+ # resize output boxes
+ picked_box_probs[:, :4] = self.warp_boxes(
+ picked_box_probs[:, :4], self.ori_shape[batch_id])
+ im_scale = np.concatenate([
+ self.scale_factor[batch_id][::-1],
+ self.scale_factor[batch_id][::-1]
+ ])
+ picked_box_probs[:, :4] /= im_scale
+ # clas score box
+ out_boxes_list.append(
+ np.concatenate(
+ [
+ np.expand_dims(
+ np.array(picked_labels),
+ axis=-1), np.expand_dims(
+ picked_box_probs[:, 4], axis=-1),
+ picked_box_probs[:, :4]
+ ],
+ axis=1))
+ out_boxes_num.append(len(picked_labels))
+
+ out_boxes_list = np.concatenate(out_boxes_list, axis=0)
+ out_boxes_num = np.asarray(out_boxes_num).astype(np.int32)
+
+ return out_boxes_list, out_boxes_num
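
`PicoDetPostProcess` decodes PicoDet's distributional box regression (a softmax over `reg_max + 1` bins per box side, scaled by the stride) and then applies per-class `hard_nms`. A minimal sketch of calling it directly, with random tensors of the right shapes for a 640×640 input, strides [8, 16, 32, 64], `reg_max = 7` (so each box map has 4 × 8 channels) and 3 classes; all values are illustrative, not real model output:

```
import numpy as np
from picodet_postprocess import PicoDetPostProcess

num_classes, reg_max = 3, 7
strides = [8, 16, 32, 64]
rng = np.random.default_rng(0)

# one image per batch: per-stride score maps (1, H*W, C)
# and box maps (1, H*W, 4 * (reg_max + 1))
scores, boxes = [], []
for s in strides:
    hw = (640 // s) ** 2
    scores.append(rng.random((1, hw, num_classes), dtype=np.float32))
    boxes.append(rng.random((1, hw, 4 * (reg_max + 1)), dtype=np.float32))

post = PicoDetPostProcess(
    input_shape=(640, 640),
    ori_shape=np.array([[480., 640.]], dtype=np.float32),    # original H, W
    scale_factor=np.array([[640 / 480., 1.]], dtype=np.float32))
out_boxes, out_num = post(scores, boxes)
print(out_boxes.shape, out_num)  # rows are [label, score, x1, y1, x2, y2]
```
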
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/pipeline_http_client.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/pipeline_http_client.py
new file mode 100644
index 000000000..186e29590
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/pipeline_http_client.py
@@ -0,0 +1,58 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# from paddle_serving_server.pipeline import PipelineClient
+import numpy as np
+import requests
+import json
+import cv2
+import base64
+import os
+from time import time
+import ast
+import threading
+
+
+def demo(url, data, i):
+    # fire one request and report the wall-clock latency for thread i
+    begin_time = time()
+    r = requests.post(url=url, data=json.dumps(data))
+    end_time = time()
+    run_time = end_time - begin_time
+    print('thread %d time %f' % (i, run_time))
+    print(r.json())
+
+
+def cv2_to_base64(image):
+ return base64.b64encode(image).decode('utf8')
+
+url = "http://127.0.0.1:2009/ppyolo_mbv3/prediction"
+with open(os.path.join(".", "test.jpg"), 'rb') as file:
+ image_data1 = file.read()
+image = cv2_to_base64(image_data1)
+category_dict={0.0:"person",1.0:"bicycle",2.0:"motorcycle"}
+data = {"key": ["image"], "value": [image]}
+r = requests.post(url=url, data=json.dumps(data))
+print(r.json())
+results = eval(r.json()['value'][0])
+img = cv2.imread("test.jpg")
+for result in results:
+    if result["score"] > 0.5:
+        left, right, top, bottom = int(result['bbox'][0]), int(result['bbox'][2]), int(result['bbox'][1]), int(result['bbox'][3])
+        cv2.rectangle(img, (left, top), (right, bottom), (0, 0, 255), 2)
+        cv2.putText(img, str(round(result["score"], 2)), (left, top - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
+        print(category_dict[result["category_id"]])
+        cv2.putText(img, category_dict[result["category_id"]], (left, top + 20), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
+cv2.imwrite("./result.jpg", img)
+
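
The client also defines a `demo()` helper and imports `threading` without using either, which points at simple concurrent load testing. A sketch of how the helper could be driven, reusing `url` and `data` from the script above (the thread count is arbitrary):

```
# hypothetical load-test driver for the demo() helper
threads = [threading.Thread(target=demo, args=(url, data, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```
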
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/preprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/preprocess.py
new file mode 100644
index 000000000..644c8ce3f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/preprocess.py
@@ -0,0 +1,395 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import numpy as np
+
+
+def decode_image(im_file, im_info):
+ """read rgb image
+ Args:
+ im_file (str|np.ndarray): input can be image path or np.ndarray
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ if isinstance(im_file, str):
+ with open(im_file, 'rb') as f:
+ im_read = f.read()
+ data = np.frombuffer(im_read, dtype='uint8')
+ im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+ else:
+ im = im_file
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ im_info['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ return im, im_info
+
+
+class Resize(object):
+ """resize image by target_size and max_size
+ Args:
+ target_size (int): the target size of image
+ keep_ratio (bool): whether keep_ratio or not, default true
+ interp (int): method of resize
+ """
+
+ def __init__(self, target_size, keep_ratio=True, interp=cv2.INTER_LINEAR):
+ if isinstance(target_size, int):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+ self.keep_ratio = keep_ratio
+ self.interp = interp
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ assert len(self.target_size) == 2
+ assert self.target_size[0] > 0 and self.target_size[1] > 0
+ im_channel = im.shape[2]
+ im_scale_y, im_scale_x = self.generate_scale(im)
+ im = cv2.resize(
+ im,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ im_info['im_shape'] = np.array(im.shape[:2]).astype('float32')
+ im_info['scale_factor'] = np.array(
+ [im_scale_y, im_scale_x]).astype('float32')
+ return im, im_info
+
+ def generate_scale(self, im):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ Returns:
+ im_scale_x: the resize ratio of X
+ im_scale_y: the resize ratio of Y
+ """
+ origin_shape = im.shape[:2]
+ im_c = im.shape[2]
+ if self.keep_ratio:
+ im_size_min = np.min(origin_shape)
+ im_size_max = np.max(origin_shape)
+ target_size_min = np.min(self.target_size)
+ target_size_max = np.max(self.target_size)
+ im_scale = float(target_size_min) / float(im_size_min)
+ if np.round(im_scale * im_size_max) > target_size_max:
+ im_scale = float(target_size_max) / float(im_size_max)
+ im_scale_x = im_scale
+ im_scale_y = im_scale
+ else:
+ resize_h, resize_w = self.target_size
+ im_scale_y = resize_h / float(origin_shape[0])
+ im_scale_x = resize_w / float(origin_shape[1])
+ return im_scale_y, im_scale_x
+
+
+class NormalizeImage(object):
+ """normalize image
+ Args:
+ mean (list): im - mean
+ std (list): im / std
+ is_scale (bool): whether need im / 255
+ """
+
+ def __init__(self, mean, std, is_scale=True):
+ self.mean = mean
+ self.std = std
+ self.is_scale = is_scale
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ im = im.astype(np.float32, copy=False)
+ mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
+ std = np.array(self.std)[np.newaxis, np.newaxis, :]
+
+ if self.is_scale:
+ im = im / 255.0
+ im -= mean
+ im /= std
+ return im, im_info
+
+
+class Permute(object):
+ """permute image
+ Args:
+ to_bgr (bool): whether convert RGB to BGR
+ channel_first (bool): whether convert HWC to CHW
+ """
+
+ def __init__(self, ):
+ super(Permute, self).__init__()
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ im = im.transpose((2, 0, 1)).copy()
+ return im, im_info
+
+
+class PadStride(object):
+ """ padding image for model with FPN, instead PadBatch(pad_to_stride) in original config
+ Args:
+ stride (bool): model with FPN need image shape % stride == 0
+ """
+
+ def __init__(self, stride=0):
+ self.coarsest_stride = stride
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ coarsest_stride = self.coarsest_stride
+ if coarsest_stride <= 0:
+ return im, im_info
+ im_c, im_h, im_w = im.shape
+ pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride)
+ pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride)
+ padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32)
+ padding_im[:, :im_h, :im_w] = im
+ return padding_im, im_info
+
+
+class LetterBoxResize(object):
+ def __init__(self, target_size):
+ """
+ Resize image to target size, convert normalized xywh to pixel xyxy
+ format ([x_center, y_center, width, height] -> [x0, y0, x1, y1]).
+ Args:
+ target_size (int|list): image target size.
+ """
+ super(LetterBoxResize, self).__init__()
+ if isinstance(target_size, int):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+
+ def letterbox(self, img, height, width, color=(127.5, 127.5, 127.5)):
+ # letterbox: resize a rectangular image to a padded rectangular
+ shape = img.shape[:2] # [height, width]
+ ratio_h = float(height) / shape[0]
+ ratio_w = float(width) / shape[1]
+ ratio = min(ratio_h, ratio_w)
+ new_shape = (round(shape[1] * ratio),
+ round(shape[0] * ratio)) # [width, height]
+ padw = (width - new_shape[0]) / 2
+ padh = (height - new_shape[1]) / 2
+ top, bottom = round(padh - 0.1), round(padh + 0.1)
+ left, right = round(padw - 0.1), round(padw + 0.1)
+
+ img = cv2.resize(
+ img, new_shape, interpolation=cv2.INTER_AREA) # resized, no border
+ img = cv2.copyMakeBorder(
+ img, top, bottom, left, right, cv2.BORDER_CONSTANT,
+ value=color) # padded rectangular
+ return img, ratio, padw, padh
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ assert len(self.target_size) == 2
+ assert self.target_size[0] > 0 and self.target_size[1] > 0
+ height, width = self.target_size
+ h, w = im.shape[:2]
+ im, ratio, padw, padh = self.letterbox(im, height=height, width=width)
+
+ new_shape = [round(h * ratio), round(w * ratio)]
+ im_info['im_shape'] = np.array(new_shape, dtype=np.float32)
+ im_info['scale_factor'] = np.array([ratio, ratio], dtype=np.float32)
+ return im, im_info
+
+
+class WarpAffine(object):
+ """Warp affine the image
+ """
+
+ def __init__(self,
+ keep_res=False,
+ pad=31,
+ input_h=512,
+ input_w=512,
+ scale=0.4,
+ shift=0.1):
+ self.keep_res = keep_res
+ self.pad = pad
+ self.input_h = input_h
+ self.input_w = input_w
+ self.scale = scale
+ self.shift = shift
+
+ def _get_3rd_point(self, a, b):
+ assert len(
+ a) == 2, 'input of _get_3rd_point should be point with length of 2'
+ assert len(
+ b) == 2, 'input of _get_3rd_point should be point with length of 2'
+ direction = a - b
+ third_pt = b + np.array([-direction[1], direction[0]], dtype=np.float32)
+ return third_pt
+
+ def rotate_point(self, pt, angle_rad):
+ """Rotate a point by an angle.
+
+ Args:
+ pt (list[float]): 2 dimensional point to be rotated
+ angle_rad (float): rotation angle by radian
+
+ Returns:
+ list[float]: Rotated point.
+ """
+ assert len(pt) == 2
+ sn, cs = np.sin(angle_rad), np.cos(angle_rad)
+ new_x = pt[0] * cs - pt[1] * sn
+ new_y = pt[0] * sn + pt[1] * cs
+ rotated_pt = [new_x, new_y]
+
+ return rotated_pt
+
+ def get_affine_transform(self,
+ center,
+ input_size,
+ rot,
+ output_size,
+ shift=(0., 0.),
+ inv=False):
+ """Get the affine transform matrix, given the center/scale/rot/output_size.
+
+ Args:
+ center (np.ndarray[2, ]): Center of the bounding box (x, y).
+ input_size (np.ndarray[2, ]): Size of input feature (width, height).
+ rot (float): Rotation angle (degree).
+ output_size (np.ndarray[2, ]): Size of the destination heatmaps.
+ shift (0-100%): Shift translation ratio wrt the width/height.
+ Default (0., 0.).
+ inv (bool): Option to inverse the affine transform direction.
+ (inv=False: src->dst or inv=True: dst->src)
+
+ Returns:
+ np.ndarray: The transform matrix.
+ """
+ assert len(center) == 2
+ assert len(output_size) == 2
+ assert len(shift) == 2
+
+ if not isinstance(input_size, (np.ndarray, list)):
+ input_size = np.array([input_size, input_size], dtype=np.float32)
+ scale_tmp = input_size
+
+ shift = np.array(shift)
+ src_w = scale_tmp[0]
+ dst_w = output_size[0]
+ dst_h = output_size[1]
+
+ rot_rad = np.pi * rot / 180
+ src_dir = self.rotate_point([0., src_w * -0.5], rot_rad)
+ dst_dir = np.array([0., dst_w * -0.5])
+
+ src = np.zeros((3, 2), dtype=np.float32)
+
+ src[0, :] = center + scale_tmp * shift
+ src[1, :] = center + src_dir + scale_tmp * shift
+ src[2, :] = self._get_3rd_point(src[0, :], src[1, :])
+
+ dst = np.zeros((3, 2), dtype=np.float32)
+ dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
+ dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
+ dst[2, :] = self._get_3rd_point(dst[0, :], dst[1, :])
+
+ if inv:
+ trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))
+ else:
+ trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))
+
+ return trans
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ img = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
+
+ h, w = img.shape[:2]
+
+ if self.keep_res:
+ input_h = (h | self.pad) + 1
+ input_w = (w | self.pad) + 1
+ s = np.array([input_w, input_h], dtype=np.float32)
+ c = np.array([w // 2, h // 2], dtype=np.float32)
+
+ else:
+ s = max(h, w) * 1.0
+ input_h, input_w = self.input_h, self.input_w
+ c = np.array([w / 2., h / 2.], dtype=np.float32)
+
+ trans_input = self.get_affine_transform(c, s, 0, [input_w, input_h])
+ img = cv2.resize(img, (w, h))
+ inp = cv2.warpAffine(
+ img, trans_input, (input_w, input_h), flags=cv2.INTER_LINEAR)
+ return inp, im_info
+
+
+def preprocess(im, preprocess_ops):
+ # process image by preprocess_ops
+ im_info = {
+ 'scale_factor': np.array(
+ [1., 1.], dtype=np.float32),
+ 'im_shape': None,
+ }
+ im, im_info = decode_image(im, im_info)
+ for operator in preprocess_ops:
+ im, im_info = operator(im, im_info)
+ return im, im_info
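
`web_service.py` below builds this operator chain from the `Preprocess` section of `infer_cfg.yml`; the hand-written equivalent for this model (resize to 640×640 without keeping aspect ratio, ImageNet normalization, HWC→CHW, pad to a multiple of 32) looks like this — a sketch, assuming `test.jpg` from this directory:

```
from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride

ops = [
    Resize(target_size=[640, 640], keep_ratio=False, interp=2),
    NormalizeImage(mean=[0.485, 0.456, 0.406],
                   std=[0.229, 0.224, 0.225], is_scale=True),
    Permute(),
    PadStride(stride=32),
]
im, im_info = preprocess('test.jpg', ops)
print(im.shape, im_info['scale_factor'])  # (3, 640, 640) and per-axis scales
```
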
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/result.jpg b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/result.jpg
new file mode 100644
index 000000000..b20b3f2f6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/result.jpg differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/test.jpg b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/test.jpg
new file mode 100644
index 000000000..4f18d55c4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/test.jpg differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/web_service.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/web_service.py
new file mode 100644
index 000000000..527cce22c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/picodet_lcnet_1_5x_416_coco/web_service.py
@@ -0,0 +1,116 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from paddle_serving_server.web_service import WebService, Op
+import logging
+import numpy as np
+import sys
+import cv2
+from paddle_serving_app.reader import *
+import base64
+import os
+import yaml
+import glob
+from picodet_postprocess import PicoDetPostProcess
+from preprocess import preprocess, Resize, NormalizeImage, Permute, PadStride, LetterBoxResize, WarpAffine
+
+class PPYoloMbvOp(Op):
+ def init_op(self):
+        self.feed_dict = {}
+ deploy_file = 'infer_cfg.yml'
+ with open(deploy_file) as f:
+ yml_conf = yaml.safe_load(f)
+ preprocess_infos = yml_conf['Preprocess']
+ self.preprocess_ops = []
+ for op_info in preprocess_infos:
+ new_op_info = op_info.copy()
+ op_type = new_op_info.pop('type')
+ self.preprocess_ops.append(eval(op_type)(**new_op_info))
+
+ def preprocess(self, input_dicts, data_id, log_id):
+ (_, input_dict), = input_dicts.items()
+ imgs = []
+ for key in input_dict.keys():
+ data = base64.b64decode(input_dict[key].encode('utf8'))
+            data = np.frombuffer(data, np.uint8)
+ im = cv2.imdecode(data, 1)
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+ im_info = {
+ 'scale_factor': np.array(
+ [1., 1.], dtype=np.float32),
+ 'im_shape': None,
+ }
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ im_info['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ for operator in self.preprocess_ops:
+ im, im_info = operator(im, im_info)
+ imgs.append({
+ "image": im[np.newaxis,:],
+ "im_shape": [im_info['im_shape']],#np.array(list(im.shape[1:])).reshape(-1)[np.newaxis,:],
+ "scale_factor": [im_info['scale_factor']],#np.array([im_scale_y, im_scale_x]).astype('float32'),
+ })
+ self.feed_dict = {
+ "image": np.concatenate([x["image"] for x in imgs], axis=0),
+ "im_shape": np.concatenate([x["im_shape"] for x in imgs], axis=0),
+ "scale_factor": np.concatenate([x["scale_factor"] for x in imgs], axis=0)
+ }
+
+ return self.feed_dict, False, None, ""
+
+    def postprocess(self, input_dicts, fetch_dict, log_id, data_id=0):
+        # split the eight fetch vars, which arrive in fetch_list order
+        # (config.yml): the code assumes the first four are the per-stride
+        # score heads and the last four the box heads
+        np_score_list = []
+        np_boxes_list = []
+        for i, value in enumerate(fetch_dict.values()):
+            if i < 4:
+                np_score_list.append(value)
+            else:
+                np_boxes_list.append(value)
+
+ post_process = PicoDetPostProcess(
+ (640,640),
+ self.feed_dict['im_shape'],
+ self.feed_dict['scale_factor'],
+ [8, 16, 32, 64],
+ 0.5)
+ np_boxes, np_boxes_num = post_process(np_score_list, np_boxes_list)
+ res_dict = {}
+ d = []
+ for b in range(np_boxes.shape[0]):
+ c = {}
+ c["category_id"] = np_boxes[b][0]
+ c["bbox"] = [np_boxes[b][2],np_boxes[b][3],np_boxes[b][4],np_boxes[b][5]]
+ c["score"] = np_boxes[b][1]
+ d.append(c)
+ res_dict["bbox_result"] = str(d)
+ #fetch_dict["image"] = "234.png"
+ #res_dict = {"bbox_result": str(self.img_postprocess(fetch_dict, visualize=False))}
+ return res_dict, None, ""
+
+
+class PPYoloMbv(WebService):
+ def get_pipeline_response(self, read_op):
+ ppyolo_mbv3_op = PPYoloMbvOp(name="ppyolo_mbv3", input_ops=[read_op])
+ return ppyolo_mbv3_op
+
+
+ppyolo_mbv3_service = PPYoloMbv(name="ppyolo_mbv3")
+ppyolo_mbv3_service.prepare_pipeline_config("config.yml")
+ppyolo_mbv3_service.run_service()
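
With the exported `serving_server/` model directory in place (see the model export step earlier), service and client can be run roughly as follows:

```
cd code/picodet_lcnet_1_5x_416_coco/
python3 web_service.py           # HTTP on 2009 / RPC on 9999, per config.yml
# in a second shell:
python3 pipeline_http_client.py  # posts test.jpg, writes result.jpg with boxes
```
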
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__init__.py
new file mode 100644
index 000000000..6fcc982fb
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__init__.py
@@ -0,0 +1,26 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import (core, data, engine, modeling, model_zoo, optimizer, metrics,
+ utils, slim)
+
+
+try:
+ from .version import full_version as __version__
+ from .version import commit as __git_commit__
+except ImportError:
+ import sys
+ sys.stderr.write("Warning: import ppdet from source directory " \
+ "without installing, run 'python setup.py install' to " \
+ "install ppdet firstly\n")
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..2798aa67c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/optimizer.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/optimizer.cpython-37.pyc
new file mode 100644
index 000000000..f313e3454
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/__pycache__/optimizer.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__init__.py
new file mode 100644
index 000000000..d04277177
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__init__.py
@@ -0,0 +1,15 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import config
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..e9d9642f5
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/workspace.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/workspace.cpython-37.pyc
new file mode 100644
index 000000000..9b731d98d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/__pycache__/workspace.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__init__.py
new file mode 100644
index 000000000..d0c32e260
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..c6c49d37f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/schema.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/schema.cpython-37.pyc
new file mode 100644
index 000000000..b29e3e2bb
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/schema.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/yaml_helpers.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/yaml_helpers.cpython-37.pyc
new file mode 100644
index 000000000..d92cfca35
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/__pycache__/yaml_helpers.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/schema.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/schema.py
new file mode 100644
index 000000000..2e41b5c34
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/schema.py
@@ -0,0 +1,248 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import inspect
+import importlib
+import re
+
+try:
+ from docstring_parser import parse as doc_parse
+except Exception:
+
+ def doc_parse(*args):
+ pass
+
+
+try:
+ from typeguard import check_type
+except Exception:
+
+ def check_type(*args):
+ pass
+
+
+__all__ = ['SchemaValue', 'SchemaDict', 'SharedConfig', 'extract_schema']
+
+
+class SchemaValue(object):
+ def __init__(self, name, doc='', type=None):
+ super(SchemaValue, self).__init__()
+ self.name = name
+ self.doc = doc
+ self.type = type
+
+ def set_default(self, value):
+ self.default = value
+
+ def has_default(self):
+ return hasattr(self, 'default')
+
+
+class SchemaDict(dict):
+ def __init__(self, **kwargs):
+ super(SchemaDict, self).__init__()
+ self.schema = {}
+ self.strict = False
+ self.doc = ""
+ self.update(kwargs)
+
+ def __setitem__(self, key, value):
+ # XXX also update regular dict to SchemaDict??
+ if isinstance(value, dict) and key in self and isinstance(self[key],
+ SchemaDict):
+ self[key].update(value)
+ else:
+ super(SchemaDict, self).__setitem__(key, value)
+
+ def __missing__(self, key):
+ if self.has_default(key):
+ return self.schema[key].default
+ elif key in self.schema:
+ return self.schema[key]
+ else:
+ raise KeyError(key)
+
+ def copy(self):
+ newone = SchemaDict()
+ newone.__dict__.update(self.__dict__)
+ newone.update(self)
+ return newone
+
+ def set_schema(self, key, value):
+ assert isinstance(value, SchemaValue)
+ self.schema[key] = value
+
+ def set_strict(self, strict):
+ self.strict = strict
+
+ def has_default(self, key):
+ return key in self.schema and self.schema[key].has_default()
+
+ def is_default(self, key):
+ if not self.has_default(key):
+ return False
+ if hasattr(self[key], '__dict__'):
+ return True
+ else:
+ return key not in self or self[key] == self.schema[key].default
+
+ def find_default_keys(self):
+ return [
+ k for k in list(self.keys()) + list(self.schema.keys())
+ if self.is_default(k)
+ ]
+
+ def mandatory(self):
+ return any([k for k in self.schema.keys() if not self.has_default(k)])
+
+ def find_missing_keys(self):
+ missing = [
+ k for k in self.schema.keys()
+ if k not in self and not self.has_default(k)
+ ]
+        placeholders = [k for k in self if self[k] in ('<missing>', '<value>')]
+ return missing + placeholders
+
+ def find_extra_keys(self):
+ return list(set(self.keys()) - set(self.schema.keys()))
+
+ def find_mismatch_keys(self):
+ mismatch_keys = []
+ for arg in self.schema.values():
+ if arg.type is not None:
+ try:
+ check_type("{}.{}".format(self.name, arg.name),
+ self[arg.name], arg.type)
+ except Exception:
+ mismatch_keys.append(arg.name)
+ return mismatch_keys
+
+ def validate(self):
+ missing_keys = self.find_missing_keys()
+ if missing_keys:
+ raise ValueError("Missing param for class<{}>: {}".format(
+ self.name, ", ".join(missing_keys)))
+ extra_keys = self.find_extra_keys()
+ if extra_keys and self.strict:
+ raise ValueError("Extraneous param for class<{}>: {}".format(
+ self.name, ", ".join(extra_keys)))
+ mismatch_keys = self.find_mismatch_keys()
+ if mismatch_keys:
+ raise TypeError("Wrong param type for class<{}>: {}".format(
+ self.name, ", ".join(mismatch_keys)))
+
+
+class SharedConfig(object):
+ """
+ Representation class for `__shared__` annotations, which work as follows:
+
+ - if `key` is set for the module in config file, its value will take
+ precedence
+ - if `key` is not set for the module but present in the config file, its
+ value will be used
+ - otherwise, use the provided `default_value` as fallback
+
+ Args:
+ key: config[key] will be injected
+ default_value: fallback value
+ """
+
+ def __init__(self, key, default_value=None):
+ super(SharedConfig, self).__init__()
+ self.key = key
+ self.default_value = default_value
+
+
+def extract_schema(cls):
+ """
+ Extract schema from a given class
+
+ Args:
+ cls (type): Class from which to extract.
+
+ Returns:
+ schema (SchemaDict): Extracted schema.
+ """
+ ctor = cls.__init__
+ # python 2 compatibility
+ if hasattr(inspect, 'getfullargspec'):
+ argspec = inspect.getfullargspec(ctor)
+ annotations = argspec.annotations
+ has_kwargs = argspec.varkw is not None
+    else:
+        argspec = inspect.getargspec(ctor)
+        # python 2 type hinting workaround, see pep-3107
+        # however, since `typeguard` does not support python 2, type checking
+        # is still python 3 only for now
+        annotations = getattr(ctor, '__annotations__', {})
+        has_kwargs = argspec.keywords is not None
+
+ names = [arg for arg in argspec.args if arg != 'self']
+ defaults = argspec.defaults
+ num_defaults = argspec.defaults is not None and len(argspec.defaults) or 0
+ num_required = len(names) - num_defaults
+
+ docs = cls.__doc__
+ if docs is None and getattr(cls, '__category__', None) == 'op':
+ docs = cls.__call__.__doc__
+ try:
+ docstring = doc_parse(docs)
+ except Exception:
+ docstring = None
+
+ if docstring is None:
+ comments = {}
+ else:
+ comments = {}
+ for p in docstring.params:
+ match_obj = re.match('^([a-zA-Z_]+[a-zA-Z_0-9]*).*', p.arg_name)
+ if match_obj is not None:
+ comments[match_obj.group(1)] = p.description
+
+ schema = SchemaDict()
+ schema.name = cls.__name__
+ schema.doc = ""
+ if docs is not None:
+ start_pos = docs[0] == '\n' and 1 or 0
+ schema.doc = docs[start_pos:].split("\n")[0].strip()
+ # XXX handle paddle's weird doc convention
+ if '**' == schema.doc[:2] and '**' == schema.doc[-2:]:
+ schema.doc = schema.doc[2:-2].strip()
+ schema.category = hasattr(cls, '__category__') and getattr(
+ cls, '__category__') or 'module'
+ schema.strict = not has_kwargs
+ schema.pymodule = importlib.import_module(cls.__module__)
+ schema.inject = getattr(cls, '__inject__', [])
+ schema.shared = getattr(cls, '__shared__', [])
+ for idx, name in enumerate(names):
+ comment = name in comments and comments[name] or name
+ if name in schema.inject:
+ type_ = None
+ else:
+ type_ = name in annotations and annotations[name] or None
+ value_schema = SchemaValue(name, comment, type_)
+ if name in schema.shared:
+ assert idx >= num_required, "shared config must have default value"
+ default = defaults[idx - num_required]
+ value_schema.set_default(SharedConfig(name, default))
+ elif idx >= num_required:
+ default = defaults[idx - num_required]
+ value_schema.set_default(default)
+ schema.set_schema(name, value_schema)
+
+ return schema
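
`extract_schema` turns a class's `__init__` signature (plus its docstring, when `docstring_parser` is available) into a `SchemaDict` that drives config validation. A small illustration with a made-up class:

```
from ppdet.core.config.schema import extract_schema

class Dummy(object):
    """A toy module.

    Args:
        width (int): input width
        name (str): module name
    """

    def __init__(self, width=32, name='dummy'):
        self.width = width
        self.name = name

schema = extract_schema(Dummy)
print(schema.name)                  # Dummy
print(schema.has_default('width'))  # True
schema.update(width=64)
schema.validate()                   # passes: no missing/extra/mismatched keys
```
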
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/yaml_helpers.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/yaml_helpers.py
new file mode 100644
index 000000000..181cfe6fc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/config/yaml_helpers.py
@@ -0,0 +1,118 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import importlib
+import inspect
+
+import yaml
+from .schema import SharedConfig
+
+__all__ = ['serializable', 'Callable']
+
+
+def represent_dictionary_order(self, dict_data):
+ return self.represent_mapping('tag:yaml.org,2002:map', dict_data.items())
+
+
+def setup_orderdict():
+ from collections import OrderedDict
+ yaml.add_representer(OrderedDict, represent_dictionary_order)
+
+
+def _make_python_constructor(cls):
+ def python_constructor(loader, node):
+ if isinstance(node, yaml.SequenceNode):
+ args = loader.construct_sequence(node, deep=True)
+ return cls(*args)
+ else:
+ kwargs = loader.construct_mapping(node, deep=True)
+ try:
+ return cls(**kwargs)
+ except Exception as ex:
+ print("Error when construct {} instance from yaml config".
+ format(cls.__name__))
+ raise ex
+
+ return python_constructor
+
+
+def _make_python_representer(cls):
+ # python 2 compatibility
+ if hasattr(inspect, 'getfullargspec'):
+ argspec = inspect.getfullargspec(cls)
+ else:
+        argspec = inspect.getargspec(cls.__init__)
+ argnames = [arg for arg in argspec.args if arg != 'self']
+
+ def python_representer(dumper, obj):
+ if argnames:
+ data = {name: getattr(obj, name) for name in argnames}
+ else:
+ data = obj.__dict__
+ if '_id' in data:
+ del data['_id']
+ return dumper.represent_mapping(u'!{}'.format(cls.__name__), data)
+
+ return python_representer
+
+
+def serializable(cls):
+ """
+ Add loader and dumper for given class, which must be
+ "trivially serializable"
+
+ Args:
+ cls: class to be serialized
+
+ Returns: cls
+ """
+ yaml.add_constructor(u'!{}'.format(cls.__name__),
+ _make_python_constructor(cls))
+ yaml.add_representer(cls, _make_python_representer(cls))
+ return cls
+
+
+yaml.add_representer(SharedConfig,
+ lambda d, o: d.represent_data(o.default_value))
+
+
+@serializable
+class Callable(object):
+ """
+ Helper to be used in Yaml for creating arbitrary class objects
+
+ Args:
+ full_type (str): the full module path to target function
+ """
+
+ def __init__(self, full_type, args=[], kwargs={}):
+ super(Callable, self).__init__()
+ self.full_type = full_type
+ self.args = args
+ self.kwargs = kwargs
+
+ def __call__(self):
+ if '.' in self.full_type:
+ idx = self.full_type.rfind('.')
+ module = importlib.import_module(self.full_type[:idx])
+ func_name = self.full_type[idx + 1:]
+ else:
+ try:
+ module = importlib.import_module('builtins')
+ except Exception:
+ module = importlib.import_module('__builtin__')
+ func_name = self.full_type
+
+ func = getattr(module, func_name)
+ return func(*self.args, **self.kwargs)
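
The `@serializable` decorator registers a constructor/representer pair keyed on the `!ClassName` YAML tag, which is how module classes round-trip through config files. A sketch with a made-up class, loaded with `yaml.Loader` to match `workspace.py` below:

```
import yaml
from ppdet.core.config.yaml_helpers import serializable

@serializable
class Anchor(object):
    def __init__(self, size=8):
        self.size = size

obj = yaml.load("!Anchor\nsize: 16\n", Loader=yaml.Loader)
print(obj.size)        # 16
print(yaml.dump(obj))  # a !Anchor-tagged mapping with size: 16
```
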
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/workspace.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/workspace.py
new file mode 100644
index 000000000..e633746ed
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/core/workspace.py
@@ -0,0 +1,275 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import importlib
+import os
+import sys
+
+import yaml
+import collections
+
+try:
+ collectionsAbc = collections.abc
+except AttributeError:
+ collectionsAbc = collections
+
+from .config.schema import SchemaDict, SharedConfig, extract_schema
+from .config.yaml_helpers import serializable
+
+__all__ = [
+ 'global_config',
+ 'load_config',
+ 'merge_config',
+ 'get_registered_modules',
+ 'create',
+ 'register',
+ 'serializable',
+ 'dump_value',
+]
+
+
+def dump_value(value):
+ # XXX this is hackish, but collections.abc is not available in python 2
+ if hasattr(value, '__dict__') or isinstance(value, (dict, tuple, list)):
+ value = yaml.dump(value, default_flow_style=True)
+ value = value.replace('\n', '')
+ value = value.replace('...', '')
+ return "'{}'".format(value)
+ else:
+ # primitive types
+ return str(value)
+
+
+class AttrDict(dict):
+ """Single level attribute dict, NOT recursive"""
+
+ def __init__(self, **kwargs):
+ super(AttrDict, self).__init__()
+ super(AttrDict, self).update(kwargs)
+
+ def __getattr__(self, key):
+ if key in self:
+ return self[key]
+ raise AttributeError("object has no attribute '{}'".format(key))
+
+
+global_config = AttrDict()
+
+BASE_KEY = '_BASE_'
+
+
+# parse and load _BASE_ recursively
+def _load_config_with_base(file_path):
+ with open(file_path) as f:
+ file_cfg = yaml.load(f, Loader=yaml.Loader)
+
+ # NOTE: cfgs outside have higher priority than cfgs in _BASE_
+ if BASE_KEY in file_cfg:
+ all_base_cfg = AttrDict()
+ base_ymls = list(file_cfg[BASE_KEY])
+ for base_yml in base_ymls:
+ if base_yml.startswith("~"):
+ base_yml = os.path.expanduser(base_yml)
+ if not base_yml.startswith('/'):
+ base_yml = os.path.join(os.path.dirname(file_path), base_yml)
+
+ with open(base_yml) as f:
+ base_cfg = _load_config_with_base(base_yml)
+ all_base_cfg = merge_config(base_cfg, all_base_cfg)
+
+ del file_cfg[BASE_KEY]
+ return merge_config(file_cfg, all_base_cfg)
+
+ return file_cfg
+
+
+def load_config(file_path):
+ """
+ Load config from file.
+
+ Args:
+ file_path (str): Path of the config file to be loaded.
+
+ Returns: global config
+ """
+ _, ext = os.path.splitext(file_path)
+ assert ext in ['.yml', '.yaml'], "only support yaml files for now"
+
+ # load config from file and merge into global config
+ cfg = _load_config_with_base(file_path)
+ cfg['filename'] = os.path.splitext(os.path.split(file_path)[-1])[0]
+ merge_config(cfg)
+
+ return global_config
+
+
+def dict_merge(dct, merge_dct):
+ """ Recursive dict merge. Inspired by :meth:``dict.update()``, instead of
+ updating only top-level keys, dict_merge recurses down into dicts nested
+ to an arbitrary depth, updating keys. The ``merge_dct`` is merged into
+ ``dct``.
+
+ Args:
+ dct: dict onto which the merge is executed
+ merge_dct: dct merged into dct
+
+ Returns: dct
+ """
+ for k, v in merge_dct.items():
+ if (k in dct and isinstance(dct[k], dict) and
+ isinstance(merge_dct[k], collectionsAbc.Mapping)):
+ dict_merge(dct[k], merge_dct[k])
+ else:
+ dct[k] = merge_dct[k]
+ return dct
+
+
+def merge_config(config, another_cfg=None):
+ """
+ Merge config into global config or another_cfg.
+
+ Args:
+ config (dict): Config to be merged.
+
+ Returns: global config
+ """
+ global global_config
+ dct = another_cfg or global_config
+ return dict_merge(dct, config)
+
+
+def get_registered_modules():
+ return {k: v for k, v in global_config.items() if isinstance(v, SchemaDict)}
+
+
+def make_partial(cls):
+ op_module = importlib.import_module(cls.__op__.__module__)
+ op = getattr(op_module, cls.__op__.__name__)
+ cls.__category__ = getattr(cls, '__category__', None) or 'op'
+
+ def partial_apply(self, *args, **kwargs):
+ kwargs_ = self.__dict__.copy()
+ kwargs_.update(kwargs)
+ return op(*args, **kwargs_)
+
+ if getattr(cls, '__append_doc__', True): # XXX should default to True?
+ if sys.version_info[0] > 2:
+ cls.__doc__ = "Wrapper for `{}` OP".format(op.__name__)
+ cls.__init__.__doc__ = op.__doc__
+ cls.__call__ = partial_apply
+ cls.__call__.__doc__ = op.__doc__
+ else:
+ # XXX work around for python 2
+ partial_apply.__doc__ = op.__doc__
+ cls.__call__ = partial_apply
+ return cls
+
+
+def register(cls):
+ """
+ Register a given module class.
+
+ Args:
+ cls (type): Module class to be registered.
+
+ Returns: cls
+ """
+ if cls.__name__ in global_config:
+ raise ValueError("Module class already registered: {}".format(
+ cls.__name__))
+ if hasattr(cls, '__op__'):
+ cls = make_partial(cls)
+ global_config[cls.__name__] = extract_schema(cls)
+ return cls
+
+
+def create(cls_or_name, **kwargs):
+ """
+ Create an instance of given module class.
+
+ Args:
+ cls_or_name (type or str): Class of which to create instance.
+
+ Returns: instance of type `cls_or_name`
+ """
+ assert type(cls_or_name) in [type, str
+ ], "should be a class or name of a class"
+ name = type(cls_or_name) == str and cls_or_name or cls_or_name.__name__
+ assert name in global_config and \
+ isinstance(global_config[name], SchemaDict), \
+ "the module {} is not registered".format(name)
+ config = global_config[name]
+ cls = getattr(config.pymodule, name)
+ cls_kwargs = {}
+ cls_kwargs.update(global_config[name])
+
+    # parse `shared` annotation of registered modules
+ if getattr(config, 'shared', None):
+ for k in config.shared:
+ target_key = config[k]
+ shared_conf = config.schema[k].default
+ assert isinstance(shared_conf, SharedConfig)
+ if target_key is not None and not isinstance(target_key,
+ SharedConfig):
+ continue # value is given for the module
+ elif shared_conf.key in global_config:
+ # `key` is present in config
+ cls_kwargs[k] = global_config[shared_conf.key]
+ else:
+ cls_kwargs[k] = shared_conf.default_value
+
+    # parse `inject` annotation of registered modules
+ if getattr(cls, 'from_config', None):
+ cls_kwargs.update(cls.from_config(config, **kwargs))
+
+ if getattr(config, 'inject', None):
+ for k in config.inject:
+ target_key = config[k]
+ # optional dependency
+ if target_key is None:
+ continue
+
+ if isinstance(target_key, dict) or hasattr(target_key, '__dict__'):
+ if 'name' not in target_key.keys():
+ continue
+ inject_name = str(target_key['name'])
+ if inject_name not in global_config:
+                    raise ValueError(
+                        "Missing injection name {}; check its name in the cfg file".
+                        format(k))
+ target = global_config[inject_name]
+ for i, v in target_key.items():
+ if i == 'name':
+ continue
+ target[i] = v
+ if isinstance(target, SchemaDict):
+ cls_kwargs[k] = create(inject_name)
+ elif isinstance(target_key, str):
+ if target_key not in global_config:
+ raise ValueError("Missing injection config:", target_key)
+ target = global_config[target_key]
+ if isinstance(target, SchemaDict):
+ cls_kwargs[k] = create(target_key)
+ elif hasattr(target, '__dict__'): # serialized object
+ cls_kwargs[k] = target
+ else:
+ raise ValueError("Unsupported injection type:", target_key)
+ # prevent modification of global config values of reference types
+ # (e.g., list, dict) from within the created module instances
+ #kwargs = copy.deepcopy(kwargs)
+ return cls(**cls_kwargs)
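
`register` and `create` tie module classes to the global config: values merged via `merge_config` override `__init__` defaults when the instance is built. A compact sketch with a made-up class, overriding one default the way this project narrows COCO's 80 classes down to 3:

```
from ppdet.core.workspace import register, create, merge_config

@register
class ToyHead(object):
    def __init__(self, num_classes=80, feat_dim=96):
        self.num_classes = num_classes
        self.feat_dim = feat_dim

merge_config({'ToyHead': {'num_classes': 3}})
head = create('ToyHead')
print(head.num_classes, head.feat_dim)  # 3 96
```
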
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__init__.py
new file mode 100644
index 000000000..a12aa323e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__init__.py
@@ -0,0 +1,21 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import source
+from . import transform
+from . import reader
+
+from .source import *
+from .transform import *
+from .reader import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..b366ebbaa
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/reader.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/reader.cpython-37.pyc
new file mode 100644
index 000000000..c7c369faf
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/reader.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/shm_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/shm_utils.cpython-37.pyc
new file mode 100644
index 000000000..63a13890f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/__pycache__/shm_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__init__.py
new file mode 100644
index 000000000..61d5aa213
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..8c07ff43d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/annotation_cropper.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/annotation_cropper.cpython-37.pyc
new file mode 100644
index 000000000..31df2fcf5
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/annotation_cropper.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/chip_box_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/chip_box_utils.cpython-37.pyc
new file mode 100644
index 000000000..f24cf29d4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/__pycache__/chip_box_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/annotation_cropper.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/annotation_cropper.py
new file mode 100644
index 000000000..93a9a1f75
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/annotation_cropper.py
@@ -0,0 +1,542 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import copy
+import math
+import random
+import numpy as np
+from copy import deepcopy
+from typing import List, Tuple
+from collections import defaultdict
+
+from .chip_box_utils import nms, transform_chip_boxes2image_boxes
+from .chip_box_utils import find_chips_to_cover_overlaped_boxes
+from .chip_box_utils import transform_chip_box
+from .chip_box_utils import intersection_over_box
+
+
+class AnnoCropper(object):
+ def __init__(self, image_target_sizes: List[int],
+ valid_box_ratio_ranges: List[List[float]],
+ chip_target_size: int, chip_target_stride: int,
+ use_neg_chip: bool = False,
+ max_neg_num_per_im: int = 8,
+ max_per_img: int = -1,
+                 nms_thresh: float = 0.5
+ ):
+        """
+        Generate chips by chip_target_size and chip_target_stride.
+        These two parameters are just like kernel_size and stride in a CNN.
+
+        Each image has its raw size; after resizing, it gets its target size.
+        The resizing scale = target_size / raw_size.
+        The same scale applies to the chips of the image.
+        box_ratio = box_raw_size / image_raw_size = box_target_size / image_target_size
+        The 'size' mentioned above is the size of the long side of an image, box or chip.
+
+        :param image_target_sizes: e.g. [2000, 1000]
+        :param valid_box_ratio_ranges: e.g. [[-1, 0.1], [0.08, -1]]
+        :param chip_target_size: e.g. 500
+        :param chip_target_stride: e.g. 200
+        """
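+        # Worked example (illustrative numbers): a raw image whose long side is
+        # 4000 px with image_target_size 2000 gives scale = 0.5; a gt box whose
+        # long side is 300 px then has box_ratio = 300 / 4000 = 0.075, which
+        # falls inside a valid_box_ratio_range such as [-1, 0.1].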
+ self.target_sizes = image_target_sizes
+ self.valid_box_ratio_ranges = valid_box_ratio_ranges
+ assert len(self.target_sizes) == len(self.valid_box_ratio_ranges)
+ self.scale_num = len(self.target_sizes)
+ self.chip_target_size = chip_target_size # is target size
+ self.chip_target_stride = chip_target_stride # is target stride
+ self.use_neg_chip = use_neg_chip
+ self.max_neg_num_per_im = max_neg_num_per_im
+ self.max_per_img = max_per_img
+ self.nms_thresh = nms_thresh
+
+ def crop_anno_records(self, records: List[dict]):
+ """
+ The main logic:
+ # foreach record(image):
+ # foreach scale:
+ # 1 generate chips by chip size and stride for each scale
+ # 2 get pos chips
+ # - validate boxes: current scale; h,w >= 1
+ # - find pos chips greedily by valid gt boxes in each scale
+ # - for every valid gt box, find its corresponding pos chips in each scale
+ # 3 get neg chips
+            #     - If given proposals, find neg boxes among them which are not in pos chips
+            #     - If neg boxes were found in the last step, find neg chips and assign neg boxes to them as in step 2
+            # 4 sample neg chips if there are too many per image
+            # transform the annotations of this image at each scale to chip (pos chip & neg chip) annotations
+
+        :param records: standard coco_record but with extra key `proposals` (Px4), which are predicted by the stage-1
+        model and may contain neg boxes.
+ :return: new_records, list of dict like
+ {
+ 'im_file': 'fake_image1.jpg',
+ 'im_id': np.array([1]), # new _global_chip_id as im_id
+ 'h': h, # chip height
+ 'w': w, # chip width
+ 'is_crowd': is_crowd, # Nx1 -> Mx1
+ 'gt_class': gt_class, # Nx1 -> Mx1
+ 'gt_bbox': gt_bbox, # Nx4 -> Mx4, 4 represents [x1,y1,x2,y2]
+ 'gt_poly': gt_poly, # [None]xN -> [None]xM
+ 'chip': [x1, y1, x2, y2] # added
+ }
+
+ Attention:
+ ------------------------------>x
+ |
+ | (x1,y1)------
+ | | |
+ | | |
+ | | |
+ | | |
+ | | |
+ | ----------
+ | (x2,y2)
+ |
+        ↓
+ y
+
+ If we use [x1, y1, x2, y2] to represent boxes or chips,
+ (x1,y1) is the left-top point which is in the box,
+ but (x2,y2) is the right-bottom point which is not in the box.
+ So x1 in [0, w-1], x2 in [1, w], y1 in [0, h-1], y2 in [1,h].
+        So you can use x2-x1 to get the width, and image[y1:y2, x1:x2] to get the box area.
+ """
+
+ self.chip_records = []
+ self._global_chip_id = 1
+ for r in records:
+            self._cur_im_pos_chips = []  # element: (chip, boxes_idx); chip is [x1, y1, x2, y2], boxes_idx is List[int]
+ self._cur_im_neg_chips = [] # element: (chip, neg_box_num)
+ for scale_i in range(self.scale_num):
+ self._get_current_scale_parameters(scale_i, r)
+
+ # Cx4
+ chips = self._create_chips(r['h'], r['w'], self._cur_scale)
+
+                # dict: chipid->[box_id, ...]
+ pos_chip2boxes_idx = self._get_valid_boxes_and_pos_chips(r['gt_bbox'], chips)
+
+ # dict: chipid->neg_box_num
+ neg_chip2box_num = self._get_neg_boxes_and_chips(chips, list(pos_chip2boxes_idx.keys()), r.get('proposals', None))
+
+ self._add_to_cur_im_chips(chips, pos_chip2boxes_idx, neg_chip2box_num)
+
+ cur_image_records = self._trans_all_chips2annotations(r)
+ self.chip_records.extend(cur_image_records)
+ return self.chip_records
+
+ def _add_to_cur_im_chips(self, chips, pos_chip2boxes_idx, neg_chip2box_num):
+ for pos_chipid, boxes_idx in pos_chip2boxes_idx.items():
+ chip = np.array(chips[pos_chipid]) # copy chips slice
+ self._cur_im_pos_chips.append((chip, boxes_idx))
+
+ if neg_chip2box_num is None:
+ return
+
+ for neg_chipid, neg_box_num in neg_chip2box_num.items():
+ chip = np.array(chips[neg_chipid])
+ self._cur_im_neg_chips.append((chip, neg_box_num))
+
+ def _trans_all_chips2annotations(self, r):
+ gt_bbox = r['gt_bbox']
+ im_file = r['im_file']
+ is_crowd = r['is_crowd']
+ gt_class = r['gt_class']
+ # gt_poly = r['gt_poly'] # [None]xN
+ # remaining keys: im_id, h, w
+ chip_records = self._trans_pos_chips2annotations(im_file, gt_bbox, is_crowd, gt_class)
+
+ if not self.use_neg_chip:
+ return chip_records
+
+ sampled_neg_chips = self._sample_neg_chips()
+ neg_chip_records = self._trans_neg_chips2annotations(im_file, sampled_neg_chips)
+ chip_records.extend(neg_chip_records)
+ return chip_records
+
+ def _trans_pos_chips2annotations(self, im_file, gt_bbox, is_crowd, gt_class):
+ chip_records = []
+ for chip, boxes_idx in self._cur_im_pos_chips:
+ chip_bbox, final_boxes_idx = transform_chip_box(gt_bbox, boxes_idx, chip)
+ x1, y1, x2, y2 = chip
+ chip_h = y2 - y1
+ chip_w = x2 - x1
+ rec = {
+ 'im_file': im_file,
+ 'im_id': np.array([self._global_chip_id]),
+ 'h': chip_h,
+ 'w': chip_w,
+ 'gt_bbox': chip_bbox,
+ 'is_crowd': is_crowd[final_boxes_idx].copy(),
+ 'gt_class': gt_class[final_boxes_idx].copy(),
+ # 'gt_poly': [None] * len(final_boxes_idx),
+ 'chip': chip
+ }
+ self._global_chip_id += 1
+ chip_records.append(rec)
+ return chip_records
+
+ def _sample_neg_chips(self):
+ pos_num = len(self._cur_im_pos_chips)
+ neg_num = len(self._cur_im_neg_chips)
+ sample_num = min(pos_num + 2, self.max_neg_num_per_im)
+ assert sample_num >= 1
+ if neg_num <= sample_num:
+ return self._cur_im_neg_chips
+
+ candidate_num = int(sample_num * 1.5)
+ candidate_neg_chips = sorted(self._cur_im_neg_chips, key=lambda x: -x[1])[:candidate_num]
+ random.shuffle(candidate_neg_chips)
+ sampled_neg_chips = candidate_neg_chips[:sample_num]
+ return sampled_neg_chips
+
+ def _trans_neg_chips2annotations(self, im_file: str, sampled_neg_chips: List[Tuple]):
+ chip_records = []
+ for chip, neg_box_num in sampled_neg_chips:
+ x1, y1, x2, y2 = chip
+ chip_h = y2 - y1
+ chip_w = x2 - x1
+ rec = {
+ 'im_file': im_file,
+ 'im_id': np.array([self._global_chip_id]),
+ 'h': chip_h,
+ 'w': chip_w,
+ 'gt_bbox': np.zeros((0, 4), dtype=np.float32),
+ 'is_crowd': np.zeros((0, 1), dtype=np.int32),
+ 'gt_class': np.zeros((0, 1), dtype=np.int32),
+ # 'gt_poly': [],
+ 'chip': chip
+ }
+ self._global_chip_id += 1
+ chip_records.append(rec)
+ return chip_records
+
+ def _get_current_scale_parameters(self, scale_i, r):
+ im_size = max(r['h'], r['w'])
+ im_target_size = self.target_sizes[scale_i]
+ self._cur_im_size, self._cur_im_target_size = im_size, im_target_size
+ self._cur_scale = self._get_current_scale(im_target_size, im_size)
+ self._cur_valid_ratio_range = self.valid_box_ratio_ranges[scale_i]
+
+ def _get_current_scale(self, im_target_size, im_size):
+ return im_target_size / im_size
+
+ def _create_chips(self, h: int, w: int, scale: float):
+ """
+ Generate chips by chip_target_size and chip_target_stride.
+        These two parameters are just like kernel_size and stride in a CNN.
+ :return: chips, Cx4, xy in raw size dimension
+ """
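+        # Worked example (illustrative): with chip_target_size=500 and
+        # chip_target_stride=200, chips start every 200 px in target scale
+        # (0, 200, 400, ...) and each spans up to 500 px, so neighbouring
+        # chips overlap by 300 px.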
+ chip_size = self.chip_target_size # omit target for simplicity
+ stride = self.chip_target_stride
+ width = int(scale * w)
+ height = int(scale * h)
+ min_chip_location_diff = 20 # in target size
+
+ assert chip_size >= stride
+ chip_overlap = chip_size - stride
+        if (width - chip_overlap) % stride > min_chip_location_diff:  # the part not divisible by stride is relatively large, so keep it
+ w_steps = max(1, int(math.ceil((width - chip_overlap) / stride)))
+        else:  # the part not divisible by stride is relatively small, so discard it
+ w_steps = max(1, int(math.floor((width - chip_overlap) / stride)))
+ if (height - chip_overlap) % stride > min_chip_location_diff:
+ h_steps = max(1, int(math.ceil((height - chip_overlap) / stride)))
+ else:
+ h_steps = max(1, int(math.floor((height - chip_overlap) / stride)))
+
+ chips = list()
+ for j in range(h_steps):
+ for i in range(w_steps):
+ x1 = i * stride
+ y1 = j * stride
+ x2 = min(x1 + chip_size, width)
+ y2 = min(y1 + chip_size, height)
+ chips.append([x1, y1, x2, y2])
+
+ # check chip size
+ for item in chips:
+ if item[2] - item[0] > chip_size * 1.1 or item[3] - item[1] > chip_size * 1.1:
+ raise ValueError(item)
+        chips = np.array(chips, dtype=np.float64)
+
+ raw_size_chips = chips / scale
+ return raw_size_chips
+
+ def _get_valid_boxes_and_pos_chips(self, gt_bbox, chips):
+ valid_ratio_range = self._cur_valid_ratio_range
+ im_size = self._cur_im_size
+ scale = self._cur_scale
+ # Nx4 N
+ valid_boxes, valid_boxes_idx = self._validate_boxes(valid_ratio_range, im_size, gt_bbox, scale)
+ # dict: chipid->[box_id, ...]
+ pos_chip2boxes_idx = self._find_pos_chips(chips, valid_boxes, valid_boxes_idx)
+ return pos_chip2boxes_idx
+
+ def _validate_boxes(self, valid_ratio_range: List[float],
+ im_size: int,
+ gt_boxes: 'np.array of Nx4',
+ scale: float):
+ """
+ :return: valid_boxes: Nx4, valid_boxes_idx: N
+ """
+ ws = (gt_boxes[:, 2] - gt_boxes[:, 0]).astype(np.int32)
+ hs = (gt_boxes[:, 3] - gt_boxes[:, 1]).astype(np.int32)
+ maxs = np.maximum(ws, hs)
+ box_ratio = maxs / im_size
+ mins = np.minimum(ws, hs)
+ target_mins = mins * scale
+
+ low = valid_ratio_range[0] if valid_ratio_range[0] > 0 else 0
+        high = valid_ratio_range[1] if valid_ratio_range[1] > 0 else np.finfo(np.float64).max
+
+ valid_boxes_idx = np.nonzero((low <= box_ratio) & (box_ratio < high) & (target_mins >= 2))[0]
+ valid_boxes = gt_boxes[valid_boxes_idx]
+ return valid_boxes, valid_boxes_idx
+
+ def _find_pos_chips(self, chips: 'Cx4', valid_boxes: 'Bx4', valid_boxes_idx: 'B'):
+ """
+ :return: pos_chip2boxes_idx, dict: chipid->[box_id, ...]
+ """
+ iob = intersection_over_box(chips, valid_boxes) # overlap, CxB
+
+ iob_threshold_to_find_chips = 1.
+ pos_chip_ids, _ = self._find_chips_to_cover_overlaped_boxes(iob, iob_threshold_to_find_chips)
+ pos_chip_ids = set(pos_chip_ids)
+
+ iob_threshold_to_assign_box = 0.5
+ pos_chip2boxes_idx = self._assign_boxes_to_pos_chips(
+ iob, iob_threshold_to_assign_box, pos_chip_ids, valid_boxes_idx)
+ return pos_chip2boxes_idx
+
+ def _find_chips_to_cover_overlaped_boxes(self, iob, overlap_threshold):
+ return find_chips_to_cover_overlaped_boxes(iob, overlap_threshold)
+
+ def _assign_boxes_to_pos_chips(self, iob, overlap_threshold, pos_chip_ids, valid_boxes_idx):
+ chip_ids, box_ids = np.nonzero(iob >= overlap_threshold)
+ pos_chip2boxes_idx = defaultdict(list)
+ for chip_id, box_id in zip(chip_ids, box_ids):
+ if chip_id not in pos_chip_ids:
+ continue
+ raw_gt_box_idx = valid_boxes_idx[box_id]
+ pos_chip2boxes_idx[chip_id].append(raw_gt_box_idx)
+ return pos_chip2boxes_idx
+
+ def _get_neg_boxes_and_chips(self, chips: 'Cx4', pos_chip_ids: 'D', proposals: 'Px4'):
+ """
+ :param chips:
+ :param pos_chip_ids:
+ :param proposals:
+ :return: neg_chip2box_num, None or dict: chipid->neg_box_num
+ """
+ if not self.use_neg_chip:
+ return None
+
+ # train proposals maybe None
+ if proposals is None or len(proposals) < 1:
+ return None
+
+ valid_ratio_range = self._cur_valid_ratio_range
+ im_size = self._cur_im_size
+ scale = self._cur_scale
+
+ valid_props, _ = self._validate_boxes(valid_ratio_range, im_size, proposals, scale)
+ neg_boxes = self._find_neg_boxes(chips, pos_chip_ids, valid_props)
+ neg_chip2box_num = self._find_neg_chips(chips, pos_chip_ids, neg_boxes)
+ return neg_chip2box_num
+
+ def _find_neg_boxes(self, chips: 'Cx4', pos_chip_ids: 'D', valid_props: 'Px4'):
+ """
+ :return: neg_boxes: Nx4
+ """
+ if len(pos_chip_ids) == 0:
+ return valid_props
+
+ pos_chips = chips[pos_chip_ids]
+ iob = intersection_over_box(pos_chips, valid_props)
+ overlap_per_prop = np.max(iob, axis=0)
+ non_overlap_props_idx = overlap_per_prop < 0.5
+ neg_boxes = valid_props[non_overlap_props_idx]
+ return neg_boxes
+
+ def _find_neg_chips(self, chips: 'Cx4', pos_chip_ids: 'D', neg_boxes: 'Nx4'):
+ """
+ :return: neg_chip2box_num, dict: chipid->neg_box_num
+ """
+ neg_chip_ids = np.setdiff1d(np.arange(len(chips)), pos_chip_ids)
+ neg_chips = chips[neg_chip_ids]
+
+ iob = intersection_over_box(neg_chips, neg_boxes)
+ iob_threshold_to_find_chips = 0.7
+ chosen_neg_chip_ids, chip_id2overlap_box_num = \
+ self._find_chips_to_cover_overlaped_boxes(iob, iob_threshold_to_find_chips)
+
+ neg_chipid2box_num = {}
+ for cid in chosen_neg_chip_ids:
+ box_num = chip_id2overlap_box_num[cid]
+ raw_chip_id = neg_chip_ids[cid]
+ neg_chipid2box_num[raw_chip_id] = box_num
+ return neg_chipid2box_num
+
+ def crop_infer_anno_records(self, records: List[dict]):
+ """
+        transform image records to chip records
+ :param records:
+ :return: new_records, list of dict like
+ {
+ 'im_file': 'fake_image1.jpg',
+ 'im_id': np.array([1]), # new _global_chip_id as im_id
+ 'h': h, # chip height
+ 'w': w, # chip width
+ 'chip': [x1, y1, x2, y2] # added
+ 'ori_im_h': ori_im_h # added, origin image height
+ 'ori_im_w': ori_im_w # added, origin image width
+ 'scale_i': 0 # added,
+ }
+ """
+ self.chip_records = []
+        self._global_chip_id = 1  # im_id starts from 1
+ self._global_chip_id2img_id = {}
+
+ for r in records:
+ for scale_i in range(self.scale_num):
+ self._get_current_scale_parameters(scale_i, r)
+ # Cx4
+ chips = self._create_chips(r['h'], r['w'], self._cur_scale)
+ cur_img_chip_record = self._get_chips_records(r, chips, scale_i)
+ self.chip_records.extend(cur_img_chip_record)
+
+ return self.chip_records
+
+ def _get_chips_records(self, rec, chips, scale_i):
+ cur_img_chip_records = []
+ ori_im_h = rec["h"]
+ ori_im_w = rec["w"]
+ im_file = rec["im_file"]
+ ori_im_id = rec["im_id"]
+ for id, chip in enumerate(chips):
+ chip_rec = {}
+ x1, y1, x2, y2 = chip
+ chip_h = y2 - y1
+ chip_w = x2 - x1
+ chip_rec["im_file"] = im_file
+ chip_rec["im_id"] = self._global_chip_id
+ chip_rec["h"] = chip_h
+ chip_rec["w"] = chip_w
+ chip_rec["chip"] = chip
+ chip_rec["ori_im_h"] = ori_im_h
+ chip_rec["ori_im_w"] = ori_im_w
+ chip_rec["scale_i"] = scale_i
+
+ self._global_chip_id2img_id[self._global_chip_id] = int(ori_im_id)
+ self._global_chip_id += 1
+ cur_img_chip_records.append(chip_rec)
+
+ return cur_img_chip_records
+
+ def aggregate_chips_detections(self, results, records=None):
+ """
+ # 1. transform chip dets to image dets
+ # 2. nms boxes per image;
+ # 3. format output results
+        :param results:
+        :param records:
+        :return:
+ """
+ results = deepcopy(results)
+ records = records if records else self.chip_records
+ img_id2bbox = self._transform_chip2image_bboxes(results, records)
+ nms_img_id2bbox = self._nms_dets(img_id2bbox)
+ aggregate_results = self._reformat_results(nms_img_id2bbox)
+ return aggregate_results
+
+ def _transform_chip2image_bboxes(self, results, records):
+ # 1. Transform chip dets to image dets;
+ # 2. Filter valid range;
+        # 3. Reformat and aggregate chip dets to get scale_cls_dets
+ img_id2bbox = defaultdict(list)
+ for result in results:
+ bbox_locs = result['bbox']
+ bbox_nums = result['bbox_num']
+ if len(bbox_locs) == 1 and bbox_locs[0][0] == -1: # current batch has no detections
+ # bbox_locs = array([[-1.]], dtype=float32); bbox_nums = [[1]]
+ # MultiClassNMS output: If there is no detected boxes for all images, lod will be set to {1} and Out only contains one value which is -1.
+ continue
+ im_ids = result['im_id'] # replace with range(len(bbox_nums))
+
+ last_bbox_num = 0
+ for idx, im_id in enumerate(im_ids):
+
+ cur_bbox_len = bbox_nums[idx]
+ bboxes = bbox_locs[last_bbox_num: last_bbox_num + cur_bbox_len]
+ last_bbox_num += cur_bbox_len
+ # box: [num_id, score, xmin, ymin, xmax, ymax]
+ if len(bboxes) == 0: # current image has no detections
+ continue
+
+ chip_rec = records[int(im_id) - 1] # im_id starts from 1, type is np.int64
+ image_size = max(chip_rec["ori_im_h"], chip_rec["ori_im_w"])
+
+ bboxes = transform_chip_boxes2image_boxes(bboxes, chip_rec["chip"], chip_rec["ori_im_h"], chip_rec["ori_im_w"])
+
+ scale_i = chip_rec["scale_i"]
+ cur_scale = self._get_current_scale(self.target_sizes[scale_i], image_size)
+ _, valid_boxes_idx = self._validate_boxes(self.valid_box_ratio_ranges[scale_i], image_size,
+ bboxes[:, 2:], cur_scale)
+ ori_img_id = self._global_chip_id2img_id[int(im_id)]
+
+ img_id2bbox[ori_img_id].append(bboxes[valid_boxes_idx])
+
+ return img_id2bbox
+
+ def _nms_dets(self, img_id2bbox):
+ # 1. NMS on each image-class
+ # 2. Limit number of detections to MAX_PER_IMAGE if requested
+ max_per_img = self.max_per_img
+ nms_thresh = self.nms_thresh
+
+ for img_id in img_id2bbox:
+ box = img_id2bbox[img_id] # list of np.array of shape [N, 6], 6 is [label, score, x1, y1, x2, y2]
+ box = np.concatenate(box, axis=0)
+ nms_dets = nms(box, nms_thresh)
+ if max_per_img > 0:
+ if len(nms_dets) > max_per_img:
+ keep = np.argsort(-nms_dets[:, 1])[:max_per_img]
+ nms_dets = nms_dets[keep]
+
+ img_id2bbox[img_id] = nms_dets
+
+ return img_id2bbox
+
+ def _reformat_results(self, img_id2bbox):
+ """reformat results"""
+ im_ids = img_id2bbox.keys()
+ results = []
+ for img_id in im_ids: # output by original im_id order
+ if len(img_id2bbox[img_id]) == 0:
+ bbox = np.array([[-1., 0., 0., 0., 0., 0.]]) # edge case: no detections
+ bbox_num = np.array([0])
+ else:
+ # np.array of shape [N, 6], 6 is [label, score, x1, y1, x2, y2]
+ bbox = img_id2bbox[img_id]
+ bbox_num = np.array([len(bbox)])
+ res = dict(
+ im_id=np.array([[img_id]]),
+ bbox=bbox,
+ bbox_num=bbox_num
+ )
+ results.append(res)
+ return results
+
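+# Usage sketch (hypothetical values; `records` follow the coco_record layout
+# described in crop_anno_records):
+#   cropper = AnnoCropper(image_target_sizes=[2000, 1000],
+#                         valid_box_ratio_ranges=[[-1, 0.1], [0.08, -1]],
+#                         chip_target_size=500, chip_target_stride=200)
+#   chip_records = cropper.crop_anno_records(records)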
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/chip_box_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/chip_box_utils.py
new file mode 100644
index 000000000..d6e81a165
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/crop_utils/chip_box_utils.py
@@ -0,0 +1,166 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+
+
+def bbox_area(boxes):
+ return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
+
+
+def intersection_over_box(chips, boxes):
+ """
+ intersection area over box area
+ :param chips: C
+ :param boxes: B
+ :return: iob, CxB
+ """
+ M = chips.shape[0]
+ N = boxes.shape[0]
+ if M * N == 0:
+ return np.zeros([M, N], dtype='float32')
+
+ box_area = bbox_area(boxes) # B
+
+ inter_x2y2 = np.minimum(np.expand_dims(chips, 1)[:, :, 2:], boxes[:, 2:]) # CxBX2
+ inter_x1y1 = np.maximum(np.expand_dims(chips, 1)[:, :, :2], boxes[:, :2]) # CxBx2
+ inter_wh = inter_x2y2 - inter_x1y1
+ inter_wh = np.clip(inter_wh, a_min=0, a_max=None)
+ inter_area = inter_wh[:, :, 0] * inter_wh[:, :, 1] # CxB
+
+ iob = inter_area / np.expand_dims(box_area, 0)
+ return iob
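+
+
+# Example (illustrative): chip [0, 0, 10, 10] and box [5, 5, 15, 15] intersect
+# in a 5x5 region; the box area is 100, so intersection_over_box yields 0.25.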
+
+
+def clip_boxes(boxes, im_shape):
+ """
+ Clip boxes to image boundaries.
+ :param boxes: [N, 4]
+ :param im_shape: tuple of 2, [h, w]
+ :return: [N, 4]
+ """
+ # x1 >= 0
+ boxes[:, 0] = np.clip(boxes[:, 0], 0, im_shape[1] - 1)
+ # y1 >= 0
+ boxes[:, 1] = np.clip(boxes[:, 1], 0, im_shape[0] - 1)
+ # x2 < im_shape[1]
+ boxes[:, 2] = np.clip(boxes[:, 2], 1, im_shape[1])
+ # y2 < im_shape[0]
+ boxes[:, 3] = np.clip(boxes[:, 3], 1, im_shape[0])
+ return boxes
+
+
+def transform_chip_box(gt_bbox: 'Gx4', boxes_idx: 'B', chip: '4'):
+ boxes_idx = np.array(boxes_idx)
+ cur_gt_bbox = gt_bbox[boxes_idx].copy() # Bx4
+ x1, y1, x2, y2 = chip
+ cur_gt_bbox[:, 0] -= x1
+ cur_gt_bbox[:, 1] -= y1
+ cur_gt_bbox[:, 2] -= x1
+ cur_gt_bbox[:, 3] -= y1
+ h = y2 - y1
+ w = x2 - x1
+ cur_gt_bbox = clip_boxes(cur_gt_bbox, (h, w))
+ ws = (cur_gt_bbox[:, 2] - cur_gt_bbox[:, 0]).astype(np.int32)
+ hs = (cur_gt_bbox[:, 3] - cur_gt_bbox[:, 1]).astype(np.int32)
+ valid_idx = (ws >= 2) & (hs >= 2)
+ return cur_gt_bbox[valid_idx], boxes_idx[valid_idx]
+
+
+def find_chips_to_cover_overlaped_boxes(iob, overlap_threshold):
+ chip_ids, box_ids = np.nonzero(iob >= overlap_threshold)
+ chip_id2overlap_box_num = np.bincount(chip_ids) # 1d array
+ chip_id2overlap_box_num = np.pad(chip_id2overlap_box_num, (0, len(iob) - len(chip_id2overlap_box_num)),
+ constant_values=0)
+
+ chosen_chip_ids = []
+ while len(box_ids) > 0:
+ value_counts = np.bincount(chip_ids) # 1d array
+ max_count_chip_id = np.argmax(value_counts)
+ assert max_count_chip_id not in chosen_chip_ids
+ chosen_chip_ids.append(max_count_chip_id)
+
+ box_ids_in_cur_chip = box_ids[chip_ids == max_count_chip_id]
+ ids_not_in_cur_boxes_mask = np.logical_not(np.isin(box_ids, box_ids_in_cur_chip))
+ chip_ids = chip_ids[ids_not_in_cur_boxes_mask]
+ box_ids = box_ids[ids_not_in_cur_boxes_mask]
+ return chosen_chip_ids, chip_id2overlap_box_num
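+
+
+# How the greedy cover works (summary of the loop above): repeatedly pick the
+# chip overlapping the most still-uncovered boxes, mark those boxes covered,
+# and stop once every box with iob >= overlap_threshold is covered.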
+
+
+def transform_chip_boxes2image_boxes(chip_boxes, chip, img_h, img_w):
+ chip_boxes = np.array(sorted(chip_boxes, key=lambda item: -item[1]))
+ xmin, ymin, _, _ = chip
+ # Transform to origin image loc
+ chip_boxes[:, 2] += xmin
+ chip_boxes[:, 4] += xmin
+ chip_boxes[:, 3] += ymin
+ chip_boxes[:, 5] += ymin
+ chip_boxes = clip_boxes(chip_boxes, (img_h, img_w))
+ return chip_boxes
+
+
+def nms(dets, thresh):
+ """Apply classic DPM-style greedy NMS."""
+ if dets.shape[0] == 0:
+ return dets[[], :]
+ scores = dets[:, 1]
+ x1 = dets[:, 2]
+ y1 = dets[:, 3]
+ x2 = dets[:, 4]
+ y2 = dets[:, 5]
+
+ areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ order = scores.argsort()[::-1]
+
+ ndets = dets.shape[0]
+    suppressed = np.zeros(ndets, dtype=int)
+
+ # nominal indices
+ # _i, _j
+ # sorted indices
+ # i, j
+ # temp variables for box i's (the box currently under consideration)
+ # ix1, iy1, ix2, iy2, iarea
+
+ # variables for computing overlap with box j (lower scoring box)
+ # xx1, yy1, xx2, yy2
+ # w, h
+ # inter, ovr
+
+ for _i in range(ndets):
+ i = order[_i]
+ if suppressed[i] == 1:
+ continue
+ ix1 = x1[i]
+ iy1 = y1[i]
+ ix2 = x2[i]
+ iy2 = y2[i]
+ iarea = areas[i]
+ for _j in range(_i + 1, ndets):
+ j = order[_j]
+ if suppressed[j] == 1:
+ continue
+ xx1 = max(ix1, x1[j])
+ yy1 = max(iy1, y1[j])
+ xx2 = min(ix2, x2[j])
+ yy2 = min(iy2, y2[j])
+ w = max(0.0, xx2 - xx1 + 1)
+ h = max(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (iarea + areas[j] - inter)
+ if ovr >= thresh:
+ suppressed[j] = 1
+ keep = np.where(suppressed == 0)[0]
+ dets = dets[keep, :]
+ return dets
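+
+
+# Example (illustrative): each row of `dets` is [label, score, x1, y1, x2, y2];
+# when two boxes overlap with IoU >= thresh, nms() suppresses the lower-scoring
+# one and returns the surviving rows.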
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/reader.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/reader.py
new file mode 100644
index 000000000..c9ea09af2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/reader.py
@@ -0,0 +1,302 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import traceback
+import six
+import sys
+import numpy as np
+
+from paddle.io import DataLoader, DistributedBatchSampler
+from paddle.fluid.dataloader.collate import default_collate_fn
+
+from ppdet.core.workspace import register
+from . import transform
+from .shm_utils import _get_shared_memory_size_in_M
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('reader')
+
+MAIN_PID = os.getpid()
+
+
+class Compose(object):
+ def __init__(self, transforms, num_classes=80):
+ self.transforms = transforms
+ self.transforms_cls = []
+ for t in self.transforms:
+ for k, v in t.items():
+ op_cls = getattr(transform, k)
+ f = op_cls(**v)
+ if hasattr(f, 'num_classes'):
+ f.num_classes = num_classes
+
+ self.transforms_cls.append(f)
+
+ def __call__(self, data):
+ for f in self.transforms_cls:
+ try:
+ data = f(data)
+ except Exception as e:
+ stack_info = traceback.format_exc()
+                logger.warning("failed to map sample transform [{}] "
+ "with error: {} and stack:\n{}".format(
+ f, e, str(stack_info)))
+ raise e
+
+ return data
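+
+
+# Usage sketch (hypothetical op config; each dict maps a transform class name
+# in ppdet.data.transform to its constructor kwargs):
+#   compose = Compose([{'Decode': {}},
+#                      {'Resize': {'target_size': [640, 640], 'keep_ratio': False}}])
+#   sample = compose(sample)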
+
+
+class BatchCompose(Compose):
+ def __init__(self, transforms, num_classes=80, collate_batch=True):
+ super(BatchCompose, self).__init__(transforms, num_classes)
+ self.collate_batch = collate_batch
+
+ def __call__(self, data):
+ for f in self.transforms_cls:
+ try:
+ data = f(data)
+ except Exception as e:
+ stack_info = traceback.format_exc()
+                logger.warning("failed to map batch transform [{}] "
+ "with error: {} and stack:\n{}".format(
+ f, e, str(stack_info)))
+ raise e
+
+        # remove keys which are not needed by the model
+ extra_key = ['h', 'w', 'flipped']
+ for k in extra_key:
+ for sample in data:
+ if k in sample:
+ sample.pop(k)
+
+        # collate batch data; if a user-defined batch function is
+        # needed, use it here
+ if self.collate_batch:
+ batch_data = default_collate_fn(data)
+ else:
+ batch_data = {}
+ for k in data[0].keys():
+ tmp_data = []
+ for i in range(len(data)):
+ tmp_data.append(data[i][k])
+                if 'gt_' not in k and 'is_crowd' not in k and 'difficult' not in k:
+ tmp_data = np.stack(tmp_data, axis=0)
+ batch_data[k] = tmp_data
+ return batch_data
+
+
+class BaseDataLoader(object):
+ """
+ Base DataLoader implementation for detection models
+
+ Args:
+ sample_transforms (list): a list of transforms to perform
+ on each sample
+ batch_transforms (list): a list of transforms to perform
+ on batch
+ batch_size (int): batch size for batch collating, default 1.
+ shuffle (bool): whether to shuffle samples
+        drop_last (bool): whether to drop the last incomplete batch,
+            default False
+ num_classes (int): class number of dataset, default 80
+        collate_batch (bool): whether to collate batch in dataloader.
+            If set to True, the samples will collate into batch according
+            to the batch size. Otherwise, the ground-truth will not be
+            collated, which is used when the number of ground-truths
+            differs across samples.
+        use_shared_memory (bool): whether to use shared memory to
+            accelerate data loading, enable this only if you
+            are sure that the shared memory size of your OS
+            is larger than the memory cost of the input data of the
+            model. Note that shared memory will be automatically
+            disabled if the shared memory of the OS is less than
+            1G, which is not enough for detection models.
+            Default False.
+ """
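+
+    # Minimal usage sketch (illustrative; assumes `Decode` exists in
+    # ppdet.data.transform and `dataset` is a parsed detection dataset):
+    #   loader = TrainReader(sample_transforms=[{'Decode': {}}], batch_size=2)
+    #   loader(dataset, worker_num=0)
+    #   batch = next(iter(loader))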
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=False,
+ drop_last=False,
+ num_classes=80,
+ collate_batch=True,
+ use_shared_memory=False,
+ **kwargs):
+ # sample transform
+ self._sample_transforms = Compose(
+ sample_transforms, num_classes=num_classes)
+
+        # batch transform
+ self._batch_transforms = BatchCompose(batch_transforms, num_classes,
+ collate_batch)
+ self.batch_size = batch_size
+ self.shuffle = shuffle
+ self.drop_last = drop_last
+ self.use_shared_memory = use_shared_memory
+ self.kwargs = kwargs
+
+ def __call__(self,
+ dataset,
+ worker_num,
+ batch_sampler=None,
+ return_list=False):
+ self.dataset = dataset
+ self.dataset.check_or_download_dataset()
+ self.dataset.parse_dataset()
+ # get data
+ self.dataset.set_transform(self._sample_transforms)
+ # set kwargs
+ self.dataset.set_kwargs(**self.kwargs)
+ # batch sampler
+ if batch_sampler is None:
+ self._batch_sampler = DistributedBatchSampler(
+ self.dataset,
+ batch_size=self.batch_size,
+ shuffle=self.shuffle,
+ drop_last=self.drop_last)
+ else:
+ self._batch_sampler = batch_sampler
+
+        # DataLoader does not start sub-processes on Windows and macOS,
+        # so there is no need to use shared memory there
+ use_shared_memory = self.use_shared_memory and \
+ sys.platform not in ['win32', 'darwin']
+ # check whether shared memory size is bigger than 1G(1024M)
+ if use_shared_memory:
+ shm_size = _get_shared_memory_size_in_M()
+ if shm_size is not None and shm_size < 1024.:
+ logger.warning("Shared memory size is less than 1G, "
+ "disable shared_memory in DataLoader")
+ use_shared_memory = False
+
+ self.dataloader = DataLoader(
+ dataset=self.dataset,
+ batch_sampler=self._batch_sampler,
+ collate_fn=self._batch_transforms,
+ num_workers=worker_num,
+ return_list=return_list,
+ use_shared_memory=use_shared_memory)
+ self.loader = iter(self.dataloader)
+
+ return self
+
+ def __len__(self):
+ return len(self._batch_sampler)
+
+ def __iter__(self):
+ return self
+
+ def __next__(self):
+ try:
+ return next(self.loader)
+ except StopIteration:
+ self.loader = iter(self.dataloader)
+ six.reraise(*sys.exc_info())
+
+ def next(self):
+ # python2 compatibility
+ return self.__next__()
+
+
+@register
+class TrainReader(BaseDataLoader):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=True,
+ drop_last=True,
+ num_classes=80,
+ collate_batch=True,
+ **kwargs):
+ super(TrainReader, self).__init__(sample_transforms, batch_transforms,
+ batch_size, shuffle, drop_last,
+ num_classes, collate_batch, **kwargs)
+
+
+@register
+class EvalReader(BaseDataLoader):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=False,
+ drop_last=True,
+ num_classes=80,
+ **kwargs):
+ super(EvalReader, self).__init__(sample_transforms, batch_transforms,
+ batch_size, shuffle, drop_last,
+ num_classes, **kwargs)
+
+
+@register
+class TestReader(BaseDataLoader):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=False,
+ drop_last=False,
+ num_classes=80,
+ **kwargs):
+ super(TestReader, self).__init__(sample_transforms, batch_transforms,
+ batch_size, shuffle, drop_last,
+ num_classes, **kwargs)
+
+
+@register
+class EvalMOTReader(BaseDataLoader):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=False,
+ drop_last=False,
+ num_classes=1,
+ **kwargs):
+ super(EvalMOTReader, self).__init__(sample_transforms, batch_transforms,
+ batch_size, shuffle, drop_last,
+ num_classes, **kwargs)
+
+
+@register
+class TestMOTReader(BaseDataLoader):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ sample_transforms=[],
+ batch_transforms=[],
+ batch_size=1,
+ shuffle=False,
+ drop_last=False,
+ num_classes=1,
+ **kwargs):
+ super(TestMOTReader, self).__init__(sample_transforms, batch_transforms,
+ batch_size, shuffle, drop_last,
+ num_classes, **kwargs)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/shm_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/shm_utils.py
new file mode 100644
index 000000000..38d8ba66c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/shm_utils.py
@@ -0,0 +1,67 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+SIZE_UNIT = ['K', 'M', 'G', 'T']
+SHM_QUERY_CMD = 'df -h'
+SHM_KEY = 'shm'
+SHM_DEFAULT_MOUNT = '/dev/shm'
+
+# [ shared memory size check ]
+# In detection models, image/target data occupies a lot of memory and
+# will occupy lots of shared memory in a multi-process DataLoader, so we
+# use the following code to get the shared memory size and perform a
+# size check that disables shared memory use if the size is not enough.
+# The shared memory size is obtained as follows:
+# 1. use `df -h` to get all mount info
+# 2. pick up spaces whose mount info contains 'shm'
+# 3. if there is only one 'shm' space, return its size
+# 4. if there are multiple 'shm' spaces, try to find the default mount
+#    directory '/dev/shm' of Linux-like systems; otherwise return the
+#    biggest space size.
+
+
+def _parse_size_in_M(size_str):
+ num, unit = size_str[:-1], size_str[-1]
+ assert unit in SIZE_UNIT, \
+ "unknown shm size unit {}".format(unit)
+ return float(num) * \
+ (1024 ** (SIZE_UNIT.index(unit) - 1))
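+
+
+# Example values for _parse_size_in_M (illustrative):
+#   '512M' -> 512.0, '2.0G' -> 2048.0, '1024K' -> 1.0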
+
+
+def _get_shared_memory_size_in_M():
+ try:
+ df_infos = os.popen(SHM_QUERY_CMD).readlines()
+    except Exception:
+ return None
+ else:
+ shm_infos = []
+ for df_info in df_infos:
+ info = df_info.strip()
+ if info.find(SHM_KEY) >= 0:
+ shm_infos.append(info.split())
+
+ if len(shm_infos) == 0:
+ return None
+ elif len(shm_infos) == 1:
+ return _parse_size_in_M(shm_infos[0][3])
+ else:
+ default_mount_infos = [
+ si for si in shm_infos if si[-1] == SHM_DEFAULT_MOUNT
+ ]
+ if default_mount_infos:
+ return _parse_size_in_M(default_mount_infos[0][3])
+ else:
+ return max([_parse_size_in_M(si[3]) for si in shm_infos])
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__init__.py
new file mode 100644
index 000000000..3854d3d25
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__init__.py
@@ -0,0 +1,29 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import coco
+from . import voc
+from . import widerface
+from . import category
+from . import keypoint_coco
+from . import mot
+from . import sniper_coco
+
+from .coco import *
+from .voc import *
+from .widerface import *
+from .category import *
+from .keypoint_coco import *
+from .mot import *
+from .sniper_coco import SniperCOCODataSet
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..ba53aeca6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/category.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/category.cpython-37.pyc
new file mode 100644
index 000000000..b482ceca7
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/category.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/coco.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/coco.cpython-37.pyc
new file mode 100644
index 000000000..41c30d1af
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/coco.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/dataset.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/dataset.cpython-37.pyc
new file mode 100644
index 000000000..5111b072a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/dataset.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/keypoint_coco.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/keypoint_coco.cpython-37.pyc
new file mode 100644
index 000000000..86a391729
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/keypoint_coco.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/mot.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/mot.cpython-37.pyc
new file mode 100644
index 000000000..6a3f03ffc
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/mot.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/sniper_coco.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/sniper_coco.cpython-37.pyc
new file mode 100644
index 000000000..aecd87e2f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/sniper_coco.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/voc.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/voc.cpython-37.pyc
new file mode 100644
index 000000000..f028fb109
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/voc.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/widerface.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/widerface.cpython-37.pyc
new file mode 100644
index 000000000..b402cc0cf
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/__pycache__/widerface.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/category.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/category.py
new file mode 100644
index 000000000..9390e54c4
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/category.py
@@ -0,0 +1,904 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+from ppdet.data.source.voc import pascalvoc_label
+from ppdet.data.source.widerface import widerface_label
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['get_categories']
+
+
+def get_categories(metric_type, anno_file=None, arch=None):
+ """
+ Get class id to category id map and category id
+ to category name map from annotation file.
+
+ Args:
+        metric_type (str): metric type, currently supports 'coco', 'voc', 'oid'
+ and 'widerface'.
+ anno_file (str): annotation file path
+ """
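+    # e.g. (illustrative) get_categories('coco') with no anno_file returns the
+    # default COCO17 maps: clsid2catid = {0: 1, 1: 2, ...} and
+    # catid2name = {1: 'person', 2: 'bicycle', ...}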
+ if arch == 'keypoint_arch':
+ return (None, {'id': 'keypoint'})
+
+    if metric_type.lower() in ['coco', 'rbox', 'snipercoco']:
+ if anno_file and os.path.isfile(anno_file):
+ # lazy import pycocotools here
+ from pycocotools.coco import COCO
+
+ coco = COCO(anno_file)
+ cats = coco.loadCats(coco.getCatIds())
+
+ clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)}
+ catid2name = {cat['id']: cat['name'] for cat in cats}
+ return clsid2catid, catid2name
+
+ # anno file not exist, load default categories of COCO17
+ else:
+ if metric_type.lower() == 'rbox':
+ return _dota_category()
+
+ return _coco17_category()
+
+ elif metric_type.lower() == 'voc':
+ if anno_file and os.path.isfile(anno_file):
+ cats = []
+ with open(anno_file) as f:
+ for line in f.readlines():
+ cats.append(line.strip())
+
+ if cats[0] == 'background':
+ cats = cats[1:]
+
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+
+ return clsid2catid, catid2name
+
+ # anno file not exist, load default categories of
+ # VOC all 20 categories
+ else:
+ return _vocall_category()
+
+ elif metric_type.lower() == 'oid':
+ if anno_file and os.path.isfile(anno_file):
+            logger.warning("only default categories are supported for OID19")
+ return _oid19_category()
+
+ elif metric_type.lower() == 'widerface':
+ return _widerface_category()
+
+    elif metric_type.lower() in ['keypointtopdowncocoeval',
+                                 'keypointtopdownmpiieval']:
+ return (None, {'id': 'keypoint'})
+
+ elif metric_type.lower() in ['mot', 'motdet', 'reid']:
+ if anno_file and os.path.isfile(anno_file):
+ cats = []
+ with open(anno_file) as f:
+ for line in f.readlines():
+ cats.append(line.strip())
+ if cats[0] == 'background':
+ cats = cats[1:]
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+ return clsid2catid, catid2name
+ # anno file not exist, load default category 'pedestrian'.
+ else:
+ return _mot_category(category='pedestrian')
+
+ elif metric_type.lower() in ['kitti', 'bdd100kmot']:
+ return _mot_category(category='vehicle')
+
+ elif metric_type.lower() in ['mcmot']:
+ if anno_file and os.path.isfile(anno_file):
+ cats = []
+ with open(anno_file) as f:
+ for line in f.readlines():
+ cats.append(line.strip())
+ if cats[0] == 'background':
+ cats = cats[1:]
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+ return clsid2catid, catid2name
+ # anno file not exist, load default categories of visdrone all 10 categories
+ else:
+ return _visdrone_category()
+
+ else:
+ raise ValueError("unknown metric type {}".format(metric_type))
+
+
+def _mot_category(category='pedestrian'):
+ """
+ Get class id to category id map and category id
+ to category name map of mot dataset
+ """
+ label_map = {category: 0}
+ label_map = sorted(label_map.items(), key=lambda x: x[1])
+ cats = [l[0] for l in label_map]
+
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+
+ return clsid2catid, catid2name
+
+
+def _coco17_category():
+ """
+ Get class id to category id map and category id
+ to category name map of COCO2017 dataset
+
+ """
+ clsid2catid = {
+ 1: 1,
+ 2: 2,
+ 3: 3,
+ 4: 4,
+ 5: 5,
+ 6: 6,
+ 7: 7,
+ 8: 8,
+ 9: 9,
+ 10: 10,
+ 11: 11,
+ 12: 13,
+ 13: 14,
+ 14: 15,
+ 15: 16,
+ 16: 17,
+ 17: 18,
+ 18: 19,
+ 19: 20,
+ 20: 21,
+ 21: 22,
+ 22: 23,
+ 23: 24,
+ 24: 25,
+ 25: 27,
+ 26: 28,
+ 27: 31,
+ 28: 32,
+ 29: 33,
+ 30: 34,
+ 31: 35,
+ 32: 36,
+ 33: 37,
+ 34: 38,
+ 35: 39,
+ 36: 40,
+ 37: 41,
+ 38: 42,
+ 39: 43,
+ 40: 44,
+ 41: 46,
+ 42: 47,
+ 43: 48,
+ 44: 49,
+ 45: 50,
+ 46: 51,
+ 47: 52,
+ 48: 53,
+ 49: 54,
+ 50: 55,
+ 51: 56,
+ 52: 57,
+ 53: 58,
+ 54: 59,
+ 55: 60,
+ 56: 61,
+ 57: 62,
+ 58: 63,
+ 59: 64,
+ 60: 65,
+ 61: 67,
+ 62: 70,
+ 63: 72,
+ 64: 73,
+ 65: 74,
+ 66: 75,
+ 67: 76,
+ 68: 77,
+ 69: 78,
+ 70: 79,
+ 71: 80,
+ 72: 81,
+ 73: 82,
+ 74: 84,
+ 75: 85,
+ 76: 86,
+ 77: 87,
+ 78: 88,
+ 79: 89,
+ 80: 90
+ }
+
+ catid2name = {
+ 0: 'background',
+ 1: 'person',
+ 2: 'bicycle',
+ 3: 'car',
+ 4: 'motorcycle',
+ 5: 'airplane',
+ 6: 'bus',
+ 7: 'train',
+ 8: 'truck',
+ 9: 'boat',
+ 10: 'traffic light',
+ 11: 'fire hydrant',
+ 13: 'stop sign',
+ 14: 'parking meter',
+ 15: 'bench',
+ 16: 'bird',
+ 17: 'cat',
+ 18: 'dog',
+ 19: 'horse',
+ 20: 'sheep',
+ 21: 'cow',
+ 22: 'elephant',
+ 23: 'bear',
+ 24: 'zebra',
+ 25: 'giraffe',
+ 27: 'backpack',
+ 28: 'umbrella',
+ 31: 'handbag',
+ 32: 'tie',
+ 33: 'suitcase',
+ 34: 'frisbee',
+ 35: 'skis',
+ 36: 'snowboard',
+ 37: 'sports ball',
+ 38: 'kite',
+ 39: 'baseball bat',
+ 40: 'baseball glove',
+ 41: 'skateboard',
+ 42: 'surfboard',
+ 43: 'tennis racket',
+ 44: 'bottle',
+ 46: 'wine glass',
+ 47: 'cup',
+ 48: 'fork',
+ 49: 'knife',
+ 50: 'spoon',
+ 51: 'bowl',
+ 52: 'banana',
+ 53: 'apple',
+ 54: 'sandwich',
+ 55: 'orange',
+ 56: 'broccoli',
+ 57: 'carrot',
+ 58: 'hot dog',
+ 59: 'pizza',
+ 60: 'donut',
+ 61: 'cake',
+ 62: 'chair',
+ 63: 'couch',
+ 64: 'potted plant',
+ 65: 'bed',
+ 67: 'dining table',
+ 70: 'toilet',
+ 72: 'tv',
+ 73: 'laptop',
+ 74: 'mouse',
+ 75: 'remote',
+ 76: 'keyboard',
+ 77: 'cell phone',
+ 78: 'microwave',
+ 79: 'oven',
+ 80: 'toaster',
+ 81: 'sink',
+ 82: 'refrigerator',
+ 84: 'book',
+ 85: 'clock',
+ 86: 'vase',
+ 87: 'scissors',
+ 88: 'teddy bear',
+ 89: 'hair drier',
+ 90: 'toothbrush'
+ }
+
+ clsid2catid = {k - 1: v for k, v in clsid2catid.items()}
+ catid2name.pop(0)
+
+ return clsid2catid, catid2name
+
+
+def _dota_category():
+ """
+ Get class id to category id map and category id
+ to category name map of dota dataset
+ """
+ catid2name = {
+ 0: 'background',
+ 1: 'plane',
+ 2: 'baseball-diamond',
+ 3: 'bridge',
+ 4: 'ground-track-field',
+ 5: 'small-vehicle',
+ 6: 'large-vehicle',
+ 7: 'ship',
+ 8: 'tennis-court',
+ 9: 'basketball-court',
+ 10: 'storage-tank',
+ 11: 'soccer-ball-field',
+ 12: 'roundabout',
+ 13: 'harbor',
+ 14: 'swimming-pool',
+ 15: 'helicopter'
+ }
+ catid2name.pop(0)
+ clsid2catid = {i: i + 1 for i in range(len(catid2name))}
+ return clsid2catid, catid2name
+
+
+def _vocall_category():
+ """
+ Get class id to category id map and category id
+ to category name map of mixup voc dataset
+
+ """
+ label_map = pascalvoc_label()
+ label_map = sorted(label_map.items(), key=lambda x: x[1])
+ cats = [l[0] for l in label_map]
+
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+
+ return clsid2catid, catid2name
+
+
+def _widerface_category():
+ label_map = widerface_label()
+ label_map = sorted(label_map.items(), key=lambda x: x[1])
+ cats = [l[0] for l in label_map]
+ clsid2catid = {i: i for i in range(len(cats))}
+ catid2name = {i: name for i, name in enumerate(cats)}
+
+ return clsid2catid, catid2name
+
+
+def _oid19_category():
+ clsid2catid = {k: k + 1 for k in range(500)}
+
+ catid2name = {
+ 0: "background",
+ 1: "Infant bed",
+ 2: "Rose",
+ 3: "Flag",
+ 4: "Flashlight",
+ 5: "Sea turtle",
+ 6: "Camera",
+ 7: "Animal",
+ 8: "Glove",
+ 9: "Crocodile",
+ 10: "Cattle",
+ 11: "House",
+ 12: "Guacamole",
+ 13: "Penguin",
+ 14: "Vehicle registration plate",
+ 15: "Bench",
+ 16: "Ladybug",
+ 17: "Human nose",
+ 18: "Watermelon",
+ 19: "Flute",
+ 20: "Butterfly",
+ 21: "Washing machine",
+ 22: "Raccoon",
+ 23: "Segway",
+ 24: "Taco",
+ 25: "Jellyfish",
+ 26: "Cake",
+ 27: "Pen",
+ 28: "Cannon",
+ 29: "Bread",
+ 30: "Tree",
+ 31: "Shellfish",
+ 32: "Bed",
+ 33: "Hamster",
+ 34: "Hat",
+ 35: "Toaster",
+ 36: "Sombrero",
+ 37: "Tiara",
+ 38: "Bowl",
+ 39: "Dragonfly",
+ 40: "Moths and butterflies",
+ 41: "Antelope",
+ 42: "Vegetable",
+ 43: "Torch",
+ 44: "Building",
+ 45: "Power plugs and sockets",
+ 46: "Blender",
+ 47: "Billiard table",
+ 48: "Cutting board",
+ 49: "Bronze sculpture",
+ 50: "Turtle",
+ 51: "Broccoli",
+ 52: "Tiger",
+ 53: "Mirror",
+ 54: "Bear",
+ 55: "Zucchini",
+ 56: "Dress",
+ 57: "Volleyball",
+ 58: "Guitar",
+ 59: "Reptile",
+ 60: "Golf cart",
+ 61: "Tart",
+ 62: "Fedora",
+ 63: "Carnivore",
+ 64: "Car",
+ 65: "Lighthouse",
+ 66: "Coffeemaker",
+ 67: "Food processor",
+ 68: "Truck",
+ 69: "Bookcase",
+ 70: "Surfboard",
+ 71: "Footwear",
+ 72: "Bench",
+ 73: "Necklace",
+ 74: "Flower",
+ 75: "Radish",
+ 76: "Marine mammal",
+ 77: "Frying pan",
+ 78: "Tap",
+ 79: "Peach",
+ 80: "Knife",
+ 81: "Handbag",
+ 82: "Laptop",
+ 83: "Tent",
+ 84: "Ambulance",
+ 85: "Christmas tree",
+ 86: "Eagle",
+ 87: "Limousine",
+ 88: "Kitchen & dining room table",
+ 89: "Polar bear",
+ 90: "Tower",
+ 91: "Football",
+ 92: "Willow",
+ 93: "Human head",
+ 94: "Stop sign",
+ 95: "Banana",
+ 96: "Mixer",
+ 97: "Binoculars",
+ 98: "Dessert",
+ 99: "Bee",
+ 100: "Chair",
+ 101: "Wood-burning stove",
+ 102: "Flowerpot",
+ 103: "Beaker",
+ 104: "Oyster",
+ 105: "Woodpecker",
+ 106: "Harp",
+ 107: "Bathtub",
+ 108: "Wall clock",
+ 109: "Sports uniform",
+ 110: "Rhinoceros",
+ 111: "Beehive",
+ 112: "Cupboard",
+ 113: "Chicken",
+ 114: "Man",
+ 115: "Blue jay",
+ 116: "Cucumber",
+ 117: "Balloon",
+ 118: "Kite",
+ 119: "Fireplace",
+ 120: "Lantern",
+ 121: "Missile",
+ 122: "Book",
+ 123: "Spoon",
+ 124: "Grapefruit",
+ 125: "Squirrel",
+ 126: "Orange",
+ 127: "Coat",
+ 128: "Punching bag",
+ 129: "Zebra",
+ 130: "Billboard",
+ 131: "Bicycle",
+ 132: "Door handle",
+ 133: "Mechanical fan",
+ 134: "Ring binder",
+ 135: "Table",
+ 136: "Parrot",
+ 137: "Sock",
+ 138: "Vase",
+ 139: "Weapon",
+ 140: "Shotgun",
+ 141: "Glasses",
+ 142: "Seahorse",
+ 143: "Belt",
+ 144: "Watercraft",
+ 145: "Window",
+ 146: "Giraffe",
+ 147: "Lion",
+ 148: "Tire",
+ 149: "Vehicle",
+ 150: "Canoe",
+ 151: "Tie",
+ 152: "Shelf",
+ 153: "Picture frame",
+ 154: "Printer",
+ 155: "Human leg",
+ 156: "Boat",
+ 157: "Slow cooker",
+ 158: "Croissant",
+ 159: "Candle",
+ 160: "Pancake",
+ 161: "Pillow",
+ 162: "Coin",
+ 163: "Stretcher",
+ 164: "Sandal",
+ 165: "Woman",
+ 166: "Stairs",
+ 167: "Harpsichord",
+ 168: "Stool",
+ 169: "Bus",
+ 170: "Suitcase",
+ 171: "Human mouth",
+ 172: "Juice",
+ 173: "Skull",
+ 174: "Door",
+ 175: "Violin",
+ 176: "Chopsticks",
+ 177: "Digital clock",
+ 178: "Sunflower",
+ 179: "Leopard",
+ 180: "Bell pepper",
+ 181: "Harbor seal",
+ 182: "Snake",
+ 183: "Sewing machine",
+ 184: "Goose",
+ 185: "Helicopter",
+ 186: "Seat belt",
+ 187: "Coffee cup",
+ 188: "Microwave oven",
+ 189: "Hot dog",
+ 190: "Countertop",
+ 191: "Serving tray",
+ 192: "Dog bed",
+ 193: "Beer",
+ 194: "Sunglasses",
+ 195: "Golf ball",
+ 196: "Waffle",
+ 197: "Palm tree",
+ 198: "Trumpet",
+ 199: "Ruler",
+ 200: "Helmet",
+ 201: "Ladder",
+ 202: "Office building",
+ 203: "Tablet computer",
+ 204: "Toilet paper",
+ 205: "Pomegranate",
+ 206: "Skirt",
+ 207: "Gas stove",
+ 208: "Cookie",
+ 209: "Cart",
+ 210: "Raven",
+ 211: "Egg",
+ 212: "Burrito",
+ 213: "Goat",
+ 214: "Kitchen knife",
+ 215: "Skateboard",
+ 216: "Salt and pepper shakers",
+ 217: "Lynx",
+ 218: "Boot",
+ 219: "Platter",
+ 220: "Ski",
+ 221: "Swimwear",
+ 222: "Swimming pool",
+ 223: "Drinking straw",
+ 224: "Wrench",
+ 225: "Drum",
+ 226: "Ant",
+ 227: "Human ear",
+ 228: "Headphones",
+ 229: "Fountain",
+ 230: "Bird",
+ 231: "Jeans",
+ 232: "Television",
+ 233: "Crab",
+ 234: "Microphone",
+ 235: "Home appliance",
+ 236: "Snowplow",
+ 237: "Beetle",
+ 238: "Artichoke",
+ 239: "Jet ski",
+ 240: "Stationary bicycle",
+ 241: "Human hair",
+ 242: "Brown bear",
+ 243: "Starfish",
+ 244: "Fork",
+ 245: "Lobster",
+ 246: "Corded phone",
+ 247: "Drink",
+ 248: "Saucer",
+ 249: "Carrot",
+ 250: "Insect",
+ 251: "Clock",
+ 252: "Castle",
+ 253: "Tennis racket",
+ 254: "Ceiling fan",
+ 255: "Asparagus",
+ 256: "Jaguar",
+ 257: "Musical instrument",
+ 258: "Train",
+ 259: "Cat",
+ 260: "Rifle",
+ 261: "Dumbbell",
+ 262: "Mobile phone",
+ 263: "Taxi",
+ 264: "Shower",
+ 265: "Pitcher",
+ 266: "Lemon",
+ 267: "Invertebrate",
+ 268: "Turkey",
+ 269: "High heels",
+ 270: "Bust",
+ 271: "Elephant",
+ 272: "Scarf",
+ 273: "Barrel",
+ 274: "Trombone",
+ 275: "Pumpkin",
+ 276: "Box",
+ 277: "Tomato",
+ 278: "Frog",
+ 279: "Bidet",
+ 280: "Human face",
+ 281: "Houseplant",
+ 282: "Van",
+ 283: "Shark",
+ 284: "Ice cream",
+ 285: "Swim cap",
+ 286: "Falcon",
+ 287: "Ostrich",
+ 288: "Handgun",
+ 289: "Whiteboard",
+ 290: "Lizard",
+ 291: "Pasta",
+ 292: "Snowmobile",
+ 293: "Light bulb",
+ 294: "Window blind",
+ 295: "Muffin",
+ 296: "Pretzel",
+ 297: "Computer monitor",
+ 298: "Horn",
+ 299: "Furniture",
+ 300: "Sandwich",
+ 301: "Fox",
+ 302: "Convenience store",
+ 303: "Fish",
+ 304: "Fruit",
+ 305: "Earrings",
+ 306: "Curtain",
+ 307: "Grape",
+ 308: "Sofa bed",
+ 309: "Horse",
+ 310: "Luggage and bags",
+ 311: "Desk",
+ 312: "Crutch",
+ 313: "Bicycle helmet",
+ 314: "Tick",
+ 315: "Airplane",
+ 316: "Canary",
+ 317: "Spatula",
+ 318: "Watch",
+ 319: "Lily",
+ 320: "Kitchen appliance",
+ 321: "Filing cabinet",
+ 322: "Aircraft",
+ 323: "Cake stand",
+ 324: "Candy",
+ 325: "Sink",
+ 326: "Mouse",
+ 327: "Wine",
+ 328: "Wheelchair",
+ 329: "Goldfish",
+ 330: "Refrigerator",
+ 331: "French fries",
+ 332: "Drawer",
+ 333: "Treadmill",
+ 334: "Picnic basket",
+ 335: "Dice",
+ 336: "Cabbage",
+ 337: "Football helmet",
+ 338: "Pig",
+ 339: "Person",
+ 340: "Shorts",
+ 341: "Gondola",
+ 342: "Honeycomb",
+ 343: "Doughnut",
+ 344: "Chest of drawers",
+ 345: "Land vehicle",
+ 346: "Bat",
+ 347: "Monkey",
+ 348: "Dagger",
+ 349: "Tableware",
+ 350: "Human foot",
+ 351: "Mug",
+ 352: "Alarm clock",
+ 353: "Pressure cooker",
+ 354: "Human hand",
+ 355: "Tortoise",
+ 356: "Baseball glove",
+ 357: "Sword",
+ 358: "Pear",
+ 359: "Miniskirt",
+ 360: "Traffic sign",
+ 361: "Girl",
+ 362: "Roller skates",
+ 363: "Dinosaur",
+ 364: "Porch",
+ 365: "Human beard",
+ 366: "Submarine sandwich",
+ 367: "Screwdriver",
+ 368: "Strawberry",
+ 369: "Wine glass",
+ 370: "Seafood",
+ 371: "Racket",
+ 372: "Wheel",
+ 373: "Sea lion",
+ 374: "Toy",
+ 375: "Tea",
+ 376: "Tennis ball",
+ 377: "Waste container",
+ 378: "Mule",
+ 379: "Cricket ball",
+ 380: "Pineapple",
+ 381: "Coconut",
+ 382: "Doll",
+ 383: "Coffee table",
+ 384: "Snowman",
+ 385: "Lavender",
+ 386: "Shrimp",
+ 387: "Maple",
+ 388: "Cowboy hat",
+ 389: "Goggles",
+ 390: "Rugby ball",
+ 391: "Caterpillar",
+ 392: "Poster",
+ 393: "Rocket",
+ 394: "Organ",
+ 395: "Saxophone",
+ 396: "Traffic light",
+ 397: "Cocktail",
+ 398: "Plastic bag",
+ 399: "Squash",
+ 400: "Mushroom",
+ 401: "Hamburger",
+ 402: "Light switch",
+ 403: "Parachute",
+ 404: "Teddy bear",
+ 405: "Winter melon",
+ 406: "Deer",
+ 407: "Musical keyboard",
+ 408: "Plumbing fixture",
+ 409: "Scoreboard",
+ 410: "Baseball bat",
+ 411: "Envelope",
+ 412: "Adhesive tape",
+ 413: "Briefcase",
+ 414: "Paddle",
+ 415: "Bow and arrow",
+ 416: "Telephone",
+ 417: "Sheep",
+ 418: "Jacket",
+ 419: "Boy",
+ 420: "Pizza",
+ 421: "Otter",
+ 422: "Office supplies",
+ 423: "Couch",
+ 424: "Cello",
+ 425: "Bull",
+ 426: "Camel",
+ 427: "Ball",
+ 428: "Duck",
+ 429: "Whale",
+ 430: "Shirt",
+ 431: "Tank",
+ 432: "Motorcycle",
+ 433: "Accordion",
+ 434: "Owl",
+ 435: "Porcupine",
+ 436: "Sun hat",
+ 437: "Nail",
+ 438: "Scissors",
+ 439: "Swan",
+ 440: "Lamp",
+ 441: "Crown",
+ 442: "Piano",
+ 443: "Sculpture",
+ 444: "Cheetah",
+ 445: "Oboe",
+ 446: "Tin can",
+ 447: "Mango",
+ 448: "Tripod",
+ 449: "Oven",
+ 450: "Mouse",
+ 451: "Barge",
+ 452: "Coffee",
+ 453: "Snowboard",
+ 454: "Common fig",
+ 455: "Salad",
+ 456: "Marine invertebrates",
+ 457: "Umbrella",
+ 458: "Kangaroo",
+ 459: "Human arm",
+ 460: "Measuring cup",
+ 461: "Snail",
+ 462: "Loveseat",
+ 463: "Suit",
+ 464: "Teapot",
+ 465: "Bottle",
+ 466: "Alpaca",
+ 467: "Kettle",
+ 468: "Trousers",
+ 469: "Popcorn",
+ 470: "Centipede",
+ 471: "Spider",
+ 472: "Sparrow",
+ 473: "Plate",
+ 474: "Bagel",
+ 475: "Personal care",
+ 476: "Apple",
+ 477: "Brassiere",
+ 478: "Bathroom cabinet",
+ 479: "studio couch",
+ 480: "Computer keyboard",
+ 481: "Table tennis racket",
+ 482: "Sushi",
+ 483: "Cabinetry",
+ 484: "Street light",
+ 485: "Towel",
+ 486: "Nightstand",
+ 487: "Rabbit",
+ 488: "Dolphin",
+ 489: "Dog",
+ 490: "Jug",
+ 491: "Wok",
+ 492: "Fire hydrant",
+ 493: "Human eye",
+ 494: "Skyscraper",
+ 495: "Backpack",
+ 496: "Potato",
+ 497: "Paper towel",
+ 498: "Lifejacket",
+ 499: "Bicycle wheel",
+ 500: "Toilet",
+ }
+
+ return clsid2catid, catid2name
+
+
+def _visdrone_category():
+ clsid2catid = {i: i for i in range(10)}
+
+ catid2name = {
+ 0: 'pedestrian',
+ 1: 'people',
+ 2: 'bicycle',
+ 3: 'car',
+ 4: 'van',
+ 5: 'truck',
+ 6: 'tricycle',
+ 7: 'awning-tricycle',
+ 8: 'bus',
+ 9: 'motor'
+ }
+ return clsid2catid, catid2name
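+
+
+# Minimal usage sketch (illustrative only, not part of the public API): every
+# helper above returns the same two-dict contract, mapping a model class
+# index to a dataset category id and a category id to its readable name:
+#
+#   clsid2catid, catid2name = _visdrone_category()
+#   assert catid2name[clsid2catid[0]] == 'pedestrian'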
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/coco.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/coco.py
new file mode 100644
index 000000000..0efc9ae0e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/coco.py
@@ -0,0 +1,251 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import numpy as np
+from ppdet.core.workspace import register, serializable
+from .dataset import DetDataset
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class COCODataSet(DetDataset):
+ """
+ Load dataset with COCO format.
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+ image_dir (str): directory for images.
+ anno_path (str): coco annotation file path.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ sample_num (int): number of samples to load, -1 means all.
+ load_crowd (bool): whether to load crowded ground-truth.
+ False as default
+ allow_empty (bool): whether to load empty entry. False as default
+        empty_ratio (float): the ratio of empty records to total records;
+            if empty_ratio is outside [0., 1.), empty records are not
+            sampled and all of them are kept. 1. as default
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ data_fields=['image'],
+ sample_num=-1,
+ load_crowd=False,
+ allow_empty=False,
+ empty_ratio=1.):
+ super(COCODataSet, self).__init__(dataset_dir, image_dir, anno_path,
+ data_fields, sample_num)
+ self.load_image_only = False
+ self.load_semantic = False
+ self.load_crowd = load_crowd
+ self.allow_empty = allow_empty
+ self.empty_ratio = empty_ratio
+
+ def _sample_empty(self, records, num):
+        # if empty_ratio is outside [0., 1.), do not sample the records
+ if self.empty_ratio < 0. or self.empty_ratio >= 1.:
+ return records
+ import random
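+        # solving  empty / (empty + num) = empty_ratio  for the number of
+        # empty records gives  empty = num * empty_ratio / (1 - empty_ratio);
+        # e.g. with num=900 non-empty records and empty_ratio=0.1, at most
+        # 100 empty records are kept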
+ sample_num = min(
+ int(num * self.empty_ratio / (1 - self.empty_ratio)), len(records))
+ records = random.sample(records, sample_num)
+ return records
+
+ def parse_dataset(self):
+ anno_path = os.path.join(self.dataset_dir, self.anno_path)
+ image_dir = os.path.join(self.dataset_dir, self.image_dir)
+
+ assert anno_path.endswith('.json'), \
+ 'invalid coco annotation file: ' + anno_path
+ from pycocotools.coco import COCO
+ coco = COCO(anno_path)
+ img_ids = coco.getImgIds()
+ img_ids.sort()
+ cat_ids = coco.getCatIds()
+ records = []
+ empty_records = []
+ ct = 0
+
+ self.catid2clsid = dict({catid: i for i, catid in enumerate(cat_ids)})
+ self.cname2cid = dict({
+ coco.loadCats(catid)[0]['name']: clsid
+ for catid, clsid in self.catid2clsid.items()
+ })
+
+ if 'annotations' not in coco.dataset:
+ self.load_image_only = True
+            logger.warning('Annotation file: {} does not contain ground truth, '
+                           'loading image information only.'.format(anno_path))
+
+ for img_id in img_ids:
+ img_anno = coco.loadImgs([img_id])[0]
+ im_fname = img_anno['file_name']
+ im_w = float(img_anno['width'])
+ im_h = float(img_anno['height'])
+
+ im_path = os.path.join(image_dir,
+ im_fname) if image_dir else im_fname
+ im_path = im_path.rstrip()
+ is_empty = False
+
+ if not os.path.exists(im_path):
+ logger.warning('Illegal image file: {}, and it will be '
+ 'ignored'.format(im_path))
+ continue
+
+ if im_w < 0 or im_h < 0:
+ logger.warning('Illegal width: {} or height: {} in annotation, '
+ 'and im_id: {} will be ignored'.format(
+ im_w, im_h, img_id))
+ continue
+
+ coco_rec = {
+ 'im_file': im_path,
+ 'im_id': np.array([img_id]),
+ 'h': im_h,
+ 'w': im_w,
+ } if 'image' in self.data_fields else {}
+
+ if not self.load_image_only:
+ ins_anno_ids = coco.getAnnIds(
+ imgIds=[img_id], iscrowd=None if self.load_crowd else False)
+ instances = coco.loadAnns(ins_anno_ids)
+
+ bboxes = []
+ is_rbox_anno = False
+ for inst in instances:
+ # check gt bbox
+ if inst.get('ignore', False):
+ continue
+ if 'bbox' not in inst.keys():
+ continue
+ else:
+ if not any(np.array(inst['bbox'])):
+ continue
+
+ # read rbox anno or not
+                    is_rbox_anno = len(inst['bbox']) == 5
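+                    # a 5-element bbox is a rotated box [xc, yc, w, h, angle];
+                    # an axis-aligned box is derived below for gt_bbox, while
+                    # the rotated form is kept separately as 'clean_rbox'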
+ if is_rbox_anno:
+ xc, yc, box_w, box_h, angle = inst['bbox']
+ x1 = xc - box_w / 2.0
+ y1 = yc - box_h / 2.0
+ x2 = x1 + box_w
+ y2 = y1 + box_h
+ else:
+ x1, y1, box_w, box_h = inst['bbox']
+ x2 = x1 + box_w
+ y2 = y1 + box_h
+ eps = 1e-5
+ if inst['area'] > 0 and x2 - x1 > eps and y2 - y1 > eps:
+ inst['clean_bbox'] = [
+ round(float(x), 3) for x in [x1, y1, x2, y2]
+ ]
+ if is_rbox_anno:
+ inst['clean_rbox'] = [xc, yc, box_w, box_h, angle]
+ bboxes.append(inst)
+ else:
+ logger.warning(
+ 'Found an invalid bbox in annotations: im_id: {}, '
+ 'area: {} x1: {}, y1: {}, x2: {}, y2: {}.'.format(
+ img_id, float(inst['area']), x1, y1, x2, y2))
+
+ num_bbox = len(bboxes)
+ if num_bbox <= 0 and not self.allow_empty:
+ continue
+ elif num_bbox <= 0:
+ is_empty = True
+
+ gt_bbox = np.zeros((num_bbox, 4), dtype=np.float32)
+ if is_rbox_anno:
+ gt_rbox = np.zeros((num_bbox, 5), dtype=np.float32)
+ gt_theta = np.zeros((num_bbox, 1), dtype=np.int32)
+ gt_class = np.zeros((num_bbox, 1), dtype=np.int32)
+ is_crowd = np.zeros((num_bbox, 1), dtype=np.int32)
+ gt_poly = [None] * num_bbox
+
+ has_segmentation = False
+ for i, box in enumerate(bboxes):
+ catid = box['category_id']
+ gt_class[i][0] = self.catid2clsid[catid]
+ gt_bbox[i, :] = box['clean_bbox']
+ # xc, yc, w, h, theta
+ if is_rbox_anno:
+ gt_rbox[i, :] = box['clean_rbox']
+ is_crowd[i][0] = box['iscrowd']
+ # check RLE format
+ if 'segmentation' in box and box['iscrowd'] == 1:
+ gt_poly[i] = [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
+ elif 'segmentation' in box and box['segmentation']:
+ if not np.array(box['segmentation']
+ ).size > 0 and not self.allow_empty:
+ bboxes.pop(i)
+ gt_poly.pop(i)
+                            # np.delete returns a new array; assign it back,
+                            # otherwise these calls are silent no-ops
+                            is_crowd = np.delete(is_crowd, i, axis=0)
+                            gt_class = np.delete(gt_class, i, axis=0)
+                            gt_bbox = np.delete(gt_bbox, i, axis=0)
+ else:
+ gt_poly[i] = box['segmentation']
+ has_segmentation = True
+
+ if has_segmentation and not any(
+ gt_poly) and not self.allow_empty:
+ continue
+
+ if is_rbox_anno:
+ gt_rec = {
+ 'is_crowd': is_crowd,
+ 'gt_class': gt_class,
+ 'gt_bbox': gt_bbox,
+ 'gt_rbox': gt_rbox,
+ 'gt_poly': gt_poly,
+ }
+ else:
+ gt_rec = {
+ 'is_crowd': is_crowd,
+ 'gt_class': gt_class,
+ 'gt_bbox': gt_bbox,
+ 'gt_poly': gt_poly,
+ }
+
+ for k, v in gt_rec.items():
+ if k in self.data_fields:
+ coco_rec[k] = v
+
+ # TODO: remove load_semantic
+ if self.load_semantic and 'semantic' in self.data_fields:
+ seg_path = os.path.join(self.dataset_dir, 'stuffthingmaps',
+ 'train2017', im_fname[:-3] + 'png')
+ coco_rec.update({'semantic': seg_path})
+
+ logger.debug('Load file: {}, im_id: {}, h: {}, w: {}.'.format(
+ im_path, img_id, im_h, im_w))
+ if is_empty:
+ empty_records.append(coco_rec)
+ else:
+ records.append(coco_rec)
+ ct += 1
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+        assert ct > 0, 'no coco record found in %s' % (anno_path)
+ logger.debug('{} samples in file {}'.format(ct, anno_path))
+ if self.allow_empty and len(empty_records) > 0:
+ empty_records = self._sample_empty(empty_records, len(records))
+ records += empty_records
+ self.roidbs = records
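+
+
+# Minimal usage sketch (paths are hypothetical; in practice ppdet constructs
+# this class from a YAML config):
+#
+#   dataset = COCODataSet(
+#       dataset_dir='dataset/coco',
+#       image_dir='train2017',
+#       anno_path='annotations/instances_train2017.json',
+#       data_fields=['image', 'gt_bbox', 'gt_class', 'is_crowd'])
+#   dataset.parse_dataset()
+#   print(len(dataset.roidbs))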
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/dataset.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/dataset.py
new file mode 100644
index 000000000..1bef548e6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/dataset.py
@@ -0,0 +1,197 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import numpy as np
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+from paddle.io import Dataset
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.download import get_dataset_path
+import copy
+
+
+@serializable
+class DetDataset(Dataset):
+ """
+ Load detection dataset.
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+ image_dir (str): directory for images.
+ anno_path (str): annotation file path.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ sample_num (int): number of samples to load, -1 means all.
+ use_default_label (bool): whether to load default label list.
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ data_fields=['image'],
+ sample_num=-1,
+ use_default_label=None,
+ **kwargs):
+ super(DetDataset, self).__init__()
+ self.dataset_dir = dataset_dir if dataset_dir is not None else ''
+ self.anno_path = anno_path
+ self.image_dir = image_dir if image_dir is not None else ''
+ self.data_fields = data_fields
+ self.sample_num = sample_num
+ self.use_default_label = use_default_label
+ self._epoch = 0
+ self._curr_iter = 0
+
+ def __len__(self, ):
+ return len(self.roidbs)
+
+ def __getitem__(self, idx):
+ # data batch
+ roidb = copy.deepcopy(self.roidbs[idx])
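+        # while the current epoch is below the configured mixup/cutmix/mosaic
+        # epoch, extra random samples are bundled with this one so the
+        # corresponding batch transforms can blend them; a value of 0 keeps
+        # the augmentation on for the whole run, -1 (the default) disables it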
+ if self.mixup_epoch == 0 or self._epoch < self.mixup_epoch:
+ n = len(self.roidbs)
+ idx = np.random.randint(n)
+ roidb = [roidb, copy.deepcopy(self.roidbs[idx])]
+ elif self.cutmix_epoch == 0 or self._epoch < self.cutmix_epoch:
+ n = len(self.roidbs)
+ idx = np.random.randint(n)
+ roidb = [roidb, copy.deepcopy(self.roidbs[idx])]
+ elif self.mosaic_epoch == 0 or self._epoch < self.mosaic_epoch:
+ n = len(self.roidbs)
+ roidb = [roidb, ] + [
+ copy.deepcopy(self.roidbs[np.random.randint(n)])
+ for _ in range(3)
+ ]
+ if isinstance(roidb, Sequence):
+ for r in roidb:
+ r['curr_iter'] = self._curr_iter
+ else:
+ roidb['curr_iter'] = self._curr_iter
+ self._curr_iter += 1
+
+ return self.transform(roidb)
+
+ def check_or_download_dataset(self):
+ self.dataset_dir = get_dataset_path(self.dataset_dir, self.anno_path,
+ self.image_dir)
+
+ def set_kwargs(self, **kwargs):
+ self.mixup_epoch = kwargs.get('mixup_epoch', -1)
+ self.cutmix_epoch = kwargs.get('cutmix_epoch', -1)
+ self.mosaic_epoch = kwargs.get('mosaic_epoch', -1)
+
+ def set_transform(self, transform):
+ self.transform = transform
+
+ def set_epoch(self, epoch_id):
+ self._epoch = epoch_id
+
+ def parse_dataset(self, ):
+ raise NotImplementedError(
+ "Need to implement parse_dataset method of Dataset")
+
+ def get_anno(self):
+ if self.anno_path is None:
+ return
+ return os.path.join(self.dataset_dir, self.anno_path)
+
+
+def _is_valid_file(f, extensions=('.jpg', '.jpeg', '.png', '.bmp')):
+ return f.lower().endswith(extensions)
+
+
+def _make_dataset(dir):
+ dir = os.path.expanduser(dir)
+ if not os.path.isdir(dir):
+        raise ValueError('{} should be a dir'.format(dir))
+ images = []
+ for root, _, fnames in sorted(os.walk(dir, followlinks=True)):
+ for fname in sorted(fnames):
+ path = os.path.join(root, fname)
+ if _is_valid_file(path):
+ images.append(path)
+ return images
+
+
+@register
+@serializable
+class ImageFolder(DetDataset):
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ sample_num=-1,
+ use_default_label=None,
+ **kwargs):
+ super(ImageFolder, self).__init__(
+ dataset_dir,
+ image_dir,
+ anno_path,
+ sample_num=sample_num,
+ use_default_label=use_default_label)
+ self._imid2path = {}
+ self.roidbs = None
+ self.sample_num = sample_num
+
+ def check_or_download_dataset(self):
+ if self.dataset_dir:
+ # NOTE: ImageFolder is only used for prediction, in
+ # infer mode, image_dir is set by set_images
+ # so we only check anno_path here
+ self.dataset_dir = get_dataset_path(self.dataset_dir,
+ self.anno_path, None)
+
+ def parse_dataset(self, ):
+ if not self.roidbs:
+ self.roidbs = self._load_images()
+
+ def _parse(self):
+ image_dir = self.image_dir
+ if not isinstance(image_dir, Sequence):
+ image_dir = [image_dir]
+ images = []
+ for im_dir in image_dir:
+ if os.path.isdir(im_dir):
+ im_dir = os.path.join(self.dataset_dir, im_dir)
+ images.extend(_make_dataset(im_dir))
+ elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
+ images.append(im_dir)
+ return images
+
+ def _load_images(self):
+ images = self._parse()
+ ct = 0
+ records = []
+ for image in images:
+ assert image != '' and os.path.isfile(image), \
+ "Image {} not found".format(image)
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+ rec = {'im_id': np.array([ct]), 'im_file': image}
+ self._imid2path[ct] = image
+ ct += 1
+ records.append(rec)
+ assert len(records) > 0, "No image file found"
+ return records
+
+ def get_imid2path(self):
+ return self._imid2path
+
+ def set_images(self, images):
+ self.image_dir = images
+ self.roidbs = self._load_images()
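+
+
+# Minimal inference-time sketch (paths are hypothetical): ImageFolder is only
+# used for prediction, so images are injected after construction:
+#
+#   dataset = ImageFolder()
+#   dataset.set_images(['demo/000001.jpg'])
+#   print(dataset.get_imid2path())  # -> {0: 'demo/000001.jpg'}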
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/keypoint_coco.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/keypoint_coco.py
new file mode 100644
index 000000000..fdea57ada
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/keypoint_coco.py
@@ -0,0 +1,674 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import cv2
+import numpy as np
+import json
+import copy
+import pycocotools
+from pycocotools.coco import COCO
+from .dataset import DetDataset
+from ppdet.core.workspace import register, serializable
+
+
+@serializable
+class KeypointBottomUpBaseDataset(DetDataset):
+ """Base class for bottom-up datasets. Adapted from
+ https://github.com/open-mmlab/mmpose
+
+ All datasets should subclass it.
+ All subclasses should overwrite:
+ Methods:`_get_imganno`
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ anno_path (str): Relative path to the annotation file.
+ image_dir (str): Path to a directory where images are held.
+ Default: None.
+        num_joints (int): number of keypoints
+ transform (composed(operators)): A sequence of data transforms.
+ shard (list): [rank, worldsize], the distributed env params
+ test_mode (bool): Store True when building test or
+ validation dataset. Default: False.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ transform=[],
+ shard=[0, 1],
+ test_mode=False):
+ super().__init__(dataset_dir, image_dir, anno_path)
+ self.image_info = {}
+ self.ann_info = {}
+
+ self.img_prefix = os.path.join(dataset_dir, image_dir)
+ self.transform = transform
+ self.test_mode = test_mode
+
+ self.ann_info['num_joints'] = num_joints
+ self.img_ids = []
+
+ def parse_dataset(self):
+ pass
+
+ def __len__(self):
+ """Get dataset length."""
+ return len(self.img_ids)
+
+ def _get_imganno(self, idx):
+ """Get anno for a single image."""
+ raise NotImplementedError
+
+ def __getitem__(self, idx):
+ """Prepare image for training given the index."""
+ records = copy.deepcopy(self._get_imganno(idx))
+ records['image'] = cv2.imread(records['image_file'])
+ records['image'] = cv2.cvtColor(records['image'], cv2.COLOR_BGR2RGB)
+ records['mask'] = (records['mask'] + 0).astype('uint8')
+ records = self.transform(records)
+ return records
+
+
+@register
+@serializable
+class KeypointBottomUpCocoDataset(KeypointBottomUpBaseDataset):
+ """COCO dataset for bottom-up pose estimation. Adapted from
+ https://github.com/open-mmlab/mmpose
+
+    The dataset loads raw features and applies specified transforms
+ to return a dict containing the image tensors and other information.
+
+ COCO keypoint indexes::
+
+ 0: 'nose',
+ 1: 'left_eye',
+ 2: 'right_eye',
+ 3: 'left_ear',
+ 4: 'right_ear',
+ 5: 'left_shoulder',
+ 6: 'right_shoulder',
+ 7: 'left_elbow',
+ 8: 'right_elbow',
+ 9: 'left_wrist',
+ 10: 'right_wrist',
+ 11: 'left_hip',
+ 12: 'right_hip',
+ 13: 'left_knee',
+ 14: 'right_knee',
+ 15: 'left_ankle',
+ 16: 'right_ankle'
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ anno_path (str): Relative path to the annotation file.
+ image_dir (str): Path to a directory where images are held.
+ Default: None.
+        num_joints (int): number of keypoints
+ transform (composed(operators)): A sequence of data transforms.
+ shard (list): [rank, worldsize], the distributed env params
+ test_mode (bool): Store True when building test or
+ validation dataset. Default: False.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ transform=[],
+ shard=[0, 1],
+ test_mode=False):
+ super().__init__(dataset_dir, image_dir, anno_path, num_joints,
+ transform, shard, test_mode)
+
+ self.ann_file = os.path.join(dataset_dir, anno_path)
+ self.shard = shard
+ self.test_mode = test_mode
+
+ def parse_dataset(self):
+ self.coco = COCO(self.ann_file)
+
+ self.img_ids = self.coco.getImgIds()
+ if not self.test_mode:
+ self.img_ids = [
+ img_id for img_id in self.img_ids
+ if len(self.coco.getAnnIds(
+ imgIds=img_id, iscrowd=None)) > 0
+ ]
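+        # shard = [rank, world_size]: split image ids contiguously across
+        # distributed workers, each rank taking len(img_ids) // world_size ids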
+ blocknum = int(len(self.img_ids) / self.shard[1])
+ self.img_ids = self.img_ids[(blocknum * self.shard[0]):(blocknum * (
+ self.shard[0] + 1))]
+ self.num_images = len(self.img_ids)
+ self.id2name, self.name2id = self._get_mapping_id_name(self.coco.imgs)
+ self.dataset_name = 'coco'
+
+ cat_ids = self.coco.getCatIds()
+ self.catid2clsid = dict({catid: i for i, catid in enumerate(cat_ids)})
+ print('=> num_images: {}'.format(self.num_images))
+
+ @staticmethod
+ def _get_mapping_id_name(imgs):
+ """
+ Args:
+ imgs (dict): dict of image info.
+
+ Returns:
+ tuple: Image name & id mapping dicts.
+
+ - id2name (dict): Mapping image id to name.
+ - name2id (dict): Mapping image name to id.
+ """
+ id2name = {}
+ name2id = {}
+ for image_id, image in imgs.items():
+ file_name = image['file_name']
+ id2name[image_id] = file_name
+ name2id[file_name] = image_id
+
+ return id2name, name2id
+
+ def _get_imganno(self, idx):
+ """Get anno for a single image.
+
+ Args:
+ idx (int): image idx
+
+ Returns:
+ dict: info for model training
+ """
+ coco = self.coco
+ img_id = self.img_ids[idx]
+ ann_ids = coco.getAnnIds(imgIds=img_id)
+ anno = coco.loadAnns(ann_ids)
+
+ mask = self._get_mask(anno, idx)
+ anno = [
+ obj for obj in anno
+ if obj['iscrowd'] == 0 or obj['num_keypoints'] > 0
+ ]
+
+ joints, orgsize = self._get_joints(anno, idx)
+
+ db_rec = {}
+ db_rec['im_id'] = img_id
+ db_rec['image_file'] = os.path.join(self.img_prefix,
+ self.id2name[img_id])
+ db_rec['mask'] = mask
+ db_rec['joints'] = joints
+ db_rec['im_shape'] = orgsize
+
+ return db_rec
+
+ def _get_joints(self, anno, idx):
+ """Get joints for all people in an image."""
+ num_people = len(anno)
+
+ joints = np.zeros(
+ (num_people, self.ann_info['num_joints'], 3), dtype=np.float32)
+
+ for i, obj in enumerate(anno):
+ joints[i, :self.ann_info['num_joints'], :3] = \
+ np.array(obj['keypoints']).reshape([-1, 3])
+
+ img_info = self.coco.loadImgs(self.img_ids[idx])[0]
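+        # normalize keypoint x/y to [0, 1] by image width/height; the
+        # original (height, width) is returned separately as orgsize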
+ joints[..., 0] /= img_info['width']
+ joints[..., 1] /= img_info['height']
+ orgsize = np.array([img_info['height'], img_info['width']])
+
+ return joints, orgsize
+
+ def _get_mask(self, anno, idx):
+ """Get ignore masks to mask out losses."""
+ coco = self.coco
+ img_info = coco.loadImgs(self.img_ids[idx])[0]
+
+ m = np.zeros((img_info['height'], img_info['width']), dtype=np.float32)
+
+ for obj in anno:
+ if 'segmentation' in obj:
+ if obj['iscrowd']:
+ rle = pycocotools.mask.frPyObjects(obj['segmentation'],
+ img_info['height'],
+ img_info['width'])
+ m += pycocotools.mask.decode(rle)
+ elif obj['num_keypoints'] == 0:
+ rles = pycocotools.mask.frPyObjects(obj['segmentation'],
+ img_info['height'],
+ img_info['width'])
+ for rle in rles:
+ m += pycocotools.mask.decode(rle)
+
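+        # pixels covered by crowd regions or keypoint-less instances end up
+        # with m >= 1, so the returned boolean mask is True exactly where
+        # the loss should be kept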
+ return m < 0.5
+
+
+@register
+@serializable
+class KeypointBottomUpCrowdPoseDataset(KeypointBottomUpCocoDataset):
+ """CrowdPose dataset for bottom-up pose estimation. Adapted from
+ https://github.com/open-mmlab/mmpose
+
+    The dataset loads raw features and applies specified transforms
+ to return a dict containing the image tensors and other information.
+
+ CrowdPose keypoint indexes::
+
+ 0: 'left_shoulder',
+ 1: 'right_shoulder',
+ 2: 'left_elbow',
+ 3: 'right_elbow',
+ 4: 'left_wrist',
+ 5: 'right_wrist',
+ 6: 'left_hip',
+ 7: 'right_hip',
+ 8: 'left_knee',
+ 9: 'right_knee',
+ 10: 'left_ankle',
+ 11: 'right_ankle',
+ 12: 'top_head',
+ 13: 'neck'
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ anno_path (str): Relative path to the annotation file.
+ image_dir (str): Path to a directory where images are held.
+ Default: None.
+        num_joints (int): number of keypoints
+ transform (composed(operators)): A sequence of data transforms.
+ shard (list): [rank, worldsize], the distributed env params
+ test_mode (bool): Store True when building test or
+ validation dataset. Default: False.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ transform=[],
+ shard=[0, 1],
+ test_mode=False):
+ super().__init__(dataset_dir, image_dir, anno_path, num_joints,
+ transform, shard, test_mode)
+
+ self.ann_file = os.path.join(dataset_dir, anno_path)
+ self.shard = shard
+ self.test_mode = test_mode
+
+ def parse_dataset(self):
+ self.coco = COCO(self.ann_file)
+
+ self.img_ids = self.coco.getImgIds()
+ if not self.test_mode:
+ self.img_ids = [
+ img_id for img_id in self.img_ids
+ if len(self.coco.getAnnIds(
+ imgIds=img_id, iscrowd=None)) > 0
+ ]
+ blocknum = int(len(self.img_ids) / self.shard[1])
+ self.img_ids = self.img_ids[(blocknum * self.shard[0]):(blocknum * (
+ self.shard[0] + 1))]
+ self.num_images = len(self.img_ids)
+ self.id2name, self.name2id = self._get_mapping_id_name(self.coco.imgs)
+
+ self.dataset_name = 'crowdpose'
+ print('=> num_images: {}'.format(self.num_images))
+
+
+@serializable
+class KeypointTopDownBaseDataset(DetDataset):
+ """Base class for top_down datasets.
+
+ All datasets should subclass it.
+ All subclasses should overwrite:
+ Methods:`_get_db`
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ image_dir (str): Path to a directory where images are held.
+ anno_path (str): Relative path to the annotation file.
+        num_joints (int): number of keypoints
+ transform (composed(operators)): A sequence of data transforms.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ transform=[]):
+ super().__init__(dataset_dir, image_dir, anno_path)
+ self.image_info = {}
+ self.ann_info = {}
+
+ self.img_prefix = os.path.join(dataset_dir, image_dir)
+ self.transform = transform
+
+ self.ann_info['num_joints'] = num_joints
+ self.db = []
+
+ def __len__(self):
+ """Get dataset length."""
+ return len(self.db)
+
+ def _get_db(self):
+ """Get a sample"""
+ raise NotImplementedError
+
+ def __getitem__(self, idx):
+ """Prepare sample for training given the index."""
+ records = copy.deepcopy(self.db[idx])
+ records['image'] = cv2.imread(records['image_file'], cv2.IMREAD_COLOR |
+ cv2.IMREAD_IGNORE_ORIENTATION)
+ records['image'] = cv2.cvtColor(records['image'], cv2.COLOR_BGR2RGB)
+ records['score'] = records['score'] if 'score' in records else 1
+ records = self.transform(records)
+ return records
+
+
+@register
+@serializable
+class KeypointTopDownCocoDataset(KeypointTopDownBaseDataset):
+ """COCO dataset for top-down pose estimation. Adapted from
+ https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
+ Copyright (c) Microsoft, under the MIT License.
+
+    The dataset loads raw features and applies specified transforms
+ to return a dict containing the image tensors and other information.
+
+ COCO keypoint indexes:
+
+ 0: 'nose',
+ 1: 'left_eye',
+ 2: 'right_eye',
+ 3: 'left_ear',
+ 4: 'right_ear',
+ 5: 'left_shoulder',
+ 6: 'right_shoulder',
+ 7: 'left_elbow',
+ 8: 'right_elbow',
+ 9: 'left_wrist',
+ 10: 'right_wrist',
+ 11: 'left_hip',
+ 12: 'right_hip',
+ 13: 'left_knee',
+ 14: 'right_knee',
+ 15: 'left_ankle',
+ 16: 'right_ankle'
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ image_dir (str): Path to a directory where images are held.
+ anno_path (str): Relative path to the annotation file.
+        num_joints (int): Number of keypoints
+        trainsize (list): [w, h] target image size
+ transform (composed(operators)): A sequence of data transforms.
+ bbox_file (str): Path to a detection bbox file
+ Default: None.
+ use_gt_bbox (bool): Whether to use ground truth bbox
+ Default: True.
+ pixel_std (int): The pixel std of the scale
+ Default: 200.
+ image_thre (float): The threshold to filter the detection box
+ Default: 0.0.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ trainsize,
+ transform=[],
+ bbox_file=None,
+ use_gt_bbox=True,
+ pixel_std=200,
+ image_thre=0.0):
+ super().__init__(dataset_dir, image_dir, anno_path, num_joints,
+ transform)
+
+ self.bbox_file = bbox_file
+ self.use_gt_bbox = use_gt_bbox
+ self.trainsize = trainsize
+ self.pixel_std = pixel_std
+ self.image_thre = image_thre
+ self.dataset_name = 'coco'
+
+ def parse_dataset(self):
+ if self.use_gt_bbox:
+ self.db = self._load_coco_keypoint_annotations()
+ else:
+ self.db = self._load_coco_person_detection_results()
+
+ def _load_coco_keypoint_annotations(self):
+ coco = COCO(self.get_anno())
+ img_ids = coco.getImgIds()
+ gt_db = []
+ for index in img_ids:
+ im_ann = coco.loadImgs(index)[0]
+ width = im_ann['width']
+ height = im_ann['height']
+ file_name = im_ann['file_name']
+ im_id = int(im_ann["id"])
+
+ annIds = coco.getAnnIds(imgIds=index, iscrowd=False)
+ objs = coco.loadAnns(annIds)
+
+ valid_objs = []
+ for obj in objs:
+ x, y, w, h = obj['bbox']
+ x1 = np.max((0, x))
+ y1 = np.max((0, y))
+ x2 = np.min((width - 1, x1 + np.max((0, w - 1))))
+ y2 = np.min((height - 1, y1 + np.max((0, h - 1))))
+ if obj['area'] > 0 and x2 >= x1 and y2 >= y1:
+ obj['clean_bbox'] = [x1, y1, x2 - x1, y2 - y1]
+ valid_objs.append(obj)
+ objs = valid_objs
+
+ rec = []
+ for obj in objs:
+ if max(obj['keypoints']) == 0:
+ continue
+
+                # np.float was removed in NumPy 1.24; use the builtin float
+                joints = np.zeros(
+                    (self.ann_info['num_joints'], 3), dtype=float)
+                joints_vis = np.zeros(
+                    (self.ann_info['num_joints'], 3), dtype=float)
+ for ipt in range(self.ann_info['num_joints']):
+ joints[ipt, 0] = obj['keypoints'][ipt * 3 + 0]
+ joints[ipt, 1] = obj['keypoints'][ipt * 3 + 1]
+ joints[ipt, 2] = 0
+ t_vis = obj['keypoints'][ipt * 3 + 2]
+ if t_vis > 1:
+ t_vis = 1
+ joints_vis[ipt, 0] = t_vis
+ joints_vis[ipt, 1] = t_vis
+ joints_vis[ipt, 2] = 0
+
+ center, scale = self._box2cs(obj['clean_bbox'][:4])
+ rec.append({
+ 'image_file': os.path.join(self.img_prefix, file_name),
+ 'center': center,
+ 'scale': scale,
+ 'joints': joints,
+ 'joints_vis': joints_vis,
+ 'im_id': im_id,
+ })
+ gt_db.extend(rec)
+
+ return gt_db
+
+ def _box2cs(self, box):
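+        # convert a COCO [x, y, w, h] box to the (center, scale) pair used by
+        # top-down pipelines: pad the box to the training aspect ratio,
+        # divide by pixel_std, then enlarge by 1.25 for context (unless the
+        # center is the -1 sentinel); e.g. with a square trainsize, a 100x150
+        # box is padded to 150x150 and yields scale (0.9375, 0.9375)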
+ x, y, w, h = box[:4]
+ center = np.zeros((2), dtype=np.float32)
+ center[0] = x + w * 0.5
+ center[1] = y + h * 0.5
+ aspect_ratio = self.trainsize[0] * 1.0 / self.trainsize[1]
+
+ if w > aspect_ratio * h:
+ h = w * 1.0 / aspect_ratio
+ elif w < aspect_ratio * h:
+ w = h * aspect_ratio
+ scale = np.array(
+ [w * 1.0 / self.pixel_std, h * 1.0 / self.pixel_std],
+ dtype=np.float32)
+ if center[0] != -1:
+ scale = scale * 1.25
+
+ return center, scale
+
+ def _load_coco_person_detection_results(self):
+ all_boxes = None
+ bbox_file_path = os.path.join(self.dataset_dir, self.bbox_file)
+ with open(bbox_file_path, 'r') as f:
+ all_boxes = json.load(f)
+
+ if not all_boxes:
+            print('=> Failed to load %s!' % bbox_file_path)
+ return None
+
+ kpt_db = []
+ for n_img in range(0, len(all_boxes)):
+ det_res = all_boxes[n_img]
+ if det_res['category_id'] != 1:
+ continue
+            file_name = det_res.get(
+                'filename', '%012d.jpg' % det_res['image_id'])
+ img_name = os.path.join(self.img_prefix, file_name)
+ box = det_res['bbox']
+ score = det_res['score']
+ im_id = int(det_res['image_id'])
+
+ if score < self.image_thre:
+ continue
+
+ center, scale = self._box2cs(box)
+            joints = np.zeros((self.ann_info['num_joints'], 3), dtype=float)
+            joints_vis = np.ones(
+                (self.ann_info['num_joints'], 3), dtype=float)
+ kpt_db.append({
+ 'image_file': img_name,
+ 'im_id': im_id,
+ 'center': center,
+ 'scale': scale,
+ 'score': score,
+ 'joints': joints,
+ 'joints_vis': joints_vis,
+ })
+
+ return kpt_db
+
+
+@register
+@serializable
+class KeypointTopDownMPIIDataset(KeypointTopDownBaseDataset):
+ """MPII dataset for topdown pose estimation. Adapted from
+ https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
+ Copyright (c) Microsoft, under the MIT License.
+
+    The dataset loads raw features and applies specified transforms
+ to return a dict containing the image tensors and other information.
+
+ MPII keypoint indexes::
+
+ 0: 'right_ankle',
+ 1: 'right_knee',
+ 2: 'right_hip',
+ 3: 'left_hip',
+ 4: 'left_knee',
+ 5: 'left_ankle',
+ 6: 'pelvis',
+ 7: 'thorax',
+ 8: 'upper_neck',
+ 9: 'head_top',
+ 10: 'right_wrist',
+ 11: 'right_elbow',
+ 12: 'right_shoulder',
+ 13: 'left_shoulder',
+ 14: 'left_elbow',
+ 15: 'left_wrist',
+
+ Args:
+ dataset_dir (str): Root path to the dataset.
+ image_dir (str): Path to a directory where images are held.
+ anno_path (str): Relative path to the annotation file.
+        num_joints (int): Number of keypoints
+        trainsize (list): [w, h] target image size
+ transform (composed(operators)): A sequence of data transforms.
+ """
+
+ def __init__(self,
+ dataset_dir,
+ image_dir,
+ anno_path,
+ num_joints,
+ transform=[]):
+ super().__init__(dataset_dir, image_dir, anno_path, num_joints,
+ transform)
+
+ self.dataset_name = 'mpii'
+
+ def parse_dataset(self):
+ with open(self.get_anno()) as anno_file:
+ anno = json.load(anno_file)
+
+ gt_db = []
+ for a in anno:
+ image_name = a['image']
+ im_id = a['image_id'] if 'image_id' in a else int(
+ os.path.splitext(image_name)[0])
+
+            c = np.array(a['center'], dtype=float)
+            s = np.array([a['scale'], a['scale']], dtype=float)
+
+ # Adjust center/scale slightly to avoid cropping limbs
+ if c[0] != -1:
+ c[1] = c[1] + 15 * s[1]
+ s = s * 1.25
+ c = c - 1
+
+            joints = np.zeros((self.ann_info['num_joints'], 3), dtype=float)
+            joints_vis = np.zeros(
+                (self.ann_info['num_joints'], 3), dtype=float)
+ if 'joints' in a:
+ joints_ = np.array(a['joints'])
+ joints_[:, 0:2] = joints_[:, 0:2] - 1
+ joints_vis_ = np.array(a['joints_vis'])
+ assert len(joints_) == self.ann_info[
+ 'num_joints'], 'joint num diff: {} vs {}'.format(
+ len(joints_), self.ann_info['num_joints'])
+
+ joints[:, 0:2] = joints_[:, 0:2]
+ joints_vis[:, 0] = joints_vis_[:]
+ joints_vis[:, 1] = joints_vis_[:]
+
+ gt_db.append({
+ 'image_file': os.path.join(self.img_prefix, image_name),
+ 'im_id': im_id,
+ 'center': c,
+ 'scale': s,
+ 'joints': joints,
+ 'joints_vis': joints_vis
+ })
+ print("number length: {}".format(len(gt_db)))
+ self.db = gt_db
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/mot.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/mot.py
new file mode 100644
index 000000000..d46c02f52
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/mot.py
@@ -0,0 +1,628 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sys
+import cv2
+import glob
+import numpy as np
+from collections import OrderedDict, defaultdict
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+from .dataset import DetDataset, _make_dataset, _is_valid_file
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class MOTDataSet(DetDataset):
+ """
+    Load dataset with MOT format; only supports single-class MOT.
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+        image_lists (str|list): mot data image lists, multi-source mot dataset.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ sample_num (int): number of samples to load, -1 means all.
+
+ Notes:
+        MOT dataset root directories follow this structure:
+            dataset/mot
+            ├── image_lists
+            │   ├── caltech.train
+            │   ├── caltech.val
+            │   ├── mot16.train
+            │   ├── mot17.train
+            │   └── ...
+            ├── Caltech
+            ├── MOT17
+            └── ...
+
+        All the MOT datasets have the following structure:
+            Caltech
+            ├── images
+            │   ├── 00001.jpg
+            │   ├── ...
+            │   └── 0000N.jpg
+            └── labels_with_ids
+                ├── 00001.txt
+                ├── ...
+                └── 0000N.txt
+            or
+
+            MOT17
+            ├── images
+            │   ├── train
+            │   └── test
+            └── labels_with_ids
+                └── train
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_lists=[],
+ data_fields=['image'],
+ sample_num=-1):
+ super(MOTDataSet, self).__init__(
+ dataset_dir=dataset_dir,
+ data_fields=data_fields,
+ sample_num=sample_num)
+ self.dataset_dir = dataset_dir
+ self.image_lists = image_lists
+ if isinstance(self.image_lists, str):
+ self.image_lists = [self.image_lists]
+ self.roidbs = None
+ self.cname2cid = None
+
+ def get_anno(self):
+ if self.image_lists == []:
+ return
+ # only used to get categories and metric
+        # only check the first dataset; the label_list of all datasets should be the same.
+ first_mot_data = self.image_lists[0].split('.')[0]
+ anno_file = os.path.join(self.dataset_dir, first_mot_data, 'label_list.txt')
+ return anno_file
+
+ def parse_dataset(self):
+ self.img_files = OrderedDict()
+ self.img_start_index = OrderedDict()
+ self.label_files = OrderedDict()
+ self.tid_num = OrderedDict()
+ self.tid_start_index = OrderedDict()
+
+ img_index = 0
+ for data_name in self.image_lists:
+ # check every data image list
+ image_lists_dir = os.path.join(self.dataset_dir, 'image_lists')
+ assert os.path.isdir(image_lists_dir), \
+ "The {} is not a directory.".format(image_lists_dir)
+
+ list_path = os.path.join(image_lists_dir, data_name)
+ assert os.path.exists(list_path), \
+ "The list path {} does not exist.".format(list_path)
+
+ # record img_files, filter out empty ones
+ with open(list_path, 'r') as file:
+ self.img_files[data_name] = file.readlines()
+ self.img_files[data_name] = [
+ os.path.join(self.dataset_dir, x.strip())
+ for x in self.img_files[data_name]
+ ]
+ self.img_files[data_name] = list(
+ filter(lambda x: len(x) > 0, self.img_files[data_name]))
+
+ self.img_start_index[data_name] = img_index
+ img_index += len(self.img_files[data_name])
+
+ # record label_files
+ self.label_files[data_name] = [
+ x.replace('images', 'labels_with_ids').replace(
+ '.png', '.txt').replace('.jpg', '.txt')
+ for x in self.img_files[data_name]
+ ]
+
+ for data_name, label_paths in self.label_files.items():
+ max_index = -1
+ for lp in label_paths:
+ lb = np.loadtxt(lp)
+ if len(lb) < 1:
+ continue
+ if len(lb.shape) < 2:
+ img_max = lb[1]
+ else:
+ img_max = np.max(lb[:, 1])
+ if img_max > max_index:
+ max_index = img_max
+ self.tid_num[data_name] = int(max_index + 1)
+
+ last_index = 0
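+        # identity ids restart from 0 inside every sub-dataset, so each
+        # sub-dataset gets a global offset equal to the identity count of all
+        # datasets before it; per-record ids are shifted by this offset below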
+ for i, (k, v) in enumerate(self.tid_num.items()):
+ self.tid_start_index[k] = last_index
+ last_index += v
+
+ self.num_identities_dict = defaultdict(int)
+ self.num_identities_dict[0] = int(last_index + 1) # single class
+ self.num_imgs_each_data = [len(x) for x in self.img_files.values()]
+ self.total_imgs = sum(self.num_imgs_each_data)
+
+ logger.info('MOT dataset summary: ')
+ logger.info(self.tid_num)
+ logger.info('Total images: {}'.format(self.total_imgs))
+ logger.info('Image start index: {}'.format(self.img_start_index))
+ logger.info('Total identities: {}'.format(self.num_identities_dict[0]))
+ logger.info('Identity start index: {}'.format(self.tid_start_index))
+
+ records = []
+ cname2cid = mot_label()
+
+ for img_index in range(self.total_imgs):
+ for i, (k, v) in enumerate(self.img_start_index.items()):
+ if img_index >= v:
+ data_name = list(self.label_files.keys())[i]
+ start_index = v
+ img_file = self.img_files[data_name][img_index - start_index]
+ lbl_file = self.label_files[data_name][img_index - start_index]
+
+ if not os.path.exists(img_file):
+ logger.warning('Illegal image file: {}, and it will be ignored'.
+ format(img_file))
+ continue
+ if not os.path.isfile(lbl_file):
+ logger.warning('Illegal label file: {}, and it will be ignored'.
+ format(lbl_file))
+ continue
+
+ labels = np.loadtxt(lbl_file, dtype=np.float32).reshape(-1, 6)
+ # each row in labels (N, 6) is [gt_class, gt_identity, cx, cy, w, h]
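+            # in the labels_with_ids convention used here, (cx, cy, w, h) are
+            # the box center and size, normalized to [0, 1] by image size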
+
+ cx, cy = labels[:, 2], labels[:, 3]
+ w, h = labels[:, 4], labels[:, 5]
+ gt_bbox = np.stack((cx, cy, w, h)).T.astype('float32')
+ gt_class = labels[:, 0:1].astype('int32')
+ gt_score = np.ones((len(labels), 1)).astype('float32')
+ gt_ide = labels[:, 1:2].astype('int32')
+ for i, _ in enumerate(gt_ide):
+ if gt_ide[i] > -1:
+ gt_ide[i] += self.tid_start_index[data_name]
+
+ mot_rec = {
+ 'im_file': img_file,
+ 'im_id': img_index,
+ } if 'image' in self.data_fields else {}
+
+ gt_rec = {
+ 'gt_class': gt_class,
+ 'gt_score': gt_score,
+ 'gt_bbox': gt_bbox,
+ 'gt_ide': gt_ide,
+ }
+
+ for k, v in gt_rec.items():
+ if k in self.data_fields:
+ mot_rec[k] = v
+
+ records.append(mot_rec)
+ if self.sample_num > 0 and img_index >= self.sample_num:
+ break
+        assert len(records) > 0, 'no mot record found in %s' % (
+ self.image_lists)
+ self.roidbs, self.cname2cid = records, cname2cid
+
+
+@register
+@serializable
+class MCMOTDataSet(DetDataset):
+ """
+ Load dataset with MOT format, support multi-class MOT.
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+        image_lists (list(str)): mcmot data image lists, multi-source mcmot dataset.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ label_list (str): if use_default_label is False, will load
+ mapping between category and class index.
+ sample_num (int): number of samples to load, -1 means all.
+
+ Notes:
+        MCMOT dataset root directories follow this structure:
+            dataset/mot
+            ├── image_lists
+            │   ├── visdrone_mcmot.train
+            │   └── visdrone_mcmot.val
+
+            visdrone_mcmot
+            ├── images
+            │   ├── train
+            │   └── val
+            └── labels_with_ids
+                └── train
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_lists=[],
+ data_fields=['image'],
+ label_list=None,
+ sample_num=-1):
+ super(MCMOTDataSet, self).__init__(
+ dataset_dir=dataset_dir,
+ data_fields=data_fields,
+ sample_num=sample_num)
+ self.dataset_dir = dataset_dir
+ self.image_lists = image_lists
+ if isinstance(self.image_lists, str):
+ self.image_lists = [self.image_lists]
+ self.label_list = label_list
+ self.roidbs = None
+ self.cname2cid = None
+
+ def get_anno(self):
+ if self.image_lists == []:
+ return
+ # only used to get categories and metric
+        # only check the first dataset; the label_list of all datasets should be the same.
+ first_mot_data = self.image_lists[0].split('.')[0]
+ anno_file = os.path.join(self.dataset_dir, first_mot_data, 'label_list.txt')
+ return anno_file
+
+ def parse_dataset(self):
+ self.img_files = OrderedDict()
+ self.img_start_index = OrderedDict()
+ self.label_files = OrderedDict()
+ self.tid_num = OrderedDict()
+ self.tid_start_idx_of_cls_ids = defaultdict(dict) # for MCMOT
+
+ img_index = 0
+ for data_name in self.image_lists:
+ # check every data image list
+ image_lists_dir = os.path.join(self.dataset_dir, 'image_lists')
+ assert os.path.isdir(image_lists_dir), \
+ "The {} is not a directory.".format(image_lists_dir)
+
+ list_path = os.path.join(image_lists_dir, data_name)
+ assert os.path.exists(list_path), \
+ "The list path {} does not exist.".format(list_path)
+
+ # record img_files, filter out empty ones
+ with open(list_path, 'r') as file:
+ self.img_files[data_name] = file.readlines()
+ self.img_files[data_name] = [
+ os.path.join(self.dataset_dir, x.strip())
+ for x in self.img_files[data_name]
+ ]
+ self.img_files[data_name] = list(
+ filter(lambda x: len(x) > 0, self.img_files[data_name]))
+
+ self.img_start_index[data_name] = img_index
+ img_index += len(self.img_files[data_name])
+
+ # record label_files
+ self.label_files[data_name] = [
+ x.replace('images', 'labels_with_ids').replace(
+ '.png', '.txt').replace('.jpg', '.txt')
+ for x in self.img_files[data_name]
+ ]
+
+ for data_name, label_paths in self.label_files.items():
+ # using max_ids_dict rather than max_index
+ max_ids_dict = defaultdict(int)
+ for lp in label_paths:
+ lb = np.loadtxt(lp)
+ if len(lb) < 1:
+ continue
+ lb = lb.reshape(-1, 6)
+ for item in lb:
+ if item[1] > max_ids_dict[int(item[0])]:
+ # item[0]: cls_id
+ # item[1]: track id
+ max_ids_dict[int(item[0])] = int(item[1])
+ # track id number
+ self.tid_num[data_name] = max_ids_dict
+
+ last_idx_dict = defaultdict(int)
+ for i, (k, v) in enumerate(self.tid_num.items()): # each sub dataset
+ for cls_id, id_num in v.items(): # v is a max_ids_dict
+ self.tid_start_idx_of_cls_ids[k][cls_id] = last_idx_dict[cls_id]
+ last_idx_dict[cls_id] += id_num
+
+ self.num_identities_dict = defaultdict(int)
+ for k, v in last_idx_dict.items():
+ self.num_identities_dict[k] = int(v) # total ids of each category
+
+ self.num_imgs_each_data = [len(x) for x in self.img_files.values()]
+ self.total_imgs = sum(self.num_imgs_each_data)
+
+ # cname2cid and cid2cname
+ cname2cid = {}
+ if self.label_list is not None:
+            # if using label_list for a multi-source mixed dataset,
+            # make sure label_list exists in at least the first sub_dataset.
+ sub_dataset = self.image_lists[0].split('.')[0]
+ label_path = os.path.join(self.dataset_dir, sub_dataset,
+ self.label_list)
+ if not os.path.exists(label_path):
+ logger.info(
+ "Note: label_list {} does not exists, use VisDrone 10 classes labels as default.".
+ format(label_path))
+ cname2cid = visdrone_mcmot_label()
+ else:
+ with open(label_path, 'r') as fr:
+ label_id = 0
+ for line in fr.readlines():
+ cname2cid[line.strip()] = label_id
+ label_id += 1
+ else:
+ cname2cid = visdrone_mcmot_label()
+
+ cid2cname = dict([(v, k) for (k, v) in cname2cid.items()])
+
+ logger.info('MCMOT dataset summary: ')
+ logger.info(self.tid_num)
+ logger.info('Total images: {}'.format(self.total_imgs))
+ logger.info('Image start index: {}'.format(self.img_start_index))
+
+ logger.info('Total identities of each category: ')
+ num_identities_dict = sorted(
+ self.num_identities_dict.items(), key=lambda x: x[0])
+ total_IDs_all_cats = 0
+ for (k, v) in num_identities_dict:
+ logger.info('Category {} [{}] has {} IDs.'.format(k, cid2cname[k],
+ v))
+ total_IDs_all_cats += v
+ logger.info('Total identities of all categories: {}'.format(
+ total_IDs_all_cats))
+
+ logger.info('Identity start index of each category: ')
+ for k, v in self.tid_start_idx_of_cls_ids.items():
+ sorted_v = sorted(v.items(), key=lambda x: x[0])
+ for (cls_id, start_idx) in sorted_v:
+ logger.info('Start index of dataset {} category {:d} is {:d}'
+ .format(k, cls_id, start_idx))
+
+ records = []
+ for img_index in range(self.total_imgs):
+ for i, (k, v) in enumerate(self.img_start_index.items()):
+ if img_index >= v:
+ data_name = list(self.label_files.keys())[i]
+ start_index = v
+ img_file = self.img_files[data_name][img_index - start_index]
+ lbl_file = self.label_files[data_name][img_index - start_index]
+
+ if not os.path.exists(img_file):
+ logger.warning('Illegal image file: {}, and it will be ignored'.
+ format(img_file))
+ continue
+ if not os.path.isfile(lbl_file):
+ logger.warning('Illegal label file: {}, and it will be ignored'.
+ format(lbl_file))
+ continue
+
+ labels = np.loadtxt(lbl_file, dtype=np.float32).reshape(-1, 6)
+ # each row in labels (N, 6) is [gt_class, gt_identity, cx, cy, w, h]
+
+ cx, cy = labels[:, 2], labels[:, 3]
+ w, h = labels[:, 4], labels[:, 5]
+ gt_bbox = np.stack((cx, cy, w, h)).T.astype('float32')
+ gt_class = labels[:, 0:1].astype('int32')
+ gt_score = np.ones((len(labels), 1)).astype('float32')
+ gt_ide = labels[:, 1:2].astype('int32')
+ for i, _ in enumerate(gt_ide):
+ if gt_ide[i] > -1:
+ cls_id = int(gt_class[i])
+ start_idx = self.tid_start_idx_of_cls_ids[data_name][cls_id]
+ gt_ide[i] += start_idx
+
+ mot_rec = {
+ 'im_file': img_file,
+ 'im_id': img_index,
+ } if 'image' in self.data_fields else {}
+
+ gt_rec = {
+ 'gt_class': gt_class,
+ 'gt_score': gt_score,
+ 'gt_bbox': gt_bbox,
+ 'gt_ide': gt_ide,
+ }
+
+ for k, v in gt_rec.items():
+ if k in self.data_fields:
+ mot_rec[k] = v
+
+ records.append(mot_rec)
+ if self.sample_num > 0 and img_index >= self.sample_num:
+ break
+        assert len(records) > 0, 'no mot record found in %s' % (
+ self.image_lists)
+ self.roidbs, self.cname2cid = records, cname2cid
+
+
+@register
+@serializable
+class MOTImageFolder(DetDataset):
+ """
+    Load a MOT-format dataset from an image folder or a video file.
+ Args:
+ video_file (str): path of the video file, default ''.
+ frame_rate (int): frame rate of the video, use cv2 VideoCapture if not set.
+ dataset_dir (str): root directory for dataset.
+ keep_ori_im (bool): whether to keep original image, default False.
+ Set True when used during MOT model inference while saving
+ images or video, or used in DeepSORT.
+ """
+
+ def __init__(self,
+ video_file=None,
+ frame_rate=-1,
+ dataset_dir=None,
+ data_root=None,
+ image_dir=None,
+ sample_num=-1,
+ keep_ori_im=False,
+ **kwargs):
+ super(MOTImageFolder, self).__init__(
+ dataset_dir, image_dir, sample_num=sample_num)
+ self.video_file = video_file
+ self.data_root = data_root
+ self.keep_ori_im = keep_ori_im
+ self._imid2path = {}
+ self.roidbs = None
+ self.frame_rate = frame_rate
+
+ def check_or_download_dataset(self):
+ return
+
+ def parse_dataset(self, ):
+ if not self.roidbs:
+ if self.video_file is None:
+                self.frame_rate = 30  # default frame rate when inferring on an image folder
+ self.roidbs = self._load_images()
+ else:
+ self.roidbs = self._load_video_images()
+
+ def _load_video_images(self):
+ if self.frame_rate == -1:
+ # if frame_rate is not set for video, use cv2.VideoCapture
+ cap = cv2.VideoCapture(self.video_file)
+ self.frame_rate = int(cap.get(cv2.CAP_PROP_FPS))
+
+ extension = self.video_file.split('.')[-1]
+ output_path = self.video_file.replace('.{}'.format(extension), '')
+ frames_path = video2frames(self.video_file, output_path,
+ self.frame_rate)
+ self.video_frames = sorted(
+ glob.glob(os.path.join(frames_path, '*.png')))
+
+ self.video_length = len(self.video_frames)
+ logger.info('Length of the video: {:d} frames.'.format(
+ self.video_length))
+ ct = 0
+ records = []
+ for image in self.video_frames:
+ assert image != '' and os.path.isfile(image), \
+ "Image {} not found".format(image)
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+ rec = {'im_id': np.array([ct]), 'im_file': image}
+ if self.keep_ori_im:
+ rec.update({'keep_ori_im': 1})
+ self._imid2path[ct] = image
+ ct += 1
+ records.append(rec)
+ assert len(records) > 0, "No image file found"
+ return records
+
+ def _find_images(self):
+ image_dir = self.image_dir
+ if not isinstance(image_dir, Sequence):
+ image_dir = [image_dir]
+ images = []
+ for im_dir in image_dir:
+ if os.path.isdir(im_dir):
+ im_dir = os.path.join(self.dataset_dir, im_dir)
+ images.extend(_make_dataset(im_dir))
+ elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
+ images.append(im_dir)
+ return images
+
+ def _load_images(self):
+ images = self._find_images()
+ ct = 0
+ records = []
+ for image in images:
+ assert image != '' and os.path.isfile(image), \
+ "Image {} not found".format(image)
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+ rec = {'im_id': np.array([ct]), 'im_file': image}
+ if self.keep_ori_im:
+ rec.update({'keep_ori_im': 1})
+ self._imid2path[ct] = image
+ ct += 1
+ records.append(rec)
+ assert len(records) > 0, "No image file found"
+ return records
+
+ def get_imid2path(self):
+ return self._imid2path
+
+ def set_images(self, images):
+ self.image_dir = images
+ self.roidbs = self._load_images()
+
+ def set_video(self, video_file, frame_rate):
+ # update video_file and frame_rate by command line of tools/infer_mot.py
+ self.video_file = video_file
+ self.frame_rate = frame_rate
+ assert os.path.isfile(self.video_file) and _is_valid_video(self.video_file), \
+ "wrong or unsupported file format: {}".format(self.video_file)
+ self.roidbs = self._load_video_images()
+
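+# A minimal usage sketch for MOTImageFolder (illustrative only; the paths and
+# 'demo.mp4' are assumptions, and in practice ppdet constructs this dataset
+# from a YAML config):
+#
+#   dataset = MOTImageFolder(dataset_dir='dataset/mot', keep_ori_im=True)
+#   dataset.set_video('demo.mp4', frame_rate=30)  # builds roidbs from frames
+#   roidbs = dataset.roidbs  # one record per frame: {'im_id', 'im_file', ...}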
+
+def _is_valid_video(f, extensions=('.mp4', '.avi', '.mov', '.rmvb', '.flv')):
+ return f.lower().endswith(extensions)
+
+
+def video2frames(video_path, outpath, frame_rate, **kargs):
+ def _dict2str(kargs):
+ cmd_str = ''
+ for k, v in kargs.items():
+ cmd_str += (' ' + str(k) + ' ' + str(v))
+ return cmd_str
+
+ ffmpeg = ['ffmpeg ', ' -y -loglevel ', ' error ']
+ vid_name = os.path.basename(video_path).split('.')[0]
+ out_full_path = os.path.join(outpath, vid_name)
+
+ if not os.path.exists(out_full_path):
+ os.makedirs(out_full_path)
+
+ # video file name
+ outformat = os.path.join(out_full_path, '%08d.png')
+
+    cmd = ffmpeg + [
+        ' -i ', video_path, ' -r ', str(frame_rate), ' -f image2 ', outformat
+    ]
+    cmd = ''.join(cmd) + _dict2str(kargs)
+
+    if os.system(cmd) != 0:
+        raise RuntimeError('ffmpeg failed to process video: {}'.format(
+            video_path))
+
+ sys.stdout.flush()
+ return out_full_path
+
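+# Hedged example of video2frames (assumes ffmpeg is on the PATH and that
+# './demo.mp4' exists; neither is guaranteed by this module):
+#
+#   frames_dir = video2frames('./demo.mp4', './output', frame_rate=30)
+#   # frames_dir == './output/demo', filled with 00000001.png, 00000002.png, ...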
+
+def mot_label():
+ labels_map = {'person': 0}
+ return labels_map
+
+
+def visdrone_mcmot_label():
+ labels_map = {
+ 'pedestrian': 0,
+ 'people': 1,
+ 'bicycle': 2,
+ 'car': 3,
+ 'van': 4,
+ 'truck': 5,
+ 'tricycle': 6,
+ 'awning-tricycle': 7,
+ 'bus': 8,
+ 'motor': 9,
+ }
+ return labels_map
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/sniper_coco.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/sniper_coco.py
new file mode 100644
index 000000000..1b07e7a31
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/sniper_coco.py
@@ -0,0 +1,194 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import cv2
+import json
+import copy
+import numpy as np
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+
+from ppdet.core.workspace import register, serializable
+from ppdet.data.crop_utils.annotation_cropper import AnnoCropper
+from .coco import COCODataSet
+from .dataset import _make_dataset, _is_valid_file
+from ppdet.utils.logger import setup_logger
+
+logger = setup_logger('sniper_coco_dataset')
+
+
+@register
+@serializable
+class SniperCOCODataSet(COCODataSet):
+ """SniperCOCODataSet"""
+
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ proposals_file=None,
+ data_fields=['image'],
+ sample_num=-1,
+ load_crowd=False,
+ allow_empty=True,
+ empty_ratio=1.,
+ is_trainset=True,
+ image_target_sizes=[2000, 1000],
+                 valid_box_ratio_ranges=[[-1, 0.1], [0.08, -1]],
+ chip_target_size=500,
+ chip_target_stride=200,
+ use_neg_chip=False,
+ max_neg_num_per_im=8,
+ max_per_img=-1,
+ nms_thresh=0.5):
+ super(SniperCOCODataSet, self).__init__(
+ dataset_dir=dataset_dir,
+ image_dir=image_dir,
+ anno_path=anno_path,
+ data_fields=data_fields,
+ sample_num=sample_num,
+ load_crowd=load_crowd,
+ allow_empty=allow_empty,
+ empty_ratio=empty_ratio
+ )
+ self.proposals_file = proposals_file
+ self.proposals = None
+ self.anno_cropper = None
+ self.is_trainset = is_trainset
+ self.image_target_sizes = image_target_sizes
+ self.valid_box_ratio_ranges = valid_box_ratio_ranges
+ self.chip_target_size = chip_target_size
+ self.chip_target_stride = chip_target_stride
+ self.use_neg_chip = use_neg_chip
+ self.max_neg_num_per_im = max_neg_num_per_im
+ self.max_per_img = max_per_img
+ self.nms_thresh = nms_thresh
+
+ def parse_dataset(self):
+ if not hasattr(self, "roidbs"):
+ super(SniperCOCODataSet, self).parse_dataset()
+ if self.is_trainset:
+ self._parse_proposals()
+ self._merge_anno_proposals()
+ self.ori_roidbs = copy.deepcopy(self.roidbs)
+ self.init_anno_cropper()
+ self.roidbs = self.generate_chips_roidbs(self.roidbs, self.is_trainset)
+
+ def set_proposals_file(self, file_path):
+ self.proposals_file = file_path
+
+ def init_anno_cropper(self):
+ logger.info("Init AnnoCropper...")
+ self.anno_cropper = AnnoCropper(
+ image_target_sizes=self.image_target_sizes,
+ valid_box_ratio_ranges=self.valid_box_ratio_ranges,
+ chip_target_size=self.chip_target_size,
+ chip_target_stride=self.chip_target_stride,
+ use_neg_chip=self.use_neg_chip,
+ max_neg_num_per_im=self.max_neg_num_per_im,
+ max_per_img=self.max_per_img,
+ nms_thresh=self.nms_thresh
+ )
+
+ def generate_chips_roidbs(self, roidbs, is_trainset):
+ if is_trainset:
+ roidbs = self.anno_cropper.crop_anno_records(roidbs)
+ else:
+ roidbs = self.anno_cropper.crop_infer_anno_records(roidbs)
+ return roidbs
+
+ def _parse_proposals(self):
+ if self.proposals_file:
+ self.proposals = {}
+ logger.info("Parse proposals file:{}".format(self.proposals_file))
+ with open(self.proposals_file, 'r') as f:
+ proposals = json.load(f)
+ for prop in proposals:
+ image_id = prop["image_id"]
+ if image_id not in self.proposals:
+ self.proposals[image_id] = []
+ x, y, w, h = prop["bbox"]
+ self.proposals[image_id].append([x, y, x + w, y + h])
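+
+    # The proposals file is assumed to hold COCO-style detection results,
+    # e.g. (illustrative sample, not shipped with this repo):
+    #   [{"image_id": 1, "bbox": [10.0, 20.0, 100.0, 50.0], "score": 0.9}, ...]
+    # where "bbox" is [x, y, w, h]; _parse_proposals converts it to
+    # [x1, y1, x2, y2] as loaded above.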
+
+ def _merge_anno_proposals(self):
+ assert self.roidbs
+ if self.proposals and len(self.proposals.keys()) > 0:
+ logger.info("merge proposals to annos")
+ for id, record in enumerate(self.roidbs):
+ image_id = int(record["im_id"])
+ if image_id not in self.proposals.keys():
+ logger.info("image id :{} no proposals".format(image_id))
+ record["proposals"] = np.array(self.proposals.get(image_id, []), dtype=np.float32)
+ self.roidbs[id] = record
+
+ def get_ori_roidbs(self):
+ if not hasattr(self, "ori_roidbs"):
+ return None
+ return self.ori_roidbs
+
+ def get_roidbs(self):
+ if not hasattr(self, "roidbs"):
+ self.parse_dataset()
+ return self.roidbs
+
+ def set_roidbs(self, roidbs):
+ self.roidbs = roidbs
+
+ def check_or_download_dataset(self):
+ return
+
+ def _parse(self):
+ image_dir = self.image_dir
+ if not isinstance(image_dir, Sequence):
+ image_dir = [image_dir]
+ images = []
+ for im_dir in image_dir:
+ if os.path.isdir(im_dir):
+ im_dir = os.path.join(self.dataset_dir, im_dir)
+ images.extend(_make_dataset(im_dir))
+ elif os.path.isfile(im_dir) and _is_valid_file(im_dir):
+ images.append(im_dir)
+ return images
+
+ def _load_images(self):
+ images = self._parse()
+ ct = 0
+ records = []
+ for image in images:
+ assert image != '' and os.path.isfile(image), \
+ "Image {} not found".format(image)
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+ im = cv2.imread(image)
+ h, w, c = im.shape
+ rec = {'im_id': np.array([ct]), 'im_file': image, "h": h, "w": w}
+ self._imid2path[ct] = image
+ ct += 1
+ records.append(rec)
+ assert len(records) > 0, "No image file found"
+ return records
+
+ def get_imid2path(self):
+ return self._imid2path
+
+ def set_images(self, images):
+ self._imid2path = {}
+ self.image_dir = images
+ self.roidbs = self._load_images()
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/voc.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/voc.py
new file mode 100644
index 000000000..1c2a7ef98
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/voc.py
@@ -0,0 +1,231 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import numpy as np
+
+import xml.etree.ElementTree as ET
+
+from ppdet.core.workspace import register, serializable
+
+from .dataset import DetDataset
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class VOCDataSet(DetDataset):
+ """
+ Load dataset with PascalVOC format.
+
+    Notes:
+        each line of `anno_path` must contain an image file path and its
+        xml annotation file path.
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+ image_dir (str): directory for images.
+ anno_path (str): voc annotation file path.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ sample_num (int): number of samples to load, -1 means all.
+        label_list (str): if use_default_label is False, the mapping between
+            category name and class index is loaded from this file.
+        allow_empty (bool): whether to load empty entries (images without
+            objects). Default False.
+        empty_ratio (float): the ratio of empty records to total records;
+            if empty_ratio is outside [0., 1.), empty records are not
+            sampled and all of them are kept. Default 1.
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ data_fields=['image'],
+ sample_num=-1,
+ label_list=None,
+ allow_empty=False,
+ empty_ratio=1.):
+ super(VOCDataSet, self).__init__(
+ dataset_dir=dataset_dir,
+ image_dir=image_dir,
+ anno_path=anno_path,
+ data_fields=data_fields,
+ sample_num=sample_num)
+ self.label_list = label_list
+ self.allow_empty = allow_empty
+ self.empty_ratio = empty_ratio
+
+ def _sample_empty(self, records, num):
+        # if empty_ratio is outside [0., 1.), do not sample the records
+ if self.empty_ratio < 0. or self.empty_ratio >= 1.:
+ return records
+ import random
+ sample_num = min(
+ int(num * self.empty_ratio / (1 - self.empty_ratio)), len(records))
+ records = random.sample(records, sample_num)
+ return records
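+
+    # A quick check of the ratio math above (illustrative numbers, not from
+    # the original code): with 100 non-empty records and empty_ratio=0.1,
+    # min(int(100 * 0.1 / 0.9), len(records)) keeps 11 empty records, so
+    # empties make up roughly 10% of the final 111 records.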
+
+    def parse_dataset(self):
+ anno_path = os.path.join(self.dataset_dir, self.anno_path)
+ image_dir = os.path.join(self.dataset_dir, self.image_dir)
+
+ # mapping category name to class id
+ # first_class:0, second_class:1, ...
+ records = []
+ empty_records = []
+ ct = 0
+ cname2cid = {}
+ if self.label_list:
+ label_path = os.path.join(self.dataset_dir, self.label_list)
+ if not os.path.exists(label_path):
+                raise ValueError("label_list {} does not exist".format(
+ label_path))
+ with open(label_path, 'r') as fr:
+ label_id = 0
+ for line in fr.readlines():
+ cname2cid[line.strip()] = label_id
+ label_id += 1
+ else:
+ cname2cid = pascalvoc_label()
+
+ with open(anno_path, 'r') as fr:
+ while True:
+ line = fr.readline()
+ if not line:
+ break
+ img_file, xml_file = [os.path.join(image_dir, x) \
+ for x in line.strip().split()[:2]]
+ if not os.path.exists(img_file):
+ logger.warning(
+ 'Illegal image file: {}, and it will be ignored'.format(
+ img_file))
+ continue
+ if not os.path.isfile(xml_file):
+ logger.warning(
+ 'Illegal xml file: {}, and it will be ignored'.format(
+ xml_file))
+ continue
+ tree = ET.parse(xml_file)
+ if tree.find('id') is None:
+ im_id = np.array([ct])
+ else:
+ im_id = np.array([int(tree.find('id').text)])
+
+ objs = tree.findall('object')
+ im_w = float(tree.find('size').find('width').text)
+ im_h = float(tree.find('size').find('height').text)
+ if im_w < 0 or im_h < 0:
+ logger.warning(
+ 'Illegal width: {} or height: {} in annotation, '
+ 'and {} will be ignored'.format(im_w, im_h, xml_file))
+ continue
+
+ num_bbox, i = len(objs), 0
+ gt_bbox = np.zeros((num_bbox, 4), dtype=np.float32)
+ gt_class = np.zeros((num_bbox, 1), dtype=np.int32)
+ gt_score = np.zeros((num_bbox, 1), dtype=np.float32)
+ difficult = np.zeros((num_bbox, 1), dtype=np.int32)
+ for obj in objs:
+ cname = obj.find('name').text
+
+ # user dataset may not contain difficult field
+ _difficult = obj.find('difficult')
+ _difficult = int(
+ _difficult.text) if _difficult is not None else 0
+
+ x1 = float(obj.find('bndbox').find('xmin').text)
+ y1 = float(obj.find('bndbox').find('ymin').text)
+ x2 = float(obj.find('bndbox').find('xmax').text)
+ y2 = float(obj.find('bndbox').find('ymax').text)
+ x1 = max(0, x1)
+ y1 = max(0, y1)
+ x2 = min(im_w - 1, x2)
+ y2 = min(im_h - 1, y2)
+ if x2 > x1 and y2 > y1:
+ gt_bbox[i, :] = [x1, y1, x2, y2]
+ gt_class[i, 0] = cname2cid[cname]
+ gt_score[i, 0] = 1.
+ difficult[i, 0] = _difficult
+ i += 1
+ else:
+ logger.warning(
+ 'Found an invalid bbox in annotations: xml_file: {}'
+ ', x1: {}, y1: {}, x2: {}, y2: {}.'.format(
+ xml_file, x1, y1, x2, y2))
+ gt_bbox = gt_bbox[:i, :]
+ gt_class = gt_class[:i, :]
+ gt_score = gt_score[:i, :]
+ difficult = difficult[:i, :]
+
+ voc_rec = {
+ 'im_file': img_file,
+ 'im_id': im_id,
+ 'h': im_h,
+ 'w': im_w
+ } if 'image' in self.data_fields else {}
+
+ gt_rec = {
+ 'gt_class': gt_class,
+ 'gt_score': gt_score,
+ 'gt_bbox': gt_bbox,
+ 'difficult': difficult
+ }
+ for k, v in gt_rec.items():
+ if k in self.data_fields:
+ voc_rec[k] = v
+
+ if len(objs) == 0:
+ empty_records.append(voc_rec)
+ else:
+ records.append(voc_rec)
+
+ ct += 1
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+        assert ct > 0, 'no VOC record found in %s' % (self.anno_path)
+ logger.debug('{} samples in file {}'.format(ct, anno_path))
+ if self.allow_empty and len(empty_records) > 0:
+ empty_records = self._sample_empty(empty_records, len(records))
+ records += empty_records
+ self.roidbs, self.cname2cid = records, cname2cid
+
+ def get_label_list(self):
+ return os.path.join(self.dataset_dir, self.label_list)
+
+
+def pascalvoc_label():
+ labels_map = {
+ 'aeroplane': 0,
+ 'bicycle': 1,
+ 'bird': 2,
+ 'boat': 3,
+ 'bottle': 4,
+ 'bus': 5,
+ 'car': 6,
+ 'cat': 7,
+ 'chair': 8,
+ 'cow': 9,
+ 'diningtable': 10,
+ 'dog': 11,
+ 'horse': 12,
+ 'motorbike': 13,
+ 'person': 14,
+ 'pottedplant': 15,
+ 'sheep': 16,
+ 'sofa': 17,
+ 'train': 18,
+ 'tvmonitor': 19
+ }
+ return labels_map
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/widerface.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/widerface.py
new file mode 100644
index 000000000..a17c2aaf8
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/source/widerface.py
@@ -0,0 +1,180 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import numpy as np
+
+from ppdet.core.workspace import register, serializable
+from .dataset import DetDataset
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class WIDERFaceDataSet(DetDataset):
+ """
+ Load WiderFace records with 'anno_path'
+
+ Args:
+ dataset_dir (str): root directory for dataset.
+ image_dir (str): directory for images.
+ anno_path (str): WiderFace annotation data.
+ data_fields (list): key name of data dictionary, at least have 'image'.
+ sample_num (int): number of samples to load, -1 means all.
+ with_lmk (bool): whether to load face landmark keypoint labels.
+ """
+
+ def __init__(self,
+ dataset_dir=None,
+ image_dir=None,
+ anno_path=None,
+ data_fields=['image'],
+ sample_num=-1,
+ with_lmk=False):
+ super(WIDERFaceDataSet, self).__init__(
+ dataset_dir=dataset_dir,
+ image_dir=image_dir,
+ anno_path=anno_path,
+ data_fields=data_fields,
+ sample_num=sample_num,
+ with_lmk=with_lmk)
+ self.anno_path = anno_path
+ self.sample_num = sample_num
+ self.roidbs = None
+ self.cname2cid = None
+ self.with_lmk = with_lmk
+
+ def parse_dataset(self):
+ anno_path = os.path.join(self.dataset_dir, self.anno_path)
+ image_dir = os.path.join(self.dataset_dir, self.image_dir)
+
+ txt_file = anno_path
+
+ records = []
+ ct = 0
+ file_lists = self._load_file_list(txt_file)
+ cname2cid = widerface_label()
+
+ for item in file_lists:
+ im_fname = item[0]
+ im_id = np.array([ct])
+ gt_bbox = np.zeros((len(item) - 1, 4), dtype=np.float32)
+ gt_class = np.zeros((len(item) - 1, 1), dtype=np.int32)
+ gt_lmk_labels = np.zeros((len(item) - 1, 10), dtype=np.float32)
+ lmk_ignore_flag = np.zeros((len(item) - 1, 1), dtype=np.int32)
+            for index_box in range(1, len(item)):
+ gt_bbox[index_box - 1] = item[index_box][0]
+ if self.with_lmk:
+ gt_lmk_labels[index_box - 1] = item[index_box][1]
+ lmk_ignore_flag[index_box - 1] = item[index_box][2]
+ im_fname = os.path.join(image_dir,
+ im_fname) if image_dir else im_fname
+ widerface_rec = {
+ 'im_file': im_fname,
+ 'im_id': im_id,
+ } if 'image' in self.data_fields else {}
+ gt_rec = {
+ 'gt_bbox': gt_bbox,
+ 'gt_class': gt_class,
+ }
+ for k, v in gt_rec.items():
+ if k in self.data_fields:
+ widerface_rec[k] = v
+ if self.with_lmk:
+ widerface_rec['gt_keypoint'] = gt_lmk_labels
+ widerface_rec['keypoint_ignore'] = lmk_ignore_flag
+
+ if len(item) != 0:
+ records.append(widerface_rec)
+
+ ct += 1
+ if self.sample_num > 0 and ct >= self.sample_num:
+ break
+        assert len(records) > 0, 'no WIDER FACE record found in %s' % (anno_path)
+ logger.debug('{} samples in file {}'.format(ct, anno_path))
+ self.roidbs, self.cname2cid = records, cname2cid
+
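+    # Annotation layout assumed by _load_file_list below (hedged sketch):
+    #   0--Parade/0_Parade_marchingband_1_849.jpg   <- one image path line
+    #   449 330 122 149 ...                         <- one "xmin ymin w h ..."
+    #                                                  line per face
+    # When `with_lmk` is True, five landmark (x, y) pairs are additionally
+    # read from fields 5-18 of each box line, as indexed below.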
+ def _load_file_list(self, input_txt):
+ with open(input_txt, 'r') as f_dir:
+ lines_input_txt = f_dir.readlines()
+
+ file_dict = {}
+        num_class = 0  # running index of the current image entry
+ exts = ['jpg', 'jpeg', 'png', 'bmp']
+ exts += [ext.upper() for ext in exts]
+ for i in range(len(lines_input_txt)):
+ line_txt = lines_input_txt[i].strip('\n\t\r')
+ split_str = line_txt.split(' ')
+ if len(split_str) == 1:
+ img_file_name = os.path.split(split_str[0])[1]
+ split_txt = img_file_name.split('.')
+ if len(split_txt) < 2:
+ continue
+ elif split_txt[-1] in exts:
+ if i != 0:
+ num_class += 1
+ file_dict[num_class] = [line_txt]
+ else:
+ if len(line_txt) <= 6:
+ continue
+ result_boxs = []
+ xmin = float(split_str[0])
+ ymin = float(split_str[1])
+ w = float(split_str[2])
+ h = float(split_str[3])
+ # Filter out wrong labels
+ if w < 0 or h < 0:
+ logger.warning('Illegal box with w: {}, h: {} in '
+ 'img: {}, and it will be ignored'.format(
+ w, h, file_dict[num_class][0]))
+ continue
+ xmin = max(0, xmin)
+ ymin = max(0, ymin)
+ xmax = xmin + w
+ ymax = ymin + h
+ gt_bbox = [xmin, ymin, xmax, ymax]
+ result_boxs.append(gt_bbox)
+ if self.with_lmk:
+                    assert len(split_str) > 18, 'When `with_lmk=True`, the ' \
+                        'number of fields per line in the annotation file ' \
+                        'should exceed 18.'
+ lmk0_x = float(split_str[5])
+ lmk0_y = float(split_str[6])
+ lmk1_x = float(split_str[8])
+ lmk1_y = float(split_str[9])
+ lmk2_x = float(split_str[11])
+ lmk2_y = float(split_str[12])
+ lmk3_x = float(split_str[14])
+ lmk3_y = float(split_str[15])
+ lmk4_x = float(split_str[17])
+ lmk4_y = float(split_str[18])
+ lmk_ignore_flag = 0 if lmk0_x == -1 else 1
+ gt_lmk_label = [
+ lmk0_x, lmk0_y, lmk1_x, lmk1_y, lmk2_x, lmk2_y, lmk3_x,
+ lmk3_y, lmk4_x, lmk4_y
+ ]
+ result_boxs.append(gt_lmk_label)
+ result_boxs.append(lmk_ignore_flag)
+ file_dict[num_class].append(result_boxs)
+
+ return list(file_dict.values())
+
+
+def widerface_label():
+ labels_map = {'face': 0}
+ return labels_map
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__init__.py
new file mode 100644
index 000000000..fb8a1a449
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__init__.py
@@ -0,0 +1,28 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import operators
+from . import batch_operators
+from . import keypoint_operators
+from . import mot_operators
+
+from .operators import *
+from .batch_operators import *
+from .keypoint_operators import *
+from .mot_operators import *
+
+__all__ = []
+__all__ += registered_ops
+__all__ += keypoint_operators.__all__
+__all__ += mot_operators.__all__
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..ada30eb03
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/atss_assigner.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/atss_assigner.cpython-37.pyc
new file mode 100644
index 000000000..937f1cda9
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/atss_assigner.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/batch_operators.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/batch_operators.cpython-37.pyc
new file mode 100644
index 000000000..06f9ac59e
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/batch_operators.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/keypoint_operators.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/keypoint_operators.cpython-37.pyc
new file mode 100644
index 000000000..54cc72243
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/keypoint_operators.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/mot_operators.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/mot_operators.cpython-37.pyc
new file mode 100644
index 000000000..0de9ed2c8
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/mot_operators.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/op_helper.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/op_helper.cpython-37.pyc
new file mode 100644
index 000000000..a2570b09e
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/op_helper.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/operators.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/operators.cpython-37.pyc
new file mode 100644
index 000000000..8f1e42647
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/__pycache__/operators.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/atss_assigner.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/atss_assigner.py
new file mode 100644
index 000000000..178d94fb6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/atss_assigner.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/assigners/atss_assigner.py
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+def bbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False, eps=1e-6):
+ """Calculate overlap between two set of bboxes.
+ If ``is_aligned `` is ``False``, then calculate the overlaps between each
+ bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned
+ pair of bboxes1 and bboxes2.
+ Args:
+ bboxes1 (Tensor): shape (B, m, 4) in format or empty.
+ bboxes2 (Tensor): shape (B, n, 4) in format or empty.
+ B indicates the batch dim, in shape (B1, B2, ..., Bn).
+ If ``is_aligned `` is ``True``, then m and n must be equal.
+ mode (str): "iou" (intersection over union) or "iof" (intersection over
+ foreground).
+ is_aligned (bool, optional): If True, then m and n must be equal.
+ Default False.
+ eps (float, optional): A value added to the denominator for numerical
+ stability. Default 1e-6.
+ Returns:
+ Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,)
+ """
+ assert mode in ['iou', 'iof', 'giou'], 'Unsupported mode {}'.format(mode)
+    # Either the boxes are empty or the last dimension of the boxes is 4
+ assert (bboxes1.shape[-1] == 4 or bboxes1.shape[0] == 0)
+ assert (bboxes2.shape[-1] == 4 or bboxes2.shape[0] == 0)
+
+ # Batch dim must be the same
+ # Batch dim: (B1, B2, ... Bn)
+ assert bboxes1.shape[:-2] == bboxes2.shape[:-2]
+ batch_shape = bboxes1.shape[:-2]
+
+ rows = bboxes1.shape[-2] if bboxes1.shape[0] > 0 else 0
+ cols = bboxes2.shape[-2] if bboxes2.shape[0] > 0 else 0
+ if is_aligned:
+ assert rows == cols
+
+ if rows * cols == 0:
+ if is_aligned:
+ return np.random.random(batch_shape + (rows, ))
+ else:
+ return np.random.random(batch_shape + (rows, cols))
+
+ area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (
+ bboxes1[..., 3] - bboxes1[..., 1])
+ area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (
+ bboxes2[..., 3] - bboxes2[..., 1])
+
+ if is_aligned:
+ lt = np.maximum(bboxes1[..., :2], bboxes2[..., :2]) # [B, rows, 2]
+ rb = np.minimum(bboxes1[..., 2:], bboxes2[..., 2:]) # [B, rows, 2]
+
+ wh = (rb - lt).clip(min=0) # [B, rows, 2]
+ overlap = wh[..., 0] * wh[..., 1]
+
+ if mode in ['iou', 'giou']:
+ union = area1 + area2 - overlap
+ else:
+ union = area1
+ if mode == 'giou':
+ enclosed_lt = np.minimum(bboxes1[..., :2], bboxes2[..., :2])
+ enclosed_rb = np.maximum(bboxes1[..., 2:], bboxes2[..., 2:])
+ else:
+ lt = np.maximum(bboxes1[..., :, None, :2],
+ bboxes2[..., None, :, :2]) # [B, rows, cols, 2]
+ rb = np.minimum(bboxes1[..., :, None, 2:],
+ bboxes2[..., None, :, 2:]) # [B, rows, cols, 2]
+
+ wh = (rb - lt).clip(min=0) # [B, rows, cols, 2]
+ overlap = wh[..., 0] * wh[..., 1]
+
+ if mode in ['iou', 'giou']:
+ union = area1[..., None] + area2[..., None, :] - overlap
+ else:
+ union = area1[..., None]
+ if mode == 'giou':
+ enclosed_lt = np.minimum(bboxes1[..., :, None, :2],
+ bboxes2[..., None, :, :2])
+ enclosed_rb = np.maximum(bboxes1[..., :, None, 2:],
+ bboxes2[..., None, :, 2:])
+
+ eps = np.array([eps])
+ union = np.maximum(union, eps)
+ ious = overlap / union
+ if mode in ['iou', 'iof']:
+ return ious
+ # calculate gious
+ enclose_wh = (enclosed_rb - enclosed_lt).clip(min=0)
+ enclose_area = enclose_wh[..., 0] * enclose_wh[..., 1]
+ enclose_area = np.maximum(enclose_area, eps)
+ gious = ious - (enclose_area - union) / enclose_area
+ return gious
+
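+# A quick numeric check of bbox_overlaps (values are illustrative only):
+#
+#   b1 = np.array([[0., 0., 10., 10.]])
+#   b2 = np.array([[5., 5., 15., 15.]])
+#   bbox_overlaps(b1, b2)  # -> [[0.1429]]: overlap 25 / union 175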
+
+def topk_(input, k, axis=1, largest=True):
+ x = -input if largest else input
+ if axis == 0:
+ row_index = np.arange(input.shape[1 - axis])
+ topk_index = np.argpartition(x, k, axis=axis)[0:k, :]
+ topk_data = x[topk_index, row_index]
+
+ topk_index_sort = np.argsort(topk_data, axis=axis)
+ topk_data_sort = topk_data[topk_index_sort, row_index]
+ topk_index_sort = topk_index[0:k, :][topk_index_sort, row_index]
+ else:
+ column_index = np.arange(x.shape[1 - axis])[:, None]
+ topk_index = np.argpartition(x, k, axis=axis)[:, 0:k]
+ topk_data = x[column_index, topk_index]
+ topk_data = -topk_data if largest else topk_data
+ topk_index_sort = np.argsort(topk_data, axis=axis)
+ topk_data_sort = topk_data[column_index, topk_index_sort]
+ topk_index_sort = topk_index[:, 0:k][column_index, topk_index_sort]
+
+ return topk_data_sort, topk_index_sort
+
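+# Hedged example of topk_ as ATSSAssigner uses it (axis=0, largest=False,
+# i.e. the k smallest entries per column, sorted ascending):
+#
+#   d = np.array([[0.9, 0.2], [0.1, 0.8], [0.5, 0.5]])
+#   vals, idxs = topk_(d, k=2, axis=0, largest=False)
+#   # vals -> [[0.1, 0.2], [0.5, 0.5]], idxs -> [[1, 0], [2, 2]]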
+
+class ATSSAssigner(object):
+ """Assign a corresponding gt bbox or background to each bbox.
+
+    Each proposal will be assigned `0` or a positive integer
+    indicating the ground truth index.
+
+ - 0: negative sample, no assigned gt
+ - positive integer: positive sample, index (1-based) of assigned gt
+
+ Args:
+        topk (int): number of bboxes selected on each pyramid level
+ """
+
+ def __init__(self, topk=9):
+ self.topk = topk
+
+ def __call__(self,
+ bboxes,
+ num_level_bboxes,
+ gt_bboxes,
+ gt_bboxes_ignore=None,
+ gt_labels=None):
+ """Assign gt to bboxes.
+        The assignment is done in the following steps:
+        1. compute iou between all bboxes (across all pyramid levels) and gt
+        2. compute center distance between all bboxes and gt
+        3. on each pyramid level, for each gt, select the k bboxes whose
+           centers are closest to the gt center, so k*l bboxes in total are
+           selected as candidates for each gt
+        4. get the corresponding iou for these candidates, and compute the
+           mean and std; set mean + std as the iou threshold
+        5. select the candidates whose iou is greater than or equal to
+           the threshold as positive
+        6. restrict the positive samples' centers to lie inside the gt
+ Args:
+ bboxes (np.array): Bounding boxes to be assigned, shape(n, 4).
+ num_level_bboxes (List): num of bboxes in each level
+ gt_bboxes (np.array): Groundtruth boxes, shape (k, 4).
+ gt_bboxes_ignore (np.array, optional): Ground truth bboxes that are
+ labelled as `ignored`, e.g., crowd boxes in COCO.
+ gt_labels (np.array, optional): Label of gt_bboxes, shape (k, ).
+ """
+ bboxes = bboxes[:, :4]
+ num_gt, num_bboxes = gt_bboxes.shape[0], bboxes.shape[0]
+
+ # assign 0 by default
+ assigned_gt_inds = np.zeros((num_bboxes, ), dtype=np.int64)
+
+ if num_gt == 0 or num_bboxes == 0:
+ # No ground truth or boxes, return empty assignment
+ max_overlaps = np.zeros((num_bboxes, ))
+ if num_gt == 0:
+ # No truth, assign everything to background
+ assigned_gt_inds[:] = 0
+ if not np.any(gt_labels):
+ assigned_labels = None
+ else:
+ assigned_labels = -np.ones((num_bboxes, ), dtype=np.int64)
+ return assigned_gt_inds, max_overlaps
+
+ # compute iou between all bbox and gt
+ overlaps = bbox_overlaps(bboxes, gt_bboxes)
+ # compute center distance between all bbox and gt
+ gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0
+ gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0
+ gt_points = np.stack((gt_cx, gt_cy), axis=1)
+
+ bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0
+ bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0
+ bboxes_points = np.stack((bboxes_cx, bboxes_cy), axis=1)
+
+ distances = np.sqrt(
+ np.power((bboxes_points[:, None, :] - gt_points[None, :, :]), 2)
+ .sum(-1))
+
+ # Selecting candidates based on the center distance
+ candidate_idxs = []
+ start_idx = 0
+ for bboxes_per_level in num_level_bboxes:
+ # on each pyramid level, for each gt,
+ # select k bbox whose center are closest to the gt center
+ end_idx = start_idx + bboxes_per_level
+ distances_per_level = distances[start_idx:end_idx, :]
+ selectable_k = min(self.topk, bboxes_per_level)
+ _, topk_idxs_per_level = topk_(
+ distances_per_level, selectable_k, axis=0, largest=False)
+ candidate_idxs.append(topk_idxs_per_level + start_idx)
+ start_idx = end_idx
+ candidate_idxs = np.concatenate(candidate_idxs, axis=0)
+
+        # get the corresponding iou for these candidates, and compute the
+        # mean and std; set mean + std as the iou threshold
+ candidate_overlaps = overlaps[candidate_idxs, np.arange(num_gt)]
+ overlaps_mean_per_gt = candidate_overlaps.mean(0)
+ overlaps_std_per_gt = candidate_overlaps.std(0)
+ overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt
+
+ is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :]
+
+ # limit the positive sample's center in gt
+ for gt_idx in range(num_gt):
+ candidate_idxs[:, gt_idx] += gt_idx * num_bboxes
+ ep_bboxes_cx = np.broadcast_to(
+ bboxes_cx.reshape(1, -1), [num_gt, num_bboxes]).reshape(-1)
+ ep_bboxes_cy = np.broadcast_to(
+ bboxes_cy.reshape(1, -1), [num_gt, num_bboxes]).reshape(-1)
+ candidate_idxs = candidate_idxs.reshape(-1)
+
+ # calculate the left, top, right, bottom distance between positive
+ # bbox center and gt side
+ l_ = ep_bboxes_cx[candidate_idxs].reshape(-1, num_gt) - gt_bboxes[:, 0]
+ t_ = ep_bboxes_cy[candidate_idxs].reshape(-1, num_gt) - gt_bboxes[:, 1]
+ r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].reshape(-1, num_gt)
+ b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].reshape(-1, num_gt)
+ is_in_gts = np.stack([l_, t_, r_, b_], axis=1).min(axis=1) > 0.01
+ is_pos = is_pos & is_in_gts
+
+ # if an anchor box is assigned to multiple gts,
+ # the one with the highest IoU will be selected.
+ overlaps_inf = -np.inf * np.ones_like(overlaps).T.reshape(-1)
+ index = candidate_idxs.reshape(-1)[is_pos.reshape(-1)]
+ overlaps_inf[index] = overlaps.T.reshape(-1)[index]
+ overlaps_inf = overlaps_inf.reshape(num_gt, -1).T
+
+ max_overlaps = overlaps_inf.max(axis=1)
+ argmax_overlaps = overlaps_inf.argmax(axis=1)
+ assigned_gt_inds[max_overlaps !=
+ -np.inf] = argmax_overlaps[max_overlaps != -np.inf] + 1
+
+ return assigned_gt_inds, max_overlaps
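+
+
+# A minimal, hedged sketch of calling the assigner (anchors, levels and the
+# gt box below are made-up values; real inputs come from the detection head):
+#
+#   assigner = ATSSAssigner(topk=2)
+#   anchors = np.array([[0., 0., 10., 10.], [5., 5., 15., 15.],
+#                       [20., 20., 30., 30.], [40., 40., 50., 50.]])
+#   gts = np.array([[0., 0., 12., 12.]])
+#   inds, ious = assigner(anchors, [4], gts, gt_labels=np.array([0]))
+#   # inds[i] == 1 marks anchor i as positive for gt 0; 0 means background.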
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/autoaugment_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/autoaugment_utils.py
new file mode 100644
index 000000000..cfa89d374
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/autoaugment_utils.py
@@ -0,0 +1,1586 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# Reference:
+# https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/autoaugment_utils.py
+"""AutoAugment util file."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import inspect
+import math
+from PIL import Image, ImageEnhance
+import numpy as np
+import cv2
+from copy import deepcopy
+
+# This signifies the max integer that the controller RNN could predict for the
+# augmentation scheme.
+_MAX_LEVEL = 10.
+
+# Represents an invalid bounding box that is used for checking for padding
+# lists of bounding box coordinates for a few augmentation operations
+_INVALID_BOX = [[-1.0, -1.0, -1.0, -1.0]]
+
+
+def policy_v0():
+ """Autoaugment policy that was used in AutoAugment Detection Paper."""
+ # Each tuple is an augmentation operation of the form
+ # (operation, probability, magnitude). Each element in policy is a
+ # sub-policy that will be applied sequentially on the image.
+ policy = [
+ [('TranslateX_BBox', 0.6, 4), ('Equalize', 0.8, 10)],
+ [('TranslateY_Only_BBoxes', 0.2, 2), ('Cutout', 0.8, 8)],
+ [('Sharpness', 0.0, 8), ('ShearX_BBox', 0.4, 0)],
+ [('ShearY_BBox', 1.0, 2), ('TranslateY_Only_BBoxes', 0.6, 6)],
+ [('Rotate_BBox', 0.6, 10), ('Color', 1.0, 6)],
+ ]
+ return policy
+
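+# Reading the first sub-policy above (a worked example): with probability
+# 0.6 apply 'TranslateX_BBox' at magnitude 4, then with probability 0.8
+# apply 'Equalize' at magnitude 10. Magnitudes range over [0, _MAX_LEVEL]
+# and are mapped to operation-specific argument ranges later in this file.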
+
+def policy_v1():
+ """Autoaugment policy that was used in AutoAugment Detection Paper."""
+ # Each tuple is an augmentation operation of the form
+ # (operation, probability, magnitude). Each element in policy is a
+ # sub-policy that will be applied sequentially on the image.
+ policy = [
+ [('TranslateX_BBox', 0.6, 4), ('Equalize', 0.8, 10)],
+ [('TranslateY_Only_BBoxes', 0.2, 2), ('Cutout', 0.8, 8)],
+ [('Sharpness', 0.0, 8), ('ShearX_BBox', 0.4, 0)],
+ [('ShearY_BBox', 1.0, 2), ('TranslateY_Only_BBoxes', 0.6, 6)],
+ [('Rotate_BBox', 0.6, 10), ('Color', 1.0, 6)],
+ [('Color', 0.0, 0), ('ShearX_Only_BBoxes', 0.8, 4)],
+ [('ShearY_Only_BBoxes', 0.8, 2), ('Flip_Only_BBoxes', 0.0, 10)],
+ [('Equalize', 0.6, 10), ('TranslateX_BBox', 0.2, 2)],
+ [('Color', 1.0, 10), ('TranslateY_Only_BBoxes', 0.4, 6)],
+        [('Rotate_BBox', 0.8, 10), ('Contrast', 0.0, 10)],
+ [('Cutout', 0.2, 2), ('Brightness', 0.8, 10)],
+ [('Color', 1.0, 6), ('Equalize', 1.0, 2)],
+ [('Cutout_Only_BBoxes', 0.4, 6), ('TranslateY_Only_BBoxes', 0.8, 2)],
+ [('Color', 0.2, 8), ('Rotate_BBox', 0.8, 10)],
+ [('Sharpness', 0.4, 4), ('TranslateY_Only_BBoxes', 0.0, 4)],
+ [('Sharpness', 1.0, 4), ('SolarizeAdd', 0.4, 4)],
+ [('Rotate_BBox', 1.0, 8), ('Sharpness', 0.2, 8)],
+ [('ShearY_BBox', 0.6, 10), ('Equalize_Only_BBoxes', 0.6, 8)],
+ [('ShearX_BBox', 0.2, 6), ('TranslateY_Only_BBoxes', 0.2, 10)],
+ [('SolarizeAdd', 0.6, 8), ('Brightness', 0.8, 10)],
+ ]
+ return policy
+
+
+def policy_vtest():
+ """Autoaugment test policy for debugging."""
+ # Each tuple is an augmentation operation of the form
+ # (operation, probability, magnitude). Each element in policy is a
+ # sub-policy that will be applied sequentially on the image.
+ policy = [[('TranslateX_BBox', 1.0, 4), ('Equalize', 1.0, 10)], ]
+ return policy
+
+
+def policy_v2():
+ """Additional policy that performs well on object detection."""
+ # Each tuple is an augmentation operation of the form
+ # (operation, probability, magnitude). Each element in policy is a
+ # sub-policy that will be applied sequentially on the image.
+ policy = [
+ [('Color', 0.0, 6), ('Cutout', 0.6, 8), ('Sharpness', 0.4, 8)],
+ [('Rotate_BBox', 0.4, 8), ('Sharpness', 0.4, 2),
+ ('Rotate_BBox', 0.8, 10)],
+ [('TranslateY_BBox', 1.0, 8), ('AutoContrast', 0.8, 2)],
+ [('AutoContrast', 0.4, 6), ('ShearX_BBox', 0.8, 8),
+ ('Brightness', 0.0, 10)],
+ [('SolarizeAdd', 0.2, 6), ('Contrast', 0.0, 10),
+ ('AutoContrast', 0.6, 0)],
+ [('Cutout', 0.2, 0), ('Solarize', 0.8, 8), ('Color', 1.0, 4)],
+ [('TranslateY_BBox', 0.0, 4), ('Equalize', 0.6, 8),
+ ('Solarize', 0.0, 10)],
+ [('TranslateY_BBox', 0.2, 2), ('ShearY_BBox', 0.8, 8),
+ ('Rotate_BBox', 0.8, 8)],
+ [('Cutout', 0.8, 8), ('Brightness', 0.8, 8), ('Cutout', 0.2, 2)],
+ [('Color', 0.8, 4), ('TranslateY_BBox', 1.0, 6),
+ ('Rotate_BBox', 0.6, 6)],
+ [('Rotate_BBox', 0.6, 10), ('BBox_Cutout', 1.0, 4), ('Cutout', 0.2, 8)],
+ [('Rotate_BBox', 0.0, 0), ('Equalize', 0.6, 6),
+ ('ShearY_BBox', 0.6, 8)],
+ [('Brightness', 0.8, 8), ('AutoContrast', 0.4, 2),
+ ('Brightness', 0.2, 2)],
+ [('TranslateY_BBox', 0.4, 8), ('Solarize', 0.4, 6),
+ ('SolarizeAdd', 0.2, 10)],
+ [('Contrast', 1.0, 10), ('SolarizeAdd', 0.2, 8), ('Equalize', 0.2, 4)],
+ ]
+ return policy
+
+
+def policy_v3():
+ """"Additional policy that performs well on object detection."""
+ # Each tuple is an augmentation operation of the form
+ # (operation, probability, magnitude). Each element in policy is a
+ # sub-policy that will be applied sequentially on the image.
+ policy = [
+ [('Posterize', 0.8, 2), ('TranslateX_BBox', 1.0, 8)],
+ [('BBox_Cutout', 0.2, 10), ('Sharpness', 1.0, 8)],
+ [('Rotate_BBox', 0.6, 8), ('Rotate_BBox', 0.8, 10)],
+ [('Equalize', 0.8, 10), ('AutoContrast', 0.2, 10)],
+ [('SolarizeAdd', 0.2, 2), ('TranslateY_BBox', 0.2, 8)],
+ [('Sharpness', 0.0, 2), ('Color', 0.4, 8)],
+ [('Equalize', 1.0, 8), ('TranslateY_BBox', 1.0, 8)],
+ [('Posterize', 0.6, 2), ('Rotate_BBox', 0.0, 10)],
+ [('AutoContrast', 0.6, 0), ('Rotate_BBox', 1.0, 6)],
+ [('Equalize', 0.0, 4), ('Cutout', 0.8, 10)],
+ [('Brightness', 1.0, 2), ('TranslateY_BBox', 1.0, 6)],
+ [('Contrast', 0.0, 2), ('ShearY_BBox', 0.8, 0)],
+ [('AutoContrast', 0.8, 10), ('Contrast', 0.2, 10)],
+ [('Rotate_BBox', 1.0, 10), ('Cutout', 1.0, 10)],
+ [('SolarizeAdd', 0.8, 6), ('Equalize', 0.8, 8)],
+ ]
+ return policy
+
+
+def _equal(val1, val2, eps=1e-8):
+ return abs(val1 - val2) <= eps
+
+
+def blend(image1, image2, factor):
+ """Blend image1 and image2 using 'factor'.
+
+ Factor can be above 0.0. A value of 0.0 means only image1 is used.
+ A value of 1.0 means only image2 is used. A value between 0.0 and
+ 1.0 means we linearly interpolate the pixel values between the two
+ images. A value greater than 1.0 "extrapolates" the difference
+ between the two pixel values, and we clip the results to values
+ between 0 and 255.
+
+ Args:
+ image1: An image Tensor of type uint8.
+ image2: An image Tensor of type uint8.
+ factor: A floating point value above 0.0.
+
+ Returns:
+ A blended image Tensor of type uint8.
+ """
+ if factor == 0.0:
+ return image1
+ if factor == 1.0:
+ return image2
+
+ image1 = image1.astype(np.float32)
+ image2 = image2.astype(np.float32)
+
+ difference = image2 - image1
+ scaled = factor * difference
+
+ # Do addition in float.
+ temp = image1 + scaled
+
+ # Interpolate
+ if factor > 0.0 and factor < 1.0:
+ # Interpolation means we always stay within 0 and 255.
+ return temp.astype(np.uint8)
+
+ # Extrapolate:
+ #
+ # We need to clip and then cast.
+ return np.clip(temp, a_min=0, a_max=255).astype(np.uint8)
+
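+# Hedged numeric example of blend: with factor=0.5 the result is the
+# pixel-wise midpoint, e.g. blending an all-0 and an all-100 uint8 image
+# yields an all-50 image; factor > 1.0 extrapolates and clips to [0, 255].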
+
+def cutout(image, pad_size, replace=0):
+ """Apply cutout (https://arxiv.org/abs/1708.04552) to image.
+
+ This operation applies a (2*pad_size x 2*pad_size) mask of zeros to
+ a random location within `img`. The pixel values filled in will be of the
+    value `replace`. The location where the mask is applied is chosen
+    uniformly at random over the whole image.
+
+ Args:
+ image: An image Tensor of type uint8.
+      pad_size: Specifies the half-size of the zero mask that is generated
+        and applied to the image. The mask will be of size
+        (2*pad_size x 2*pad_size).
+ replace: What pixel value to fill in the image in the area that has
+ the cutout mask applied to it.
+
+ Returns:
+ An image Tensor that is of type uint8.
+ Example:
+      img = cv2.imread("/home/vis/gry/train/img_data/test.jpg")
+ new_img = cutout(img, pad_size=50, replace=0)
+ """
+ image_height, image_width = image.shape[0], image.shape[1]
+
+ cutout_center_height = np.random.randint(low=0, high=image_height)
+ cutout_center_width = np.random.randint(low=0, high=image_width)
+
+ lower_pad = np.maximum(0, cutout_center_height - pad_size)
+ upper_pad = np.maximum(0, image_height - cutout_center_height - pad_size)
+ left_pad = np.maximum(0, cutout_center_width - pad_size)
+ right_pad = np.maximum(0, image_width - cutout_center_width - pad_size)
+
+ cutout_shape = [
+ image_height - (lower_pad + upper_pad),
+ image_width - (left_pad + right_pad)
+ ]
+ padding_dims = [[lower_pad, upper_pad], [left_pad, right_pad]]
+ mask = np.pad(np.zeros(
+ cutout_shape, dtype=image.dtype),
+ padding_dims,
+ 'constant',
+ constant_values=1)
+ mask = np.expand_dims(mask, -1)
+ mask = np.tile(mask, [1, 1, 3])
+ image = np.where(
+ np.equal(mask, 0),
+ np.ones_like(
+ image, dtype=image.dtype) * replace,
+ image)
+ return image.astype(np.uint8)
+
+
+def solarize(image, threshold=128):
+ # For each pixel in the image, select the pixel
+ # if the value is less than the threshold.
+ # Otherwise, subtract 255 from the pixel.
+ return np.where(image < threshold, image, 255 - image)
+
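+# Hedged example of solarize: pixels at or above the threshold are inverted,
+# e.g. solarize(np.array([100, 200], dtype=np.uint8)) -> [100, 55].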
+
+def solarize_add(image, addition=0, threshold=128):
+ # For each pixel in the image less than threshold
+ # we add 'addition' amount to it and then clip the
+ # pixel value to be between 0 and 255. The value
+ # of 'addition' is between -128 and 128.
+ added_image = image.astype(np.int64) + addition
+ added_image = np.clip(added_image, a_min=0, a_max=255).astype(np.uint8)
+ return np.where(image < threshold, added_image, image)
+
+
+def color(image, factor):
+ """use cv2 to deal"""
+ gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+ degenerate = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
+ return blend(degenerate, image, factor)
+
+
+# refer to https://github.com/4uiiurz1/pytorch-auto-augment/blob/024b2eac4140c38df8342f09998e307234cafc80/auto_augment.py#L197
+def contrast(img, factor):
+ img = ImageEnhance.Contrast(Image.fromarray(img)).enhance(factor)
+ return np.array(img)
+
+
+def brightness(image, factor):
+ """Equivalent of PIL Brightness."""
+ degenerate = np.zeros_like(image)
+ return blend(degenerate, image, factor)
+
+
+def posterize(image, bits):
+ """Equivalent of PIL Posterize."""
+ shift = 8 - bits
+ return np.left_shift(np.right_shift(image, shift), shift)
+
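+# Hedged example of posterize: only the top `bits` bits of each value are
+# kept, e.g. posterize(np.uint8(173), 2) -> 128 (0b10101101 -> 0b10000000).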
+
+def rotate(image, degrees, replace):
+ """Rotates the image by degrees either clockwise or counterclockwise.
+
+ Args:
+ image: An image Tensor of type uint8.
+ degrees: Float, a scalar angle in degrees to rotate all images by. If
+      degrees is positive the image will be rotated clockwise; otherwise it
+      will be rotated counterclockwise.
+ replace: A one or three value 1D tensor to fill empty pixels caused by
+ the rotate operation.
+
+ Returns:
+ The rotated version of image.
+ """
+ image = wrap(image)
+ image = Image.fromarray(image)
+ image = image.rotate(degrees)
+ image = np.array(image, dtype=np.uint8)
+ return unwrap(image, replace)
+
+
+def random_shift_bbox(image,
+ bbox,
+ pixel_scaling,
+ replace,
+ new_min_bbox_coords=None):
+ """Move the bbox and the image content to a slightly new random location.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ The potential values for the new min corner of the bbox will be between
+      [old_min - pixel_scaling * bbox_height/2,
+       old_min + pixel_scaling * bbox_height/2].
+ pixel_scaling: A float between 0 and 1 that specifies the pixel range
+ that the new bbox location will be sampled from.
+ replace: A one or three value 1D tensor to fill empty pixels.
+ new_min_bbox_coords: If not None, then this is a tuple that specifies the
+ (min_y, min_x) coordinates of the new bbox. Normally this is randomly
+ specified, but this allows it to be manually set. The coordinates are
+ the absolute coordinates between 0 and image height/width and are int32.
+
+ Returns:
+ The new image that will have the shifted bbox location in it along with
+ the new bbox that contains the new coordinates.
+ """
+ # Obtains image height and width and create helper clip functions.
+ image_height, image_width = image.shape[0], image.shape[1]
+ image_height = float(image_height)
+ image_width = float(image_width)
+
+ def clip_y(val):
+ return np.clip(val, a_min=0, a_max=image_height - 1).astype(np.int32)
+
+ def clip_x(val):
+ return np.clip(val, a_min=0, a_max=image_width - 1).astype(np.int32)
+
+ # Convert bbox to pixel coordinates.
+ min_y = int(image_height * bbox[0])
+ min_x = int(image_width * bbox[1])
+ max_y = clip_y(image_height * bbox[2])
+ max_x = clip_x(image_width * bbox[3])
+
+ bbox_height, bbox_width = (max_y - min_y + 1, max_x - min_x + 1)
+ image_height = int(image_height)
+ image_width = int(image_width)
+
+ # Select the new min/max bbox ranges that are used for sampling the
+ # new min x/y coordinates of the shifted bbox.
+ minval_y = clip_y(min_y - np.int32(pixel_scaling * float(bbox_height) /
+ 2.0))
+ maxval_y = clip_y(min_y + np.int32(pixel_scaling * float(bbox_height) /
+ 2.0))
+ minval_x = clip_x(min_x - np.int32(pixel_scaling * float(bbox_width) / 2.0))
+ maxval_x = clip_x(min_x + np.int32(pixel_scaling * float(bbox_width) / 2.0))
+
+ # Sample and calculate the new unclipped min/max coordinates of the new bbox.
+ if new_min_bbox_coords is None:
+ unclipped_new_min_y = np.random.randint(
+ low=minval_y, high=maxval_y, dtype=np.int32)
+ unclipped_new_min_x = np.random.randint(
+ low=minval_x, high=maxval_x, dtype=np.int32)
+ else:
+ unclipped_new_min_y, unclipped_new_min_x = (
+ clip_y(new_min_bbox_coords[0]), clip_x(new_min_bbox_coords[1]))
+ unclipped_new_max_y = unclipped_new_min_y + bbox_height - 1
+ unclipped_new_max_x = unclipped_new_min_x + bbox_width - 1
+
+ # Determine if any of the new bbox was shifted outside the current image.
+ # This is used for determining if any of the original bbox content should be
+ # discarded.
+ new_min_y, new_min_x, new_max_y, new_max_x = (
+ clip_y(unclipped_new_min_y), clip_x(unclipped_new_min_x),
+ clip_y(unclipped_new_max_y), clip_x(unclipped_new_max_x))
+ shifted_min_y = (new_min_y - unclipped_new_min_y) + min_y
+ shifted_max_y = max_y - (unclipped_new_max_y - new_max_y)
+ shifted_min_x = (new_min_x - unclipped_new_min_x) + min_x
+ shifted_max_x = max_x - (unclipped_new_max_x - new_max_x)
+
+ # Create the new bbox tensor by converting pixel integer values to floats.
+ new_bbox = np.stack([
+ float(new_min_y) / float(image_height), float(new_min_x) /
+ float(image_width), float(new_max_y) / float(image_height),
+ float(new_max_x) / float(image_width)
+ ])
+
+ # Copy the contents in the bbox and fill the old bbox location
+ # with gray (128).
+ bbox_content = image[shifted_min_y:shifted_max_y + 1, shifted_min_x:
+ shifted_max_x + 1, :]
+
+ def mask_and_add_image(min_y_, min_x_, max_y_, max_x_, mask, content_tensor,
+ image_):
+ """Applies mask to bbox region in image then adds content_tensor to it."""
+ mask = np.pad(mask, [[min_y_, (image_height - 1) - max_y_],
+ [min_x_, (image_width - 1) - max_x_], [0, 0]],
+ 'constant',
+ constant_values=1)
+
+ content_tensor = np.pad(content_tensor,
+ [[min_y_, (image_height - 1) - max_y_],
+ [min_x_, (image_width - 1) - max_x_], [0, 0]],
+ 'constant',
+ constant_values=0)
+ return image_ * mask + content_tensor
+
+ # Zero out original bbox location.
+ mask = np.zeros_like(image)[min_y:max_y + 1, min_x:max_x + 1, :]
+ grey_tensor = np.zeros_like(mask) + replace[0]
+ image = mask_and_add_image(min_y, min_x, max_y, max_x, mask, grey_tensor,
+ image)
+
+ # Fill in bbox content to new bbox location.
+ mask = np.zeros_like(bbox_content)
+ image = mask_and_add_image(new_min_y, new_min_x, new_max_y, new_max_x, mask,
+ bbox_content, image)
+
+ return image.astype(np.uint8), new_bbox
+
+
+def _clip_bbox(min_y, min_x, max_y, max_x):
+ """Clip bounding box coordinates between 0 and 1.
+
+ Args:
+ min_y: Normalized bbox coordinate of type float between 0 and 1.
+ min_x: Normalized bbox coordinate of type float between 0 and 1.
+ max_y: Normalized bbox coordinate of type float between 0 and 1.
+ max_x: Normalized bbox coordinate of type float between 0 and 1.
+
+ Returns:
+ Clipped coordinate values between 0 and 1.
+ """
+ min_y = np.clip(min_y, a_min=0, a_max=1.0)
+ min_x = np.clip(min_x, a_min=0, a_max=1.0)
+ max_y = np.clip(max_y, a_min=0, a_max=1.0)
+ max_x = np.clip(max_x, a_min=0, a_max=1.0)
+ return min_y, min_x, max_y, max_x
+
+
+def _check_bbox_area(min_y, min_x, max_y, max_x, delta=0.05):
+ """Adjusts bbox coordinates to make sure the area is > 0.
+
+ Args:
+ min_y: Normalized bbox coordinate of type float between 0 and 1.
+ min_x: Normalized bbox coordinate of type float between 0 and 1.
+ max_y: Normalized bbox coordinate of type float between 0 and 1.
+ max_x: Normalized bbox coordinate of type float between 0 and 1.
+ delta: Float, this is used to create a gap of size 2 * delta between
+ bbox min/max coordinates that are the same on the boundary.
+ This prevents the bbox from having an area of zero.
+
+ Returns:
+ Tuple of new bbox coordinates between 0 and 1 that will now have a
+ guaranteed area > 0.
+ """
+ height = max_y - min_y
+ width = max_x - min_x
+
+ def _adjust_bbox_boundaries(min_coord, max_coord):
+ # Make sure max is never 0 and min is never 1.
+ max_coord = np.maximum(max_coord, 0.0 + delta)
+ min_coord = np.minimum(min_coord, 1.0 - delta)
+ return min_coord, max_coord
+
+ if _equal(height, 0):
+ min_y, max_y = _adjust_bbox_boundaries(min_y, max_y)
+
+ if _equal(width, 0):
+ min_x, max_x = _adjust_bbox_boundaries(min_x, max_x)
+
+ return min_y, min_x, max_y, max_x
+
+
+def _scale_bbox_only_op_probability(prob):
+ """Reduce the probability of the bbox-only operation.
+
+ Probability is reduced so that we do not distort the content of too many
+    bounding boxes that are close to each other. The value of 3.0 was a
+    hyperparameter chosen when designing the autoaugment algorithm that we
+    found empirically to work well.
+
+ Args:
+ prob: Float that is the probability of applying the bbox-only operation.
+
+ Returns:
+ Reduced probability.
+ """
+ return prob / 3.0
+
+
+def _apply_bbox_augmentation(image, bbox, augmentation_func, *args):
+ """Applies augmentation_func to the subsection of image indicated by bbox.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ augmentation_func: Augmentation function that will be applied to the
+ subsection of image.
+ *args: Additional parameters that will be passed into augmentation_func
+ when it is called.
+
+ Returns:
+ A modified version of image, where the bbox location in the image will
+ have `augmentation_func` applied to it.
+ """
+ image_height = image.shape[0]
+ image_width = image.shape[1]
+
+ min_y = int(image_height * bbox[0])
+ min_x = int(image_width * bbox[1])
+ max_y = int(image_height * bbox[2])
+ max_x = int(image_width * bbox[3])
+
+ # Clip to be sure the max values do not fall out of range.
+ max_y = np.minimum(max_y, image_height - 1)
+ max_x = np.minimum(max_x, image_width - 1)
+
+ # Get the sub-tensor that is the image within the bounding box region.
+ bbox_content = image[min_y:max_y + 1, min_x:max_x + 1, :]
+
+ # Apply the augmentation function to the bbox portion of the image.
+ augmented_bbox_content = augmentation_func(bbox_content, *args)
+
+ # Pad the augmented_bbox_content and the mask to match the shape of the
+ # original image. The content is zero-padded so that, combined with the
+ # one-padded mask below, pixels outside the bbox keep their original values.
+ augmented_bbox_content = np.pad(
+ augmented_bbox_content, [[min_y, (image_height - 1) - max_y],
+ [min_x, (image_width - 1) - max_x], [0, 0]],
+ 'constant',
+ constant_values=0)
+
+ # Create a mask that will be used to zero out a part of the original image.
+ mask_tensor = np.zeros_like(bbox_content)
+
+ mask_tensor = np.pad(mask_tensor,
+ [[min_y, (image_height - 1) - max_y],
+ [min_x, (image_width - 1) - max_x], [0, 0]],
+ 'constant',
+ constant_values=1)
+ # Replace the old bbox content with the new augmented content.
+ image = image * mask_tensor + augmented_bbox_content
+ return image.astype(np.uint8)
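+
+# Illustrative sketch: darken only the bbox region of a dummy image; the
+# lambda stands in for any per-patch augmentation function.
+# >>> img = np.full((100, 100, 3), 200, dtype=np.uint8)
+# >>> out = _apply_bbox_augmentation(img, [0.2, 0.2, 0.6, 0.6],
+# ...                                lambda patch: patch // 2)
+# >>> out.dtype, out.shape
+# (dtype('uint8'), (100, 100, 3))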
+
+
+def _concat_bbox(bbox, bboxes):
+ """Helper function that concates bbox to bboxes along the first dimension."""
+
+ # Note if all elements in bboxes are -1 (_INVALID_BOX), then this means
+ # we discard bboxes and start the bboxes Tensor with the current bbox.
+ bboxes_sum_check = np.sum(bboxes)
+ bbox = np.expand_dims(bbox, 0)
+ # This check will be true when it is an _INVALID_BOX
+ if _equal(bboxes_sum_check, -4):
+ bboxes = bbox
+ else:
+ bboxes = np.concatenate([bboxes, bbox], 0)
+ return bboxes
+
+
+def _apply_bbox_augmentation_wrapper(image, bbox, new_bboxes, prob,
+ augmentation_func, func_changes_bbox,
+ *args):
+ """Applies _apply_bbox_augmentation with probability prob.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ new_bboxes: 2D Tensor that is a list of the bboxes in the image after they
+ have been altered by aug_func. These will only be changed when
+ func_changes_bbox is set to true. Each bbox has 4 elements
+ (min_y, min_x, max_y, max_x) of type float that are the normalized
+ bbox coordinates between 0 and 1.
+ prob: Float that is the probability of applying _apply_bbox_augmentation.
+ augmentation_func: Augmentation function that will be applied to the
+ subsection of image.
+ func_changes_bbox: Boolean. Does augmentation_func return bbox in addition
+ to image.
+ *args: Additional parameters that will be passed into augmentation_func
+ when it is called.
+
+ Returns:
+ A tuple. First element is a modified version of image, where the bbox
+ location in the image will have augmentation_func applied to it if it is
+ chosen to be called with probability `prob`. The second element is a
+ Tensor of Tensors of length 4 that will contain the altered bbox after
+ applying augmentation_func.
+ """
+ should_apply_op = (np.random.rand() + prob >= 1)
+ if func_changes_bbox:
+ if should_apply_op:
+ augmented_image, bbox = augmentation_func(image, bbox, *args)
+ else:
+ augmented_image, bbox = (image, bbox)
+ else:
+ if should_apply_op:
+ augmented_image = _apply_bbox_augmentation(image, bbox,
+ augmentation_func, *args)
+ else:
+ augmented_image = image
+ new_bboxes = _concat_bbox(bbox, new_bboxes)
+ return augmented_image.astype(np.uint8), new_bboxes
+
+
+def _apply_multi_bbox_augmentation(image, bboxes, prob, aug_func,
+ func_changes_bbox, *args):
+ """Applies aug_func to the image for each bbox in bboxes.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
+ has 4 elements (min_y, min_x, max_y, max_x) of type float.
+ prob: Float that is the probability of applying aug_func to a specific
+ bounding box within the image.
+ aug_func: Augmentation function that will be applied to the
+ subsections of image indicated by the bbox values in bboxes.
+ func_changes_bbox: Boolean. Does augmentation_func return bbox in addition
+ to image.
+ *args: Additional parameters that will be passed into augmentation_func
+ when it is called.
+
+ Returns:
+ A modified version of image, where each bbox location in the image will
+ have augmentation_func applied to it if it is chosen to be called with
+ probability prob independently across all bboxes. Also the final
+ bboxes are returned that will be unchanged if func_changes_bbox is set to
+ false and if true, the new altered ones will be returned.
+ """
+ # Will keep track of the new altered bboxes after aug_func is repeatedly
+ # applied. The -1 values are a dummy value and this first Tensor will be
+ # removed upon appending the first real bbox.
+ new_bboxes = np.array(_INVALID_BOX)
+
+ # If the bboxes are empty, then just give it _INVALID_BOX. The result
+ # will be thrown away.
+ bboxes = np.array((_INVALID_BOX)) if bboxes.size == 0 else bboxes
+
+ assert bboxes.shape[1] == 4, "bboxes.shape[1] must be 4"
+
+ # pylint:disable=g-long-lambda
+ # pylint:disable=line-too-long
+ wrapped_aug_func = lambda _image, bbox, _new_bboxes: _apply_bbox_augmentation_wrapper(_image, bbox, _new_bboxes, prob, aug_func, func_changes_bbox, *args)
+ # pylint:enable=g-long-lambda
+ # pylint:enable=line-too-long
+
+ # Setup the while_loop.
+ num_bboxes = bboxes.shape[0] # We loop until we go over all bboxes.
+ idx = 0 # Counter for the while loop.
+
+ # Condition function that ends the loop once we have gone over all bboxes.
+ # images_and_bboxes contains (_image, _new_bboxes).
+ def cond(_idx, _images_and_bboxes):
+ return _idx < num_bboxes
+
+ # Ideally we would shuffle the bboxes so that the augmentation order is not
+ # deterministic, but we cannot shuffle them here: the bboxes carry no class
+ # information, so their order must stay aligned with the class labels.
+ loop_bboxes = deepcopy(bboxes)
+
+ # Main function of while_loop where we repeatedly apply augmentation on the
+ # bboxes in the image.
+ # pylint:disable=g-long-lambda
+ body = lambda _idx, _images_and_bboxes: [
+ _idx + 1, wrapped_aug_func(_images_and_bboxes[0],
+ loop_bboxes[_idx],
+ _images_and_bboxes[1])]
+ while (cond(idx, (image, new_bboxes))):
+ idx, (image, new_bboxes) = body(idx, (image, new_bboxes))
+
+ # Either return the altered bboxes or the original ones depending on if
+ # we altered them in anyway.
+ if func_changes_bbox:
+ final_bboxes = new_bboxes
+ else:
+ final_bboxes = bboxes
+ return image, final_bboxes
+
+
+def _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob, aug_func,
+ func_changes_bbox, *args):
+ """Checks to be sure num bboxes > 0 before calling inner function."""
+ num_bboxes = len(bboxes)
+ new_image = deepcopy(image)
+ new_bboxes = deepcopy(bboxes)
+ if num_bboxes != 0:
+ new_image, new_bboxes = _apply_multi_bbox_augmentation(
+ new_image, new_bboxes, prob, aug_func, func_changes_bbox, *args)
+ return new_image, new_bboxes
+
+
+def rotate_only_bboxes(image, bboxes, prob, degrees, replace):
+ """Apply rotate to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, rotate, func_changes_bbox, degrees, replace)
+
+
+def shear_x_only_bboxes(image, bboxes, prob, level, replace):
+ """Apply shear_x to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, shear_x, func_changes_bbox, level, replace)
+
+
+def shear_y_only_bboxes(image, bboxes, prob, level, replace):
+ """Apply shear_y to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, shear_y, func_changes_bbox, level, replace)
+
+
+def translate_x_only_bboxes(image, bboxes, prob, pixels, replace):
+ """Apply translate_x to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, translate_x, func_changes_bbox, pixels, replace)
+
+
+def translate_y_only_bboxes(image, bboxes, prob, pixels, replace):
+ """Apply translate_y to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, translate_y, func_changes_bbox, pixels, replace)
+
+
+def flip_only_bboxes(image, bboxes, prob):
+ """Apply flip_lr to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob,
+ np.fliplr, func_changes_bbox)
+
+
+def solarize_only_bboxes(image, bboxes, prob, threshold):
+ """Apply solarize to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob, solarize,
+ func_changes_bbox, threshold)
+
+
+def equalize_only_bboxes(image, bboxes, prob):
+ """Apply equalize to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(image, bboxes, prob, equalize,
+ func_changes_bbox)
+
+
+def cutout_only_bboxes(image, bboxes, prob, pad_size, replace):
+ """Apply cutout to each bbox in the image with probability prob."""
+ func_changes_bbox = False
+ prob = _scale_bbox_only_op_probability(prob)
+ return _apply_multi_bbox_augmentation_wrapper(
+ image, bboxes, prob, cutout, func_changes_bbox, pad_size, replace)
+
+
+def _rotate_bbox(bbox, image_height, image_width, degrees):
+ """Rotates the bbox coordinated by degrees.
+
+ Args:
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ image_height: Int, height of the image.
+ image_width: Int, width of the image.
+ degrees: Float, a scalar angle in degrees to rotate all images by. If
+ degrees is positive the image will be rotated clockwise otherwise it will
+ be rotated counterclockwise.
+
+ Returns:
+ A tensor of the same shape as bbox, but now with the rotated coordinates.
+ """
+ image_height, image_width = (float(image_height), float(image_width))
+
+ # Convert from degrees to radians.
+ degrees_to_radians = math.pi / 180.0
+ radians = degrees * degrees_to_radians
+
+ # Translate the bbox to the center of the image and turn the normalized 0-1
+ # coordinates to absolute pixel locations.
+ # Y coordinates are made negative as the y axis of images goes down with
+ # increasing pixel values, so we negate to make sure x axis and y axis points
+ # are in the traditionally positive direction.
+ min_y = -int(image_height * (bbox[0] - 0.5))
+ min_x = int(image_width * (bbox[1] - 0.5))
+ max_y = -int(image_height * (bbox[2] - 0.5))
+ max_x = int(image_width * (bbox[3] - 0.5))
+ coordinates = np.stack([[min_y, min_x], [min_y, max_x], [max_y, min_x],
+ [max_y, max_x]]).astype(np.float32)
+ # Rotate the coordinates according to the rotation matrix clockwise if
+ # radians is positive, else negative
+ rotation_matrix = np.stack([[math.cos(radians), math.sin(radians)],
+ [-math.sin(radians), math.cos(radians)]])
+ new_coords = np.matmul(rotation_matrix,
+ np.transpose(coordinates)).astype(np.int32)
+
+ # Find min/max values and convert them back to normalized 0-1 floats.
+ min_y = -(float(np.max(new_coords[0, :])) / image_height - 0.5)
+ min_x = float(np.min(new_coords[1, :])) / image_width + 0.5
+ max_y = -(float(np.min(new_coords[0, :])) / image_height - 0.5)
+ max_x = float(np.max(new_coords[1, :])) / image_width + 0.5
+
+ # Clip the bboxes to be sure they fall between [0, 1].
+ min_y, min_x, max_y, max_x = _clip_bbox(min_y, min_x, max_y, max_x)
+ min_y, min_x, max_y, max_x = _check_bbox_area(min_y, min_x, max_y, max_x)
+ return np.stack([min_y, min_x, max_y, max_x])
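+
+# Illustrative sketch: rotating a square bbox centered in a 100x100 image by
+# 90 degrees leaves it in place, up to one-pixel integer rounding.
+# >>> _rotate_bbox(np.array([0.25, 0.25, 0.75, 0.75]), 100, 100, 90.0)
+# array([0.25, 0.25, 0.75, 0.75])  # approximately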
+
+
+def rotate_with_bboxes(image, bboxes, degrees, replace):
+ """Rotates the image and adjusts the bboxes to match the rotated image."""
+ # Rotate the image.
+ image = rotate(image, degrees, replace)
+
+ # Convert bbox coordinates to pixel values.
+ image_height, image_width = image.shape[:2]
+ # pylint:disable=g-long-lambda
+ wrapped_rotate_bbox = lambda bbox: _rotate_bbox(bbox, image_height, image_width, degrees)
+ # pylint:enable=g-long-lambda
+ new_bboxes = np.zeros_like(bboxes)
+ for idx in range(len(bboxes)):
+ new_bboxes[idx] = wrapped_rotate_bbox(bboxes[idx])
+ return image, new_bboxes
+
+
+def translate_x(image, pixels, replace):
+ """Equivalent of PIL Translate in X dimension."""
+ image = Image.fromarray(wrap(image))
+ image = image.transform(image.size, Image.AFFINE, (1, 0, pixels, 0, 1, 0))
+ return unwrap(np.array(image), replace)
+
+
+def translate_y(image, pixels, replace):
+ """Equivalent of PIL Translate in Y dimension."""
+ image = Image.fromarray(wrap(image))
+ image = image.transform(image.size, Image.AFFINE, (1, 0, 0, 0, 1, pixels))
+ return unwrap(np.array(image), replace)
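+
+# Illustrative sketch: a positive pixel shift moves the image content left;
+# vacated pixels are filled with the replace color by unwrap.
+# >>> img = np.random.randint(0, 256, (50, 50, 3), dtype=np.uint8)
+# >>> shifted = translate_x(img, 10, replace=[128, 128, 128])
+# >>> shifted.shape
+# (50, 50, 3)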
+
+
+def _shift_bbox(bbox, image_height, image_width, pixels, shift_horizontal):
+ """Shifts the bbox coordinates by pixels.
+
+ Args:
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ image_height: Int, height of the image.
+ image_width: Int, width of the image.
+ pixels: An int. How many pixels to shift the bbox.
+ shift_horizontal: Boolean. If true then shift in X dimension else shift in
+ Y dimension.
+
+ Returns:
+ A tensor of the same shape as bbox, but now with the shifted coordinates.
+ """
+ pixels = int(pixels)
+ # Convert bbox to integer pixel locations.
+ min_y = int(float(image_height) * bbox[0])
+ min_x = int(float(image_width) * bbox[1])
+ max_y = int(float(image_height) * bbox[2])
+ max_x = int(float(image_width) * bbox[3])
+
+ if shift_horizontal:
+ min_x = np.maximum(0, min_x - pixels)
+ max_x = np.minimum(image_width, max_x - pixels)
+ else:
+ min_y = np.maximum(0, min_y - pixels)
+ max_y = np.minimum(image_height, max_y - pixels)
+
+ # Convert bbox back to floats.
+ min_y = float(min_y) / float(image_height)
+ min_x = float(min_x) / float(image_width)
+ max_y = float(max_y) / float(image_height)
+ max_x = float(max_x) / float(image_width)
+
+ # Clip the bboxes to be sure they fall between [0, 1].
+ min_y, min_x, max_y, max_x = _clip_bbox(min_y, min_x, max_y, max_x)
+ min_y, min_x, max_y, max_x = _check_bbox_area(min_y, min_x, max_y, max_x)
+ return np.stack([min_y, min_x, max_y, max_x])
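+
+# Illustrative sketch: a +10 px horizontal shift moves a bbox left in
+# normalized coordinates, mirroring how translate_x moves the image content.
+# >>> _shift_bbox(np.array([0.2, 0.2, 0.6, 0.6]), 100, 100, 10, True)
+# array([0.2, 0.1, 0.6, 0.5])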
+
+
+def translate_bbox(image, bboxes, pixels, replace, shift_horizontal):
+ """Equivalent of PIL Translate in X/Y dimension that shifts image and bbox.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
+ has 4 elements (min_y, min_x, max_y, max_x) of type float with values
+ between [0, 1].
+ pixels: An int. How many pixels to shift the image and bboxes
+ replace: A one or three value 1D tensor to fill empty pixels.
+ shift_horizontal: Boolean. If true then shift in X dimension else shift in
+ Y dimension.
+
+ Returns:
+ A tuple containing a 3D uint8 Tensor that will be the result of translating
+ image by pixels. The second element of the tuple is bboxes, where now
+ the coordinates will be shifted to reflect the shifted image.
+ """
+ if shift_horizontal:
+ image = translate_x(image, pixels, replace)
+ else:
+ image = translate_y(image, pixels, replace)
+
+ # Convert bbox coordinates to pixel values.
+ image_height, image_width = image.shape[0], image.shape[1]
+ # pylint:disable=g-long-lambda
+ wrapped_shift_bbox = lambda bbox: _shift_bbox(bbox, image_height, image_width, pixels, shift_horizontal)
+ # pylint:enable=g-long-lambda
+ new_bboxes = deepcopy(bboxes)
+ num_bboxes = len(bboxes)
+ for idx in range(num_bboxes):
+ new_bboxes[idx] = wrapped_shift_bbox(bboxes[idx])
+ return image.astype(np.uint8), new_bboxes
+
+
+def shear_x(image, level, replace):
+ """Equivalent of PIL Shearing in X dimension."""
+ # Shear parallel to x axis is a projective transform
+ # with a matrix form of:
+ # [1 level
+ # 0 1].
+ image = Image.fromarray(wrap(image))
+ image = image.transform(image.size, Image.AFFINE, (1, level, 0, 0, 1, 0))
+ return unwrap(np.array(image), replace)
+
+
+def shear_y(image, level, replace):
+ """Equivalent of PIL Shearing in Y dimension."""
+ # Shear parallel to y axis is a projective transform
+ # with a matrix form of:
+ # [1 0
+ # level 1].
+ image = Image.fromarray(wrap(image))
+ image = image.transform(image.size, Image.AFFINE, (1, 0, 0, level, 1, 0))
+ return unwrap(np.array(image), replace)
+
+
+def _shear_bbox(bbox, image_height, image_width, level, shear_horizontal):
+ """Shifts the bbox according to how the image was sheared.
+
+ Args:
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ image_height: Int, height of the image.
+ image_width: Int, width of the image.
+ level: Float. How much to shear the image.
+ shear_horizontal: If true then shear in X dimension else shear in
+ the Y dimension.
+
+ Returns:
+ A tensor of the same shape as bbox, but now with the shifted coordinates.
+ """
+ image_height, image_width = (float(image_height), float(image_width))
+
+ # Change bbox coordinates to be pixels.
+ min_y = int(image_height * bbox[0])
+ min_x = int(image_width * bbox[1])
+ max_y = int(image_height * bbox[2])
+ max_x = int(image_width * bbox[3])
+ coordinates = np.stack(
+ [[min_y, min_x], [min_y, max_x], [max_y, min_x], [max_y, max_x]])
+ coordinates = coordinates.astype(np.float32)
+
+ # Shear the coordinates according to the translation matrix.
+ if shear_horizontal:
+ translation_matrix = np.stack([[1, 0], [-level, 1]])
+ else:
+ translation_matrix = np.stack([[1, -level], [0, 1]])
+ translation_matrix = translation_matrix.astype(np.float32)
+ new_coords = np.matmul(translation_matrix,
+ np.transpose(coordinates)).astype(np.int32)
+
+ # Find min/max values and convert them back to floats.
+ min_y = float(np.min(new_coords[0, :])) / image_height
+ min_x = float(np.min(new_coords[1, :])) / image_width
+ max_y = float(np.max(new_coords[0, :])) / image_height
+ max_x = float(np.max(new_coords[1, :])) / image_width
+
+ # Clip the bboxes to be sure they fall between [0, 1].
+ min_y, min_x, max_y, max_x = _clip_bbox(min_y, min_x, max_y, max_x)
+ min_y, min_x, max_y, max_x = _check_bbox_area(min_y, min_x, max_y, max_x)
+ return np.stack([min_y, min_x, max_y, max_x])
+
+
+def shear_with_bboxes(image, bboxes, level, replace, shear_horizontal):
+ """Applies Shear Transformation to the image and shifts the bboxes.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
+ has 4 elements (min_y, min_x, max_y, max_x) of type float with values
+ between [0, 1].
+ level: Float. How much to shear the image. This value will be between
+ -0.3 to 0.3.
+ replace: A one or three value 1D tensor to fill empty pixels.
+ shear_horizontal: Boolean. If true then shear in X dimension else shear in
+ the Y dimension.
+
+ Returns:
+ A tuple containing a 3D uint8 Tensor that will be the result of shearing
+ image by level. The second element of the tuple is bboxes, where now
+ the coordinates will be shifted to reflect the sheared image.
+ """
+ if shear_horizontal:
+ image = shear_x(image, level, replace)
+ else:
+ image = shear_y(image, level, replace)
+
+ # Convert bbox coordinates to pixel values.
+ image_height, image_width = image.shape[:2]
+ # pylint:disable=g-long-lambda
+ wrapped_shear_bbox = lambda bbox: _shear_bbox(bbox, image_height, image_width, level, shear_horizontal)
+ # pylint:enable=g-long-lambda
+ new_bboxes = deepcopy(bboxes)
+ num_bboxes = len(bboxes)
+ for idx in range(num_bboxes):
+ new_bboxes[idx] = wrapped_shear_bbox(bboxes[idx])
+ return image.astype(np.uint8), new_bboxes
+
+
+def autocontrast(image):
+ """Implements Autocontrast function from PIL.
+
+ Args:
+ image: A 3D uint8 tensor.
+
+ Returns:
+ The image after it has had autocontrast applied to it and will be of type
+ uint8.
+ """
+
+ def scale_channel(image):
+ """Scale the 2D image using the autocontrast rule."""
+ # A possibly cheaper version can be done using cumsum/unique_with_counts
+ # over the histogram values, rather than iterating over the entire image.
+ # to compute mins and maxes.
+ lo = float(np.min(image))
+ hi = float(np.max(image))
+
+ # Scale the image, making the lowest value 0 and the highest value 255.
+ def scale_values(im):
+ scale = 255.0 / (hi - lo)
+ offset = -lo * scale
+ im = im.astype(np.float32) * scale + offset
+ # Clip before casting so out-of-range values do not wrap around as uint8.
+ im = np.clip(im, a_min=0, a_max=255.0)
+ return im.astype(np.uint8)
+
+ result = scale_values(image) if hi > lo else image
+ return result
+
+ # Assumes RGB for now. Scales each channel independently
+ # and then stacks the result.
+ s1 = scale_channel(image[:, :, 0])
+ s2 = scale_channel(image[:, :, 1])
+ s3 = scale_channel(image[:, :, 2])
+ image = np.stack([s1, s2, s3], 2)
+ return image
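+
+# Illustrative sketch: autocontrast stretches each channel to span the full
+# [0, 255] range (for any channel that is not constant).
+# >>> img = np.random.randint(100, 150, (10, 10, 3), dtype=np.uint8)
+# >>> out = autocontrast(img)
+# >>> int(out.min()), int(out.max())  # almost surely
+# (0, 255)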
+
+
+def sharpness(image, factor):
+ """Implements Sharpness function from PIL."""
+ orig_image = image
+ image = image.astype(np.float32)
+ # SMOOTH PIL kernel, applied with cv2.filter2D (no batching needed).
+ kernel = np.array([[1, 1, 1], [1, 5, 1], [1, 1, 1]], dtype=np.float32) / 13.
+ result = cv2.filter2D(image, -1, kernel).astype(np.uint8)
+
+ # Blend the final result.
+ return blend(result, orig_image, factor)
+
+
+def equalize(image):
+ """Implements Equalize function from PIL using."""
+
+ def scale_channel(im, c):
+ """Scale the data in the channel to implement equalize."""
+ im = im[:, :, c].astype(np.int32)
+ # Compute the histogram of the image channel.
+ histo, _ = np.histogram(im, range=[0, 255], bins=256)
+
+ # For the purposes of computing the step, filter out the nonzeros.
+ nonzero = np.where(np.not_equal(histo, 0))
+ nonzero_histo = np.reshape(np.take(histo, nonzero), [-1])
+ step = (np.sum(nonzero_histo) - nonzero_histo[-1]) // 255
+
+ def build_lut(histo, step):
+ # Compute the cumulative sum, shifting by step // 2
+ # and then normalization by step.
+ lut = (np.cumsum(histo) + (step // 2)) // step
+ # Shift lut, prepending with 0.
+ lut = np.concatenate([[0], lut[:-1]], 0)
+ # Clip the counts to be in range. This is done
+ # in the C code for image.point.
+ return np.clip(lut, a_min=0, a_max=255).astype(np.uint8)
+
+ # If step is zero, return the original image. Otherwise, build
+ # lut from the full histogram and step and then index from it.
+ if step == 0:
+ result = im
+ else:
+ result = np.take(build_lut(histo, step), im)
+
+ return result.astype(np.uint8)
+
+ # Assumes RGB for now. Scales each channel independently
+ # and then stacks the result.
+ s1 = scale_channel(image, 0)
+ s2 = scale_channel(image, 1)
+ s3 = scale_channel(image, 2)
+ image = np.stack([s1, s2, s3], 2)
+ return image
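+
+# Illustrative sketch: equalize is a numpy port of PIL.ImageOps.equalize and
+# keeps shape and dtype while flattening each channel's histogram.
+# >>> img = np.random.randint(0, 128, (32, 32, 3), dtype=np.uint8)
+# >>> out = equalize(img)
+# >>> out.shape, out.dtype
+# ((32, 32, 3), dtype('uint8'))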
+
+
+def wrap(image):
+ """Returns 'image' with an extra channel set to all 1s."""
+ shape = image.shape
+ extended_channel = 255 * np.ones([shape[0], shape[1], 1], image.dtype)
+ extended = np.concatenate([image, extended_channel], 2).astype(image.dtype)
+ return extended
+
+
+def unwrap(image, replace):
+ """Unwraps an image produced by wrap.
+
+ Where the last channel is 0 at a spatial position, the other three
+ channels at that position are filled with the replace value (mid-gray
+ 128 by default). Operations like translate and shear on a wrapped
+ Tensor will leave 0s in empty locations. Some transformations look
+ at the intensity of values to do preprocessing, and we want these
+ empty pixels to assume the 'average' value, rather than pure black.
+
+
+ Args:
+ image: A 3D Image Tensor with 4 channels.
+ replace: A one or three value 1D tensor to fill empty pixels.
+
+ Returns:
+ image: A 3D image Tensor with 3 channels.
+ """
+ image_shape = image.shape
+ # Flatten the spatial dimensions.
+ flattened_image = np.reshape(image, [-1, image_shape[2]])
+
+ # Find all pixels where the last channel is zero.
+ alpha_channel = flattened_image[:, 3]
+
+ replace = np.concatenate([replace, np.ones([1], image.dtype)], 0)
+
+ # Where they are zero, fill them in with 'replace'.
+ alpha_channel = np.reshape(alpha_channel, (-1, 1))
+ alpha_channel = np.tile(alpha_channel, reps=(1, flattened_image.shape[1]))
+
+ flattened_image = np.where(
+ np.equal(alpha_channel, 0),
+ np.ones_like(
+ flattened_image, dtype=image.dtype) * replace,
+ flattened_image)
+
+ image = np.reshape(flattened_image, image_shape)
+ image = image[:, :, :3]
+ return image.astype(np.uint8)
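+
+# Illustrative sketch: wrap/unwrap round-trip. wrap appends a 255-valued
+# channel; PIL transforms leave 0s there for vacated pixels, which unwrap
+# then fills with the replace color before dropping the extra channel.
+# >>> img = np.zeros((4, 4, 3), dtype=np.uint8)
+# >>> unwrap(wrap(img), [128, 128, 128]).shape
+# (4, 4, 3)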
+
+
+def _cutout_inside_bbox(image, bbox, pad_fraction):
+ """Generates cutout mask and the mean pixel value of the bbox.
+
+ First a location is randomly chosen within the bbox as the center where the
+ cutout mask will be applied. Note this can be towards the boundaries of the
+ image, so the full cutout mask may not be applied.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bbox: 1D Tensor that has 4 elements (min_y, min_x, max_y, max_x)
+ of type float that represents the normalized coordinates between 0 and 1.
+ pad_fraction: Float that specifies how large the cutout mask should be
+ in reference to the size of the original bbox. If pad_fraction is 0.25,
+ then the cutout mask will be of shape
+ (0.25 * bbox height, 0.25 * bbox width).
+
+ Returns:
+ A tuple. First element is a tensor of the same shape as image where each
+ element is either a 1 or 0 that is used to determine where the image
+ will have cutout applied. The second element is the mean of the pixels
+ in the image where the bbox is located.
+ mask value: [0,1]
+ """
+ image_height, image_width = image.shape[0], image.shape[1]
+ # Transform from shape [1, 4] to [4].
+ bbox = np.squeeze(bbox)
+
+ min_y = int(float(image_height) * bbox[0])
+ min_x = int(float(image_width) * bbox[1])
+ max_y = int(float(image_height) * bbox[2])
+ max_x = int(float(image_width) * bbox[3])
+
+ # Calculate the mean pixel values in the bounding box, which will be used
+ # to fill the cutout region.
+ mean = np.mean(image[min_y:max_y + 1, min_x:max_x + 1], axis=(0, 1))
+ # Cutout mask will be size pad_size_height * 2 by pad_size_width * 2 if the
+ # region lies entirely within the bbox.
+ box_height = max_y - min_y + 1
+ box_width = max_x - min_x + 1
+ pad_size_height = int(pad_fraction * (box_height / 2))
+ pad_size_width = int(pad_fraction * (box_width / 2))
+
+ # Sample the center location in the image where the zero mask will be applied.
+ cutout_center_height = np.random.randint(min_y, max_y + 1, dtype=np.int32)
+ cutout_center_width = np.random.randint(min_x, max_x + 1, dtype=np.int32)
+
+ lower_pad = np.maximum(0, cutout_center_height - pad_size_height)
+ upper_pad = np.maximum(
+ 0, image_height - cutout_center_height - pad_size_height)
+ left_pad = np.maximum(0, cutout_center_width - pad_size_width)
+ right_pad = np.maximum(0,
+ image_width - cutout_center_width - pad_size_width)
+
+ cutout_shape = [
+ image_height - (lower_pad + upper_pad),
+ image_width - (left_pad + right_pad)
+ ]
+ padding_dims = [[lower_pad, upper_pad], [left_pad, right_pad]]
+
+ mask = np.pad(np.zeros(
+ cutout_shape, dtype=image.dtype),
+ padding_dims,
+ 'constant',
+ constant_values=1)
+
+ mask = np.expand_dims(mask, 2)
+ mask = np.tile(mask, [1, 1, 3])
+ return mask, mean
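+
+# Illustrative sketch: the returned mask is 1 everywhere except a zeroed
+# patch around a random center inside the bbox; mean is the bbox's
+# per-channel average, used as an optional fill value.
+# >>> img = np.full((100, 100, 3), 50, dtype=np.uint8)
+# >>> mask, mean = _cutout_inside_bbox(img, np.array([0.2, 0.2, 0.8, 0.8]), 0.25)
+# >>> mask.shape, mean
+# ((100, 100, 3), array([50., 50., 50.]))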
+
+
+def bbox_cutout(image, bboxes, pad_fraction, replace_with_mean):
+ """Applies cutout to the image according to bbox information.
+
+ This is a cutout variant that uses bbox information to make more informed
+ decisions about where to place the cutout mask.
+
+ Args:
+ image: 3D uint8 Tensor.
+ bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
+ has 4 elements (min_y, min_x, max_y, max_x) of type float with values
+ between [0, 1].
+ pad_fraction: Float that specifies how large the cutout mask should be
+ in reference to the size of the original bbox. If pad_fraction is 0.25,
+ then the cutout mask will be of shape
+ (0.25 * bbox height, 0.25 * bbox width).
+ replace_with_mean: Boolean that specifies what value should be filled in
+ where the cutout mask is applied. Since the incoming image will be of
+ uint8 and will not have had any mean normalization applied, by default
+ we set the value to be 128. If replace_with_mean is True then we find
+ the mean pixel values across the channel dimension and use those to fill
+ in where the cutout mask is applied.
+
+ Returns:
+ A tuple. First element is a tensor of the same shape as image that has
+ cutout applied to it. Second element is the bboxes that were passed in
+ that will be unchanged.
+ """
+
+ def apply_bbox_cutout(image, bboxes, pad_fraction):
+ """Applies cutout to a single bounding box within image."""
+ # Choose a single bounding box to apply cutout to.
+ random_index = np.random.randint(0, bboxes.shape[0], dtype=np.int32)
+ # Select the corresponding bbox and apply cutout.
+ chosen_bbox = np.take(bboxes, random_index, axis=0)
+ mask, mean = _cutout_inside_bbox(image, chosen_bbox, pad_fraction)
+
+ # When applying cutout we either set the pixel value to 128 or to the mean
+ # value inside the bbox.
+ replace = mean if replace_with_mean else [128] * 3
+
+ # Apply the cutout mask to the image. Where the mask is 0 we fill it with
+ # `replace`.
+ image = np.where(
+ np.equal(mask, 0),
+ np.ones_like(
+ image, dtype=image.dtype) * replace,
+ image).astype(image.dtype)
+ return image
+
+ # Check to see if there are boxes; if so, apply bbox cutout.
+ if len(bboxes) != 0:
+ image = apply_bbox_cutout(image, bboxes, pad_fraction)
+
+ return image, bboxes
+
+
+NAME_TO_FUNC = {
+ 'AutoContrast': autocontrast,
+ 'Equalize': equalize,
+ 'Posterize': posterize,
+ 'Solarize': solarize,
+ 'SolarizeAdd': solarize_add,
+ 'Color': color,
+ 'Contrast': contrast,
+ 'Brightness': brightness,
+ 'Sharpness': sharpness,
+ 'Cutout': cutout,
+ 'BBox_Cutout': bbox_cutout,
+ 'Rotate_BBox': rotate_with_bboxes,
+ # pylint:disable=g-long-lambda
+ 'TranslateX_BBox': lambda image, bboxes, pixels, replace: translate_bbox(
+ image, bboxes, pixels, replace, shift_horizontal=True),
+ 'TranslateY_BBox': lambda image, bboxes, pixels, replace: translate_bbox(
+ image, bboxes, pixels, replace, shift_horizontal=False),
+ 'ShearX_BBox': lambda image, bboxes, level, replace: shear_with_bboxes(
+ image, bboxes, level, replace, shear_horizontal=True),
+ 'ShearY_BBox': lambda image, bboxes, level, replace: shear_with_bboxes(
+ image, bboxes, level, replace, shear_horizontal=False),
+ # pylint:enable=g-long-lambda
+ 'Rotate_Only_BBoxes': rotate_only_bboxes,
+ 'ShearX_Only_BBoxes': shear_x_only_bboxes,
+ 'ShearY_Only_BBoxes': shear_y_only_bboxes,
+ 'TranslateX_Only_BBoxes': translate_x_only_bboxes,
+ 'TranslateY_Only_BBoxes': translate_y_only_bboxes,
+ 'Flip_Only_BBoxes': flip_only_bboxes,
+ 'Solarize_Only_BBoxes': solarize_only_bboxes,
+ 'Equalize_Only_BBoxes': equalize_only_bboxes,
+ 'Cutout_Only_BBoxes': cutout_only_bboxes,
+}
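+
+# Illustrative sketch: policy ops are dispatched by name through this table,
+# e.g. (argument values here are arbitrary):
+# >>> func = NAME_TO_FUNC['TranslateX_BBox']
+# >>> image, bboxes = func(image, bboxes, pixels=100, replace=[128, 128, 128])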
+
+
+def _randomly_negate_tensor(tensor):
+ """With 50% prob turn the tensor negative."""
+ should_flip = np.floor(np.random.rand() + 0.5) >= 1
+ final_tensor = tensor if should_flip else -tensor
+ return final_tensor
+
+
+def _rotate_level_to_arg(level):
+ level = (level / _MAX_LEVEL) * 30.
+ level = _randomly_negate_tensor(level)
+ return (level, )
+
+
+def _shrink_level_to_arg(level):
+ """Converts level to ratio by which we shrink the image content."""
+ if level == 0:
+ return (1.0, ) # if level is zero, do not shrink the image
+ # Maximum shrinking ratio is 2.9.
+ level = 2. / (_MAX_LEVEL / level) + 0.9
+ return (level, )
+
+
+def _enhance_level_to_arg(level):
+ return ((level / _MAX_LEVEL) * 1.8 + 0.1, )
+
+
+def _shear_level_to_arg(level):
+ level = (level / _MAX_LEVEL) * 0.3
+ # Flip level to negative with 50% chance.
+ level = _randomly_negate_tensor(level)
+ return (level, )
+
+
+def _translate_level_to_arg(level, translate_const):
+ level = (level / _MAX_LEVEL) * float(translate_const)
+ # Flip level to negative with 50% chance.
+ level = _randomly_negate_tensor(level)
+ return (level, )
+
+
+def _bbox_cutout_level_to_arg(level, hparams):
+ cutout_pad_fraction = (level /
+ _MAX_LEVEL) * 0.75 # hparams.cutout_max_pad_fraction
+ return (cutout_pad_fraction, False) # hparams.cutout_bbox_replace_with_mean
+
+
+def level_to_arg(hparams):
+ return {
+ 'AutoContrast': lambda level: (),
+ 'Equalize': lambda level: (),
+ 'Posterize': lambda level: (int((level / _MAX_LEVEL) * 4), ),
+ 'Solarize': lambda level: (int((level / _MAX_LEVEL) * 256), ),
+ 'SolarizeAdd': lambda level: (int((level / _MAX_LEVEL) * 110), ),
+ 'Color': _enhance_level_to_arg,
+ 'Contrast': _enhance_level_to_arg,
+ 'Brightness': _enhance_level_to_arg,
+ 'Sharpness': _enhance_level_to_arg,
+ 'Cutout':
+ lambda level: (int((level / _MAX_LEVEL) * 100), ), # hparams.cutout_const=100
+ # pylint:disable=g-long-lambda
+ 'BBox_Cutout': lambda level: _bbox_cutout_level_to_arg(level, hparams),
+ 'TranslateX_BBox':
+ lambda level: _translate_level_to_arg(level, 250), # hparams.translate_const=250
+ 'TranslateY_BBox':
+ lambda level: _translate_level_to_arg(level, 250), # hparams.translate_cons
+ # pylint:enable=g-long-lambda
+ 'ShearX_BBox': _shear_level_to_arg,
+ 'ShearY_BBox': _shear_level_to_arg,
+ 'Rotate_BBox': _rotate_level_to_arg,
+ 'Rotate_Only_BBoxes': _rotate_level_to_arg,
+ 'ShearX_Only_BBoxes': _shear_level_to_arg,
+ 'ShearY_Only_BBoxes': _shear_level_to_arg,
+ # pylint:disable=g-long-lambda
+ 'TranslateX_Only_BBoxes':
+ lambda level: _translate_level_to_arg(level, 120), # hparams.translate_bbox_const
+ 'TranslateY_Only_BBoxes':
+ lambda level: _translate_level_to_arg(level, 120), # hparams.translate_bbox_const
+ # pylint:enable=g-long-lambda
+ 'Flip_Only_BBoxes': lambda level: (),
+ 'Solarize_Only_BBoxes':
+ lambda level: (int((level / _MAX_LEVEL) * 256), ),
+ 'Equalize_Only_BBoxes': lambda level: (),
+ # pylint:disable=g-long-lambda
+ 'Cutout_Only_BBoxes':
+ lambda level: (int((level / _MAX_LEVEL) * 50), ), # hparams.cutout_bbox_const
+ # pylint:enable=g-long-lambda
+ }
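+
+# Illustrative sketch: a discrete policy level is mapped to each op's concrete
+# argument tuple (assuming _MAX_LEVEL = 10, as in the standard AutoAugment
+# setup):
+# >>> level_to_arg({})['Solarize'](5)
+# (128,)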
+
+
+def bbox_wrapper(func):
+ """Adds a bboxes function argument to func and returns unchanged bboxes."""
+
+ def wrapper(images, bboxes, *args, **kwargs):
+ return (func(images, *args, **kwargs), bboxes)
+
+ return wrapper
+
+
+def _parse_policy_info(name, prob, level, replace_value, augmentation_hparams):
+ """Return the function that corresponds to `name` and update `level` param."""
+ func = NAME_TO_FUNC[name]
+ args = level_to_arg(augmentation_hparams)[name](level)
+
+ # Check to see if prob is passed into function. This is used for operations
+ # where we alter bboxes independently.
+ # pytype:disable=wrong-arg-types
+ if 'prob' in inspect.getfullargspec(func)[0]:
+ args = tuple([prob] + list(args))
+ # pytype:enable=wrong-arg-types
+
+ # Add in replace arg if it is required for the function that is being called.
+ if 'replace' in inspect.getfullargspec(func)[0]:
+ # Make sure replace is the final argument
+ assert 'replace' == inspect.getfullargspec(func)[0][-1]
+ args = tuple(list(args) + [replace_value])
+
+ # Add bboxes as the second positional argument for the function if it does
+ # not already exist.
+ if 'bboxes' not in inspect.getfullargspec(func)[0]:
+ func = bbox_wrapper(func)
+ return (func, prob, args)
+
+
+def _apply_func_with_prob(func, image, args, prob, bboxes):
+ """Apply `func` to image w/ `args` as input with probability `prob`."""
+ assert isinstance(args, tuple)
+ assert 'bboxes' == inspect.getfullargspec(func)[0][1]
+
+ # If prob is a function argument, then this randomness is being handled
+ # inside the function, so make sure it is always called.
+ if 'prob' in inspect.getfullargspec(func)[0]:
+ prob = 1.0
+
+ # Apply the function with probability `prob`.
+ should_apply_op = np.floor(np.random.rand() + 0.5) >= 1
+ if should_apply_op:
+ augmented_image, augmented_bboxes = func(image, bboxes, *args)
+ else:
+ augmented_image, augmented_bboxes = (image, bboxes)
+ return augmented_image, augmented_bboxes
+
+
+def select_and_apply_random_policy(policies, image, bboxes):
+ """Select a random policy from `policies` and apply it to `image`."""
+ policy_to_select = np.random.randint(0, len(policies), dtype=np.int32)
+ for (i, policy) in enumerate(policies):
+ if i == policy_to_select:
+ image, bboxes = policy(image, bboxes)
+ return (image, bboxes)
+
+
+def build_and_apply_nas_policy(policies, image, bboxes, augmentation_hparams):
+ """Build a policy from the given policies passed in and apply to image.
+
+ Args:
+ policies: list of lists of tuples in the form `(func, prob, level)`, `func`
+ is a string name of the augmentation function, `prob` is the probability
+ of applying the `func` operation, `level` is the input argument for
+ `func`.
+ image: numpy array that the resulting policy will be applied to.
+ bboxes: 2D numpy array of bboxes [N, 4], normalized between [0, 1], that
+ will be transformed along with the image.
+ augmentation_hparams: Hparams associated with the NAS learned policy.
+
+ Returns:
+ A version of image that now has data augmentation applied to it based on
+ the `policies` passed into the function. Additionally, returns the
+ (possibly altered) bboxes.
+ """
+ replace_value = [128, 128, 128]
+
+ # func is the string name of the augmentation function, prob is the
+ # probability of applying the operation and level is the parameter
+ # associated with the function.
+
+ # tf_policies are functions that take in an image and return an augmented
+ # image.
+ tf_policies = []
+ for policy in policies:
+ tf_policy = []
+ # Link string name to the correct python function and make sure the correct
+ # argument is passed into that function.
+ for policy_info in policy:
+ policy_info = list(
+ policy_info) + [replace_value, augmentation_hparams]
+
+ tf_policy.append(_parse_policy_info(*policy_info))
+ # Now build the tf_policy that will apply the augmentation procedure
+ # on image.
+ def make_final_policy(tf_policy_):
+ def final_policy(image_, bboxes_):
+ for func, prob, args in tf_policy_:
+ image_, bboxes_ = _apply_func_with_prob(func, image_, args,
+ prob, bboxes_)
+ return image_, bboxes_
+
+ return final_policy
+
+ tf_policies.append(make_final_policy(tf_policy))
+
+ augmented_images, augmented_bboxes = select_and_apply_random_policy(
+ tf_policies, image, bboxes)
+ # If no bounding boxes were specified, then just return the images.
+ return (augmented_images, augmented_bboxes)
+
+
+# TODO(barretzoph): Add in ArXiv link once paper is out.
+def distort_image_with_autoaugment(image, bboxes, augmentation_name):
+ """Applies the AutoAugment policy to `image` and `bboxes`.
+
+ Args:
+ image: `Tensor` of shape [height, width, 3] representing an image.
+ bboxes: `Tensor` of shape [N, 4] representing ground truth boxes that are
+ normalized between [0, 1].
+ augmentation_name: The name of the AutoAugment policy to use. The available
+ options are `v0`, `v1`, `v2`, `v3` and `test`. `v0` is the policy used for
+ all of the results in the paper and was found to achieve the best results
+ on the COCO dataset. `v1`, `v2` and `v3` are additional good policies
+ found on the COCO dataset that have slight variation in what operations
+ were used during the search procedure along with how many operations are
+ applied in parallel to a single image (2 vs 3).
+
+ Returns:
+ A tuple containing the augmented versions of `image` and `bboxes`.
+ """
+ available_policies = {
+ 'v0': policy_v0,
+ 'v1': policy_v1,
+ 'v2': policy_v2,
+ 'v3': policy_v3,
+ 'test': policy_vtest
+ }
+ if augmentation_name not in available_policies:
+ raise ValueError('Invalid augmentation_name: {}'.format(
+ augmentation_name))
+
+ policy = available_policies[augmentation_name]()
+ augmentation_hparams = {}
+ return build_and_apply_nas_policy(policy, image, bboxes,
+ augmentation_hparams)
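+
+# Illustrative end-to-end sketch (policy_v0 etc. are defined earlier in this
+# module):
+# >>> image = np.random.randint(0, 256, (416, 416, 3), dtype=np.uint8)
+# >>> bboxes = np.array([[0.1, 0.1, 0.5, 0.5]], dtype=np.float32)
+# >>> aug_img, aug_boxes = distort_image_with_autoaugment(image, bboxes, 'v0')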
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/batch_operators.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/batch_operators.py
new file mode 100644
index 000000000..e43fb7d20
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/batch_operators.py
@@ -0,0 +1,1060 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+
+import cv2
+import math
+import numpy as np
+from .operators import register_op, BaseOperator, Resize
+from .op_helper import jaccard_overlap, gaussian2D, gaussian_radius, draw_umich_gaussian
+from .atss_assigner import ATSSAssigner
+from scipy import ndimage
+
+from ppdet.modeling import bbox_utils
+from ppdet.utils.logger import setup_logger
+from ppdet.modeling.keypoint_utils import get_affine_transform, affine_transform
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'PadBatch',
+ 'BatchRandomResize',
+ 'Gt2YoloTarget',
+ 'Gt2FCOSTarget',
+ 'Gt2TTFTarget',
+ 'Gt2Solov2Target',
+ 'Gt2SparseRCNNTarget',
+ 'PadMaskBatch',
+ 'Gt2GFLTarget',
+ 'Gt2CenterNetTarget',
+]
+
+
+@register_op
+class PadBatch(BaseOperator):
+ """
+ Pad a batch of samples so that their spatial sizes are divisible by a stride.
+ The layout of each image should be 'CHW'.
+ Args:
+ pad_to_stride (int): If `pad_to_stride > 0`, pad zeros to ensure
+ height and width are divisible by `pad_to_stride`.
+ """
+
+ def __init__(self, pad_to_stride=0):
+ super(PadBatch, self).__init__()
+ self.pad_to_stride = pad_to_stride
+
+ def __call__(self, samples, context=None):
+ """
+ Args:
+ samples (list): a batch of samples, each of which is a dict.
+ """
+ coarsest_stride = self.pad_to_stride
+
+ max_shape = np.array([data['image'].shape for data in samples]).max(
+ axis=0)
+ if coarsest_stride > 0:
+ max_shape[1] = int(
+ np.ceil(max_shape[1] / coarsest_stride) * coarsest_stride)
+ max_shape[2] = int(
+ np.ceil(max_shape[2] / coarsest_stride) * coarsest_stride)
+
+ for data in samples:
+ im = data['image']
+ im_c, im_h, im_w = im.shape[:]
+ padding_im = np.zeros(
+ (im_c, max_shape[1], max_shape[2]), dtype=np.float32)
+ padding_im[:, :im_h, :im_w] = im
+ data['image'] = padding_im
+ if 'semantic' in data and data['semantic'] is not None:
+ semantic = data['semantic']
+ padding_sem = np.zeros(
+ (1, max_shape[1], max_shape[2]), dtype=np.float32)
+ padding_sem[:, :im_h, :im_w] = semantic
+ data['semantic'] = padding_sem
+ if 'gt_segm' in data and data['gt_segm'] is not None:
+ gt_segm = data['gt_segm']
+ padding_segm = np.zeros(
+ (gt_segm.shape[0], max_shape[1], max_shape[2]),
+ dtype=np.uint8)
+ padding_segm[:, :im_h, :im_w] = gt_segm
+ data['gt_segm'] = padding_segm
+
+ if 'gt_rbox2poly' in data and data['gt_rbox2poly'] is not None:
+ # ploy to rbox
+ polys = data['gt_rbox2poly']
+ rbox = bbox_utils.poly2rbox(polys)
+ data['gt_rbox'] = rbox
+
+ return samples
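+
+ # Illustrative sketch: padding two CHW images to a shared, stride-aligned
+ # shape (only the 'image' key is shown; real samples carry more fields).
+ # >>> batch = [{'image': np.zeros((3, 300, 400), np.float32)},
+ # ...          {'image': np.zeros((3, 320, 380), np.float32)}]
+ # >>> out = PadBatch(pad_to_stride=32)(batch)
+ # >>> out[0]['image'].shape
+ # (3, 320, 416)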
+
+
+@register_op
+class BatchRandomResize(BaseOperator):
+ """
+ Randomly resize a batch of images to a target size, with optionally random
+ target_size and interpolation method.
+ Args:
+ target_size (int, list, tuple): image target size; if random_size is True, must be a list or tuple
+ keep_ratio (bool): whether to keep the aspect ratio, default True
+ interp (int): the interpolation method
+ random_size (bool): whether to randomly select a target size for the image
+ random_interp (bool): whether to randomly select an interpolation method
+ """
+
+ def __init__(self,
+ target_size,
+ keep_ratio,
+ interp=cv2.INTER_NEAREST,
+ random_size=True,
+ random_interp=False):
+ super(BatchRandomResize, self).__init__()
+ self.keep_ratio = keep_ratio
+ self.interps = [
+ cv2.INTER_NEAREST,
+ cv2.INTER_LINEAR,
+ cv2.INTER_AREA,
+ cv2.INTER_CUBIC,
+ cv2.INTER_LANCZOS4,
+ ]
+ self.interp = interp
+ assert isinstance(target_size, (
+ int, Sequence)), "target_size must be int, list or tuple"
+ if random_size and not isinstance(target_size, list):
+ raise TypeError(
+ "Type of target_size is invalid when random_size is True. Must be a list, but got {}".
+ format(type(target_size)))
+ self.target_size = target_size
+ self.random_size = random_size
+ self.random_interp = random_interp
+
+ def __call__(self, samples, context=None):
+ if self.random_size:
+ index = np.random.choice(len(self.target_size))
+ target_size = self.target_size[index]
+ else:
+ target_size = self.target_size
+
+ if self.random_interp:
+ interp = np.random.choice(self.interps)
+ else:
+ interp = self.interp
+
+ resizer = Resize(target_size, keep_ratio=self.keep_ratio, interp=interp)
+ return resizer(samples, context=context)
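+
+ # Illustrative sketch: multi-scale training draws one size per batch, so
+ # every image in the batch ends up with the same target size.
+ # >>> op = BatchRandomResize(target_size=[320, 416, 512], keep_ratio=False)
+ # >>> samples = op(samples)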
+
+
+@register_op
+class Gt2YoloTarget(BaseOperator):
+ """
+ Generate YOLOv3 targets from ground truth data. This operator is only used
+ in fine-grained YOLOv3 loss mode.
+ """
+
+ def __init__(self,
+ anchors,
+ anchor_masks,
+ downsample_ratios,
+ num_classes=80,
+ iou_thresh=1.):
+ super(Gt2YoloTarget, self).__init__()
+ self.anchors = anchors
+ self.anchor_masks = anchor_masks
+ self.downsample_ratios = downsample_ratios
+ self.num_classes = num_classes
+ self.iou_thresh = iou_thresh
+
+ def __call__(self, samples, context=None):
+ assert len(self.anchor_masks) == len(self.downsample_ratios), \
+ "anchor_masks', and 'downsample_ratios' should have same length."
+
+ h, w = samples[0]['image'].shape[1:3]
+ an_hw = np.array(self.anchors) / np.array([[w, h]])
+ for sample in samples:
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ if 'gt_score' not in sample:
+ sample['gt_score'] = np.ones(
+ (gt_bbox.shape[0], 1), dtype=np.float32)
+ gt_score = sample['gt_score']
+ for i, (
+ mask, downsample_ratio
+ ) in enumerate(zip(self.anchor_masks, self.downsample_ratios)):
+ grid_h = int(h / downsample_ratio)
+ grid_w = int(w / downsample_ratio)
+ target = np.zeros(
+ (len(mask), 6 + self.num_classes, grid_h, grid_w),
+ dtype=np.float32)
+ for b in range(gt_bbox.shape[0]):
+ gx, gy, gw, gh = gt_bbox[b, :]
+ cls = gt_class[b]
+ score = gt_score[b]
+ if gw <= 0. or gh <= 0. or score <= 0.:
+ continue
+
+ # find best match anchor index
+ best_iou = 0.
+ best_idx = -1
+ for an_idx in range(an_hw.shape[0]):
+ iou = jaccard_overlap(
+ [0., 0., gw, gh],
+ [0., 0., an_hw[an_idx, 0], an_hw[an_idx, 1]])
+ if iou > best_iou:
+ best_iou = iou
+ best_idx = an_idx
+
+ gi = int(gx * grid_w)
+ gj = int(gy * grid_h)
+
+ # The gt box should be regressed in this layer if the best-matching
+ # anchor index is in this layer's anchor mask.
+ if best_idx in mask:
+ best_n = mask.index(best_idx)
+
+ # x, y, w, h, scale
+ target[best_n, 0, gj, gi] = gx * grid_w - gi
+ target[best_n, 1, gj, gi] = gy * grid_h - gj
+ target[best_n, 2, gj, gi] = np.log(
+ gw * w / self.anchors[best_idx][0])
+ target[best_n, 3, gj, gi] = np.log(
+ gh * h / self.anchors[best_idx][1])
+ target[best_n, 4, gj, gi] = 2.0 - gw * gh
+
+ # objectness record gt_score
+ target[best_n, 5, gj, gi] = score
+
+ # classification
+ target[best_n, 6 + cls, gj, gi] = 1.
+
+ # For non-matched anchors, calculate the target if the iou
+ # between anchor and gt is larger than iou_thresh
+ if self.iou_thresh < 1:
+ for idx, mask_i in enumerate(mask):
+ if mask_i == best_idx: continue
+ iou = jaccard_overlap(
+ [0., 0., gw, gh],
+ [0., 0., an_hw[mask_i, 0], an_hw[mask_i, 1]])
+ if iou > self.iou_thresh and target[idx, 5, gj,
+ gi] == 0.:
+ # x, y, w, h, scale
+ target[idx, 0, gj, gi] = gx * grid_w - gi
+ target[idx, 1, gj, gi] = gy * grid_h - gj
+ target[idx, 2, gj, gi] = np.log(
+ gw * w / self.anchors[mask_i][0])
+ target[idx, 3, gj, gi] = np.log(
+ gh * h / self.anchors[mask_i][1])
+ target[idx, 4, gj, gi] = 2.0 - gw * gh
+
+ # objectness record gt_score
+ target[idx, 5, gj, gi] = score
+
+ # classification
+ target[idx, 6 + cls, gj, gi] = 1.
+ sample['target{}'.format(i)] = target
+
+ # remove useless gt_class and gt_score after target calculated
+ sample.pop('gt_class')
+ sample.pop('gt_score')
+
+ return samples
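+
+ # Illustrative sketch with the standard YOLOv3 anchor configuration (values
+ # are the usual COCO defaults, shown only as an example):
+ # >>> op = Gt2YoloTarget(
+ # ...     anchors=[[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
+ # ...              [59, 119], [116, 90], [156, 198], [373, 326]],
+ # ...     anchor_masks=[[6, 7, 8], [3, 4, 5], [0, 1, 2]],
+ # ...     downsample_ratios=[32, 16, 8])
+ # >>> samples = op(samples)  # adds 'target0'..'target2' to each sample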
+
+
+@register_op
+class Gt2FCOSTarget(BaseOperator):
+ """
+ Generate FCOS targets from ground truth data
+ """
+
+ def __init__(self,
+ object_sizes_boundary,
+ center_sampling_radius,
+ downsample_ratios,
+ norm_reg_targets=False):
+ super(Gt2FCOSTarget, self).__init__()
+ self.center_sampling_radius = center_sampling_radius
+ self.downsample_ratios = downsample_ratios
+ self.INF = np.inf
+ self.object_sizes_boundary = [-1] + object_sizes_boundary + [self.INF]
+ object_sizes_of_interest = []
+ for i in range(len(self.object_sizes_boundary) - 1):
+ object_sizes_of_interest.append([
+ self.object_sizes_boundary[i], self.object_sizes_boundary[i + 1]
+ ])
+ self.object_sizes_of_interest = object_sizes_of_interest
+ self.norm_reg_targets = norm_reg_targets
+
+ def _compute_points(self, w, h):
+ """
+ compute the corresponding points in each feature map
+ :param h: image height
+ :param w: image width
+ :return: points from all feature map
+ """
+ locations = []
+ for stride in self.downsample_ratios:
+ shift_x = np.arange(0, w, stride).astype(np.float32)
+ shift_y = np.arange(0, h, stride).astype(np.float32)
+ shift_x, shift_y = np.meshgrid(shift_x, shift_y)
+ shift_x = shift_x.flatten()
+ shift_y = shift_y.flatten()
+ location = np.stack([shift_x, shift_y], axis=1) + stride // 2
+ locations.append(location)
+ num_points_each_level = [len(location) for location in locations]
+ locations = np.concatenate(locations, axis=0)
+ return locations, num_points_each_level
+
+ def _convert_xywh2xyxy(self, gt_bbox, w, h):
+ """
+ convert the bounding box from style xywh to xyxy
+ :param gt_bbox: bounding boxes normalized into [0, 1]
+ :param w: image width
+ :param h: image height
+ :return: bounding boxes in xyxy style
+ """
+ bboxes = gt_bbox.copy()
+ bboxes[:, [0, 2]] = bboxes[:, [0, 2]] * w
+ bboxes[:, [1, 3]] = bboxes[:, [1, 3]] * h
+ bboxes[:, 2] = bboxes[:, 0] + bboxes[:, 2]
+ bboxes[:, 3] = bboxes[:, 1] + bboxes[:, 3]
+ return bboxes
+
+ def _check_inside_boxes_limited(self, gt_bbox, xs, ys,
+ num_points_each_level):
+ """
+ check whether the points are within the clipped boxes
+ :param gt_bbox: bounding boxes
+ :param xs: horizontal coordinates of points
+ :param ys: vertical coordinates of points
+ :return: a mask indicating whether each point falls inside a gt_box
+ """
+ bboxes = np.reshape(
+ gt_bbox, newshape=[1, gt_bbox.shape[0], gt_bbox.shape[1]])
+ bboxes = np.tile(bboxes, reps=[xs.shape[0], 1, 1])
+ ct_x = (bboxes[:, :, 0] + bboxes[:, :, 2]) / 2
+ ct_y = (bboxes[:, :, 1] + bboxes[:, :, 3]) / 2
+ beg = 0
+ clipped_box = bboxes.copy()
+ for lvl, stride in enumerate(self.downsample_ratios):
+ end = beg + num_points_each_level[lvl]
+ stride_exp = self.center_sampling_radius * stride
+ clipped_box[beg:end, :, 0] = np.maximum(
+ bboxes[beg:end, :, 0], ct_x[beg:end, :] - stride_exp)
+ clipped_box[beg:end, :, 1] = np.maximum(
+ bboxes[beg:end, :, 1], ct_y[beg:end, :] - stride_exp)
+ clipped_box[beg:end, :, 2] = np.minimum(
+ bboxes[beg:end, :, 2], ct_x[beg:end, :] + stride_exp)
+ clipped_box[beg:end, :, 3] = np.minimum(
+ bboxes[beg:end, :, 3], ct_y[beg:end, :] + stride_exp)
+ beg = end
+ l_res = xs - clipped_box[:, :, 0]
+ r_res = clipped_box[:, :, 2] - xs
+ t_res = ys - clipped_box[:, :, 1]
+ b_res = clipped_box[:, :, 3] - ys
+ clipped_box_reg_targets = np.stack([l_res, t_res, r_res, b_res], axis=2)
+ inside_gt_box = np.min(clipped_box_reg_targets, axis=2) > 0
+ return inside_gt_box
+
+ def __call__(self, samples, context=None):
+ assert len(self.object_sizes_of_interest) == len(self.downsample_ratios), \
+ "object_sizes_of_interest', and 'downsample_ratios' should have same length."
+
+ for sample in samples:
+ im = sample['image']
+ bboxes = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ # calculate the locations
+ h, w = im.shape[1:3]
+ points, num_points_each_level = self._compute_points(w, h)
+ object_scale_exp = []
+ for i, num_pts in enumerate(num_points_each_level):
+ object_scale_exp.append(
+ np.tile(
+ np.array([self.object_sizes_of_interest[i]]),
+ reps=[num_pts, 1]))
+ object_scale_exp = np.concatenate(object_scale_exp, axis=0)
+
+ gt_area = (bboxes[:, 2] - bboxes[:, 0]) * (
+ bboxes[:, 3] - bboxes[:, 1])
+ xs, ys = points[:, 0], points[:, 1]
+ xs = np.reshape(xs, newshape=[xs.shape[0], 1])
+ xs = np.tile(xs, reps=[1, bboxes.shape[0]])
+ ys = np.reshape(ys, newshape=[ys.shape[0], 1])
+ ys = np.tile(ys, reps=[1, bboxes.shape[0]])
+
+ l_res = xs - bboxes[:, 0]
+ r_res = bboxes[:, 2] - xs
+ t_res = ys - bboxes[:, 1]
+ b_res = bboxes[:, 3] - ys
+ reg_targets = np.stack([l_res, t_res, r_res, b_res], axis=2)
+ if self.center_sampling_radius > 0:
+ is_inside_box = self._check_inside_boxes_limited(
+ bboxes, xs, ys, num_points_each_level)
+ else:
+ is_inside_box = np.min(reg_targets, axis=2) > 0
+ # check whether the targets are inside the corresponding level
+ max_reg_targets = np.max(reg_targets, axis=2)
+ lower_bound = np.tile(
+ np.expand_dims(
+ object_scale_exp[:, 0], axis=1),
+ reps=[1, max_reg_targets.shape[1]])
+ high_bound = np.tile(
+ np.expand_dims(
+ object_scale_exp[:, 1], axis=1),
+ reps=[1, max_reg_targets.shape[1]])
+ is_match_current_level = \
+ (max_reg_targets > lower_bound) & \
+ (max_reg_targets < high_bound)
+ points2gtarea = np.tile(
+ np.expand_dims(
+ gt_area, axis=0), reps=[xs.shape[0], 1])
+ points2gtarea[is_inside_box == 0] = self.INF
+ points2gtarea[is_match_current_level == 0] = self.INF
+ points2min_area = points2gtarea.min(axis=1)
+ points2min_area_ind = points2gtarea.argmin(axis=1)
+ labels = gt_class[points2min_area_ind] + 1
+ labels[points2min_area == self.INF] = 0
+ reg_targets = reg_targets[range(xs.shape[0]), points2min_area_ind]
+ ctn_targets = np.sqrt((reg_targets[:, [0, 2]].min(axis=1) / \
+ reg_targets[:, [0, 2]].max(axis=1)) * \
+ (reg_targets[:, [1, 3]].min(axis=1) / \
+ reg_targets[:, [1, 3]].max(axis=1))).astype(np.float32)
+ ctn_targets = np.reshape(
+ ctn_targets, newshape=[ctn_targets.shape[0], 1])
+ ctn_targets[labels <= 0] = 0
+ pos_ind = np.nonzero(labels != 0)
+ reg_targets_pos = reg_targets[pos_ind[0], :]
+ split_sections = []
+ beg = 0
+ for lvl in range(len(num_points_each_level)):
+ end = beg + num_points_each_level[lvl]
+ split_sections.append(end)
+ beg = end
+ labels_by_level = np.split(labels, split_sections, axis=0)
+ reg_targets_by_level = np.split(reg_targets, split_sections, axis=0)
+ ctn_targets_by_level = np.split(ctn_targets, split_sections, axis=0)
+ for lvl in range(len(self.downsample_ratios)):
+ grid_w = int(np.ceil(w / self.downsample_ratios[lvl]))
+ grid_h = int(np.ceil(h / self.downsample_ratios[lvl]))
+ if self.norm_reg_targets:
+ sample['reg_target{}'.format(lvl)] = \
+ np.reshape(
+ reg_targets_by_level[lvl] / \
+ self.downsample_ratios[lvl],
+ newshape=[grid_h, grid_w, 4])
+ else:
+ sample['reg_target{}'.format(lvl)] = np.reshape(
+ reg_targets_by_level[lvl],
+ newshape=[grid_h, grid_w, 4])
+ sample['labels{}'.format(lvl)] = np.reshape(
+ labels_by_level[lvl], newshape=[grid_h, grid_w, 1])
+ sample['centerness{}'.format(lvl)] = np.reshape(
+ ctn_targets_by_level[lvl], newshape=[grid_h, grid_w, 1])
+
+ sample.pop('is_crowd', None)
+ sample.pop('difficult', None)
+ sample.pop('gt_class', None)
+ sample.pop('gt_bbox', None)
+ return samples
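+
+ # Illustrative sketch with typical FCOS settings (size boundaries and
+ # strides follow the common five-level FPN setup):
+ # >>> op = Gt2FCOSTarget(object_sizes_boundary=[64, 128, 256, 512],
+ # ...                    center_sampling_radius=1.5,
+ # ...                    downsample_ratios=[8, 16, 32, 64, 128])
+ # >>> samples = op(samples)  # adds per-level reg/label/centerness targets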
+
+
+@register_op
+class Gt2GFLTarget(BaseOperator):
+ """
+ Generate GFocal loss targets from ground truth data
+ """
+
+ def __init__(self,
+ num_classes=80,
+ downsample_ratios=[8, 16, 32, 64, 128],
+ grid_cell_scale=4,
+ cell_offset=0):
+ super(Gt2GFLTarget, self).__init__()
+ self.num_classes = num_classes
+ self.downsample_ratios = downsample_ratios
+ self.grid_cell_scale = grid_cell_scale
+ self.cell_offset = cell_offset
+
+ self.assigner = ATSSAssigner()
+
+ def get_grid_cells(self, featmap_size, scale, stride, offset=0):
+ """
+ Generate grid cells of a feature map for target assignment.
+ Args:
+ featmap_size: Size of a single level feature map.
+ scale: Grid cell scale.
+ stride: Down sample stride of the feature map.
+ offset: Offset of grid cells.
+ return:
+ Grid_cells xyxy position. Size should be [feat_w * feat_h, 4]
+ """
+ cell_size = stride * scale
+ h, w = featmap_size
+ x_range = (np.arange(w, dtype=np.float32) + offset) * stride
+ y_range = (np.arange(h, dtype=np.float32) + offset) * stride
+ x, y = np.meshgrid(x_range, y_range)
+ y = y.flatten()
+ x = x.flatten()
+ grid_cells = np.stack(
+ [
+ x - 0.5 * cell_size, y - 0.5 * cell_size, x + 0.5 * cell_size,
+ y + 0.5 * cell_size
+ ],
+ axis=-1)
+ return grid_cells
+
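+    # Illustrative sketch (not part of the original op): with a 2x3 feature
+    # map, grid_cell_scale=4 and stride=8, every cell is a 32x32 xyxy box
+    # centred on its grid point:
+    #
+    #   cells = Gt2GFLTarget().get_grid_cells((2, 3), scale=4, stride=8)
+    #   cells.shape   # (6, 4)
+    #   cells[0]      # [-16., -16., 16., 16.]
+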
+ def get_sample(self, assign_gt_inds, gt_bboxes):
+ pos_inds = np.unique(np.nonzero(assign_gt_inds > 0)[0])
+ neg_inds = np.unique(np.nonzero(assign_gt_inds == 0)[0])
+ pos_assigned_gt_inds = assign_gt_inds[pos_inds] - 1
+
+ if gt_bboxes.size == 0:
+ # hack for index error case
+ assert pos_assigned_gt_inds.size == 0
+ pos_gt_bboxes = np.empty_like(gt_bboxes).reshape(-1, 4)
+ else:
+            if len(gt_bboxes.shape) < 2:
+                # np.ndarray.resize returns None; reshape is what is intended here
+                gt_bboxes = gt_bboxes.reshape(-1, 4)
+ pos_gt_bboxes = gt_bboxes[pos_assigned_gt_inds, :]
+ return pos_inds, neg_inds, pos_gt_bboxes, pos_assigned_gt_inds
+
+ def __call__(self, samples, context=None):
+ assert len(samples) > 0
+ batch_size = len(samples)
+ # get grid cells of image
+ h, w = samples[0]['image'].shape[1:3]
+ multi_level_grid_cells = []
+ for stride in self.downsample_ratios:
+ featmap_size = (int(math.ceil(h / stride)),
+ int(math.ceil(w / stride)))
+ multi_level_grid_cells.append(
+ self.get_grid_cells(featmap_size, self.grid_cell_scale, stride,
+ self.cell_offset))
+ mlvl_grid_cells_list = [
+ multi_level_grid_cells for i in range(batch_size)
+ ]
+ # pixel cell number of multi-level feature maps
+ num_level_cells = [
+ grid_cells.shape[0] for grid_cells in mlvl_grid_cells_list[0]
+ ]
+ num_level_cells_list = [num_level_cells] * batch_size
+ # concat all level cells and to a single array
+ for i in range(batch_size):
+ mlvl_grid_cells_list[i] = np.concatenate(mlvl_grid_cells_list[i])
+ # target assign on all images
+ for sample, grid_cells, num_level_cells in zip(
+ samples, mlvl_grid_cells_list, num_level_cells_list):
+ gt_bboxes = sample['gt_bbox']
+ gt_labels = sample['gt_class'].squeeze()
+ if gt_labels.size == 1:
+ gt_labels = np.array([gt_labels]).astype(np.int32)
+ gt_bboxes_ignore = None
+ assign_gt_inds, _ = self.assigner(grid_cells, num_level_cells,
+ gt_bboxes, gt_bboxes_ignore,
+ gt_labels)
+ pos_inds, neg_inds, pos_gt_bboxes, pos_assigned_gt_inds = self.get_sample(
+ assign_gt_inds, gt_bboxes)
+
+ num_cells = grid_cells.shape[0]
+ bbox_targets = np.zeros_like(grid_cells)
+ bbox_weights = np.zeros_like(grid_cells)
+ labels = np.ones([num_cells], dtype=np.int64) * self.num_classes
+ label_weights = np.zeros([num_cells], dtype=np.float32)
+
+ if len(pos_inds) > 0:
+ pos_bbox_targets = pos_gt_bboxes
+ bbox_targets[pos_inds, :] = pos_bbox_targets
+ bbox_weights[pos_inds, :] = 1.0
+ if not np.any(gt_labels):
+ labels[pos_inds] = 0
+ else:
+ labels[pos_inds] = gt_labels[pos_assigned_gt_inds]
+
+ label_weights[pos_inds] = 1.0
+ if len(neg_inds) > 0:
+ label_weights[neg_inds] = 1.0
+ sample['grid_cells'] = grid_cells
+ sample['labels'] = labels
+ sample['label_weights'] = label_weights
+ sample['bbox_targets'] = bbox_targets
+ sample['pos_num'] = max(pos_inds.size, 1)
+ sample.pop('is_crowd', None)
+ sample.pop('difficult', None)
+ sample.pop('gt_class', None)
+ sample.pop('gt_bbox', None)
+ sample.pop('gt_score', None)
+ return samples
+
+
+@register_op
+class Gt2TTFTarget(BaseOperator):
+ __shared__ = ['num_classes']
+ """
+ Gt2TTFTarget
+    Generate TTFNet targets from ground-truth data
+
+ Args:
+ num_classes(int): the number of classes.
+ down_ratio(int): the down ratio from images to heatmap, 4 by default.
+ alpha(float): the alpha parameter to generate gaussian target.
+ 0.54 by default.
+ """
+
+ def __init__(self, num_classes=80, down_ratio=4, alpha=0.54):
+ super(Gt2TTFTarget, self).__init__()
+ self.down_ratio = down_ratio
+ self.num_classes = num_classes
+ self.alpha = alpha
+
+ def __call__(self, samples, context=None):
+ output_size = samples[0]['image'].shape[1]
+ feat_size = output_size // self.down_ratio
+ for sample in samples:
+ heatmap = np.zeros(
+ (self.num_classes, feat_size, feat_size), dtype='float32')
+ box_target = np.ones(
+ (4, feat_size, feat_size), dtype='float32') * -1
+ reg_weight = np.zeros((1, feat_size, feat_size), dtype='float32')
+
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+
+ bbox_w = gt_bbox[:, 2] - gt_bbox[:, 0] + 1
+ bbox_h = gt_bbox[:, 3] - gt_bbox[:, 1] + 1
+ area = bbox_w * bbox_h
+ boxes_areas_log = np.log(area)
+ boxes_ind = np.argsort(boxes_areas_log, axis=0)[::-1]
+ boxes_area_topk_log = boxes_areas_log[boxes_ind]
+ gt_bbox = gt_bbox[boxes_ind]
+ gt_class = gt_class[boxes_ind]
+
+ feat_gt_bbox = gt_bbox / self.down_ratio
+ feat_gt_bbox = np.clip(feat_gt_bbox, 0, feat_size - 1)
+ feat_hs, feat_ws = (feat_gt_bbox[:, 3] - feat_gt_bbox[:, 1],
+ feat_gt_bbox[:, 2] - feat_gt_bbox[:, 0])
+
+ ct_inds = np.stack(
+ [(gt_bbox[:, 0] + gt_bbox[:, 2]) / 2,
+ (gt_bbox[:, 1] + gt_bbox[:, 3]) / 2],
+ axis=1) / self.down_ratio
+
+ h_radiuses_alpha = (feat_hs / 2. * self.alpha).astype('int32')
+ w_radiuses_alpha = (feat_ws / 2. * self.alpha).astype('int32')
+
+ for k in range(len(gt_bbox)):
+ cls_id = gt_class[k]
+ fake_heatmap = np.zeros((feat_size, feat_size), dtype='float32')
+ self.draw_truncate_gaussian(fake_heatmap, ct_inds[k],
+ h_radiuses_alpha[k],
+ w_radiuses_alpha[k])
+
+ heatmap[cls_id] = np.maximum(heatmap[cls_id], fake_heatmap)
+ box_target_inds = fake_heatmap > 0
+ box_target[:, box_target_inds] = gt_bbox[k][:, None]
+
+ local_heatmap = fake_heatmap[box_target_inds]
+ ct_div = np.sum(local_heatmap)
+ local_heatmap *= boxes_area_topk_log[k]
+ reg_weight[0, box_target_inds] = local_heatmap / ct_div
+ sample['ttf_heatmap'] = heatmap
+ sample['ttf_box_target'] = box_target
+ sample['ttf_reg_weight'] = reg_weight
+ sample.pop('is_crowd', None)
+ sample.pop('difficult', None)
+ sample.pop('gt_class', None)
+ sample.pop('gt_bbox', None)
+ sample.pop('gt_score', None)
+ return samples
+
+ def draw_truncate_gaussian(self, heatmap, center, h_radius, w_radius):
+ h, w = 2 * h_radius + 1, 2 * w_radius + 1
+ sigma_x = w / 6
+ sigma_y = h / 6
+ gaussian = gaussian2D((h, w), sigma_x, sigma_y)
+
+ x, y = int(center[0]), int(center[1])
+
+ height, width = heatmap.shape[0:2]
+
+ left, right = min(x, w_radius), min(width - x, w_radius + 1)
+ top, bottom = min(y, h_radius), min(height - y, h_radius + 1)
+
+ masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
+ masked_gaussian = gaussian[h_radius - top:h_radius + bottom, w_radius -
+ left:w_radius + right]
+ if min(masked_gaussian.shape) > 0 and min(masked_heatmap.shape) > 0:
+ heatmap[y - top:y + bottom, x - left:x + right] = np.maximum(
+ masked_heatmap, masked_gaussian)
+ return heatmap
+
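+# Rough numeric sketch for Gt2TTFTarget (illustrative, assuming the companion
+# gaussian2D helper peaks at 1.0 at its centre): a box 9 cells tall and wide
+# on the feature map gives h_radius = w_radius = int(9 / 2 * 0.54) = 2, so a
+# 5x5 truncated Gaussian patch is pasted around the box centre.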
+
+@register_op
+class Gt2Solov2Target(BaseOperator):
+ """Assign mask target and labels in SOLOv2 network.
+ The code of this function is based on:
+ https://github.com/WXinlong/SOLO/blob/master/mmdet/models/anchor_heads/solov2_head.py#L271
+ Args:
+ num_grids (list): The list of feature map grids size.
+ scale_ranges (list): The list of mask boundary range.
+ coord_sigma (float): The coefficient of coordinate area length.
+ sampling_ratio (float): The ratio of down sampling.
+ """
+
+ def __init__(self,
+ num_grids=[40, 36, 24, 16, 12],
+ scale_ranges=[[1, 96], [48, 192], [96, 384], [192, 768],
+ [384, 2048]],
+ coord_sigma=0.2,
+ sampling_ratio=4.0):
+ super(Gt2Solov2Target, self).__init__()
+ self.num_grids = num_grids
+ self.scale_ranges = scale_ranges
+ self.coord_sigma = coord_sigma
+ self.sampling_ratio = sampling_ratio
+
+ def _scale_size(self, im, scale):
+ h, w = im.shape[:2]
+ new_size = (int(w * float(scale) + 0.5), int(h * float(scale) + 0.5))
+ resized_img = cv2.resize(
+ im, None, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
+ return resized_img
+
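+    # Sketch (illustrative): with sampling_ratio=4, _scale_size shrinks a
+    # 512x512 instance mask to 128x128 (cv2.resize with fx = fy = 0.25),
+    # matching the mask_feat_size computed in __call__ below.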
+ def __call__(self, samples, context=None):
+ sample_id = 0
+ max_ins_num = [0] * len(self.num_grids)
+ for sample in samples:
+ gt_bboxes_raw = sample['gt_bbox']
+ gt_labels_raw = sample['gt_class'] + 1
+ im_c, im_h, im_w = sample['image'].shape[:]
+ gt_masks_raw = sample['gt_segm'].astype(np.uint8)
+ mask_feat_size = [
+ int(im_h / self.sampling_ratio), int(im_w / self.sampling_ratio)
+ ]
+ gt_areas = np.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) *
+ (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1]))
+ ins_ind_label_list = []
+ idx = 0
+ for (lower_bound, upper_bound), num_grid \
+ in zip(self.scale_ranges, self.num_grids):
+
+ hit_indices = ((gt_areas >= lower_bound) &
+ (gt_areas <= upper_bound)).nonzero()[0]
+ num_ins = len(hit_indices)
+
+ ins_label = []
+ grid_order = []
+ cate_label = np.zeros([num_grid, num_grid], dtype=np.int64)
+                # use the builtin bool: np.bool is removed in modern NumPy
+                ins_ind_label = np.zeros([num_grid**2], dtype=bool)
+
+ if num_ins == 0:
+ ins_label = np.zeros(
+ [1, mask_feat_size[0], mask_feat_size[1]],
+ dtype=np.uint8)
+ ins_ind_label_list.append(ins_ind_label)
+ sample['cate_label{}'.format(idx)] = cate_label.flatten()
+ sample['ins_label{}'.format(idx)] = ins_label
+ sample['grid_order{}'.format(idx)] = np.asarray(
+ [sample_id * num_grid * num_grid + 0], dtype=np.int32)
+ idx += 1
+ continue
+ gt_bboxes = gt_bboxes_raw[hit_indices]
+ gt_labels = gt_labels_raw[hit_indices]
+ gt_masks = gt_masks_raw[hit_indices, ...]
+
+ half_ws = 0.5 * (
+ gt_bboxes[:, 2] - gt_bboxes[:, 0]) * self.coord_sigma
+ half_hs = 0.5 * (
+ gt_bboxes[:, 3] - gt_bboxes[:, 1]) * self.coord_sigma
+
+ for seg_mask, gt_label, half_h, half_w in zip(
+ gt_masks, gt_labels, half_hs, half_ws):
+ if seg_mask.sum() == 0:
+ continue
+ # mass center
+ upsampled_size = (mask_feat_size[0] * 4,
+ mask_feat_size[1] * 4)
+ center_h, center_w = ndimage.measurements.center_of_mass(
+ seg_mask)
+ coord_w = int(
+ (center_w / upsampled_size[1]) // (1. / num_grid))
+ coord_h = int(
+ (center_h / upsampled_size[0]) // (1. / num_grid))
+
+ # left, top, right, down
+ top_box = max(0,
+ int(((center_h - half_h) / upsampled_size[0])
+ // (1. / num_grid)))
+ down_box = min(num_grid - 1,
+ int(((center_h + half_h) / upsampled_size[0])
+ // (1. / num_grid)))
+ left_box = max(0,
+ int(((center_w - half_w) / upsampled_size[1])
+ // (1. / num_grid)))
+ right_box = min(num_grid - 1,
+ int(((center_w + half_w) /
+ upsampled_size[1]) // (1. / num_grid)))
+
+ top = max(top_box, coord_h - 1)
+ down = min(down_box, coord_h + 1)
+ left = max(coord_w - 1, left_box)
+ right = min(right_box, coord_w + 1)
+
+ cate_label[top:(down + 1), left:(right + 1)] = gt_label
+ seg_mask = self._scale_size(
+ seg_mask, scale=1. / self.sampling_ratio)
+ for i in range(top, down + 1):
+ for j in range(left, right + 1):
+ label = int(i * num_grid + j)
+ cur_ins_label = np.zeros(
+ [mask_feat_size[0], mask_feat_size[1]],
+ dtype=np.uint8)
+ cur_ins_label[:seg_mask.shape[0], :seg_mask.shape[
+ 1]] = seg_mask
+ ins_label.append(cur_ins_label)
+ ins_ind_label[label] = True
+ grid_order.append(sample_id * num_grid * num_grid +
+ label)
+ if ins_label == []:
+ ins_label = np.zeros(
+ [1, mask_feat_size[0], mask_feat_size[1]],
+ dtype=np.uint8)
+ ins_ind_label_list.append(ins_ind_label)
+ sample['cate_label{}'.format(idx)] = cate_label.flatten()
+ sample['ins_label{}'.format(idx)] = ins_label
+ sample['grid_order{}'.format(idx)] = np.asarray(
+ [sample_id * num_grid * num_grid + 0], dtype=np.int32)
+ else:
+ ins_label = np.stack(ins_label, axis=0)
+ ins_ind_label_list.append(ins_ind_label)
+ sample['cate_label{}'.format(idx)] = cate_label.flatten()
+ sample['ins_label{}'.format(idx)] = ins_label
+ sample['grid_order{}'.format(idx)] = np.asarray(
+ grid_order, dtype=np.int32)
+ assert len(grid_order) > 0
+ max_ins_num[idx] = max(
+ max_ins_num[idx],
+ sample['ins_label{}'.format(idx)].shape[0])
+ idx += 1
+ ins_ind_labels = np.concatenate([
+ ins_ind_labels_level_img
+ for ins_ind_labels_level_img in ins_ind_label_list
+ ])
+ fg_num = np.sum(ins_ind_labels)
+ sample['fg_num'] = fg_num
+ sample_id += 1
+
+ sample.pop('is_crowd')
+ sample.pop('gt_class')
+ sample.pop('gt_bbox')
+ sample.pop('gt_poly')
+ sample.pop('gt_segm')
+
+ # padding batch
+ for data in samples:
+ for idx in range(len(self.num_grids)):
+ gt_ins_data = np.zeros(
+ [
+ max_ins_num[idx],
+ data['ins_label{}'.format(idx)].shape[1],
+ data['ins_label{}'.format(idx)].shape[2]
+ ],
+ dtype=np.uint8)
+ gt_ins_data[0:data['ins_label{}'.format(idx)].shape[
+ 0], :, :] = data['ins_label{}'.format(idx)]
+ gt_grid_order = np.zeros([max_ins_num[idx]], dtype=np.int32)
+ gt_grid_order[0:data['grid_order{}'.format(idx)].shape[
+ 0]] = data['grid_order{}'.format(idx)]
+ data['ins_label{}'.format(idx)] = gt_ins_data
+ data['grid_order{}'.format(idx)] = gt_grid_order
+
+ return samples
+
+
+@register_op
+class Gt2SparseRCNNTarget(BaseOperator):
+ '''
+    Generate SparseRCNN targets from ground-truth data
+ '''
+
+ def __init__(self):
+ super(Gt2SparseRCNNTarget, self).__init__()
+
+ def __call__(self, samples, context=None):
+ for sample in samples:
+ im = sample["image"]
+ h, w = im.shape[1:3]
+ img_whwh = np.array([w, h, w, h], dtype=np.int32)
+ sample["img_whwh"] = img_whwh
+ if "scale_factor" in sample:
+ sample["scale_factor_wh"] = np.array(
+ [sample["scale_factor"][1], sample["scale_factor"][0]],
+ dtype=np.float32)
+ else:
+ sample["scale_factor_wh"] = np.array(
+ [1.0, 1.0], dtype=np.float32)
+
+ return samples
+
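+# Illustrative note (not from the original source): for a CHW image of shape
+# (3, 800, 1333), Gt2SparseRCNNTarget sets sample["img_whwh"] = [1333, 800,
+# 1333, 800]; scale_factor_wh simply reorders scale_factor from (h, w) to (w, h).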
+
+@register_op
+class PadMaskBatch(BaseOperator):
+ """
+    Pad a batch of samples so that their heights and widths are divisible by a stride.
+ The layout of each image should be 'CHW'.
+ Args:
+ pad_to_stride (int): If `pad_to_stride > 0`, pad zeros to ensure
+ height and width is divisible by `pad_to_stride`.
+ return_pad_mask (bool): If `return_pad_mask = True`, return
+ `pad_mask` for transformer.
+ """
+
+ def __init__(self, pad_to_stride=0, return_pad_mask=False):
+ super(PadMaskBatch, self).__init__()
+ self.pad_to_stride = pad_to_stride
+ self.return_pad_mask = return_pad_mask
+
+ def __call__(self, samples, context=None):
+ """
+ Args:
+ samples (list): a batch of sample, each is dict.
+ """
+ coarsest_stride = self.pad_to_stride
+
+ max_shape = np.array([data['image'].shape for data in samples]).max(
+ axis=0)
+ if coarsest_stride > 0:
+ max_shape[1] = int(
+ np.ceil(max_shape[1] / coarsest_stride) * coarsest_stride)
+ max_shape[2] = int(
+ np.ceil(max_shape[2] / coarsest_stride) * coarsest_stride)
+
+ for data in samples:
+ im = data['image']
+ im_c, im_h, im_w = im.shape[:]
+ padding_im = np.zeros(
+ (im_c, max_shape[1], max_shape[2]), dtype=np.float32)
+ padding_im[:, :im_h, :im_w] = im
+ data['image'] = padding_im
+ if 'semantic' in data and data['semantic'] is not None:
+ semantic = data['semantic']
+ padding_sem = np.zeros(
+ (1, max_shape[1], max_shape[2]), dtype=np.float32)
+ padding_sem[:, :im_h, :im_w] = semantic
+ data['semantic'] = padding_sem
+ if 'gt_segm' in data and data['gt_segm'] is not None:
+ gt_segm = data['gt_segm']
+ padding_segm = np.zeros(
+ (gt_segm.shape[0], max_shape[1], max_shape[2]),
+ dtype=np.uint8)
+ padding_segm[:, :im_h, :im_w] = gt_segm
+ data['gt_segm'] = padding_segm
+ if self.return_pad_mask:
+ padding_mask = np.zeros(
+ (max_shape[1], max_shape[2]), dtype=np.float32)
+ padding_mask[:im_h, :im_w] = 1.
+ data['pad_mask'] = padding_mask
+
+ if 'gt_rbox2poly' in data and data['gt_rbox2poly'] is not None:
+                # poly to rbox
+ polys = data['gt_rbox2poly']
+ rbox = bbox_utils.poly2rbox(polys)
+ data['gt_rbox'] = rbox
+
+ return samples
+
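+# Worked example (illustrative): with pad_to_stride=32, images of shapes
+# (3, 500, 600) and (3, 480, 640) in one batch are zero-padded to
+# (3, 512, 640), since ceil(500/32)*32 = 512 and ceil(640/32)*32 = 640; with
+# return_pad_mask=True, 'pad_mask' is 1.0 over each image's original extent.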
+
+@register_op
+class Gt2CenterNetTarget(BaseOperator):
+ """Gt2CenterNetTarget
+    Generate CenterNet targets from ground-truth data
+ Args:
+ down_ratio (int): The down sample ratio between output feature and
+ input image.
+ num_classes (int): The number of classes, 80 by default.
+ max_objs (int): The maximum objects detected, 128 by default.
+ """
+
+ def __init__(self, down_ratio, num_classes=80, max_objs=128):
+ super(Gt2CenterNetTarget, self).__init__()
+ self.down_ratio = down_ratio
+ self.num_classes = num_classes
+ self.max_objs = max_objs
+
+ def __call__(self, sample, context=None):
+ input_h, input_w = sample['image'].shape[1:]
+ output_h = input_h // self.down_ratio
+ output_w = input_w // self.down_ratio
+ num_classes = self.num_classes
+ c = sample['center']
+ s = sample['scale']
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+
+ hm = np.zeros((num_classes, output_h, output_w), dtype=np.float32)
+ wh = np.zeros((self.max_objs, 2), dtype=np.float32)
+ dense_wh = np.zeros((2, output_h, output_w), dtype=np.float32)
+ reg = np.zeros((self.max_objs, 2), dtype=np.float32)
+ ind = np.zeros((self.max_objs), dtype=np.int64)
+ reg_mask = np.zeros((self.max_objs), dtype=np.int32)
+ cat_spec_wh = np.zeros(
+ (self.max_objs, num_classes * 2), dtype=np.float32)
+ cat_spec_mask = np.zeros(
+ (self.max_objs, num_classes * 2), dtype=np.int32)
+
+ trans_output = get_affine_transform(c, [s, s], 0, [output_w, output_h])
+
+ gt_det = []
+ for i, (bbox, cls) in enumerate(zip(gt_bbox, gt_class)):
+ cls = int(cls)
+ bbox[:2] = affine_transform(bbox[:2], trans_output)
+ bbox[2:] = affine_transform(bbox[2:], trans_output)
+ bbox[[0, 2]] = np.clip(bbox[[0, 2]], 0, output_w - 1)
+ bbox[[1, 3]] = np.clip(bbox[[1, 3]], 0, output_h - 1)
+ h, w = bbox[3] - bbox[1], bbox[2] - bbox[0]
+ if h > 0 and w > 0:
+ radius = gaussian_radius((math.ceil(h), math.ceil(w)), 0.7)
+ radius = max(0, int(radius))
+ ct = np.array(
+ [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2],
+ dtype=np.float32)
+ ct_int = ct.astype(np.int32)
+ draw_umich_gaussian(hm[cls], ct_int, radius)
+ wh[i] = 1. * w, 1. * h
+ ind[i] = ct_int[1] * output_w + ct_int[0]
+ reg[i] = ct - ct_int
+ reg_mask[i] = 1
+ cat_spec_wh[i, cls * 2:cls * 2 + 2] = wh[i]
+ cat_spec_mask[i, cls * 2:cls * 2 + 2] = 1
+ gt_det.append([
+ ct[0] - w / 2, ct[1] - h / 2, ct[0] + w / 2, ct[1] + h / 2,
+ 1, cls
+ ])
+
+ sample.pop('gt_bbox', None)
+ sample.pop('gt_class', None)
+ sample.pop('center', None)
+ sample.pop('scale', None)
+ sample.pop('is_crowd', None)
+ sample.pop('difficult', None)
+ sample['heatmap'] = hm
+ sample['index_mask'] = reg_mask
+ sample['index'] = ind
+ sample['size'] = wh
+ sample['offset'] = reg
+ return sample
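+
+# Illustrative note: 'index' flattens each object's centre on the down-sampled
+# map as y * output_w + x, so a centre at (x=30, y=20) with output_w=128 maps
+# to index 20 * 128 + 30 = 2590; 'index_mask' marks which of the max_objs
+# slots are actually used.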
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/gridmask_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/gridmask_utils.py
new file mode 100644
index 000000000..c18701556
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/gridmask_utils.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/dvlab-research/GridMask/blob/master/detection_grid/maskrcnn_benchmark/data/transforms/grid.py
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import numpy as np
+from PIL import Image
+
+
+class Gridmask(object):
+ def __init__(self,
+ use_h=True,
+ use_w=True,
+ rotate=1,
+ offset=False,
+ ratio=0.5,
+ mode=1,
+ prob=0.7,
+ upper_iter=360000):
+ super(Gridmask, self).__init__()
+ self.use_h = use_h
+ self.use_w = use_w
+ self.rotate = rotate
+ self.offset = offset
+ self.ratio = ratio
+ self.mode = mode
+ self.prob = prob
+ self.st_prob = prob
+ self.upper_iter = upper_iter
+
+ def __call__(self, x, curr_iter):
+ self.prob = self.st_prob * min(1, 1.0 * curr_iter / self.upper_iter)
+ if np.random.rand() > self.prob:
+ return x
+ h, w, _ = x.shape
+ hh = int(1.5 * h)
+ ww = int(1.5 * w)
+ d = np.random.randint(2, h)
+ self.l = min(max(int(d * self.ratio + 0.5), 1), d - 1)
+ mask = np.ones((hh, ww), np.float32)
+ st_h = np.random.randint(d)
+ st_w = np.random.randint(d)
+ if self.use_h:
+ for i in range(hh // d):
+ s = d * i + st_h
+ t = min(s + self.l, hh)
+ mask[s:t, :] *= 0
+ if self.use_w:
+ for i in range(ww // d):
+ s = d * i + st_w
+ t = min(s + self.l, ww)
+ mask[:, s:t] *= 0
+
+ r = np.random.randint(self.rotate)
+ mask = Image.fromarray(np.uint8(mask))
+ mask = mask.rotate(r)
+ mask = np.asarray(mask)
+ mask = mask[(hh - h) // 2:(hh - h) // 2 + h, (ww - w) // 2:(ww - w) // 2
+ + w].astype(np.float32)
+
+ if self.mode == 1:
+ mask = 1 - mask
+ mask = np.expand_dims(mask, axis=-1)
+ if self.offset:
+ offset = (2 * (np.random.rand(h, w) - 0.5)).astype(np.float32)
+ x = (x * mask + offset * (1 - mask)).astype(x.dtype)
+ else:
+ x = (x * mask).astype(x.dtype)
+
+ return x
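+
+# Usage sketch (illustrative): prob ramps linearly from 0 to st_prob over
+# upper_iter steps, e.g.
+#   aug = Gridmask(prob=0.7, upper_iter=360000)
+#   out = aug(img, curr_iter=1000)   # img: HxWxC ndarray
+# mode=1 inverts the grid mask before it multiplies the image; offset=True
+# fills the dropped region with random values instead of zeros.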
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/keypoint_operators.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/keypoint_operators.py
new file mode 100644
index 000000000..81770b63e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/keypoint_operators.py
@@ -0,0 +1,859 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# function:
+# operators to process sample,
+# eg: decode/resize/crop image
+
+from __future__ import absolute_import
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+
+import cv2
+import numpy as np
+import math
+import copy
+
+from ...modeling.keypoint_utils import get_affine_mat_kernel, warp_affine_joints, get_affine_transform, affine_transform, get_warp_matrix
+from ppdet.core.workspace import serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+registered_ops = []
+
+__all__ = [
+ 'RandomAffine',
+ 'KeyPointFlip',
+ 'TagGenerate',
+ 'ToHeatmaps',
+ 'NormalizePermute',
+ 'EvalAffine',
+ 'RandomFlipHalfBodyTransform',
+ 'TopDownAffine',
+ 'ToHeatmapsTopDown',
+ 'ToHeatmapsTopDown_DARK',
+ 'ToHeatmapsTopDown_UDP',
+ 'TopDownEvalAffine',
+ 'AugmentationbyInformantionDropping',
+]
+
+
+def register_keypointop(cls):
+ return serializable(cls)
+
+
+@register_keypointop
+class KeyPointFlip(object):
+ """Get the fliped image by flip_prob. flip the coords also
+ the left coords and right coords should exchange while flip, for the right keypoint will be left keypoint after image fliped
+
+ Args:
+ flip_permutation (list[17]): the left-right exchange order list corresponding to [0,1,2,...,16]
+ hmsize (list[2]): output heatmap's shape list of different scale outputs of higherhrnet
+ flip_prob (float): the ratio whether to flip the image
+ records(dict): the dict contained the image, mask and coords
+
+ Returns:
+ records(dict): contain the image, mask and coords after tranformed
+
+ """
+
+ def __init__(self, flip_permutation, hmsize, flip_prob=0.5):
+ super(KeyPointFlip, self).__init__()
+ assert isinstance(flip_permutation, Sequence)
+ self.flip_permutation = flip_permutation
+ self.flip_prob = flip_prob
+ self.hmsize = hmsize
+
+ def __call__(self, records):
+ image = records['image']
+ kpts_lst = records['joints']
+ mask_lst = records['mask']
+ flip = np.random.random() < self.flip_prob
+ if flip:
+ image = image[:, ::-1]
+ for idx, hmsize in enumerate(self.hmsize):
+ if len(mask_lst) > idx:
+ mask_lst[idx] = mask_lst[idx][:, ::-1]
+ if kpts_lst[idx].ndim == 3:
+ kpts_lst[idx] = kpts_lst[idx][:, self.flip_permutation]
+ else:
+ kpts_lst[idx] = kpts_lst[idx][self.flip_permutation]
+ kpts_lst[idx][..., 0] = hmsize - kpts_lst[idx][..., 0]
+ kpts_lst[idx] = kpts_lst[idx].astype(np.int64)
+ kpts_lst[idx][kpts_lst[idx][..., 0] >= hmsize, 2] = 0
+ kpts_lst[idx][kpts_lst[idx][..., 1] >= hmsize, 2] = 0
+ kpts_lst[idx][kpts_lst[idx][..., 0] < 0, 2] = 0
+ kpts_lst[idx][kpts_lst[idx][..., 1] < 0, 2] = 0
+ records['image'] = image
+ records['joints'] = kpts_lst
+ records['mask'] = mask_lst
+ return records
+
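+# Illustrative note: for COCO-style 17 keypoints a typical flip_permutation is
+# [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15], swapping each
+# left/right pair (eyes, ears, shoulders, elbows, wrists, hips, knees, ankles)
+# while the nose (index 0) stays fixed.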
+
+@register_keypointop
+class RandomAffine(object):
+ """apply affine transform to image, mask and coords
+ to achieve the rotate, scale and shift effect for training image
+
+ Args:
+ max_degree (float): the max abslute rotate degree to apply, transform range is [-max_degree, max_degree]
+ max_scale (list[2]): the scale range to apply, transform range is [min, max]
+ max_shift (float): the max abslute shift ratio to apply, transform range is [-max_shift*imagesize, max_shift*imagesize]
+ hmsize (list[2]): output heatmap's shape list of different scale outputs of higherhrnet
+ trainsize (int): the standard length used to train, the 'scale_type' of [h,w] will be resize to trainsize for standard
+ scale_type (str): the length of [h,w] to used for trainsize, chosed between 'short' and 'long'
+ records(dict): the dict contained the image, mask and coords
+
+ Returns:
+ records(dict): contain the image, mask and coords after tranformed
+
+ """
+
+ def __init__(self,
+ max_degree=30,
+ scale=[0.75, 1.5],
+ max_shift=0.2,
+ hmsize=[128, 256],
+ trainsize=512,
+ scale_type='short'):
+ super(RandomAffine, self).__init__()
+ self.max_degree = max_degree
+ self.min_scale = scale[0]
+ self.max_scale = scale[1]
+ self.max_shift = max_shift
+ self.hmsize = hmsize
+ self.trainsize = trainsize
+ self.scale_type = scale_type
+
+ def _get_affine_matrix(self, center, scale, res, rot=0):
+ """Generate transformation matrix."""
+ h = scale
+ t = np.zeros((3, 3), dtype=np.float32)
+ t[0, 0] = float(res[1]) / h
+ t[1, 1] = float(res[0]) / h
+ t[0, 2] = res[1] * (-float(center[0]) / h + .5)
+ t[1, 2] = res[0] * (-float(center[1]) / h + .5)
+ t[2, 2] = 1
+ if rot != 0:
+ rot = -rot # To match direction of rotation from cropping
+ rot_mat = np.zeros((3, 3), dtype=np.float32)
+ rot_rad = rot * np.pi / 180
+ sn, cs = np.sin(rot_rad), np.cos(rot_rad)
+ rot_mat[0, :2] = [cs, -sn]
+ rot_mat[1, :2] = [sn, cs]
+ rot_mat[2, 2] = 1
+ # Need to rotate around center
+ t_mat = np.eye(3)
+ t_mat[0, 2] = -res[1] / 2
+ t_mat[1, 2] = -res[0] / 2
+ t_inv = t_mat.copy()
+ t_inv[:2, 2] *= -1
+ t = np.dot(t_inv, np.dot(rot_mat, np.dot(t_mat, t)))
+ return t
+
+ def __call__(self, records):
+ image = records['image']
+ keypoints = records['joints']
+ heatmap_mask = records['mask']
+
+ degree = (np.random.random() * 2 - 1) * self.max_degree
+ shape = np.array(image.shape[:2][::-1])
+        center = np.array(shape) / 2
+
+ aug_scale = np.random.random() * (self.max_scale - self.min_scale
+ ) + self.min_scale
+ if self.scale_type == 'long':
+ scale = max(shape[0], shape[1]) / 1.0
+ elif self.scale_type == 'short':
+ scale = min(shape[0], shape[1]) / 1.0
+ else:
+ raise ValueError('Unknown scale type: {}'.format(self.scale_type))
+ roi_size = aug_scale * scale
+ dx = int(0)
+ dy = int(0)
+ if self.max_shift > 0:
+
+ dx = np.random.randint(-self.max_shift * roi_size,
+ self.max_shift * roi_size)
+ dy = np.random.randint(-self.max_shift * roi_size,
+ self.max_shift * roi_size)
+
+ center += np.array([dx, dy])
+ input_size = 2 * center
+
+ keypoints[..., :2] *= shape
+ heatmap_mask *= 255
+ kpts_lst = []
+ mask_lst = []
+
+ image_affine_mat = self._get_affine_matrix(
+ center, roi_size, (self.trainsize, self.trainsize), degree)[:2]
+ image = cv2.warpAffine(
+ image,
+ image_affine_mat, (self.trainsize, self.trainsize),
+ flags=cv2.INTER_LINEAR)
+ for hmsize in self.hmsize:
+ kpts = copy.deepcopy(keypoints)
+ mask_affine_mat = self._get_affine_matrix(
+ center, roi_size, (hmsize, hmsize), degree)[:2]
+ if heatmap_mask is not None:
+ mask = cv2.warpAffine(heatmap_mask, mask_affine_mat,
+ (hmsize, hmsize))
+ mask = ((mask / 255) > 0.5).astype(np.float32)
+ kpts[..., 0:2] = warp_affine_joints(kpts[..., 0:2].copy(),
+ mask_affine_mat)
+ kpts[np.trunc(kpts[..., 0]) >= hmsize, 2] = 0
+ kpts[np.trunc(kpts[..., 1]) >= hmsize, 2] = 0
+ kpts[np.trunc(kpts[..., 0]) < 0, 2] = 0
+ kpts[np.trunc(kpts[..., 1]) < 0, 2] = 0
+ kpts_lst.append(kpts)
+ mask_lst.append(mask)
+ records['image'] = image
+ records['joints'] = kpts_lst
+ records['mask'] = mask_lst
+ return records
+
+
+@register_keypointop
+class EvalAffine(object):
+ """apply affine transform to image
+ resize the short of [h,w] to standard size for eval
+
+ Args:
+ size (int): the standard length used to train, the 'short' of [h,w] will be resize to trainsize for standard
+ records(dict): the dict contained the image, mask and coords
+
+ Returns:
+ records(dict): contain the image, mask and coords after tranformed
+
+ """
+
+ def __init__(self, size, stride=64):
+ super(EvalAffine, self).__init__()
+ self.size = size
+ self.stride = stride
+
+ def __call__(self, records):
+ image = records['image']
+ mask = records['mask'] if 'mask' in records else None
+ s = self.size
+ h, w, _ = image.shape
+ trans, size_resized = get_affine_mat_kernel(h, w, s, inv=False)
+ image_resized = cv2.warpAffine(image, trans, size_resized)
+ if mask is not None:
+ mask = cv2.warpAffine(mask, trans, size_resized)
+ records['mask'] = mask
+ if 'joints' in records:
+ del records['joints']
+ records['image'] = image_resized
+ return records
+
+
+@register_keypointop
+class NormalizePermute(object):
+ def __init__(self,
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.120, 57.375],
+ is_scale=True):
+ super(NormalizePermute, self).__init__()
+ self.mean = mean
+ self.std = std
+ self.is_scale = is_scale
+
+ def __call__(self, records):
+ image = records['image']
+ image = image.astype(np.float32)
+ if self.is_scale:
+ image /= 255.
+ image = image.transpose((2, 0, 1))
+ mean = np.array(self.mean, dtype=np.float32)
+ std = np.array(self.std, dtype=np.float32)
+ invstd = 1. / std
+ for v, m, s in zip(image, mean, invstd):
+ v.__isub__(m).__imul__(s)
+ records['image'] = image
+ return records
+
+
+@register_keypointop
+class TagGenerate(object):
+ """record gt coords for aeloss to sample coords value in tagmaps
+
+ Args:
+ num_joints (int): the keypoint numbers of dataset to train
+ num_people (int): maxmum people to support for sample aeloss
+ records(dict): the dict contained the image, mask and coords
+
+ Returns:
+ records(dict): contain the gt coords used in tagmap
+
+ """
+
+ def __init__(self, num_joints, max_people=30):
+ super(TagGenerate, self).__init__()
+ self.max_people = max_people
+ self.num_joints = num_joints
+
+ def __call__(self, records):
+ kpts_lst = records['joints']
+ kpts = kpts_lst[0]
+ tagmap = np.zeros((self.max_people, self.num_joints, 4), dtype=np.int64)
+ inds = np.where(kpts[..., 2] > 0)
+ p, j = inds[0], inds[1]
+ visible = kpts[inds]
+        # tagmap is [max_people, num_joints, 4]; the last dim stores (j, y, x, valid)
+ tagmap[p, j, 0] = j
+ tagmap[p, j, 1] = visible[..., 1] # y
+ tagmap[p, j, 2] = visible[..., 0] # x
+ tagmap[p, j, 3] = 1
+ records['tagmap'] = tagmap
+ del records['joints']
+ return records
+
+
+@register_keypointop
+class ToHeatmaps(object):
+ """to generate the gaussin heatmaps of keypoint for heatmap loss
+
+ Args:
+ num_joints (int): the keypoint numbers of dataset to train
+ hmsize (list[2]): output heatmap's shape list of different scale outputs of higherhrnet
+ sigma (float): the std of gaussin kernel genereted
+ records(dict): the dict contained the image, mask and coords
+
+ Returns:
+ records(dict): contain the heatmaps used to heatmaploss
+
+ """
+
+ def __init__(self, num_joints, hmsize, sigma=None):
+ super(ToHeatmaps, self).__init__()
+ self.num_joints = num_joints
+ self.hmsize = np.array(hmsize)
+ if sigma is None:
+ sigma = hmsize[0] // 64
+ self.sigma = sigma
+
+ r = 6 * sigma + 3
+ x = np.arange(0, r, 1, np.float32)
+ y = x[:, None]
+ x0, y0 = 3 * sigma + 1, 3 * sigma + 1
+ self.gaussian = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
+
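+    # Illustrative note: with sigma=2 the precomputed kernel is 15x15
+    # (r = 6*sigma + 3) and peaks at exactly 1.0 at (x0, y0) = (7, 7),
+    # i.e. at (3*sigma + 1, 3*sigma + 1).
+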
+ def __call__(self, records):
+ kpts_lst = records['joints']
+ mask_lst = records['mask']
+ for idx, hmsize in enumerate(self.hmsize):
+ mask = mask_lst[idx]
+ kpts = kpts_lst[idx]
+ heatmaps = np.zeros((self.num_joints, hmsize, hmsize))
+ inds = np.where(kpts[..., 2] > 0)
+ visible = kpts[inds].astype(np.int64)[..., :2]
+ ul = np.round(visible - 3 * self.sigma - 1)
+ br = np.round(visible + 3 * self.sigma + 2)
+ sul = np.maximum(0, -ul)
+ sbr = np.minimum(hmsize, br) - ul
+ dul = np.clip(ul, 0, hmsize - 1)
+ dbr = np.clip(br, 0, hmsize)
+ for i in range(len(visible)):
+ if visible[i][0] < 0 or visible[i][1] < 0 or visible[i][
+ 0] >= hmsize or visible[i][1] >= hmsize:
+ continue
+ dx1, dy1 = dul[i]
+ dx2, dy2 = dbr[i]
+ sx1, sy1 = sul[i]
+ sx2, sy2 = sbr[i]
+ heatmaps[inds[1][i], dy1:dy2, dx1:dx2] = np.maximum(
+ self.gaussian[sy1:sy2, sx1:sx2],
+ heatmaps[inds[1][i], dy1:dy2, dx1:dx2])
+ records['heatmap_gt{}x'.format(idx + 1)] = heatmaps
+ records['mask_{}x'.format(idx + 1)] = mask
+ del records['mask']
+ return records
+
+
+@register_keypointop
+class RandomFlipHalfBodyTransform(object):
+ """apply data augment to image and coords
+ to achieve the flip, scale, rotate and half body transform effect for training image
+
+ Args:
+ trainsize (list):[w, h], Image target size
+ upper_body_ids (list): The upper body joint ids
+ flip_pairs (list): The left-right joints exchange order list
+ pixel_std (int): The pixel std of the scale
+ scale (float): The scale factor to transform the image
+ rot (int): The rotate factor to transform the image
+ num_joints_half_body (int): The joints threshold of the half body transform
+ prob_half_body (float): The threshold of the half body transform
+ flip (bool): Whether to flip the image
+
+ Returns:
+        records (dict): contains the image and coords after the transform
+
+ """
+
+ def __init__(self,
+ trainsize,
+ upper_body_ids,
+ flip_pairs,
+ pixel_std,
+ scale=0.35,
+ rot=40,
+ num_joints_half_body=8,
+ prob_half_body=0.3,
+ flip=True,
+ rot_prob=0.6):
+ super(RandomFlipHalfBodyTransform, self).__init__()
+ self.trainsize = trainsize
+ self.upper_body_ids = upper_body_ids
+ self.flip_pairs = flip_pairs
+ self.pixel_std = pixel_std
+ self.scale = scale
+ self.rot = rot
+ self.num_joints_half_body = num_joints_half_body
+ self.prob_half_body = prob_half_body
+ self.flip = flip
+ self.aspect_ratio = trainsize[0] * 1.0 / trainsize[1]
+ self.rot_prob = rot_prob
+
+ def halfbody_transform(self, joints, joints_vis):
+ upper_joints = []
+ lower_joints = []
+ for joint_id in range(joints.shape[0]):
+ if joints_vis[joint_id][0] > 0:
+ if joint_id in self.upper_body_ids:
+ upper_joints.append(joints[joint_id])
+ else:
+ lower_joints.append(joints[joint_id])
+ if np.random.randn() < 0.5 and len(upper_joints) > 2:
+ selected_joints = upper_joints
+ else:
+ selected_joints = lower_joints if len(
+ lower_joints) > 2 else upper_joints
+ if len(selected_joints) < 2:
+ return None, None
+ selected_joints = np.array(selected_joints, dtype=np.float32)
+ center = selected_joints.mean(axis=0)[:2]
+ left_top = np.amin(selected_joints, axis=0)
+ right_bottom = np.amax(selected_joints, axis=0)
+ w = right_bottom[0] - left_top[0]
+ h = right_bottom[1] - left_top[1]
+ if w > self.aspect_ratio * h:
+ h = w * 1.0 / self.aspect_ratio
+ elif w < self.aspect_ratio * h:
+ w = h * self.aspect_ratio
+ scale = np.array(
+ [w * 1.0 / self.pixel_std, h * 1.0 / self.pixel_std],
+ dtype=np.float32)
+ scale = scale * 1.5
+
+ return center, scale
+
+ def flip_joints(self, joints, joints_vis, width, matched_parts):
+ joints[:, 0] = width - joints[:, 0] - 1
+ for pair in matched_parts:
+ joints[pair[0], :], joints[pair[1], :] = \
+ joints[pair[1], :], joints[pair[0], :].copy()
+ joints_vis[pair[0], :], joints_vis[pair[1], :] = \
+ joints_vis[pair[1], :], joints_vis[pair[0], :].copy()
+
+ return joints * joints_vis, joints_vis
+
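+    # Sketch (illustrative): flipping mirrors x as width - 1 - x, so on a
+    # 256-wide image a joint at x = 100 moves to x = 155; each (left, right)
+    # pair in matched_parts swaps rows in both joints and joints_vis.
+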
+ def __call__(self, records):
+ image = records['image']
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ c = records['center']
+ s = records['scale']
+ r = 0
+ if (np.sum(joints_vis[:, 0]) > self.num_joints_half_body and
+ np.random.rand() < self.prob_half_body):
+ c_half_body, s_half_body = self.halfbody_transform(joints,
+ joints_vis)
+ if c_half_body is not None and s_half_body is not None:
+ c, s = c_half_body, s_half_body
+ sf = self.scale
+ rf = self.rot
+ s = s * np.clip(np.random.randn() * sf + 1, 1 - sf, 1 + sf)
+ r = np.clip(np.random.randn() * rf, -rf * 2,
+ rf * 2) if np.random.random() <= self.rot_prob else 0
+
+ if self.flip and np.random.random() <= 0.5:
+ image = image[:, ::-1, :]
+ joints, joints_vis = self.flip_joints(
+ joints, joints_vis, image.shape[1], self.flip_pairs)
+ c[0] = image.shape[1] - c[0] - 1
+ records['image'] = image
+ records['joints'] = joints
+ records['joints_vis'] = joints_vis
+ records['center'] = c
+ records['scale'] = s
+ records['rotate'] = r
+
+ return records
+
+
+@register_keypointop
+class AugmentationbyInformantionDropping(object):
+ """AID: Augmentation by Informantion Dropping. Please refer
+ to https://arxiv.org/abs/2008.07139
+
+ Args:
+ prob_cutout (float): The probability of the Cutout augmentation.
+ offset_factor (float): Offset factor of cutout center.
+ num_patch (int): Number of patches to be cutout.
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the image and coords after tranformed
+
+ """
+
+ def __init__(self,
+ trainsize,
+ prob_cutout=0.0,
+ offset_factor=0.2,
+ num_patch=1):
+ self.prob_cutout = prob_cutout
+ self.offset_factor = offset_factor
+ self.num_patch = num_patch
+ self.trainsize = trainsize
+
+ def _cutout(self, img, joints, joints_vis):
+ height, width, _ = img.shape
+ img = img.reshape((height * width, -1))
+ feat_x_int = np.arange(0, width)
+ feat_y_int = np.arange(0, height)
+ feat_x_int, feat_y_int = np.meshgrid(feat_x_int, feat_y_int)
+ feat_x_int = feat_x_int.reshape((-1, ))
+ feat_y_int = feat_y_int.reshape((-1, ))
+ for _ in range(self.num_patch):
+ vis_idx, _ = np.where(joints_vis > 0)
+ occlusion_joint_id = np.random.choice(vis_idx)
+ center = joints[occlusion_joint_id, 0:2]
+ offset = np.random.randn(2) * self.trainsize[0] * self.offset_factor
+ center = center + offset
+ radius = np.random.uniform(0.1, 0.2) * self.trainsize[0]
+ x_offset = (center[0] - feat_x_int) / radius
+ y_offset = (center[1] - feat_y_int) / radius
+ dis = x_offset**2 + y_offset**2
+ keep_pos = np.where((dis <= 1) & (dis >= 0))[0]
+ img[keep_pos, :] = 0
+ img = img.reshape((height, width, -1))
+ return img
+
+ def __call__(self, records):
+ img = records['image']
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ if np.random.rand() < self.prob_cutout:
+ img = self._cutout(img, joints, joints_vis)
+ records['image'] = img
+ return records
+
+
+@register_keypointop
+class TopDownAffine(object):
+ """apply affine transform to image and coords
+
+ Args:
+ trainsize (list): [w, h], the standard size used to train
+ use_udp (bool): whether to use Unbiased Data Processing.
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the image and coords after tranformed
+
+ """
+
+ def __init__(self, trainsize, use_udp=False):
+ self.trainsize = trainsize
+ self.use_udp = use_udp
+
+ def __call__(self, records):
+ image = records['image']
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ rot = records['rotate'] if "rotate" in records else 0
+ if self.use_udp:
+ trans = get_warp_matrix(
+ rot, records['center'] * 2.0,
+ [self.trainsize[0] - 1.0, self.trainsize[1] - 1.0],
+ records['scale'] * 200.0)
+ image = cv2.warpAffine(
+ image,
+ trans, (int(self.trainsize[0]), int(self.trainsize[1])),
+ flags=cv2.INTER_LINEAR)
+ joints[:, 0:2] = warp_affine_joints(joints[:, 0:2].copy(), trans)
+ else:
+ trans = get_affine_transform(records['center'], records['scale'] *
+ 200, rot, self.trainsize)
+ image = cv2.warpAffine(
+ image,
+ trans, (int(self.trainsize[0]), int(self.trainsize[1])),
+ flags=cv2.INTER_LINEAR)
+ for i in range(joints.shape[0]):
+ if joints_vis[i, 0] > 0.0:
+ joints[i, 0:2] = affine_transform(joints[i, 0:2], trans)
+
+ records['image'] = image
+ records['joints'] = joints
+
+ return records
+
+
+@register_keypointop
+class TopDownEvalAffine(object):
+ """apply affine transform to image and coords
+
+ Args:
+ trainsize (list): [w, h], the standard size used to train
+ use_udp (bool): whether to use Unbiased Data Processing.
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the image and coords after tranformed
+
+ """
+
+ def __init__(self, trainsize, use_udp=False):
+ self.trainsize = trainsize
+ self.use_udp = use_udp
+
+ def __call__(self, records):
+ image = records['image']
+ rot = 0
+ imshape = records['im_shape'][::-1]
+ center = imshape / 2.
+ scale = imshape
+
+ if self.use_udp:
+ trans = get_warp_matrix(
+ rot, center * 2.0,
+ [self.trainsize[0] - 1.0, self.trainsize[1] - 1.0], scale)
+ image = cv2.warpAffine(
+ image,
+ trans, (int(self.trainsize[0]), int(self.trainsize[1])),
+ flags=cv2.INTER_LINEAR)
+ else:
+ trans = get_affine_transform(center, scale, rot, self.trainsize)
+ image = cv2.warpAffine(
+ image,
+ trans, (int(self.trainsize[0]), int(self.trainsize[1])),
+ flags=cv2.INTER_LINEAR)
+ records['image'] = image
+
+ return records
+
+
+@register_keypointop
+class ToHeatmapsTopDown(object):
+ """to generate the gaussin heatmaps of keypoint for heatmap loss
+
+ Args:
+ hmsize (list): [w, h] output heatmap's size
+ sigma (float): the std of gaussin kernel genereted
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the heatmaps used to heatmaploss
+
+ """
+
+ def __init__(self, hmsize, sigma):
+ super(ToHeatmapsTopDown, self).__init__()
+ self.hmsize = np.array(hmsize)
+ self.sigma = sigma
+
+ def __call__(self, records):
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ num_joints = joints.shape[0]
+ image_size = np.array(
+ [records['image'].shape[1], records['image'].shape[0]])
+ target_weight = np.ones((num_joints, 1), dtype=np.float32)
+ target_weight[:, 0] = joints_vis[:, 0]
+ target = np.zeros(
+ (num_joints, self.hmsize[1], self.hmsize[0]), dtype=np.float32)
+ tmp_size = self.sigma * 3
+ feat_stride = image_size / self.hmsize
+ for joint_id in range(num_joints):
+ mu_x = int(joints[joint_id][0] + 0.5) / feat_stride[0]
+ mu_y = int(joints[joint_id][1] + 0.5) / feat_stride[1]
+ # Check that any part of the gaussian is in-bounds
+ ul = [int(mu_x - tmp_size), int(mu_y - tmp_size)]
+ br = [int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)]
+ if ul[0] >= self.hmsize[0] or ul[1] >= self.hmsize[1] or br[
+ 0] < 0 or br[1] < 0:
+ # If not, just return the image as is
+ target_weight[joint_id] = 0
+ continue
+ # # Generate gaussian
+ size = 2 * tmp_size + 1
+ x = np.arange(0, size, 1, np.float32)
+ y = x[:, np.newaxis]
+ x0 = y0 = size // 2
+ # The gaussian is not normalized, we want the center value to equal 1
+ g = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * self.sigma**2))
+
+ # Usable gaussian range
+ g_x = max(0, -ul[0]), min(br[0], self.hmsize[0]) - ul[0]
+ g_y = max(0, -ul[1]), min(br[1], self.hmsize[1]) - ul[1]
+ # Image range
+ img_x = max(0, ul[0]), min(br[0], self.hmsize[0])
+ img_y = max(0, ul[1]), min(br[1], self.hmsize[1])
+
+ v = target_weight[joint_id]
+ if v > 0.5:
+ target[joint_id][img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[
+ 0]:g_y[1], g_x[0]:g_x[1]]
+ records['target'] = target
+ records['target_weight'] = target_weight
+ del records['joints'], records['joints_vis']
+
+ return records
+
+
+@register_keypointop
+class ToHeatmapsTopDown_DARK(object):
+ """to generate the gaussin heatmaps of keypoint for heatmap loss
+
+ Args:
+ hmsize (list): [w, h] output heatmap's size
+ sigma (float): the std of gaussin kernel genereted
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the heatmaps used to heatmaploss
+
+ """
+
+ def __init__(self, hmsize, sigma):
+ super(ToHeatmapsTopDown_DARK, self).__init__()
+ self.hmsize = np.array(hmsize)
+ self.sigma = sigma
+
+ def __call__(self, records):
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ num_joints = joints.shape[0]
+ image_size = np.array(
+ [records['image'].shape[1], records['image'].shape[0]])
+ target_weight = np.ones((num_joints, 1), dtype=np.float32)
+ target_weight[:, 0] = joints_vis[:, 0]
+ target = np.zeros(
+ (num_joints, self.hmsize[1], self.hmsize[0]), dtype=np.float32)
+ tmp_size = self.sigma * 3
+ feat_stride = image_size / self.hmsize
+ for joint_id in range(num_joints):
+ mu_x = joints[joint_id][0] / feat_stride[0]
+ mu_y = joints[joint_id][1] / feat_stride[1]
+ # Check that any part of the gaussian is in-bounds
+ ul = [int(mu_x - tmp_size), int(mu_y - tmp_size)]
+ br = [int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)]
+ if ul[0] >= self.hmsize[0] or ul[1] >= self.hmsize[1] or br[
+ 0] < 0 or br[1] < 0:
+ # If not, just return the image as is
+ target_weight[joint_id] = 0
+ continue
+
+ x = np.arange(0, self.hmsize[0], 1, np.float32)
+ y = np.arange(0, self.hmsize[1], 1, np.float32)
+ y = y[:, np.newaxis]
+
+ v = target_weight[joint_id]
+ if v > 0.5:
+ target[joint_id] = np.exp(-(
+ (x - mu_x)**2 + (y - mu_y)**2) / (2 * self.sigma**2))
+ records['target'] = target
+ records['target_weight'] = target_weight
+ del records['joints'], records['joints_vis']
+
+ return records
+
+
+@register_keypointop
+class ToHeatmapsTopDown_UDP(object):
+ """to generate the gaussian heatmaps of keypoint for heatmap loss.
+ ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing
+ for Human Pose Estimation (CVPR 2020).
+
+ Args:
+ hmsize (list): [w, h] output heatmap's size
+ sigma (float): the std of gaussin kernel genereted
+ records(dict): the dict contained the image and coords
+
+ Returns:
+ records (dict): contain the heatmaps used to heatmaploss
+ """
+
+ def __init__(self, hmsize, sigma):
+ super(ToHeatmapsTopDown_UDP, self).__init__()
+ self.hmsize = np.array(hmsize)
+ self.sigma = sigma
+
+ def __call__(self, records):
+ joints = records['joints']
+ joints_vis = records['joints_vis']
+ num_joints = joints.shape[0]
+ image_size = np.array(
+ [records['image'].shape[1], records['image'].shape[0]])
+ target_weight = np.ones((num_joints, 1), dtype=np.float32)
+ target_weight[:, 0] = joints_vis[:, 0]
+ target = np.zeros(
+ (num_joints, self.hmsize[1], self.hmsize[0]), dtype=np.float32)
+ tmp_size = self.sigma * 3
+ size = 2 * tmp_size + 1
+ x = np.arange(0, size, 1, np.float32)
+ y = x[:, None]
+ feat_stride = (image_size - 1.0) / (self.hmsize - 1.0)
+ for joint_id in range(num_joints):
+ mu_x = int(joints[joint_id][0] / feat_stride[0] + 0.5)
+ mu_y = int(joints[joint_id][1] / feat_stride[1] + 0.5)
+ # Check that any part of the gaussian is in-bounds
+ ul = [int(mu_x - tmp_size), int(mu_y - tmp_size)]
+ br = [int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)]
+ if ul[0] >= self.hmsize[0] or ul[1] >= self.hmsize[1] or br[
+ 0] < 0 or br[1] < 0:
+ # If not, just return the image as is
+ target_weight[joint_id] = 0
+ continue
+
+ mu_x_ac = joints[joint_id][0] / feat_stride[0]
+ mu_y_ac = joints[joint_id][1] / feat_stride[1]
+ x0 = y0 = size // 2
+ x0 += mu_x_ac - mu_x
+ y0 += mu_y_ac - mu_y
+ g = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * self.sigma**2))
+ # Usable gaussian range
+ g_x = max(0, -ul[0]), min(br[0], self.hmsize[0]) - ul[0]
+ g_y = max(0, -ul[1]), min(br[1], self.hmsize[1]) - ul[1]
+ # Image range
+ img_x = max(0, ul[0]), min(br[0], self.hmsize[0])
+ img_y = max(0, ul[1]), min(br[1], self.hmsize[1])
+
+ v = target_weight[joint_id]
+ if v > 0.5:
+ target[joint_id][img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[
+ 0]:g_y[1], g_x[0]:g_x[1]]
+ records['target'] = target
+ records['target_weight'] = target_weight
+ del records['joints'], records['joints_vis']
+
+ return records
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/mot_operators.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/mot_operators.py
new file mode 100644
index 000000000..ef7d7be45
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/mot_operators.py
@@ -0,0 +1,627 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+from numbers import Integral
+
+import cv2
+import copy
+import numpy as np
+import random
+import math
+
+from .operators import BaseOperator, register_op
+from .batch_operators import Gt2TTFTarget
+from ppdet.modeling.bbox_utils import bbox_iou_np_expand
+from ppdet.utils.logger import setup_logger
+from .op_helper import gaussian_radius
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'RGBReverse', 'LetterBoxResize', 'MOTRandomAffine', 'Gt2JDETargetThres',
+ 'Gt2JDETargetMax', 'Gt2FairMOTTarget'
+]
+
+
+@register_op
+class RGBReverse(BaseOperator):
+ """RGB to BGR, or BGR to RGB, sensitive to MOTRandomAffine
+ """
+
+ def __init__(self):
+ super(RGBReverse, self).__init__()
+
+ def apply(self, sample, context=None):
+ im = sample['image']
+ sample['image'] = np.ascontiguousarray(im[:, :, ::-1])
+ return sample
+
+
+@register_op
+class LetterBoxResize(BaseOperator):
+ def __init__(self, target_size):
+ """
+ Resize image to target size, convert normalized xywh to pixel xyxy
+ format ([x_center, y_center, width, height] -> [x0, y0, x1, y1]).
+ Args:
+ target_size (int|list): image target size.
+ """
+ super(LetterBoxResize, self).__init__()
+ if not isinstance(target_size, (Integral, Sequence)):
+ raise TypeError(
+ "Type of target_size is invalid. Must be Integer or List or Tuple, now is {}".
+ format(type(target_size)))
+ if isinstance(target_size, Integral):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+
+ def apply_image(self, img, height, width, color=(127.5, 127.5, 127.5)):
+        # letterbox: resize a rectangular image into a padded rectangle
+ shape = img.shape[:2] # [height, width]
+ ratio_h = float(height) / shape[0]
+ ratio_w = float(width) / shape[1]
+ ratio = min(ratio_h, ratio_w)
+ new_shape = (round(shape[1] * ratio),
+ round(shape[0] * ratio)) # [width, height]
+ padw = (width - new_shape[0]) / 2
+ padh = (height - new_shape[1]) / 2
+ top, bottom = round(padh - 0.1), round(padh + 0.1)
+ left, right = round(padw - 0.1), round(padw + 0.1)
+
+ img = cv2.resize(
+ img, new_shape, interpolation=cv2.INTER_AREA) # resized, no border
+ img = cv2.copyMakeBorder(
+ img, top, bottom, left, right, cv2.BORDER_CONSTANT,
+ value=color) # padded rectangular
+ return img, ratio, padw, padh
+
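+    # Worked example (illustrative): letterboxing a 480x640 (HxW) image to
+    # 608x608 uses ratio = min(608/480, 608/640) = 0.95, resizes to 456x608,
+    # then pads 76 gray rows on top and bottom (padw = 0, padh = 76).
+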
+ def apply_bbox(self, bbox0, h, w, ratio, padw, padh):
+ bboxes = bbox0.copy()
+ bboxes[:, 0] = ratio * w * (bbox0[:, 0] - bbox0[:, 2] / 2) + padw
+ bboxes[:, 1] = ratio * h * (bbox0[:, 1] - bbox0[:, 3] / 2) + padh
+ bboxes[:, 2] = ratio * w * (bbox0[:, 0] + bbox0[:, 2] / 2) + padw
+ bboxes[:, 3] = ratio * h * (bbox0[:, 1] + bbox0[:, 3] / 2) + padh
+ return bboxes
+
+ def apply(self, sample, context=None):
+ """ Resize the image numpy.
+ """
+ im = sample['image']
+ h, w = sample['im_shape']
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image type is not numpy.".format(self))
+ if len(im.shape) != 3:
+ from PIL import UnidentifiedImageError
+ raise UnidentifiedImageError(
+ '{}: image is not 3-dimensional.'.format(self))
+
+ # apply image
+ height, width = self.target_size
+ img, ratio, padw, padh = self.apply_image(
+ im, height=height, width=width)
+
+ sample['image'] = img
+ new_shape = (round(h * ratio), round(w * ratio))
+ sample['im_shape'] = np.asarray(new_shape, dtype=np.float32)
+ sample['scale_factor'] = np.asarray([ratio, ratio], dtype=np.float32)
+
+ # apply bbox
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'], h, w, ratio,
+ padw, padh)
+ return sample
+
+
+@register_op
+class MOTRandomAffine(BaseOperator):
+ """
+ Affine transform to image and coords to achieve the rotate, scale and
+ shift effect for training image.
+
+ Args:
+ degrees (list[2]): the rotate range to apply, transform range is [min, max]
+ translate (list[2]): the translate range to apply, transform range is [min, max]
+ scale (list[2]): the scale range to apply, transform range is [min, max]
+ shear (list[2]): the shear range to apply, transform range is [min, max]
+        borderValue (list[3]): value used for the constant border when applying
+            the perspective transformation
+ reject_outside (bool): reject warped bounding bboxes outside of image
+
+ Returns:
+        records (dict): contains the image and coords after the transform
+
+ """
+
+ def __init__(self,
+ degrees=(-5, 5),
+ translate=(0.10, 0.10),
+ scale=(0.50, 1.20),
+ shear=(-2, 2),
+ borderValue=(127.5, 127.5, 127.5),
+ reject_outside=True):
+ super(MOTRandomAffine, self).__init__()
+ self.degrees = degrees
+ self.translate = translate
+ self.scale = scale
+ self.shear = shear
+ self.borderValue = borderValue
+ self.reject_outside = reject_outside
+
+ def apply(self, sample, context=None):
+ # https://medium.com/uruvideo/dataset-augmentation-with-random-homographies-a8f4b44830d4
+ border = 0 # width of added border (optional)
+
+ img = sample['image']
+ height, width = img.shape[0], img.shape[1]
+
+ # Rotation and Scale
+ R = np.eye(3)
+ a = random.random() * (self.degrees[1] - self.degrees[0]
+ ) + self.degrees[0]
+ s = random.random() * (self.scale[1] - self.scale[0]) + self.scale[0]
+ R[:2] = cv2.getRotationMatrix2D(
+ angle=a, center=(width / 2, height / 2), scale=s)
+
+ # Translation
+ T = np.eye(3)
+ T[0, 2] = (
+ random.random() * 2 - 1
+ ) * self.translate[0] * height + border # x translation (pixels)
+ T[1, 2] = (
+ random.random() * 2 - 1
+ ) * self.translate[1] * width + border # y translation (pixels)
+
+ # Shear
+ S = np.eye(3)
+ S[0, 1] = math.tan((random.random() *
+ (self.shear[1] - self.shear[0]) + self.shear[0]) *
+ math.pi / 180) # x shear (deg)
+ S[1, 0] = math.tan((random.random() *
+ (self.shear[1] - self.shear[0]) + self.shear[0]) *
+ math.pi / 180) # y shear (deg)
+
+        M = S @ T @ R  # combined affine matrix. ORDER IS IMPORTANT HERE!!
+ imw = cv2.warpPerspective(
+ img,
+ M,
+ dsize=(width, height),
+ flags=cv2.INTER_LINEAR,
+ borderValue=self.borderValue) # BGR order borderValue
+
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ targets = sample['gt_bbox']
+ n = targets.shape[0]
+ points = targets.copy()
+ area0 = (points[:, 2] - points[:, 0]) * (
+ points[:, 3] - points[:, 1])
+
+ # warp points
+ xy = np.ones((n * 4, 3))
+ xy[:, :2] = points[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(
+ n * 4, 2) # x1y1, x2y2, x1y2, x2y1
+            xy = (xy @ M.T)[:, :2].reshape(n, 8)
+
+ # create new boxes
+ x = xy[:, [0, 2, 4, 6]]
+ y = xy[:, [1, 3, 5, 7]]
+ xy = np.concatenate(
+ (x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T
+
+ # apply angle-based reduction
+ radians = a * math.pi / 180
+ reduction = max(abs(math.sin(radians)), abs(math.cos(radians)))**0.5
+ x = (xy[:, 2] + xy[:, 0]) / 2
+ y = (xy[:, 3] + xy[:, 1]) / 2
+ w = (xy[:, 2] - xy[:, 0]) * reduction
+ h = (xy[:, 3] - xy[:, 1]) * reduction
+ xy = np.concatenate(
+ (x - w / 2, y - h / 2, x + w / 2, y + h / 2)).reshape(4, n).T
+
+ # reject warped points outside of image
+ if self.reject_outside:
+ np.clip(xy[:, 0], 0, width, out=xy[:, 0])
+ np.clip(xy[:, 2], 0, width, out=xy[:, 2])
+ np.clip(xy[:, 1], 0, height, out=xy[:, 1])
+ np.clip(xy[:, 3], 0, height, out=xy[:, 3])
+ w = xy[:, 2] - xy[:, 0]
+ h = xy[:, 3] - xy[:, 1]
+ area = w * h
+ ar = np.maximum(w / (h + 1e-16), h / (w + 1e-16))
+ i = (w > 4) & (h > 4) & (area / (area0 + 1e-16) > 0.1) & (ar < 10)
+
+ if sum(i) > 0:
+ sample['gt_bbox'] = xy[i].astype(sample['gt_bbox'].dtype)
+ sample['gt_class'] = sample['gt_class'][i]
+ if 'difficult' in sample:
+ sample['difficult'] = sample['difficult'][i]
+ if 'gt_ide' in sample:
+ sample['gt_ide'] = sample['gt_ide'][i]
+ if 'is_crowd' in sample:
+ sample['is_crowd'] = sample['is_crowd'][i]
+ sample['image'] = imw
+ return sample
+ else:
+ return sample
+
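+# A worked sketch of the corner-warping math above (illustrative values,
+# not part of the pipeline): with a pure +5 px x-translation (a = 0, s = 1,
+# no shear), a box [10, 10, 20, 20] has all four corners mapped to x + 5,
+# and the per-axis min/max rebuild yields [15, 10, 25, 20]. Because the
+# sampled angle a is 0, the reduction factor
+# max(|sin(a)|, |cos(a)|) ** 0.5 is 1, so width and height are preserved.
+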
+
+@register_op
+class Gt2JDETargetThres(BaseOperator):
+ __shared__ = ['num_classes']
+ """
+    Generate JDE targets from ground truth data when training
+ Args:
+ anchors (list): anchors of JDE model
+ anchor_masks (list): anchor_masks of JDE model
+ downsample_ratios (list): downsample ratios of JDE model
+        ide_thresh (float): identity threshold; anchors with a higher IoU are assigned an identity
+        fg_thresh (float): foreground threshold; anchors with a higher IoU count as foreground
+        bg_thresh (float): background threshold; anchors with a lower IoU count as background
+ num_classes (int): number of classes
+ """
+
+ def __init__(self,
+ anchors,
+ anchor_masks,
+ downsample_ratios,
+ ide_thresh=0.5,
+ fg_thresh=0.5,
+ bg_thresh=0.4,
+ num_classes=1):
+ super(Gt2JDETargetThres, self).__init__()
+ self.anchors = anchors
+ self.anchor_masks = anchor_masks
+ self.downsample_ratios = downsample_ratios
+ self.ide_thresh = ide_thresh
+ self.fg_thresh = fg_thresh
+ self.bg_thresh = bg_thresh
+ self.num_classes = num_classes
+
+ def generate_anchor(self, nGh, nGw, anchor_hw):
+ nA = len(anchor_hw)
+ yy, xx = np.meshgrid(np.arange(nGh), np.arange(nGw))
+
+ mesh = np.stack([xx.T, yy.T], axis=0) # [2, nGh, nGw]
+ mesh = np.repeat(mesh[None, :], nA, axis=0) # [nA, 2, nGh, nGw]
+
+ anchor_offset_mesh = anchor_hw[:, :, None][:, :, :, None]
+ anchor_offset_mesh = np.repeat(anchor_offset_mesh, nGh, axis=-2)
+ anchor_offset_mesh = np.repeat(anchor_offset_mesh, nGw, axis=-1)
+
+ anchor_mesh = np.concatenate(
+ [mesh, anchor_offset_mesh], axis=1) # [nA, 4, nGh, nGw]
+ return anchor_mesh
+
+ def encode_delta(self, gt_box_list, fg_anchor_list):
+ px, py, pw, ph = fg_anchor_list[:, 0], fg_anchor_list[:,1], \
+ fg_anchor_list[:, 2], fg_anchor_list[:,3]
+ gx, gy, gw, gh = gt_box_list[:, 0], gt_box_list[:, 1], \
+ gt_box_list[:, 2], gt_box_list[:, 3]
+ dx = (gx - px) / pw
+ dy = (gy - py) / ph
+ dw = np.log(gw / pw)
+ dh = np.log(gh / ph)
+ return np.stack([dx, dy, dw, dh], axis=1)
+
+ def pad_box(self, sample, num_max):
+ assert 'gt_bbox' in sample
+ bbox = sample['gt_bbox']
+ gt_num = len(bbox)
+ pad_bbox = np.zeros((num_max, 4), dtype=np.float32)
+ if gt_num > 0:
+ pad_bbox[:gt_num, :] = bbox[:gt_num, :]
+ sample['gt_bbox'] = pad_bbox
+ if 'gt_score' in sample:
+ pad_score = np.zeros((num_max, ), dtype=np.float32)
+ if gt_num > 0:
+ pad_score[:gt_num] = sample['gt_score'][:gt_num, 0]
+ sample['gt_score'] = pad_score
+ if 'difficult' in sample:
+ pad_diff = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_diff[:gt_num] = sample['difficult'][:gt_num, 0]
+ sample['difficult'] = pad_diff
+ if 'is_crowd' in sample:
+ pad_crowd = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_crowd[:gt_num] = sample['is_crowd'][:gt_num, 0]
+ sample['is_crowd'] = pad_crowd
+ if 'gt_ide' in sample:
+ pad_ide = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_ide[:gt_num] = sample['gt_ide'][:gt_num, 0]
+ sample['gt_ide'] = pad_ide
+ return sample
+
+ def __call__(self, samples, context=None):
+ assert len(self.anchor_masks) == len(self.downsample_ratios), \
+ "anchor_masks', and 'downsample_ratios' should have same length."
+ h, w = samples[0]['image'].shape[1:3]
+
+ num_max = 0
+ for sample in samples:
+ num_max = max(num_max, len(sample['gt_bbox']))
+
+ for sample in samples:
+ gt_bbox = sample['gt_bbox']
+ gt_ide = sample['gt_ide']
+ for i, (anchor_hw, downsample_ratio
+ ) in enumerate(zip(self.anchors, self.downsample_ratios)):
+ anchor_hw = np.array(
+ anchor_hw, dtype=np.float32) / downsample_ratio
+ nA = len(anchor_hw)
+ nGh, nGw = int(h / downsample_ratio), int(w / downsample_ratio)
+ tbox = np.zeros((nA, nGh, nGw, 4), dtype=np.float32)
+ tconf = np.zeros((nA, nGh, nGw), dtype=np.float32)
+ tid = -np.ones((nA, nGh, nGw, 1), dtype=np.float32)
+
+ gxy, gwh = gt_bbox[:, 0:2].copy(), gt_bbox[:, 2:4].copy()
+ gxy[:, 0] = gxy[:, 0] * nGw
+ gxy[:, 1] = gxy[:, 1] * nGh
+ gwh[:, 0] = gwh[:, 0] * nGw
+ gwh[:, 1] = gwh[:, 1] * nGh
+ gxy[:, 0] = np.clip(gxy[:, 0], 0, nGw - 1)
+ gxy[:, 1] = np.clip(gxy[:, 1], 0, nGh - 1)
+ tboxes = np.concatenate([gxy, gwh], axis=1)
+
+ anchor_mesh = self.generate_anchor(nGh, nGw, anchor_hw)
+
+ anchor_list = np.transpose(anchor_mesh,
+ (0, 2, 3, 1)).reshape(-1, 4)
+ iou_pdist = bbox_iou_np_expand(
+ anchor_list, tboxes, x1y1x2y2=False)
+
+ iou_max = np.max(iou_pdist, axis=1)
+ max_gt_index = np.argmax(iou_pdist, axis=1)
+
+ iou_map = iou_max.reshape(nA, nGh, nGw)
+ gt_index_map = max_gt_index.reshape(nA, nGh, nGw)
+
+ id_index = iou_map > self.ide_thresh
+ fg_index = iou_map > self.fg_thresh
+ bg_index = iou_map < self.bg_thresh
+ ign_index = (iou_map < self.fg_thresh) * (
+ iou_map > self.bg_thresh)
+ tconf[fg_index] = 1
+ tconf[bg_index] = 0
+ tconf[ign_index] = -1
+
+ gt_index = gt_index_map[fg_index]
+ gt_box_list = tboxes[gt_index]
+ gt_id_list = gt_ide[gt_index_map[id_index]]
+
+ if np.sum(fg_index) > 0:
+ tid[id_index] = gt_id_list
+
+ fg_anchor_list = anchor_list.reshape(nA, nGh, nGw,
+ 4)[fg_index]
+ delta_target = self.encode_delta(gt_box_list,
+ fg_anchor_list)
+ tbox[fg_index] = delta_target
+
+ sample['tbox{}'.format(i)] = tbox
+ sample['tconf{}'.format(i)] = tconf
+ sample['tide{}'.format(i)] = tid
+ sample.pop('gt_class')
+ sample = self.pad_box(sample, num_max)
+ return samples
+
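+# Worked example of Gt2JDETargetThres.encode_delta with hypothetical
+# numbers: a foreground anchor (px, py, pw, ph) = (10, 10, 4, 4) matched to
+# a ground truth box (gx, gy, gw, gh) = (12, 11, 8, 4) yields the targets
+# dx = (12 - 10) / 4 = 0.5, dy = (11 - 10) / 4 = 0.25,
+# dw = log(8 / 4) ~= 0.693, dh = log(4 / 4) = 0.
+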
+
+@register_op
+class Gt2JDETargetMax(BaseOperator):
+ __shared__ = ['num_classes']
+ """
+    Generate JDE targets from ground truth data when evaluating
+ Args:
+ anchors (list): anchors of JDE model
+ anchor_masks (list): anchor_masks of JDE model
+ downsample_ratios (list): downsample ratios of JDE model
+        max_iou_thresh (float): IoU threshold for selecting high-quality anchors
+ num_classes (int): number of classes
+ """
+
+ def __init__(self,
+ anchors,
+ anchor_masks,
+ downsample_ratios,
+ max_iou_thresh=0.60,
+ num_classes=1):
+ super(Gt2JDETargetMax, self).__init__()
+ self.anchors = anchors
+ self.anchor_masks = anchor_masks
+ self.downsample_ratios = downsample_ratios
+ self.max_iou_thresh = max_iou_thresh
+ self.num_classes = num_classes
+
+ def __call__(self, samples, context=None):
+ assert len(self.anchor_masks) == len(self.downsample_ratios), \
+ "anchor_masks', and 'downsample_ratios' should have same length."
+ h, w = samples[0]['image'].shape[1:3]
+ for sample in samples:
+ gt_bbox = sample['gt_bbox']
+ gt_ide = sample['gt_ide']
+ for i, (anchor_hw, downsample_ratio
+ ) in enumerate(zip(self.anchors, self.downsample_ratios)):
+ anchor_hw = np.array(
+ anchor_hw, dtype=np.float32) / downsample_ratio
+ nA = len(anchor_hw)
+ nGh, nGw = int(h / downsample_ratio), int(w / downsample_ratio)
+ tbox = np.zeros((nA, nGh, nGw, 4), dtype=np.float32)
+ tconf = np.zeros((nA, nGh, nGw), dtype=np.float32)
+ tid = -np.ones((nA, nGh, nGw, 1), dtype=np.float32)
+
+ gxy, gwh = gt_bbox[:, 0:2].copy(), gt_bbox[:, 2:4].copy()
+ gxy[:, 0] = gxy[:, 0] * nGw
+ gxy[:, 1] = gxy[:, 1] * nGh
+ gwh[:, 0] = gwh[:, 0] * nGw
+ gwh[:, 1] = gwh[:, 1] * nGh
+ gi = np.clip(gxy[:, 0], 0, nGw - 1).astype(int)
+ gj = np.clip(gxy[:, 1], 0, nGh - 1).astype(int)
+
+ # iou of targets-anchors (using wh only)
+ box1 = gwh
+ box2 = anchor_hw[:, None, :]
+ inter_area = np.minimum(box1, box2).prod(2)
+ iou = inter_area / (
+ box1.prod(1) + box2.prod(2) - inter_area + 1e-16)
+
+ # Select best iou_pred and anchor
+ iou_best = iou.max(0) # best anchor [0-2] for each target
+ a = np.argmax(iou, axis=0)
+
+ # Select best unique target-anchor combinations
+ iou_order = np.argsort(-iou_best) # best to worst
+
+ # Unique anchor selection
+ u = np.stack((gi, gj, a), 0)[:, iou_order]
+ _, first_unique = np.unique(u, axis=1, return_index=True)
+ mask = iou_order[first_unique]
+ # best anchor must share significant commonality (iou) with target
+ # TODO: examine arbitrary threshold
+ idx = mask[iou_best[mask] > self.max_iou_thresh]
+
+ if len(idx) > 0:
+ a_i, gj_i, gi_i = a[idx], gj[idx], gi[idx]
+ t_box = gt_bbox[idx]
+ t_id = gt_ide[idx]
+ if len(t_box.shape) == 1:
+ t_box = t_box.reshape(1, 4)
+
+ gxy, gwh = t_box[:, 0:2].copy(), t_box[:, 2:4].copy()
+ gxy[:, 0] = gxy[:, 0] * nGw
+ gxy[:, 1] = gxy[:, 1] * nGh
+ gwh[:, 0] = gwh[:, 0] * nGw
+ gwh[:, 1] = gwh[:, 1] * nGh
+
+ # XY coordinates
+ tbox[:, :, :, 0:2][a_i, gj_i, gi_i] = gxy - gxy.astype(int)
+ # Width and height in yolo method
+ tbox[:, :, :, 2:4][a_i, gj_i, gi_i] = np.log(gwh /
+ anchor_hw[a_i])
+ tconf[a_i, gj_i, gi_i] = 1
+ tid[a_i, gj_i, gi_i] = t_id
+
+ sample['tbox{}'.format(i)] = tbox
+ sample['tconf{}'.format(i)] = tconf
+ sample['tide{}'.format(i)] = tid
+
+
+class Gt2FairMOTTarget(Gt2TTFTarget):
+ __shared__ = ['num_classes']
+ """
+    Generate FairMOT targets from ground truth data.
+    Differences between Gt2FairMOTTarget and Gt2TTFTarget are:
+    1. the Gaussian kernel radius used to generate a heatmap.
+    2. the targets needed during training.
+
+    Args:
+        num_classes(int): the number of classes.
+        down_ratio(int): the down ratio from images to heatmap, 4 by default.
+        max_objs(int): the maximum number of ground truth objects in an image, 500 by default.
+ """
+
+ def __init__(self, num_classes=1, down_ratio=4, max_objs=500):
+ super(Gt2TTFTarget, self).__init__()
+ self.down_ratio = down_ratio
+ self.num_classes = num_classes
+ self.max_objs = max_objs
+
+ def __call__(self, samples, context=None):
+ for b_id, sample in enumerate(samples):
+ output_h = sample['image'].shape[1] // self.down_ratio
+ output_w = sample['image'].shape[2] // self.down_ratio
+
+ heatmap = np.zeros(
+ (self.num_classes, output_h, output_w), dtype='float32')
+ bbox_size = np.zeros((self.max_objs, 4), dtype=np.float32)
+ center_offset = np.zeros((self.max_objs, 2), dtype=np.float32)
+ index = np.zeros((self.max_objs, ), dtype=np.int64)
+ index_mask = np.zeros((self.max_objs, ), dtype=np.int32)
+ reid = np.zeros((self.max_objs, ), dtype=np.int64)
+ bbox_xys = np.zeros((self.max_objs, 4), dtype=np.float32)
+ if self.num_classes > 1:
+ # each category corresponds to a set of track ids
+ cls_tr_ids = np.zeros(
+ (self.num_classes, output_h, output_w), dtype=np.int64)
+ cls_id_map = np.full((output_h, output_w), -1, dtype=np.int64)
+
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ gt_ide = sample['gt_ide']
+
+ for k in range(len(gt_bbox)):
+ cls_id = gt_class[k][0]
+ bbox = gt_bbox[k]
+ ide = gt_ide[k][0]
+ bbox[[0, 2]] = bbox[[0, 2]] * output_w
+ bbox[[1, 3]] = bbox[[1, 3]] * output_h
+ bbox_amodal = copy.deepcopy(bbox)
+ bbox_amodal[0] = bbox_amodal[0] - bbox_amodal[2] / 2.
+ bbox_amodal[1] = bbox_amodal[1] - bbox_amodal[3] / 2.
+ bbox_amodal[2] = bbox_amodal[0] + bbox_amodal[2]
+ bbox_amodal[3] = bbox_amodal[1] + bbox_amodal[3]
+ bbox[0] = np.clip(bbox[0], 0, output_w - 1)
+ bbox[1] = np.clip(bbox[1], 0, output_h - 1)
+ h = bbox[3]
+ w = bbox[2]
+
+ bbox_xy = copy.deepcopy(bbox)
+ bbox_xy[0] = bbox_xy[0] - bbox_xy[2] / 2
+ bbox_xy[1] = bbox_xy[1] - bbox_xy[3] / 2
+ bbox_xy[2] = bbox_xy[0] + bbox_xy[2]
+ bbox_xy[3] = bbox_xy[1] + bbox_xy[3]
+
+ if h > 0 and w > 0:
+ radius = gaussian_radius((math.ceil(h), math.ceil(w)), 0.7)
+ radius = max(0, int(radius))
+ ct = np.array([bbox[0], bbox[1]], dtype=np.float32)
+ ct_int = ct.astype(np.int32)
+ self.draw_truncate_gaussian(heatmap[cls_id], ct_int, radius,
+ radius)
+ bbox_size[k] = ct[0] - bbox_amodal[0], ct[1] - bbox_amodal[1], \
+ bbox_amodal[2] - ct[0], bbox_amodal[3] - ct[1]
+
+ index[k] = ct_int[1] * output_w + ct_int[0]
+ center_offset[k] = ct - ct_int
+ index_mask[k] = 1
+ reid[k] = ide
+ bbox_xys[k] = bbox_xy
+ if self.num_classes > 1:
+ cls_id_map[ct_int[1], ct_int[0]] = cls_id
+ cls_tr_ids[cls_id][ct_int[1]][ct_int[0]] = ide - 1
+                        # track ids start from 0
+
+ sample['heatmap'] = heatmap
+ sample['index'] = index
+ sample['offset'] = center_offset
+ sample['size'] = bbox_size
+ sample['index_mask'] = index_mask
+ sample['reid'] = reid
+ if self.num_classes > 1:
+ sample['cls_id_map'] = cls_id_map
+ sample['cls_tr_ids'] = cls_tr_ids
+ sample['bbox_xys'] = bbox_xys
+ sample.pop('is_crowd', None)
+ sample.pop('difficult', None)
+ sample.pop('gt_class', None)
+ sample.pop('gt_bbox', None)
+ sample.pop('gt_score', None)
+ sample.pop('gt_ide', None)
+ return samples
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/op_helper.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/op_helper.py
new file mode 100644
index 000000000..6c400306d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/op_helper.py
@@ -0,0 +1,494 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# this file contains helper methods for BBOX processing
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import random
+import math
+import cv2
+
+
+def meet_emit_constraint(src_bbox, sample_bbox):
+ center_x = (src_bbox[2] + src_bbox[0]) / 2
+ center_y = (src_bbox[3] + src_bbox[1]) / 2
+ if center_x >= sample_bbox[0] and \
+ center_x <= sample_bbox[2] and \
+ center_y >= sample_bbox[1] and \
+ center_y <= sample_bbox[3]:
+ return True
+ return False
+
+
+def clip_bbox(src_bbox):
+ src_bbox[0] = max(min(src_bbox[0], 1.0), 0.0)
+ src_bbox[1] = max(min(src_bbox[1], 1.0), 0.0)
+ src_bbox[2] = max(min(src_bbox[2], 1.0), 0.0)
+ src_bbox[3] = max(min(src_bbox[3], 1.0), 0.0)
+ return src_bbox
+
+
+def bbox_area(src_bbox):
+ if src_bbox[2] < src_bbox[0] or src_bbox[3] < src_bbox[1]:
+ return 0.
+ else:
+ width = src_bbox[2] - src_bbox[0]
+ height = src_bbox[3] - src_bbox[1]
+ return width * height
+
+
+def is_overlap(object_bbox, sample_bbox):
+ if object_bbox[0] >= sample_bbox[2] or \
+ object_bbox[2] <= sample_bbox[0] or \
+ object_bbox[1] >= sample_bbox[3] or \
+ object_bbox[3] <= sample_bbox[1]:
+ return False
+ else:
+ return True
+
+
+def filter_and_process(sample_bbox, bboxes, labels, scores=None,
+ keypoints=None):
+ new_bboxes = []
+ new_labels = []
+ new_scores = []
+ new_keypoints = []
+ new_kp_ignore = []
+ for i in range(len(bboxes)):
+ new_bbox = [0, 0, 0, 0]
+ obj_bbox = [bboxes[i][0], bboxes[i][1], bboxes[i][2], bboxes[i][3]]
+ if not meet_emit_constraint(obj_bbox, sample_bbox):
+ continue
+ if not is_overlap(obj_bbox, sample_bbox):
+ continue
+ sample_width = sample_bbox[2] - sample_bbox[0]
+ sample_height = sample_bbox[3] - sample_bbox[1]
+ new_bbox[0] = (obj_bbox[0] - sample_bbox[0]) / sample_width
+ new_bbox[1] = (obj_bbox[1] - sample_bbox[1]) / sample_height
+ new_bbox[2] = (obj_bbox[2] - sample_bbox[0]) / sample_width
+ new_bbox[3] = (obj_bbox[3] - sample_bbox[1]) / sample_height
+ new_bbox = clip_bbox(new_bbox)
+ if bbox_area(new_bbox) > 0:
+ new_bboxes.append(new_bbox)
+ new_labels.append([labels[i][0]])
+ if scores is not None:
+ new_scores.append([scores[i][0]])
+ if keypoints is not None:
+ sample_keypoint = keypoints[0][i]
+ for j in range(len(sample_keypoint)):
+ kp_len = sample_height if j % 2 else sample_width
+ sample_coord = sample_bbox[1] if j % 2 else sample_bbox[0]
+ sample_keypoint[j] = (
+ sample_keypoint[j] - sample_coord) / kp_len
+ sample_keypoint[j] = max(min(sample_keypoint[j], 1.0), 0.0)
+ new_keypoints.append(sample_keypoint)
+ new_kp_ignore.append(keypoints[1][i])
+
+ bboxes = np.array(new_bboxes)
+ labels = np.array(new_labels)
+ scores = np.array(new_scores)
+ if keypoints is not None:
+ keypoints = np.array(new_keypoints)
+ new_kp_ignore = np.array(new_kp_ignore)
+ return bboxes, labels, scores, (keypoints, new_kp_ignore)
+ return bboxes, labels, scores
+
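+# Worked example with hypothetical normalized boxes: cropping with
+# sample_bbox = [0.2, 0.2, 0.7, 0.7] keeps an object at [0.3, 0.3, 0.5, 0.5]
+# (its center (0.4, 0.4) lies inside the crop) and re-normalizes it to
+# [(0.3 - 0.2) / 0.5, ...] = [0.2, 0.2, 0.6, 0.6] relative to the crop.
+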
+
+def bbox_area_sampling(bboxes, labels, scores, target_size, min_size):
+ new_bboxes = []
+ new_labels = []
+ new_scores = []
+ for i, bbox in enumerate(bboxes):
+ w = float((bbox[2] - bbox[0]) * target_size)
+ h = float((bbox[3] - bbox[1]) * target_size)
+ if w * h < float(min_size * min_size):
+ continue
+ else:
+ new_bboxes.append(bbox)
+ new_labels.append(labels[i])
+ if scores is not None and scores.size != 0:
+ new_scores.append(scores[i])
+ bboxes = np.array(new_bboxes)
+ labels = np.array(new_labels)
+ scores = np.array(new_scores)
+ return bboxes, labels, scores
+
+
+def generate_sample_bbox(sampler):
+ scale = np.random.uniform(sampler[2], sampler[3])
+ aspect_ratio = np.random.uniform(sampler[4], sampler[5])
+ aspect_ratio = max(aspect_ratio, (scale**2.0))
+ aspect_ratio = min(aspect_ratio, 1 / (scale**2.0))
+ bbox_width = scale * (aspect_ratio**0.5)
+ bbox_height = scale / (aspect_ratio**0.5)
+ xmin_bound = 1 - bbox_width
+ ymin_bound = 1 - bbox_height
+ xmin = np.random.uniform(0, xmin_bound)
+ ymin = np.random.uniform(0, ymin_bound)
+ xmax = xmin + bbox_width
+ ymax = ymin + bbox_height
+ sampled_bbox = [xmin, ymin, xmax, ymax]
+ return sampled_bbox
+
+
+def generate_sample_bbox_square(sampler, image_width, image_height):
+ scale = np.random.uniform(sampler[2], sampler[3])
+ aspect_ratio = np.random.uniform(sampler[4], sampler[5])
+ aspect_ratio = max(aspect_ratio, (scale**2.0))
+ aspect_ratio = min(aspect_ratio, 1 / (scale**2.0))
+ bbox_width = scale * (aspect_ratio**0.5)
+ bbox_height = scale / (aspect_ratio**0.5)
+ if image_height < image_width:
+ bbox_width = bbox_height * image_height / image_width
+ else:
+ bbox_height = bbox_width * image_width / image_height
+ xmin_bound = 1 - bbox_width
+ ymin_bound = 1 - bbox_height
+ xmin = np.random.uniform(0, xmin_bound)
+ ymin = np.random.uniform(0, ymin_bound)
+ xmax = xmin + bbox_width
+ ymax = ymin + bbox_height
+ sampled_bbox = [xmin, ymin, xmax, ymax]
+ return sampled_bbox
+
+
+def data_anchor_sampling(bbox_labels, image_width, image_height, scale_array,
+ resize_width):
+ num_gt = len(bbox_labels)
+ # np.random.randint range: [low, high)
+ rand_idx = np.random.randint(0, num_gt) if num_gt != 0 else 0
+
+ if num_gt != 0:
+ norm_xmin = bbox_labels[rand_idx][0]
+ norm_ymin = bbox_labels[rand_idx][1]
+ norm_xmax = bbox_labels[rand_idx][2]
+ norm_ymax = bbox_labels[rand_idx][3]
+
+ xmin = norm_xmin * image_width
+ ymin = norm_ymin * image_height
+ wid = image_width * (norm_xmax - norm_xmin)
+ hei = image_height * (norm_ymax - norm_ymin)
+ range_size = 0
+
+ area = wid * hei
+ for scale_ind in range(0, len(scale_array) - 1):
+ if area > scale_array[scale_ind] ** 2 and area < \
+ scale_array[scale_ind + 1] ** 2:
+ range_size = scale_ind + 1
+ break
+
+ if area > scale_array[len(scale_array) - 2]**2:
+ range_size = len(scale_array) - 2
+
+ scale_choose = 0.0
+ if range_size == 0:
+ rand_idx_size = 0
+ else:
+ # np.random.randint range: [low, high)
+ rng_rand_size = np.random.randint(0, range_size + 1)
+ rand_idx_size = rng_rand_size % (range_size + 1)
+
+ if rand_idx_size == range_size:
+ min_resize_val = scale_array[rand_idx_size] / 2.0
+ max_resize_val = min(2.0 * scale_array[rand_idx_size],
+ 2 * math.sqrt(wid * hei))
+ scale_choose = random.uniform(min_resize_val, max_resize_val)
+ else:
+ min_resize_val = scale_array[rand_idx_size] / 2.0
+ max_resize_val = 2.0 * scale_array[rand_idx_size]
+ scale_choose = random.uniform(min_resize_val, max_resize_val)
+
+ sample_bbox_size = wid * resize_width / scale_choose
+
+ w_off_orig = 0.0
+ h_off_orig = 0.0
+ if sample_bbox_size < max(image_height, image_width):
+ if wid <= sample_bbox_size:
+ w_off_orig = np.random.uniform(xmin + wid - sample_bbox_size,
+ xmin)
+ else:
+ w_off_orig = np.random.uniform(xmin,
+ xmin + wid - sample_bbox_size)
+
+ if hei <= sample_bbox_size:
+ h_off_orig = np.random.uniform(ymin + hei - sample_bbox_size,
+ ymin)
+ else:
+ h_off_orig = np.random.uniform(ymin,
+ ymin + hei - sample_bbox_size)
+
+ else:
+ w_off_orig = np.random.uniform(image_width - sample_bbox_size, 0.0)
+ h_off_orig = np.random.uniform(image_height - sample_bbox_size, 0.0)
+
+ w_off_orig = math.floor(w_off_orig)
+ h_off_orig = math.floor(h_off_orig)
+
+ # Figure out top left coordinates.
+ w_off = float(w_off_orig / image_width)
+ h_off = float(h_off_orig / image_height)
+
+ sampled_bbox = [
+ w_off, h_off, w_off + float(sample_bbox_size / image_width),
+ h_off + float(sample_bbox_size / image_height)
+ ]
+ return sampled_bbox
+ else:
+ return 0
+
+
+def jaccard_overlap(sample_bbox, object_bbox):
+ if sample_bbox[0] >= object_bbox[2] or \
+ sample_bbox[2] <= object_bbox[0] or \
+ sample_bbox[1] >= object_bbox[3] or \
+ sample_bbox[3] <= object_bbox[1]:
+ return 0
+ intersect_xmin = max(sample_bbox[0], object_bbox[0])
+ intersect_ymin = max(sample_bbox[1], object_bbox[1])
+ intersect_xmax = min(sample_bbox[2], object_bbox[2])
+ intersect_ymax = min(sample_bbox[3], object_bbox[3])
+ intersect_size = (intersect_xmax - intersect_xmin) * (
+ intersect_ymax - intersect_ymin)
+ sample_bbox_size = bbox_area(sample_bbox)
+ object_bbox_size = bbox_area(object_bbox)
+ overlap = intersect_size / (
+ sample_bbox_size + object_bbox_size - intersect_size)
+ return overlap
+
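+# Worked example with hypothetical boxes: for sample_bbox = [0, 0, 2, 2]
+# and object_bbox = [1, 1, 3, 3], the intersection [1, 1, 2, 2] has area 1
+# and the union area is 4 + 4 - 1 = 7, so the overlap is 1 / 7 ~= 0.143.
+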
+
+def intersect_bbox(bbox1, bbox2):
+ if bbox2[0] > bbox1[2] or bbox2[2] < bbox1[0] or \
+ bbox2[1] > bbox1[3] or bbox2[3] < bbox1[1]:
+ intersection_box = [0.0, 0.0, 0.0, 0.0]
+ else:
+ intersection_box = [
+ max(bbox1[0], bbox2[0]), max(bbox1[1], bbox2[1]),
+ min(bbox1[2], bbox2[2]), min(bbox1[3], bbox2[3])
+ ]
+ return intersection_box
+
+
+def bbox_coverage(bbox1, bbox2):
+ inter_box = intersect_bbox(bbox1, bbox2)
+ intersect_size = bbox_area(inter_box)
+
+ if intersect_size > 0:
+ bbox1_size = bbox_area(bbox1)
+ return intersect_size / bbox1_size
+ else:
+ return 0.
+
+
+def satisfy_sample_constraint(sampler,
+ sample_bbox,
+ gt_bboxes,
+ satisfy_all=False):
+ if sampler[6] == 0 and sampler[7] == 0:
+ return True
+ satisfied = []
+ for i in range(len(gt_bboxes)):
+ object_bbox = [
+ gt_bboxes[i][0], gt_bboxes[i][1], gt_bboxes[i][2], gt_bboxes[i][3]
+ ]
+ overlap = jaccard_overlap(sample_bbox, object_bbox)
+ if sampler[6] != 0 and \
+ overlap < sampler[6]:
+ satisfied.append(False)
+ continue
+ if sampler[7] != 0 and \
+ overlap > sampler[7]:
+ satisfied.append(False)
+ continue
+ satisfied.append(True)
+ if not satisfy_all:
+ return True
+
+ if satisfy_all:
+ return np.all(satisfied)
+ else:
+ return False
+
+
+def satisfy_sample_constraint_coverage(sampler, sample_bbox, gt_bboxes):
+ if sampler[6] == 0 and sampler[7] == 0:
+ has_jaccard_overlap = False
+ else:
+ has_jaccard_overlap = True
+ if sampler[8] == 0 and sampler[9] == 0:
+ has_object_coverage = False
+ else:
+ has_object_coverage = True
+
+ if not has_jaccard_overlap and not has_object_coverage:
+ return True
+ found = False
+ for i in range(len(gt_bboxes)):
+ object_bbox = [
+ gt_bboxes[i][0], gt_bboxes[i][1], gt_bboxes[i][2], gt_bboxes[i][3]
+ ]
+ if has_jaccard_overlap:
+ overlap = jaccard_overlap(sample_bbox, object_bbox)
+ if sampler[6] != 0 and \
+ overlap < sampler[6]:
+ continue
+ if sampler[7] != 0 and \
+ overlap > sampler[7]:
+ continue
+ found = True
+ if has_object_coverage:
+ object_coverage = bbox_coverage(object_bbox, sample_bbox)
+ if sampler[8] != 0 and \
+ object_coverage < sampler[8]:
+ continue
+ if sampler[9] != 0 and \
+ object_coverage > sampler[9]:
+ continue
+ found = True
+ if found:
+ return True
+ return found
+
+
+def crop_image_sampling(img, sample_bbox, image_width, image_height,
+ target_size):
+ # no clipping here
+ xmin = int(sample_bbox[0] * image_width)
+ xmax = int(sample_bbox[2] * image_width)
+ ymin = int(sample_bbox[1] * image_height)
+ ymax = int(sample_bbox[3] * image_height)
+
+ w_off = xmin
+ h_off = ymin
+ width = xmax - xmin
+ height = ymax - ymin
+ cross_xmin = max(0.0, float(w_off))
+ cross_ymin = max(0.0, float(h_off))
+ cross_xmax = min(float(w_off + width - 1.0), float(image_width))
+ cross_ymax = min(float(h_off + height - 1.0), float(image_height))
+ cross_width = cross_xmax - cross_xmin
+ cross_height = cross_ymax - cross_ymin
+
+ roi_xmin = 0 if w_off >= 0 else abs(w_off)
+ roi_ymin = 0 if h_off >= 0 else abs(h_off)
+ roi_width = cross_width
+ roi_height = cross_height
+
+ roi_y1 = int(roi_ymin)
+ roi_y2 = int(roi_ymin + roi_height)
+ roi_x1 = int(roi_xmin)
+ roi_x2 = int(roi_xmin + roi_width)
+
+ cross_y1 = int(cross_ymin)
+ cross_y2 = int(cross_ymin + cross_height)
+ cross_x1 = int(cross_xmin)
+ cross_x2 = int(cross_xmin + cross_width)
+
+ sample_img = np.zeros((height, width, 3))
+ sample_img[roi_y1: roi_y2, roi_x1: roi_x2] = \
+ img[cross_y1: cross_y2, cross_x1: cross_x2]
+
+ sample_img = cv2.resize(
+ sample_img, (target_size, target_size), interpolation=cv2.INTER_AREA)
+
+ return sample_img
+
+
+def is_poly(segm):
+ assert isinstance(segm, (list, dict)), \
+ "Invalid segm type: {}".format(type(segm))
+ return isinstance(segm, list)
+
+
+def gaussian_radius(bbox_size, min_overlap):
+ height, width = bbox_size
+
+ a1 = 1
+ b1 = (height + width)
+ c1 = width * height * (1 - min_overlap) / (1 + min_overlap)
+ sq1 = np.sqrt(b1**2 - 4 * a1 * c1)
+ radius1 = (b1 + sq1) / (2 * a1)
+
+ a2 = 4
+ b2 = 2 * (height + width)
+ c2 = (1 - min_overlap) * width * height
+ sq2 = np.sqrt(b2**2 - 4 * a2 * c2)
+ radius2 = (b2 + sq2) / 2
+
+ a3 = 4 * min_overlap
+ b3 = -2 * min_overlap * (height + width)
+ c3 = (min_overlap - 1) * width * height
+ sq3 = np.sqrt(b3**2 - 4 * a3 * c3)
+ radius3 = (b3 + sq3) / 2
+ return min(radius1, radius2, radius3)
+
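+# Worked example with a hypothetical box: gaussian_radius((10, 10), 0.7)
+# evaluates the three overlap quadratics to candidate radii of roughly
+# 19.1, 36.7 and 2.7 and returns the minimum, ~2.7. Note that the radius2
+# and radius3 divisors follow the CenterNet reference implementation
+# rather than the textbook quadratic formula's 2 * a.
+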
+
+def draw_gaussian(heatmap, center, radius, k=1, delta=6):
+    diameter = 2 * radius + 1
+    sigma = diameter / delta
+ gaussian = gaussian2D((diameter, diameter), sigma_x=sigma, sigma_y=sigma)
+
+ x, y = center
+
+ height, width = heatmap.shape[0:2]
+
+ left, right = min(x, radius), min(width - x, radius + 1)
+ top, bottom = min(y, radius), min(height - y, radius + 1)
+
+ masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
+ masked_gaussian = gaussian[radius - top:radius + bottom, radius - left:
+ radius + right]
+ np.maximum(masked_heatmap, masked_gaussian * k, out=masked_heatmap)
+
+
+def gaussian2D(shape, sigma_x=1, sigma_y=1):
+ m, n = [(ss - 1.) / 2. for ss in shape]
+ y, x = np.ogrid[-m:m + 1, -n:n + 1]
+
+ h = np.exp(-(x * x / (2 * sigma_x * sigma_x) + y * y / (2 * sigma_y *
+ sigma_y)))
+ h[h < np.finfo(h.dtype).eps * h.max()] = 0
+ return h
+
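+# Worked example: gaussian2D((3, 3), sigma_x=1, sigma_y=1) returns a 3x3
+# kernel with peak 1.0 at the center, exp(-0.5) ~= 0.607 at the edge
+# midpoints and exp(-1) ~= 0.368 at the corners.
+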
+
+def draw_umich_gaussian(heatmap, center, radius, k=1):
+ """
+ draw_umich_gaussian, refer to https://github.com/xingyizhou/CenterNet/blob/master/src/lib/utils/image.py#L126
+ """
+ diameter = 2 * radius + 1
+ gaussian = gaussian2D(
+ (diameter, diameter), sigma_x=diameter / 6, sigma_y=diameter / 6)
+
+ x, y = int(center[0]), int(center[1])
+
+ height, width = heatmap.shape[0:2]
+
+ left, right = min(x, radius), min(width - x, radius + 1)
+ top, bottom = min(y, radius), min(height - y, radius + 1)
+
+ masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right]
+ masked_gaussian = gaussian[radius - top:radius + bottom, radius - left:
+ radius + right]
+ if min(masked_gaussian.shape) > 0 and min(masked_heatmap.shape) > 0:
+ np.maximum(masked_heatmap, masked_gaussian * k, out=masked_heatmap)
+ return heatmap
+
+
+def get_border(border, size):
+ i = 1
+ while size - border // i <= border // i:
+ i *= 2
+ return border // i
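+
+
+# Worked example: get_border(128, 256) halves the border once because
+# 256 - 128 <= 128, then stops since 256 - 64 > 64, returning 128 // 2 = 64.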
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/operators.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/operators.py
new file mode 100644
index 000000000..5cc14a44d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/data/transform/operators.py
@@ -0,0 +1,3015 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# function:
+# operators to process sample,
+# eg: decode/resize/crop image
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+
+from numbers import Number, Integral
+
+import uuid
+import random
+import math
+import numpy as np
+import os
+import copy
+import logging
+import cv2
+from PIL import Image, ImageDraw
+import pickle
+import threading
+MUTEX = threading.Lock()
+
+from ppdet.core.workspace import serializable
+from ppdet.modeling import bbox_utils
+from ..reader import Compose
+
+from .op_helper import (satisfy_sample_constraint, filter_and_process,
+ generate_sample_bbox, clip_bbox, data_anchor_sampling,
+ satisfy_sample_constraint_coverage, crop_image_sampling,
+ generate_sample_bbox_square, bbox_area_sampling,
+ is_poly, get_border)
+
+from ppdet.utils.logger import setup_logger
+from ppdet.modeling.keypoint_utils import get_affine_transform, affine_transform
+logger = setup_logger(__name__)
+
+registered_ops = []
+
+
+def register_op(cls):
+ registered_ops.append(cls.__name__)
+ if not hasattr(BaseOperator, cls.__name__):
+ setattr(BaseOperator, cls.__name__, cls)
+ else:
+ raise KeyError("The {} class has been registered.".format(cls.__name__))
+ return serializable(cls)
+
+
+class BboxError(ValueError):
+ pass
+
+
+class ImageError(ValueError):
+ pass
+
+
+class BaseOperator(object):
+ def __init__(self, name=None):
+ if name is None:
+ name = self.__class__.__name__
+ self._id = name + '_' + str(uuid.uuid4())[-6:]
+
+ def apply(self, sample, context=None):
+ """ Process a sample.
+ Args:
+ sample (dict): a dict of sample, eg: {'image':xx, 'label': xxx}
+ context (dict): info about this sample processing
+ Returns:
+ result (dict): a processed sample
+ """
+ return sample
+
+ def __call__(self, sample, context=None):
+ """ Process a sample.
+ Args:
+ sample (dict): a dict of sample, eg: {'image':xx, 'label': xxx}
+ context (dict): info about this sample processing
+ Returns:
+ result (dict): a processed sample
+ """
+ if isinstance(sample, Sequence):
+ for i in range(len(sample)):
+ sample[i] = self.apply(sample[i], context)
+ else:
+ sample = self.apply(sample, context)
+ return sample
+
+ def __str__(self):
+ return str(self._id)
+
+
+@register_op
+class Decode(BaseOperator):
+ def __init__(self):
+ """ Transform the image data to numpy format following the rgb format
+ """
+ super(Decode, self).__init__()
+
+ def apply(self, sample, context=None):
+ """ load image if 'im_file' field is not empty but 'image' is"""
+ if 'image' not in sample:
+ with open(sample['im_file'], 'rb') as f:
+ sample['image'] = f.read()
+ sample.pop('im_file')
+
+ im = sample['image']
+ data = np.frombuffer(im, dtype='uint8')
+ im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode
+ if 'keep_ori_im' in sample and sample['keep_ori_im']:
+ sample['ori_image'] = im
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+
+ sample['image'] = im
+ if 'h' not in sample:
+ sample['h'] = im.shape[0]
+ elif sample['h'] != im.shape[0]:
+ logger.warning(
+ "The actual image height: {} is not equal to the "
+ "height: {} in annotation, and update sample['h'] by actual "
+ "image height.".format(im.shape[0], sample['h']))
+ sample['h'] = im.shape[0]
+ if 'w' not in sample:
+ sample['w'] = im.shape[1]
+ elif sample['w'] != im.shape[1]:
+ logger.warning(
+ "The actual image width: {} is not equal to the "
+ "width: {} in annotation, and update sample['w'] by actual "
+ "image width.".format(im.shape[1], sample['w']))
+ sample['w'] = im.shape[1]
+
+ sample['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ sample['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ return sample
+
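+# Minimal usage sketch for Decode (hedged; 'demo.jpg' is a hypothetical
+# path, not an asset of this repo):
+#     sample = {'im_file': 'demo.jpg'}
+#     sample = Decode()(sample)
+# sample['image'] now holds an HWC uint8 array in RGB order, and
+# 'im_shape' / 'scale_factor' are initialized for downstream resize ops.
+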
+
+def _make_dirs(dirname):
+ try:
+ from pathlib import Path
+ except ImportError:
+ from pathlib2 import Path
+ Path(dirname).mkdir(exist_ok=True)
+
+
+@register_op
+class DecodeCache(BaseOperator):
+ def __init__(self, cache_root=None):
+        '''Decode the image and cache the decoded result on disk
+ '''
+ super(DecodeCache, self).__init__()
+
+ self.use_cache = False if cache_root is None else True
+ self.cache_root = cache_root
+
+ if cache_root is not None:
+ _make_dirs(cache_root)
+
+ def apply(self, sample, context=None):
+
+ if self.use_cache and os.path.exists(
+ self.cache_path(self.cache_root, sample['im_file'])):
+ path = self.cache_path(self.cache_root, sample['im_file'])
+ im = self.load(path)
+
+ else:
+ if 'image' not in sample:
+ with open(sample['im_file'], 'rb') as f:
+ sample['image'] = f.read()
+
+ im = sample['image']
+ data = np.frombuffer(im, dtype='uint8')
+ im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode
+ if 'keep_ori_im' in sample and sample['keep_ori_im']:
+ sample['ori_image'] = im
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+
+ if self.use_cache and not os.path.exists(
+ self.cache_path(self.cache_root, sample['im_file'])):
+ path = self.cache_path(self.cache_root, sample['im_file'])
+ self.dump(im, path)
+
+ sample['image'] = im
+ sample['h'] = im.shape[0]
+ sample['w'] = im.shape[1]
+
+ sample['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ sample['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+
+ sample.pop('im_file')
+
+ return sample
+
+ @staticmethod
+    def cache_path(dir_root, im_file):
+        return os.path.join(dir_root, os.path.basename(im_file) + '.pkl')
+
+ @staticmethod
+ def load(path):
+ with open(path, 'rb') as f:
+ im = pickle.load(f)
+ return im
+
+ @staticmethod
+ def dump(obj, path):
+ MUTEX.acquire()
+ try:
+ with open(path, 'wb') as f:
+ pickle.dump(obj, f)
+
+ except Exception as e:
+ logger.warning('dump {} occurs exception {}'.format(path, str(e)))
+
+ finally:
+ MUTEX.release()
+
+
+@register_op
+class SniperDecodeCrop(BaseOperator):
+ def __init__(self):
+ super(SniperDecodeCrop, self).__init__()
+
+ def __call__(self, sample, context=None):
+ if 'image' not in sample:
+ with open(sample['im_file'], 'rb') as f:
+ sample['image'] = f.read()
+ sample.pop('im_file')
+
+ im = sample['image']
+ data = np.frombuffer(im, dtype='uint8')
+ im = cv2.imdecode(data, cv2.IMREAD_COLOR) # BGR mode, but need RGB mode
+ if 'keep_ori_im' in sample and sample['keep_ori_im']:
+ sample['ori_image'] = im
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+
+ chip = sample['chip']
+ x1, y1, x2, y2 = [int(xi) for xi in chip]
+ im = im[max(y1, 0):min(y2, im.shape[0]), max(x1, 0):min(x2, im.shape[
+ 1]), :]
+
+ sample['image'] = im
+ h = im.shape[0]
+ w = im.shape[1]
+ # sample['im_info'] = [h, w, 1.0]
+ sample['h'] = h
+ sample['w'] = w
+
+ sample['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ sample['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ return sample
+
+
+@register_op
+class Permute(BaseOperator):
+ def __init__(self):
+ """
+ Change the channel to be (C, H, W)
+ """
+ super(Permute, self).__init__()
+
+ def apply(self, sample, context=None):
+ im = sample['image']
+ im = im.transpose((2, 0, 1))
+ sample['image'] = im
+ return sample
+
+
+@register_op
+class Lighting(BaseOperator):
+ """
+ Lighting the image by eigenvalues and eigenvectors
+ Args:
+ eigval (list): eigenvalues
+ eigvec (list): eigenvectors
+ alphastd (float): random weight of lighting, 0.1 by default
+ """
+
+ def __init__(self, eigval, eigvec, alphastd=0.1):
+ super(Lighting, self).__init__()
+ self.alphastd = alphastd
+ self.eigval = np.array(eigval).astype('float32')
+ self.eigvec = np.array(eigvec).astype('float32')
+
+ def apply(self, sample, context=None):
+ alpha = np.random.normal(scale=self.alphastd, size=(3, ))
+ sample['image'] += np.dot(self.eigvec, self.eigval * alpha)
+ return sample
+
+
+@register_op
+class RandomErasingImage(BaseOperator):
+ def __init__(self, prob=0.5, lower=0.02, higher=0.4, aspect_ratio=0.3):
+ """
+ Random Erasing Data Augmentation, see https://arxiv.org/abs/1708.04896
+ Args:
+ prob (float): probability to carry out random erasing
+ lower (float): lower limit of the erasing area ratio
+ higher (float): upper limit of the erasing area ratio
+ aspect_ratio (float): aspect ratio of the erasing region
+ """
+ super(RandomErasingImage, self).__init__()
+ self.prob = prob
+ self.lower = lower
+ self.higher = higher
+ self.aspect_ratio = aspect_ratio
+
+ def apply(self, sample):
+ gt_bbox = sample['gt_bbox']
+ im = sample['image']
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image is not a numpy array.".format(self))
+ if len(im.shape) != 3:
+ raise ImageError("{}: image is not 3-dimensional.".format(self))
+
+ for idx in range(gt_bbox.shape[0]):
+ if self.prob <= np.random.rand():
+ continue
+
+ x1, y1, x2, y2 = gt_bbox[idx, :]
+ w_bbox = x2 - x1
+ h_bbox = y2 - y1
+ area = w_bbox * h_bbox
+
+ target_area = random.uniform(self.lower, self.higher) * area
+ aspect_ratio = random.uniform(self.aspect_ratio,
+ 1 / self.aspect_ratio)
+
+ h = int(round(math.sqrt(target_area * aspect_ratio)))
+ w = int(round(math.sqrt(target_area / aspect_ratio)))
+
+ if w < w_bbox and h < h_bbox:
+ off_y1 = random.randint(0, int(h_bbox - h))
+ off_x1 = random.randint(0, int(w_bbox - w))
+ im[int(y1 + off_y1):int(y1 + off_y1 + h), int(x1 + off_x1):int(
+ x1 + off_x1 + w), :] = 0
+ sample['image'] = im
+ return sample
+
+
+@register_op
+class NormalizeImage(BaseOperator):
+ def __init__(self, mean=[0.485, 0.456, 0.406], std=[1, 1, 1],
+ is_scale=True):
+ """
+ Args:
+ mean (list): the pixel mean
+            std (list): the pixel standard deviation
+            is_scale (bool): whether to scale pixel values to [0, 1] before normalizing
+ """
+ super(NormalizeImage, self).__init__()
+ self.mean = mean
+ self.std = std
+ self.is_scale = is_scale
+ if not (isinstance(self.mean, list) and isinstance(self.std, list) and
+ isinstance(self.is_scale, bool)):
+ raise TypeError("{}: input type is invalid.".format(self))
+ from functools import reduce
+ if reduce(lambda x, y: x * y, self.std) == 0:
+ raise ValueError('{}: std is invalid!'.format(self))
+
+ def apply(self, sample, context=None):
+ """Normalize the image.
+ Operators:
+ 1.(optional) Scale the image to [0,1]
+            2. Subtract the mean from each pixel and divide by the std
+ """
+ im = sample['image']
+ im = im.astype(np.float32, copy=False)
+ mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
+ std = np.array(self.std)[np.newaxis, np.newaxis, :]
+
+ if self.is_scale:
+ im = im / 255.0
+
+ im -= mean
+ im /= std
+
+ sample['image'] = im
+ return sample
+
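+# Worked example with the defaults above: with is_scale=True,
+# mean=[0.485, 0.456, 0.406] and std=[1, 1, 1], a pixel value of 255 in
+# the first channel becomes 255 / 255.0 - 0.485 = 0.515.
+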
+
+@register_op
+class GridMask(BaseOperator):
+ def __init__(self,
+ use_h=True,
+ use_w=True,
+ rotate=1,
+ offset=False,
+ ratio=0.5,
+ mode=1,
+ prob=0.7,
+ upper_iter=360000):
+ """
+ GridMask Data Augmentation, see https://arxiv.org/abs/2001.04086
+ Args:
+ use_h (bool): whether to mask vertically
+            use_w (bool): whether to mask horizontally
+ rotate (float): angle for the mask to rotate
+ offset (float): mask offset
+ ratio (float): mask ratio
+ mode (int): gridmask mode
+ prob (float): max probability to carry out gridmask
+ upper_iter (int): suggested to be equal to global max_iter
+ """
+ super(GridMask, self).__init__()
+ self.use_h = use_h
+ self.use_w = use_w
+ self.rotate = rotate
+ self.offset = offset
+ self.ratio = ratio
+ self.mode = mode
+ self.prob = prob
+ self.upper_iter = upper_iter
+
+ from .gridmask_utils import Gridmask
+ self.gridmask_op = Gridmask(
+ use_h,
+ use_w,
+ rotate=rotate,
+ offset=offset,
+ ratio=ratio,
+ mode=mode,
+ prob=prob,
+ upper_iter=upper_iter)
+
+ def apply(self, sample, context=None):
+ sample['image'] = self.gridmask_op(sample['image'], sample['curr_iter'])
+ return sample
+
+
+@register_op
+class RandomDistort(BaseOperator):
+ """Random color distortion.
+ Args:
+ hue (list): hue settings. in [lower, upper, probability] format.
+ saturation (list): saturation settings. in [lower, upper, probability] format.
+ contrast (list): contrast settings. in [lower, upper, probability] format.
+ brightness (list): brightness settings. in [lower, upper, probability] format.
+ random_apply (bool): whether to apply in random (yolo) or fixed (SSD)
+ order.
+        count (int): the number of distortion functions to apply
+ random_channel (bool): whether to swap channels randomly
+ """
+
+ def __init__(self,
+ hue=[-18, 18, 0.5],
+ saturation=[0.5, 1.5, 0.5],
+ contrast=[0.5, 1.5, 0.5],
+ brightness=[0.5, 1.5, 0.5],
+ random_apply=True,
+ count=4,
+ random_channel=False):
+ super(RandomDistort, self).__init__()
+ self.hue = hue
+ self.saturation = saturation
+ self.contrast = contrast
+ self.brightness = brightness
+ self.random_apply = random_apply
+ self.count = count
+ self.random_channel = random_channel
+
+ def apply_hue(self, img):
+ low, high, prob = self.hue
+ if np.random.uniform(0., 1.) < prob:
+ return img
+
+ img = img.astype(np.float32)
+        # it works, but results differ from the HSV version
+ delta = np.random.uniform(low, high)
+ u = np.cos(delta * np.pi)
+ w = np.sin(delta * np.pi)
+ bt = np.array([[1.0, 0.0, 0.0], [0.0, u, -w], [0.0, w, u]])
+ tyiq = np.array([[0.299, 0.587, 0.114], [0.596, -0.274, -0.321],
+ [0.211, -0.523, 0.311]])
+ ityiq = np.array([[1.0, 0.956, 0.621], [1.0, -0.272, -0.647],
+ [1.0, -1.107, 1.705]])
+ t = np.dot(np.dot(ityiq, bt), tyiq).T
+ img = np.dot(img, t)
+ return img
+
+ def apply_saturation(self, img):
+ low, high, prob = self.saturation
+ if np.random.uniform(0., 1.) < prob:
+ return img
+ delta = np.random.uniform(low, high)
+ img = img.astype(np.float32)
+        # it works, but results differ from the HSV version
+ gray = img * np.array([[[0.299, 0.587, 0.114]]], dtype=np.float32)
+ gray = gray.sum(axis=2, keepdims=True)
+ gray *= (1.0 - delta)
+ img *= delta
+ img += gray
+ return img
+
+ def apply_contrast(self, img):
+ low, high, prob = self.contrast
+ if np.random.uniform(0., 1.) < prob:
+ return img
+ delta = np.random.uniform(low, high)
+ img = img.astype(np.float32)
+ img *= delta
+ return img
+
+ def apply_brightness(self, img):
+ low, high, prob = self.brightness
+ if np.random.uniform(0., 1.) < prob:
+ return img
+ delta = np.random.uniform(low, high)
+ img = img.astype(np.float32)
+ img += delta
+ return img
+
+ def apply(self, sample, context=None):
+ img = sample['image']
+ if self.random_apply:
+ functions = [
+ self.apply_brightness, self.apply_contrast,
+ self.apply_saturation, self.apply_hue
+ ]
+ distortions = np.random.permutation(functions)[:self.count]
+ for func in distortions:
+ img = func(img)
+ sample['image'] = img
+ return sample
+
+ img = self.apply_brightness(img)
+ mode = np.random.randint(0, 2)
+
+ if mode:
+ img = self.apply_contrast(img)
+
+ img = self.apply_saturation(img)
+ img = self.apply_hue(img)
+
+ if not mode:
+ img = self.apply_contrast(img)
+
+ if self.random_channel:
+ if np.random.randint(0, 2):
+ img = img[..., np.random.permutation(3)]
+ sample['image'] = img
+ return sample
+
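+# Worked example of RandomDistort.apply_saturation with a hypothetical
+# delta: delta = 0 collapses the image to its Rec.601 luma
+# (0.299 R + 0.587 G + 0.114 B) broadcast over channels, i.e. grayscale;
+# delta = 1 leaves it unchanged, since img * 1 + gray * (1 - 1) = img.
+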
+
+@register_op
+class AutoAugment(BaseOperator):
+ def __init__(self, autoaug_type="v1"):
+ """
+ Args:
+ autoaug_type (str): autoaug type, support v0, v1, v2, v3, test
+ """
+ super(AutoAugment, self).__init__()
+ self.autoaug_type = autoaug_type
+
+ def apply(self, sample, context=None):
+ """
+ Learning Data Augmentation Strategies for Object Detection, see https://arxiv.org/abs/1906.11172
+ """
+ im = sample['image']
+ gt_bbox = sample['gt_bbox']
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image is not a numpy array.".format(self))
+ if len(im.shape) != 3:
+ raise ImageError("{}: image is not 3-dimensional.".format(self))
+ if len(gt_bbox) == 0:
+ return sample
+
+ height, width, _ = im.shape
+ norm_gt_bbox = np.ones_like(gt_bbox, dtype=np.float32)
+ norm_gt_bbox[:, 0] = gt_bbox[:, 1] / float(height)
+ norm_gt_bbox[:, 1] = gt_bbox[:, 0] / float(width)
+ norm_gt_bbox[:, 2] = gt_bbox[:, 3] / float(height)
+ norm_gt_bbox[:, 3] = gt_bbox[:, 2] / float(width)
+
+ from .autoaugment_utils import distort_image_with_autoaugment
+ im, norm_gt_bbox = distort_image_with_autoaugment(im, norm_gt_bbox,
+ self.autoaug_type)
+
+ gt_bbox[:, 0] = norm_gt_bbox[:, 1] * float(width)
+ gt_bbox[:, 1] = norm_gt_bbox[:, 0] * float(height)
+ gt_bbox[:, 2] = norm_gt_bbox[:, 3] * float(width)
+ gt_bbox[:, 3] = norm_gt_bbox[:, 2] * float(height)
+
+ sample['image'] = im
+ sample['gt_bbox'] = gt_bbox
+ return sample
+
+
+@register_op
+class RandomFlip(BaseOperator):
+ def __init__(self, prob=0.5):
+ """
+ Args:
+ prob (float): the probability of flipping image
+ """
+ super(RandomFlip, self).__init__()
+ self.prob = prob
+ if not (isinstance(self.prob, float)):
+ raise TypeError("{}: input type is invalid.".format(self))
+
+ def apply_segm(self, segms, height, width):
+ def _flip_poly(poly, width):
+ flipped_poly = np.array(poly)
+ flipped_poly[0::2] = width - np.array(poly[0::2])
+ return flipped_poly.tolist()
+
+ def _flip_rle(rle, height, width):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, height, width)
+ mask = mask_util.decode(rle)
+ mask = mask[:, ::-1]
+ rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
+ return rle
+
+ flipped_segms = []
+ for segm in segms:
+ if is_poly(segm):
+ # Polygon format
+ flipped_segms.append([_flip_poly(poly, width) for poly in segm])
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ flipped_segms.append(_flip_rle(segm, height, width))
+ return flipped_segms
+
+ def apply_keypoint(self, gt_keypoint, width):
+ for i in range(gt_keypoint.shape[1]):
+ if i % 2 == 0:
+ old_x = gt_keypoint[:, i].copy()
+ gt_keypoint[:, i] = width - old_x
+ return gt_keypoint
+
+ def apply_image(self, image):
+ return image[:, ::-1, :]
+
+ def apply_bbox(self, bbox, width):
+ oldx1 = bbox[:, 0].copy()
+ oldx2 = bbox[:, 2].copy()
+ bbox[:, 0] = width - oldx2
+ bbox[:, 2] = width - oldx1
+ return bbox
+
+ def apply_rbox(self, bbox, width):
+ oldx1 = bbox[:, 0].copy()
+ oldx2 = bbox[:, 2].copy()
+ oldx3 = bbox[:, 4].copy()
+ oldx4 = bbox[:, 6].copy()
+ bbox[:, 0] = width - oldx1
+ bbox[:, 2] = width - oldx2
+ bbox[:, 4] = width - oldx3
+ bbox[:, 6] = width - oldx4
+ bbox = [bbox_utils.get_best_begin_point_single(e) for e in bbox]
+ return bbox
+
+ def apply(self, sample, context=None):
+ """Filp the image and bounding box.
+ Operators:
+ 1. Flip the image numpy.
+ 2. Transform the bboxes' x coordinates.
+ (Must judge whether the coordinates are normalized!)
+ 3. Transform the segmentations' x coordinates.
+ (Must judge whether the coordinates are normalized!)
+ Output:
+ sample: the image, bounding box and segmentation part
+ in sample are flipped.
+ """
+ if np.random.uniform(0, 1) < self.prob:
+ im = sample['image']
+ height, width = im.shape[:2]
+ im = self.apply_image(im)
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'], width)
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], height,
+ width)
+ if 'gt_keypoint' in sample and len(sample['gt_keypoint']) > 0:
+ sample['gt_keypoint'] = self.apply_keypoint(
+ sample['gt_keypoint'], width)
+
+ if 'semantic' in sample and sample['semantic']:
+ sample['semantic'] = sample['semantic'][:, ::-1]
+
+ if 'gt_segm' in sample and sample['gt_segm'].any():
+ sample['gt_segm'] = sample['gt_segm'][:, :, ::-1]
+
+ if 'gt_rbox2poly' in sample and sample['gt_rbox2poly'].any():
+ sample['gt_rbox2poly'] = self.apply_rbox(sample['gt_rbox2poly'],
+ width)
+
+ sample['flipped'] = True
+ sample['image'] = im
+ return sample
+
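+# Worked example of RandomFlip.apply_bbox with a hypothetical box: on an
+# image of width 100, a box with x1 = 10, x2 = 30 flips to
+# x1 = 100 - 30 = 70, x2 = 100 - 10 = 90; y coordinates are untouched.
+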
+
+@register_op
+class Resize(BaseOperator):
+ def __init__(self, target_size, keep_ratio, interp=cv2.INTER_LINEAR):
+ """
+        Resize image to target size. If keep_ratio is True,
+        resize the image's long side to the maximum of target_size;
+        if keep_ratio is False, resize the image to target size (h, w)
+ Args:
+ target_size (int|list): image target size
+ keep_ratio (bool): whether keep_ratio or not, default true
+ interp (int): the interpolation method
+ """
+ super(Resize, self).__init__()
+ self.keep_ratio = keep_ratio
+ self.interp = interp
+ if not isinstance(target_size, (Integral, Sequence)):
+ raise TypeError(
+ "Type of target_size is invalid. Must be Integer or List or Tuple, now is {}".
+ format(type(target_size)))
+ if isinstance(target_size, Integral):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+
+ def apply_image(self, image, scale):
+ im_scale_x, im_scale_y = scale
+
+ return cv2.resize(
+ image,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+
+ def apply_bbox(self, bbox, scale, size):
+ im_scale_x, im_scale_y = scale
+ resize_w, resize_h = size
+ bbox[:, 0::2] *= im_scale_x
+ bbox[:, 1::2] *= im_scale_y
+ bbox[:, 0::2] = np.clip(bbox[:, 0::2], 0, resize_w)
+ bbox[:, 1::2] = np.clip(bbox[:, 1::2], 0, resize_h)
+ return bbox
+
+ def apply_segm(self, segms, im_size, scale):
+ def _resize_poly(poly, im_scale_x, im_scale_y):
+ resized_poly = np.array(poly).astype('float32')
+ resized_poly[0::2] *= im_scale_x
+ resized_poly[1::2] *= im_scale_y
+ return resized_poly.tolist()
+
+ def _resize_rle(rle, im_h, im_w, im_scale_x, im_scale_y):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, im_h, im_w)
+
+ mask = mask_util.decode(rle)
+ mask = cv2.resize(
+ mask,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
+ return rle
+
+ im_h, im_w = im_size
+ im_scale_x, im_scale_y = scale
+ resized_segms = []
+ for segm in segms:
+ if is_poly(segm):
+ # Polygon format
+ resized_segms.append([
+ _resize_poly(poly, im_scale_x, im_scale_y) for poly in segm
+ ])
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ resized_segms.append(
+ _resize_rle(segm, im_h, im_w, im_scale_x, im_scale_y))
+
+ return resized_segms
+
+ def apply(self, sample, context=None):
+ """ Resize the image numpy.
+ """
+ im = sample['image']
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image type is not numpy.".format(self))
+ if len(im.shape) != 3:
+ raise ImageError('{}: image is not 3-dimensional.'.format(self))
+
+ # apply image
+ im_shape = im.shape
+ if self.keep_ratio:
+
+ im_size_min = np.min(im_shape[0:2])
+ im_size_max = np.max(im_shape[0:2])
+
+ target_size_min = np.min(self.target_size)
+ target_size_max = np.max(self.target_size)
+
+ im_scale = min(target_size_min / im_size_min,
+ target_size_max / im_size_max)
+
+ resize_h = im_scale * float(im_shape[0])
+ resize_w = im_scale * float(im_shape[1])
+
+ im_scale_x = im_scale
+ im_scale_y = im_scale
+ else:
+ resize_h, resize_w = self.target_size
+ im_scale_y = resize_h / im_shape[0]
+ im_scale_x = resize_w / im_shape[1]
+
+ im = self.apply_image(sample['image'], [im_scale_x, im_scale_y])
+ sample['image'] = im
+ sample['im_shape'] = np.asarray([resize_h, resize_w], dtype=np.float32)
+ if 'scale_factor' in sample:
+ scale_factor = sample['scale_factor']
+ sample['scale_factor'] = np.asarray(
+ [scale_factor[0] * im_scale_y, scale_factor[1] * im_scale_x],
+ dtype=np.float32)
+ else:
+ sample['scale_factor'] = np.asarray(
+ [im_scale_y, im_scale_x], dtype=np.float32)
+
+ # apply bbox
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'],
+ [im_scale_x, im_scale_y],
+ [resize_w, resize_h])
+
+ # apply rbox
+ if 'gt_rbox2poly' in sample:
+ if np.array(sample['gt_rbox2poly']).shape[1] != 8:
+ logger.warning(
+ "gt_rbox2poly's length shoule be 8, but actually is {}".
+ format(len(sample['gt_rbox2poly'])))
+ sample['gt_rbox2poly'] = self.apply_bbox(sample['gt_rbox2poly'],
+ [im_scale_x, im_scale_y],
+ [resize_w, resize_h])
+
+ # apply polygon
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], im_shape[:2],
+ [im_scale_x, im_scale_y])
+
+ # apply semantic
+ if 'semantic' in sample and sample['semantic']:
+ semantic = sample['semantic']
+ semantic = cv2.resize(
+ semantic.astype('float32'),
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ semantic = np.asarray(semantic).astype('int32')
+ semantic = np.expand_dims(semantic, 0)
+ sample['semantic'] = semantic
+
+ # apply gt_segm
+ if 'gt_segm' in sample and len(sample['gt_segm']) > 0:
+ masks = [
+ cv2.resize(
+ gt_segm,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=cv2.INTER_NEAREST)
+ for gt_segm in sample['gt_segm']
+ ]
+ sample['gt_segm'] = np.asarray(masks).astype(np.uint8)
+
+ return sample
+
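+# Worked example of the keep_ratio branch above with hypothetical sizes:
+# for a 600x1200 (h x w) image and target_size = [800, 1333],
+# im_scale = min(800 / 600, 1333 / 1200) ~= 1.111, so the image is resized
+# to about 667x1333: the long side is capped at 1333 and the aspect ratio
+# is preserved.
+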
+
+@register_op
+class MultiscaleTestResize(BaseOperator):
+ def __init__(self,
+ origin_target_size=[800, 1333],
+ target_size=[],
+ interp=cv2.INTER_LINEAR,
+ use_flip=True):
+ """
+        Rescale the image to each size in target_size, capped at max_size.
+ Args:
+ origin_target_size (list): origin target size of image
+ target_size (list): A list of target sizes of image.
+ interp (int): the interpolation method.
+ use_flip (bool): whether use flip augmentation.
+ """
+ super(MultiscaleTestResize, self).__init__()
+ self.interp = interp
+ self.use_flip = use_flip
+
+ if not isinstance(target_size, Sequence):
+ raise TypeError(
+ "Type of target_size is invalid. Must be List or Tuple, now is {}".
+ format(type(target_size)))
+ self.target_size = target_size
+
+ if not isinstance(origin_target_size, Sequence):
+ raise TypeError(
+ "Type of origin_target_size is invalid. Must be List or Tuple, now is {}".
+ format(type(origin_target_size)))
+
+ self.origin_target_size = origin_target_size
+
+ def apply(self, sample, context=None):
+ """ Resize the image numpy for multi-scale test.
+ """
+ samples = []
+ resizer = Resize(
+ self.origin_target_size, keep_ratio=True, interp=self.interp)
+ samples.append(resizer(sample.copy(), context))
+ if self.use_flip:
+            flipper = RandomFlip(1.1)  # prob > 1 guarantees the flip is applied
+ samples.append(flipper(sample.copy(), context=context))
+
+ for size in self.target_size:
+ resizer = Resize(size, keep_ratio=True, interp=self.interp)
+ samples.append(resizer(sample.copy(), context))
+
+ return samples
+
+
+@register_op
+class RandomResize(BaseOperator):
+ def __init__(self,
+ target_size,
+ keep_ratio=True,
+ interp=cv2.INTER_LINEAR,
+ random_size=True,
+ random_interp=False):
+ """
+        Resize image to a random target size with a random interpolation method
+ Args:
+ target_size (int, list, tuple): image target size, if random size is True, must be list or tuple
+            keep_ratio (bool): whether to keep the aspect ratio or not, default true
+ interp (int): the interpolation method
+ random_size (bool): whether random select target size of image
+ random_interp (bool): whether random select interpolation method
+ """
+ super(RandomResize, self).__init__()
+ self.keep_ratio = keep_ratio
+ self.interp = interp
+ self.interps = [
+ cv2.INTER_NEAREST,
+ cv2.INTER_LINEAR,
+ cv2.INTER_AREA,
+ cv2.INTER_CUBIC,
+ cv2.INTER_LANCZOS4,
+ ]
+ assert isinstance(target_size, (
+ Integral, Sequence)), "target_size must be Integer, List or Tuple"
+ if random_size and not isinstance(target_size, Sequence):
+ raise TypeError(
+ "Type of target_size is invalid when random_size is True. Must be List or Tuple, now is {}".
+ format(type(target_size)))
+ self.target_size = target_size
+ self.random_size = random_size
+ self.random_interp = random_interp
+
+ def apply(self, sample, context=None):
+ """ Resize the image numpy.
+ """
+ if self.random_size:
+ target_size = random.choice(self.target_size)
+ else:
+ target_size = self.target_size
+
+ if self.random_interp:
+ interp = random.choice(self.interps)
+ else:
+ interp = self.interp
+
+ resizer = Resize(target_size, self.keep_ratio, interp)
+ return resizer(sample, context=context)
+
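+# Usage sketch (illustrative): with random_size=True, target_size must be a
+# list of candidate sizes, one of which is drawn per sample (assuming
+# BaseOperator.__call__ forwards to apply()):
+#
+#   op = RandomResize(target_size=[[640, 640], [672, 672], [704, 704]],
+#                     keep_ratio=False, random_interp=True)
+#   sample = op(sample)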
+
+@register_op
+class RandomExpand(BaseOperator):
+ """Random expand the canvas.
+ Args:
+ ratio (float): maximum expansion ratio.
+        prob (float): probability to keep the sample unchanged (expansion happens with probability 1 - prob).
+ fill_value (list): color value used to fill the canvas. in RGB order.
+ """
+
+ def __init__(self, ratio=4., prob=0.5, fill_value=(127.5, 127.5, 127.5)):
+ super(RandomExpand, self).__init__()
+ assert ratio > 1.01, "expand ratio must be larger than 1.01"
+ self.ratio = ratio
+ self.prob = prob
+ assert isinstance(fill_value, (Number, Sequence)), \
+ "fill value must be either float or sequence"
+ if isinstance(fill_value, Number):
+ fill_value = (fill_value, ) * 3
+ if not isinstance(fill_value, tuple):
+ fill_value = tuple(fill_value)
+ self.fill_value = fill_value
+
+ def apply(self, sample, context=None):
+        # note: the sample is returned unchanged with probability `prob`,
+        # i.e. expansion actually happens with probability 1 - prob
+        if np.random.uniform(0., 1.) < self.prob:
+            return sample
+
+ im = sample['image']
+ height, width = im.shape[:2]
+ ratio = np.random.uniform(1., self.ratio)
+ h = int(height * ratio)
+ w = int(width * ratio)
+        if h <= height or w <= width:
+            return sample
+ y = np.random.randint(0, h - height)
+ x = np.random.randint(0, w - width)
+ offsets, size = [x, y], [h, w]
+
+ pad = Pad(size,
+ pad_mode=-1,
+ offsets=offsets,
+ fill_value=self.fill_value)
+
+ return pad(sample, context=context)
+
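+# Usage sketch (illustrative): enlarge the canvas by a random factor up to
+# `ratio`, filling the new area with fill_value; gt boxes are shifted through
+# Pad's pad_mode=-1 offsets:
+#
+#   op = RandomExpand(ratio=2., prob=0.5, fill_value=(127.5, 127.5, 127.5))
+#   sample = op(sample)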
+
+@register_op
+class CropWithSampling(BaseOperator):
+ def __init__(self, batch_sampler, satisfy_all=False, avoid_no_bbox=True):
+ """
+ Args:
+ batch_sampler (list): Multiple sets of different
+ parameters for cropping.
+ satisfy_all (bool): whether all boxes must satisfy.
+ e.g.[[1, 1, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.1, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.3, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.5, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.7, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.9, 1.0],
+ [1, 50, 0.3, 1.0, 0.5, 2.0, 0.0, 1.0]]
+ [max sample, max trial, min scale, max scale,
+ min aspect ratio, max aspect ratio,
+ min overlap, max overlap]
+            avoid_no_bbox (bool): whether to avoid the situation
+                where no box remains in the cropped image.
+ """
+ super(CropWithSampling, self).__init__()
+ self.batch_sampler = batch_sampler
+ self.satisfy_all = satisfy_all
+ self.avoid_no_bbox = avoid_no_bbox
+
+ def apply(self, sample, context):
+ """
+ Crop the image and modify bounding box.
+ Operators:
+ 1. Scale the image width and height.
+            2. Crop the image according to a random sample.
+ 3. Rescale the bounding box.
+ 4. Determine if the new bbox is satisfied in the new image.
+ Returns:
+ sample: the image, bounding box are replaced.
+ """
+ assert 'image' in sample, "image data not found"
+ im = sample['image']
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ im_height, im_width = im.shape[:2]
+ gt_score = None
+ if 'gt_score' in sample:
+ gt_score = sample['gt_score']
+ sampled_bbox = []
+ gt_bbox = gt_bbox.tolist()
+ for sampler in self.batch_sampler:
+ found = 0
+ for i in range(sampler[1]):
+ if found >= sampler[0]:
+ break
+ sample_bbox = generate_sample_bbox(sampler)
+ if satisfy_sample_constraint(sampler, sample_bbox, gt_bbox,
+ self.satisfy_all):
+ sampled_bbox.append(sample_bbox)
+ found = found + 1
+ im = np.array(im)
+ while sampled_bbox:
+ idx = int(np.random.uniform(0, len(sampled_bbox)))
+ sample_bbox = sampled_bbox.pop(idx)
+ sample_bbox = clip_bbox(sample_bbox)
+ crop_bbox, crop_class, crop_score = \
+ filter_and_process(sample_bbox, gt_bbox, gt_class, scores=gt_score)
+ if self.avoid_no_bbox:
+ if len(crop_bbox) < 1:
+ continue
+ xmin = int(sample_bbox[0] * im_width)
+ xmax = int(sample_bbox[2] * im_width)
+ ymin = int(sample_bbox[1] * im_height)
+ ymax = int(sample_bbox[3] * im_height)
+ im = im[ymin:ymax, xmin:xmax]
+ sample['image'] = im
+ sample['gt_bbox'] = crop_bbox
+ sample['gt_class'] = crop_class
+ sample['gt_score'] = crop_score
+ return sample
+ return sample
+
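+# Usage sketch (illustrative): each batch_sampler row follows the layout in
+# the docstring above, i.e. [max sample, max trial, min/max scale, min/max
+# aspect ratio, min/max overlap]; e.g. keep one crop whose IoU with some gt
+# box is at least 0.3:
+#
+#   op = CropWithSampling(batch_sampler=[[1, 50, 0.3, 1.0, 0.5, 2.0, 0.3, 1.0]])
+#   sample = op(sample)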
+
+@register_op
+class CropWithDataAchorSampling(BaseOperator):
+ def __init__(self,
+ batch_sampler,
+ anchor_sampler=None,
+ target_size=None,
+ das_anchor_scales=[16, 32, 64, 128],
+ sampling_prob=0.5,
+ min_size=8.,
+ avoid_no_bbox=True):
+ """
+ Args:
+ anchor_sampler (list): anchor_sampling sets of different
+ parameters for cropping.
+ batch_sampler (list): Multiple sets of different
+ parameters for cropping.
+ e.g.[[1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2, 0.0]]
+ [[1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0],
+ [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0],
+ [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0],
+ [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0],
+ [1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]]
+ [max sample, max trial, min scale, max scale,
+ min aspect ratio, max aspect ratio,
+ min overlap, max overlap, min coverage, max coverage]
+ target_size (int): target image size.
+ das_anchor_scales (list[float]): a list of anchor scales in data
+                anchor sampling.
+ min_size (float): minimum size of sampled bbox.
+            avoid_no_bbox (bool): whether to avoid the situation
+                where no box remains in the cropped image.
+ """
+ super(CropWithDataAchorSampling, self).__init__()
+ self.anchor_sampler = anchor_sampler
+ self.batch_sampler = batch_sampler
+ self.target_size = target_size
+ self.sampling_prob = sampling_prob
+ self.min_size = min_size
+ self.avoid_no_bbox = avoid_no_bbox
+ self.das_anchor_scales = np.array(das_anchor_scales)
+
+ def apply(self, sample, context):
+ """
+ Crop the image and modify bounding box.
+ Operators:
+ 1. Scale the image width and height.
+            2. Crop the image according to a random sample.
+ 3. Rescale the bounding box.
+ 4. Determine if the new bbox is satisfied in the new image.
+ Returns:
+ sample: the image, bounding box are replaced.
+ """
+ assert 'image' in sample, "image data not found"
+ im = sample['image']
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ image_height, image_width = im.shape[:2]
+ gt_bbox[:, 0] /= image_width
+ gt_bbox[:, 1] /= image_height
+ gt_bbox[:, 2] /= image_width
+ gt_bbox[:, 3] /= image_height
+ gt_score = None
+ if 'gt_score' in sample:
+ gt_score = sample['gt_score']
+ sampled_bbox = []
+ gt_bbox = gt_bbox.tolist()
+
+ prob = np.random.uniform(0., 1.)
+ if prob > self.sampling_prob: # anchor sampling
+ assert self.anchor_sampler
+ for sampler in self.anchor_sampler:
+ found = 0
+ for i in range(sampler[1]):
+ if found >= sampler[0]:
+ break
+ sample_bbox = data_anchor_sampling(
+ gt_bbox, image_width, image_height,
+ self.das_anchor_scales, self.target_size)
+ if sample_bbox == 0:
+ break
+ if satisfy_sample_constraint_coverage(sampler, sample_bbox,
+ gt_bbox):
+ sampled_bbox.append(sample_bbox)
+ found = found + 1
+ im = np.array(im)
+ while sampled_bbox:
+ idx = int(np.random.uniform(0, len(sampled_bbox)))
+ sample_bbox = sampled_bbox.pop(idx)
+
+ if 'gt_keypoint' in sample.keys():
+ keypoints = (sample['gt_keypoint'],
+ sample['keypoint_ignore'])
+ crop_bbox, crop_class, crop_score, gt_keypoints = \
+ filter_and_process(sample_bbox, gt_bbox, gt_class,
+ scores=gt_score,
+ keypoints=keypoints)
+ else:
+ crop_bbox, crop_class, crop_score = filter_and_process(
+ sample_bbox, gt_bbox, gt_class, scores=gt_score)
+ crop_bbox, crop_class, crop_score = bbox_area_sampling(
+ crop_bbox, crop_class, crop_score, self.target_size,
+ self.min_size)
+
+ if self.avoid_no_bbox:
+ if len(crop_bbox) < 1:
+ continue
+ im = crop_image_sampling(im, sample_bbox, image_width,
+ image_height, self.target_size)
+ height, width = im.shape[:2]
+ crop_bbox[:, 0] *= width
+ crop_bbox[:, 1] *= height
+ crop_bbox[:, 2] *= width
+ crop_bbox[:, 3] *= height
+ sample['image'] = im
+ sample['gt_bbox'] = crop_bbox
+ sample['gt_class'] = crop_class
+ if 'gt_score' in sample:
+ sample['gt_score'] = crop_score
+ if 'gt_keypoint' in sample.keys():
+ sample['gt_keypoint'] = gt_keypoints[0]
+ sample['keypoint_ignore'] = gt_keypoints[1]
+ return sample
+ return sample
+
+ else:
+ for sampler in self.batch_sampler:
+ found = 0
+ for i in range(sampler[1]):
+ if found >= sampler[0]:
+ break
+ sample_bbox = generate_sample_bbox_square(
+ sampler, image_width, image_height)
+ if satisfy_sample_constraint_coverage(sampler, sample_bbox,
+ gt_bbox):
+ sampled_bbox.append(sample_bbox)
+ found = found + 1
+ im = np.array(im)
+ while sampled_bbox:
+ idx = int(np.random.uniform(0, len(sampled_bbox)))
+ sample_bbox = sampled_bbox.pop(idx)
+ sample_bbox = clip_bbox(sample_bbox)
+
+ if 'gt_keypoint' in sample.keys():
+ keypoints = (sample['gt_keypoint'],
+ sample['keypoint_ignore'])
+ crop_bbox, crop_class, crop_score, gt_keypoints = \
+ filter_and_process(sample_bbox, gt_bbox, gt_class,
+ scores=gt_score,
+ keypoints=keypoints)
+ else:
+ crop_bbox, crop_class, crop_score = filter_and_process(
+ sample_bbox, gt_bbox, gt_class, scores=gt_score)
+ # sampling bbox according the bbox area
+ crop_bbox, crop_class, crop_score = bbox_area_sampling(
+ crop_bbox, crop_class, crop_score, self.target_size,
+ self.min_size)
+
+ if self.avoid_no_bbox:
+ if len(crop_bbox) < 1:
+ continue
+ xmin = int(sample_bbox[0] * image_width)
+ xmax = int(sample_bbox[2] * image_width)
+ ymin = int(sample_bbox[1] * image_height)
+ ymax = int(sample_bbox[3] * image_height)
+ im = im[ymin:ymax, xmin:xmax]
+ height, width = im.shape[:2]
+ crop_bbox[:, 0] *= width
+ crop_bbox[:, 1] *= height
+ crop_bbox[:, 2] *= width
+ crop_bbox[:, 3] *= height
+ sample['image'] = im
+ sample['gt_bbox'] = crop_bbox
+ sample['gt_class'] = crop_class
+ if 'gt_score' in sample:
+ sample['gt_score'] = crop_score
+ if 'gt_keypoint' in sample.keys():
+ sample['gt_keypoint'] = gt_keypoints[0]
+ sample['keypoint_ignore'] = gt_keypoints[1]
+ return sample
+ return sample
+
+
+@register_op
+class RandomCrop(BaseOperator):
+ """Random crop image and bboxes.
+ Args:
+ aspect_ratio (list): aspect ratio of cropped region.
+ in [min, max] format.
+ thresholds (list): iou thresholds for decide a valid bbox crop.
+ scaling (list): ratio between a cropped region and the original image.
+ in [min, max] format.
+ num_attempts (int): number of tries before giving up.
+ allow_no_crop (bool): allow return without actually cropping them.
+ cover_all_box (bool): ensure all bboxes are covered in the final crop.
+        is_mask_crop (bool): whether to crop the segmentation.
+ """
+
+ def __init__(self,
+ aspect_ratio=[.5, 2.],
+ thresholds=[.0, .1, .3, .5, .7, .9],
+ scaling=[.3, 1.],
+ num_attempts=50,
+ allow_no_crop=True,
+ cover_all_box=False,
+ is_mask_crop=False):
+ super(RandomCrop, self).__init__()
+ self.aspect_ratio = aspect_ratio
+ self.thresholds = thresholds
+ self.scaling = scaling
+ self.num_attempts = num_attempts
+ self.allow_no_crop = allow_no_crop
+ self.cover_all_box = cover_all_box
+ self.is_mask_crop = is_mask_crop
+
+ def crop_segms(self, segms, valid_ids, crop, height, width):
+ def _crop_poly(segm, crop):
+ xmin, ymin, xmax, ymax = crop
+ crop_coord = [xmin, ymin, xmin, ymax, xmax, ymax, xmax, ymin]
+ crop_p = np.array(crop_coord).reshape(4, 2)
+ crop_p = Polygon(crop_p)
+
+ crop_segm = list()
+ for poly in segm:
+ poly = np.array(poly).reshape(len(poly) // 2, 2)
+ polygon = Polygon(poly)
+ if not polygon.is_valid:
+ exterior = polygon.exterior
+ multi_lines = exterior.intersection(exterior)
+ polygons = shapely.ops.polygonize(multi_lines)
+ polygon = MultiPolygon(polygons)
+ multi_polygon = list()
+ if isinstance(polygon, MultiPolygon):
+ multi_polygon = copy.deepcopy(polygon)
+ else:
+ multi_polygon.append(copy.deepcopy(polygon))
+ for per_polygon in multi_polygon:
+ inter = per_polygon.intersection(crop_p)
+ if not inter:
+ continue
+ if isinstance(inter, (MultiPolygon, GeometryCollection)):
+ for part in inter:
+ if not isinstance(part, Polygon):
+ continue
+ part = np.squeeze(
+ np.array(part.exterior.coords[:-1]).reshape(1,
+ -1))
+ part[0::2] -= xmin
+ part[1::2] -= ymin
+ crop_segm.append(part.tolist())
+ elif isinstance(inter, Polygon):
+ crop_poly = np.squeeze(
+ np.array(inter.exterior.coords[:-1]).reshape(1, -1))
+ crop_poly[0::2] -= xmin
+ crop_poly[1::2] -= ymin
+ crop_segm.append(crop_poly.tolist())
+ else:
+ continue
+ return crop_segm
+
+ def _crop_rle(rle, crop, height, width):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, height, width)
+ mask = mask_util.decode(rle)
+ mask = mask[crop[1]:crop[3], crop[0]:crop[2]]
+ rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
+ return rle
+
+ crop_segms = []
+ for id in valid_ids:
+ segm = segms[id]
+ if is_poly(segm):
+ import copy
+ import shapely.ops
+ from shapely.geometry import Polygon, MultiPolygon, GeometryCollection
+ logging.getLogger("shapely").setLevel(logging.WARNING)
+ # Polygon format
+ crop_segms.append(_crop_poly(segm, crop))
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ crop_segms.append(_crop_rle(segm, crop, height, width))
+ return crop_segms
+
+ def apply(self, sample, context=None):
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) == 0:
+ return sample
+
+ h, w = sample['image'].shape[:2]
+ gt_bbox = sample['gt_bbox']
+
+ # NOTE Original method attempts to generate one candidate for each
+ # threshold then randomly sample one from the resulting list.
+ # Here a short circuit approach is taken, i.e., randomly choose a
+ # threshold and attempt to find a valid crop, and simply return the
+ # first one found.
+ # The probability is not exactly the same, kinda resembling the
+ # "Monty Hall" problem. Actually carrying out the attempts will affect
+ # observability (just like opening doors in the "Monty Hall" game).
+ thresholds = list(self.thresholds)
+ if self.allow_no_crop:
+ thresholds.append('no_crop')
+ np.random.shuffle(thresholds)
+
+ for thresh in thresholds:
+ if thresh == 'no_crop':
+ return sample
+
+ found = False
+ for i in range(self.num_attempts):
+ scale = np.random.uniform(*self.scaling)
+ if self.aspect_ratio is not None:
+ min_ar, max_ar = self.aspect_ratio
+ aspect_ratio = np.random.uniform(
+ max(min_ar, scale**2), min(max_ar, scale**-2))
+ h_scale = scale / np.sqrt(aspect_ratio)
+ w_scale = scale * np.sqrt(aspect_ratio)
+ else:
+ h_scale = np.random.uniform(*self.scaling)
+ w_scale = np.random.uniform(*self.scaling)
+ crop_h = h * h_scale
+ crop_w = w * w_scale
+ if self.aspect_ratio is None:
+ if crop_h / crop_w < 0.5 or crop_h / crop_w > 2.0:
+ continue
+
+ crop_h = int(crop_h)
+ crop_w = int(crop_w)
+ crop_y = np.random.randint(0, h - crop_h)
+ crop_x = np.random.randint(0, w - crop_w)
+ crop_box = [crop_x, crop_y, crop_x + crop_w, crop_y + crop_h]
+ iou = self._iou_matrix(
+ gt_bbox, np.array(
+ [crop_box], dtype=np.float32))
+ if iou.max() < thresh:
+ continue
+
+ if self.cover_all_box and iou.min() < thresh:
+ continue
+
+ cropped_box, valid_ids = self._crop_box_with_center_constraint(
+ gt_bbox, np.array(
+ crop_box, dtype=np.float32))
+ if valid_ids.size > 0:
+ found = True
+ break
+
+ if found:
+ if self.is_mask_crop and 'gt_poly' in sample and len(sample[
+ 'gt_poly']) > 0:
+ crop_polys = self.crop_segms(
+ sample['gt_poly'],
+ valid_ids,
+ np.array(
+ crop_box, dtype=np.int64),
+ h,
+ w)
+ if [] in crop_polys:
+ delete_id = list()
+ valid_polys = list()
+ for id, crop_poly in enumerate(crop_polys):
+ if crop_poly == []:
+ delete_id.append(id)
+ else:
+ valid_polys.append(crop_poly)
+ valid_ids = np.delete(valid_ids, delete_id)
+ if len(valid_polys) == 0:
+ return sample
+ sample['gt_poly'] = valid_polys
+ else:
+ sample['gt_poly'] = crop_polys
+
+ if 'gt_segm' in sample:
+ sample['gt_segm'] = self._crop_segm(sample['gt_segm'],
+ crop_box)
+ sample['gt_segm'] = np.take(
+ sample['gt_segm'], valid_ids, axis=0)
+
+ sample['image'] = self._crop_image(sample['image'], crop_box)
+ sample['gt_bbox'] = np.take(cropped_box, valid_ids, axis=0)
+ sample['gt_class'] = np.take(
+ sample['gt_class'], valid_ids, axis=0)
+ if 'gt_score' in sample:
+ sample['gt_score'] = np.take(
+ sample['gt_score'], valid_ids, axis=0)
+
+ if 'is_crowd' in sample:
+ sample['is_crowd'] = np.take(
+ sample['is_crowd'], valid_ids, axis=0)
+ return sample
+
+ return sample
+
+ def _iou_matrix(self, a, b):
+ tl_i = np.maximum(a[:, np.newaxis, :2], b[:, :2])
+ br_i = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])
+
+ area_i = np.prod(br_i - tl_i, axis=2) * (tl_i < br_i).all(axis=2)
+ area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
+ area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
+ area_o = (area_a[:, np.newaxis] + area_b - area_i)
+ return area_i / (area_o + 1e-10)
+
+ def _crop_box_with_center_constraint(self, box, crop):
+ cropped_box = box.copy()
+
+ cropped_box[:, :2] = np.maximum(box[:, :2], crop[:2])
+ cropped_box[:, 2:] = np.minimum(box[:, 2:], crop[2:])
+ cropped_box[:, :2] -= crop[:2]
+ cropped_box[:, 2:] -= crop[:2]
+
+ centers = (box[:, :2] + box[:, 2:]) / 2
+ valid = np.logical_and(crop[:2] <= centers,
+ centers < crop[2:]).all(axis=1)
+ valid = np.logical_and(
+ valid, (cropped_box[:, :2] < cropped_box[:, 2:]).all(axis=1))
+
+ return cropped_box, np.where(valid)[0]
+
+ def _crop_image(self, img, crop):
+ x1, y1, x2, y2 = crop
+ return img[y1:y2, x1:x2, :]
+
+ def _crop_segm(self, segm, crop):
+ x1, y1, x2, y2 = crop
+ return segm[:, y1:y2, x1:x2]
+
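+# Usage sketch (illustrative): the defaults implement an SSD-style random
+# crop; a candidate crop is accepted once some (or, with cover_all_box=True,
+# every) gt box reaches the randomly drawn IoU threshold:
+#
+#   op = RandomCrop(aspect_ratio=[.5, 2.], scaling=[.3, 1.], num_attempts=50)
+#   sample = op(sample)  # may come back unchanged ('no_crop' is allowed)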
+
+@register_op
+class RandomScaledCrop(BaseOperator):
+ """Resize image and bbox based on long side (with optional random scaling),
+ then crop or pad image to target size.
+ Args:
+ target_dim (int): target size.
+ scale_range (list): random scale range.
+ interp (int): interpolation method, default to `cv2.INTER_LINEAR`.
+ """
+
+ def __init__(self,
+ target_dim=512,
+ scale_range=[.1, 2.],
+ interp=cv2.INTER_LINEAR):
+ super(RandomScaledCrop, self).__init__()
+ self.target_dim = target_dim
+ self.scale_range = scale_range
+ self.interp = interp
+
+ def apply(self, sample, context=None):
+ img = sample['image']
+ h, w = img.shape[:2]
+ random_scale = np.random.uniform(*self.scale_range)
+ dim = self.target_dim
+ random_dim = int(dim * random_scale)
+ dim_max = max(h, w)
+ scale = random_dim / dim_max
+        # cv2.resize and the canvas slicing below both need integer sizes
+        resize_w = int(w * scale)
+        resize_h = int(h * scale)
+        offset_x = int(max(0, np.random.uniform(0., resize_w - dim)))
+        offset_y = int(max(0, np.random.uniform(0., resize_h - dim)))
+
+        img = cv2.resize(img, (resize_w, resize_h), interpolation=self.interp)
+        img = np.array(img)
+        canvas = np.zeros((dim, dim, 3), dtype=img.dtype)
+        canvas[:min(dim, resize_h), :min(dim, resize_w), :] = img[
+            offset_y:offset_y + dim, offset_x:offset_x + dim, :]
+        sample['image'] = canvas
+        sample['im_shape'] = np.asarray([resize_h, resize_w], dtype=np.float32)
+        scale_factor = sample['scale_factor']  # fixed typo: was 'sacle_factor'
+ sample['scale_factor'] = np.asarray(
+ [scale_factor[0] * scale, scale_factor[1] * scale],
+ dtype=np.float32)
+
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ scale_array = np.array([scale, scale] * 2, dtype=np.float32)
+ shift_array = np.array([offset_x, offset_y] * 2, dtype=np.float32)
+ boxes = sample['gt_bbox'] * scale_array - shift_array
+ boxes = np.clip(boxes, 0, dim - 1)
+ # filter boxes with no area
+ area = np.prod(boxes[..., 2:] - boxes[..., :2], axis=1)
+ valid = (area > 1.).nonzero()[0]
+ sample['gt_bbox'] = boxes[valid]
+ sample['gt_class'] = sample['gt_class'][valid]
+
+ return sample
+
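+# Usage sketch (illustrative): the long side is first resized to
+# target_dim * s for a random s in scale_range, then a fixed
+# target_dim x target_dim window is pasted onto a zero canvas:
+#
+#   op = RandomScaledCrop(target_dim=512, scale_range=[.5, 2.])
+#   sample = op(sample)  # sample['image'] is now 512 x 512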
+
+@register_op
+class Cutmix(BaseOperator):
+ def __init__(self, alpha=1.5, beta=1.5):
+ """
+ CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features, see https://arxiv.org/abs/1905.04899
+        Cutmix image and gt_bbox/gt_score
+        Args:
+            alpha (float): alpha parameter of beta distribution
+            beta (float): beta parameter of beta distribution
+ """
+ super(Cutmix, self).__init__()
+ self.alpha = alpha
+ self.beta = beta
+        if self.alpha <= 0.0:
+            raise ValueError("alpha should be positive in {}".format(self))
+        if self.beta <= 0.0:
+            raise ValueError("beta should be positive in {}".format(self))
+
+ def apply_image(self, img1, img2, factor):
+ """ _rand_bbox """
+ h = max(img1.shape[0], img2.shape[0])
+ w = max(img1.shape[1], img2.shape[1])
+ cut_rat = np.sqrt(1. - factor)
+
+ cut_w = np.int32(w * cut_rat)
+ cut_h = np.int32(h * cut_rat)
+
+ # uniform
+ cx = np.random.randint(w)
+ cy = np.random.randint(h)
+
+ bbx1 = np.clip(cx - cut_w // 2, 0, w - 1)
+ bby1 = np.clip(cy - cut_h // 2, 0, h - 1)
+ bbx2 = np.clip(cx + cut_w // 2, 0, w - 1)
+ bby2 = np.clip(cy + cut_h // 2, 0, h - 1)
+
+ img_1_pad = np.zeros((h, w, img1.shape[2]), 'float32')
+ img_1_pad[:img1.shape[0], :img1.shape[1], :] = \
+ img1.astype('float32')
+ img_2_pad = np.zeros((h, w, img2.shape[2]), 'float32')
+ img_2_pad[:img2.shape[0], :img2.shape[1], :] = \
+ img2.astype('float32')
+ img_1_pad[bby1:bby2, bbx1:bbx2, :] = img_2_pad[bby1:bby2, bbx1:bbx2, :]
+ return img_1_pad
+
+ def __call__(self, sample, context=None):
+ if not isinstance(sample, Sequence):
+ return sample
+
+ assert len(sample) == 2, 'cutmix need two samples'
+
+ factor = np.random.beta(self.alpha, self.beta)
+ factor = max(0.0, min(1.0, factor))
+ if factor >= 1.0:
+ return sample[0]
+ if factor <= 0.0:
+ return sample[1]
+ img1 = sample[0]['image']
+ img2 = sample[1]['image']
+ img = self.apply_image(img1, img2, factor)
+ gt_bbox1 = sample[0]['gt_bbox']
+ gt_bbox2 = sample[1]['gt_bbox']
+ gt_bbox = np.concatenate((gt_bbox1, gt_bbox2), axis=0)
+ gt_class1 = sample[0]['gt_class']
+ gt_class2 = sample[1]['gt_class']
+ gt_class = np.concatenate((gt_class1, gt_class2), axis=0)
+ gt_score1 = np.ones_like(sample[0]['gt_class'])
+ gt_score2 = np.ones_like(sample[1]['gt_class'])
+ gt_score = np.concatenate(
+ (gt_score1 * factor, gt_score2 * (1. - factor)), axis=0)
+ result = copy.deepcopy(sample[0])
+ result['image'] = img
+ result['gt_bbox'] = gt_bbox
+ result['gt_score'] = gt_score
+ result['gt_class'] = gt_class
+ if 'is_crowd' in sample[0]:
+ is_crowd1 = sample[0]['is_crowd']
+ is_crowd2 = sample[1]['is_crowd']
+ is_crowd = np.concatenate((is_crowd1, is_crowd2), axis=0)
+ result['is_crowd'] = is_crowd
+ if 'difficult' in sample[0]:
+ is_difficult1 = sample[0]['difficult']
+ is_difficult2 = sample[1]['difficult']
+ is_difficult = np.concatenate(
+ (is_difficult1, is_difficult2), axis=0)
+ result['difficult'] = is_difficult
+ return result
+
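+# Usage sketch (illustrative): Cutmix (like Mixup below) consumes a pair of
+# samples and returns a single mixed one; the reader is expected to feed the
+# pair as a sequence:
+#
+#   mixed = Cutmix(alpha=1.5, beta=1.5)([sample_a, sample_b])
+#   # mixed['gt_score'] weights sample_a's boxes by `factor` and
+#   # sample_b's boxes by 1 - factor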
+
+@register_op
+class Mixup(BaseOperator):
+ def __init__(self, alpha=1.5, beta=1.5):
+ """ Mixup image and gt_bbbox/gt_score
+ Args:
+ alpha (float): alpha parameter of beta distribute
+ beta (float): beta parameter of beta distribute
+ """
+ super(Mixup, self).__init__()
+ self.alpha = alpha
+ self.beta = beta
+        if self.alpha <= 0.0:
+            raise ValueError("alpha should be positive in {}".format(self))
+        if self.beta <= 0.0:
+            raise ValueError("beta should be positive in {}".format(self))
+
+ def apply_image(self, img1, img2, factor):
+ h = max(img1.shape[0], img2.shape[0])
+ w = max(img1.shape[1], img2.shape[1])
+ img = np.zeros((h, w, img1.shape[2]), 'float32')
+ img[:img1.shape[0], :img1.shape[1], :] = \
+ img1.astype('float32') * factor
+ img[:img2.shape[0], :img2.shape[1], :] += \
+ img2.astype('float32') * (1.0 - factor)
+ return img.astype('uint8')
+
+ def __call__(self, sample, context=None):
+ if not isinstance(sample, Sequence):
+ return sample
+
+ assert len(sample) == 2, 'mixup need two samples'
+
+ factor = np.random.beta(self.alpha, self.beta)
+ factor = max(0.0, min(1.0, factor))
+ if factor >= 1.0:
+ return sample[0]
+ if factor <= 0.0:
+ return sample[1]
+ im = self.apply_image(sample[0]['image'], sample[1]['image'], factor)
+ result = copy.deepcopy(sample[0])
+ result['image'] = im
+ # apply bbox and score
+ if 'gt_bbox' in sample[0]:
+ gt_bbox1 = sample[0]['gt_bbox']
+ gt_bbox2 = sample[1]['gt_bbox']
+ gt_bbox = np.concatenate((gt_bbox1, gt_bbox2), axis=0)
+ result['gt_bbox'] = gt_bbox
+ if 'gt_class' in sample[0]:
+ gt_class1 = sample[0]['gt_class']
+ gt_class2 = sample[1]['gt_class']
+ gt_class = np.concatenate((gt_class1, gt_class2), axis=0)
+ result['gt_class'] = gt_class
+
+ gt_score1 = np.ones_like(sample[0]['gt_class'])
+ gt_score2 = np.ones_like(sample[1]['gt_class'])
+ gt_score = np.concatenate(
+ (gt_score1 * factor, gt_score2 * (1. - factor)), axis=0)
+ result['gt_score'] = gt_score
+ if 'is_crowd' in sample[0]:
+ is_crowd1 = sample[0]['is_crowd']
+ is_crowd2 = sample[1]['is_crowd']
+ is_crowd = np.concatenate((is_crowd1, is_crowd2), axis=0)
+ result['is_crowd'] = is_crowd
+ if 'difficult' in sample[0]:
+ is_difficult1 = sample[0]['difficult']
+ is_difficult2 = sample[1]['difficult']
+ is_difficult = np.concatenate(
+ (is_difficult1, is_difficult2), axis=0)
+ result['difficult'] = is_difficult
+
+ if 'gt_ide' in sample[0]:
+ gt_ide1 = sample[0]['gt_ide']
+ gt_ide2 = sample[1]['gt_ide']
+ gt_ide = np.concatenate((gt_ide1, gt_ide2), axis=0)
+ result['gt_ide'] = gt_ide
+ return result
+
+
+@register_op
+class NormalizeBox(BaseOperator):
+ """Transform the bounding box's coornidates to [0,1]."""
+
+ def __init__(self):
+ super(NormalizeBox, self).__init__()
+
+ def apply(self, sample, context):
+ im = sample['image']
+ gt_bbox = sample['gt_bbox']
+ height, width, _ = im.shape
+ for i in range(gt_bbox.shape[0]):
+ gt_bbox[i][0] = gt_bbox[i][0] / width
+ gt_bbox[i][1] = gt_bbox[i][1] / height
+ gt_bbox[i][2] = gt_bbox[i][2] / width
+ gt_bbox[i][3] = gt_bbox[i][3] / height
+ sample['gt_bbox'] = gt_bbox
+
+ if 'gt_keypoint' in sample.keys():
+ gt_keypoint = sample['gt_keypoint']
+
+ for i in range(gt_keypoint.shape[1]):
+ if i % 2:
+ gt_keypoint[:, i] = gt_keypoint[:, i] / height
+ else:
+ gt_keypoint[:, i] = gt_keypoint[:, i] / width
+ sample['gt_keypoint'] = gt_keypoint
+
+ return sample
+
+
+@register_op
+class BboxXYXY2XYWH(BaseOperator):
+ """
+ Convert bbox XYXY format to XYWH format.
+ """
+
+ def __init__(self):
+ super(BboxXYXY2XYWH, self).__init__()
+
+ def apply(self, sample, context=None):
+ assert 'gt_bbox' in sample
+ bbox = sample['gt_bbox']
+ bbox[:, 2:4] = bbox[:, 2:4] - bbox[:, :2]
+ bbox[:, :2] = bbox[:, :2] + bbox[:, 2:4] / 2.
+ sample['gt_bbox'] = bbox
+ return sample
+
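+# Worked example (illustrative): despite the XYWH name, the output is in
+# [center_x, center_y, w, h] order. For [10, 20, 50, 80] (xyxy) the first
+# line gives w = 40, h = 60, and the second moves the top-left corner to the
+# box center, yielding [30, 50, 40, 60].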
+
+@register_op
+class PadBox(BaseOperator):
+ def __init__(self, num_max_boxes=50):
+ """
+ Pad zeros to bboxes if number of bboxes is less than num_max_boxes.
+ Args:
+ num_max_boxes (int): the max number of bboxes
+ """
+ self.num_max_boxes = num_max_boxes
+ super(PadBox, self).__init__()
+
+ def apply(self, sample, context=None):
+ assert 'gt_bbox' in sample
+ bbox = sample['gt_bbox']
+ gt_num = min(self.num_max_boxes, len(bbox))
+ num_max = self.num_max_boxes
+ # fields = context['fields'] if context else []
+ pad_bbox = np.zeros((num_max, 4), dtype=np.float32)
+ if gt_num > 0:
+ pad_bbox[:gt_num, :] = bbox[:gt_num, :]
+ sample['gt_bbox'] = pad_bbox
+ if 'gt_class' in sample:
+ pad_class = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_class[:gt_num] = sample['gt_class'][:gt_num, 0]
+ sample['gt_class'] = pad_class
+ if 'gt_score' in sample:
+ pad_score = np.zeros((num_max, ), dtype=np.float32)
+ if gt_num > 0:
+ pad_score[:gt_num] = sample['gt_score'][:gt_num, 0]
+ sample['gt_score'] = pad_score
+        # in training, ops such as ExpandImage expand gt_bbox and gt_class
+        # but not 'difficult', so judge by its own length
+ if 'difficult' in sample:
+ pad_diff = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_diff[:gt_num] = sample['difficult'][:gt_num, 0]
+ sample['difficult'] = pad_diff
+ if 'is_crowd' in sample:
+ pad_crowd = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_crowd[:gt_num] = sample['is_crowd'][:gt_num, 0]
+ sample['is_crowd'] = pad_crowd
+ if 'gt_ide' in sample:
+ pad_ide = np.zeros((num_max, ), dtype=np.int32)
+ if gt_num > 0:
+ pad_ide[:gt_num] = sample['gt_ide'][:gt_num, 0]
+ sample['gt_ide'] = pad_ide
+ return sample
+
+
+@register_op
+class DebugVisibleImage(BaseOperator):
+ """
+ In debug mode, visualize images according to `gt_box`.
+    (Currently only supported when the image is not cropped or flipped.)
+ """
+
+ def __init__(self, output_dir='output/debug', is_normalized=False):
+ super(DebugVisibleImage, self).__init__()
+ self.is_normalized = is_normalized
+ self.output_dir = output_dir
+ if not os.path.isdir(output_dir):
+ os.makedirs(output_dir)
+ if not isinstance(self.is_normalized, bool):
+ raise TypeError("{}: input type is invalid.".format(self))
+
+ def apply(self, sample, context=None):
+ image = Image.fromarray(sample['image'].astype(np.uint8))
+ out_file_name = '{:012d}.jpg'.format(sample['im_id'][0])
+ width = sample['w']
+ height = sample['h']
+ gt_bbox = sample['gt_bbox']
+ gt_class = sample['gt_class']
+ draw = ImageDraw.Draw(image)
+ for i in range(gt_bbox.shape[0]):
+ if self.is_normalized:
+ gt_bbox[i][0] = gt_bbox[i][0] * width
+ gt_bbox[i][1] = gt_bbox[i][1] * height
+ gt_bbox[i][2] = gt_bbox[i][2] * width
+ gt_bbox[i][3] = gt_bbox[i][3] * height
+
+ xmin, ymin, xmax, ymax = gt_bbox[i]
+ draw.line(
+ [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
+ (xmin, ymin)],
+ width=2,
+ fill='green')
+ # draw label
+ text = str(gt_class[i][0])
+ tw, th = draw.textsize(text)
+ draw.rectangle(
+ [(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill='green')
+ draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
+
+ if 'gt_keypoint' in sample.keys():
+ gt_keypoint = sample['gt_keypoint']
+ if self.is_normalized:
+ for i in range(gt_keypoint.shape[1]):
+ if i % 2:
+ gt_keypoint[:, i] = gt_keypoint[:, i] * height
+ else:
+ gt_keypoint[:, i] = gt_keypoint[:, i] * width
+ for i in range(gt_keypoint.shape[0]):
+ keypoint = gt_keypoint[i]
+ for j in range(int(keypoint.shape[0] / 2)):
+ x1 = round(keypoint[2 * j]).astype(np.int32)
+ y1 = round(keypoint[2 * j + 1]).astype(np.int32)
+ draw.ellipse(
+ (x1, y1, x1 + 5, y1 + 5), fill='green', outline='green')
+ save_path = os.path.join(self.output_dir, out_file_name)
+ image.save(save_path, quality=95)
+ return sample
+
+
+@register_op
+class Pad(BaseOperator):
+ def __init__(self,
+ size=None,
+ size_divisor=32,
+ pad_mode=0,
+ offsets=None,
+ fill_value=(127.5, 127.5, 127.5)):
+ """
+ Pad image to a specified size or multiple of size_divisor.
+ Args:
+            size (int, Sequence): image target size; if None, pad to a multiple of size_divisor. Default None.
+            size_divisor (int): size divisor, default 32
+            pad_mode (int): pad mode, currently only supports four modes [-1, 0, 1, 2]. If -1, use specified offsets;
+                if 0, only pad to right and bottom; if 1, pad according to center; if 2, only pad left and top.
+            offsets (list): [offset_x, offset_y], specify offsets while padding, only supported when pad_mode=-1
+            fill_value (tuple): RGB value of the padded area, default (127.5, 127.5, 127.5)
+ """
+ super(Pad, self).__init__()
+
+        if size is not None and not isinstance(size, (int, Sequence)):
+            raise TypeError(
+                "Type of size is invalid. Must be Integer, List or Tuple, "
+                "now is {}".format(type(size)))
+
+ if isinstance(size, int):
+ size = [size, size]
+
+ assert pad_mode in [
+ -1, 0, 1, 2
+ ], 'currently only supports four modes [-1, 0, 1, 2]'
+ if pad_mode == -1:
+ assert offsets, 'if pad_mode is -1, offsets should not be None'
+
+ self.size = size
+ self.size_divisor = size_divisor
+ self.pad_mode = pad_mode
+ self.fill_value = fill_value
+ self.offsets = offsets
+
+ def apply_segm(self, segms, offsets, im_size, size):
+ def _expand_poly(poly, x, y):
+ expanded_poly = np.array(poly)
+ expanded_poly[0::2] += x
+ expanded_poly[1::2] += y
+ return expanded_poly.tolist()
+
+ def _expand_rle(rle, x, y, height, width, h, w):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, height, width)
+ mask = mask_util.decode(rle)
+ expanded_mask = np.full((h, w), 0).astype(mask.dtype)
+ expanded_mask[y:y + height, x:x + width] = mask
+ rle = mask_util.encode(
+ np.array(
+ expanded_mask, order='F', dtype=np.uint8))
+ return rle
+
+ x, y = offsets
+ height, width = im_size
+ h, w = size
+ expanded_segms = []
+ for segm in segms:
+ if is_poly(segm):
+ # Polygon format
+ expanded_segms.append(
+ [_expand_poly(poly, x, y) for poly in segm])
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ expanded_segms.append(
+ _expand_rle(segm, x, y, height, width, h, w))
+ return expanded_segms
+
+ def apply_bbox(self, bbox, offsets):
+ return bbox + np.array(offsets * 2, dtype=np.float32)
+
+ def apply_keypoint(self, keypoints, offsets):
+ n = len(keypoints[0]) // 2
+ return keypoints + np.array(offsets * n, dtype=np.float32)
+
+ def apply_image(self, image, offsets, im_size, size):
+ x, y = offsets
+ im_h, im_w = im_size
+ h, w = size
+ canvas = np.ones((h, w, 3), dtype=np.float32)
+ canvas *= np.array(self.fill_value, dtype=np.float32)
+ canvas[y:y + im_h, x:x + im_w, :] = image.astype(np.float32)
+ return canvas
+
+ def apply(self, sample, context=None):
+ im = sample['image']
+ im_h, im_w = im.shape[:2]
+ if self.size:
+ h, w = self.size
+ assert (
+ im_h < h and im_w < w
+ ), '(h, w) of target size should be greater than (im_h, im_w)'
+ else:
+            h = int(np.ceil(im_h / self.size_divisor) * self.size_divisor)
+            w = int(np.ceil(im_w / self.size_divisor) * self.size_divisor)
+
+ if h == im_h and w == im_w:
+ return sample
+
+ if self.pad_mode == -1:
+ offset_x, offset_y = self.offsets
+ elif self.pad_mode == 0:
+ offset_y, offset_x = 0, 0
+ elif self.pad_mode == 1:
+ offset_y, offset_x = (h - im_h) // 2, (w - im_w) // 2
+ else:
+ offset_y, offset_x = h - im_h, w - im_w
+
+ offsets, im_size, size = [offset_x, offset_y], [im_h, im_w], [h, w]
+
+ sample['image'] = self.apply_image(im, offsets, im_size, size)
+
+ if self.pad_mode == 0:
+ return sample
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'], offsets)
+
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], offsets,
+ im_size, size)
+
+ if 'gt_keypoint' in sample and len(sample['gt_keypoint']) > 0:
+ sample['gt_keypoint'] = self.apply_keypoint(sample['gt_keypoint'],
+ offsets)
+
+ return sample
+
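+# Usage sketch (illustrative): with size=None the image is padded on the
+# right/bottom (pad_mode=0) up to the next multiple of size_divisor, the
+# usual setting before batching variable-sized images:
+#
+#   op = Pad(size=None, size_divisor=32)
+#   sample = op(sample)  # e.g. a 500x640 image becomes 512x640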
+
+@register_op
+class Poly2Mask(BaseOperator):
+ """
+ gt poly to mask annotations
+ """
+
+ def __init__(self):
+ super(Poly2Mask, self).__init__()
+ import pycocotools.mask as maskUtils
+ self.maskutils = maskUtils
+
+ def _poly2mask(self, mask_ann, img_h, img_w):
+ if isinstance(mask_ann, list):
+ # polygon -- a single object might consist of multiple parts
+ # we merge all parts into one mask rle code
+ rles = self.maskutils.frPyObjects(mask_ann, img_h, img_w)
+ rle = self.maskutils.merge(rles)
+ elif isinstance(mask_ann['counts'], list):
+ # uncompressed RLE
+ rle = self.maskutils.frPyObjects(mask_ann, img_h, img_w)
+ else:
+ # rle
+ rle = mask_ann
+ mask = self.maskutils.decode(rle)
+ return mask
+
+ def apply(self, sample, context=None):
+ assert 'gt_poly' in sample
+ im_h = sample['h']
+ im_w = sample['w']
+ masks = [
+ self._poly2mask(gt_poly, im_h, im_w)
+ for gt_poly in sample['gt_poly']
+ ]
+ sample['gt_segm'] = np.asarray(masks).astype(np.uint8)
+ return sample
+
+
+@register_op
+class Rbox2Poly(BaseOperator):
+ """
+ Convert rbbox format to poly format.
+ """
+
+ def __init__(self):
+ super(Rbox2Poly, self).__init__()
+
+ def apply(self, sample, context=None):
+ assert 'gt_rbox' in sample
+ assert sample['gt_rbox'].shape[1] == 5
+ rrects = sample['gt_rbox']
+ x_ctr = rrects[:, 0]
+ y_ctr = rrects[:, 1]
+ width = rrects[:, 2]
+ height = rrects[:, 3]
+ x1 = x_ctr - width / 2.0
+ y1 = y_ctr - height / 2.0
+ x2 = x_ctr + width / 2.0
+ y2 = y_ctr + height / 2.0
+ sample['gt_bbox'] = np.stack([x1, y1, x2, y2], axis=1)
+ polys = bbox_utils.rbox2poly_np(rrects)
+ sample['gt_rbox2poly'] = polys
+ return sample
+
+
+@register_op
+class AugmentHSV(BaseOperator):
+ def __init__(self, fraction=0.50, is_bgr=True):
+ """
+ Augment the SV channel of image data.
+ Args:
+ fraction (float): the fraction for augment. Default: 0.5.
+ is_bgr (bool): whether the image is BGR mode. Default: True.
+ """
+ super(AugmentHSV, self).__init__()
+ self.fraction = fraction
+ self.is_bgr = is_bgr
+
+ def apply(self, sample, context=None):
+ img = sample['image']
+ if self.is_bgr:
+ img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
+ else:
+ img_hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
+ S = img_hsv[:, :, 1].astype(np.float32)
+ V = img_hsv[:, :, 2].astype(np.float32)
+
+ a = (random.random() * 2 - 1) * self.fraction + 1
+ S *= a
+ if a > 1:
+ np.clip(S, a_min=0, a_max=255, out=S)
+
+ a = (random.random() * 2 - 1) * self.fraction + 1
+ V *= a
+ if a > 1:
+ np.clip(V, a_min=0, a_max=255, out=V)
+
+ img_hsv[:, :, 1] = S.astype(np.uint8)
+ img_hsv[:, :, 2] = V.astype(np.uint8)
+ if self.is_bgr:
+ cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img)
+ else:
+ cv2.cvtColor(img_hsv, cv2.COLOR_HSV2RGB, dst=img)
+
+ sample['image'] = img
+ return sample
+
+
+@register_op
+class Norm2PixelBbox(BaseOperator):
+ """
+    Transform the bounding box's coordinates from [0, 1] to pixels.
+ """
+
+ def __init__(self):
+ super(Norm2PixelBbox, self).__init__()
+
+ def apply(self, sample, context=None):
+ assert 'gt_bbox' in sample
+ bbox = sample['gt_bbox']
+ height, width = sample['image'].shape[:2]
+ bbox[:, 0::2] = bbox[:, 0::2] * width
+ bbox[:, 1::2] = bbox[:, 1::2] * height
+ sample['gt_bbox'] = bbox
+ return sample
+
+
+@register_op
+class BboxCXCYWH2XYXY(BaseOperator):
+ """
+ Convert bbox CXCYWH format to XYXY format.
+ [center_x, center_y, width, height] -> [x0, y0, x1, y1]
+ """
+
+ def __init__(self):
+ super(BboxCXCYWH2XYXY, self).__init__()
+
+ def apply(self, sample, context=None):
+ assert 'gt_bbox' in sample
+ bbox0 = sample['gt_bbox']
+ bbox = bbox0.copy()
+
+ bbox[:, :2] = bbox0[:, :2] - bbox0[:, 2:4] / 2.
+ bbox[:, 2:4] = bbox0[:, :2] + bbox0[:, 2:4] / 2.
+ sample['gt_bbox'] = bbox
+ return sample
+
+
+@register_op
+class RandomResizeCrop(BaseOperator):
+ """Random resize and crop image and bboxes.
+ Args:
+ resizes (list): resize image to one of resizes. if keep_ratio is True and mode is
+ 'long', resize the image's long side to the maximum of target_size, if keep_ratio is
+ True and mode is 'short', resize the image's short side to the minimum of target_size.
+ cropsizes (list): crop sizes after resize, [(min_crop_1, max_crop_1), ...]
+        mode (str): resize mode, `long` or `short`. See `resizes` for details.
+ prob (float): probability of this op.
+        keep_ratio (bool): whether to keep the aspect ratio, default True
+ interp (int): the interpolation method
+ thresholds (list): iou thresholds for decide a valid bbox crop.
+ num_attempts (int): number of tries before giving up.
+ allow_no_crop (bool): allow return without actually cropping them.
+ cover_all_box (bool): ensure all bboxes are covered in the final crop.
+        is_mask_crop (bool): whether to crop the segmentation.
+ """
+
+ def __init__(
+ self,
+ resizes,
+ cropsizes,
+ prob=0.5,
+ mode='short',
+ keep_ratio=True,
+ interp=cv2.INTER_LINEAR,
+ num_attempts=3,
+ cover_all_box=False,
+ allow_no_crop=False,
+ thresholds=[0.3, 0.5, 0.7],
+ is_mask_crop=False, ):
+ super(RandomResizeCrop, self).__init__()
+
+ self.resizes = resizes
+ self.cropsizes = cropsizes
+ self.prob = prob
+ self.mode = mode
+
+ self.resizer = Resize(0, keep_ratio=keep_ratio, interp=interp)
+ self.croper = RandomCrop(
+ num_attempts=num_attempts,
+ cover_all_box=cover_all_box,
+ thresholds=thresholds,
+ allow_no_crop=allow_no_crop,
+ is_mask_crop=is_mask_crop)
+
+ def _format_size(self, size):
+ if isinstance(size, Integral):
+ size = (size, size)
+ return size
+
+ def apply(self, sample, context=None):
+ if random.random() < self.prob:
+ _resize = self._format_size(random.choice(self.resizes))
+ _cropsize = self._format_size(random.choice(self.cropsizes))
+ sample = self._resize(
+ self.resizer,
+ sample,
+ size=_resize,
+ mode=self.mode,
+ context=context)
+ sample = self._random_crop(
+ self.croper, sample, size=_cropsize, context=context)
+ return sample
+
+ @staticmethod
+ def _random_crop(croper, sample, size, context=None):
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) == 0:
+ return sample
+
+        # reuse the RandomCrop instance's helpers (_iou_matrix, etc.)
+        self = croper
+ h, w = sample['image'].shape[:2]
+ gt_bbox = sample['gt_bbox']
+ cropsize = size
+ min_crop = min(cropsize)
+ max_crop = max(cropsize)
+
+ thresholds = list(self.thresholds)
+ np.random.shuffle(thresholds)
+
+ for thresh in thresholds:
+ found = False
+ for _ in range(self.num_attempts):
+
+ crop_h = random.randint(min_crop, min(h, max_crop))
+ crop_w = random.randint(min_crop, min(w, max_crop))
+
+ crop_y = random.randint(0, h - crop_h)
+ crop_x = random.randint(0, w - crop_w)
+
+ crop_box = [crop_x, crop_y, crop_x + crop_w, crop_y + crop_h]
+ iou = self._iou_matrix(
+ gt_bbox, np.array(
+ [crop_box], dtype=np.float32))
+ if iou.max() < thresh:
+ continue
+
+ if self.cover_all_box and iou.min() < thresh:
+ continue
+
+ cropped_box, valid_ids = self._crop_box_with_center_constraint(
+ gt_bbox, np.array(
+ crop_box, dtype=np.float32))
+ if valid_ids.size > 0:
+ found = True
+ break
+
+ if found:
+ if self.is_mask_crop and 'gt_poly' in sample and len(sample[
+ 'gt_poly']) > 0:
+ crop_polys = self.crop_segms(
+ sample['gt_poly'],
+ valid_ids,
+ np.array(
+ crop_box, dtype=np.int64),
+ h,
+ w)
+ if [] in crop_polys:
+ delete_id = list()
+ valid_polys = list()
+ for id, crop_poly in enumerate(crop_polys):
+ if crop_poly == []:
+ delete_id.append(id)
+ else:
+ valid_polys.append(crop_poly)
+ valid_ids = np.delete(valid_ids, delete_id)
+ if len(valid_polys) == 0:
+ return sample
+ sample['gt_poly'] = valid_polys
+ else:
+ sample['gt_poly'] = crop_polys
+
+ if 'gt_segm' in sample:
+ sample['gt_segm'] = self._crop_segm(sample['gt_segm'],
+ crop_box)
+ sample['gt_segm'] = np.take(
+ sample['gt_segm'], valid_ids, axis=0)
+
+ sample['image'] = self._crop_image(sample['image'], crop_box)
+ sample['gt_bbox'] = np.take(cropped_box, valid_ids, axis=0)
+ sample['gt_class'] = np.take(
+ sample['gt_class'], valid_ids, axis=0)
+ if 'gt_score' in sample:
+ sample['gt_score'] = np.take(
+ sample['gt_score'], valid_ids, axis=0)
+
+ if 'is_crowd' in sample:
+ sample['is_crowd'] = np.take(
+ sample['is_crowd'], valid_ids, axis=0)
+ return sample
+
+ return sample
+
+ @staticmethod
+ def _resize(resizer, sample, size, mode='short', context=None):
+        # reuse the Resize instance's helpers (apply_image, apply_bbox, ...)
+        self = resizer
+ im = sample['image']
+ target_size = size
+
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image type is not numpy.".format(self))
+ if len(im.shape) != 3:
+ raise ImageError('{}: image is not 3-dimensional.'.format(self))
+
+ # apply image
+ im_shape = im.shape
+ if self.keep_ratio:
+
+ im_size_min = np.min(im_shape[0:2])
+ im_size_max = np.max(im_shape[0:2])
+
+ target_size_min = np.min(target_size)
+ target_size_max = np.max(target_size)
+
+ if mode == 'long':
+ im_scale = min(target_size_min / im_size_min,
+ target_size_max / im_size_max)
+ else:
+ im_scale = max(target_size_min / im_size_min,
+ target_size_max / im_size_max)
+
+ resize_h = im_scale * float(im_shape[0])
+ resize_w = im_scale * float(im_shape[1])
+
+ im_scale_x = im_scale
+ im_scale_y = im_scale
+ else:
+ resize_h, resize_w = target_size
+ im_scale_y = resize_h / im_shape[0]
+ im_scale_x = resize_w / im_shape[1]
+
+ im = self.apply_image(sample['image'], [im_scale_x, im_scale_y])
+ sample['image'] = im
+ sample['im_shape'] = np.asarray([resize_h, resize_w], dtype=np.float32)
+ if 'scale_factor' in sample:
+ scale_factor = sample['scale_factor']
+ sample['scale_factor'] = np.asarray(
+ [scale_factor[0] * im_scale_y, scale_factor[1] * im_scale_x],
+ dtype=np.float32)
+ else:
+ sample['scale_factor'] = np.asarray(
+ [im_scale_y, im_scale_x], dtype=np.float32)
+
+ # apply bbox
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'],
+ [im_scale_x, im_scale_y],
+ [resize_w, resize_h])
+
+ # apply rbox
+ if 'gt_rbox2poly' in sample:
+ if np.array(sample['gt_rbox2poly']).shape[1] != 8:
+ logger.warn(
+ "gt_rbox2poly's length shoule be 8, but actually is {}".
+ format(len(sample['gt_rbox2poly'])))
+ sample['gt_rbox2poly'] = self.apply_bbox(sample['gt_rbox2poly'],
+ [im_scale_x, im_scale_y],
+ [resize_w, resize_h])
+
+ # apply polygon
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], im_shape[:2],
+ [im_scale_x, im_scale_y])
+
+ # apply semantic
+        if 'semantic' in sample and sample['semantic'] is not None:
+ semantic = sample['semantic']
+ semantic = cv2.resize(
+ semantic.astype('float32'),
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ semantic = np.asarray(semantic).astype('int32')
+ semantic = np.expand_dims(semantic, 0)
+ sample['semantic'] = semantic
+
+ # apply gt_segm
+ if 'gt_segm' in sample and len(sample['gt_segm']) > 0:
+ masks = [
+ cv2.resize(
+ gt_segm,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=cv2.INTER_NEAREST)
+ for gt_segm in sample['gt_segm']
+ ]
+ sample['gt_segm'] = np.asarray(masks).astype(np.uint8)
+
+ return sample
+
+
+@register_op
+class RandomSelect(BaseOperator):
+ """
+ Randomly choose a transformation between transforms1 and transforms2,
+ and the probability of choosing transforms1 is p.
+
+ The code is based on https://github.com/facebookresearch/detr/blob/main/datasets/transforms.py
+
+ """
+
+ def __init__(self, transforms1, transforms2, p=0.5):
+ super(RandomSelect, self).__init__()
+ self.transforms1 = Compose(transforms1)
+ self.transforms2 = Compose(transforms2)
+ self.p = p
+
+ def apply(self, sample, context=None):
+ if random.random() < self.p:
+ return self.transforms1(sample)
+ return self.transforms2(sample)
+
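+# Usage sketch (illustrative; assuming, as in the yaml configs, that each
+# branch is a list of {op_name: kwargs} dicts which Compose instantiates),
+# DETR-style scale augmentation:
+#
+#   op = RandomSelect(
+#       transforms1=[{'RandomShortSideResize': {'short_side_sizes': [480, 512]}}],
+#       transforms2=[{'RandomShortSideResize': {'short_side_sizes': [400, 500]}},
+#                    {'RandomSizeCrop': {'min_size': 384, 'max_size': 600}}],
+#       p=0.5)
+#   sample = op(sample)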
+
+@register_op
+class RandomShortSideResize(BaseOperator):
+ def __init__(self,
+ short_side_sizes,
+ max_size=None,
+ interp=cv2.INTER_LINEAR,
+ random_interp=False):
+ """
+        Resize the image randomly according to the short side. If max_size is not None,
+        the long side is capped at max_size. The whole process keeps the aspect ratio.
+ Args:
+ short_side_sizes (list|tuple): Image target short side size.
+ max_size (int): The size of the longest side of image after resize.
+ interp (int): The interpolation method.
+            random_interp (bool): Whether to randomly select the interpolation method.
+ """
+ super(RandomShortSideResize, self).__init__()
+
+ assert isinstance(short_side_sizes,
+ Sequence), "short_side_sizes must be List or Tuple"
+
+ self.short_side_sizes = short_side_sizes
+ self.max_size = max_size
+ self.interp = interp
+ self.random_interp = random_interp
+ self.interps = [
+ cv2.INTER_NEAREST,
+ cv2.INTER_LINEAR,
+ cv2.INTER_AREA,
+ cv2.INTER_CUBIC,
+ cv2.INTER_LANCZOS4,
+ ]
+
+ def get_size_with_aspect_ratio(self, image_shape, size, max_size=None):
+ h, w = image_shape
+ if max_size is not None:
+ min_original_size = float(min((w, h)))
+ max_original_size = float(max((w, h)))
+ if max_original_size / min_original_size * size > max_size:
+ size = int(
+ round(max_size * min_original_size / max_original_size))
+
+ if (w <= h and w == size) or (h <= w and h == size):
+ return (w, h)
+
+ if w < h:
+ ow = size
+ oh = int(size * h / w)
+ else:
+ oh = size
+ ow = int(size * w / h)
+
+ return (ow, oh)
+
+ def resize(self,
+ sample,
+ target_size,
+ max_size=None,
+ interp=cv2.INTER_LINEAR):
+ im = sample['image']
+ if not isinstance(im, np.ndarray):
+ raise TypeError("{}: image type is not numpy.".format(self))
+ if len(im.shape) != 3:
+ raise ImageError('{}: image is not 3-dimensional.'.format(self))
+
+ target_size = self.get_size_with_aspect_ratio(im.shape[:2], target_size,
+ max_size)
+ im_scale_y, im_scale_x = target_size[1] / im.shape[0], target_size[
+ 0] / im.shape[1]
+
+ sample['image'] = cv2.resize(im, target_size, interpolation=interp)
+ sample['im_shape'] = np.asarray(target_size[::-1], dtype=np.float32)
+ if 'scale_factor' in sample:
+ scale_factor = sample['scale_factor']
+ sample['scale_factor'] = np.asarray(
+ [scale_factor[0] * im_scale_y, scale_factor[1] * im_scale_x],
+ dtype=np.float32)
+ else:
+ sample['scale_factor'] = np.asarray(
+ [im_scale_y, im_scale_x], dtype=np.float32)
+
+ # apply bbox
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(
+ sample['gt_bbox'], [im_scale_x, im_scale_y], target_size)
+ # apply polygon
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], im.shape[:2],
+ [im_scale_x, im_scale_y])
+ # apply semantic
+        if 'semantic' in sample and sample['semantic'] is not None:
+ semantic = sample['semantic']
+ semantic = cv2.resize(
+ semantic.astype('float32'),
+ target_size,
+ interpolation=self.interp)
+ semantic = np.asarray(semantic).astype('int32')
+ semantic = np.expand_dims(semantic, 0)
+ sample['semantic'] = semantic
+ # apply gt_segm
+ if 'gt_segm' in sample and len(sample['gt_segm']) > 0:
+ masks = [
+ cv2.resize(
+ gt_segm, target_size, interpolation=cv2.INTER_NEAREST)
+ for gt_segm in sample['gt_segm']
+ ]
+ sample['gt_segm'] = np.asarray(masks).astype(np.uint8)
+ return sample
+
+ def apply_bbox(self, bbox, scale, size):
+ im_scale_x, im_scale_y = scale
+ resize_w, resize_h = size
+ bbox[:, 0::2] *= im_scale_x
+ bbox[:, 1::2] *= im_scale_y
+ bbox[:, 0::2] = np.clip(bbox[:, 0::2], 0, resize_w)
+ bbox[:, 1::2] = np.clip(bbox[:, 1::2], 0, resize_h)
+ return bbox.astype('float32')
+
+ def apply_segm(self, segms, im_size, scale):
+ def _resize_poly(poly, im_scale_x, im_scale_y):
+ resized_poly = np.array(poly).astype('float32')
+ resized_poly[0::2] *= im_scale_x
+ resized_poly[1::2] *= im_scale_y
+ return resized_poly.tolist()
+
+ def _resize_rle(rle, im_h, im_w, im_scale_x, im_scale_y):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, im_h, im_w)
+
+ mask = mask_util.decode(rle)
+ mask = cv2.resize(
+ mask,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
+ return rle
+
+ im_h, im_w = im_size
+ im_scale_x, im_scale_y = scale
+ resized_segms = []
+ for segm in segms:
+ if is_poly(segm):
+ # Polygon format
+ resized_segms.append([
+ _resize_poly(poly, im_scale_x, im_scale_y) for poly in segm
+ ])
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ resized_segms.append(
+ _resize_rle(segm, im_h, im_w, im_scale_x, im_scale_y))
+
+ return resized_segms
+
+ def apply(self, sample, context=None):
+ target_size = random.choice(self.short_side_sizes)
+ interp = random.choice(
+ self.interps) if self.random_interp else self.interp
+
+ return self.resize(sample, target_size, self.max_size, interp)
+
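+# Usage sketch (illustrative): resize so the short side matches one of the
+# given sizes while the long side never exceeds max_size, keeping the aspect
+# ratio:
+#
+#   op = RandomShortSideResize(short_side_sizes=[480, 512, 544, 576, 608],
+#                              max_size=1333, random_interp=True)
+#   sample = op(sample)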
+
+@register_op
+class RandomSizeCrop(BaseOperator):
+ """
+    Randomly crop the image, with crop height and width drawn between `min_size` and `max_size`.
+ """
+
+ def __init__(self, min_size, max_size):
+ super(RandomSizeCrop, self).__init__()
+ self.min_size = min_size
+ self.max_size = max_size
+
+ from paddle.vision.transforms.functional import crop as paddle_crop
+ self.paddle_crop = paddle_crop
+
+ @staticmethod
+ def get_crop_params(img_shape, output_size):
+ """Get parameters for ``crop`` for a random crop.
+ Args:
+ img_shape (list|tuple): Image's height and width.
+ output_size (list|tuple): Expected output size of the crop.
+ Returns:
+ tuple: params (i, j, h, w) to be passed to ``crop`` for random crop.
+ """
+ h, w = img_shape
+ th, tw = output_size
+
+ if h + 1 < th or w + 1 < tw:
+ raise ValueError(
+ "Required crop size {} is larger then input image size {}".
+ format((th, tw), (h, w)))
+
+ if w == tw and h == th:
+ return 0, 0, h, w
+
+ i = random.randint(0, h - th + 1)
+ j = random.randint(0, w - tw + 1)
+ return i, j, th, tw
+
+ def crop(self, sample, region):
+ image_shape = sample['image'].shape[:2]
+ sample['image'] = self.paddle_crop(sample['image'], *region)
+
+ keep_index = None
+ # apply bbox
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) > 0:
+ sample['gt_bbox'] = self.apply_bbox(sample['gt_bbox'], region)
+ bbox = sample['gt_bbox'].reshape([-1, 2, 2])
+ area = (bbox[:, 1, :] - bbox[:, 0, :]).prod(axis=1)
+ keep_index = np.where(area > 0)[0]
+ sample['gt_bbox'] = sample['gt_bbox'][keep_index] if len(
+ keep_index) > 0 else np.zeros(
+ [0, 4], dtype=np.float32)
+ sample['gt_class'] = sample['gt_class'][keep_index] if len(
+ keep_index) > 0 else np.zeros(
+ [0, 1], dtype=np.float32)
+ if 'gt_score' in sample:
+ sample['gt_score'] = sample['gt_score'][keep_index] if len(
+ keep_index) > 0 else np.zeros(
+ [0, 1], dtype=np.float32)
+ if 'is_crowd' in sample:
+ sample['is_crowd'] = sample['is_crowd'][keep_index] if len(
+ keep_index) > 0 else np.zeros(
+ [0, 1], dtype=np.float32)
+
+ # apply polygon
+ if 'gt_poly' in sample and len(sample['gt_poly']) > 0:
+ sample['gt_poly'] = self.apply_segm(sample['gt_poly'], region,
+ image_shape)
+ if keep_index is not None:
+ sample['gt_poly'] = sample['gt_poly'][keep_index]
+ # apply gt_segm
+ if 'gt_segm' in sample and len(sample['gt_segm']) > 0:
+ i, j, h, w = region
+ sample['gt_segm'] = sample['gt_segm'][:, i:i + h, j:j + w]
+ if keep_index is not None:
+ sample['gt_segm'] = sample['gt_segm'][keep_index]
+
+ return sample
+
+ def apply_bbox(self, bbox, region):
+ i, j, h, w = region
+ region_size = np.asarray([w, h])
+ crop_bbox = bbox - np.asarray([j, i, j, i])
+ crop_bbox = np.minimum(crop_bbox.reshape([-1, 2, 2]), region_size)
+ crop_bbox = crop_bbox.clip(min=0)
+ return crop_bbox.reshape([-1, 4]).astype('float32')
+
+ def apply_segm(self, segms, region, image_shape):
+ def _crop_poly(segm, crop):
+ xmin, ymin, xmax, ymax = crop
+ crop_coord = [xmin, ymin, xmin, ymax, xmax, ymax, xmax, ymin]
+ crop_p = np.array(crop_coord).reshape(4, 2)
+ crop_p = Polygon(crop_p)
+
+ crop_segm = list()
+ for poly in segm:
+ poly = np.array(poly).reshape(len(poly) // 2, 2)
+ polygon = Polygon(poly)
+ if not polygon.is_valid:
+ exterior = polygon.exterior
+ multi_lines = exterior.intersection(exterior)
+ polygons = shapely.ops.polygonize(multi_lines)
+ polygon = MultiPolygon(polygons)
+ multi_polygon = list()
+ if isinstance(polygon, MultiPolygon):
+ multi_polygon = copy.deepcopy(polygon)
+ else:
+ multi_polygon.append(copy.deepcopy(polygon))
+ for per_polygon in multi_polygon:
+ inter = per_polygon.intersection(crop_p)
+ if not inter:
+ continue
+ if isinstance(inter, (MultiPolygon, GeometryCollection)):
+ for part in inter:
+ if not isinstance(part, Polygon):
+ continue
+ part = np.squeeze(
+ np.array(part.exterior.coords[:-1]).reshape(1,
+ -1))
+ part[0::2] -= xmin
+ part[1::2] -= ymin
+ crop_segm.append(part.tolist())
+ elif isinstance(inter, Polygon):
+ crop_poly = np.squeeze(
+ np.array(inter.exterior.coords[:-1]).reshape(1, -1))
+ crop_poly[0::2] -= xmin
+ crop_poly[1::2] -= ymin
+ crop_segm.append(crop_poly.tolist())
+ else:
+ continue
+ return crop_segm
+
+ def _crop_rle(rle, crop, height, width):
+ if 'counts' in rle and type(rle['counts']) == list:
+ rle = mask_util.frPyObjects(rle, height, width)
+ mask = mask_util.decode(rle)
+ mask = mask[crop[1]:crop[3], crop[0]:crop[2]]
+ rle = mask_util.encode(np.array(mask, order='F', dtype=np.uint8))
+ return rle
+
+ i, j, h, w = region
+ crop = [j, i, j + w, i + h]
+ height, width = image_shape
+ crop_segms = []
+ for segm in segms:
+ if is_poly(segm):
+ import copy
+ import shapely.ops
+ from shapely.geometry import Polygon, MultiPolygon, GeometryCollection
+ # Polygon format
+ crop_segms.append(_crop_poly(segm, crop))
+ else:
+ # RLE format
+ import pycocotools.mask as mask_util
+ crop_segms.append(_crop_rle(segm, crop, height, width))
+ return crop_segms
+
+ def apply(self, sample, context=None):
+ h = random.randint(self.min_size,
+ min(sample['image'].shape[0], self.max_size))
+ w = random.randint(self.min_size,
+ min(sample['image'].shape[1], self.max_size))
+
+ region = self.get_crop_params(sample['image'].shape[:2], [h, w])
+ return self.crop(sample, region)
+
+
+@register_op
+class WarpAffine(BaseOperator):
+ def __init__(self,
+ keep_res=False,
+ pad=31,
+ input_h=512,
+ input_w=512,
+ scale=0.4,
+ shift=0.1):
+ """WarpAffine
+ Warp affine the image
+
+ The code is based on https://github.com/xingyizhou/CenterNet/blob/master/src/lib/datasets/sample/ctdet.py
+
+
+ """
+ super(WarpAffine, self).__init__()
+ self.keep_res = keep_res
+ self.pad = pad
+ self.input_h = input_h
+ self.input_w = input_w
+ self.scale = scale
+ self.shift = shift
+
+ def apply(self, sample, context=None):
+ img = sample['image']
+ img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) == 0:
+ return sample
+
+ h, w = img.shape[:2]
+
+ if self.keep_res:
+ input_h = (h | self.pad) + 1
+ input_w = (w | self.pad) + 1
+ s = np.array([input_w, input_h], dtype=np.float32)
+ c = np.array([w // 2, h // 2], dtype=np.float32)
+
+ else:
+ s = max(h, w) * 1.0
+ input_h, input_w = self.input_h, self.input_w
+ c = np.array([w / 2., h / 2.], dtype=np.float32)
+
+ trans_input = get_affine_transform(c, s, 0, [input_w, input_h])
+ img = cv2.resize(img, (w, h))
+ inp = cv2.warpAffine(
+ img, trans_input, (input_w, input_h), flags=cv2.INTER_LINEAR)
+ sample['image'] = inp
+ return sample
+
+
+@register_op
+class FlipWarpAffine(BaseOperator):
+ def __init__(self,
+ keep_res=False,
+ pad=31,
+ input_h=512,
+ input_w=512,
+ not_rand_crop=False,
+ scale=0.4,
+ shift=0.1,
+ flip=0.5,
+ is_scale=True,
+ use_random=True):
+ """FlipWarpAffine
+ 1. Random Crop
+        2. Flip the image horizontally
+ 3. Warp affine the image
+ """
+ super(FlipWarpAffine, self).__init__()
+ self.keep_res = keep_res
+ self.pad = pad
+ self.input_h = input_h
+ self.input_w = input_w
+ self.not_rand_crop = not_rand_crop
+ self.scale = scale
+ self.shift = shift
+ self.flip = flip
+ self.is_scale = is_scale
+ self.use_random = use_random
+
+ def apply(self, sample, context=None):
+ img = sample['image']
+ img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
+ if 'gt_bbox' in sample and len(sample['gt_bbox']) == 0:
+ return sample
+
+ h, w = img.shape[:2]
+
+ if self.keep_res:
+ input_h = (h | self.pad) + 1
+ input_w = (w | self.pad) + 1
+ s = np.array([input_w, input_h], dtype=np.float32)
+ c = np.array([w // 2, h // 2], dtype=np.float32)
+
+ else:
+ s = max(h, w) * 1.0
+ input_h, input_w = self.input_h, self.input_w
+ c = np.array([w / 2., h / 2.], dtype=np.float32)
+
+ if self.use_random:
+ gt_bbox = sample['gt_bbox']
+ if not self.not_rand_crop:
+ s = s * np.random.choice(np.arange(0.6, 1.4, 0.1))
+ w_border = get_border(128, w)
+ h_border = get_border(128, h)
+ c[0] = np.random.randint(low=w_border, high=w - w_border)
+ c[1] = np.random.randint(low=h_border, high=h - h_border)
+ else:
+ sf = self.scale
+ cf = self.shift
+ c[0] += s * np.clip(np.random.randn() * cf, -2 * cf, 2 * cf)
+ c[1] += s * np.clip(np.random.randn() * cf, -2 * cf, 2 * cf)
+ s = s * np.clip(np.random.randn() * sf + 1, 1 - sf, 1 + sf)
+
+ if np.random.random() < self.flip:
+ img = img[:, ::-1, :]
+ c[0] = w - c[0] - 1
+ oldx1 = gt_bbox[:, 0].copy()
+ oldx2 = gt_bbox[:, 2].copy()
+ gt_bbox[:, 0] = w - oldx2 - 1
+ gt_bbox[:, 2] = w - oldx1 - 1
+ sample['gt_bbox'] = gt_bbox
+
+ trans_input = get_affine_transform(c, s, 0, [input_w, input_h])
+ if not self.use_random:
+ img = cv2.resize(img, (w, h))
+ inp = cv2.warpAffine(
+ img, trans_input, (input_w, input_h), flags=cv2.INTER_LINEAR)
+ if self.is_scale:
+ inp = (inp.astype(np.float32) / 255.)
+ sample['image'] = inp
+ sample['center'] = c
+ sample['scale'] = s
+ return sample
+
+
+@register_op
+class CenterRandColor(BaseOperator):
+ """Random color for CenterNet series models.
+ Args:
+ saturation (float): saturation settings.
+ contrast (float): contrast settings.
+ brightness (float): brightness settings.
+ """
+
+ def __init__(self, saturation=0.4, contrast=0.4, brightness=0.4):
+ super(CenterRandColor, self).__init__()
+ self.saturation = saturation
+ self.contrast = contrast
+ self.brightness = brightness
+
+ def apply_saturation(self, img, img_gray):
+ alpha = 1. + np.random.uniform(
+ low=-self.saturation, high=self.saturation)
+ self._blend(alpha, img, img_gray[:, :, None])
+ return img
+
+ def apply_contrast(self, img, img_gray):
+ alpha = 1. + np.random.uniform(low=-self.contrast, high=self.contrast)
+ img_mean = img_gray.mean()
+ self._blend(alpha, img, img_mean)
+ return img
+
+ def apply_brightness(self, img, img_gray):
+ alpha = 1 + np.random.uniform(
+ low=-self.brightness, high=self.brightness)
+ img *= alpha
+ return img
+
+ def _blend(self, alpha, img, img_mean):
+ img *= alpha
+ img_mean *= (1 - alpha)
+ img += img_mean
+
+ def __call__(self, sample, context=None):
+ img = sample['image']
+ img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+ functions = [
+ self.apply_brightness,
+ self.apply_contrast,
+ self.apply_saturation,
+ ]
+ distortions = np.random.permutation(functions)
+ for func in distortions:
+ img = func(img, img_gray)
+ sample['image'] = img
+ return sample
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__init__.py
new file mode 100644
index 000000000..9d14ee634
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__init__.py
@@ -0,0 +1,30 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import trainer
+from .trainer import *
+
+from . import callbacks
+from .callbacks import *
+
+from . import env
+from .env import *
+
+__all__ = trainer.__all__ \
+ + callbacks.__all__ \
+ + env.__all__
+
+from . import tracker
+from .tracker import *
+__all__ = __all__ + tracker.__all__
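
Because each submodule's `__all__` is aggregated here, downstream scripts can import the engine's whole public surface in one line:

```
from ppdet.engine import Trainer, Tracker, init_parallel_env, set_random_seed
```
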
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..3720322a3
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/callbacks.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/callbacks.cpython-37.pyc
new file mode 100644
index 000000000..fa174b4c0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/callbacks.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/env.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/env.cpython-37.pyc
new file mode 100644
index 000000000..c8937c83a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/env.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/export_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/export_utils.cpython-37.pyc
new file mode 100644
index 000000000..0ec950e26
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/export_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/tracker.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/tracker.cpython-37.pyc
new file mode 100644
index 000000000..179fd496d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/tracker.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/trainer.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/trainer.cpython-37.pyc
new file mode 100644
index 000000000..e3294bdaa
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/__pycache__/trainer.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/callbacks.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/callbacks.py
new file mode 100644
index 000000000..df42a687c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/callbacks.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+import datetime
+import six
+import copy
+import json
+
+import paddle
+import paddle.distributed as dist
+
+from ppdet.utils.checkpoint import save_model
+from ppdet.metrics import get_infer_results
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('ppdet.engine')
+
+__all__ = ['Callback', 'ComposeCallback', 'LogPrinter', 'Checkpointer', 'VisualDLWriter', 'SniperProposalsGenerator']
+
+
+class Callback(object):
+ def __init__(self, model):
+ self.model = model
+
+ def on_step_begin(self, status):
+ pass
+
+ def on_step_end(self, status):
+ pass
+
+ def on_epoch_begin(self, status):
+ pass
+
+ def on_epoch_end(self, status):
+ pass
+
+ def on_train_begin(self, status):
+ pass
+
+ def on_train_end(self, status):
+ pass
+
+
+class ComposeCallback(object):
+ def __init__(self, callbacks):
+ callbacks = [c for c in list(callbacks) if c is not None]
+ for c in callbacks:
+            assert isinstance(
+                c, Callback), "callback should be an instance of Callback"
+ self._callbacks = callbacks
+
+ def on_step_begin(self, status):
+ for c in self._callbacks:
+ c.on_step_begin(status)
+
+ def on_step_end(self, status):
+ for c in self._callbacks:
+ c.on_step_end(status)
+
+ def on_epoch_begin(self, status):
+ for c in self._callbacks:
+ c.on_epoch_begin(status)
+
+ def on_epoch_end(self, status):
+ for c in self._callbacks:
+ c.on_epoch_end(status)
+
+ def on_train_begin(self, status):
+ for c in self._callbacks:
+ c.on_train_begin(status)
+
+ def on_train_end(self, status):
+ for c in self._callbacks:
+ c.on_train_end(status)
+
+
+class LogPrinter(Callback):
+ def __init__(self, model):
+ super(LogPrinter, self).__init__(model)
+
+ def on_step_end(self, status):
+ if dist.get_world_size() < 2 or dist.get_rank() == 0:
+ mode = status['mode']
+ if mode == 'train':
+ epoch_id = status['epoch_id']
+ step_id = status['step_id']
+ steps_per_epoch = status['steps_per_epoch']
+ training_staus = status['training_staus']
+ batch_time = status['batch_time']
+ data_time = status['data_time']
+
+                epochs = self.model.cfg.epoch
+ batch_size = self.model.cfg['{}Reader'.format(mode.capitalize(
+ ))]['batch_size']
+
+ logs = training_staus.log()
+ space_fmt = ':' + str(len(str(steps_per_epoch))) + 'd'
+ if step_id % self.model.cfg.log_iter == 0:
+                    eta_steps = (epochs - epoch_id) * steps_per_epoch - step_id
+ eta_sec = eta_steps * batch_time.global_avg
+ eta_str = str(datetime.timedelta(seconds=int(eta_sec)))
+ ips = float(batch_size) / batch_time.avg
+ fmt = ' '.join([
+ 'Epoch: [{}]',
+ '[{' + space_fmt + '}/{}]',
+ 'learning_rate: {lr:.6f}',
+ '{meters}',
+ 'eta: {eta}',
+ 'batch_cost: {btime}',
+ 'data_cost: {dtime}',
+ 'ips: {ips:.4f} images/s',
+ ])
+ fmt = fmt.format(
+ epoch_id,
+ step_id,
+ steps_per_epoch,
+ lr=status['learning_rate'],
+ meters=logs,
+ eta=eta_str,
+ btime=str(batch_time),
+ dtime=str(data_time),
+ ips=ips)
+ logger.info(fmt)
+ if mode == 'eval':
+ step_id = status['step_id']
+ if step_id % 100 == 0:
+ logger.info("Eval iter: {}".format(step_id))
+
+ def on_epoch_end(self, status):
+ if dist.get_world_size() < 2 or dist.get_rank() == 0:
+ mode = status['mode']
+ if mode == 'eval':
+ sample_num = status['sample_num']
+ cost_time = status['cost_time']
+                logger.info('Total sample number: {}, average FPS: {}'.format(
+ sample_num, sample_num / cost_time))
+
+
+class Checkpointer(Callback):
+ def __init__(self, model):
+ super(Checkpointer, self).__init__(model)
+ cfg = self.model.cfg
+ self.best_ap = 0.
+ self.save_dir = os.path.join(self.model.cfg.save_dir,
+ self.model.cfg.filename)
+ if hasattr(self.model.model, 'student_model'):
+ self.weight = self.model.model.student_model
+ else:
+ self.weight = self.model.model
+
+ def on_epoch_end(self, status):
+        # Checkpointer is only applied during training
+ mode = status['mode']
+ epoch_id = status['epoch_id']
+ weight = None
+ save_name = None
+ if dist.get_world_size() < 2 or dist.get_rank() == 0:
+ if mode == 'train':
+ end_epoch = self.model.cfg.epoch
+                if (epoch_id + 1) % self.model.cfg.snapshot_epoch == 0 \
+                        or epoch_id == end_epoch - 1:
+                    save_name = str(epoch_id) \
+                        if epoch_id != end_epoch - 1 else "model_final"
+ weight = self.weight
+ elif mode == 'eval':
+ if 'save_best_model' in status and status['save_best_model']:
+ for metric in self.model._metrics:
+ map_res = metric.get_results()
+ if 'bbox' in map_res:
+ key = 'bbox'
+ elif 'keypoint' in map_res:
+ key = 'keypoint'
+ else:
+ key = 'mask'
+ if key not in map_res:
+ logger.warning("Evaluation results empty, this may be due to " \
+ "training iterations being too few or not " \
+ "loading the correct weights.")
+ return
+ if map_res[key][0] > self.best_ap:
+ self.best_ap = map_res[key][0]
+ save_name = 'best_model'
+ weight = self.weight
+ logger.info("Best test {} ap is {:0.3f}.".format(
+ key, self.best_ap))
+ if weight:
+ save_model(weight, self.model.optimizer, self.save_dir,
+ save_name, epoch_id + 1)
+
+
+class WiferFaceEval(Callback):
+ def __init__(self, model):
+ super(WiferFaceEval, self).__init__(model)
+
+ def on_epoch_begin(self, status):
+ assert self.model.mode == 'eval', \
+ "WiferFaceEval can only be set during evaluation"
+ for metric in self.model._metrics:
+ metric.update(self.model.model)
+ sys.exit()
+
+
+class VisualDLWriter(Callback):
+ """
+ Use VisualDL to log data or image
+ """
+
+ def __init__(self, model):
+ super(VisualDLWriter, self).__init__(model)
+
+ assert six.PY3, "VisualDL requires Python >= 3.5"
+ try:
+ from visualdl import LogWriter
+ except Exception as e:
+            logger.error('visualdl not found, please install visualdl, '
+                         'for example: `pip install visualdl`.')
+ raise e
+ self.vdl_writer = LogWriter(
+ model.cfg.get('vdl_log_dir', 'vdl_log_dir/scalar'))
+ self.vdl_loss_step = 0
+ self.vdl_mAP_step = 0
+ self.vdl_image_step = 0
+ self.vdl_image_frame = 0
+
+ def on_step_end(self, status):
+ mode = status['mode']
+ if dist.get_world_size() < 2 or dist.get_rank() == 0:
+ if mode == 'train':
+ training_staus = status['training_staus']
+ for loss_name, loss_value in training_staus.get().items():
+ self.vdl_writer.add_scalar(loss_name, loss_value,
+ self.vdl_loss_step)
+ self.vdl_loss_step += 1
+ elif mode == 'test':
+ ori_image = status['original_image']
+ result_image = status['result_image']
+ self.vdl_writer.add_image(
+ "original/frame_{}".format(self.vdl_image_frame), ori_image,
+ self.vdl_image_step)
+ self.vdl_writer.add_image(
+ "result/frame_{}".format(self.vdl_image_frame),
+ result_image, self.vdl_image_step)
+ self.vdl_image_step += 1
+ # each frame can display ten pictures at most.
+ if self.vdl_image_step % 10 == 0:
+ self.vdl_image_step = 0
+ self.vdl_image_frame += 1
+
+ def on_epoch_end(self, status):
+ mode = status['mode']
+ if dist.get_world_size() < 2 or dist.get_rank() == 0:
+ if mode == 'eval':
+ for metric in self.model._metrics:
+ for key, map_value in metric.get_results().items():
+ self.vdl_writer.add_scalar("{}-mAP".format(key),
+ map_value[0],
+ self.vdl_mAP_step)
+ self.vdl_mAP_step += 1
+
+
+class SniperProposalsGenerator(Callback):
+ def __init__(self, model):
+ super(SniperProposalsGenerator, self).__init__(model)
+ ori_dataset = self.model.dataset
+ self.dataset = self._create_new_dataset(ori_dataset)
+ self.loader = self.model.loader
+ self.cfg = self.model.cfg
+ self.infer_model = self.model.model
+
+ def _create_new_dataset(self, ori_dataset):
+ dataset = copy.deepcopy(ori_dataset)
+ # init anno_cropper
+ dataset.init_anno_cropper()
+ # generate infer roidbs
+ ori_roidbs = dataset.get_ori_roidbs()
+ roidbs = dataset.anno_cropper.crop_infer_anno_records(ori_roidbs)
+ # set new roidbs
+ dataset.set_roidbs(roidbs)
+
+ return dataset
+
+ def _eval_with_loader(self, loader):
+ results = []
+ with paddle.no_grad():
+ self.infer_model.eval()
+ for step_id, data in enumerate(loader):
+ outs = self.infer_model(data)
+ for key in ['im_shape', 'scale_factor', 'im_id']:
+ outs[key] = data[key]
+ for key, value in outs.items():
+ if hasattr(value, 'numpy'):
+ outs[key] = value.numpy()
+
+ results.append(outs)
+
+ return results
+
+ def on_train_end(self, status):
+ self.loader.dataset = self.dataset
+ results = self._eval_with_loader(self.loader)
+ results = self.dataset.anno_cropper.aggregate_chips_detections(results)
+ # sniper
+ proposals = []
+ clsid2catid = {v: k for k, v in self.dataset.catid2clsid.items()}
+ for outs in results:
+ batch_res = get_infer_results(outs, clsid2catid)
+ start = 0
+ for i, im_id in enumerate(outs['im_id']):
+ bbox_num = outs['bbox_num']
+ end = start + bbox_num[i]
+ bbox_res = batch_res['bbox'][start:end] \
+ if 'bbox' in batch_res else None
+ if bbox_res:
+ proposals += bbox_res
+ logger.info("save proposals in {}".format(self.cfg.proposals_path))
+ with open(self.cfg.proposals_path, 'w') as f:
+ json.dump(proposals, f)
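
All callbacks above share one hook protocol (`on_step_begin/end`, `on_epoch_begin/end`, `on_train_begin/end`) and are fanned out through `ComposeCallback`. A minimal sketch of a custom callback follows; note that the built-in callbacks receive the `Trainer` itself as their `model` argument, and `trainer` here is an assumed, already-built instance:

```
from ppdet.engine.callbacks import Callback

class EpochLogger(Callback):
    """Toy callback showing how the shared status dict flows through hooks."""
    def on_epoch_begin(self, status):
        self.epoch_id = status.get('epoch_id')

    def on_epoch_end(self, status):
        print('finished epoch', self.epoch_id, 'in mode', status['mode'])

trainer.register_callbacks([EpochLogger(trainer)])
```
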
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/env.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/env.py
new file mode 100644
index 000000000..0a896571d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/env.py
@@ -0,0 +1,50 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import random
+import numpy as np
+
+import paddle
+from paddle.distributed import fleet
+
+__all__ = ['init_parallel_env', 'set_random_seed', 'init_fleet_env']
+
+
+def init_fleet_env(find_unused_parameters=False):
+ strategy = fleet.DistributedStrategy()
+ strategy.find_unused_parameters = find_unused_parameters
+ fleet.init(is_collective=True, strategy=strategy)
+
+
+def init_parallel_env():
+ env = os.environ
+ dist = 'PADDLE_TRAINER_ID' in env and 'PADDLE_TRAINERS_NUM' in env
+ if dist:
+ trainer_id = int(env['PADDLE_TRAINER_ID'])
+ local_seed = (99 + trainer_id)
+ random.seed(local_seed)
+ np.random.seed(local_seed)
+
+ paddle.distributed.init_parallel_env()
+
+
+def set_random_seed(seed):
+ paddle.seed(seed)
+ random.seed(seed)
+ np.random.seed(seed)
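
A typical training entry point calls these helpers once, before the model and data loaders are built; a minimal sketch (the function and flag names below are illustrative):

```
from ppdet.engine import init_fleet_env, init_parallel_env, set_random_seed

def setup_env(seed=None, use_fleet=False):
    # initialize the distributed context (fleet or plain data parallel)
    if use_fleet:
        init_fleet_env(find_unused_parameters=True)
    else:
        init_parallel_env()
    # optionally fix a global seed for reproducible runs
    if seed is not None:
        set_random_seed(seed)
```
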
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/export_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/export_utils.py
new file mode 100644
index 000000000..e1cf64638
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/export_utils.py
@@ -0,0 +1,175 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import yaml
+from collections import OrderedDict
+
+import paddle
+from ppdet.data.source.category import get_categories
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('ppdet.engine')
+
+# Global dictionary
+TRT_MIN_SUBGRAPH = {
+ 'YOLO': 3,
+ 'SSD': 60,
+ 'RCNN': 40,
+ 'RetinaNet': 40,
+ 'S2ANet': 80,
+ 'EfficientDet': 40,
+ 'Face': 3,
+ 'TTFNet': 60,
+ 'FCOS': 16,
+ 'SOLOv2': 60,
+ 'HigherHRNet': 3,
+ 'HRNet': 3,
+ 'DeepSORT': 3,
+ 'JDE': 10,
+ 'FairMOT': 5,
+ 'GFL': 16,
+ 'PicoDet': 3,
+ 'CenterNet': 5,
+}
+
+KEYPOINT_ARCH = ['HigherHRNet', 'TopDownHRNet']
+MOT_ARCH = ['DeepSORT', 'JDE', 'FairMOT']
+
+
+def _prune_input_spec(input_spec, program, targets):
+ # try to prune static program to figure out pruned input spec
+    # so we perform the following operations in static mode
+ paddle.enable_static()
+ pruned_input_spec = [{}]
+ program = program.clone()
+ program = program._prune(targets=targets)
+ global_block = program.global_block()
+ for name, spec in input_spec[0].items():
+ try:
+ v = global_block.var(name)
+ pruned_input_spec[0][name] = spec
+ except Exception:
+ pass
+ paddle.disable_static()
+ return pruned_input_spec
+
+
+def _parse_reader(reader_cfg, dataset_cfg, metric, arch, image_shape):
+ preprocess_list = []
+
+ anno_file = dataset_cfg.get_anno()
+
+ clsid2catid, catid2name = get_categories(metric, anno_file, arch)
+
+ label_list = [str(cat) for cat in catid2name.values()]
+
+ fuse_normalize = reader_cfg.get('fuse_normalize', False)
+ sample_transforms = reader_cfg['sample_transforms']
+ for st in sample_transforms[1:]:
+ for key, value in st.items():
+ p = {'type': key}
+ if key == 'Resize':
+ if int(image_shape[1]) != -1:
+ value['target_size'] = image_shape[1:]
+ if fuse_normalize and key == 'NormalizeImage':
+ continue
+ p.update(value)
+ preprocess_list.append(p)
+ batch_transforms = reader_cfg.get('batch_transforms', None)
+ if batch_transforms:
+ for bt in batch_transforms:
+ for key, value in bt.items():
+                # for deploy/infer, use PadStride(stride) instead of PadBatch(pad_to_stride)
+ if key == 'PadBatch':
+ preprocess_list.append({
+ 'type': 'PadStride',
+ 'stride': value['pad_to_stride']
+ })
+ break
+
+ return preprocess_list, label_list
+
+
+def _parse_tracker(tracker_cfg):
+ tracker_params = {}
+ for k, v in tracker_cfg.items():
+ tracker_params.update({k: v})
+ return tracker_params
+
+
+def _dump_infer_config(config, path, image_shape, model):
+ arch_state = False
+ from ppdet.core.config.yaml_helpers import setup_orderdict
+ setup_orderdict()
+    use_dynamic_shape = image_shape[2] == -1
+ infer_cfg = OrderedDict({
+ 'mode': 'fluid',
+ 'draw_threshold': 0.5,
+ 'metric': config['metric'],
+ 'use_dynamic_shape': use_dynamic_shape
+ })
+ infer_arch = config['architecture']
+
+ if infer_arch in MOT_ARCH:
+ if infer_arch == 'DeepSORT':
+ tracker_cfg = config['DeepSORTTracker']
+ else:
+ tracker_cfg = config['JDETracker']
+ infer_cfg['tracker'] = _parse_tracker(tracker_cfg)
+
+ for arch, min_subgraph_size in TRT_MIN_SUBGRAPH.items():
+ if arch in infer_arch:
+ infer_cfg['arch'] = arch
+ infer_cfg['min_subgraph_size'] = min_subgraph_size
+ arch_state = True
+ break
+ if not arch_state:
+ logger.error(
+ 'Architecture: {} is not supported for exporting model now.\n'.
+ format(infer_arch) +
+ 'Please set TRT_MIN_SUBGRAPH in ppdet/engine/export_utils.py')
+ os._exit(0)
+ if 'mask_head' in config[config['architecture']] and config[config[
+ 'architecture']]['mask_head']:
+ infer_cfg['mask'] = True
+ label_arch = 'detection_arch'
+ if infer_arch in KEYPOINT_ARCH:
+ label_arch = 'keypoint_arch'
+
+ if infer_arch in MOT_ARCH:
+ label_arch = 'mot_arch'
+ reader_cfg = config['TestMOTReader']
+ dataset_cfg = config['TestMOTDataset']
+ else:
+ reader_cfg = config['TestReader']
+ dataset_cfg = config['TestDataset']
+
+ infer_cfg['Preprocess'], infer_cfg['label_list'] = _parse_reader(
+ reader_cfg, dataset_cfg, config['metric'], label_arch, image_shape[1:])
+
+ if infer_arch == 'PicoDet':
+ infer_cfg['NMS'] = config['PicoHead']['nms']
+ # In order to speed up the prediction, the threshold of nms
+ # is adjusted here, which can be changed in infer_cfg.yml
+ config['PicoHead']['nms']["score_threshold"] = 0.3
+ config['PicoHead']['nms']["nms_threshold"] = 0.5
+ infer_cfg['fpn_stride'] = config['PicoHead']['fpn_stride']
+
+ yaml.dump(infer_cfg, open(path, 'w'))
+ logger.info("Export inference config file to {}".format(os.path.join(path)))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/tracker.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/tracker.py
new file mode 100644
index 000000000..75602cb64
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/tracker.py
@@ -0,0 +1,536 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import cv2
+import glob
+import paddle
+import numpy as np
+from collections import defaultdict
+
+from ppdet.core.workspace import create
+from ppdet.utils.checkpoint import load_weight, load_pretrain_weight
+from ppdet.modeling.mot.utils import Detection, get_crops, scale_coords, clip_box
+from ppdet.modeling.mot.utils import MOTTimer, load_det_results, write_mot_results, save_vis_results
+
+from ppdet.metrics import Metric, MOTMetric, KITTIMOTMetric
+from ppdet.metrics import MCMOTMetric
+import ppdet.utils.stats as stats
+
+from .callbacks import Callback, ComposeCallback
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['Tracker']
+
+
+class Tracker(object):
+ def __init__(self, cfg, mode='eval'):
+ self.cfg = cfg
+ assert mode.lower() in ['test', 'eval'], \
+ "mode should be 'test' or 'eval'"
+ self.mode = mode.lower()
+ self.optimizer = None
+
+ # build MOT data loader
+ self.dataset = cfg['{}MOTDataset'.format(self.mode.capitalize())]
+
+ # build model
+ self.model = create(cfg.architecture)
+
+ self.status = {}
+ self.start_epoch = 0
+
+ # initial default callbacks
+ self._init_callbacks()
+
+ # initial default metrics
+ self._init_metrics()
+ self._reset_metrics()
+
+ def _init_callbacks(self):
+ self._callbacks = []
+ self._compose_callback = None
+
+ def _init_metrics(self):
+ if self.mode in ['test']:
+ self._metrics = []
+ return
+
+ if self.cfg.metric == 'MOT':
+ self._metrics = [MOTMetric(), ]
+ elif self.cfg.metric == 'MCMOT':
+ self._metrics = [MCMOTMetric(self.cfg.num_classes), ]
+ elif self.cfg.metric == 'KITTI':
+ self._metrics = [KITTIMOTMetric(), ]
+ else:
+ logger.warning("Metric not support for metric type {}".format(
+ self.cfg.metric))
+ self._metrics = []
+
+ def _reset_metrics(self):
+ for metric in self._metrics:
+ metric.reset()
+
+ def register_callbacks(self, callbacks):
+        callbacks = [c for c in list(callbacks) if c is not None]
+        for c in callbacks:
+            assert isinstance(c, Callback), \
+                "callbacks should be instances of a subclass of Callback"
+ self._callbacks.extend(callbacks)
+ self._compose_callback = ComposeCallback(self._callbacks)
+
+ def register_metrics(self, metrics):
+ metrics = [m for m in list(metrics) if m is not None]
+ for m in metrics:
+ assert isinstance(m, Metric), \
+ "metrics shoule be instances of subclass of Metric"
+ self._metrics.extend(metrics)
+
+ def load_weights_jde(self, weights):
+ load_weight(self.model, weights, self.optimizer)
+
+ def load_weights_sde(self, det_weights, reid_weights):
+ if self.model.detector:
+ load_weight(self.model.detector, det_weights)
+ load_weight(self.model.reid, reid_weights)
+ else:
+ load_weight(self.model.reid, reid_weights, self.optimizer)
+
+ def _eval_seq_jde(self,
+ dataloader,
+ save_dir=None,
+ show_image=False,
+ frame_rate=30,
+ draw_threshold=0):
+ if save_dir:
+ if not os.path.exists(save_dir): os.makedirs(save_dir)
+ tracker = self.model.tracker
+ tracker.max_time_lost = int(frame_rate / 30.0 * tracker.track_buffer)
+
+ timer = MOTTimer()
+ frame_id = 0
+ self.status['mode'] = 'track'
+ self.model.eval()
+        results = defaultdict(list)  # supports single-class and multi-class MOT
+
+ for step_id, data in enumerate(dataloader):
+ self.status['step_id'] = step_id
+ if frame_id % 40 == 0:
+ logger.info('Processing frame {} ({:.2f} fps)'.format(
+ frame_id, 1. / max(1e-5, timer.average_time)))
+ # forward
+ timer.tic()
+ pred_dets, pred_embs = self.model(data)
+
+ pred_dets, pred_embs = pred_dets.numpy(), pred_embs.numpy()
+ online_targets_dict = self.model.tracker.update(pred_dets,
+ pred_embs)
+ online_tlwhs = defaultdict(list)
+ online_scores = defaultdict(list)
+ online_ids = defaultdict(list)
+ for cls_id in range(self.cfg.num_classes):
+ online_targets = online_targets_dict[cls_id]
+ for t in online_targets:
+ tlwh = t.tlwh
+ tid = t.track_id
+ tscore = t.score
+ if tlwh[2] * tlwh[3] <= tracker.min_box_area: continue
+ if tracker.vertical_ratio > 0 and tlwh[2] / tlwh[
+ 3] > tracker.vertical_ratio:
+ continue
+ online_tlwhs[cls_id].append(tlwh)
+ online_ids[cls_id].append(tid)
+ online_scores[cls_id].append(tscore)
+ # save results
+ results[cls_id].append(
+ (frame_id + 1, online_tlwhs[cls_id], online_scores[cls_id],
+ online_ids[cls_id]))
+
+ timer.toc()
+ save_vis_results(data, frame_id, online_ids, online_tlwhs,
+ online_scores, timer.average_time, show_image,
+ save_dir, self.cfg.num_classes)
+ frame_id += 1
+
+ return results, frame_id, timer.average_time, timer.calls
+
+ def _eval_seq_sde(self,
+ dataloader,
+ save_dir=None,
+ show_image=False,
+ frame_rate=30,
+ seq_name='',
+ scaled=False,
+ det_file='',
+ draw_threshold=0):
+ if save_dir:
+ if not os.path.exists(save_dir): os.makedirs(save_dir)
+        use_detector = self.model.detector is not None
+
+ timer = MOTTimer()
+ results = defaultdict(list)
+ frame_id = 0
+ self.status['mode'] = 'track'
+ self.model.eval()
+ self.model.reid.eval()
+ if not use_detector:
+ dets_list = load_det_results(det_file, len(dataloader))
+ logger.info('Finish loading detection results file {}.'.format(
+ det_file))
+
+ for step_id, data in enumerate(dataloader):
+ self.status['step_id'] = step_id
+ if frame_id % 40 == 0:
+ logger.info('Processing frame {} ({:.2f} fps)'.format(
+ frame_id, 1. / max(1e-5, timer.average_time)))
+
+ ori_image = data['ori_image'] # [bs, H, W, 3]
+ ori_image_shape = data['ori_image'].shape[1:3]
+ # ori_image_shape: [H, W]
+
+ input_shape = data['image'].shape[2:]
+ # input_shape: [h, w], before data transforms, set in model config
+
+ im_shape = data['im_shape'][0].numpy()
+ # im_shape: [new_h, new_w], after data transforms
+ scale_factor = data['scale_factor'][0].numpy()
+
+ empty_detections = False
+            # when there are no detected bboxes, the reid model is not run,
+            # and if visualizing, the original image is used instead
+
+ # forward
+ timer.tic()
+ if not use_detector:
+ dets = dets_list[frame_id]
+ bbox_tlwh = np.array(dets['bbox'], dtype='float32')
+ if bbox_tlwh.shape[0] > 0:
+ # detector outputs: pred_cls_ids, pred_scores, pred_bboxes
+ pred_cls_ids = np.array(dets['cls_id'], dtype='float32')
+ pred_scores = np.array(dets['score'], dtype='float32')
+ pred_bboxes = np.concatenate(
+ (bbox_tlwh[:, 0:2],
+ bbox_tlwh[:, 2:4] + bbox_tlwh[:, 0:2]),
+ axis=1)
+ else:
+ logger.warning(
+                        'Frame {} has no detected object, try to modify score threshold.'.
+ format(frame_id))
+ empty_detections = True
+ else:
+ outs = self.model.detector(data)
+ outs['bbox'] = outs['bbox'].numpy()
+ outs['bbox_num'] = outs['bbox_num'].numpy()
+
+                if outs['bbox_num'] > 0 and not empty_detections:
+ # detector outputs: pred_cls_ids, pred_scores, pred_bboxes
+ pred_cls_ids = outs['bbox'][:, 0:1]
+ pred_scores = outs['bbox'][:, 1:2]
+ if not scaled:
+ # Note: scaled=False only in JDE YOLOv3 or other detectors
+ # with LetterBoxResize and JDEBBoxPostProcess.
+ #
+ # 'scaled' means whether the coords after detector outputs
+ # have been scaled back to the original image, set True
+ # in general detector, set False in JDE YOLOv3.
+ pred_bboxes = scale_coords(outs['bbox'][:, 2:],
+ input_shape, im_shape,
+ scale_factor)
+ else:
+ pred_bboxes = outs['bbox'][:, 2:]
+ else:
+ logger.warning(
+                        'Frame {} has no detected object, try to modify score threshold.'.
+ format(frame_id))
+ empty_detections = True
+
+ if not empty_detections:
+ pred_xyxys, keep_idx = clip_box(pred_bboxes, ori_image_shape)
+ if len(keep_idx[0]) == 0:
+ logger.warning(
+                        'Frame {} has no detected object left after clip_box.'.
+ format(frame_id))
+ empty_detections = True
+
+ if empty_detections:
+ timer.toc()
+ # if visualize, use original image instead
+ online_ids, online_tlwhs, online_scores = None, None, None
+ save_vis_results(data, frame_id, online_ids, online_tlwhs,
+ online_scores, timer.average_time, show_image,
+ save_dir, self.cfg.num_classes)
+ frame_id += 1
+                # thus the reid model is not run
+ continue
+
+ pred_scores = pred_scores[keep_idx[0]]
+ pred_cls_ids = pred_cls_ids[keep_idx[0]]
+ pred_tlwhs = np.concatenate(
+ (pred_xyxys[:, 0:2],
+ pred_xyxys[:, 2:4] - pred_xyxys[:, 0:2] + 1),
+ axis=1)
+ pred_dets = np.concatenate(
+ (pred_tlwhs, pred_scores, pred_cls_ids), axis=1)
+
+ tracker = self.model.tracker
+ crops = get_crops(
+ pred_xyxys,
+ ori_image,
+ w=tracker.input_size[0],
+ h=tracker.input_size[1])
+ crops = paddle.to_tensor(crops)
+
+ data.update({'crops': crops})
+ pred_embs = self.model(data).numpy()
+
+ tracker.predict()
+ online_targets = tracker.update(pred_dets, pred_embs)
+
+ online_tlwhs, online_scores, online_ids = [], [], []
+ for t in online_targets:
+ if not t.is_confirmed() or t.time_since_update > 1:
+ continue
+ tlwh = t.to_tlwh()
+ tscore = t.score
+ tid = t.track_id
+ if tscore < draw_threshold: continue
+ if tlwh[2] * tlwh[3] <= tracker.min_box_area: continue
+ if tracker.vertical_ratio > 0 and tlwh[2] / tlwh[
+ 3] > tracker.vertical_ratio:
+ continue
+ online_tlwhs.append(tlwh)
+ online_scores.append(tscore)
+ online_ids.append(tid)
+ timer.toc()
+
+ # save results
+ results[0].append(
+ (frame_id + 1, online_tlwhs, online_scores, online_ids))
+ save_vis_results(data, frame_id, online_ids, online_tlwhs,
+ online_scores, timer.average_time, show_image,
+ save_dir, self.cfg.num_classes)
+ frame_id += 1
+
+ return results, frame_id, timer.average_time, timer.calls
+
+ def mot_evaluate(self,
+ data_root,
+ seqs,
+ output_dir,
+ data_type='mot',
+ model_type='JDE',
+ save_images=False,
+ save_videos=False,
+ show_image=False,
+ scaled=False,
+ det_results_dir=''):
+ if not os.path.exists(output_dir): os.makedirs(output_dir)
+ result_root = os.path.join(output_dir, 'mot_results')
+ if not os.path.exists(result_root): os.makedirs(result_root)
+ assert data_type in ['mot', 'mcmot', 'kitti'], \
+ "data_type should be 'mot', 'mcmot' or 'kitti'"
+ assert model_type in ['JDE', 'DeepSORT', 'FairMOT'], \
+ "model_type should be 'JDE', 'DeepSORT' or 'FairMOT'"
+
+ # run tracking
+ n_frame = 0
+ timer_avgs, timer_calls = [], []
+ for seq in seqs:
+ infer_dir = os.path.join(data_root, seq)
+ if not os.path.exists(infer_dir) or not os.path.isdir(infer_dir):
+ logger.warning("Seq {} error, {} has no images.".format(
+ seq, infer_dir))
+ continue
+ if os.path.exists(os.path.join(infer_dir, 'img1')):
+ infer_dir = os.path.join(infer_dir, 'img1')
+
+ frame_rate = 30
+ seqinfo = os.path.join(data_root, seq, 'seqinfo.ini')
+            if os.path.exists(seqinfo):
+                with open(seqinfo) as f:
+                    meta_info = f.read()
+                frame_rate = int(meta_info[meta_info.find('frameRate') + 10:
+                                           meta_info.find('\nseqLength')])
+
+ save_dir = os.path.join(output_dir, 'mot_outputs',
+ seq) if save_images or save_videos else None
+ logger.info('start seq: {}'.format(seq))
+
+ self.dataset.set_images(self.get_infer_images(infer_dir))
+ dataloader = create('EvalMOTReader')(self.dataset, 0)
+
+ result_filename = os.path.join(result_root, '{}.txt'.format(seq))
+
+ with paddle.no_grad():
+ if model_type in ['JDE', 'FairMOT']:
+ results, nf, ta, tc = self._eval_seq_jde(
+ dataloader,
+ save_dir=save_dir,
+ show_image=show_image,
+ frame_rate=frame_rate)
+ elif model_type in ['DeepSORT']:
+ results, nf, ta, tc = self._eval_seq_sde(
+ dataloader,
+ save_dir=save_dir,
+ show_image=show_image,
+ frame_rate=frame_rate,
+ seq_name=seq,
+ scaled=scaled,
+ det_file=os.path.join(det_results_dir,
+ '{}.txt'.format(seq)))
+ else:
+ raise ValueError(model_type)
+
+ write_mot_results(result_filename, results, data_type,
+ self.cfg.num_classes)
+ n_frame += nf
+ timer_avgs.append(ta)
+ timer_calls.append(tc)
+
+ if save_videos:
+ output_video_path = os.path.join(save_dir, '..',
+ '{}_vis.mp4'.format(seq))
+ cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg {}'.format(
+ save_dir, output_video_path)
+ os.system(cmd_str)
+ logger.info('Save video in {}.'.format(output_video_path))
+
+ logger.info('Evaluate seq: {}'.format(seq))
+ # update metrics
+ for metric in self._metrics:
+ metric.update(data_root, seq, data_type, result_root,
+ result_filename)
+
+ timer_avgs = np.asarray(timer_avgs)
+ timer_calls = np.asarray(timer_calls)
+ all_time = np.dot(timer_avgs, timer_calls)
+ avg_time = all_time / np.sum(timer_calls)
+ logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(
+ all_time, 1.0 / avg_time))
+
+ # accumulate metric to log out
+ for metric in self._metrics:
+ metric.accumulate()
+ metric.log()
+        # reset metric states since metrics may be evaluated multiple times
+ self._reset_metrics()
+
+ def get_infer_images(self, infer_dir):
+        assert infer_dir is not None and os.path.isdir(infer_dir), \
+            "infer_dir {} is not a directory".format(infer_dir)
+        images = set()
+ exts = ['jpg', 'jpeg', 'png', 'bmp']
+ exts += [ext.upper() for ext in exts]
+ for ext in exts:
+ images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
+ images = list(images)
+ images.sort()
+ assert len(images) > 0, "no image found in {}".format(infer_dir)
+ logger.info("Found {} inference images in total.".format(len(images)))
+ return images
+
+ def mot_predict_seq(self,
+ video_file,
+ frame_rate,
+ image_dir,
+ output_dir,
+ data_type='mot',
+ model_type='JDE',
+ save_images=False,
+ save_videos=True,
+ show_image=False,
+ scaled=False,
+ det_results_dir='',
+ draw_threshold=0.5):
+ assert video_file is not None or image_dir is not None, \
+ "--video_file or --image_dir should be set."
+ assert video_file is None or os.path.isfile(video_file), \
+ "{} is not a file".format(video_file)
+ assert image_dir is None or os.path.isdir(image_dir), \
+ "{} is not a directory".format(image_dir)
+
+ if not os.path.exists(output_dir): os.makedirs(output_dir)
+ result_root = os.path.join(output_dir, 'mot_results')
+ if not os.path.exists(result_root): os.makedirs(result_root)
+ assert data_type in ['mot', 'mcmot', 'kitti'], \
+ "data_type should be 'mot', 'mcmot' or 'kitti'"
+ assert model_type in ['JDE', 'DeepSORT', 'FairMOT'], \
+ "model_type should be 'JDE', 'DeepSORT' or 'FairMOT'"
+
+ # run tracking
+ if video_file:
+ seq = video_file.split('/')[-1].split('.')[0]
+ self.dataset.set_video(video_file, frame_rate)
+ logger.info('Starting tracking video {}'.format(video_file))
+ elif image_dir:
+ seq = image_dir.split('/')[-1].split('.')[0]
+ if os.path.exists(os.path.join(image_dir, 'img1')):
+ image_dir = os.path.join(image_dir, 'img1')
+ images = [
+ '{}/{}'.format(image_dir, x) for x in os.listdir(image_dir)
+ ]
+ images.sort()
+ self.dataset.set_images(images)
+ logger.info('Starting tracking folder {}, found {} images'.format(
+ image_dir, len(images)))
+ else:
+ raise ValueError('--video_file or --image_dir should be set.')
+
+ save_dir = os.path.join(output_dir, 'mot_outputs',
+ seq) if save_images or save_videos else None
+
+ dataloader = create('TestMOTReader')(self.dataset, 0)
+ result_filename = os.path.join(result_root, '{}.txt'.format(seq))
+ if frame_rate == -1:
+ frame_rate = self.dataset.frame_rate
+
+ with paddle.no_grad():
+ if model_type in ['JDE', 'FairMOT']:
+ results, nf, ta, tc = self._eval_seq_jde(
+ dataloader,
+ save_dir=save_dir,
+ show_image=show_image,
+ frame_rate=frame_rate,
+ draw_threshold=draw_threshold)
+ elif model_type in ['DeepSORT']:
+ results, nf, ta, tc = self._eval_seq_sde(
+ dataloader,
+ save_dir=save_dir,
+ show_image=show_image,
+ frame_rate=frame_rate,
+ seq_name=seq,
+ scaled=scaled,
+ det_file=os.path.join(det_results_dir,
+ '{}.txt'.format(seq)),
+ draw_threshold=draw_threshold)
+ else:
+ raise ValueError(model_type)
+
+ if save_videos:
+ output_video_path = os.path.join(save_dir, '..',
+ '{}_vis.mp4'.format(seq))
+ cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg {}'.format(
+ save_dir, output_video_path)
+ os.system(cmd_str)
+ logger.info('Save video in {}'.format(output_video_path))
+
+ write_mot_results(result_filename, results, data_type,
+ self.cfg.num_classes)
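
End to end, `Tracker` is driven much like `Trainer`: build it from a config, load weights, then call `mot_evaluate` or `mot_predict_seq`. A minimal sketch (config, weight, and dataset paths are placeholders):

```
from ppdet.core.workspace import load_config
from ppdet.engine.tracker import Tracker

cfg = load_config('configs/mot/fairmot/fairmot_dla34_30e_1088x608.yml')  # placeholder
tracker = Tracker(cfg, mode='eval')
tracker.load_weights_jde('output/fairmot/model_final.pdparams')  # placeholder

# run tracking over the listed sequences and accumulate MOT metrics
tracker.mot_evaluate(
    data_root='dataset/mot/MOT16/images/train',
    seqs=['MOT16-02', 'MOT16-04'],
    output_dir='output',
    data_type='mot',
    model_type='FairMOT')
```
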
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/trainer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/trainer.py
new file mode 100644
index 000000000..dc739ff62
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/engine/trainer.py
@@ -0,0 +1,715 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+import copy
+import time
+
+import numpy as np
+from PIL import Image, ImageOps
+
+import paddle
+import paddle.distributed as dist
+from paddle.distributed import fleet
+from paddle import amp
+from paddle.static import InputSpec
+from ppdet.optimizer import ModelEMA
+
+from ppdet.core.workspace import create
+from ppdet.utils.checkpoint import load_weight, load_pretrain_weight
+from ppdet.utils.visualizer import visualize_results, save_result
+from ppdet.metrics import Metric, COCOMetric, VOCMetric, WiderFaceMetric, get_infer_results, KeyPointTopDownCOCOEval, KeyPointTopDownMPIIEval
+from ppdet.metrics import RBoxMetric, JDEDetMetric, SNIPERCOCOMetric
+from ppdet.data.source.sniper_coco import SniperCOCODataSet
+from ppdet.data.source.category import get_categories
+import ppdet.utils.stats as stats
+from ppdet.utils import profiler
+
+from .callbacks import Callback, ComposeCallback, LogPrinter, Checkpointer, WiferFaceEval, VisualDLWriter, SniperProposalsGenerator
+from .export_utils import _dump_infer_config, _prune_input_spec
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('ppdet.engine')
+
+__all__ = ['Trainer']
+
+MOT_ARCH = ['DeepSORT', 'JDE', 'FairMOT']
+
+
+class Trainer(object):
+ def __init__(self, cfg, mode='train'):
+ self.cfg = cfg
+ assert mode.lower() in ['train', 'eval', 'test'], \
+ "mode should be 'train', 'eval' or 'test'"
+ self.mode = mode.lower()
+ self.optimizer = None
+ self.is_loaded_weights = False
+
+ # build data loader
+ if cfg.architecture in MOT_ARCH and self.mode in ['eval', 'test']:
+ self.dataset = cfg['{}MOTDataset'.format(self.mode.capitalize())]
+ else:
+ self.dataset = cfg['{}Dataset'.format(self.mode.capitalize())]
+
+ if cfg.architecture == 'DeepSORT' and self.mode == 'train':
+            logger.error('DeepSORT does not need training on a MOT dataset.')
+ sys.exit(1)
+
+ if self.mode == 'train':
+ self.loader = create('{}Reader'.format(self.mode.capitalize()))(
+ self.dataset, cfg.worker_num)
+
+ if cfg.architecture == 'JDE' and self.mode == 'train':
+ cfg['JDEEmbeddingHead'][
+ 'num_identities'] = self.dataset.num_identities_dict[0]
+            # JDE only supports single-class MOT for now.
+
+ if cfg.architecture == 'FairMOT' and self.mode == 'train':
+ cfg['FairMOTEmbeddingHead'][
+ 'num_identities_dict'] = self.dataset.num_identities_dict
+            # FairMOT supports both single-class and multi-class MOT.
+
+ # build model
+ if 'model' not in self.cfg:
+ self.model = create(cfg.architecture)
+ else:
+ self.model = self.cfg.model
+ self.is_loaded_weights = True
+
+        # normalize params for deploy
+ self.model.load_meanstd(cfg['TestReader']['sample_transforms'])
+
+ self.use_ema = ('use_ema' in cfg and cfg['use_ema'])
+ if self.use_ema:
+ ema_decay = self.cfg.get('ema_decay', 0.9998)
+ cycle_epoch = self.cfg.get('cycle_epoch', -1)
+ self.ema = ModelEMA(
+ self.model,
+ decay=ema_decay,
+ use_thres_step=True,
+ cycle_epoch=cycle_epoch)
+
+        # EvalDataset is built with a BatchSampler to evaluate on a single device
+ # TODO: multi-device evaluate
+ if self.mode == 'eval':
+ self._eval_batch_sampler = paddle.io.BatchSampler(
+ self.dataset, batch_size=self.cfg.EvalReader['batch_size'])
+ self.loader = create('{}Reader'.format(self.mode.capitalize()))(
+ self.dataset, cfg.worker_num, self._eval_batch_sampler)
+        # TestDataset is built after the user sets images, so skip loader creation here
+
+ # build optimizer in train mode
+ if self.mode == 'train':
+ steps_per_epoch = len(self.loader)
+ self.lr = create('LearningRate')(steps_per_epoch)
+ self.optimizer = create('OptimizerBuilder')(self.lr, self.model)
+
+ if self.cfg.get('unstructured_prune'):
+ self.pruner = create('UnstructuredPruner')(self.model,
+ steps_per_epoch)
+
+ self._nranks = dist.get_world_size()
+ self._local_rank = dist.get_rank()
+
+ self.status = {}
+
+ self.start_epoch = 0
+ self.end_epoch = 0 if 'epoch' not in cfg else cfg.epoch
+
+ # initial default callbacks
+ self._init_callbacks()
+
+ # initial default metrics
+ self._init_metrics()
+ self._reset_metrics()
+
+ def _init_callbacks(self):
+ if self.mode == 'train':
+ self._callbacks = [LogPrinter(self), Checkpointer(self)]
+ if self.cfg.get('use_vdl', False):
+ self._callbacks.append(VisualDLWriter(self))
+ if self.cfg.get('save_proposals', False):
+ self._callbacks.append(SniperProposalsGenerator(self))
+ self._compose_callback = ComposeCallback(self._callbacks)
+ elif self.mode == 'eval':
+ self._callbacks = [LogPrinter(self)]
+ if self.cfg.metric == 'WiderFace':
+ self._callbacks.append(WiferFaceEval(self))
+ self._compose_callback = ComposeCallback(self._callbacks)
+ elif self.mode == 'test' and self.cfg.get('use_vdl', False):
+ self._callbacks = [VisualDLWriter(self)]
+ self._compose_callback = ComposeCallback(self._callbacks)
+ else:
+ self._callbacks = []
+ self._compose_callback = None
+
+ def _init_metrics(self, validate=False):
+ if self.mode == 'test' or (self.mode == 'train' and not validate):
+ self._metrics = []
+ return
+ classwise = self.cfg['classwise'] if 'classwise' in self.cfg else False
+ if self.cfg.metric == 'COCO' or self.cfg.metric == "SNIPERCOCO":
+ # TODO: bias should be unified
+ bias = self.cfg['bias'] if 'bias' in self.cfg else 0
+ output_eval = self.cfg['output_eval'] \
+ if 'output_eval' in self.cfg else None
+ save_prediction_only = self.cfg.get('save_prediction_only', False)
+
+            # pass clsid2catid info to the metric instance to avoid loading
+            # the annotation file multiple times
+ clsid2catid = {v: k for k, v in self.dataset.catid2clsid.items()} \
+ if self.mode == 'eval' else None
+
+            # when validating during training, the annotation file should be
+            # taken from EvalDataset instead of self.dataset (the train dataset)
+ anno_file = self.dataset.get_anno()
+ dataset = self.dataset
+ if self.mode == 'train' and validate:
+ eval_dataset = self.cfg['EvalDataset']
+ eval_dataset.check_or_download_dataset()
+ anno_file = eval_dataset.get_anno()
+ dataset = eval_dataset
+
+ IouType = self.cfg['IouType'] if 'IouType' in self.cfg else 'bbox'
+ if self.cfg.metric == "COCO":
+ self._metrics = [
+ COCOMetric(
+ anno_file=anno_file,
+ clsid2catid=clsid2catid,
+ classwise=classwise,
+ output_eval=output_eval,
+ bias=bias,
+ IouType=IouType,
+ save_prediction_only=save_prediction_only)
+ ]
+ elif self.cfg.metric == "SNIPERCOCO": # sniper
+ self._metrics = [
+ SNIPERCOCOMetric(
+ anno_file=anno_file,
+ dataset=dataset,
+ clsid2catid=clsid2catid,
+ classwise=classwise,
+ output_eval=output_eval,
+ bias=bias,
+ IouType=IouType,
+ save_prediction_only=save_prediction_only)
+ ]
+ elif self.cfg.metric == 'RBOX':
+ # TODO: bias should be unified
+ bias = self.cfg['bias'] if 'bias' in self.cfg else 0
+ output_eval = self.cfg['output_eval'] \
+ if 'output_eval' in self.cfg else None
+ save_prediction_only = self.cfg.get('save_prediction_only', False)
+
+            # pass clsid2catid info to the metric instance to avoid loading
+            # the annotation file multiple times
+ clsid2catid = {v: k for k, v in self.dataset.catid2clsid.items()} \
+ if self.mode == 'eval' else None
+
+            # when validating during training, the annotation file should be
+            # taken from EvalDataset instead of self.dataset (the train dataset)
+ anno_file = self.dataset.get_anno()
+ if self.mode == 'train' and validate:
+ eval_dataset = self.cfg['EvalDataset']
+ eval_dataset.check_or_download_dataset()
+ anno_file = eval_dataset.get_anno()
+
+ self._metrics = [
+ RBoxMetric(
+ anno_file=anno_file,
+ clsid2catid=clsid2catid,
+ classwise=classwise,
+ output_eval=output_eval,
+ bias=bias,
+ save_prediction_only=save_prediction_only)
+ ]
+ elif self.cfg.metric == 'VOC':
+ self._metrics = [
+ VOCMetric(
+ label_list=self.dataset.get_label_list(),
+ class_num=self.cfg.num_classes,
+ map_type=self.cfg.map_type,
+ classwise=classwise)
+ ]
+ elif self.cfg.metric == 'WiderFace':
+ multi_scale = self.cfg.multi_scale_eval if 'multi_scale_eval' in self.cfg else True
+ self._metrics = [
+ WiderFaceMetric(
+ image_dir=os.path.join(self.dataset.dataset_dir,
+ self.dataset.image_dir),
+ anno_file=self.dataset.get_anno(),
+ multi_scale=multi_scale)
+ ]
+ elif self.cfg.metric == 'KeyPointTopDownCOCOEval':
+ eval_dataset = self.cfg['EvalDataset']
+ eval_dataset.check_or_download_dataset()
+ anno_file = eval_dataset.get_anno()
+ save_prediction_only = self.cfg.get('save_prediction_only', False)
+ self._metrics = [
+ KeyPointTopDownCOCOEval(
+ anno_file,
+ len(eval_dataset),
+ self.cfg.num_joints,
+ self.cfg.save_dir,
+ save_prediction_only=save_prediction_only)
+ ]
+ elif self.cfg.metric == 'KeyPointTopDownMPIIEval':
+ eval_dataset = self.cfg['EvalDataset']
+ eval_dataset.check_or_download_dataset()
+ anno_file = eval_dataset.get_anno()
+ save_prediction_only = self.cfg.get('save_prediction_only', False)
+ self._metrics = [
+ KeyPointTopDownMPIIEval(
+ anno_file,
+ len(eval_dataset),
+ self.cfg.num_joints,
+ self.cfg.save_dir,
+ save_prediction_only=save_prediction_only)
+ ]
+ elif self.cfg.metric == 'MOTDet':
+ self._metrics = [JDEDetMetric(), ]
+ else:
+ logger.warning("Metric not support for metric type {}".format(
+ self.cfg.metric))
+ self._metrics = []
+
+ def _reset_metrics(self):
+ for metric in self._metrics:
+ metric.reset()
+
+ def register_callbacks(self, callbacks):
+ callbacks = [c for c in list(callbacks) if c is not None]
+ for c in callbacks:
+            assert isinstance(c, Callback), \
+                "callbacks should be instances of a subclass of Callback"
+ self._callbacks.extend(callbacks)
+ self._compose_callback = ComposeCallback(self._callbacks)
+
+ def register_metrics(self, metrics):
+ metrics = [m for m in list(metrics) if m is not None]
+ for m in metrics:
+ assert isinstance(m, Metric), \
+ "metrics shoule be instances of subclass of Metric"
+ self._metrics.extend(metrics)
+
+ def load_weights(self, weights):
+ if self.is_loaded_weights:
+ return
+ self.start_epoch = 0
+ load_pretrain_weight(self.model, weights)
+ logger.debug("Load weights {} to start training".format(weights))
+
+ def load_weights_sde(self, det_weights, reid_weights):
+ if self.model.detector:
+ load_weight(self.model.detector, det_weights)
+ load_weight(self.model.reid, reid_weights)
+ else:
+ load_weight(self.model.reid, reid_weights)
+
+ def resume_weights(self, weights):
+ # support Distill resume weights
+ if hasattr(self.model, 'student_model'):
+ self.start_epoch = load_weight(self.model.student_model, weights,
+ self.optimizer)
+ else:
+ self.start_epoch = load_weight(self.model, weights, self.optimizer)
+ logger.debug("Resume weights of epoch {}".format(self.start_epoch))
+
+ def train(self, validate=False):
+ assert self.mode == 'train', "Model not in 'train' mode"
+ Init_mark = False
+
+ model = self.model
+ if self.cfg.get('fleet', False):
+ model = fleet.distributed_model(model)
+ self.optimizer = fleet.distributed_optimizer(self.optimizer)
+ elif self._nranks > 1:
+ find_unused_parameters = self.cfg[
+ 'find_unused_parameters'] if 'find_unused_parameters' in self.cfg else False
+ model = paddle.DataParallel(
+ self.model, find_unused_parameters=find_unused_parameters)
+
+ # initial fp16
+ if self.cfg.get('fp16', False):
+ scaler = amp.GradScaler(
+ enable=self.cfg.use_gpu, init_loss_scaling=1024)
+
+ self.status.update({
+ 'epoch_id': self.start_epoch,
+ 'step_id': 0,
+ 'steps_per_epoch': len(self.loader)
+ })
+
+ self.status['batch_time'] = stats.SmoothedValue(
+ self.cfg.log_iter, fmt='{avg:.4f}')
+ self.status['data_time'] = stats.SmoothedValue(
+ self.cfg.log_iter, fmt='{avg:.4f}')
+ self.status['training_staus'] = stats.TrainingStats(self.cfg.log_iter)
+
+ if self.cfg.get('print_flops', False):
+ self._flops(self.loader)
+ profiler_options = self.cfg.get('profiler_options', None)
+
+ self._compose_callback.on_train_begin(self.status)
+
+ for epoch_id in range(self.start_epoch, self.cfg.epoch):
+ self.status['mode'] = 'train'
+ self.status['epoch_id'] = epoch_id
+ self._compose_callback.on_epoch_begin(self.status)
+ self.loader.dataset.set_epoch(epoch_id)
+ model.train()
+ iter_tic = time.time()
+ for step_id, data in enumerate(self.loader):
+ self.status['data_time'].update(time.time() - iter_tic)
+ self.status['step_id'] = step_id
+ profiler.add_profiler_step(profiler_options)
+ self._compose_callback.on_step_begin(self.status)
+ data['epoch_id'] = epoch_id
+
+ if self.cfg.get('fp16', False):
+ with amp.auto_cast(enable=self.cfg.use_gpu):
+ # model forward
+ outputs = model(data)
+ loss = outputs['loss']
+
+ # model backward
+ scaled_loss = scaler.scale(loss)
+ scaled_loss.backward()
+                    # in dygraph mode, optimizer.minimize is equivalent to optimizer.step
+ scaler.minimize(self.optimizer, scaled_loss)
+ else:
+ # model forward
+ outputs = model(data)
+ loss = outputs['loss']
+ # model backward
+ loss.backward()
+ self.optimizer.step()
+ curr_lr = self.optimizer.get_lr()
+ self.lr.step()
+ if self.cfg.get('unstructured_prune'):
+ self.pruner.step()
+ self.optimizer.clear_grad()
+ self.status['learning_rate'] = curr_lr
+
+ if self._nranks < 2 or self._local_rank == 0:
+ self.status['training_staus'].update(outputs)
+
+ self.status['batch_time'].update(time.time() - iter_tic)
+ self._compose_callback.on_step_end(self.status)
+ if self.use_ema:
+ self.ema.update(self.model)
+ iter_tic = time.time()
+
+ # apply ema weight on model
+ if self.use_ema:
+ weight = copy.deepcopy(self.model.state_dict())
+ self.model.set_dict(self.ema.apply())
+ if self.cfg.get('unstructured_prune'):
+ self.pruner.update_params()
+
+ self._compose_callback.on_epoch_end(self.status)
+
+ if validate and (self._nranks < 2 or self._local_rank == 0) \
+ and ((epoch_id + 1) % self.cfg.snapshot_epoch == 0 \
+ or epoch_id == self.end_epoch - 1):
+ if not hasattr(self, '_eval_loader'):
+ # build evaluation dataset and loader
+ self._eval_dataset = self.cfg.EvalDataset
+ self._eval_batch_sampler = \
+ paddle.io.BatchSampler(
+ self._eval_dataset,
+ batch_size=self.cfg.EvalReader['batch_size'])
+ self._eval_loader = create('EvalReader')(
+ self._eval_dataset,
+ self.cfg.worker_num,
+ batch_sampler=self._eval_batch_sampler)
+ # if validation in training is enabled, metrics should be re-init
+ # Init_mark makes sure this code will only execute once
+                    if validate and not Init_mark:
+ Init_mark = True
+ self._init_metrics(validate=validate)
+ self._reset_metrics()
+ with paddle.no_grad():
+ self.status['save_best_model'] = True
+ self._eval_with_loader(self._eval_loader)
+
+ # restore origin weight on model
+ if self.use_ema:
+ self.model.set_dict(weight)
+
+ self._compose_callback.on_train_end(self.status)
+
+ def _eval_with_loader(self, loader):
+ sample_num = 0
+ tic = time.time()
+ self._compose_callback.on_epoch_begin(self.status)
+ self.status['mode'] = 'eval'
+ self.model.eval()
+ if self.cfg.get('print_flops', False):
+ self._flops(loader)
+ for step_id, data in enumerate(loader):
+ self.status['step_id'] = step_id
+ self._compose_callback.on_step_begin(self.status)
+ # forward
+ outs = self.model(data)
+
+ # update metrics
+ for metric in self._metrics:
+ metric.update(data, outs)
+
+ sample_num += data['im_id'].numpy().shape[0]
+ self._compose_callback.on_step_end(self.status)
+
+ self.status['sample_num'] = sample_num
+ self.status['cost_time'] = time.time() - tic
+
+ # accumulate metric to log out
+ for metric in self._metrics:
+ metric.accumulate()
+ metric.log()
+ self._compose_callback.on_epoch_end(self.status)
+        # reset metric states, since metrics may be evaluated multiple times
+ self._reset_metrics()
+
+ def evaluate(self):
+ with paddle.no_grad():
+ self._eval_with_loader(self.loader)
+
+ def predict(self,
+ images,
+ draw_threshold=0.5,
+ output_dir='output',
+ save_txt=False):
+ self.dataset.set_images(images)
+ loader = create('TestReader')(self.dataset, 0)
+
+ imid2path = self.dataset.get_imid2path()
+
+ anno_file = self.dataset.get_anno()
+ clsid2catid, catid2name = get_categories(
+ self.cfg.metric, anno_file=anno_file)
+
+ # Run Infer
+ self.status['mode'] = 'test'
+ self.model.eval()
+ if self.cfg.get('print_flops', False):
+ self._flops(loader)
+ results = []
+ for step_id, data in enumerate(loader):
+ self.status['step_id'] = step_id
+ # forward
+ outs = self.model(data)
+
+ for key in ['im_shape', 'scale_factor', 'im_id']:
+ outs[key] = data[key]
+ for key, value in outs.items():
+ if hasattr(value, 'numpy'):
+ outs[key] = value.numpy()
+ results.append(outs)
+ # sniper
+ if type(self.dataset) == SniperCOCODataSet:
+ results = self.dataset.anno_cropper.aggregate_chips_detections(
+ results)
+
+ for outs in results:
+ batch_res = get_infer_results(outs, clsid2catid)
+ bbox_num = outs['bbox_num']
+
+ start = 0
+ for i, im_id in enumerate(outs['im_id']):
+ image_path = imid2path[int(im_id)]
+ image = Image.open(image_path).convert('RGB')
+ image = ImageOps.exif_transpose(image)
+ self.status['original_image'] = np.array(image.copy())
+
+ end = start + bbox_num[i]
+ bbox_res = batch_res['bbox'][start:end] \
+ if 'bbox' in batch_res else None
+ mask_res = batch_res['mask'][start:end] \
+ if 'mask' in batch_res else None
+ segm_res = batch_res['segm'][start:end] \
+ if 'segm' in batch_res else None
+ keypoint_res = batch_res['keypoint'][start:end] \
+ if 'keypoint' in batch_res else None
+ image = visualize_results(
+ image, bbox_res, mask_res, segm_res, keypoint_res,
+ int(im_id), catid2name, draw_threshold)
+ self.status['result_image'] = np.array(image.copy())
+ if self._compose_callback:
+ self._compose_callback.on_step_end(self.status)
+ # save image with detection
+ save_name = self._get_save_image_name(output_dir, image_path)
+ logger.info("Detection bbox results save in {}".format(
+ save_name))
+ image.save(save_name, quality=95)
+ if save_txt:
+ save_path = os.path.splitext(save_name)[0] + '.txt'
+ results = {}
+ results["im_id"] = im_id
+ if bbox_res:
+ results["bbox_res"] = bbox_res
+ if keypoint_res:
+ results["keypoint_res"] = keypoint_res
+ save_result(save_path, results, catid2name, draw_threshold)
+ start = end
+
+ def _get_save_image_name(self, output_dir, image_path):
+ """
+ Get save image name from source image path.
+ """
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ image_name = os.path.split(image_path)[-1]
+ name, ext = os.path.splitext(image_name)
+ return os.path.join(output_dir, "{}".format(name)) + ext
+
+ def _get_infer_cfg_and_input_spec(self, save_dir, prune_input=True):
+ image_shape = None
+ im_shape = [None, 2]
+ scale_factor = [None, 2]
+ if self.cfg.architecture in MOT_ARCH:
+ test_reader_name = 'TestMOTReader'
+ else:
+ test_reader_name = 'TestReader'
+ if 'inputs_def' in self.cfg[test_reader_name]:
+ inputs_def = self.cfg[test_reader_name]['inputs_def']
+ image_shape = inputs_def.get('image_shape', None)
+ # set image_shape=[None, 3, -1, -1] as default
+ if image_shape is None:
+ image_shape = [None, 3, -1, -1]
+
+ if len(image_shape) == 3:
+ image_shape = [None] + image_shape
+ else:
+ im_shape = [image_shape[0], 2]
+ scale_factor = [image_shape[0], 2]
+
+ if hasattr(self.model, 'deploy'):
+ self.model.deploy = True
+ if hasattr(self.model, 'fuse_norm'):
+ self.model.fuse_norm = self.cfg['TestReader'].get('fuse_normalize',
+ False)
+
+ # Save infer cfg
+ _dump_infer_config(self.cfg,
+ os.path.join(save_dir, 'infer_cfg.yml'), image_shape,
+ self.model)
+
+ input_spec = [{
+ "image": InputSpec(
+ shape=image_shape, name='image'),
+ "im_shape": InputSpec(
+ shape=im_shape, name='im_shape'),
+ "scale_factor": InputSpec(
+ shape=scale_factor, name='scale_factor')
+ }]
+ if self.cfg.architecture == 'DeepSORT':
+ input_spec[0].update({
+ "crops": InputSpec(
+ shape=[None, 3, 192, 64], name='crops')
+ })
+ if prune_input:
+ static_model = paddle.jit.to_static(
+ self.model, input_spec=input_spec)
+            # NOTE: dy2st does not prune the program, but jit.save prunes it
+            # according to the input spec, so prune the input spec here and
+            # save with the pruned input spec
+ pruned_input_spec = _prune_input_spec(
+ input_spec, static_model.forward.main_program,
+ static_model.forward.outputs)
+ else:
+ static_model = None
+ pruned_input_spec = input_spec
+
+        # TODO: hard code, remove it once pruning input_spec is supported.
+ if self.cfg.architecture == 'PicoDet':
+ pruned_input_spec = [{
+ "image": InputSpec(
+ shape=image_shape, name='image')
+ }]
+
+ return static_model, pruned_input_spec
+
+ def export(self, output_dir='output_inference'):
+ self.model.eval()
+ model_name = os.path.splitext(os.path.split(self.cfg.filename)[-1])[0]
+ save_dir = os.path.join(output_dir, model_name)
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ static_model, pruned_input_spec = self._get_infer_cfg_and_input_spec(
+ save_dir)
+
+ # dy2st and save model
+ if 'slim' not in self.cfg or self.cfg['slim_type'] != 'QAT':
+ paddle.jit.save(
+ static_model,
+ os.path.join(save_dir, 'model'),
+ input_spec=pruned_input_spec)
+ else:
+ self.cfg.slim.save_quantized_model(
+ self.model,
+ os.path.join(save_dir, 'model'),
+ input_spec=pruned_input_spec)
+ logger.info("Export model and saved in {}".format(save_dir))
+
+ def post_quant(self, output_dir='output_inference'):
+ model_name = os.path.splitext(os.path.split(self.cfg.filename)[-1])[0]
+ save_dir = os.path.join(output_dir, model_name)
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+
+ for idx, data in enumerate(self.loader):
+ self.model(data)
+ if idx == int(self.cfg.get('quant_batch_num', 10)):
+ break
+
+ # TODO: support prune input_spec
+ _, pruned_input_spec = self._get_infer_cfg_and_input_spec(
+ save_dir, prune_input=False)
+
+ self.cfg.slim.save_quantized_model(
+ self.model,
+ os.path.join(save_dir, 'model'),
+ input_spec=pruned_input_spec)
+ logger.info("Export Post-Quant model and saved in {}".format(save_dir))
+
+ def _flops(self, loader):
+ self.model.eval()
+ try:
+ import paddleslim
+ except Exception as e:
+ logger.warning(
+ 'Unable to calculate flops, please install paddleslim, for example: `pip install paddleslim`'
+ )
+ return
+
+ from paddleslim.analysis import dygraph_flops as flops
+ input_data = None
+ for data in loader:
+ input_data = data
+ break
+
+ input_spec = [{
+ "image": input_data['image'][0].unsqueeze(0),
+ "im_shape": input_data['im_shape'][0].unsqueeze(0),
+ "scale_factor": input_data['scale_factor'][0].unsqueeze(0)
+ }]
+ flops = flops(self.model, input_spec) / (1000**3)
+ logger.info(" Model FLOPs : {:.6f}G. (image shape is {})".format(
+ flops, input_data['image'][0].unsqueeze(0).shape))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/README.md b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/README.md
new file mode 100644
index 000000000..7ada0acf7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/README.md
@@ -0,0 +1,38 @@
+# Custom OP Compilation
+The rotated-box IoU OP is implemented with reference to [Custom External Operators](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/07_new_op/new_custom_op.html).
+
+## 1. Environment Dependencies
+- Paddle >= 2.0.1
+- gcc 8.2
+
+## 2. Installation
+```
+python3.7 setup.py install
+```
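+
+Note: `setup.py` (included later in this patch) builds the CUDA kernel via `CUDAExtension` when the installed Paddle is compiled with CUDA, and falls back to a CPU-only `CppExtension` otherwise; in the CUDA case the `nvcc` toolchain must be available.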
+
+Use the OP as follows:
+```
+# import the custom op
+import numpy as np
+import paddle
+from rbox_iou_ops import rbox_iou
+
+paddle.set_device('gpu:0')
+paddle.disable_static()
+
+rbox1 = np.random.rand(13000, 5)
+rbox2 = np.random.rand(7, 5)
+
+pd_rbox1 = paddle.to_tensor(rbox1)
+pd_rbox2 = paddle.to_tensor(rbox2)
+
+iou = rbox_iou(pd_rbox1, pd_rbox2)
+print('iou', iou)
+```
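+
+The returned `iou` tensor is the pairwise IoU matrix with shape `[rbox1_num, rbox2_num]` (here `[13000, 7]`), matching the OP's `InferShape` rule.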
+
+## 3. Unit Test
+The unit test in `test.py` verifies the custom OP by comparing its output against a pure-Python (shapely-based) implementation.
+
+Because the Python and C++ implementations differ slightly in numerical detail, the error tolerance is set to 0.02.
+```
+python3.7 test.py
+```
+If `rbox_iou OP compute right!` is printed, the OP test passed.
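+
+For a quick standalone cross-check against shapely, the minimal sketch below can also be used (the helper `rbox_to_polygon` is illustrative and not part of this repo; the full comparison lives in `test.py`):
+
+```
+import numpy as np
+import paddle
+from shapely.geometry import Polygon
+from rbox_iou_ops import rbox_iou
+
+def rbox_to_polygon(rb):
+    # rb = [x_ctr, y_ctr, w, h, angle(rad)] -> shapely Polygon
+    x, y, w, h, a = rb
+    pts = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
+                    [w / 2, h / 2], [-w / 2, h / 2]])
+    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
+    return Polygon(pts @ rot.T + [x, y])
+
+rbox1 = np.array([[50.0, 50.0, 20.0, 10.0, 0.3]])
+rbox2 = np.array([[52.0, 49.0, 18.0, 12.0, 0.1]])
+
+# IoU from the custom OP (shape [1, 1] here)
+got = rbox_iou(paddle.to_tensor(rbox1), paddle.to_tensor(rbox2)).numpy()[0, 0]
+
+# reference IoU via shapely
+p1, p2 = rbox_to_polygon(rbox1[0]), rbox_to_polygon(rbox2[0])
+inter = p1.intersection(p2).area
+ref = inter / (p1.area + p2.area - inter)
+
+assert abs(got - ref) < 0.02, (got, ref)
+print('rbox_iou OK:', got, ref)
+```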
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cc
new file mode 100644
index 000000000..6031953d2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cc
@@ -0,0 +1,97 @@
+// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated
+
+#include "rbox_iou_op.h"
+#include "paddle/extension.h"
+
+
+template <typename T>
+void rbox_iou_cpu_kernel(
+ const int rbox1_num,
+ const int rbox2_num,
+ const T* rbox1_data_ptr,
+ const T* rbox2_data_ptr,
+ T* output_data_ptr) {
+
+ int i, j;
+ for (i = 0; i < rbox1_num; i++) {
+ for (j = 0; j < rbox2_num; j++) {
+ int offset = i * rbox2_num + j;
+ output_data_ptr[offset] = rbox_iou_single(rbox1_data_ptr + i * 5, rbox2_data_ptr + j * 5);
+ }
+ }
+}
+
+
+#define CHECK_INPUT_CPU(x) PD_CHECK(x.place() == paddle::PlaceType::kCPU, #x " must be a CPU Tensor.")
+
+std::vector<paddle::Tensor> RboxIouCPUForward(const paddle::Tensor& rbox1, const paddle::Tensor& rbox2) {
+ CHECK_INPUT_CPU(rbox1);
+ CHECK_INPUT_CPU(rbox2);
+
+ auto rbox1_num = rbox1.shape()[0];
+ auto rbox2_num = rbox2.shape()[0];
+
+ auto output = paddle::Tensor(paddle::PlaceType::kCPU, {rbox1_num, rbox2_num});
+
+ PD_DISPATCH_FLOATING_TYPES(
+ rbox1.type(),
+ "rbox_iou_cpu_kernel",
+ ([&] {
+            rbox_iou_cpu_kernel<data_t>(
+                rbox1_num,
+                rbox2_num,
+                rbox1.data<data_t>(),
+                rbox2.data<data_t>(),
+                output.mutable_data<data_t>());
+ }));
+
+ return {output};
+}
+
+
+#ifdef PADDLE_WITH_CUDA
+std::vector<paddle::Tensor> RboxIouCUDAForward(const paddle::Tensor& rbox1, const paddle::Tensor& rbox2);
+#endif
+
+
+#define CHECK_INPUT_SAME(x1, x2) PD_CHECK(x1.place() == x2.place(), "inputs must be in the same place.")
+
+std::vector<paddle::Tensor> RboxIouForward(const paddle::Tensor& rbox1, const paddle::Tensor& rbox2) {
+ CHECK_INPUT_SAME(rbox1, rbox2);
+ if (rbox1.place() == paddle::PlaceType::kCPU) {
+ return RboxIouCPUForward(rbox1, rbox2);
+#ifdef PADDLE_WITH_CUDA
+ } else if (rbox1.place() == paddle::PlaceType::kGPU) {
+ return RboxIouCUDAForward(rbox1, rbox2);
+#endif
+ }
+}
+
+std::vector<std::vector<int64_t>> InferShape(std::vector<int64_t> rbox1_shape, std::vector<int64_t> rbox2_shape) {
+ return {{rbox1_shape[0], rbox2_shape[0]}};
+}
+
+std::vector<paddle::DataType> InferDtype(paddle::DataType t1, paddle::DataType t2) {
+ return {t1};
+}
+
+PD_BUILD_OP(rbox_iou)
+ .Inputs({"RBOX1", "RBOX2"})
+ .Outputs({"Output"})
+ .SetKernelFn(PD_KERNEL(RboxIouForward))
+ .SetInferShapeFn(PD_INFER_SHAPE(InferShape))
+ .SetInferDtypeFn(PD_INFER_DTYPE(InferDtype));
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cu b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cu
new file mode 100644
index 000000000..8ec43e54b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.cu
@@ -0,0 +1,120 @@
+// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated
+
+#include "rbox_iou_op.h"
+#include "paddle/extension.h"
+
+// 2D block with 32 * 16 = 512 threads per block
+const int BLOCK_DIM_X = 32;
+const int BLOCK_DIM_Y = 16;
+
+/**
+ Computes ceil(a / b)
+*/
+
+static inline int CeilDiv(const int a, const int b) {
+ return (a + b - 1) / b;
+}
+
+template <typename T>
+__global__ void rbox_iou_cuda_kernel(
+ const int rbox1_num,
+ const int rbox2_num,
+ const T* rbox1_data_ptr,
+ const T* rbox2_data_ptr,
+ T* output_data_ptr) {
+
+ // get row_start and col_start
+ const int rbox1_block_idx = blockIdx.x * blockDim.x;
+ const int rbox2_block_idx = blockIdx.y * blockDim.y;
+
+ const int rbox1_thread_num = min(rbox1_num - rbox1_block_idx, blockDim.x);
+ const int rbox2_thread_num = min(rbox2_num - rbox2_block_idx, blockDim.y);
+
+ __shared__ T block_boxes1[BLOCK_DIM_X * 5];
+ __shared__ T block_boxes2[BLOCK_DIM_Y * 5];
+
+
+ // It's safe to copy using threadIdx.x since BLOCK_DIM_X >= BLOCK_DIM_Y
+ if (threadIdx.x < rbox1_thread_num && threadIdx.y == 0) {
+ block_boxes1[threadIdx.x * 5 + 0] =
+ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 0];
+ block_boxes1[threadIdx.x * 5 + 1] =
+ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 1];
+ block_boxes1[threadIdx.x * 5 + 2] =
+ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 2];
+ block_boxes1[threadIdx.x * 5 + 3] =
+ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 3];
+ block_boxes1[threadIdx.x * 5 + 4] =
+ rbox1_data_ptr[(rbox1_block_idx + threadIdx.x) * 5 + 4];
+ }
+
+ // threadIdx.x < BLOCK_DIM_Y=rbox2_thread_num, just use same condition as above: threadIdx.y == 0
+ if (threadIdx.x < rbox2_thread_num && threadIdx.y == 0) {
+ block_boxes2[threadIdx.x * 5 + 0] =
+ rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 0];
+ block_boxes2[threadIdx.x * 5 + 1] =
+ rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 1];
+ block_boxes2[threadIdx.x * 5 + 2] =
+ rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 2];
+ block_boxes2[threadIdx.x * 5 + 3] =
+ rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 3];
+ block_boxes2[threadIdx.x * 5 + 4] =
+ rbox2_data_ptr[(rbox2_block_idx + threadIdx.x) * 5 + 4];
+ }
+
+ // sync
+ __syncthreads();
+
+ if (threadIdx.x < rbox1_thread_num && threadIdx.y < rbox2_thread_num) {
+ int offset = (rbox1_block_idx + threadIdx.x) * rbox2_num + rbox2_block_idx + threadIdx.y;
+ output_data_ptr[offset] = rbox_iou_single(block_boxes1 + threadIdx.x * 5, block_boxes2 + threadIdx.y * 5);
+ }
+}
+
+#define CHECK_INPUT_GPU(x) PD_CHECK(x.place() == paddle::PlaceType::kGPU, #x " must be a GPU Tensor.")
+
+std::vector<paddle::Tensor> RboxIouCUDAForward(const paddle::Tensor& rbox1, const paddle::Tensor& rbox2) {
+ CHECK_INPUT_GPU(rbox1);
+ CHECK_INPUT_GPU(rbox2);
+
+ auto rbox1_num = rbox1.shape()[0];
+ auto rbox2_num = rbox2.shape()[0];
+
+ auto output = paddle::Tensor(paddle::PlaceType::kGPU, {rbox1_num, rbox2_num});
+
+ const int blocks_x = CeilDiv(rbox1_num, BLOCK_DIM_X);
+ const int blocks_y = CeilDiv(rbox2_num, BLOCK_DIM_Y);
+
+ dim3 blocks(blocks_x, blocks_y);
+ dim3 threads(BLOCK_DIM_X, BLOCK_DIM_Y);
+
+ PD_DISPATCH_FLOATING_TYPES(
+ rbox1.type(),
+ "rbox_iou_cuda_kernel",
+ ([&] {
+        rbox_iou_cuda_kernel<data_t><<<blocks, threads, 0, rbox1.stream()>>>(
+            rbox1_num,
+            rbox2_num,
+            rbox1.data<data_t>(),
+            rbox2.data<data_t>(),
+            output.mutable_data<data_t>());
+ }));
+
+ return {output};
+}
+
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.h b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.h
new file mode 100644
index 000000000..77fb62e39
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/rbox_iou_op.h
@@ -0,0 +1,356 @@
+// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/ops/box_iou_rotated
+
+#pragma once
+
+#include <cassert>
+#include <cmath>
+#include <vector>
+
+#ifdef __CUDACC__
+// Designates functions callable from the host (CPU) and the device (GPU)
+#define HOST_DEVICE __host__ __device__
+#define HOST_DEVICE_INLINE HOST_DEVICE __forceinline__
+#else
+#include <algorithm>
+#define HOST_DEVICE
+#define HOST_DEVICE_INLINE HOST_DEVICE inline
+#endif
+
+namespace {
+
+template <typename T>
+struct RotatedBox {
+ T x_ctr, y_ctr, w, h, a;
+};
+
+template <typename T>
+struct Point {
+ T x, y;
+ HOST_DEVICE_INLINE Point(const T& px = 0, const T& py = 0) : x(px), y(py) {}
+ HOST_DEVICE_INLINE Point operator+(const Point& p) const {
+ return Point(x + p.x, y + p.y);
+ }
+ HOST_DEVICE_INLINE Point& operator+=(const Point& p) {
+ x += p.x;
+ y += p.y;
+ return *this;
+ }
+ HOST_DEVICE_INLINE Point operator-(const Point& p) const {
+ return Point(x - p.x, y - p.y);
+ }
+ HOST_DEVICE_INLINE Point operator*(const T coeff) const {
+ return Point(x * coeff, y * coeff);
+ }
+};
+
+template <typename T>
+HOST_DEVICE_INLINE T dot_2d(const Point<T>& A, const Point<T>& B) {
+ return A.x * B.x + A.y * B.y;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE T cross_2d(const Point<T>& A, const Point<T>& B) {
+ return A.x * B.y - B.x * A.y;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE void get_rotated_vertices(
+    const RotatedBox<T>& box,
+    Point<T> (&pts)[4]) {
+ // M_PI / 180. == 0.01745329251
+ //double theta = box.a * 0.01745329251;
+ //MODIFIED
+ double theta = box.a;
+ T cosTheta2 = (T)cos(theta) * 0.5f;
+ T sinTheta2 = (T)sin(theta) * 0.5f;
+
+ // y: top --> down; x: left --> right
+ pts[0].x = box.x_ctr - sinTheta2 * box.h - cosTheta2 * box.w;
+ pts[0].y = box.y_ctr + cosTheta2 * box.h - sinTheta2 * box.w;
+ pts[1].x = box.x_ctr + sinTheta2 * box.h - cosTheta2 * box.w;
+ pts[1].y = box.y_ctr - cosTheta2 * box.h - sinTheta2 * box.w;
+ pts[2].x = 2 * box.x_ctr - pts[0].x;
+ pts[2].y = 2 * box.y_ctr - pts[0].y;
+ pts[3].x = 2 * box.x_ctr - pts[1].x;
+ pts[3].y = 2 * box.y_ctr - pts[1].y;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE int get_intersection_points(
+    const Point<T> (&pts1)[4],
+    const Point<T> (&pts2)[4],
+    Point<T> (&intersections)[24]) {
+ // Line vector
+ // A line from p1 to p2 is: p1 + (p2-p1)*t, t=[0,1]
+  Point<T> vec1[4], vec2[4];
+ for (int i = 0; i < 4; i++) {
+ vec1[i] = pts1[(i + 1) % 4] - pts1[i];
+ vec2[i] = pts2[(i + 1) % 4] - pts2[i];
+ }
+
+ // Line test - test all line combos for intersection
+ int num = 0; // number of intersections
+ for (int i = 0; i < 4; i++) {
+ for (int j = 0; j < 4; j++) {
+ // Solve for 2x2 Ax=b
+ T det = cross_2d(vec2[j], vec1[i]);
+
+ // This takes care of parallel lines
+ if (fabs(det) <= 1e-14) {
+ continue;
+ }
+
+ auto vec12 = pts2[j] - pts1[i];
+
+ T t1 = cross_2d(vec2[j], vec12) / det;
+ T t2 = cross_2d(vec1[i], vec12) / det;
+
+ if (t1 >= 0.0f && t1 <= 1.0f && t2 >= 0.0f && t2 <= 1.0f) {
+ intersections[num++] = pts1[i] + vec1[i] * t1;
+ }
+ }
+ }
+
+ // Check for vertices of rect1 inside rect2
+ {
+ const auto& AB = vec2[0];
+ const auto& DA = vec2[3];
+ auto ABdotAB = dot_2d(AB, AB);
+ auto ADdotAD = dot_2d(DA, DA);
+ for (int i = 0; i < 4; i++) {
+ // assume ABCD is the rectangle, and P is the point to be judged
+ // P is inside ABCD iff. P's projection on AB lies within AB
+ // and P's projection on AD lies within AD
+
+ auto AP = pts1[i] - pts2[0];
+
+ auto APdotAB = dot_2d(AP, AB);
+ auto APdotAD = -dot_2d(AP, DA);
+
+ if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) &&
+ (APdotAD <= ADdotAD)) {
+ intersections[num++] = pts1[i];
+ }
+ }
+ }
+
+ // Reverse the check - check for vertices of rect2 inside rect1
+ {
+ const auto& AB = vec1[0];
+ const auto& DA = vec1[3];
+ auto ABdotAB = dot_2d(AB, AB);
+ auto ADdotAD = dot_2d(DA, DA);
+ for (int i = 0; i < 4; i++) {
+ auto AP = pts2[i] - pts1[0];
+
+ auto APdotAB = dot_2d(AP, AB);
+ auto APdotAD = -dot_2d(AP, DA);
+
+ if ((APdotAB >= 0) && (APdotAD >= 0) && (APdotAB <= ABdotAB) &&
+ (APdotAD <= ADdotAD)) {
+ intersections[num++] = pts2[i];
+ }
+ }
+ }
+
+ return num;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE int convex_hull_graham(
+    const Point<T> (&p)[24],
+    const int& num_in,
+    Point<T> (&q)[24],
+    bool shift_to_zero = false) {
+ assert(num_in >= 2);
+
+ // Step 1:
+ // Find point with minimum y
+ // if more than 1 points have the same minimum y,
+ // pick the one with the minimum x.
+ int t = 0;
+ for (int i = 1; i < num_in; i++) {
+ if (p[i].y < p[t].y || (p[i].y == p[t].y && p[i].x < p[t].x)) {
+ t = i;
+ }
+ }
+ auto& start = p[t]; // starting point
+
+ // Step 2:
+ // Subtract starting point from every points (for sorting in the next step)
+ for (int i = 0; i < num_in; i++) {
+ q[i] = p[i] - start;
+ }
+
+ // Swap the starting point to position 0
+ auto tmp = q[0];
+ q[0] = q[t];
+ q[t] = tmp;
+
+ // Step 3:
+ // Sort point 1 ~ num_in according to their relative cross-product values
+ // (essentially sorting according to angles)
+ // If the angles are the same, sort according to their distance to origin
+ T dist[24];
+ for (int i = 0; i < num_in; i++) {
+ dist[i] = dot_2d(q[i], q[i]);
+ }
+
+#ifdef __CUDACC__
+ // CUDA version
+ // In the future, we can potentially use thrust
+ // for sorting here to improve speed (though not guaranteed)
+ for (int i = 1; i < num_in - 1; i++) {
+ for (int j = i + 1; j < num_in; j++) {
+ T crossProduct = cross_2d(q[i], q[j]);
+ if ((crossProduct < -1e-6) ||
+ (fabs(crossProduct) < 1e-6 && dist[i] > dist[j])) {
+ auto q_tmp = q[i];
+ q[i] = q[j];
+ q[j] = q_tmp;
+ auto dist_tmp = dist[i];
+ dist[i] = dist[j];
+ dist[j] = dist_tmp;
+ }
+ }
+ }
+#else
+ // CPU version
+  std::sort(
+      q + 1, q + num_in, [](const Point<T>& A, const Point<T>& B) -> bool {
+ T temp = cross_2d(A, B);
+ if (fabs(temp) < 1e-6) {
+ return dot_2d(A, A) < dot_2d(B, B);
+ } else {
+ return temp > 0;
+ }
+ });
+#endif
+
+ // Step 4:
+ // Make sure there are at least 2 points (that don't overlap with each other)
+ // in the stack
+ int k; // index of the non-overlapped second point
+ for (k = 1; k < num_in; k++) {
+ if (dist[k] > 1e-8) {
+ break;
+ }
+ }
+ if (k == num_in) {
+ // We reach the end, which means the convex hull is just one point
+ q[0] = p[t];
+ return 1;
+ }
+ q[1] = q[k];
+ int m = 2; // 2 points in the stack
+ // Step 5:
+ // Finally we can start the scanning process.
+ // When a non-convex relationship between the 3 points is found
+ // (either concave shape or duplicated points),
+ // we pop the previous point from the stack
+ // until the 3-point relationship is convex again, or
+ // until the stack only contains two points
+ for (int i = k + 1; i < num_in; i++) {
+ while (m > 1 && cross_2d(q[i] - q[m - 2], q[m - 1] - q[m - 2]) >= 0) {
+ m--;
+ }
+ q[m++] = q[i];
+ }
+
+ // Step 6 (Optional):
+ // In general sense we need the original coordinates, so we
+ // need to shift the points back (reverting Step 2)
+ // But if we're only interested in getting the area/perimeter of the shape
+ // We can simply return.
+ if (!shift_to_zero) {
+ for (int i = 0; i < m; i++) {
+ q[i] += start;
+ }
+ }
+
+ return m;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE T polygon_area(const Point<T> (&q)[24], const int& m) {
+ if (m <= 2) {
+ return 0;
+ }
+
+ T area = 0;
+ for (int i = 1; i < m - 1; i++) {
+ area += fabs(cross_2d(q[i] - q[0], q[i + 1] - q[0]));
+ }
+
+ return area / 2.0;
+}
+
+template <typename T>
+HOST_DEVICE_INLINE T rboxes_intersection(
+    const RotatedBox<T>& box1,
+    const RotatedBox<T>& box2) {
+ // There are up to 4 x 4 + 4 + 4 = 24 intersections (including dups) returned
+ // from rotated_rect_intersection_pts
+  Point<T> intersectPts[24], orderedPts[24];
+
+  Point<T> pts1[4];
+  Point<T> pts2[4];
+ get_rotated_vertices(box1, pts1);
+ get_rotated_vertices(box2, pts2);
+
+ int num = get_intersection_points(pts1, pts2, intersectPts);
+
+ if (num <= 2) {
+ return 0.0;
+ }
+
+ // Convex Hull to order the intersection points in clockwise order and find
+ // the contour area.
+ int num_convex = convex_hull_graham(intersectPts, num, orderedPts, true);
+ return polygon_area(orderedPts, num_convex);
+}
+
+} // namespace
+
+template <typename T>
+HOST_DEVICE_INLINE T
+rbox_iou_single(T const* const box1_raw, T const* const box2_raw) {
+ // shift center to the middle point to achieve higher precision in result
+  RotatedBox<T> box1, box2;
+ auto center_shift_x = (box1_raw[0] + box2_raw[0]) / 2.0;
+ auto center_shift_y = (box1_raw[1] + box2_raw[1]) / 2.0;
+ box1.x_ctr = box1_raw[0] - center_shift_x;
+ box1.y_ctr = box1_raw[1] - center_shift_y;
+ box1.w = box1_raw[2];
+ box1.h = box1_raw[3];
+ box1.a = box1_raw[4];
+ box2.x_ctr = box2_raw[0] - center_shift_x;
+ box2.y_ctr = box2_raw[1] - center_shift_y;
+ box2.w = box2_raw[2];
+ box2.h = box2_raw[3];
+ box2.a = box2_raw[4];
+
+ const T area1 = box1.w * box1.h;
+ const T area2 = box2.w * box2.h;
+ if (area1 < 1e-14 || area2 < 1e-14) {
+ return 0.f;
+ }
+
+ const T intersection = rboxes_intersection(box1, box2);
+ const T iou = intersection / (area1 + area2 - intersection);
+ return iou;
+}
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/setup.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/setup.py
new file mode 100644
index 000000000..d364db7ed
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/setup.py
@@ -0,0 +1,14 @@
+import paddle
+from paddle.utils.cpp_extension import CppExtension, CUDAExtension, setup
+
+if __name__ == "__main__":
+ if paddle.device.is_compiled_with_cuda():
+ setup(
+ name='rbox_iou_ops',
+ ext_modules=CUDAExtension(
+ sources=['rbox_iou_op.cc', 'rbox_iou_op.cu'],
+ extra_compile_args={'cxx': ['-DPADDLE_WITH_CUDA']}))
+ else:
+ setup(
+ name='rbox_iou_ops',
+ ext_modules=CppExtension(sources=['rbox_iou_op.cc']))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/test.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/test.py
new file mode 100644
index 000000000..85872e484
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/ext_op/test.py
@@ -0,0 +1,156 @@
+import numpy as np
+import sys
+import time
+from shapely.geometry import Polygon
+import paddle
+import unittest
+
+try:
+ from rbox_iou_ops import rbox_iou
+except Exception as e:
+ print('import rbox_iou_ops error', e)
+ sys.exit(-1)
+
+
+def rbox2poly_single(rrect, get_best_begin_point=False):
+ """
+ rrect:[x_ctr,y_ctr,w,h,angle]
+ to
+ poly:[x0,y0,x1,y1,x2,y2,x3,y3]
+ """
+ x_ctr, y_ctr, width, height, angle = rrect[:5]
+ tl_x, tl_y, br_x, br_y = -width / 2, -height / 2, width / 2, height / 2
+ # rect 2x4
+ rect = np.array([[tl_x, br_x, br_x, tl_x], [tl_y, tl_y, br_y, br_y]])
+ R = np.array([[np.cos(angle), -np.sin(angle)],
+ [np.sin(angle), np.cos(angle)]])
+ # poly
+ poly = R.dot(rect)
+ x0, x1, x2, x3 = poly[0, :4] + x_ctr
+ y0, y1, y2, y3 = poly[1, :4] + y_ctr
+ poly = np.array([x0, y0, x1, y1, x2, y2, x3, y3], dtype=np.float64)
+ return poly
+
+
+def intersection(g, p):
+ """
+    Compute the IoU of two quadrilaterals (flattened 8-value polygons) via shapely.
+ """
+
+ g = g[:8].reshape((4, 2))
+ p = p[:8].reshape((4, 2))
+
+ a = g
+ b = p
+
+ use_filter = True
+ if use_filter:
+ # step1:
+ inter_x1 = np.maximum(np.min(a[:, 0]), np.min(b[:, 0]))
+ inter_x2 = np.minimum(np.max(a[:, 0]), np.max(b[:, 0]))
+ inter_y1 = np.maximum(np.min(a[:, 1]), np.min(b[:, 1]))
+ inter_y2 = np.minimum(np.max(a[:, 1]), np.max(b[:, 1]))
+ if inter_x1 >= inter_x2 or inter_y1 >= inter_y2:
+ return 0.
+ x1 = np.minimum(np.min(a[:, 0]), np.min(b[:, 0]))
+ x2 = np.maximum(np.max(a[:, 0]), np.max(b[:, 0]))
+ y1 = np.minimum(np.min(a[:, 1]), np.min(b[:, 1]))
+ y2 = np.maximum(np.max(a[:, 1]), np.max(b[:, 1]))
+ if x1 >= x2 or y1 >= y2 or (x2 - x1) < 2 or (y2 - y1) < 2:
+ return 0.
+
+ g = Polygon(g)
+ p = Polygon(p)
+ if not g.is_valid or not p.is_valid:
+ return 0
+
+ inter = Polygon(g).intersection(Polygon(p)).area
+ union = g.area + p.area - inter
+ if union == 0:
+ return 0
+ else:
+ return inter / union
+
+
+def rbox_overlaps(anchors, gt_bboxes, use_cv2=False):
+ """
+
+ Args:
+ anchors: [NA, 5] x1,y1,x2,y2,angle
+ gt_bboxes: [M, 5] x1,y1,x2,y2,angle
+
+ Returns:
+
+ """
+ assert anchors.shape[1] == 5
+ assert gt_bboxes.shape[1] == 5
+
+    gt_bboxes_poly = [rbox2poly_single(e) for e in gt_bboxes]
+    anchors_poly = [rbox2poly_single(e) for e in anchors]
+
+    num_gt, num_anchors = len(gt_bboxes_poly), len(anchors_poly)
+    iou = np.zeros((num_gt, num_anchors), dtype=np.float64)
+
+    start_time = time.time()
+    for i in range(num_gt):
+        for j in range(num_anchors):
+            try:
+                iou[i, j] = intersection(gt_bboxes_poly[i], anchors_poly[j])
+            except Exception as e:
+                print('cur gt_bboxes_poly[i]', gt_bboxes_poly[i],
+                      'anchors_poly[j]', anchors_poly[j], e)
+    iou = iou.T
+    return iou
+
+
+def gen_sample(n):
+ rbox = np.random.rand(n, 5)
+ rbox[:, 0:4] = rbox[:, 0:4] * 0.45 + 0.001
+ rbox[:, 4] = rbox[:, 4] - 0.5
+ return rbox
+
+
+class RBoxIoUTest(unittest.TestCase):
+ def setUp(self):
+ self.initTestCase()
+ self.rbox1 = gen_sample(self.n)
+ self.rbox2 = gen_sample(self.m)
+
+ def initTestCase(self):
+ self.n = 13000
+ self.m = 7
+
+ def assertAllClose(self, x, y, msg, atol=5e-1, rtol=1e-2):
+ self.assertTrue(np.allclose(x, y, atol=atol, rtol=rtol), msg=msg)
+
+ def get_places(self):
+ places = [paddle.CPUPlace()]
+ if paddle.device.is_compiled_with_cuda():
+ places.append(paddle.CUDAPlace(0))
+
+ return places
+
+ def check_output(self, place):
+ paddle.disable_static()
+ pd_rbox1 = paddle.to_tensor(self.rbox1, place=place)
+ pd_rbox2 = paddle.to_tensor(self.rbox2, place=place)
+ actual_t = rbox_iou(pd_rbox1, pd_rbox2).numpy()
+ poly_rbox1 = self.rbox1
+ poly_rbox2 = self.rbox2
+ poly_rbox1[:, 0:4] = self.rbox1[:, 0:4] * 1024
+ poly_rbox2[:, 0:4] = self.rbox2[:, 0:4] * 1024
+ expect_t = rbox_overlaps(poly_rbox1, poly_rbox2, use_cv2=False)
+ self.assertAllClose(
+ actual_t,
+ expect_t,
+ msg="rbox_iou has diff at {} \nExpect {}\nBut got {}".format(
+ str(place), str(expect_t), str(actual_t)))
+
+ def test_output(self):
+ places = self.get_places()
+ for place in places:
+ self.check_output(place)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__init__.py
new file mode 100644
index 000000000..d69e8af0f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__init__.py
@@ -0,0 +1,29 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import metrics
+from . import keypoint_metrics
+
+from .metrics import *
+from .keypoint_metrics import *
+
+__all__ = metrics.__all__ + keypoint_metrics.__all__
+
+from . import mot_metrics
+from .mot_metrics import *
+__all__ = __all__ + mot_metrics.__all__
+
+from . import mcmot_metrics
+from .mcmot_metrics import *
+__all__ = __all__ + mcmot_metrics.__all__
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..08ea6f44d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/coco_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/coco_utils.cpython-37.pyc
new file mode 100644
index 000000000..c55b3d121
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/coco_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/json_results.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/json_results.cpython-37.pyc
new file mode 100644
index 000000000..d99601c30
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/json_results.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/keypoint_metrics.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/keypoint_metrics.cpython-37.pyc
new file mode 100644
index 000000000..3ea557dda
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/keypoint_metrics.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/map_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/map_utils.cpython-37.pyc
new file mode 100644
index 000000000..c2a9ed2f0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/map_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mcmot_metrics.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mcmot_metrics.cpython-37.pyc
new file mode 100644
index 000000000..1390aa87b
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mcmot_metrics.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/metrics.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/metrics.cpython-37.pyc
new file mode 100644
index 000000000..0a4895277
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/metrics.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mot_metrics.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mot_metrics.cpython-37.pyc
new file mode 100644
index 000000000..683014969
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/mot_metrics.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/munkres.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/munkres.cpython-37.pyc
new file mode 100644
index 000000000..c00df5d8f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/munkres.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/widerface_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/widerface_utils.cpython-37.pyc
new file mode 100644
index 000000000..bd9959299
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/__pycache__/widerface_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/coco_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/coco_utils.py
new file mode 100644
index 000000000..47b92bc62
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/coco_utils.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+import numpy as np
+import itertools
+
+from ppdet.metrics.json_results import get_det_res, get_det_poly_res, get_seg_res, get_solov2_segm_res, get_keypoint_res
+from ppdet.metrics.map_utils import draw_pr_curve
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+def get_infer_results(outs, catid, bias=0):
+ """
+    Get results at the inference stage.
+    The output format is a dictionary containing bbox or mask results.
+
+    For example, the bbox result is a list in which each element contains
+    image_id, category_id, bbox and score.
+ """
+ if outs is None or len(outs) == 0:
+ raise ValueError(
+            'The number of valid detection results is zero. Please check the model and the input data.'
+ )
+
+ im_id = outs['im_id']
+
+ infer_res = {}
+ if 'bbox' in outs:
+ if len(outs['bbox']) > 0 and len(outs['bbox'][0]) > 6:
+ infer_res['bbox'] = get_det_poly_res(
+ outs['bbox'], outs['bbox_num'], im_id, catid, bias=bias)
+ else:
+ infer_res['bbox'] = get_det_res(
+ outs['bbox'], outs['bbox_num'], im_id, catid, bias=bias)
+
+ if 'mask' in outs:
+ # mask post process
+ infer_res['mask'] = get_seg_res(outs['mask'], outs['bbox'],
+ outs['bbox_num'], im_id, catid)
+
+ if 'segm' in outs:
+ infer_res['segm'] = get_solov2_segm_res(outs, im_id, catid)
+
+ if 'keypoint' in outs:
+ infer_res['keypoint'] = get_keypoint_res(outs, im_id)
+ outs['bbox_num'] = [len(infer_res['keypoint'])]
+
+ return infer_res
+
+
+def cocoapi_eval(jsonfile,
+ style,
+ coco_gt=None,
+ anno_file=None,
+ max_dets=(100, 300, 1000),
+ classwise=False,
+ sigmas=None,
+ use_area=True):
+ """
+ Args:
+ jsonfile (str): Evaluation json file, eg: bbox.json, mask.json.
+ style (str): COCOeval style, can be `bbox` , `segm` , `proposal`, `keypoints` and `keypoints_crowd`.
+        coco_gt (COCO): A loaded COCO ground-truth API object; if None, it is
+            built from anno_file, eg: coco_gt = COCO(anno_file)
+ anno_file (str): COCO annotations file.
+ max_dets (tuple): COCO evaluation maxDets.
+ classwise (bool): Whether per-category AP and draw P-R Curve or not.
+ sigmas (nparray): keypoint labelling sigmas.
+ use_area (bool): If gt annotations (eg. CrowdPose, AIC)
+ do not have 'area', please set use_area=False.
+ """
+    assert coco_gt is not None or anno_file is not None
+ if style == 'keypoints_crowd':
+ #please install xtcocotools==1.6
+ from xtcocotools.coco import COCO
+ from xtcocotools.cocoeval import COCOeval
+ else:
+ from pycocotools.coco import COCO
+ from pycocotools.cocoeval import COCOeval
+
+    if coco_gt is None:
+ coco_gt = COCO(anno_file)
+ logger.info("Start evaluate...")
+ coco_dt = coco_gt.loadRes(jsonfile)
+ if style == 'proposal':
+ coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
+ coco_eval.params.useCats = 0
+ coco_eval.params.maxDets = list(max_dets)
+ elif style == 'keypoints_crowd':
+ coco_eval = COCOeval(coco_gt, coco_dt, style, sigmas, use_area)
+ else:
+ coco_eval = COCOeval(coco_gt, coco_dt, style)
+ coco_eval.evaluate()
+ coco_eval.accumulate()
+ coco_eval.summarize()
+ if classwise:
+ # Compute per-category AP and PR curve
+ try:
+ from terminaltables import AsciiTable
+ except Exception as e:
+            logger.error(
+                'terminaltables not found, please install terminaltables, '
+                'for example: `pip install terminaltables`.')
+ raise e
+ precisions = coco_eval.eval['precision']
+ cat_ids = coco_gt.getCatIds()
+ # precision: (iou, recall, cls, area range, max dets)
+ assert len(cat_ids) == precisions.shape[2]
+ results_per_category = []
+ for idx, catId in enumerate(cat_ids):
+ # area range index 0: all area ranges
+ # max dets index -1: typically 100 per image
+ nm = coco_gt.loadCats(catId)[0]
+ precision = precisions[:, :, idx, 0, -1]
+ precision = precision[precision > -1]
+ if precision.size:
+ ap = np.mean(precision)
+ else:
+ ap = float('nan')
+ results_per_category.append(
+ (str(nm["name"]), '{:0.3f}'.format(float(ap))))
+ pr_array = precisions[0, :, idx, 0, 2]
+ recall_array = np.arange(0.0, 1.01, 0.01)
+ draw_pr_curve(
+ pr_array,
+ recall_array,
+ out_dir=style + '_pr_curve',
+ file_name='{}_precision_recall_curve.jpg'.format(nm["name"]))
+
+ num_columns = min(6, len(results_per_category) * 2)
+ results_flatten = list(itertools.chain(*results_per_category))
+ headers = ['category', 'AP'] * (num_columns // 2)
+ results_2d = itertools.zip_longest(
+ *[results_flatten[i::num_columns] for i in range(num_columns)])
+ table_data = [headers]
+ table_data += [result for result in results_2d]
+ table = AsciiTable(table_data)
+        logger.info('Per-category {} AP: \n{}'.format(style, table.table))
+        logger.info("Per-category PR curves have been saved to the {} folder.".
+                    format(style + '_pr_curve'))
+ # flush coco evaluation result
+ sys.stdout.flush()
+ return coco_eval.stats
+
+
+def json_eval_results(metric, json_directory, dataset):
+ """
+    COCO API evaluation using existing proposal.json, bbox.json or mask.json files.
+ """
+ assert metric == 'COCO'
+ anno_file = dataset.get_anno()
+ json_file_list = ['proposal.json', 'bbox.json', 'mask.json']
+ if json_directory:
+ assert os.path.exists(
+ json_directory), "The json directory:{} does not exist".format(
+ json_directory)
+ for k, v in enumerate(json_file_list):
+ json_file_list[k] = os.path.join(str(json_directory), v)
+
+ coco_eval_style = ['proposal', 'bbox', 'segm']
+ for i, v_json in enumerate(json_file_list):
+ if os.path.exists(v_json):
+ cocoapi_eval(v_json, coco_eval_style[i], anno_file=anno_file)
+ else:
+ logger.info("{} not exists!".format(v_json))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/json_results.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/json_results.py
new file mode 100644
index 000000000..c703de63b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/json_results.py
@@ -0,0 +1,149 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import six
+import numpy as np
+
+
+def get_det_res(bboxes, bbox_nums, image_id, label_to_cat_id_map, bias=0):
+ det_res = []
+ k = 0
+ for i in range(len(bbox_nums)):
+ cur_image_id = int(image_id[i][0])
+ det_nums = bbox_nums[i]
+ for j in range(det_nums):
+ dt = bboxes[k]
+ k = k + 1
+ num_id, score, xmin, ymin, xmax, ymax = dt.tolist()
+ if int(num_id) < 0:
+ continue
+ category_id = label_to_cat_id_map[int(num_id)]
+ w = xmax - xmin + bias
+ h = ymax - ymin + bias
+ bbox = [xmin, ymin, w, h]
+ dt_res = {
+ 'image_id': cur_image_id,
+ 'category_id': category_id,
+ 'bbox': bbox,
+ 'score': score
+ }
+ det_res.append(dt_res)
+ return det_res
+
+
+def get_det_poly_res(bboxes, bbox_nums, image_id, label_to_cat_id_map, bias=0):
+ det_res = []
+ k = 0
+ for i in range(len(bbox_nums)):
+ cur_image_id = int(image_id[i][0])
+ det_nums = bbox_nums[i]
+ for j in range(det_nums):
+ dt = bboxes[k]
+ k = k + 1
+ num_id, score, x1, y1, x2, y2, x3, y3, x4, y4 = dt.tolist()
+ if int(num_id) < 0:
+ continue
+ category_id = label_to_cat_id_map[int(num_id)]
+ rbox = [x1, y1, x2, y2, x3, y3, x4, y4]
+ dt_res = {
+ 'image_id': cur_image_id,
+ 'category_id': category_id,
+ 'bbox': rbox,
+ 'score': score
+ }
+ det_res.append(dt_res)
+ return det_res
+
+
+def get_seg_res(masks, bboxes, mask_nums, image_id, label_to_cat_id_map):
+ import pycocotools.mask as mask_util
+ seg_res = []
+ k = 0
+ for i in range(len(mask_nums)):
+ cur_image_id = int(image_id[i][0])
+ det_nums = mask_nums[i]
+ for j in range(det_nums):
+ mask = masks[k].astype(np.uint8)
+ score = float(bboxes[k][1])
+ label = int(bboxes[k][0])
+ k = k + 1
+ if label == -1:
+ continue
+ cat_id = label_to_cat_id_map[label]
+ rle = mask_util.encode(
+ np.array(
+ mask[:, :, None], order="F", dtype="uint8"))[0]
+ if six.PY3:
+ if 'counts' in rle:
+ rle['counts'] = rle['counts'].decode("utf8")
+ sg_res = {
+ 'image_id': cur_image_id,
+ 'category_id': cat_id,
+ 'segmentation': rle,
+ 'score': score
+ }
+ seg_res.append(sg_res)
+ return seg_res
+
+
+def get_solov2_segm_res(results, image_id, num_id_to_cat_id_map):
+ import pycocotools.mask as mask_util
+ segm_res = []
+ # for each batch
+    segms = results['segm']
+    if segms is None or len(segms) == 0:
+        return None
+    segms = segms.astype(np.uint8)
+    clsid_labels = results['cate_label']
+    clsid_scores = results['cate_score']
+    lengths = segms.shape[0]
+    im_id = int(image_id[0][0])
+ # for each sample
+ for i in range(lengths - 1):
+ clsid = int(clsid_labels[i])
+ catid = num_id_to_cat_id_map[clsid]
+ score = float(clsid_scores[i])
+ mask = segms[i]
+ segm = mask_util.encode(np.array(mask[:, :, np.newaxis], order='F'))[0]
+ segm['counts'] = segm['counts'].decode('utf8')
+ coco_res = {
+ 'image_id': im_id,
+ 'category_id': catid,
+ 'segmentation': segm,
+ 'score': score
+ }
+ segm_res.append(coco_res)
+ return segm_res
+
+
+def get_keypoint_res(results, im_id):
+ anns = []
+ preds = results['keypoint']
+ for idx in range(im_id.shape[0]):
+ image_id = im_id[idx].item()
+ kpts, scores = preds[idx]
+ for kpt, score in zip(kpts, scores):
+ kpt = kpt.flatten()
+ ann = {
+ 'image_id': image_id,
+ 'category_id': 1, # XXX hard code
+ 'keypoints': kpt.tolist(),
+ 'score': float(score)
+ }
+ x = kpt[0::3]
+ y = kpt[1::3]
+ x0, x1, y0, y1 = np.min(x).item(), np.max(x).item(), np.min(y).item(
+ ), np.max(y).item()
+ ann['area'] = (x1 - x0) * (y1 - y0)
+ ann['bbox'] = [x0, y0, x1 - x0, y1 - y0]
+ anns.append(ann)
+ return anns
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/keypoint_metrics.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/keypoint_metrics.py
new file mode 100644
index 000000000..d8bc0e782
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/keypoint_metrics.py
@@ -0,0 +1,402 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import json
+from collections import defaultdict, OrderedDict
+import numpy as np
+from pycocotools.coco import COCO
+from pycocotools.cocoeval import COCOeval
+from ..modeling.keypoint_utils import oks_nms
+from scipy.io import loadmat, savemat
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['KeyPointTopDownCOCOEval', 'KeyPointTopDownMPIIEval']
+
+
+class KeyPointTopDownCOCOEval(object):
+ '''
+ Adapted from
+ https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
+ Copyright (c) Microsoft, under the MIT License.
+ '''
+
+ def __init__(self,
+ anno_file,
+ num_samples,
+ num_joints,
+ output_eval,
+ iou_type='keypoints',
+ in_vis_thre=0.2,
+ oks_thre=0.9,
+ save_prediction_only=False):
+ super(KeyPointTopDownCOCOEval, self).__init__()
+ self.coco = COCO(anno_file)
+ self.num_samples = num_samples
+ self.num_joints = num_joints
+ self.iou_type = iou_type
+ self.in_vis_thre = in_vis_thre
+ self.oks_thre = oks_thre
+ self.output_eval = output_eval
+ self.res_file = os.path.join(output_eval, "keypoints_results.json")
+ self.save_prediction_only = save_prediction_only
+ self.reset()
+
+ def reset(self):
+ self.results = {
+ 'all_preds': np.zeros(
+ (self.num_samples, self.num_joints, 3), dtype=np.float32),
+ 'all_boxes': np.zeros((self.num_samples, 6)),
+ 'image_path': []
+ }
+ self.eval_results = {}
+ self.idx = 0
+
+ def update(self, inputs, outputs):
+ kpts, _ = outputs['keypoint'][0]
+
+ num_images = inputs['image'].shape[0]
+ self.results['all_preds'][self.idx:self.idx + num_images, :, 0:
+ 3] = kpts[:, :, 0:3]
+ self.results['all_boxes'][self.idx:self.idx + num_images, 0:2] = inputs[
+ 'center'].numpy()[:, 0:2]
+ self.results['all_boxes'][self.idx:self.idx + num_images, 2:4] = inputs[
+ 'scale'].numpy()[:, 0:2]
+ self.results['all_boxes'][self.idx:self.idx + num_images, 4] = np.prod(
+ inputs['scale'].numpy() * 200, 1)
+ self.results['all_boxes'][self.idx:self.idx + num_images,
+ 5] = np.squeeze(inputs['score'].numpy())
+ self.results['image_path'].extend(inputs['im_id'].numpy())
+
+ self.idx += num_images
+
+ def _write_coco_keypoint_results(self, keypoints):
+ data_pack = [{
+ 'cat_id': 1,
+ 'cls': 'person',
+ 'ann_type': 'keypoints',
+ 'keypoints': keypoints
+ }]
+ results = self._coco_keypoint_results_one_category_kernel(data_pack[0])
+ if not os.path.exists(self.output_eval):
+ os.makedirs(self.output_eval)
+ with open(self.res_file, 'w') as f:
+ json.dump(results, f, sort_keys=True, indent=4)
+ logger.info(f'The keypoint result is saved to {self.res_file}.')
+ try:
+ json.load(open(self.res_file))
+ except Exception:
+ content = []
+ with open(self.res_file, 'r') as f:
+ for line in f:
+ content.append(line)
+ content[-1] = ']'
+ with open(self.res_file, 'w') as f:
+ for c in content:
+ f.write(c)
+
+ def _coco_keypoint_results_one_category_kernel(self, data_pack):
+ cat_id = data_pack['cat_id']
+ keypoints = data_pack['keypoints']
+ cat_results = []
+
+ for img_kpts in keypoints:
+ if len(img_kpts) == 0:
+ continue
+
+ _key_points = np.array(
+ [img_kpts[k]['keypoints'] for k in range(len(img_kpts))])
+ _key_points = _key_points.reshape(_key_points.shape[0], -1)
+
+ result = [{
+ 'image_id': img_kpts[k]['image'],
+ 'category_id': cat_id,
+ 'keypoints': _key_points[k].tolist(),
+ 'score': img_kpts[k]['score'],
+ 'center': list(img_kpts[k]['center']),
+ 'scale': list(img_kpts[k]['scale'])
+ } for k in range(len(img_kpts))]
+ cat_results.extend(result)
+
+ return cat_results
+
+ def get_final_results(self, preds, all_boxes, img_path):
+ _kpts = []
+ for idx, kpt in enumerate(preds):
+ _kpts.append({
+ 'keypoints': kpt,
+ 'center': all_boxes[idx][0:2],
+ 'scale': all_boxes[idx][2:4],
+ 'area': all_boxes[idx][4],
+ 'score': all_boxes[idx][5],
+ 'image': int(img_path[idx])
+ })
+ # image x person x (keypoints)
+ kpts = defaultdict(list)
+ for kpt in _kpts:
+ kpts[kpt['image']].append(kpt)
+
+ # rescoring and oks nms
+ num_joints = preds.shape[1]
+ in_vis_thre = self.in_vis_thre
+ oks_thre = self.oks_thre
+ oks_nmsed_kpts = []
+ for img in kpts.keys():
+ img_kpts = kpts[img]
+ for n_p in img_kpts:
+ box_score = n_p['score']
+ kpt_score = 0
+ valid_num = 0
+ for n_jt in range(0, num_joints):
+ t_s = n_p['keypoints'][n_jt][2]
+ if t_s > in_vis_thre:
+ kpt_score = kpt_score + t_s
+ valid_num = valid_num + 1
+ if valid_num != 0:
+ kpt_score = kpt_score / valid_num
+ # rescoring
+ n_p['score'] = kpt_score * box_score
+
+ keep = oks_nms([img_kpts[i] for i in range(len(img_kpts))],
+ oks_thre)
+
+ if len(keep) == 0:
+ oks_nmsed_kpts.append(img_kpts)
+ else:
+ oks_nmsed_kpts.append([img_kpts[_keep] for _keep in keep])
+
+ self._write_coco_keypoint_results(oks_nmsed_kpts)
+
+ def accumulate(self):
+ self.get_final_results(self.results['all_preds'],
+ self.results['all_boxes'],
+ self.results['image_path'])
+ if self.save_prediction_only:
+            logger.info(f'The keypoint results are saved to {self.res_file}; '
+                        'mAP evaluation is skipped.')
+ return
+ coco_dt = self.coco.loadRes(self.res_file)
+ coco_eval = COCOeval(self.coco, coco_dt, 'keypoints')
+ coco_eval.params.useSegm = None
+ coco_eval.evaluate()
+ coco_eval.accumulate()
+ coco_eval.summarize()
+
+ keypoint_stats = []
+ for ind in range(len(coco_eval.stats)):
+ keypoint_stats.append((coco_eval.stats[ind]))
+ self.eval_results['keypoint'] = keypoint_stats
+
+ def log(self):
+ if self.save_prediction_only:
+ return
+ stats_names = [
+            'AP', 'AP .5', 'AP .75', 'AP (M)', 'AP (L)', 'AR', 'AR .5',
+ 'AR .75', 'AR (M)', 'AR (L)'
+ ]
+ num_values = len(stats_names)
+ print(' '.join(['| {}'.format(name) for name in stats_names]) + ' |')
+ print('|---' * (num_values + 1) + '|')
+
+ print(' '.join([
+ '| {:.3f}'.format(value) for value in self.eval_results['keypoint']
+ ]) + ' |')
+
+ def get_results(self):
+ return self.eval_results
+
+
+class KeyPointTopDownMPIIEval(object):
+ def __init__(self,
+ anno_file,
+ num_samples,
+ num_joints,
+ output_eval,
+ oks_thre=0.9,
+ save_prediction_only=False):
+ super(KeyPointTopDownMPIIEval, self).__init__()
+ self.ann_file = anno_file
+ self.res_file = os.path.join(output_eval, "keypoints_results.json")
+ self.save_prediction_only = save_prediction_only
+ self.reset()
+
+ def reset(self):
+ self.results = []
+ self.eval_results = {}
+ self.idx = 0
+
+ def update(self, inputs, outputs):
+ kpts, _ = outputs['keypoint'][0]
+
+ num_images = inputs['image'].shape[0]
+ results = {}
+ results['preds'] = kpts[:, :, 0:3]
+ results['boxes'] = np.zeros((num_images, 6))
+ results['boxes'][:, 0:2] = inputs['center'].numpy()[:, 0:2]
+ results['boxes'][:, 2:4] = inputs['scale'].numpy()[:, 0:2]
+ results['boxes'][:, 4] = np.prod(inputs['scale'].numpy() * 200, 1)
+ results['boxes'][:, 5] = np.squeeze(inputs['score'].numpy())
+ results['image_path'] = inputs['image_file']
+
+ self.results.append(results)
+
+ def accumulate(self):
+ self._mpii_keypoint_results_save()
+ if self.save_prediction_only:
+ logger.info(f'The keypoint result is saved to {self.res_file} '
+                        'and the mAP will not be evaluated.')
+ return
+
+ self.eval_results = self.evaluate(self.results)
+
+ def _mpii_keypoint_results_save(self):
+ results = []
+ for res in self.results:
+            if len(res['preds']) == 0:
+                continue
+            result = [{
+                'preds': res['preds'][k].tolist(),
+                'boxes': res['boxes'][k].tolist(),
+                'image_path': res['image_path'][k],
+            } for k in range(len(res['preds']))]
+ results.extend(result)
+ with open(self.res_file, 'w') as f:
+ json.dump(results, f, sort_keys=True, indent=4)
+ logger.info(f'The keypoint result is saved to {self.res_file}.')
+
+ def log(self):
+ if self.save_prediction_only:
+ return
+ for item, value in self.eval_results.items():
+ print("{} : {}".format(item, value))
+
+ def get_results(self):
+ return self.eval_results
+
+ def evaluate(self, outputs, savepath=None):
+ """Evaluate PCKh for MPII dataset. Adapted from
+ https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
+ Copyright (c) Microsoft, under the MIT License.
+
+ Args:
+ outputs(list(preds, boxes)):
+
+ * preds (np.ndarray[N,K,3]): The first two dimensions are
+ coordinates, score is the third dimension of the array.
+            * boxes (np.ndarray[N,6]): [center[0], center[1], scale[0],
+              scale[1], area, score]
+
+ Returns:
+ dict: PCKh for each joint
+ """
+
+ kpts = []
+ for output in outputs:
+ preds = output['preds']
+ batch_size = preds.shape[0]
+ for i in range(batch_size):
+ kpts.append({'keypoints': preds[i]})
+
+ preds = np.stack([kpt['keypoints'] for kpt in kpts])
+
+ # convert 0-based index to 1-based index,
+ # and get the first two dimensions.
+ preds = preds[..., :2] + 1.0
+
+ if savepath is not None:
+ pred_file = os.path.join(savepath, 'pred.mat')
+ savemat(pred_file, mdict={'preds': preds})
+
+ SC_BIAS = 0.6
+ threshold = 0.5
+
+ gt_file = os.path.join(
+ os.path.dirname(self.ann_file), 'mpii_gt_val.mat')
+ gt_dict = loadmat(gt_file)
+ dataset_joints = gt_dict['dataset_joints']
+ jnt_missing = gt_dict['jnt_missing']
+ pos_gt_src = gt_dict['pos_gt_src']
+ headboxes_src = gt_dict['headboxes_src']
+
+ pos_pred_src = np.transpose(preds, [1, 2, 0])
+
+ head = np.where(dataset_joints == 'head')[1][0]
+ lsho = np.where(dataset_joints == 'lsho')[1][0]
+ lelb = np.where(dataset_joints == 'lelb')[1][0]
+ lwri = np.where(dataset_joints == 'lwri')[1][0]
+ lhip = np.where(dataset_joints == 'lhip')[1][0]
+ lkne = np.where(dataset_joints == 'lkne')[1][0]
+ lank = np.where(dataset_joints == 'lank')[1][0]
+
+ rsho = np.where(dataset_joints == 'rsho')[1][0]
+ relb = np.where(dataset_joints == 'relb')[1][0]
+ rwri = np.where(dataset_joints == 'rwri')[1][0]
+ rkne = np.where(dataset_joints == 'rkne')[1][0]
+ rank = np.where(dataset_joints == 'rank')[1][0]
+ rhip = np.where(dataset_joints == 'rhip')[1][0]
+
+ jnt_visible = 1 - jnt_missing
+ uv_error = pos_pred_src - pos_gt_src
+ uv_err = np.linalg.norm(uv_error, axis=1)
+ headsizes = headboxes_src[1, :, :] - headboxes_src[0, :, :]
+ headsizes = np.linalg.norm(headsizes, axis=0)
+ headsizes *= SC_BIAS
+ scale = headsizes * np.ones((len(uv_err), 1), dtype=np.float32)
+ scaled_uv_err = uv_err / scale
+ scaled_uv_err = scaled_uv_err * jnt_visible
+ jnt_count = np.sum(jnt_visible, axis=1)
+ less_than_threshold = (scaled_uv_err <= threshold) * jnt_visible
+ PCKh = 100. * np.sum(less_than_threshold, axis=1) / jnt_count
+
+ # save
+ rng = np.arange(0, 0.5 + 0.01, 0.01)
+ pckAll = np.zeros((len(rng), 16), dtype=np.float32)
+
+ for r, threshold in enumerate(rng):
+ less_than_threshold = (scaled_uv_err <= threshold) * jnt_visible
+ pckAll[r, :] = 100. * np.sum(less_than_threshold,
+ axis=1) / jnt_count
+
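+        # joints 6 and 7 (pelvis and thorax in the MPII joint order) are
+        # masked out and excluded from the averaged PCKh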
+ PCKh = np.ma.array(PCKh, mask=False)
+ PCKh.mask[6:8] = True
+
+ jnt_count = np.ma.array(jnt_count, mask=False)
+ jnt_count.mask[6:8] = True
+ jnt_ratio = jnt_count / np.sum(jnt_count).astype(np.float64)
+
+ name_value = [ #noqa
+ ('Head', PCKh[head]),
+ ('Shoulder', 0.5 * (PCKh[lsho] + PCKh[rsho])),
+ ('Elbow', 0.5 * (PCKh[lelb] + PCKh[relb])),
+ ('Wrist', 0.5 * (PCKh[lwri] + PCKh[rwri])),
+ ('Hip', 0.5 * (PCKh[lhip] + PCKh[rhip])),
+ ('Knee', 0.5 * (PCKh[lkne] + PCKh[rkne])),
+ ('Ankle', 0.5 * (PCKh[lank] + PCKh[rank])),
+ ('PCKh', np.sum(PCKh * jnt_ratio)),
+ ('PCKh@0.1', np.sum(pckAll[11, :] * jnt_ratio))
+ ]
+ name_value = OrderedDict(name_value)
+
+ return name_value
+
+ def _sort_and_unique_bboxes(self, kpts, key='bbox_id'):
+ """sort kpts and remove the repeated ones."""
+ kpts = sorted(kpts, key=lambda x: x[key])
+ num = len(kpts)
+ for i in range(num - 1, 0, -1):
+ if kpts[i][key] == kpts[i - 1][key]:
+ del kpts[i]
+
+ return kpts
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/map_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/map_utils.py
new file mode 100644
index 000000000..9c96b9235
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/map_utils.py
@@ -0,0 +1,443 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+import os
+import sys
+import numpy as np
+import itertools
+import paddle
+from ppdet.modeling.bbox_utils import poly2rbox, rbox2poly_np
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'draw_pr_curve',
+ 'bbox_area',
+ 'jaccard_overlap',
+ 'prune_zero_padding',
+ 'DetectionMAP',
+ 'ap_per_class',
+ 'compute_ap',
+]
+
+
+def draw_pr_curve(precision,
+ recall,
+ iou=0.5,
+ out_dir='pr_curve',
+ file_name='precision_recall_curve.jpg'):
+ if not os.path.exists(out_dir):
+ os.makedirs(out_dir)
+ output_path = os.path.join(out_dir, file_name)
+ try:
+ import matplotlib.pyplot as plt
+ except Exception as e:
+        logger.error('Matplotlib not found, please install matplotlib, '
+                     'for example: `pip install matplotlib`.')
+ raise e
+ plt.cla()
+ plt.figure('P-R Curve')
+ plt.title('Precision/Recall Curve(IoU={})'.format(iou))
+ plt.xlabel('Recall')
+ plt.ylabel('Precision')
+ plt.grid(True)
+ plt.plot(recall, precision)
+ plt.savefig(output_path)
+
+
+def bbox_area(bbox, is_bbox_normalized):
+ """
+ Calculate area of a bounding box
+ """
+ norm = 1. - float(is_bbox_normalized)
+ width = bbox[2] - bbox[0] + norm
+ height = bbox[3] - bbox[1] + norm
+ return width * height
+
+
+def jaccard_overlap(pred, gt, is_bbox_normalized=False):
+ """
+    Calculate the jaccard overlap (IoU) ratio between two bounding boxes
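+
+    Example (hypothetical boxes in [xmin, ymin, xmax, ymax] pixel
+    coordinates; with is_bbox_normalized=False the +1 pixel convention of
+    bbox_area applies, giving 4 / (9 + 9 - 4)):
+
+        >>> jaccard_overlap([0, 0, 2, 2], [1, 1, 3, 3])
+        0.2857142857142857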
+ """
+ if pred[0] >= gt[2] or pred[2] <= gt[0] or \
+ pred[1] >= gt[3] or pred[3] <= gt[1]:
+ return 0.
+ inter_xmin = max(pred[0], gt[0])
+ inter_ymin = max(pred[1], gt[1])
+ inter_xmax = min(pred[2], gt[2])
+ inter_ymax = min(pred[3], gt[3])
+ inter_size = bbox_area([inter_xmin, inter_ymin, inter_xmax, inter_ymax],
+ is_bbox_normalized)
+ pred_size = bbox_area(pred, is_bbox_normalized)
+ gt_size = bbox_area(gt, is_bbox_normalized)
+ overlap = float(inter_size) / (pred_size + gt_size - inter_size)
+ return overlap
+
+
+def calc_rbox_iou(pred, gt_rbox):
+ """
+    Calculate IoU between two rotated bounding boxes
+ """
+ # calc iou of bounding box for speedup
+ pred = np.array(pred, np.float32).reshape(-1, 8)
+ pred = pred.reshape(-1, 2)
+ gt_poly = rbox2poly_np(np.array(gt_rbox).reshape(-1, 5))[0]
+ gt_poly = gt_poly.reshape(-1, 2)
+ pred_rect = [
+ np.min(pred[:, 0]), np.min(pred[:, 1]), np.max(pred[:, 0]),
+ np.max(pred[:, 1])
+ ]
+ gt_rect = [
+ np.min(gt_poly[:, 0]), np.min(gt_poly[:, 1]), np.max(gt_poly[:, 0]),
+ np.max(gt_poly[:, 1])
+ ]
+ iou = jaccard_overlap(pred_rect, gt_rect, False)
+
+ if iou <= 0:
+ return iou
+
+ # calc rbox iou
+ pred = pred.reshape(-1, 8)
+
+    pred_rbox = poly2rbox(pred)
+    pred_rbox = pred_rbox.reshape(-1, 5)
+ try:
+ from rbox_iou_ops import rbox_iou
+ except Exception as e:
+ print("import custom_ops error, try install rbox_iou_ops " \
+ "following ppdet/ext_op/README.md", e)
+ sys.stdout.flush()
+ sys.exit(-1)
+ gt_rbox = np.array(gt_rbox, np.float32).reshape(-1, 5)
+ pd_gt_rbox = paddle.to_tensor(gt_rbox, dtype='float32')
+ pd_pred_rbox = paddle.to_tensor(pred_rbox, dtype='float32')
+ iou = rbox_iou(pd_gt_rbox, pd_pred_rbox)
+ iou = iou.numpy()
+ return iou[0][0]
+
+
+def prune_zero_padding(gt_box, gt_label, difficult=None):
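+    # gt arrays are zero-padded to a fixed shape; truncate at the first
+    # all-zero box so padding is not counted as real ground truth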
+ valid_cnt = 0
+ for i in range(len(gt_box)):
+ if gt_box[i, 0] == 0 and gt_box[i, 1] == 0 and \
+ gt_box[i, 2] == 0 and gt_box[i, 3] == 0:
+ break
+ valid_cnt += 1
+ return (gt_box[:valid_cnt], gt_label[:valid_cnt], difficult[:valid_cnt]
+ if difficult is not None else None)
+
+
+class DetectionMAP(object):
+ """
+ Calculate detection mean average precision.
+ Currently support two types: 11point and integral
+
+ Args:
+ class_num (int): The class number.
+ overlap_thresh (float): The threshold of overlap
+ ratio between prediction bounding box and
+ ground truth bounding box for deciding
+ true/false positive. Default 0.5.
+ map_type (str): Calculation method of mean average
+ precision, currently support '11point' and
+ 'integral'. Default '11point'.
+ is_bbox_normalized (bool): Whether bounding boxes
+ is normalized to range[0, 1]. Default False.
+ evaluate_difficult (bool): Whether to evaluate
+ difficult bounding boxes. Default False.
+ catid2name (dict): Mapping between category id and category name.
+ classwise (bool): Whether per-category AP and draw
+ P-R Curve or not.
+ """
+
+ def __init__(self,
+ class_num,
+ overlap_thresh=0.5,
+ map_type='11point',
+ is_bbox_normalized=False,
+ evaluate_difficult=False,
+ catid2name=None,
+ classwise=False):
+ self.class_num = class_num
+ self.overlap_thresh = overlap_thresh
+ assert map_type in ['11point', 'integral'], \
+ "map_type currently only support '11point' "\
+ "and 'integral'"
+ self.map_type = map_type
+ self.is_bbox_normalized = is_bbox_normalized
+ self.evaluate_difficult = evaluate_difficult
+ self.classwise = classwise
+ self.classes = []
+ for cname in catid2name.values():
+ self.classes.append(cname)
+ self.reset()
+
+ def update(self, bbox, score, label, gt_box, gt_label, difficult=None):
+ """
+        Update metric statistics from given prediction and ground
+        truth information.
+ """
+ if difficult is None:
+ difficult = np.zeros_like(gt_label)
+
+ # record class gt count
+ for gtl, diff in zip(gt_label, difficult):
+ if self.evaluate_difficult or int(diff) == 0:
+ self.class_gt_counts[int(np.array(gtl))] += 1
+
+ # record class score positive
+ visited = [False] * len(gt_label)
+ for b, s, l in zip(bbox, score, label):
+ pred = b.tolist() if isinstance(b, np.ndarray) else b
+ max_idx = -1
+ max_overlap = -1.0
+ for i, gl in enumerate(gt_label):
+ if int(gl) == int(l):
+ if len(gt_box[i]) == 5:
+ overlap = calc_rbox_iou(pred, gt_box[i])
+ else:
+ overlap = jaccard_overlap(pred, gt_box[i],
+ self.is_bbox_normalized)
+ if overlap > max_overlap:
+ max_overlap = overlap
+ max_idx = i
+
+ if max_overlap > self.overlap_thresh:
+ if self.evaluate_difficult or \
+ int(np.array(difficult[max_idx])) == 0:
+ if not visited[max_idx]:
+ self.class_score_poss[int(l)].append([s, 1.0])
+ visited[max_idx] = True
+ else:
+ self.class_score_poss[int(l)].append([s, 0.0])
+ else:
+ self.class_score_poss[int(l)].append([s, 0.0])
+
+ def reset(self):
+ """
+        Reset metric statistics
+ """
+ self.class_score_poss = [[] for _ in range(self.class_num)]
+ self.class_gt_counts = [0] * self.class_num
+ self.mAP = 0.0
+
+ def accumulate(self):
+ """
+ Accumulate metric results and calculate mAP
+ """
+ mAP = 0.
+ valid_cnt = 0
+ eval_results = []
+ for score_pos, count in zip(self.class_score_poss,
+ self.class_gt_counts):
+ if count == 0: continue
+ if len(score_pos) == 0:
+ valid_cnt += 1
+ continue
+
+ accum_tp_list, accum_fp_list = \
+ self._get_tp_fp_accum(score_pos)
+ precision = []
+ recall = []
+ for ac_tp, ac_fp in zip(accum_tp_list, accum_fp_list):
+ precision.append(float(ac_tp) / (ac_tp + ac_fp))
+ recall.append(float(ac_tp) / count)
+
+ one_class_ap = 0.0
+ if self.map_type == '11point':
+ max_precisions = [0.] * 11
+ start_idx = len(precision) - 1
+ for j in range(10, -1, -1):
+ for i in range(start_idx, -1, -1):
+ if recall[i] < float(j) / 10.:
+ start_idx = i
+ if j > 0:
+ max_precisions[j - 1] = max_precisions[j]
+ break
+ else:
+ if max_precisions[j] < precision[i]:
+ max_precisions[j] = precision[i]
+ one_class_ap = sum(max_precisions) / 11.
+ mAP += one_class_ap
+ valid_cnt += 1
+ elif self.map_type == 'integral':
+ import math
+ prev_recall = 0.
+ for i in range(len(precision)):
+ recall_gap = math.fabs(recall[i] - prev_recall)
+ if recall_gap > 1e-6:
+ one_class_ap += precision[i] * recall_gap
+ prev_recall = recall[i]
+ mAP += one_class_ap
+ valid_cnt += 1
+ else:
+ logger.error("Unspported mAP type {}".format(self.map_type))
+ sys.exit(1)
+ eval_results.append({
+ 'class': self.classes[valid_cnt - 1],
+ 'ap': one_class_ap,
+ 'precision': precision,
+ 'recall': recall,
+ })
+ self.eval_results = eval_results
+ self.mAP = mAP / float(valid_cnt) if valid_cnt > 0 else mAP
+
+ def get_map(self):
+ """
+ Get mAP result
+ """
+ if self.mAP is None:
+ logger.error("mAP is not calculated.")
+ if self.classwise:
+ # Compute per-category AP and PR curve
+ try:
+ from terminaltables import AsciiTable
+ except Exception as e:
+ logger.error(
+                    'terminaltables not found, please install terminaltables, '
+ 'for example: `pip install terminaltables`.')
+ raise e
+ results_per_category = []
+ for eval_result in self.eval_results:
+ results_per_category.append(
+ (str(eval_result['class']),
+ '{:0.3f}'.format(float(eval_result['ap']))))
+ draw_pr_curve(
+ eval_result['precision'],
+ eval_result['recall'],
+ out_dir='voc_pr_curve',
+ file_name='{}_precision_recall_curve.jpg'.format(
+ eval_result['class']))
+
+ num_columns = min(6, len(results_per_category) * 2)
+ results_flatten = list(itertools.chain(*results_per_category))
+ headers = ['category', 'AP'] * (num_columns // 2)
+ results_2d = itertools.zip_longest(
+ *[results_flatten[i::num_columns] for i in range(num_columns)])
+ table_data = [headers]
+ table_data += [result for result in results_2d]
+ table = AsciiTable(table_data)
+ logger.info('Per-category of VOC AP: \n{}'.format(table.table))
+ logger.info(
+ "per-category PR curve has output to voc_pr_curve folder.")
+ return self.mAP
+
+ def _get_tp_fp_accum(self, score_pos_list):
+ """
+ Calculate accumulating true/false positive results from
+ [score, pos] records
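+
+        Example (hypothetical records): [[0.9, 1.], [0.8, 0.], [0.7, 1.]]
+        sorted by score yields accum_tp [1, 1, 2] and accum_fp [0, 1, 1].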
+ """
+ sorted_list = sorted(score_pos_list, key=lambda s: s[0], reverse=True)
+ accum_tp = 0
+ accum_fp = 0
+ accum_tp_list = []
+ accum_fp_list = []
+ for (score, pos) in sorted_list:
+ accum_tp += int(pos)
+ accum_tp_list.append(accum_tp)
+ accum_fp += 1 - int(pos)
+ accum_fp_list.append(accum_fp)
+ return accum_tp_list, accum_fp_list
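+
+# Minimal usage sketch of DetectionMAP (illustrative values; the two-class
+# catid2name mapping and the `samples` iterable are hypothetical):
+#
+#     dmap = DetectionMAP(class_num=2,
+#                         catid2name={0: 'person', 1: 'motorcycle'})
+#     for bbox, score, label, gt_box, gt_label in samples:
+#         dmap.update(bbox, score, label, gt_box, gt_label)
+#     dmap.accumulate()
+#     print(dmap.get_map())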
+
+
+def ap_per_class(tp, conf, pred_cls, target_cls):
+ """
+ Computes the average precision, given the recall and precision curves.
+ Method originally from https://github.com/rafaelpadilla/Object-Detection-Metrics.
+
+ Args:
+ tp (list): True positives.
+ conf (list): Objectness value from 0-1.
+ pred_cls (list): Predicted object classes.
+ target_cls (list): Target object classes.
+ """
+ tp, conf, pred_cls, target_cls = np.array(tp), np.array(conf), np.array(
+ pred_cls), np.array(target_cls)
+
+ # Sort by objectness
+ i = np.argsort(-conf)
+ tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]
+
+ # Find unique classes
+ unique_classes = np.unique(np.concatenate((pred_cls, target_cls), 0))
+
+ # Create Precision-Recall curve and compute AP for each class
+ ap, p, r = [], [], []
+ for c in unique_classes:
+ i = pred_cls == c
+ n_gt = sum(target_cls == c) # Number of ground truth objects
+ n_p = sum(i) # Number of predicted objects
+
+ if (n_p == 0) and (n_gt == 0):
+ continue
+ elif (n_p == 0) or (n_gt == 0):
+ ap.append(0)
+ r.append(0)
+ p.append(0)
+ else:
+ # Accumulate FPs and TPs
+ fpc = np.cumsum(1 - tp[i])
+ tpc = np.cumsum(tp[i])
+
+ # Recall
+ recall_curve = tpc / (n_gt + 1e-16)
+ r.append(tpc[-1] / (n_gt + 1e-16))
+
+ # Precision
+ precision_curve = tpc / (tpc + fpc)
+ p.append(tpc[-1] / (tpc[-1] + fpc[-1]))
+
+ # AP from recall-precision curve
+ ap.append(compute_ap(recall_curve, precision_curve))
+
+ return np.array(ap), unique_classes.astype('int32'), np.array(r), np.array(
+ p)
+
+
+def compute_ap(recall, precision):
+ """
+ Computes the average precision, given the recall and precision curves.
+ Code originally from https://github.com/rbgirshick/py-faster-rcnn.
+
+ Args:
+ recall (list): The recall curve.
+ precision (list): The precision curve.
+
+ Returns:
+ The average precision as computed in py-faster-rcnn.
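+
+    Example (hypothetical curves; the precision envelope turns the curve
+    into 0.5 * 1.0 + 0.5 * 0.5):
+
+        >>> float(compute_ap([0.5, 1.0], [1.0, 0.5]))
+        0.75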
+ """
+ # correct AP calculation
+ # first append sentinel values at the end
+ mrec = np.concatenate(([0.], recall, [1.]))
+ mpre = np.concatenate(([0.], precision, [0.]))
+
+ # compute the precision envelope
+ for i in range(mpre.size - 1, 0, -1):
+ mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
+
+ # to calculate area under PR curve, look for points
+ # where X axis (recall) changes value
+ i = np.where(mrec[1:] != mrec[:-1])[0]
+
+ # and sum (\Delta recall) * prec
+ ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
+ return ap
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mcmot_metrics.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mcmot_metrics.py
new file mode 100644
index 000000000..9f329c8e0
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mcmot_metrics.py
@@ -0,0 +1,467 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import copy
+import sys
+import math
+from collections import defaultdict
+from motmetrics.math_util import quiet_divide
+
+import numpy as np
+import pandas as pd
+
+import paddle
+import paddle.nn.functional as F
+from .metrics import Metric
+import motmetrics as mm
+import openpyxl
+metrics = mm.metrics.motchallenge_metrics
+mh = mm.metrics.create()
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['MCMOTEvaluator', 'MCMOTMetric']
+
+METRICS_LIST = [
+ 'num_frames', 'num_matches', 'num_switches', 'num_transfer', 'num_ascend',
+ 'num_migrate', 'num_false_positives', 'num_misses', 'num_detections',
+ 'num_objects', 'num_predictions', 'num_unique_objects', 'mostly_tracked',
+ 'partially_tracked', 'mostly_lost', 'num_fragmentations', 'motp', 'mota',
+ 'precision', 'recall', 'idfp', 'idfn', 'idtp', 'idp', 'idr', 'idf1'
+]
+
+NAME_MAP = {
+ 'num_frames': 'num_frames',
+ 'num_matches': 'num_matches',
+ 'num_switches': 'IDs',
+ 'num_transfer': 'IDt',
+ 'num_ascend': 'IDa',
+ 'num_migrate': 'IDm',
+ 'num_false_positives': 'FP',
+ 'num_misses': 'FN',
+ 'num_detections': 'num_detections',
+ 'num_objects': 'num_objects',
+ 'num_predictions': 'num_predictions',
+ 'num_unique_objects': 'GT',
+ 'mostly_tracked': 'MT',
+ 'partially_tracked': 'partially_tracked',
+ 'mostly_lost': 'ML',
+ 'num_fragmentations': 'FM',
+ 'motp': 'MOTP',
+ 'mota': 'MOTA',
+ 'precision': 'Prcn',
+ 'recall': 'Rcll',
+ 'idfp': 'idfp',
+ 'idfn': 'idfn',
+ 'idtp': 'idtp',
+ 'idp': 'IDP',
+ 'idr': 'IDR',
+ 'idf1': 'IDF1'
+}
+
+
+def parse_accs_metrics(seq_acc, index_name, verbose=False):
+ """
+ Parse the evaluation indicators of multiple MOTAccumulator
+ """
+ mh = mm.metrics.create()
+ summary = MCMOTEvaluator.get_summary(seq_acc, index_name, METRICS_LIST)
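+    # recompute the OVERALL motp as a detection-count weighted average of
+    # the per-accumulator motp values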
+ summary.loc['OVERALL', 'motp'] = (summary['motp'] * summary['num_detections']).sum() / \
+ summary.loc['OVERALL', 'num_detections']
+ if verbose:
+ strsummary = mm.io.render_summary(
+ summary, formatters=mh.formatters, namemap=NAME_MAP)
+ print(strsummary)
+
+ return summary
+
+
+def seqs_overall_metrics(summary_df, verbose=False):
+ """
+ Calculate overall metrics for multiple sequences
+ """
+ add_col = [
+ 'num_frames', 'num_matches', 'num_switches', 'num_transfer',
+ 'num_ascend', 'num_migrate', 'num_false_positives', 'num_misses',
+ 'num_detections', 'num_objects', 'num_predictions',
+ 'num_unique_objects', 'mostly_tracked', 'partially_tracked',
+ 'mostly_lost', 'num_fragmentations', 'idfp', 'idfn', 'idtp'
+ ]
+ calc_col = ['motp', 'mota', 'precision', 'recall', 'idp', 'idr', 'idf1']
+ calc_df = summary_df.copy()
+
+ overall_dic = {}
+ for col in add_col:
+ overall_dic[col] = calc_df[col].sum()
+
+ for col in calc_col:
+ overall_dic[col] = getattr(MCMOTMetricOverall, col + '_overall')(
+ calc_df, overall_dic)
+
+ overall_df = pd.DataFrame(overall_dic, index=['overall_calc'])
+ calc_df = pd.concat([calc_df, overall_df])
+
+ if verbose:
+ mh = mm.metrics.create()
+ str_calc_df = mm.io.render_summary(
+ calc_df, formatters=mh.formatters, namemap=NAME_MAP)
+ print(str_calc_df)
+
+ return calc_df
+
+
+class MCMOTMetricOverall(object):
+ def motp_overall(summary_df, overall_dic):
+ motp = quiet_divide((summary_df['motp'] *
+ summary_df['num_detections']).sum(),
+ overall_dic['num_detections'])
+ return motp
+
+ def mota_overall(summary_df, overall_dic):
+ del summary_df
+ mota = 1. - quiet_divide(
+ (overall_dic['num_misses'] + overall_dic['num_switches'] +
+ overall_dic['num_false_positives']), overall_dic['num_objects'])
+ return mota
+
+ def precision_overall(summary_df, overall_dic):
+ del summary_df
+ precision = quiet_divide(overall_dic['num_detections'], (
+ overall_dic['num_false_positives'] + overall_dic['num_detections']))
+ return precision
+
+ def recall_overall(summary_df, overall_dic):
+ del summary_df
+ recall = quiet_divide(overall_dic['num_detections'],
+ overall_dic['num_objects'])
+ return recall
+
+ def idp_overall(summary_df, overall_dic):
+ del summary_df
+ idp = quiet_divide(overall_dic['idtp'],
+ (overall_dic['idtp'] + overall_dic['idfp']))
+ return idp
+
+ def idr_overall(summary_df, overall_dic):
+ del summary_df
+ idr = quiet_divide(overall_dic['idtp'],
+ (overall_dic['idtp'] + overall_dic['idfn']))
+ return idr
+
+ def idf1_overall(summary_df, overall_dic):
+ del summary_df
+ idf1 = quiet_divide(2. * overall_dic['idtp'], (
+ overall_dic['num_objects'] + overall_dic['num_predictions']))
+ return idf1
+
+
+def read_mcmot_results_union(filename, is_gt, is_ignore):
+ results_dict = dict()
+ if os.path.isfile(filename):
+ all_result = np.loadtxt(filename, delimiter=',')
+ if all_result.shape[0] == 0 or all_result.shape[1] < 7:
+ return results_dict
+ if is_ignore:
+ return results_dict
+ if is_gt:
+ # only for test use
+ all_result = all_result[all_result[:, 7] != 0]
+ all_result[:, 7] = all_result[:, 7] - 1
+
+ if all_result.shape[0] == 0:
+ return results_dict
+
+ class_unique = np.unique(all_result[:, 7])
+
+ last_max_id = 0
+ result_cls_list = []
+ for cls in class_unique:
+ result_cls_split = all_result[all_result[:, 7] == cls]
+ result_cls_split[:, 1] = result_cls_split[:, 1] + last_max_id
+            # make sure track ids are different between every category
+ last_max_id = max(np.unique(result_cls_split[:, 1])) + 1
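+            # e.g. (hypothetical ids) if class 0 uses track ids {1, 2},
+            # last_max_id becomes 3 and the next class's ids are shifted
+            # up by 3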
+ result_cls_list.append(result_cls_split)
+
+ results_con = np.concatenate(result_cls_list)
+
+ for line in range(len(results_con)):
+ linelist = results_con[line]
+ fid = int(linelist[0])
+ if fid < 1:
+ continue
+ results_dict.setdefault(fid, list())
+
+ if is_gt:
+ score = 1
+ else:
+ score = float(linelist[6])
+
+ tlwh = tuple(map(float, linelist[2:6]))
+ target_id = int(linelist[1])
+ cls = int(linelist[7])
+
+ results_dict[fid].append((tlwh, target_id, cls, score))
+
+ return results_dict
+
+
+def read_mcmot_results(filename, is_gt, is_ignore):
+ results_dict = dict()
+ if os.path.isfile(filename):
+ with open(filename, 'r') as f:
+ for line in f.readlines():
+ linelist = line.strip().split(',')
+ if len(linelist) < 7:
+ continue
+ fid = int(linelist[0])
+ if fid < 1:
+ continue
+ cid = int(linelist[7])
+ if is_gt:
+ score = 1
+ # only for test use
+ cid -= 1
+ else:
+ score = float(linelist[6])
+
+ cls_result_dict = results_dict.setdefault(cid, dict())
+ cls_result_dict.setdefault(fid, list())
+
+ tlwh = tuple(map(float, linelist[2:6]))
+ target_id = int(linelist[1])
+ cls_result_dict[fid].append((tlwh, target_id, score))
+ return results_dict
+
+
+def read_results(filename,
+ data_type,
+ is_gt=False,
+ is_ignore=False,
+ multi_class=False,
+ union=False):
+ if data_type in ['mcmot', 'lab']:
+ if multi_class:
+ if union:
+ # The results are evaluated by union all the categories.
+ # Track IDs between different categories cannot be duplicate.
+ read_fun = read_mcmot_results_union
+ else:
+ # The results are evaluated separately by category.
+ read_fun = read_mcmot_results
+ else:
+ raise ValueError('multi_class: {}, MCMOT should have cls_id.'.
+ format(multi_class))
+ else:
+ raise ValueError('Unknown data type: {}'.format(data_type))
+
+ return read_fun(filename, is_gt, is_ignore)
+
+
+def unzip_objs(objs):
+ if len(objs) > 0:
+ tlwhs, ids, scores = zip(*objs)
+ else:
+ tlwhs, ids, scores = [], [], []
+ tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)
+ return tlwhs, ids, scores
+
+
+def unzip_objs_cls(objs):
+ if len(objs) > 0:
+ tlwhs, ids, cls, scores = zip(*objs)
+ else:
+ tlwhs, ids, cls, scores = [], [], [], []
+ tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)
+ ids = np.array(ids)
+ cls = np.array(cls)
+ scores = np.array(scores)
+ return tlwhs, ids, cls, scores
+
+
+class MCMOTEvaluator(object):
+ def __init__(self, data_root, seq_name, data_type, num_classes):
+ self.data_root = data_root
+ self.seq_name = seq_name
+ self.data_type = data_type
+ self.num_classes = num_classes
+
+ self.load_annotations()
+ self.reset_accumulator()
+
+ self.class_accs = []
+
+ def load_annotations(self):
+ assert self.data_type == 'mcmot'
+ self.gt_filename = os.path.join(self.data_root, '../', '../',
+ 'sequences',
+ '{}.txt'.format(self.seq_name))
+
+ def reset_accumulator(self):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+ self.acc = mm.MOTAccumulator(auto_id=True)
+
+ def eval_frame_dict(self, trk_objs, gt_objs, rtn_events=False, union=False):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+ if union:
+ trk_tlwhs, trk_ids, trk_cls = unzip_objs_cls(trk_objs)[:3]
+ gt_tlwhs, gt_ids, gt_cls = unzip_objs_cls(gt_objs)[:3]
+
+ # get distance matrix
+ iou_distance = mm.distances.iou_matrix(
+ gt_tlwhs, trk_tlwhs, max_iou=0.5)
+
+ # Set the distance between objects of different categories to nan
+ gt_cls_len = len(gt_cls)
+ trk_cls_len = len(trk_cls)
+ # When the number of GT or Trk is 0, iou_distance dimension is (0,0)
+ if gt_cls_len != 0 and trk_cls_len != 0:
+ gt_cls = gt_cls.reshape(gt_cls_len, 1)
+ gt_cls = np.repeat(gt_cls, trk_cls_len, axis=1)
+ trk_cls = trk_cls.reshape(1, trk_cls_len)
+ trk_cls = np.repeat(trk_cls, gt_cls_len, axis=0)
+ iou_distance = np.where(gt_cls == trk_cls, iou_distance, np.nan)
+
+ else:
+ trk_tlwhs, trk_ids = unzip_objs(trk_objs)[:2]
+ gt_tlwhs, gt_ids = unzip_objs(gt_objs)[:2]
+
+ # get distance matrix
+ iou_distance = mm.distances.iou_matrix(
+ gt_tlwhs, trk_tlwhs, max_iou=0.5)
+
+ self.acc.update(gt_ids, trk_ids, iou_distance)
+
+ if rtn_events and iou_distance.size > 0 and hasattr(self.acc,
+ 'mot_events'):
+ events = self.acc.mot_events # only supported by https://github.com/longcw/py-motmetrics
+ else:
+ events = None
+ return events
+
+ def eval_file(self, result_filename):
+ # evaluation of each category
+ gt_frame_dict = read_results(
+ self.gt_filename,
+ self.data_type,
+ is_gt=True,
+ multi_class=True,
+ union=False)
+ result_frame_dict = read_results(
+ result_filename,
+ self.data_type,
+ is_gt=False,
+ multi_class=True,
+ union=False)
+
+ for cid in range(self.num_classes):
+ self.reset_accumulator()
+ cls_result_frame_dict = result_frame_dict.setdefault(cid, dict())
+ cls_gt_frame_dict = gt_frame_dict.setdefault(cid, dict())
+
+ # only labeled frames will be evaluated
+ frames = sorted(list(set(cls_gt_frame_dict.keys())))
+
+ for frame_id in frames:
+ trk_objs = cls_result_frame_dict.get(frame_id, [])
+ gt_objs = cls_gt_frame_dict.get(frame_id, [])
+ self.eval_frame_dict(trk_objs, gt_objs, rtn_events=False)
+
+ self.class_accs.append(self.acc)
+
+ return self.class_accs
+
+ @staticmethod
+ def get_summary(accs,
+ names,
+ metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1',
+ 'precision', 'recall')):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+
+ names = copy.deepcopy(names)
+ if metrics is None:
+ metrics = mm.metrics.motchallenge_metrics
+ metrics = copy.deepcopy(metrics)
+
+ mh = mm.metrics.create()
+ summary = mh.compute_many(
+ accs, metrics=metrics, names=names, generate_overall=True)
+
+ return summary
+
+ @staticmethod
+ def save_summary(summary, filename):
+ import pandas as pd
+ writer = pd.ExcelWriter(filename)
+ summary.to_excel(writer)
+ writer.save()
+
+
+class MCMOTMetric(Metric):
+ def __init__(self, num_classes, save_summary=False):
+ self.num_classes = num_classes
+ self.save_summary = save_summary
+ self.MCMOTEvaluator = MCMOTEvaluator
+ self.result_root = None
+ self.reset()
+
+ self.seqs_overall = defaultdict(list)
+
+ def reset(self):
+ self.accs = []
+ self.seqs = []
+
+ def update(self, data_root, seq, data_type, result_root, result_filename):
+ evaluator = self.MCMOTEvaluator(data_root, seq, data_type,
+ self.num_classes)
+ seq_acc = evaluator.eval_file(result_filename)
+ self.accs.append(seq_acc)
+ self.seqs.append(seq)
+ self.result_root = result_root
+
+ cls_index_name = [
+ '{}_{}'.format(seq, i) for i in range(self.num_classes)
+ ]
+ summary = parse_accs_metrics(seq_acc, cls_index_name)
+ summary.rename(
+ index={'OVERALL': '{}_OVERALL'.format(seq)}, inplace=True)
+ for row in range(len(summary)):
+ self.seqs_overall[row].append(summary.iloc[row:row + 1])
+
+ def accumulate(self):
+ self.cls_summary_list = []
+ for row in range(self.num_classes):
+ seqs_cls_df = pd.concat(self.seqs_overall[row])
+ seqs_cls_summary = seqs_overall_metrics(seqs_cls_df)
+ cls_summary_overall = seqs_cls_summary.iloc[-1:].copy()
+ cls_summary_overall.rename(
+ index={'overall_calc': 'overall_calc_{}'.format(row)},
+ inplace=True)
+ self.cls_summary_list.append(cls_summary_overall)
+
+ def log(self):
+ seqs_summary = seqs_overall_metrics(
+ pd.concat(self.seqs_overall[self.num_classes]), verbose=True)
+ class_summary = seqs_overall_metrics(
+ pd.concat(self.cls_summary_list), verbose=True)
+
+ def get_results(self):
+ return 1
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/metrics.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/metrics.py
new file mode 100644
index 000000000..f9913b7fb
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/metrics.py
@@ -0,0 +1,432 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+import json
+import paddle
+import numpy as np
+
+from .map_utils import prune_zero_padding, DetectionMAP
+from .coco_utils import get_infer_results, cocoapi_eval
+from .widerface_utils import face_eval_run
+from ppdet.data.source.category import get_categories
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'Metric',
+ 'COCOMetric',
+ 'VOCMetric',
+ 'WiderFaceMetric',
+ 'get_infer_results',
+ 'RBoxMetric',
+ 'SNIPERCOCOMetric'
+]
+
+COCO_SIGMAS = np.array([
+ .26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87,
+ .89, .89
+]) / 10.0
+CROWD_SIGMAS = np.array(
+ [.79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89, .79,
+ .79]) / 10.0
+
+
+class Metric(paddle.metric.Metric):
+ def name(self):
+ return self.__class__.__name__
+
+ def reset(self):
+ pass
+
+ def accumulate(self):
+ pass
+
+    # paddle.metric.Metric defines :meth:`update`, :meth:`accumulate` and
+    # :meth:`reset`; in ppdet, we also need the following 2 methods:
+
+ # abstract method for logging metric results
+ def log(self):
+ pass
+
+ # abstract method for getting metric results
+ def get_results(self):
+ pass
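+
+# Minimal sketch of a custom metric following this interface (illustrative,
+# not used anywhere in this module; assumes `outputs` carries a 'bbox_num'
+# tensor as the detection metrics below do):
+#
+#     class BoxCountMetric(Metric):
+#         def reset(self):
+#             self.total = 0
+#         def update(self, inputs, outputs):
+#             self.total += int(outputs['bbox_num'].numpy().sum())
+#         def get_results(self):
+#             return {'bbox_num': self.total}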
+
+
+class COCOMetric(Metric):
+ def __init__(self, anno_file, **kwargs):
+ assert os.path.isfile(anno_file), \
+ "anno_file {} not a file".format(anno_file)
+ self.anno_file = anno_file
+ self.clsid2catid = kwargs.get('clsid2catid', None)
+ if self.clsid2catid is None:
+ self.clsid2catid, _ = get_categories('COCO', anno_file)
+ self.classwise = kwargs.get('classwise', False)
+ self.output_eval = kwargs.get('output_eval', None)
+ # TODO: bias should be unified
+ self.bias = kwargs.get('bias', 0)
+ self.save_prediction_only = kwargs.get('save_prediction_only', False)
+ self.iou_type = kwargs.get('IouType', 'bbox')
+ self.reset()
+
+ def reset(self):
+ # only bbox and mask evaluation support currently
+ self.results = {'bbox': [], 'mask': [], 'segm': [], 'keypoint': []}
+ self.eval_results = {}
+
+ def update(self, inputs, outputs):
+ outs = {}
+ # outputs Tensor -> numpy.ndarray
+ for k, v in outputs.items():
+ outs[k] = v.numpy() if isinstance(v, paddle.Tensor) else v
+
+ im_id = inputs['im_id']
+ outs['im_id'] = im_id.numpy() if isinstance(im_id,
+ paddle.Tensor) else im_id
+
+ infer_results = get_infer_results(
+ outs, self.clsid2catid, bias=self.bias)
+ self.results['bbox'] += infer_results[
+ 'bbox'] if 'bbox' in infer_results else []
+ self.results['mask'] += infer_results[
+ 'mask'] if 'mask' in infer_results else []
+ self.results['segm'] += infer_results[
+ 'segm'] if 'segm' in infer_results else []
+ self.results['keypoint'] += infer_results[
+ 'keypoint'] if 'keypoint' in infer_results else []
+
+ def accumulate(self):
+ if len(self.results['bbox']) > 0:
+ output = "bbox.json"
+ if self.output_eval:
+ output = os.path.join(self.output_eval, output)
+ with open(output, 'w') as f:
+ json.dump(self.results['bbox'], f)
+            logger.info('The bbox result is saved to {}.'.format(output))
+
+            if self.save_prediction_only:
+                logger.info('The bbox result is saved to {} and the mAP '
+                            'will not be evaluated.'.format(output))
+ else:
+ bbox_stats = cocoapi_eval(
+ output,
+ 'bbox',
+ anno_file=self.anno_file,
+ classwise=self.classwise)
+ self.eval_results['bbox'] = bbox_stats
+ sys.stdout.flush()
+
+ if len(self.results['mask']) > 0:
+ output = "mask.json"
+ if self.output_eval:
+ output = os.path.join(self.output_eval, output)
+ with open(output, 'w') as f:
+ json.dump(self.results['mask'], f)
+            logger.info('The mask result is saved to {}.'.format(output))
+
+            if self.save_prediction_only:
+                logger.info('The mask result is saved to {} and the mAP '
+                            'will not be evaluated.'.format(output))
+ else:
+ seg_stats = cocoapi_eval(
+ output,
+ 'segm',
+ anno_file=self.anno_file,
+ classwise=self.classwise)
+ self.eval_results['mask'] = seg_stats
+ sys.stdout.flush()
+
+ if len(self.results['segm']) > 0:
+ output = "segm.json"
+ if self.output_eval:
+ output = os.path.join(self.output_eval, output)
+ with open(output, 'w') as f:
+ json.dump(self.results['segm'], f)
+            logger.info('The segm result is saved to {}.'.format(output))
+
+            if self.save_prediction_only:
+                logger.info('The segm result is saved to {} and the mAP '
+                            'will not be evaluated.'.format(output))
+ else:
+ seg_stats = cocoapi_eval(
+ output,
+ 'segm',
+ anno_file=self.anno_file,
+ classwise=self.classwise)
+ self.eval_results['mask'] = seg_stats
+ sys.stdout.flush()
+
+ if len(self.results['keypoint']) > 0:
+ output = "keypoint.json"
+ if self.output_eval:
+ output = os.path.join(self.output_eval, output)
+ with open(output, 'w') as f:
+ json.dump(self.results['keypoint'], f)
+            logger.info('The keypoint result is saved to {}.'.format(output))
+
+            if self.save_prediction_only:
+                logger.info('The keypoint result is saved to {} and the mAP '
+                            'will not be evaluated.'.format(output))
+ else:
+ style = 'keypoints'
+ use_area = True
+ sigmas = COCO_SIGMAS
+ if self.iou_type == 'keypoints_crowd':
+ style = 'keypoints_crowd'
+ use_area = False
+ sigmas = CROWD_SIGMAS
+ keypoint_stats = cocoapi_eval(
+ output,
+ style,
+ anno_file=self.anno_file,
+ classwise=self.classwise,
+ sigmas=sigmas,
+ use_area=use_area)
+ self.eval_results['keypoint'] = keypoint_stats
+ sys.stdout.flush()
+
+ def log(self):
+ pass
+
+ def get_results(self):
+ return self.eval_results
+
+
+class VOCMetric(Metric):
+ def __init__(self,
+ label_list,
+ class_num=20,
+ overlap_thresh=0.5,
+ map_type='11point',
+ is_bbox_normalized=False,
+ evaluate_difficult=False,
+ classwise=False):
+ assert os.path.isfile(label_list), \
+ "label_list {} not a file".format(label_list)
+ self.clsid2catid, self.catid2name = get_categories('VOC', label_list)
+
+ self.overlap_thresh = overlap_thresh
+ self.map_type = map_type
+ self.evaluate_difficult = evaluate_difficult
+ self.detection_map = DetectionMAP(
+ class_num=class_num,
+ overlap_thresh=overlap_thresh,
+ map_type=map_type,
+ is_bbox_normalized=is_bbox_normalized,
+ evaluate_difficult=evaluate_difficult,
+ catid2name=self.catid2name,
+ classwise=classwise)
+
+ self.reset()
+
+ def reset(self):
+ self.detection_map.reset()
+
+ def update(self, inputs, outputs):
+ bbox_np = outputs['bbox'].numpy()
+ bboxes = bbox_np[:, 2:]
+ scores = bbox_np[:, 1]
+ labels = bbox_np[:, 0]
+ bbox_lengths = outputs['bbox_num'].numpy()
+
+        if bboxes is None or bboxes.shape == (1, 1):
+ return
+ gt_boxes = inputs['gt_bbox']
+ gt_labels = inputs['gt_class']
+ difficults = inputs['difficult'] if not self.evaluate_difficult \
+ else None
+
+ scale_factor = inputs['scale_factor'].numpy(
+ ) if 'scale_factor' in inputs else np.ones(
+ (gt_boxes.shape[0], 2)).astype('float32')
+
+ bbox_idx = 0
+ for i in range(len(gt_boxes)):
+ gt_box = gt_boxes[i].numpy()
+ h, w = scale_factor[i]
+ gt_box = gt_box / np.array([w, h, w, h])
+ gt_label = gt_labels[i].numpy()
+ difficult = None if difficults is None \
+ else difficults[i].numpy()
+ bbox_num = bbox_lengths[i]
+ bbox = bboxes[bbox_idx:bbox_idx + bbox_num]
+ score = scores[bbox_idx:bbox_idx + bbox_num]
+ label = labels[bbox_idx:bbox_idx + bbox_num]
+ gt_box, gt_label, difficult = prune_zero_padding(gt_box, gt_label,
+ difficult)
+ self.detection_map.update(bbox, score, label, gt_box, gt_label,
+ difficult)
+ bbox_idx += bbox_num
+
+ def accumulate(self):
+ logger.info("Accumulating evaluatation results...")
+ self.detection_map.accumulate()
+
+ def log(self):
+ map_stat = 100. * self.detection_map.get_map()
+ logger.info("mAP({:.2f}, {}) = {:.2f}%".format(self.overlap_thresh,
+ self.map_type, map_stat))
+
+ def get_results(self):
+ return {'bbox': [self.detection_map.get_map()]}
+
+
+class WiderFaceMetric(Metric):
+ def __init__(self, image_dir, anno_file, multi_scale=True):
+ self.image_dir = image_dir
+ self.anno_file = anno_file
+ self.multi_scale = multi_scale
+ self.clsid2catid, self.catid2name = get_categories('widerface')
+
+ def update(self, model):
+
+ face_eval_run(
+ model,
+ self.image_dir,
+ self.anno_file,
+ pred_dir='output/pred',
+ eval_mode='widerface',
+ multi_scale=self.multi_scale)
+
+
+class RBoxMetric(Metric):
+ def __init__(self, anno_file, **kwargs):
+ assert os.path.isfile(anno_file), \
+ "anno_file {} not a file".format(anno_file)
+ assert os.path.exists(anno_file), "anno_file {} not exists".format(
+ anno_file)
+ self.anno_file = anno_file
+ self.gt_anno = json.load(open(self.anno_file))
+ cats = self.gt_anno['categories']
+ self.clsid2catid = {i: cat['id'] for i, cat in enumerate(cats)}
+ self.catid2clsid = {cat['id']: i for i, cat in enumerate(cats)}
+ self.catid2name = {cat['id']: cat['name'] for cat in cats}
+ self.classwise = kwargs.get('classwise', False)
+ self.output_eval = kwargs.get('output_eval', None)
+ # TODO: bias should be unified
+ self.bias = kwargs.get('bias', 0)
+ self.save_prediction_only = kwargs.get('save_prediction_only', False)
+ self.iou_type = kwargs.get('IouType', 'bbox')
+ self.overlap_thresh = kwargs.get('overlap_thresh', 0.5)
+ self.map_type = kwargs.get('map_type', '11point')
+ self.evaluate_difficult = kwargs.get('evaluate_difficult', False)
+ class_num = len(self.catid2name)
+ self.detection_map = DetectionMAP(
+ class_num=class_num,
+ overlap_thresh=self.overlap_thresh,
+ map_type=self.map_type,
+ is_bbox_normalized=False,
+ evaluate_difficult=self.evaluate_difficult,
+ catid2name=self.catid2name,
+ classwise=self.classwise)
+
+ self.reset()
+
+ def reset(self):
+ self.result_bbox = []
+ self.detection_map.reset()
+
+ def update(self, inputs, outputs):
+ outs = {}
+ # outputs Tensor -> numpy.ndarray
+ for k, v in outputs.items():
+ outs[k] = v.numpy() if isinstance(v, paddle.Tensor) else v
+
+ im_id = inputs['im_id']
+ outs['im_id'] = im_id.numpy() if isinstance(im_id,
+ paddle.Tensor) else im_id
+
+ infer_results = get_infer_results(
+ outs, self.clsid2catid, bias=self.bias)
+ self.result_bbox += infer_results[
+ 'bbox'] if 'bbox' in infer_results else []
+ bbox = [b['bbox'] for b in self.result_bbox]
+ score = [b['score'] for b in self.result_bbox]
+ label = [b['category_id'] for b in self.result_bbox]
+ label = [self.catid2clsid[e] for e in label]
+ gt_box = [
+ e['bbox'] for e in self.gt_anno['annotations']
+ if e['image_id'] == outs['im_id']
+ ]
+ gt_label = [
+ e['category_id'] for e in self.gt_anno['annotations']
+ if e['image_id'] == outs['im_id']
+ ]
+ gt_label = [self.catid2clsid[e] for e in gt_label]
+ self.detection_map.update(bbox, score, label, gt_box, gt_label)
+
+ def accumulate(self):
+ if len(self.result_bbox) > 0:
+ output = "bbox.json"
+ if self.output_eval:
+ output = os.path.join(self.output_eval, output)
+ with open(output, 'w') as f:
+ json.dump(self.result_bbox, f)
+            logger.info('The bbox result is saved to {}.'.format(output))
+
+            if self.save_prediction_only:
+                logger.info('The bbox result is saved to {} and the mAP '
+                            'will not be evaluated.'.format(output))
+ else:
+ logger.info("Accumulating evaluatation results...")
+ self.detection_map.accumulate()
+
+ def log(self):
+ map_stat = 100. * self.detection_map.get_map()
+ logger.info("mAP({:.2f}, {}) = {:.2f}%".format(self.overlap_thresh,
+ self.map_type, map_stat))
+
+ def get_results(self):
+ return {'bbox': [self.detection_map.get_map()]}
+
+
+class SNIPERCOCOMetric(COCOMetric):
+ def __init__(self, anno_file, **kwargs):
+ super(SNIPERCOCOMetric, self).__init__(anno_file, **kwargs)
+ self.dataset = kwargs["dataset"]
+ self.chip_results = []
+
+ def reset(self):
+ # only bbox and mask evaluation support currently
+ self.results = {'bbox': [], 'mask': [], 'segm': [], 'keypoint': []}
+ self.eval_results = {}
+ self.chip_results = []
+
+ def update(self, inputs, outputs):
+ outs = {}
+ # outputs Tensor -> numpy.ndarray
+ for k, v in outputs.items():
+ outs[k] = v.numpy() if isinstance(v, paddle.Tensor) else v
+
+ im_id = inputs['im_id']
+ outs['im_id'] = im_id.numpy() if isinstance(im_id,
+ paddle.Tensor) else im_id
+
+ self.chip_results.append(outs)
+
+ def accumulate(self):
+ results = self.dataset.anno_cropper.aggregate_chips_detections(self.chip_results)
+ for outs in results:
+ infer_results = get_infer_results(outs, self.clsid2catid, bias=self.bias)
+ self.results['bbox'] += infer_results['bbox'] if 'bbox' in infer_results else []
+
+ super(SNIPERCOCOMetric, self).accumulate()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mot_metrics.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mot_metrics.py
new file mode 100644
index 000000000..85cba3630
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/mot_metrics.py
@@ -0,0 +1,1232 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import copy
+import sys
+import math
+from collections import defaultdict
+import numpy as np
+import paddle
+import paddle.nn.functional as F
+from ppdet.modeling.bbox_utils import bbox_iou_np_expand
+from .map_utils import ap_per_class
+from .metrics import Metric
+from .munkres import Munkres
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['MOTEvaluator', 'MOTMetric', 'JDEDetMetric', 'KITTIMOTMetric']
+
+
+def read_mot_results(filename, is_gt=False, is_ignore=False):
+ valid_labels = {1}
+ ignore_labels = {2, 7, 8, 12} # only in motchallenge datasets like 'MOT16'
+ results_dict = dict()
+ if os.path.isfile(filename):
+ with open(filename, 'r') as f:
+ for line in f.readlines():
+ linelist = line.split(',')
+ if len(linelist) < 7:
+ continue
+ fid = int(linelist[0])
+ if fid < 1:
+ continue
+ results_dict.setdefault(fid, list())
+
+ box_size = float(linelist[4]) * float(linelist[5])
+
+ if is_gt:
+ label = int(float(linelist[7]))
+ mark = int(float(linelist[6]))
+ if mark == 0 or label not in valid_labels:
+ continue
+ score = 1
+ elif is_ignore:
+ if 'MOT16-' in filename or 'MOT17-' in filename or 'MOT15-' in filename or 'MOT20-' in filename:
+ label = int(float(linelist[7]))
+ vis_ratio = float(linelist[8])
+ if label not in ignore_labels and vis_ratio >= 0:
+ continue
+ else:
+ continue
+ score = 1
+ else:
+ score = float(linelist[6])
+
+ tlwh = tuple(map(float, linelist[2:6]))
+ target_id = int(linelist[1])
+
+ results_dict[fid].append((tlwh, target_id, score))
+ return results_dict
+
+
+"""
+MOT dataset label list, see in https://motchallenge.net
+labels={'ped', ... % 1
+ 'person_on_vhcl', ... % 2
+ 'car', ... % 3
+ 'bicycle', ... % 4
+ 'mbike', ... % 5
+ 'non_mot_vhcl', ... % 6
+ 'static_person', ... % 7
+ 'distractor', ... % 8
+ 'occluder', ... % 9
+ 'occluder_on_grnd', ... % 10
+ 'occluder_full', ... % 11
+ 'reflection', ... % 12
+ 'crowd' ... % 13
+};
+"""
+
+
+def unzip_objs(objs):
+ if len(objs) > 0:
+ tlwhs, ids, scores = zip(*objs)
+ else:
+ tlwhs, ids, scores = [], [], []
+ tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)
+ return tlwhs, ids, scores
+
+
+class MOTEvaluator(object):
+ def __init__(self, data_root, seq_name, data_type):
+ self.data_root = data_root
+ self.seq_name = seq_name
+ self.data_type = data_type
+
+ self.load_annotations()
+ self.reset_accumulator()
+
+ def load_annotations(self):
+ assert self.data_type == 'mot'
+ gt_filename = os.path.join(self.data_root, self.seq_name, 'gt',
+ 'gt.txt')
+ self.gt_frame_dict = read_mot_results(gt_filename, is_gt=True)
+ self.gt_ignore_frame_dict = read_mot_results(
+ gt_filename, is_ignore=True)
+
+ def reset_accumulator(self):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+ self.acc = mm.MOTAccumulator(auto_id=True)
+
+ def eval_frame(self, frame_id, trk_tlwhs, trk_ids, rtn_events=False):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+ # results
+ trk_tlwhs = np.copy(trk_tlwhs)
+ trk_ids = np.copy(trk_ids)
+
+ # gts
+ gt_objs = self.gt_frame_dict.get(frame_id, [])
+ gt_tlwhs, gt_ids = unzip_objs(gt_objs)[:2]
+
+ # ignore boxes
+ ignore_objs = self.gt_ignore_frame_dict.get(frame_id, [])
+ ignore_tlwhs = unzip_objs(ignore_objs)[0]
+
+ # remove ignored results
+ keep = np.ones(len(trk_tlwhs), dtype=bool)
+ iou_distance = mm.distances.iou_matrix(
+ ignore_tlwhs, trk_tlwhs, max_iou=0.5)
+ if len(iou_distance) > 0:
+ match_is, match_js = mm.lap.linear_sum_assignment(iou_distance)
+            match_is, match_js = map(lambda a: np.asarray(a, dtype=int),
+                                     [match_is, match_js])
+            match_ious = iou_distance[match_is, match_js]
+
+            match_js = match_js[np.logical_not(np.isnan(match_ious))]
+            keep[match_js] = False
+ trk_tlwhs = trk_tlwhs[keep]
+ trk_ids = trk_ids[keep]
+
+ # get distance matrix
+ iou_distance = mm.distances.iou_matrix(gt_tlwhs, trk_tlwhs, max_iou=0.5)
+
+ # acc
+ self.acc.update(gt_ids, trk_ids, iou_distance)
+
+ if rtn_events and iou_distance.size > 0 and hasattr(self.acc,
+ 'last_mot_events'):
+ events = self.acc.last_mot_events # only supported by https://github.com/longcw/py-motmetrics
+ else:
+ events = None
+ return events
+
+ def eval_file(self, filename):
+ self.reset_accumulator()
+
+ result_frame_dict = read_mot_results(filename, is_gt=False)
+ frames = sorted(list(set(result_frame_dict.keys())))
+ for frame_id in frames:
+ trk_objs = result_frame_dict.get(frame_id, [])
+ trk_tlwhs, trk_ids = unzip_objs(trk_objs)[:2]
+ self.eval_frame(frame_id, trk_tlwhs, trk_ids, rtn_events=False)
+
+ return self.acc
+
+ @staticmethod
+ def get_summary(accs,
+ names,
+ metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1',
+ 'precision', 'recall')):
+ import motmetrics as mm
+ mm.lap.default_solver = 'lap'
+ names = copy.deepcopy(names)
+ if metrics is None:
+ metrics = mm.metrics.motchallenge_metrics
+ metrics = copy.deepcopy(metrics)
+
+ mh = mm.metrics.create()
+ summary = mh.compute_many(
+ accs, metrics=metrics, names=names, generate_overall=True)
+ return summary
+
+ @staticmethod
+ def save_summary(summary, filename):
+ import pandas as pd
+ writer = pd.ExcelWriter(filename)
+ summary.to_excel(writer)
+ writer.save()
+
+
+class MOTMetric(Metric):
+ def __init__(self, save_summary=False):
+ self.save_summary = save_summary
+ self.MOTEvaluator = MOTEvaluator
+ self.result_root = None
+ self.reset()
+
+ def reset(self):
+ self.accs = []
+ self.seqs = []
+
+ def update(self, data_root, seq, data_type, result_root, result_filename):
+ evaluator = self.MOTEvaluator(data_root, seq, data_type)
+ self.accs.append(evaluator.eval_file(result_filename))
+ self.seqs.append(seq)
+ self.result_root = result_root
+
+ def accumulate(self):
+ import motmetrics as mm
+ import openpyxl
+ metrics = mm.metrics.motchallenge_metrics
+ mh = mm.metrics.create()
+ summary = self.MOTEvaluator.get_summary(self.accs, self.seqs, metrics)
+ self.strsummary = mm.io.render_summary(
+ summary,
+ formatters=mh.formatters,
+ namemap=mm.io.motchallenge_metric_names)
+ if self.save_summary:
+ self.MOTEvaluator.save_summary(
+ summary, os.path.join(self.result_root, 'summary.xlsx'))
+
+ def log(self):
+ print(self.strsummary)
+
+ def get_results(self):
+ return self.strsummary
+
+
+class JDEDetMetric(Metric):
+ # Note this detection AP metric is different from COCOMetric or VOCMetric,
+ # and the bboxes coordinates are not scaled to the original image
+ def __init__(self, overlap_thresh=0.5):
+ self.overlap_thresh = overlap_thresh
+ self.reset()
+
+ def reset(self):
+ self.AP_accum = np.zeros(1)
+ self.AP_accum_count = np.zeros(1)
+
+ def update(self, inputs, outputs):
+ bboxes = outputs['bbox'][:, 2:].numpy()
+ scores = outputs['bbox'][:, 1].numpy()
+ labels = outputs['bbox'][:, 0].numpy()
+ bbox_lengths = outputs['bbox_num'].numpy()
+ if bboxes.shape[0] == 1 and bboxes.sum() == 0.0:
+ return
+
+ gt_boxes = inputs['gt_bbox'].numpy()[0]
+ gt_labels = inputs['gt_class'].numpy()[0]
+ if gt_labels.shape[0] == 0:
+ return
+
+ correct = []
+ detected = []
+ for i in range(bboxes.shape[0]):
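+            # JDE detection is evaluated as a single class, so the predicted
+            # class id is always 0 (matching pred_cls=np.zeros_like(scores)
+            # in the AP computation below)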
+ obj_pred = 0
+ pred_bbox = bboxes[i].reshape(1, 4)
+ # Compute iou with target boxes
+ iou = bbox_iou_np_expand(pred_bbox, gt_boxes, x1y1x2y2=True)[0]
+ # Extract index of largest overlap
+ best_i = np.argmax(iou)
+ # If overlap exceeds threshold and classification is correct mark as correct
+ if iou[best_i] > self.overlap_thresh and obj_pred == gt_labels[
+ best_i] and best_i not in detected:
+ correct.append(1)
+ detected.append(best_i)
+ else:
+ correct.append(0)
+
+ # Compute Average Precision (AP) per class
+ target_cls = list(gt_labels.T[0])
+ AP, AP_class, R, P = ap_per_class(
+ tp=correct,
+ conf=scores,
+ pred_cls=np.zeros_like(scores),
+ target_cls=target_cls)
+ self.AP_accum_count += np.bincount(AP_class, minlength=1)
+ self.AP_accum += np.bincount(AP_class, minlength=1, weights=AP)
+
+ def accumulate(self):
+ logger.info("Accumulating evaluatation results...")
+ self.map_stat = self.AP_accum[0] / (self.AP_accum_count[0] + 1E-16)
+
+ def log(self):
+ map_stat = 100. * self.map_stat
+ logger.info("mAP({:.2f}) = {:.2f}%".format(self.overlap_thresh,
+ map_stat))
+
+ def get_results(self):
+ return self.map_stat
+
+
+"""
+Following code is borrow from https://github.com/xingyizhou/CenterTrack/blob/master/src/tools/eval_kitti_track/evaluate_tracking.py
+"""
+
+
+class tData:
+ """
+ Utility class to load data.
+ """
+ def __init__(self,frame=-1,obj_type="unset",truncation=-1,occlusion=-1,\
+ obs_angle=-10,x1=-1,y1=-1,x2=-1,y2=-1,w=-1,h=-1,l=-1,\
+ X=-1000,Y=-1000,Z=-1000,yaw=-10,score=-1000,track_id=-1):
+ """
+ Constructor, initializes the object given the parameters.
+ """
+ self.frame = frame
+ self.track_id = track_id
+ self.obj_type = obj_type
+ self.truncation = truncation
+ self.occlusion = occlusion
+ self.obs_angle = obs_angle
+ self.x1 = x1
+ self.y1 = y1
+ self.x2 = x2
+ self.y2 = y2
+ self.w = w
+ self.h = h
+ self.l = l
+ self.X = X
+ self.Y = Y
+ self.Z = Z
+ self.yaw = yaw
+ self.score = score
+ self.ignored = False
+ self.valid = False
+ self.tracker = -1
+
+ def __str__(self):
+ attrs = vars(self)
+ return '\n'.join("%s: %s" % item for item in attrs.items())
+
+
+class KITTIEvaluation(object):
+ """ KITTI tracking statistics (CLEAR MOT, id-switches, fragments, ML/PT/MT, precision/recall)
+ MOTA - Multi-object tracking accuracy in [0,100]
+ MOTP - Multi-object tracking precision in [0,100] (3D) / [td,100] (2D)
+ MOTAL - Multi-object tracking accuracy in [0,100] with log10(id-switches)
+
+ id-switches - number of id switches
+ fragments - number of fragmentations
+
+ MT, PT, ML - number of mostly tracked, partially tracked and mostly lost trajectories
+
+ recall - recall = percentage of detected targets
+ precision - precision = percentage of correctly detected targets
+ FAR - number of false alarms per frame
+ falsepositives - number of false positives (FP)
+ missed - number of missed targets (FN)
+ """
+ def __init__(self, result_path, gt_path, min_overlap=0.5, max_truncation = 0,\
+ min_height = 25, max_occlusion = 2, cls="car",\
+ n_frames=[], seqs=[], n_sequences=0):
+ # get number of sequences and
+ # get number of frames per sequence from test mapping
+ # (created while extracting the benchmark)
+ self.gt_path = os.path.join(gt_path, "../labels")
+ self.n_frames = n_frames
+ self.sequence_name = seqs
+ self.n_sequences = n_sequences
+
+ self.cls = cls # class to evaluate, i.e. pedestrian or car
+
+ self.result_path = result_path
+
+ # statistics and numbers for evaluation
+ self.n_gt = 0 # number of ground truth detections minus ignored false negatives and true positives
+ self.n_igt = 0 # number of ignored ground truth detections
+        # number of ground truth detections minus ignored false negatives
+        # and true positives PER SEQUENCE
+        self.n_gts = []
+        # number of ignored ground truth detections PER SEQUENCE
+        self.n_igts = []
+ self.n_gt_trajectories = 0
+ self.n_gt_seq = []
+ self.n_tr = 0 # number of tracker detections minus ignored tracker detections
+        # number of tracker detections minus ignored tracker detections PER SEQUENCE
+        self.n_trs = []
+ self.n_itr = 0 # number of ignored tracker detections
+ self.n_itrs = [] # number of ignored tracker detections PER SEQUENCE
+ self.n_igttr = 0 # number of ignored ground truth detections where the corresponding associated tracker detection is also ignored
+ self.n_tr_trajectories = 0
+ self.n_tr_seq = []
+ self.MOTA = 0
+ self.MOTP = 0
+ self.MOTAL = 0
+ self.MODA = 0
+ self.MODP = 0
+ self.MODP_t = []
+ self.recall = 0
+ self.precision = 0
+ self.F1 = 0
+ self.FAR = 0
+ self.total_cost = 0
+ self.itp = 0 # number of ignored true positives
+ self.itps = [] # number of ignored true positives PER SEQUENCE
+ self.tp = 0 # number of true positives including ignored true positives!
+        # number of true positives including ignored true positives PER SEQUENCE
+        self.tps = []
+ self.fn = 0 # number of false negatives WITHOUT ignored false negatives
+        # number of false negatives WITHOUT ignored false negatives PER SEQUENCE
+        self.fns = []
+ self.ifn = 0 # number of ignored false negatives
+ self.ifns = [] # number of ignored false negatives PER SEQUENCE
+ self.fp = 0 # number of false positives
+ # a bit tricky, the number of ignored false negatives and ignored true positives
+ # is subtracted, but if both tracker detection and ground truth detection
+ # are ignored this number is added again to avoid double counting
+ self.fps = [] # above PER SEQUENCE
+ self.mme = 0
+ self.fragments = 0
+ self.id_switches = 0
+ self.MT = 0
+ self.PT = 0
+ self.ML = 0
+
+ self.min_overlap = min_overlap # minimum bounding box overlap for 3rd party metrics
+ self.max_truncation = max_truncation # maximum truncation of an object for evaluation
+ self.max_occlusion = max_occlusion # maximum occlusion of an object for evaluation
+ self.min_height = min_height # minimum height of an object for evaluation
+ self.n_sample_points = 500
+
+ # this should be enough to hold all groundtruth trajectories
+ # is expanded if necessary and reduced in any case
+ self.gt_trajectories = [[] for x in range(self.n_sequences)]
+ self.ign_trajectories = [[] for x in range(self.n_sequences)]
+
+ def loadGroundtruth(self):
+ try:
+ self._loadData(self.gt_path, cls=self.cls, loading_groundtruth=True)
+ except IOError:
+ return False
+ return True
+
+ def loadTracker(self):
+ try:
+ if not self._loadData(
+ self.result_path, cls=self.cls, loading_groundtruth=False):
+ return False
+ except IOError:
+ return False
+ return True
+
+ def _loadData(self,
+ root_dir,
+ cls,
+ min_score=-1000,
+ loading_groundtruth=False):
+ """
+ Generic loader for ground truth and tracking data.
+ Use loadGroundtruth() or loadTracker() to load this data.
+ Loads detections in KITTI format from textfiles.
+ """
+ # construct objectDetections object to hold detection data
+ t_data = tData()
+ data = []
+ eval_2d = True
+ eval_3d = True
+
+ seq_data = []
+ n_trajectories = 0
+ n_trajectories_seq = []
+ for seq, s_name in enumerate(self.sequence_name):
+ i = 0
+ filename = os.path.join(root_dir, "%s.txt" % s_name)
+ f = open(filename, "r")
+
+ f_data = [
+ [] for x in range(self.n_frames[seq])
+ ] # current set has only 1059 entries, sufficient length is checked anyway
+ ids = []
+ n_in_seq = 0
+ id_frame_cache = []
+ for line in f:
+ # KITTI tracking benchmark data format:
+ # (frame,tracklet_id,objectType,truncation,occlusion,alpha,x1,y1,x2,y2,h,w,l,X,Y,Z,ry)
+ line = line.strip()
+ fields = line.split(" ")
+ # classes that should be loaded (ignored neighboring classes)
+ if "car" in cls.lower():
+ classes = ["car", "van"]
+ elif "pedestrian" in cls.lower():
+ classes = ["pedestrian", "person_sitting"]
+ else:
+ classes = [cls.lower()]
+ classes += ["dontcare"]
+ if not any([s for s in classes if s in fields[2].lower()]):
+ continue
+ # get fields from table
+ t_data.frame = int(float(fields[0])) # frame
+ t_data.track_id = int(float(fields[1])) # id
+ t_data.obj_type = fields[
+ 2].lower() # object type [car, pedestrian, cyclist, ...]
+ t_data.truncation = int(
+ float(fields[3])) # truncation [-1,0,1,2]
+ t_data.occlusion = int(
+ float(fields[4])) # occlusion [-1,0,1,2]
+ t_data.obs_angle = float(fields[5]) # observation angle [rad]
+ t_data.x1 = float(fields[6]) # left [px]
+ t_data.y1 = float(fields[7]) # top [px]
+ t_data.x2 = float(fields[8]) # right [px]
+ t_data.y2 = float(fields[9]) # bottom [px]
+ t_data.h = float(fields[10]) # height [m]
+ t_data.w = float(fields[11]) # width [m]
+ t_data.l = float(fields[12]) # length [m]
+ t_data.X = float(fields[13]) # X [m]
+ t_data.Y = float(fields[14]) # Y [m]
+ t_data.Z = float(fields[15]) # Z [m]
+ t_data.yaw = float(fields[16]) # yaw angle [rad]
+ if not loading_groundtruth:
+ if len(fields) == 17:
+ t_data.score = -1
+ elif len(fields) == 18:
+ t_data.score = float(fields[17]) # detection score
+ else:
+ logger.info("file is not in KITTI format")
+ return
+
+ # do not consider objects marked as invalid
+                if t_data.track_id == -1 and t_data.obj_type != "dontcare":
+ continue
+
+ idx = t_data.frame
+ # check if length for frame data is sufficient
+ if idx >= len(f_data):
+ print("extend f_data", idx, len(f_data))
+                    f_data += [[] for x in range(max(500, idx - len(f_data) + 1))]
+ try:
+ id_frame = (t_data.frame, t_data.track_id)
+ if id_frame in id_frame_cache and not loading_groundtruth:
+ logger.info(
+ "track ids are not unique for sequence %d: frame %d"
+ % (seq, t_data.frame))
+ logger.info(
+ "track id %d occured at least twice for this frame"
+ % t_data.track_id)
+ logger.info("Exiting...")
+                        #continue  # uncommenting this would allow evaluating non-unique result files
+ return False
+ id_frame_cache.append(id_frame)
+ f_data[t_data.frame].append(copy.copy(t_data))
+ except:
+ print(len(f_data), idx)
+ raise
+
+ if t_data.track_id not in ids and t_data.obj_type != "dontcare":
+ ids.append(t_data.track_id)
+ n_trajectories += 1
+ n_in_seq += 1
+
+ # check if uploaded data provides information for 2D and 3D evaluation
+ if not loading_groundtruth and eval_2d is True and (
+ t_data.x1 == -1 or t_data.x2 == -1 or t_data.y1 == -1 or
+ t_data.y2 == -1):
+ eval_2d = False
+ if not loading_groundtruth and eval_3d is True and (
+ t_data.X == -1000 or t_data.Y == -1000 or
+ t_data.Z == -1000):
+ eval_3d = False
+
+ # only add existing frames
+ n_trajectories_seq.append(n_in_seq)
+ seq_data.append(f_data)
+ f.close()
+
+ if not loading_groundtruth:
+ self.tracker = seq_data
+ self.n_tr_trajectories = n_trajectories
+ self.eval_2d = eval_2d
+ self.eval_3d = eval_3d
+ self.n_tr_seq = n_trajectories_seq
+ if self.n_tr_trajectories == 0:
+ return False
+ else:
+ # split ground truth and DontCare areas
+ self.dcareas = []
+ self.groundtruth = []
+ for seq_idx in range(len(seq_data)):
+ seq_gt = seq_data[seq_idx]
+ s_g, s_dc = [], []
+ for f in range(len(seq_gt)):
+ all_gt = seq_gt[f]
+ g, dc = [], []
+ for gg in all_gt:
+ if gg.obj_type == "dontcare":
+ dc.append(gg)
+ else:
+ g.append(gg)
+ s_g.append(g)
+ s_dc.append(dc)
+ self.dcareas.append(s_dc)
+ self.groundtruth.append(s_g)
+ self.n_gt_seq = n_trajectories_seq
+ self.n_gt_trajectories = n_trajectories
+ return True
+
+ def boxoverlap(self, a, b, criterion="union"):
+ """
+ boxoverlap computes intersection over union for bbox a and b in KITTI format.
+        If the criterion is 'union', overlap = (a inter b) / (a union b).
+ If the criterion is 'a', overlap = (a inter b) / a, where b should be a dontcare area.
+ """
+ x1 = max(a.x1, b.x1)
+ y1 = max(a.y1, b.y1)
+ x2 = min(a.x2, b.x2)
+ y2 = min(a.y2, b.y2)
+
+ w = x2 - x1
+ h = y2 - y1
+
+ if w <= 0. or h <= 0.:
+ return 0.
+ inter = w * h
+ aarea = (a.x2 - a.x1) * (a.y2 - a.y1)
+ barea = (b.x2 - b.x1) * (b.y2 - b.y1)
+ # intersection over union overlap
+ if criterion.lower() == "union":
+ o = inter / float(aarea + barea - inter)
+ elif criterion.lower() == "a":
+ o = float(inter) / float(aarea)
+ else:
+ raise TypeError("Unkown type for criterion")
+ return o
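+
+    # Worked example (illustrative): for boxes a = (0, 0, 10, 10) and
+    # b = (5, 5, 15, 15), the intersection is 5 * 5 = 25 and the union is
+    # 100 + 100 - 25 = 175, so boxoverlap(a, b, "union") = 25 / 175, about 0.143.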
+
+ def compute3rdPartyMetrics(self):
+ """
+ Computes the metrics defined in
+ - Stiefelhagen 2008: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics
+ MOTA, MOTAL, MOTP
+ - Nevatia 2008: Global Data Association for Multi-Object Tracking Using Network Flows
+ MT/PT/ML
+ """
+ # construct Munkres object for Hungarian Method association
+ hm = Munkres()
+ max_cost = 1e9
+
+ # go through all frames and associate ground truth and tracker results
+ # groundtruth and tracker contain lists for every single frame containing lists of KITTI format detections
+ fr, ids = 0, 0
+ for seq_idx in range(len(self.groundtruth)):
+ seq_gt = self.groundtruth[seq_idx]
+ seq_dc = self.dcareas[seq_idx] # don't care areas
+ seq_tracker = self.tracker[seq_idx]
+ seq_trajectories = defaultdict(list)
+ seq_ignored = defaultdict(list)
+
+ # statistics over the current sequence, check the corresponding
+ # variable comments in __init__ to get their meaning
+ seqtp = 0
+ seqitp = 0
+ seqfn = 0
+ seqifn = 0
+ seqfp = 0
+ seqigt = 0
+ seqitr = 0
+
+ last_ids = [[], []]
+ n_gts = 0
+ n_trs = 0
+
+ for f in range(len(seq_gt)):
+ g = seq_gt[f]
+ dc = seq_dc[f]
+
+ t = seq_tracker[f]
+ # counting total number of ground truth and tracker objects
+ self.n_gt += len(g)
+ self.n_tr += len(t)
+
+ n_gts += len(g)
+ n_trs += len(t)
+
+ # use hungarian method to associate, using boxoverlap 0..1 as cost
+ # build cost matrix
+ cost_matrix = []
+ this_ids = [[], []]
+ for gg in g:
+ # save current ids
+ this_ids[0].append(gg.track_id)
+ this_ids[1].append(-1)
+ gg.tracker = -1
+ gg.id_switch = 0
+ gg.fragmentation = 0
+ cost_row = []
+ for tt in t:
+                        # overlap == 1 means cost == 0
+ c = 1 - self.boxoverlap(gg, tt)
+ # gating for boxoverlap
+ if c <= self.min_overlap:
+ cost_row.append(c)
+ else:
+ cost_row.append(max_cost) # = 1e9
+ cost_matrix.append(cost_row)
+ # all ground truth trajectories are initially not associated
+ # extend groundtruth trajectories lists (merge lists)
+ seq_trajectories[gg.track_id].append(-1)
+ seq_ignored[gg.track_id].append(False)
+
+                if len(g) == 0:
+ cost_matrix = [[]]
+ # associate
+ association_matrix = hm.compute(cost_matrix)
+
+ # tmp variables for sanity checks and MODP computation
+ tmptp = 0
+ tmpfp = 0
+ tmpfn = 0
+ tmpc = 0 # this will sum up the overlaps for all true positives
+ tmpcs = [0] * len(
+ g) # this will save the overlaps for all true positives
+ # the reason is that some true positives might be ignored
+                # later such that the corresponding overlaps can
+ # be subtracted from tmpc for MODP computation
+
+ # mapping for tracker ids and ground truth ids
+ for row, col in association_matrix:
+ # apply gating on boxoverlap
+ c = cost_matrix[row][col]
+ if c < max_cost:
+ g[row].tracker = t[col].track_id
+ this_ids[1][row] = t[col].track_id
+ t[col].valid = True
+ g[row].distance = c
+ self.total_cost += 1 - c
+ tmpc += 1 - c
+ tmpcs[row] = 1 - c
+ seq_trajectories[g[row].track_id][-1] = t[col].track_id
+
+ # true positives are only valid associations
+ self.tp += 1
+ tmptp += 1
+ else:
+ g[row].tracker = -1
+ self.fn += 1
+ tmpfn += 1
+
+ # associate tracker and DontCare areas
+ # ignore tracker in neighboring classes
+ nignoredtracker = 0 # number of ignored tracker detections
+ ignoredtrackers = dict() # will associate the track_id with -1
+ # if it is not ignored and 1 if it is
+ # ignored;
+ # this is used to avoid double counting ignored
+ # cases, see the next loop
+
+ for tt in t:
+ ignoredtrackers[tt.track_id] = -1
+ # ignore detection if it belongs to a neighboring class or is
+ # smaller or equal to the minimum height
+
+ tt_height = abs(tt.y1 - tt.y2)
+ if ((self.cls == "car" and tt.obj_type == "van") or
+ (self.cls == "pedestrian" and
+ tt.obj_type == "person_sitting") or
+ tt_height <= self.min_height) and not tt.valid:
+ nignoredtracker += 1
+ tt.ignored = True
+ ignoredtrackers[tt.track_id] = 1
+ continue
+ for d in dc:
+ overlap = self.boxoverlap(tt, d, "a")
+ if overlap > 0.5 and not tt.valid:
+ tt.ignored = True
+ nignoredtracker += 1
+ ignoredtrackers[tt.track_id] = 1
+ break
+
+ # check for ignored FN/TP (truncation or neighboring object class)
+ ignoredfn = 0 # the number of ignored false negatives
+ nignoredtp = 0 # the number of ignored true positives
+ nignoredpairs = 0 # the number of ignored pairs, i.e. a true positive
+ # which is ignored but where the associated tracker
+ # detection has already been ignored
+
+ gi = 0
+ for gg in g:
+ if gg.tracker < 0:
+                        if (gg.occlusion > self.max_occlusion or
+                                gg.truncation > self.max_truncation or
+                                (self.cls == "car" and gg.obj_type == "van") or
+                                (self.cls == "pedestrian" and
+                                 gg.obj_type == "person_sitting")):
+ seq_ignored[gg.track_id][-1] = True
+ gg.ignored = True
+ ignoredfn += 1
+
+ elif gg.tracker >= 0:
+                        if (gg.occlusion > self.max_occlusion or
+                                gg.truncation > self.max_truncation or
+                                (self.cls == "car" and gg.obj_type == "van") or
+                                (self.cls == "pedestrian" and
+                                 gg.obj_type == "person_sitting")):
+
+ seq_ignored[gg.track_id][-1] = True
+ gg.ignored = True
+ nignoredtp += 1
+
+ # if the associated tracker detection is already ignored,
+ # we want to avoid double counting ignored detections
+ if ignoredtrackers[gg.tracker] > 0:
+ nignoredpairs += 1
+
+ # for computing MODP, the overlaps from ignored detections
+ # are subtracted
+ tmpc -= tmpcs[gi]
+ gi += 1
+
+                # the below might be confusing; check the comments in __init__
+ # to see what the individual statistics represent
+
+ # correct TP by number of ignored TP due to truncation
+ # ignored TP are shown as tracked in visualization
+ tmptp -= nignoredtp
+
+ # count the number of ignored true positives
+ self.itp += nignoredtp
+
+ # adjust the number of ground truth objects considered
+ self.n_gt -= (ignoredfn + nignoredtp)
+
+ # count the number of ignored ground truth objects
+ self.n_igt += ignoredfn + nignoredtp
+
+ # count the number of ignored tracker objects
+ self.n_itr += nignoredtracker
+
+ # count the number of ignored pairs, i.e. associated tracker and
+ # ground truth objects that are both ignored
+ self.n_igttr += nignoredpairs
+
+                # false negatives = associated gt bboxes exceeding association threshold + non-associated gt bboxes
+ tmpfn += len(g) - len(association_matrix) - ignoredfn
+ self.fn += len(g) - len(association_matrix) - ignoredfn
+ self.ifn += ignoredfn
+
+ # false positives = tracker bboxes - associated tracker bboxes
+ # mismatches (mme_t)
+                tmpfp += len(t) - tmptp - nignoredtracker - nignoredtp + nignoredpairs
+                self.fp += len(t) - tmptp - nignoredtracker - nignoredtp + nignoredpairs
+
+ # update sequence data
+ seqtp += tmptp
+ seqitp += nignoredtp
+ seqfp += tmpfp
+ seqfn += tmpfn
+ seqifn += ignoredfn
+ seqigt += ignoredfn + nignoredtp
+ seqitr += nignoredtracker
+
+ # sanity checks
+                # - the number of true positives minus ignored true positives
+ # should be greater or equal to 0
+ # - the number of false negatives should be greater or equal to 0
+ # - the number of false positives needs to be greater or equal to 0
+ # otherwise ignored detections might be counted double
+ # - the number of counted true positives (plus ignored ones)
+ # and the number of counted false negatives (plus ignored ones)
+ # should match the total number of ground truth objects
+ # - the number of counted true positives (plus ignored ones)
+ # and the number of counted false positives
+ # plus the number of ignored tracker detections should
+ # match the total number of tracker detections; note that
+ # nignoredpairs is subtracted here to avoid double counting
+                #   of ignored detections in nignoredtp and nignoredtracker
+ if tmptp < 0:
+ print(tmptp, nignoredtp)
+ raise NameError("Something went wrong! TP is negative")
+ if tmpfn < 0:
+ print(tmpfn,
+ len(g),
+ len(association_matrix), ignoredfn, nignoredpairs)
+ raise NameError("Something went wrong! FN is negative")
+ if tmpfp < 0:
+ print(tmpfp,
+ len(t), tmptp, nignoredtracker, nignoredtp,
+ nignoredpairs)
+ raise NameError("Something went wrong! FP is negative")
+                if tmptp + tmpfn != len(g) - ignoredfn - nignoredtp:
+ print("seqidx", seq_idx)
+ print("frame ", f)
+ print("TP ", tmptp)
+ print("FN ", tmpfn)
+ print("FP ", tmpfp)
+ print("nGT ", len(g))
+ print("nAss ", len(association_matrix))
+ print("ign GT", ignoredfn)
+ print("ign TP", nignoredtp)
+ raise NameError(
+ "Something went wrong! nGroundtruth is not TP+FN")
+                if (tmptp + tmpfp + nignoredtp + nignoredtracker -
+                        nignoredpairs) != len(t):
+ print(seq_idx, f, len(t), tmptp, tmpfp)
+ print(len(association_matrix), association_matrix)
+ raise NameError(
+ "Something went wrong! nTracker is not TP+FP")
+
+ # check for id switches or fragmentations
+ for i, tt in enumerate(this_ids[0]):
+ if tt in last_ids[0]:
+ idx = last_ids[0].index(tt)
+ tid = this_ids[1][i]
+ lid = last_ids[1][idx]
+ if tid != lid and lid != -1 and tid != -1:
+ if g[i].truncation < self.max_truncation:
+ g[i].id_switch = 1
+ ids += 1
+ if tid != lid and lid != -1:
+ if g[i].truncation < self.max_truncation:
+ g[i].fragmentation = 1
+ fr += 1
+
+ # save current index
+ last_ids = this_ids
+                # compute MODP_t
+ MODP_t = 1
+ if tmptp != 0:
+ MODP_t = tmpc / float(tmptp)
+ self.MODP_t.append(MODP_t)
+
+ # remove empty lists for current gt trajectories
+ self.gt_trajectories[seq_idx] = seq_trajectories
+ self.ign_trajectories[seq_idx] = seq_ignored
+
+ # gather statistics for "per sequence" statistics.
+ self.n_gts.append(n_gts)
+ self.n_trs.append(n_trs)
+ self.tps.append(seqtp)
+ self.itps.append(seqitp)
+ self.fps.append(seqfp)
+ self.fns.append(seqfn)
+ self.ifns.append(seqifn)
+ self.n_igts.append(seqigt)
+ self.n_itrs.append(seqitr)
+
+ # compute MT/PT/ML, fragments, idswitches for all groundtruth trajectories
+ n_ignored_tr_total = 0
+ for seq_idx, (
+ seq_trajectories, seq_ignored
+ ) in enumerate(zip(self.gt_trajectories, self.ign_trajectories)):
+ if len(seq_trajectories) == 0:
+ continue
+ tmpMT, tmpML, tmpPT, tmpId_switches, tmpFragments = [0] * 5
+ n_ignored_tr = 0
+ for g, ign_g in zip(seq_trajectories.values(),
+ seq_ignored.values()):
+ # all frames of this gt trajectory are ignored
+ if all(ign_g):
+ n_ignored_tr += 1
+ n_ignored_tr_total += 1
+ continue
+ # all frames of this gt trajectory are not assigned to any detections
+ if all([this == -1 for this in g]):
+ tmpML += 1
+ self.ML += 1
+ continue
+ # compute tracked frames in trajectory
+ last_id = g[0]
+                # the first detection (necessarily in gt_trajectories) counts as tracked
+ tracked = 1 if g[0] >= 0 else 0
+ lgt = 0 if ign_g[0] else 1
+ for f in range(1, len(g)):
+ if ign_g[f]:
+ last_id = -1
+ continue
+ lgt += 1
+ if last_id != g[f] and last_id != -1 and g[f] != -1 and g[
+ f - 1] != -1:
+ tmpId_switches += 1
+ self.id_switches += 1
+ if f < len(g) - 1 and g[f - 1] != g[
+ f] and last_id != -1 and g[f] != -1 and g[f +
+ 1] != -1:
+ tmpFragments += 1
+ self.fragments += 1
+ if g[f] != -1:
+ tracked += 1
+ last_id = g[f]
+ # handle last frame; tracked state is handled in for loop (g[f]!=-1)
+ if len(g) > 1 and g[f - 1] != g[f] and last_id != -1 and g[
+ f] != -1 and not ign_g[f]:
+ tmpFragments += 1
+ self.fragments += 1
+
+ # compute MT/PT/ML
+ tracking_ratio = tracked / float(len(g) - sum(ign_g))
+ if tracking_ratio > 0.8:
+ tmpMT += 1
+ self.MT += 1
+ elif tracking_ratio < 0.2:
+ tmpML += 1
+ self.ML += 1
+ else: # 0.2 <= tracking_ratio <= 0.8
+ tmpPT += 1
+ self.PT += 1
+
+ if (self.n_gt_trajectories - n_ignored_tr_total) == 0:
+ self.MT = 0.
+ self.PT = 0.
+ self.ML = 0.
+ else:
+ self.MT /= float(self.n_gt_trajectories - n_ignored_tr_total)
+ self.PT /= float(self.n_gt_trajectories - n_ignored_tr_total)
+ self.ML /= float(self.n_gt_trajectories - n_ignored_tr_total)
+
+ # precision/recall etc.
+ if (self.fp + self.tp) == 0 or (self.tp + self.fn) == 0:
+ self.recall = 0.
+ self.precision = 0.
+ else:
+ self.recall = self.tp / float(self.tp + self.fn)
+ self.precision = self.tp / float(self.fp + self.tp)
+ if (self.recall + self.precision) == 0:
+ self.F1 = 0.
+ else:
+ self.F1 = 2. * (self.precision * self.recall) / (
+ self.precision + self.recall)
+ if sum(self.n_frames) == 0:
+ self.FAR = "n/a"
+ else:
+ self.FAR = self.fp / float(sum(self.n_frames))
+
+ # compute CLEARMOT
+ if self.n_gt == 0:
+ self.MOTA = -float("inf")
+ self.MODA = -float("inf")
+ else:
+ self.MOTA = 1 - (self.fn + self.fp + self.id_switches
+ ) / float(self.n_gt)
+ self.MODA = 1 - (self.fn + self.fp) / float(self.n_gt)
+ if self.tp == 0:
+ self.MOTP = float("inf")
+ else:
+ self.MOTP = self.total_cost / float(self.tp)
+ if self.n_gt != 0:
+ if self.id_switches == 0:
+ self.MOTAL = 1 - (self.fn + self.fp + self.id_switches
+ ) / float(self.n_gt)
+ else:
+ self.MOTAL = 1 - (self.fn + self.fp +
+ math.log10(self.id_switches)
+ ) / float(self.n_gt)
+ else:
+ self.MOTAL = -float("inf")
+ if sum(self.n_frames) == 0:
+ self.MODP = "n/a"
+ else:
+ self.MODP = sum(self.MODP_t) / float(sum(self.n_frames))
+ return True
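+
+    # Illustrative check of the CLEAR MOT formulas above (assumed numbers):
+    # with n_gt = 100, fn = 10, fp = 5 and id_switches = 2,
+    # MOTA = 1 - (10 + 5 + 2) / 100 = 0.83 and MODA = 1 - (10 + 5) / 100 = 0.85.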
+
+ def createSummary(self):
+ summary = ""
+ summary += "tracking evaluation summary".center(80, "=") + "\n"
+ summary += self.printEntry("Multiple Object Tracking Accuracy (MOTA)",
+ self.MOTA) + "\n"
+ summary += self.printEntry("Multiple Object Tracking Precision (MOTP)",
+ self.MOTP) + "\n"
+ summary += self.printEntry("Multiple Object Tracking Accuracy (MOTAL)",
+ self.MOTAL) + "\n"
+ summary += self.printEntry("Multiple Object Detection Accuracy (MODA)",
+ self.MODA) + "\n"
+ summary += self.printEntry("Multiple Object Detection Precision (MODP)",
+ self.MODP) + "\n"
+ summary += "\n"
+ summary += self.printEntry("Recall", self.recall) + "\n"
+ summary += self.printEntry("Precision", self.precision) + "\n"
+ summary += self.printEntry("F1", self.F1) + "\n"
+ summary += self.printEntry("False Alarm Rate", self.FAR) + "\n"
+ summary += "\n"
+ summary += self.printEntry("Mostly Tracked", self.MT) + "\n"
+ summary += self.printEntry("Partly Tracked", self.PT) + "\n"
+ summary += self.printEntry("Mostly Lost", self.ML) + "\n"
+ summary += "\n"
+ summary += self.printEntry("True Positives", self.tp) + "\n"
+ #summary += self.printEntry("True Positives per Sequence", self.tps) + "\n"
+ summary += self.printEntry("Ignored True Positives", self.itp) + "\n"
+ #summary += self.printEntry("Ignored True Positives per Sequence", self.itps) + "\n"
+
+ summary += self.printEntry("False Positives", self.fp) + "\n"
+ #summary += self.printEntry("False Positives per Sequence", self.fps) + "\n"
+ summary += self.printEntry("False Negatives", self.fn) + "\n"
+ #summary += self.printEntry("False Negatives per Sequence", self.fns) + "\n"
+ summary += self.printEntry("ID-switches", self.id_switches) + "\n"
+ self.fp = self.fp / self.n_gt
+ self.fn = self.fn / self.n_gt
+ self.id_switches = self.id_switches / self.n_gt
+ summary += self.printEntry("False Positives Ratio", self.fp) + "\n"
+ #summary += self.printEntry("False Positives per Sequence", self.fps) + "\n"
+ summary += self.printEntry("False Negatives Ratio", self.fn) + "\n"
+ #summary += self.printEntry("False Negatives per Sequence", self.fns) + "\n"
+ summary += self.printEntry("Ignored False Negatives Ratio",
+ self.ifn) + "\n"
+
+ #summary += self.printEntry("Ignored False Negatives per Sequence", self.ifns) + "\n"
+ summary += self.printEntry("Missed Targets", self.fn) + "\n"
+ summary += self.printEntry("ID-switches", self.id_switches) + "\n"
+ summary += self.printEntry("Fragmentations", self.fragments) + "\n"
+ summary += "\n"
+ summary += self.printEntry("Ground Truth Objects (Total)", self.n_gt +
+ self.n_igt) + "\n"
+ #summary += self.printEntry("Ground Truth Objects (Total) per Sequence", self.n_gts) + "\n"
+ summary += self.printEntry("Ignored Ground Truth Objects",
+ self.n_igt) + "\n"
+ #summary += self.printEntry("Ignored Ground Truth Objects per Sequence", self.n_igts) + "\n"
+ summary += self.printEntry("Ground Truth Trajectories",
+ self.n_gt_trajectories) + "\n"
+ summary += "\n"
+ summary += self.printEntry("Tracker Objects (Total)", self.n_tr) + "\n"
+ #summary += self.printEntry("Tracker Objects (Total) per Sequence", self.n_trs) + "\n"
+ summary += self.printEntry("Ignored Tracker Objects", self.n_itr) + "\n"
+ #summary += self.printEntry("Ignored Tracker Objects per Sequence", self.n_itrs) + "\n"
+ summary += self.printEntry("Tracker Trajectories",
+ self.n_tr_trajectories) + "\n"
+ #summary += "\n"
+ #summary += self.printEntry("Ignored Tracker Objects with Associated Ignored Ground Truth Objects", self.n_igttr) + "\n"
+ summary += "=" * 80
+ return summary
+
+ def printEntry(self, key, val, width=(70, 10)):
+ """
+ Pretty print an entry in a table fashion.
+ """
+ s_out = key.ljust(width[0])
+ if type(val) == int:
+ s = "%%%dd" % width[1]
+ s_out += s % val
+ elif type(val) == float:
+ s = "%%%df" % (width[1])
+ s_out += s % val
+ else:
+ s_out += ("%s" % val).rjust(width[1])
+ return s_out
+
+ def saveToStats(self, save_summary):
+ """
+        Save the statistics in a whitespace-separated file.
+ """
+ summary = self.createSummary()
+ if save_summary:
+ filename = os.path.join(self.result_path,
+ "summary_%s.txt" % self.cls)
+ dump = open(filename, "w+")
+ dump.write(summary)
+ dump.close()
+ return summary
+
+
+class KITTIMOTMetric(Metric):
+ def __init__(self, save_summary=True):
+ self.save_summary = save_summary
+ self.MOTEvaluator = KITTIEvaluation
+ self.result_root = None
+ self.reset()
+
+ def reset(self):
+ self.seqs = []
+ self.n_sequences = 0
+ self.n_frames = []
+ self.strsummary = ''
+
+ def update(self, data_root, seq, data_type, result_root, result_filename):
+ assert data_type == 'kitti', "data_type should 'kitti'"
+ self.result_root = result_root
+ self.gt_path = data_root
+ gt_path = '{}/../labels/{}.txt'.format(data_root, seq)
+ gt = open(gt_path, "r")
+ max_frame = 0
+ for line in gt:
+ line = line.strip()
+ line_list = line.split(" ")
+ if int(line_list[0]) > max_frame:
+ max_frame = int(line_list[0])
+ rs = open(result_filename, "r")
+ for line in rs:
+ line = line.strip()
+ line_list = line.split(" ")
+ if int(line_list[0]) > max_frame:
+ max_frame = int(line_list[0])
+ gt.close()
+ rs.close()
+ self.n_frames.append(max_frame + 1)
+ self.seqs.append(seq)
+ self.n_sequences += 1
+
+ def accumulate(self):
+ logger.info("Processing Result for KITTI Tracking Benchmark")
+ e = self.MOTEvaluator(result_path=self.result_root, gt_path=self.gt_path,\
+ n_frames=self.n_frames, seqs=self.seqs, n_sequences=self.n_sequences)
+ try:
+ if not e.loadTracker():
+ return
+ logger.info("Loading Results - Success")
+ logger.info("Evaluate Object Class: %s" % c.upper())
+ except:
+ logger.info("Caught exception while loading result data.")
+ if not e.loadGroundtruth():
+ raise ValueError("Ground truth not found.")
+ logger.info("Loading Groundtruth - Success")
+ # sanity checks
+        if len(e.groundtruth) != len(e.tracker):
+ logger.info(
+ "The uploaded data does not provide results for every sequence.")
+ return False
+ logger.info("Loaded %d Sequences." % len(e.groundtruth))
+ logger.info("Start Evaluation...")
+
+ if e.compute3rdPartyMetrics():
+ self.strsummary = e.saveToStats(self.save_summary)
+ else:
+ logger.info(
+ "There seem to be no true positives or false positives at all in the submitted data."
+ )
+
+ def log(self):
+ print(self.strsummary)
+
+ def get_results(self):
+ return self.strsummary
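+
+
+# Minimal usage sketch (hypothetical paths, shown for illustration only):
+# evaluate tracking results sequence by sequence, then accumulate and print.
+#
+#   metric = KITTIMOTMetric(save_summary=True)
+#   for seq in ['0000', '0001']:
+#       metric.update('data/kitti/images', seq, 'kitti', 'output/results',
+#                     'output/results/{}.txt'.format(seq))
+#   metric.accumulate()
+#   metric.log()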
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/munkres.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/munkres.py
new file mode 100644
index 000000000..fbd4a92d2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/munkres.py
@@ -0,0 +1,428 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is borrowed from https://github.com/xingyizhou/CenterTrack/blob/master/src/tools/eval_kitti_track/munkres.py
+"""
+
+import copy
+import sys
+
+__all__ = ['Munkres', 'make_cost_matrix']
+
+
+class Munkres:
+ """
+ Calculate the Munkres solution to the classical assignment problem.
+ See the module documentation for usage.
+ """
+
+ def __init__(self):
+ """Create a new instance"""
+ self.C = None
+ self.row_covered = []
+ self.col_covered = []
+ self.n = 0
+ self.Z0_r = 0
+ self.Z0_c = 0
+ self.marked = None
+ self.path = None
+
+ def make_cost_matrix(profit_matrix, inversion_function):
+ """
+ **DEPRECATED**
+
+ Please use the module function ``make_cost_matrix()``.
+ """
+ import munkres
+ return munkres.make_cost_matrix(profit_matrix, inversion_function)
+
+ make_cost_matrix = staticmethod(make_cost_matrix)
+
+ def pad_matrix(self, matrix, pad_value=0):
+ """
+ Pad a possibly non-square matrix to make it square.
+
+ :Parameters:
+ matrix : list of lists
+ matrix to pad
+
+ pad_value : int
+ value to use to pad the matrix
+
+ :rtype: list of lists
+ :return: a new, possibly padded, matrix
+ """
+ max_columns = 0
+ total_rows = len(matrix)
+
+ for row in matrix:
+ max_columns = max(max_columns, len(row))
+
+ total_rows = max(max_columns, total_rows)
+
+ new_matrix = []
+ for row in matrix:
+ row_len = len(row)
+ new_row = row[:]
+ if total_rows > row_len:
+ # Row too short. Pad it.
+                new_row += [pad_value] * (total_rows - row_len)
+ new_matrix += [new_row]
+
+ while len(new_matrix) < total_rows:
+            new_matrix += [[pad_value] * total_rows]
+
+ return new_matrix
+
+ def compute(self, cost_matrix):
+ """
+ Compute the indexes for the lowest-cost pairings between rows and
+ columns in the database. Returns a list of (row, column) tuples
+ that can be used to traverse the matrix.
+
+ :Parameters:
+ cost_matrix : list of lists
+ The cost matrix. If this cost matrix is not square, it
+ will be padded with zeros, via a call to ``pad_matrix()``.
+ (This method does *not* modify the caller's matrix. It
+ operates on a copy of the matrix.)
+
+ **WARNING**: This code handles square and rectangular
+ matrices. It does *not* handle irregular matrices.
+
+ :rtype: list
+ :return: A list of ``(row, column)`` tuples that describe the lowest
+ cost path through the matrix
+
+ """
+ self.C = self.pad_matrix(cost_matrix)
+ self.n = len(self.C)
+ self.original_length = len(cost_matrix)
+ self.original_width = len(cost_matrix[0])
+ self.row_covered = [False for i in range(self.n)]
+ self.col_covered = [False for i in range(self.n)]
+ self.Z0_r = 0
+ self.Z0_c = 0
+ self.path = self.__make_matrix(self.n * 2, 0)
+ self.marked = self.__make_matrix(self.n, 0)
+
+ done = False
+ step = 1
+
+ steps = {
+ 1: self.__step1,
+ 2: self.__step2,
+ 3: self.__step3,
+ 4: self.__step4,
+ 5: self.__step5,
+ 6: self.__step6
+ }
+
+ while not done:
+ try:
+ func = steps[step]
+ step = func()
+ except KeyError:
+ done = True
+
+ # Look for the starred columns
+ results = []
+ for i in range(self.original_length):
+ for j in range(self.original_width):
+ if self.marked[i][j] == 1:
+ results += [(i, j)]
+
+ return results
+
+ def __copy_matrix(self, matrix):
+ """Return an exact copy of the supplied matrix"""
+ return copy.deepcopy(matrix)
+
+ def __make_matrix(self, n, val):
+ """Create an *n*x*n* matrix, populating it with the specific value."""
+ matrix = []
+ for i in range(n):
+ matrix += [[val for j in range(n)]]
+ return matrix
+
+ def __step1(self):
+ """
+ For each row of the matrix, find the smallest element and
+ subtract it from every element in its row. Go to Step 2.
+ """
+ C = self.C
+ n = self.n
+ for i in range(n):
+ minval = min(self.C[i])
+ # Find the minimum value for this row and subtract that minimum
+ # from every element in the row.
+ for j in range(n):
+ self.C[i][j] -= minval
+
+ return 2
+
+ def __step2(self):
+ """
+ Find a zero (Z) in the resulting matrix. If there is no starred
+ zero in its row or column, star Z. Repeat for each element in the
+ matrix. Go to Step 3.
+ """
+ n = self.n
+ for i in range(n):
+ for j in range(n):
+ if (self.C[i][j] == 0) and \
+ (not self.col_covered[j]) and \
+ (not self.row_covered[i]):
+ self.marked[i][j] = 1
+ self.col_covered[j] = True
+ self.row_covered[i] = True
+
+ self.__clear_covers()
+ return 3
+
+ def __step3(self):
+ """
+ Cover each column containing a starred zero. If K columns are
+ covered, the starred zeros describe a complete set of unique
+ assignments. In this case, Go to DONE, otherwise, Go to Step 4.
+ """
+ n = self.n
+ count = 0
+ for i in range(n):
+ for j in range(n):
+ if self.marked[i][j] == 1:
+ self.col_covered[j] = True
+ count += 1
+
+ if count >= n:
+ step = 7 # done
+ else:
+ step = 4
+
+ return step
+
+ def __step4(self):
+ """
+ Find a noncovered zero and prime it. If there is no starred zero
+ in the row containing this primed zero, Go to Step 5. Otherwise,
+ cover this row and uncover the column containing the starred
+ zero. Continue in this manner until there are no uncovered zeros
+ left. Save the smallest uncovered value and Go to Step 6.
+ """
+ step = 0
+ done = False
+ row = -1
+ col = -1
+ star_col = -1
+ while not done:
+ (row, col) = self.__find_a_zero()
+ if row < 0:
+ done = True
+ step = 6
+ else:
+ self.marked[row][col] = 2
+ star_col = self.__find_star_in_row(row)
+ if star_col >= 0:
+ col = star_col
+ self.row_covered[row] = True
+ self.col_covered[col] = False
+ else:
+ done = True
+ self.Z0_r = row
+ self.Z0_c = col
+ step = 5
+
+ return step
+
+ def __step5(self):
+ """
+ Construct a series of alternating primed and starred zeros as
+ follows. Let Z0 represent the uncovered primed zero found in Step 4.
+ Let Z1 denote the starred zero in the column of Z0 (if any).
+ Let Z2 denote the primed zero in the row of Z1 (there will always
+ be one). Continue until the series terminates at a primed zero
+ that has no starred zero in its column. Unstar each starred zero
+ of the series, star each primed zero of the series, erase all
+ primes and uncover every line in the matrix. Return to Step 3
+ """
+ count = 0
+ path = self.path
+ path[count][0] = self.Z0_r
+ path[count][1] = self.Z0_c
+ done = False
+ while not done:
+ row = self.__find_star_in_col(path[count][1])
+ if row >= 0:
+ count += 1
+ path[count][0] = row
+ path[count][1] = path[count - 1][1]
+ else:
+ done = True
+
+ if not done:
+ col = self.__find_prime_in_row(path[count][0])
+ count += 1
+ path[count][0] = path[count - 1][0]
+ path[count][1] = col
+
+ self.__convert_path(path, count)
+ self.__clear_covers()
+ self.__erase_primes()
+ return 3
+
+ def __step6(self):
+ """
+ Add the value found in Step 4 to every element of each covered
+ row, and subtract it from every element of each uncovered column.
+ Return to Step 4 without altering any stars, primes, or covered
+ lines.
+ """
+ minval = self.__find_smallest()
+ for i in range(self.n):
+ for j in range(self.n):
+ if self.row_covered[i]:
+ self.C[i][j] += minval
+ if not self.col_covered[j]:
+ self.C[i][j] -= minval
+ return 4
+
+ def __find_smallest(self):
+ """Find the smallest uncovered value in the matrix."""
+        minval = 2e9  # a large sentinel standing in for sys.maxsize
+ for i in range(self.n):
+ for j in range(self.n):
+ if (not self.row_covered[i]) and (not self.col_covered[j]):
+ if minval > self.C[i][j]:
+ minval = self.C[i][j]
+ return minval
+
+ def __find_a_zero(self):
+ """Find the first uncovered element with value 0"""
+ row = -1
+ col = -1
+ i = 0
+ n = self.n
+ done = False
+
+ while not done:
+ j = 0
+ while True:
+ if (self.C[i][j] == 0) and \
+ (not self.row_covered[i]) and \
+ (not self.col_covered[j]):
+ row = i
+ col = j
+ done = True
+ j += 1
+ if j >= n:
+ break
+ i += 1
+ if i >= n:
+ done = True
+
+ return (row, col)
+
+ def __find_star_in_row(self, row):
+ """
+ Find the first starred element in the specified row. Returns
+ the column index, or -1 if no starred element was found.
+ """
+ col = -1
+ for j in range(self.n):
+ if self.marked[row][j] == 1:
+ col = j
+ break
+
+ return col
+
+ def __find_star_in_col(self, col):
+ """
+        Find the first starred element in the specified column. Returns
+ the row index, or -1 if no starred element was found.
+ """
+ row = -1
+ for i in range(self.n):
+ if self.marked[i][col] == 1:
+ row = i
+ break
+
+ return row
+
+ def __find_prime_in_row(self, row):
+ """
+        Find the first primed element in the specified row. Returns
+        the column index, or -1 if no primed element was found.
+ """
+ col = -1
+ for j in range(self.n):
+ if self.marked[row][j] == 2:
+ col = j
+ break
+
+ return col
+
+ def __convert_path(self, path, count):
+ for i in range(count + 1):
+ if self.marked[path[i][0]][path[i][1]] == 1:
+ self.marked[path[i][0]][path[i][1]] = 0
+ else:
+ self.marked[path[i][0]][path[i][1]] = 1
+
+ def __clear_covers(self):
+ """Clear all covered matrix cells"""
+ for i in range(self.n):
+ self.row_covered[i] = False
+ self.col_covered[i] = False
+
+ def __erase_primes(self):
+ """Erase all prime markings"""
+ for i in range(self.n):
+ for j in range(self.n):
+ if self.marked[i][j] == 2:
+ self.marked[i][j] = 0
+
+
+def make_cost_matrix(profit_matrix, inversion_function):
+ """
+ Create a cost matrix from a profit matrix by calling
+ 'inversion_function' to invert each value. The inversion
+ function must take one numeric argument (of any type) and return
+ another numeric argument which is presumed to be the cost inverse
+ of the original profit.
+
+ This is a static method. Call it like this:
+
+ .. python::
+
+ cost_matrix = Munkres.make_cost_matrix(matrix, inversion_func)
+
+ For example:
+
+ .. python::
+
+        cost_matrix = Munkres.make_cost_matrix(matrix, lambda x: sys.maxsize - x)
+
+ :Parameters:
+ profit_matrix : list of lists
+ The matrix to convert from a profit to a cost matrix
+
+ inversion_function : function
+ The function to use to invert each entry in the profit matrix
+
+ :rtype: list of lists
+ :return: The converted matrix
+ """
+ cost_matrix = []
+ for row in profit_matrix:
+ cost_matrix.append([inversion_function(value) for value in row])
+ return cost_matrix
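+
+
+if __name__ == '__main__':
+    # Small self-contained demo (not part of the borrowed module): solve a
+    # 3x3 assignment problem and print the optimal pairing and its total cost.
+    matrix = [[5, 9, 1],
+              [10, 3, 2],
+              [8, 7, 4]]
+    m = Munkres()
+    indexes = m.compute(matrix)
+    total = 0
+    for row, column in indexes:
+        value = matrix[row][column]
+        total += value
+        print('({}, {}) -> {}'.format(row, column, value))
+    print('total cost: {}'.format(total))  # expected minimum cost: 12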
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/widerface_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/widerface_utils.py
new file mode 100644
index 000000000..2f64bf6d5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/metrics/widerface_utils.py
@@ -0,0 +1,391 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import cv2
+import numpy as np
+from collections import OrderedDict
+
+import paddle
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['face_eval_run', 'lmk2out']
+
+
+def face_eval_run(model,
+ image_dir,
+ gt_file,
+ pred_dir='output/pred',
+ eval_mode='widerface',
+ multi_scale=False):
+ # load ground truth files
+ with open(gt_file, 'r') as f:
+ gt_lines = f.readlines()
+ imid2path = []
+ pos_gt = 0
+ while pos_gt < len(gt_lines):
+ name_gt = gt_lines[pos_gt].strip('\n\t').split()[0]
+ imid2path.append(name_gt)
+ pos_gt += 1
+ n_gt = int(gt_lines[pos_gt].strip('\n\t').split()[0])
+ pos_gt += 1 + n_gt
+    logger.info('The ground truth file contains {} images'.format(len(imid2path)))
+
+ dets_dist = OrderedDict()
+ for iter_id, im_path in enumerate(imid2path):
+ image_path = os.path.join(image_dir, im_path)
+ if eval_mode == 'fddb':
+ image_path += '.jpg'
+ assert os.path.exists(image_path)
+ image = cv2.imread(image_path)
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+ if multi_scale:
+ shrink, max_shrink = get_shrink(image.shape[0], image.shape[1])
+ det0 = detect_face(model, image, shrink)
+ det1 = flip_test(model, image, shrink)
+ [det2, det3] = multi_scale_test(model, image, max_shrink)
+ det4 = multi_scale_test_pyramid(model, image, max_shrink)
+ det = np.row_stack((det0, det1, det2, det3, det4))
+ dets = bbox_vote(det)
+ else:
+ dets = detect_face(model, image, 1)
+ if eval_mode == 'widerface':
+ save_widerface_bboxes(image_path, dets, pred_dir)
+ else:
+ dets_dist[im_path] = dets
+ if iter_id % 100 == 0:
+ logger.info('Test iter {}'.format(iter_id))
+ if eval_mode == 'fddb':
+ save_fddb_bboxes(dets_dist, pred_dir)
+ logger.info("Finish evaluation.")
+
+
+def detect_face(model, image, shrink):
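+    """
+    Run the face detector on one RGB image at the given shrink ratio and
+    return an (N, 5) numpy array of [xmin, ymin, xmax, ymax, score] rows.
+    """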
+ image_shape = [image.shape[0], image.shape[1]]
+ if shrink != 1:
+ h, w = int(image_shape[0] * shrink), int(image_shape[1] * shrink)
+ image = cv2.resize(image, (w, h))
+ image_shape = [h, w]
+
+ img = face_img_process(image)
+ image_shape = np.asarray([image_shape])
+ scale_factor = np.asarray([[shrink, shrink]])
+ data = {
+ "image": paddle.to_tensor(
+ img, dtype='float32'),
+ "im_shape": paddle.to_tensor(
+ image_shape, dtype='float32'),
+ "scale_factor": paddle.to_tensor(
+ scale_factor, dtype='float32')
+ }
+ model.eval()
+ detection = model(data)
+ detection = detection['bbox'].numpy()
+    # detection layout: [class_id, score, xmin, ymin, xmax, ymax]
+ if np.prod(detection.shape) == 1:
+ logger.info("No face detected")
+ return np.array([[0, 0, 0, 0, 0]])
+ det_conf = detection[:, 1]
+ det_xmin = detection[:, 2]
+ det_ymin = detection[:, 3]
+ det_xmax = detection[:, 4]
+ det_ymax = detection[:, 5]
+
+ det = np.column_stack((det_xmin, det_ymin, det_xmax, det_ymax, det_conf))
+ return det
+
+
+def flip_test(model, image, shrink):
+ img = cv2.flip(image, 1)
+ det_f = detect_face(model, img, shrink)
+ det_t = np.zeros(det_f.shape)
+ img_width = image.shape[1]
+ det_t[:, 0] = img_width - det_f[:, 2]
+ det_t[:, 1] = det_f[:, 1]
+ det_t[:, 2] = img_width - det_f[:, 0]
+ det_t[:, 3] = det_f[:, 3]
+ det_t[:, 4] = det_f[:, 4]
+ return det_t
+
+
+def multi_scale_test(model, image, max_shrink):
+    # Shrunken images are only used to detect big faces
+ st = 0.5 if max_shrink >= 0.75 else 0.5 * max_shrink
+ det_s = detect_face(model, image, st)
+ index = np.where(
+ np.maximum(det_s[:, 2] - det_s[:, 0] + 1, det_s[:, 3] - det_s[:, 1] + 1)
+ > 30)[0]
+ det_s = det_s[index, :]
+    # Enlarge once
+ bt = min(2, max_shrink) if max_shrink > 1 else (st + max_shrink) / 2
+ det_b = detect_face(model, image, bt)
+
+    # Enlarge the image several times to detect small faces
+ if max_shrink > 2:
+ bt *= 2
+ while bt < max_shrink:
+ det_b = np.row_stack((det_b, detect_face(model, image, bt)))
+ bt *= 2
+ det_b = np.row_stack((det_b, detect_face(model, image, max_shrink)))
+
+ # Enlarged images are only used to detect small faces.
+ if bt > 1:
+ index = np.where(
+ np.minimum(det_b[:, 2] - det_b[:, 0] + 1,
+ det_b[:, 3] - det_b[:, 1] + 1) < 100)[0]
+ det_b = det_b[index, :]
+    # Shrunken images are only used to detect big faces.
+ else:
+ index = np.where(
+ np.maximum(det_b[:, 2] - det_b[:, 0] + 1,
+ det_b[:, 3] - det_b[:, 1] + 1) > 30)[0]
+ det_b = det_b[index, :]
+ return det_s, det_b
+
+
+def multi_scale_test_pyramid(model, image, max_shrink):
+ # Use image pyramids to detect faces
+ det_b = detect_face(model, image, 0.25)
+ index = np.where(
+ np.maximum(det_b[:, 2] - det_b[:, 0] + 1, det_b[:, 3] - det_b[:, 1] + 1)
+ > 30)[0]
+ det_b = det_b[index, :]
+
+ st = [0.75, 1.25, 1.5, 1.75]
+ for i in range(len(st)):
+ if st[i] <= max_shrink:
+ det_temp = detect_face(model, image, st[i])
+ # Enlarged images are only used to detect small faces.
+ if st[i] > 1:
+ index = np.where(
+ np.minimum(det_temp[:, 2] - det_temp[:, 0] + 1,
+ det_temp[:, 3] - det_temp[:, 1] + 1) < 100)[0]
+ det_temp = det_temp[index, :]
+            # Shrunken images are only used to detect big faces.
+ else:
+ index = np.where(
+ np.maximum(det_temp[:, 2] - det_temp[:, 0] + 1,
+ det_temp[:, 3] - det_temp[:, 1] + 1) > 30)[0]
+ det_temp = det_temp[index, :]
+ det_b = np.row_stack((det_b, det_temp))
+ return det_b
+
+
+def to_chw(image):
+ """
+ Transpose image from HWC to CHW.
+ Args:
+ image (np.array): an image with HWC layout.
+ """
+ # HWC to CHW
+ if len(image.shape) == 3:
+ image = np.swapaxes(image, 1, 2)
+ image = np.swapaxes(image, 1, 0)
+ return image
+
+
+def face_img_process(image,
+ mean=[104., 117., 123.],
+ std=[127.502231, 127.502231, 127.502231]):
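+    """
+    Convert an HWC image to a normalized (1, C, H, W) float32 batch:
+    transpose to CHW, subtract the per-channel mean, divide by the std.
+    """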
+ img = np.array(image)
+ img = to_chw(img)
+ img = img.astype('float32')
+ img -= np.array(mean)[:, np.newaxis, np.newaxis].astype('float32')
+ img /= np.array(std)[:, np.newaxis, np.newaxis].astype('float32')
+ img = [img]
+ img = np.array(img)
+ return img
+
+
+def get_shrink(height, width):
+ """
+ Args:
+ height (int): image height.
+ width (int): image width.
+ """
+ # avoid out of memory
+ max_shrink_v1 = (0x7fffffff / 577.0 / (height * width))**0.5
+ max_shrink_v2 = ((678 * 1024 * 2.0 * 2.0) / (height * width))**0.5
+
+    def get_round(x, loc):
+        str_x = str(x)
+        if '.' in str_x:
+            str_before, str_after = str_x.split('.')
+            len_after = len(str_after)
+            if len_after >= 3:
+                str_final = str_before + '.' + str_after[0:loc]
+                return float(str_final)
+            else:
+                return x
+        return x
+
+ max_shrink = get_round(min(max_shrink_v1, max_shrink_v2), 2) - 0.3
+ if max_shrink >= 1.5 and max_shrink < 2:
+ max_shrink = max_shrink - 0.1
+ elif max_shrink >= 2 and max_shrink < 3:
+ max_shrink = max_shrink - 0.2
+ elif max_shrink >= 3 and max_shrink < 4:
+ max_shrink = max_shrink - 0.3
+ elif max_shrink >= 4 and max_shrink < 5:
+ max_shrink = max_shrink - 0.4
+ elif max_shrink >= 5:
+ max_shrink = max_shrink - 0.5
+ elif max_shrink <= 0.1:
+ max_shrink = 0.1
+
+ shrink = max_shrink if max_shrink < 1 else 1
+ return shrink, max_shrink
+
+
+def bbox_vote(det):
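+    """
+    Merge overlapping detections by score-weighted box voting: boxes with
+    IoU >= 0.3 against the current highest-scoring box are averaged (weighted
+    by score) into one detection. At most 750 boxes with score >= 0.01 are
+    kept. `det` is an (N, 5) array of [xmin, ymin, xmax, ymax, score] rows.
+    """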
+ order = det[:, 4].ravel().argsort()[::-1]
+ det = det[order, :]
+ if det.shape[0] == 0:
+ dets = np.array([[10, 10, 20, 20, 0.002]])
+ det = np.empty(shape=[0, 5])
+ while det.shape[0] > 0:
+ # IOU
+ area = (det[:, 2] - det[:, 0] + 1) * (det[:, 3] - det[:, 1] + 1)
+ xx1 = np.maximum(det[0, 0], det[:, 0])
+ yy1 = np.maximum(det[0, 1], det[:, 1])
+ xx2 = np.minimum(det[0, 2], det[:, 2])
+ yy2 = np.minimum(det[0, 3], det[:, 3])
+ w = np.maximum(0.0, xx2 - xx1 + 1)
+ h = np.maximum(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ o = inter / (area[0] + area[:] - inter)
+
+ # nms
+ merge_index = np.where(o >= 0.3)[0]
+ det_accu = det[merge_index, :]
+ det = np.delete(det, merge_index, 0)
+ if merge_index.shape[0] <= 1:
+ if det.shape[0] == 0:
+ try:
+ dets = np.row_stack((dets, det_accu))
+ except:
+ dets = det_accu
+ continue
+ det_accu[:, 0:4] = det_accu[:, 0:4] * np.tile(det_accu[:, -1:], (1, 4))
+ max_score = np.max(det_accu[:, 4])
+ det_accu_sum = np.zeros((1, 5))
+ det_accu_sum[:, 0:4] = np.sum(det_accu[:, 0:4],
+ axis=0) / np.sum(det_accu[:, -1:])
+ det_accu_sum[:, 4] = max_score
+ try:
+ dets = np.row_stack((dets, det_accu_sum))
+ except:
+ dets = det_accu_sum
+ dets = dets[0:750, :]
+ keep_index = np.where(dets[:, 4] >= 0.01)[0]
+ dets = dets[keep_index, :]
+ return dets
+
+
+def save_widerface_bboxes(image_path, bboxes_scores, output_dir):
+ image_name = image_path.split('/')[-1]
+ image_class = image_path.split('/')[-2]
+ odir = os.path.join(output_dir, image_class)
+ if not os.path.exists(odir):
+ os.makedirs(odir)
+
+ ofname = os.path.join(odir, '%s.txt' % (image_name[:-4]))
+ f = open(ofname, 'w')
+ f.write('{:s}\n'.format(image_class + '/' + image_name))
+ f.write('{:d}\n'.format(bboxes_scores.shape[0]))
+ for box_score in bboxes_scores:
+ xmin, ymin, xmax, ymax, score = box_score
+ f.write('{:.1f} {:.1f} {:.1f} {:.1f} {:.3f}\n'.format(xmin, ymin, (
+ xmax - xmin + 1), (ymax - ymin + 1), score))
+ f.close()
+ logger.info("The predicted result is saved as {}".format(ofname))
+
+
+def save_fddb_bboxes(bboxes_scores,
+ output_dir,
+ output_fname='pred_fddb_res.txt'):
+ if not os.path.exists(output_dir):
+ os.makedirs(output_dir)
+ predict_file = os.path.join(output_dir, output_fname)
+ f = open(predict_file, 'w')
+    for image_path, dets in bboxes_scores.items():
+ f.write('{:s}\n'.format(image_path))
+ f.write('{:d}\n'.format(dets.shape[0]))
+ for box_score in dets:
+ xmin, ymin, xmax, ymax, score = box_score
+ width, height = xmax - xmin, ymax - ymin
+ f.write('{:.1f} {:.1f} {:.1f} {:.1f} {:.3f}\n'
+ .format(xmin, ymin, width, height, score))
+ logger.info("The predicted result is saved as {}".format(predict_file))
+ return predict_file
+
+
+def lmk2out(results, is_bbox_normalized=False):
+ """
+ Args:
+        results: a list of dicts; each should include `landmark`, `im_id`,
+                 and, if is_bbox_normalized=True, also `im_shape`.
+        is_bbox_normalized: whether or not the landmark is normalized.
+ """
+ xywh_res = []
+ for t in results:
+ bboxes = t['bbox'][0]
+ lengths = t['bbox'][1][0]
+ im_ids = np.array(t['im_id'][0]).flatten()
+        if bboxes is None or bboxes.shape == (1, 1):
+ continue
+ face_index = t['face_index'][0]
+ prior_box = t['prior_boxes'][0]
+ predict_lmk = t['landmark'][0]
+ prior = np.reshape(prior_box, (-1, 4))
+ predictlmk = np.reshape(predict_lmk, (-1, 10))
+
+ k = 0
+ for a in range(len(lengths)):
+ num = lengths[a]
+ im_id = int(im_ids[a])
+ for i in range(num):
+ score = bboxes[k][1]
+ theindex = face_index[i][0]
+ me_prior = prior[theindex, :]
+ lmk_pred = predictlmk[theindex, :]
+ prior_w = me_prior[2] - me_prior[0]
+ prior_h = me_prior[3] - me_prior[1]
+ prior_w_center = (me_prior[2] + me_prior[0]) / 2
+ prior_h_center = (me_prior[3] + me_prior[1]) / 2
+ lmk_decode = np.zeros((10))
+ for j in [0, 2, 4, 6, 8]:
+ lmk_decode[j] = lmk_pred[j] * 0.1 * prior_w + prior_w_center
+ for j in [1, 3, 5, 7, 9]:
+ lmk_decode[j] = lmk_pred[j] * 0.1 * prior_h + prior_h_center
+ im_shape = t['im_shape'][0][a].tolist()
+ image_h, image_w = int(im_shape[0]), int(im_shape[1])
+ if is_bbox_normalized:
+ lmk_decode = lmk_decode * np.array([
+ image_w, image_h, image_w, image_h, image_w, image_h,
+ image_w, image_h, image_w, image_h
+ ])
+ lmk_res = {
+ 'image_id': im_id,
+ 'landmark': lmk_decode,
+ 'score': score,
+ }
+ xywh_res.append(lmk_res)
+ k += 1
+ return xywh_res
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/.gitignore b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/.gitignore
new file mode 100644
index 000000000..f296851d6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/.gitignore
@@ -0,0 +1 @@
+MODEL_ZOO
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__init__.py
new file mode 100644
index 000000000..6db6eb6c6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__init__.py
@@ -0,0 +1,18 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import model_zoo
+from .model_zoo import *
+
+__all__ = model_zoo.__all__
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..dccbb6902
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/model_zoo.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/model_zoo.cpython-37.pyc
new file mode 100644
index 000000000..dd8c6c51a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/__pycache__/model_zoo.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/model_zoo.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/model_zoo.py
new file mode 100644
index 000000000..27581ef79
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/model_zoo.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os.path as osp
+import pkg_resources
+
+try:
+ from collections.abc import Sequence
+except:
+ from collections import Sequence
+
+from ppdet.core.workspace import load_config, create
+from ppdet.utils.checkpoint import load_weight
+from ppdet.utils.download import get_config_path
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'list_model', 'get_config_file', 'get_weights_url', 'get_model',
+ 'MODEL_ZOO_FILENAME'
+]
+
+MODEL_ZOO_FILENAME = 'MODEL_ZOO'
+
+
+def list_model(filters=()):
+ model_zoo_file = pkg_resources.resource_filename('ppdet.model_zoo',
+ MODEL_ZOO_FILENAME)
+ with open(model_zoo_file) as f:
+ model_names = f.read().splitlines()
+
+ # filter model_name
+ def filt(name):
+ for f in filters:
+ if name.find(f) < 0:
+ return False
+ return True
+
+ if isinstance(filters, str) or not isinstance(filters, Sequence):
+ filters = [filters]
+ model_names = [name for name in model_names if filt(name)]
+ if len(model_names) == 0 and len(filters) > 0:
+ raise ValueError("no model found, please check filters seeting, "
+ "filters can be set as following kinds:\n"
+ "\tDataset: coco, voc ...\n"
+ "\tArchitecture: yolo, rcnn, ssd ...\n"
+ "\tBackbone: resnet, vgg, darknet ...\n")
+
+ model_str = "Available Models:\n"
+ for model_name in model_names:
+ model_str += "\t{}\n".format(model_name)
+ logger.info(model_str)
+
+
+# models and configs are saved on bcebos under the dygraph directory
+def get_config_file(model_name):
+ return get_config_path("ppdet://configs/{}.yml".format(model_name))
+
+
+def get_weights_url(model_name):
+ return "ppdet://models/{}.pdparams".format(osp.split(model_name)[-1])
+
+
+def get_model(model_name, pretrained=True):
+ cfg_file = get_config_file(model_name)
+ cfg = load_config(cfg_file)
+ model = create(cfg.architecture)
+
+ if pretrained:
+ load_weight(model, get_weights_url(model_name))
+
+ return model
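+
+
+# A minimal usage sketch (the model name below is the same example entry used
+# by the unit tests; any name listed in the MODEL_ZOO file works):
+#
+#   import ppdet
+#   ppdet.model_zoo.list_model(['yolo'])   # log every model matching 'yolo'
+#   model = ppdet.model_zoo.get_model('ppyolo/ppyolo_tiny_650e_coco')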
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/__init__.py
new file mode 100644
index 000000000..6f0ea8534
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_get_model.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_get_model.py
new file mode 100644
index 000000000..8887185e0
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_get_model.py
@@ -0,0 +1,48 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import paddle
+import ppdet
+import unittest
+
+# NOTE: downloading weights takes time, so we choose
+# a small model for unit testing
+MODEL_NAME = 'ppyolo/ppyolo_tiny_650e_coco'
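+
+# The tests below can be run directly (python test_get_model.py) or through
+# the unittest runner, e.g.: python -m unittest ppdet.model_zoo.tests.test_get_model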
+
+
+class TestGetConfigFile(unittest.TestCase):
+    def test_main(self):
+        # any uncaught exception is reported by unittest as an error,
+        # so there is no need to wrap the call in try/except
+        cfg_file = ppdet.model_zoo.get_config_file(MODEL_NAME)
+        self.assertTrue(os.path.isfile(cfg_file))
+
+
+class TestGetModel(unittest.TestCase):
+    def test_main(self):
+        model = ppdet.model_zoo.get_model(MODEL_NAME)
+        self.assertIsInstance(model, paddle.nn.Layer)
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_list_model.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_list_model.py
new file mode 100644
index 000000000..8f91afe00
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/model_zoo/tests/test_list_model.py
@@ -0,0 +1,68 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import unittest
+import ppdet
+
+
+class TestListModel(unittest.TestCase):
+ def setUp(self):
+ self._filter = []
+
+    def test_main(self):
+        # a raising list_model call is reported by unittest as an error
+        ppdet.model_zoo.list_model(self._filter)
+
+
+class TestListModelYOLO(TestListModel):
+ def setUp(self):
+ self._filter = ['yolo']
+
+
+class TestListModelRCNN(TestListModel):
+ def setUp(self):
+ self._filter = ['rcnn']
+
+
+class TestListModelSSD(TestListModel):
+ def setUp(self):
+ self._filter = ['ssd']
+
+
+class TestListModelMultiFilter(TestListModel):
+ def setUp(self):
+ self._filter = ['yolo', 'darknet']
+
+
+class TestListModelError(unittest.TestCase):
+ def setUp(self):
+ self._filter = ['xxx']
+
+    def test_main(self):
+        with self.assertRaises(ValueError):
+            ppdet.model_zoo.list_model(self._filter)
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/__init__.py
new file mode 100644
index 000000000..cdcb5d1bf
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/__init__.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import warnings
+warnings.filterwarnings(
+ action='ignore', category=DeprecationWarning, module='ops')
+
+from . import ops
+from . import backbones
+from . import necks
+from . import proposal_generator
+from . import heads
+from . import losses
+from . import architectures
+from . import post_process
+from . import layers
+from . import reid
+from . import mot
+from . import transformers
+from . import assigners
+
+from .ops import *
+from .backbones import *
+from .necks import *
+from .proposal_generator import *
+from .heads import *
+from .losses import *
+from .architectures import *
+from .post_process import *
+from .layers import *
+from .reid import *
+from .mot import *
+from .transformers import *
+from .assigners import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/__init__.py
new file mode 100644
index 000000000..b5feb06d8
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/__init__.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import meta_arch
+from . import faster_rcnn
+from . import mask_rcnn
+from . import yolo
+from . import cascade_rcnn
+from . import ssd
+from . import fcos
+from . import solov2
+from . import ttfnet
+from . import s2anet
+from . import keypoint_hrhrnet
+from . import keypoint_hrnet
+from . import jde
+from . import deepsort
+from . import fairmot
+from . import centernet
+from . import blazeface
+from . import gfl
+from . import picodet
+from . import detr
+from . import sparse_rcnn
+from . import tood
+
+from .meta_arch import *
+from .faster_rcnn import *
+from .mask_rcnn import *
+from .yolo import *
+from .cascade_rcnn import *
+from .ssd import *
+from .fcos import *
+from .solov2 import *
+from .ttfnet import *
+from .s2anet import *
+from .keypoint_hrhrnet import *
+from .keypoint_hrnet import *
+from .jde import *
+from .deepsort import *
+from .fairmot import *
+from .centernet import *
+from .blazeface import *
+from .gfl import *
+from .picodet import *
+from .detr import *
+from .sparse_rcnn import *
+from .tood import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/blazeface.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/blazeface.py
new file mode 100644
index 000000000..af6aa269d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/blazeface.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['BlazeFace']
+
+
+@register
+class BlazeFace(BaseArch):
+ """
+ BlazeFace: Sub-millisecond Neural Face Detection on Mobile GPUs,
+ see https://arxiv.org/abs/1907.05047
+
+ Args:
+ backbone (nn.Layer): backbone instance
+ neck (nn.Layer): neck instance
+        blaze_head (nn.Layer): `BlazeHead` instance
+ post_process (object): `BBoxPostProcess` instance
+ """
+
+ __category__ = 'architecture'
+ __inject__ = ['post_process']
+
+ def __init__(self, backbone, blaze_head, neck, post_process):
+ super(BlazeFace, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.blaze_head = blaze_head
+ self.post_process = post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
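+        # output shapes are chained: backbone.out_shape sizes the neck and
+        # neck.out_shape sizes the head, so every component is created to
+        # match the shape of its input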
+ # backbone
+ backbone = create(cfg['backbone'])
+ # fpn
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+ # head
+ kwargs = {'input_shape': neck.out_shape}
+ blaze_head = create(cfg['blaze_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ 'blaze_head': blaze_head,
+ }
+
+ def _forward(self):
+ # Backbone
+ body_feats = self.backbone(self.inputs)
+ # neck
+ neck_feats = self.neck(body_feats)
+ # blaze Head
+ if self.training:
+ return self.blaze_head(neck_feats, self.inputs['image'],
+ self.inputs['gt_bbox'],
+ self.inputs['gt_class'])
+ else:
+ preds, anchors = self.blaze_head(neck_feats, self.inputs['image'])
+ bbox, bbox_num = self.post_process(preds, anchors,
+ self.inputs['im_shape'],
+ self.inputs['scale_factor'])
+ return bbox, bbox_num
+
+    def get_loss(self):
+ return {"loss": self._forward()}
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {
+ "bbox": bbox_pred,
+ "bbox_num": bbox_num,
+ }
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/cascade_rcnn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/cascade_rcnn.py
new file mode 100644
index 000000000..ac29b775d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/cascade_rcnn.py
@@ -0,0 +1,143 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['CascadeRCNN']
+
+
+@register
+class CascadeRCNN(BaseArch):
+ """
+ Cascade R-CNN network, see https://arxiv.org/abs/1712.00726
+
+ Args:
+ backbone (object): backbone instance
+ rpn_head (object): `RPNHead` instance
+ bbox_head (object): `BBoxHead` instance
+ bbox_post_process (object): `BBoxPostProcess` instance
+ neck (object): 'FPN' instance
+ mask_head (object): `MaskHead` instance
+ mask_post_process (object): `MaskPostProcess` instance
+ """
+ __category__ = 'architecture'
+ __inject__ = [
+ 'bbox_post_process',
+ 'mask_post_process',
+ ]
+
+ def __init__(self,
+ backbone,
+ rpn_head,
+ bbox_head,
+ bbox_post_process,
+ neck=None,
+ mask_head=None,
+ mask_post_process=None):
+ super(CascadeRCNN, self).__init__()
+ self.backbone = backbone
+ self.rpn_head = rpn_head
+ self.bbox_head = bbox_head
+ self.bbox_post_process = bbox_post_process
+ self.neck = neck
+ self.mask_head = mask_head
+ self.mask_post_process = mask_post_process
+ self.with_mask = mask_head is not None
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = cfg['neck'] and create(cfg['neck'], **kwargs)
+
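+        # 'neck and neck.out_shape or backbone.out_shape' selects the neck's
+        # output shape when a neck is configured (it may be None) and falls
+        # back to the backbone's otherwise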
+ out_shape = neck and neck.out_shape or backbone.out_shape
+ kwargs = {'input_shape': out_shape}
+ rpn_head = create(cfg['rpn_head'], **kwargs)
+ bbox_head = create(cfg['bbox_head'], **kwargs)
+
+ out_shape = neck and out_shape or bbox_head.get_head().out_shape
+ kwargs = {'input_shape': out_shape}
+ mask_head = cfg['mask_head'] and create(cfg['mask_head'], **kwargs)
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "rpn_head": rpn_head,
+ "bbox_head": bbox_head,
+ "mask_head": mask_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ if self.neck is not None:
+ body_feats = self.neck(body_feats)
+
+ if self.training:
+ rois, rois_num, rpn_loss = self.rpn_head(body_feats, self.inputs)
+ bbox_loss, bbox_feat = self.bbox_head(body_feats, rois, rois_num,
+ self.inputs)
+ rois, rois_num = self.bbox_head.get_assigned_rois()
+ bbox_targets = self.bbox_head.get_assigned_targets()
+ if self.with_mask:
+ mask_loss = self.mask_head(body_feats, rois, rois_num,
+ self.inputs, bbox_targets, bbox_feat)
+ return rpn_loss, bbox_loss, mask_loss
+ else:
+ return rpn_loss, bbox_loss, {}
+ else:
+ rois, rois_num, _ = self.rpn_head(body_feats, self.inputs)
+ preds, _ = self.bbox_head(body_feats, rois, rois_num, self.inputs)
+ refined_rois = self.bbox_head.get_refined_rois()
+
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+
+ bbox, bbox_num = self.bbox_post_process(
+ preds, (refined_rois, rois_num), im_shape, scale_factor)
+ # rescale the prediction back to origin image
+ bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num,
+ im_shape, scale_factor)
+ if not self.with_mask:
+ return bbox_pred, bbox_num, None
+ mask_out = self.mask_head(body_feats, bbox, bbox_num, self.inputs)
+ origin_shape = self.bbox_post_process.get_origin_shape()
+ mask_pred = self.mask_post_process(mask_out[:, 0, :, :], bbox_pred,
+ bbox_num, origin_shape)
+ return bbox_pred, bbox_num, mask_pred
+
+    def get_loss(self):
+ rpn_loss, bbox_loss, mask_loss = self._forward()
+ loss = {}
+ loss.update(rpn_loss)
+ loss.update(bbox_loss)
+ if self.with_mask:
+ loss.update(mask_loss)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num, mask_pred = self._forward()
+ output = {
+ 'bbox': bbox_pred,
+ 'bbox_num': bbox_num,
+ }
+ if self.with_mask:
+ output.update({'mask': mask_pred})
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/centernet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/centernet.py
new file mode 100644
index 000000000..2287d743b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/centernet.py
@@ -0,0 +1,108 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['CenterNet']
+
+
+@register
+class CenterNet(BaseArch):
+ """
+ CenterNet network, see http://arxiv.org/abs/1904.07850
+
+ Args:
+ backbone (object): backbone instance
+        neck (object): FPN instance, 'CenterNetDLAFPN' by default
+ head (object): 'CenterNetHead' instance
+ post_process (object): 'CenterNetPostProcess' instance
+ for_mot (bool): whether return other features used in tracking model
+
+ """
+ __category__ = 'architecture'
+ __inject__ = ['post_process']
+ __shared__ = ['for_mot']
+
+ def __init__(self,
+ backbone,
+ neck='CenterNetDLAFPN',
+ head='CenterNetHead',
+ post_process='CenterNetPostProcess',
+ for_mot=False):
+ super(CenterNet, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.head = head
+ self.post_process = post_process
+ self.for_mot = for_mot
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = cfg['neck'] and create(cfg['neck'], **kwargs)
+
+ out_shape = neck and neck.out_shape or backbone.out_shape
+ kwargs = {'input_shape': out_shape}
+ head = create(cfg['head'], **kwargs)
+
+ return {'backbone': backbone, 'neck': neck, "head": head}
+
+ def _forward(self):
+ neck_feat = self.backbone(self.inputs)
+ if self.neck is not None:
+ neck_feat = self.neck(neck_feat)
+ head_out = self.head(neck_feat, self.inputs)
+ if self.for_mot:
+ head_out.update({'neck_feat': neck_feat})
+ elif self.training:
+ head_out['loss'] = head_out.pop('det_loss')
+ return head_out
+
+ def get_pred(self):
+ head_out = self._forward()
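+        # with for_mot=True the raw box indices, top-k classes and neck
+        # features are kept so a downstream tracker (e.g. FairMOT) can reuse
+        # them; otherwise only the post-processed boxes are returned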
+ if self.for_mot:
+ bbox, bbox_inds, topk_clses = self.post_process(
+ head_out['heatmap'],
+ head_out['size'],
+ head_out['offset'],
+ im_shape=self.inputs['im_shape'],
+ scale_factor=self.inputs['scale_factor'])
+ output = {
+ "bbox": bbox,
+ "bbox_inds": bbox_inds,
+ "topk_clses": topk_clses,
+ "neck_feat": head_out['neck_feat']
+ }
+ else:
+ bbox, bbox_num, _ = self.post_process(
+ head_out['heatmap'],
+ head_out['size'],
+ head_out['offset'],
+ im_shape=self.inputs['im_shape'],
+ scale_factor=self.inputs['scale_factor'])
+ output = {
+ "bbox": bbox,
+ "bbox_num": bbox_num,
+ }
+ return output
+
+ def get_loss(self):
+ return self._forward()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/deepsort.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/deepsort.py
new file mode 100644
index 000000000..066f7a4ce
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/deepsort.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+from ppdet.modeling.mot.utils import Detection, get_crops, scale_coords, clip_box
+
+__all__ = ['DeepSORT']
+
+
+@register
+class DeepSORT(BaseArch):
+ """
+ DeepSORT network, see https://arxiv.org/abs/1703.07402
+
+ Args:
+ detector (object): detector model instance
+ reid (object): reid model instance
+ tracker (object): tracker instance
+ """
+ __category__ = 'architecture'
+
+ def __init__(self,
+ detector='YOLOv3',
+ reid='PCBPyramid',
+ tracker='DeepSORTTracker'):
+ super(DeepSORT, self).__init__()
+ self.detector = detector
+ self.reid = reid
+ self.tracker = tracker
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ if cfg['detector'] != 'None':
+ detector = create(cfg['detector'])
+ else:
+ detector = None
+ reid = create(cfg['reid'])
+ tracker = create(cfg['tracker'])
+
+ return {
+ "detector": detector,
+ "reid": reid,
+ "tracker": tracker,
+ }
+
+ def _forward(self):
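+        # detection and ReID are separate stages in DeepSORT: this forward
+        # pass only embeds the pre-cropped detection patches in 'crops';
+        # association itself happens in the tracker, outside the network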
+ crops = self.inputs['crops']
+ features = self.reid(crops)
+ return features
+
+ def get_pred(self):
+ return self._forward()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/detr.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/detr.py
new file mode 100644
index 000000000..2c081bf6c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/detr.py
@@ -0,0 +1,93 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from .meta_arch import BaseArch
+from ppdet.core.workspace import register, create
+
+__all__ = ['DETR']
+
+
+@register
+class DETR(BaseArch):
+ __category__ = 'architecture'
+ __inject__ = ['post_process']
+
+ def __init__(self,
+ backbone,
+ transformer,
+ detr_head,
+ post_process='DETRBBoxPostProcess'):
+ super(DETR, self).__init__()
+ self.backbone = backbone
+ self.transformer = transformer
+ self.detr_head = detr_head
+ self.post_process = post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ # backbone
+ backbone = create(cfg['backbone'])
+ # transformer
+ kwargs = {'input_shape': backbone.out_shape}
+ transformer = create(cfg['transformer'], **kwargs)
+ # head
+ kwargs = {
+ 'hidden_dim': transformer.hidden_dim,
+ 'nhead': transformer.nhead,
+ 'input_shape': backbone.out_shape
+ }
+ detr_head = create(cfg['detr_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'transformer': transformer,
+ "detr_head": detr_head,
+ }
+
+ def _forward(self):
+ # Backbone
+ body_feats = self.backbone(self.inputs)
+
+ # Transformer
+ out_transformer = self.transformer(body_feats, self.inputs['pad_mask'])
+
+ # DETR Head
+ if self.training:
+ return self.detr_head(out_transformer, body_feats, self.inputs)
+ else:
+ preds = self.detr_head(out_transformer, body_feats)
+ bbox, bbox_num = self.post_process(preds, self.inputs['im_shape'],
+ self.inputs['scale_factor'])
+ return bbox, bbox_num
+
+    def get_loss(self):
+ losses = self._forward()
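+        # entries whose key contains 'log' are monitoring-only values and are
+        # excluded from the total loss that gets optimized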
+ losses.update({
+ 'loss':
+ paddle.add_n([v for k, v in losses.items() if 'log' not in k])
+ })
+ return losses
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {
+ "bbox": bbox_pred,
+ "bbox_num": bbox_num,
+ }
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fairmot.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fairmot.py
new file mode 100644
index 000000000..271450839
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fairmot.py
@@ -0,0 +1,100 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['FairMOT']
+
+
+@register
+class FairMOT(BaseArch):
+ """
+ FairMOT network, see http://arxiv.org/abs/2004.01888
+
+ Args:
+ detector (object): 'CenterNet' instance
+ reid (object): 'FairMOTEmbeddingHead' instance
+ tracker (object): 'JDETracker' instance
+ loss (object): 'FairMOTLoss' instance
+
+ """
+
+ __category__ = 'architecture'
+ __inject__ = ['loss']
+
+ def __init__(self,
+ detector='CenterNet',
+ reid='FairMOTEmbeddingHead',
+ tracker='JDETracker',
+ loss='FairMOTLoss'):
+ super(FairMOT, self).__init__()
+ self.detector = detector
+ self.reid = reid
+ self.tracker = tracker
+ self.loss = loss
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ detector = create(cfg['detector'])
+ detector_out_shape = detector.neck and detector.neck.out_shape or detector.backbone.out_shape
+
+ kwargs = {'input_shape': detector_out_shape}
+ reid = create(cfg['reid'], **kwargs)
+ loss = create(cfg['loss'])
+ tracker = create(cfg['tracker'])
+
+ return {
+ 'detector': detector,
+ 'reid': reid,
+ 'loss': loss,
+ 'tracker': tracker
+ }
+
+ def _forward(self):
+ loss = dict()
+ # det_outs keys:
+ # train: neck_feat, det_loss, heatmap_loss, size_loss, offset_loss (optional: iou_loss)
+ # eval/infer: neck_feat, bbox, bbox_inds
+ det_outs = self.detector(self.inputs)
+ neck_feat = det_outs['neck_feat']
+ if self.training:
+ reid_loss = self.reid(neck_feat, self.inputs)
+
+ det_loss = det_outs['det_loss']
+ loss = self.loss(det_loss, reid_loss)
+ for k, v in det_outs.items():
+ if 'loss' not in k:
+ continue
+ loss.update({k: v})
+ loss.update({'reid_loss': reid_loss})
+ return loss
+ else:
+ pred_dets, pred_embs = self.reid(
+ neck_feat, self.inputs, det_outs['bbox'], det_outs['bbox_inds'],
+ det_outs['topk_clses'])
+ return pred_dets, pred_embs
+
+ def get_pred(self):
+ output = self._forward()
+ return output
+
+ def get_loss(self):
+ loss = self._forward()
+ return loss
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/faster_rcnn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/faster_rcnn.py
new file mode 100644
index 000000000..26a2672d6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/faster_rcnn.py
@@ -0,0 +1,106 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['FasterRCNN']
+
+
+@register
+class FasterRCNN(BaseArch):
+ """
+ Faster R-CNN network, see https://arxiv.org/abs/1506.01497
+
+ Args:
+ backbone (object): backbone instance
+ rpn_head (object): `RPNHead` instance
+ bbox_head (object): `BBoxHead` instance
+ bbox_post_process (object): `BBoxPostProcess` instance
+ neck (object): 'FPN' instance
+ """
+ __category__ = 'architecture'
+ __inject__ = ['bbox_post_process']
+
+ def __init__(self,
+ backbone,
+ rpn_head,
+ bbox_head,
+ bbox_post_process,
+ neck=None):
+ super(FasterRCNN, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.rpn_head = rpn_head
+ self.bbox_head = bbox_head
+ self.bbox_post_process = bbox_post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = cfg['neck'] and create(cfg['neck'], **kwargs)
+
+ out_shape = neck and neck.out_shape or backbone.out_shape
+ kwargs = {'input_shape': out_shape}
+ rpn_head = create(cfg['rpn_head'], **kwargs)
+ bbox_head = create(cfg['bbox_head'], **kwargs)
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "rpn_head": rpn_head,
+ "bbox_head": bbox_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ if self.neck is not None:
+ body_feats = self.neck(body_feats)
+ if self.training:
+ rois, rois_num, rpn_loss = self.rpn_head(body_feats, self.inputs)
+ bbox_loss, _ = self.bbox_head(body_feats, rois, rois_num,
+ self.inputs)
+ return rpn_loss, bbox_loss
+ else:
+ rois, rois_num, _ = self.rpn_head(body_feats, self.inputs)
+ preds, _ = self.bbox_head(body_feats, rois, rois_num, None)
+
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+ bbox, bbox_num = self.bbox_post_process(preds, (rois, rois_num),
+ im_shape, scale_factor)
+
+ # rescale the prediction back to origin image
+ bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num,
+ im_shape, scale_factor)
+ return bbox_pred, bbox_num
+
+    def get_loss(self):
+ rpn_loss, bbox_loss = self._forward()
+ loss = {}
+ loss.update(rpn_loss)
+ loss.update(bbox_loss)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fcos.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fcos.py
new file mode 100644
index 000000000..8fa5c569b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/fcos.py
@@ -0,0 +1,105 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['FCOS']
+
+
+@register
+class FCOS(BaseArch):
+ """
+ FCOS network, see https://arxiv.org/abs/1904.01355
+
+ Args:
+ backbone (object): backbone instance
+ neck (object): 'FPN' instance
+ fcos_head (object): 'FCOSHead' instance
+        fcos_post_process (object): 'FCOSPostProcess' instance
+ """
+
+ __category__ = 'architecture'
+ __inject__ = ['fcos_post_process']
+
+ def __init__(self,
+ backbone,
+ neck,
+ fcos_head='FCOSHead',
+ fcos_post_process='FCOSPostProcess'):
+ super(FCOS, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.fcos_head = fcos_head
+ self.fcos_post_process = fcos_post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ fcos_head = create(cfg['fcos_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "fcos_head": fcos_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ fpn_feats = self.neck(body_feats)
+ fcos_head_outs = self.fcos_head(fpn_feats, self.training)
+ if not self.training:
+ scale_factor = self.inputs['scale_factor']
+ bboxes = self.fcos_post_process(fcos_head_outs, scale_factor)
+ return bboxes
+ else:
+ return fcos_head_outs
+
+    def get_loss(self):
+ loss = {}
+ tag_labels, tag_bboxes, tag_centerness = [], [], []
+ for i in range(len(self.fcos_head.fpn_stride)):
+ # labels, reg_target, centerness
+ k_lbl = 'labels{}'.format(i)
+ if k_lbl in self.inputs:
+ tag_labels.append(self.inputs[k_lbl])
+ k_box = 'reg_target{}'.format(i)
+ if k_box in self.inputs:
+ tag_bboxes.append(self.inputs[k_box])
+ k_ctn = 'centerness{}'.format(i)
+ if k_ctn in self.inputs:
+ tag_centerness.append(self.inputs[k_ctn])
+
+ fcos_head_outs = self._forward()
+ loss_fcos = self.fcos_head.get_loss(fcos_head_outs, tag_labels,
+ tag_bboxes, tag_centerness)
+ loss.update(loss_fcos)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/gfl.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/gfl.py
new file mode 100644
index 000000000..91c13077f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/gfl.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['GFL']
+
+
+@register
+class GFL(BaseArch):
+ """
+ Generalized Focal Loss network, see https://arxiv.org/abs/2006.04388
+
+ Args:
+ backbone (object): backbone instance
+ neck (object): 'FPN' instance
+ head (object): 'GFLHead' instance
+ """
+
+ __category__ = 'architecture'
+
+ def __init__(self, backbone, neck, head='GFLHead'):
+ super(GFL, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.head = head
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ head = create(cfg['head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "head": head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ fpn_feats = self.neck(body_feats)
+ head_outs = self.head(fpn_feats)
+ if not self.training:
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+ bboxes, bbox_num = self.head.post_process(head_outs, im_shape,
+ scale_factor)
+ return bboxes, bbox_num
+ else:
+ return head_outs
+
+    def get_loss(self):
+ loss = {}
+
+ head_outs = self._forward()
+ loss_gfl = self.head.get_loss(head_outs, self.inputs)
+ loss.update(loss_gfl)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/jde.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/jde.py
new file mode 100644
index 000000000..11b45c8c1
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/jde.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['JDE']
+
+
+@register
+class JDE(BaseArch):
+    """
+    JDE network, see https://arxiv.org/abs/1909.12605v1
+
+    Args:
+        detector (object): detector model instance
+        reid (object): reid model instance
+        tracker (object): tracker instance
+        metric (str): 'MOTDet' for training and detection evaluation, 'ReID'
+            for ReID embedding evaluation, or 'MOT' for multi object tracking
+            evaluation.
+    """
+    __category__ = 'architecture'
+    __shared__ = ['metric']
+
+ def __init__(self,
+ detector='YOLOv3',
+ reid='JDEEmbeddingHead',
+ tracker='JDETracker',
+ metric='MOT'):
+ super(JDE, self).__init__()
+ self.detector = detector
+ self.reid = reid
+ self.tracker = tracker
+ self.metric = metric
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ detector = create(cfg['detector'])
+ kwargs = {'input_shape': detector.neck.out_shape}
+
+ reid = create(cfg['reid'], **kwargs)
+
+ tracker = create(cfg['tracker'])
+
+ return {
+ "detector": detector,
+ "reid": reid,
+ "tracker": tracker,
+ }
+
+ def _forward(self):
+ det_outs = self.detector(self.inputs)
+
+ if self.training:
+ emb_feats = det_outs['emb_feats']
+ loss_confs = det_outs['det_losses']['loss_confs']
+ loss_boxes = det_outs['det_losses']['loss_boxes']
+ jde_losses = self.reid(
+ emb_feats,
+ self.inputs,
+ loss_confs=loss_confs,
+ loss_boxes=loss_boxes)
+ return jde_losses
+ else:
+ if self.metric == 'MOTDet':
+ det_results = {
+ 'bbox': det_outs['bbox'],
+ 'bbox_num': det_outs['bbox_num'],
+ }
+ return det_results
+
+ elif self.metric == 'MOT':
+ emb_feats = det_outs['emb_feats']
+ bboxes = det_outs['bbox']
+ boxes_idx = det_outs['boxes_idx']
+ nms_keep_idx = det_outs['nms_keep_idx']
+
+ pred_dets, pred_embs = self.reid(
+ emb_feats,
+ self.inputs,
+ bboxes=bboxes,
+ boxes_idx=boxes_idx,
+ nms_keep_idx=nms_keep_idx)
+ return pred_dets, pred_embs
+
+ else:
+ raise ValueError("Unknown metric {} for multi object tracking.".
+ format(self.metric))
+
+ def get_loss(self):
+ return self._forward()
+
+ def get_pred(self):
+ return self._forward()
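+
+# Eval-mode sketch (illustration only): `metric` selects what _forward()
+# returns at eval time, so one set of weights serves several evaluations:
+#   metric == 'MOTDet' -> {'bbox': ..., 'bbox_num': ...}   (detection eval)
+#   metric == 'MOT'    -> (pred_dets, pred_embs)           (tracking eval)
+# any other value raises ValueError inside _forward().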
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrhrnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrhrnet.py
new file mode 100644
index 000000000..6f62b4b21
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrhrnet.py
@@ -0,0 +1,287 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from scipy.optimize import linear_sum_assignment
+from collections import abc, defaultdict
+import numpy as np
+import paddle
+
+from ppdet.core.workspace import register, create, serializable
+from .meta_arch import BaseArch
+from .. import layers as L
+from ..keypoint_utils import transpred
+
+__all__ = ['HigherHRNet']
+
+
+@register
+class HigherHRNet(BaseArch):
+ __category__ = 'architecture'
+
+ def __init__(self,
+ backbone='HRNet',
+ hrhrnet_head='HrHRNetHead',
+ post_process='HrHRNetPostProcess',
+ eval_flip=True,
+ flip_perm=None,
+ max_num_people=30):
+ """
+        HigherHRNet network, see https://arxiv.org/abs/1908.10357;
+        HigherHRNet+SWAHR, see https://arxiv.org/abs/2012.15175
+
+        Args:
+            backbone (nn.Layer): backbone instance
+            hrhrnet_head (nn.Layer): keypoint_head instance
+            post_process (object): `HrHRNetPostProcess` instance
+            eval_flip (bool): whether to average the original and the
+                horizontally flipped inputs at eval time
+            flip_perm (list): left-right joint index mapping used for flipping
+            max_num_people (int): max number of poses kept in postprocess
+ """
+ super(HigherHRNet, self).__init__()
+ self.backbone = backbone
+ self.hrhrnet_head = hrhrnet_head
+ self.post_process = post_process
+ self.flip = eval_flip
+ self.flip_perm = paddle.to_tensor(flip_perm)
+ self.deploy = False
+ self.interpolate = L.Upsample(2, mode='bilinear')
+ self.pool = L.MaxPool(5, 1, 2)
+ self.max_num_people = max_num_people
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ # backbone
+ backbone = create(cfg['backbone'])
+ # head
+ kwargs = {'input_shape': backbone.out_shape}
+ hrhrnet_head = create(cfg['hrhrnet_head'], **kwargs)
+ post_process = create(cfg['post_process'])
+
+ return {
+ 'backbone': backbone,
+ "hrhrnet_head": hrhrnet_head,
+ "post_process": post_process,
+ }
+
+ def _forward(self):
+ if self.flip and not self.training and not self.deploy:
+ self.inputs['image'] = paddle.concat(
+ (self.inputs['image'], paddle.flip(self.inputs['image'], [3])))
+ body_feats = self.backbone(self.inputs)
+
+ if self.training:
+ return self.hrhrnet_head(body_feats, self.inputs)
+ else:
+ outputs = self.hrhrnet_head(body_feats)
+
+ if self.flip and not self.deploy:
+ outputs = [paddle.split(o, 2) for o in outputs]
+ output_rflip = [
+ paddle.flip(paddle.gather(o[1], self.flip_perm, 1), [3])
+ for o in outputs
+ ]
+ output1 = [o[0] for o in outputs]
+ heatmap = (output1[0] + output_rflip[0]) / 2.
+ tagmaps = [output1[1], output_rflip[1]]
+ outputs = [heatmap] + tagmaps
+ outputs = self.get_topk(outputs)
+
+ if self.deploy:
+ return outputs
+
+ res_lst = []
+ h = self.inputs['im_shape'][0, 0].numpy().item()
+ w = self.inputs['im_shape'][0, 1].numpy().item()
+ kpts, scores = self.post_process(*outputs, h, w)
+ res_lst.append([kpts, scores])
+ return res_lst
+
+ def get_loss(self):
+ return self._forward()
+
+ def get_pred(self):
+ outputs = {}
+ res_lst = self._forward()
+ outputs['keypoint'] = res_lst
+ return outputs
+
+ def get_topk(self, outputs):
+ # resize to image size
+ outputs = [self.interpolate(x) for x in outputs]
+ if len(outputs) == 3:
+ tagmap = paddle.concat(
+ (outputs[1].unsqueeze(4), outputs[2].unsqueeze(4)), axis=4)
+ else:
+ tagmap = outputs[1].unsqueeze(4)
+
+ heatmap = outputs[0]
+ N, J = 1, self.hrhrnet_head.num_joints
+ heatmap_maxpool = self.pool(heatmap)
+ # topk
+ maxmap = heatmap * (heatmap == heatmap_maxpool)
+ maxmap = maxmap.reshape([N, J, -1])
+ heat_k, inds_k = maxmap.topk(self.max_num_people, axis=2)
+
+ outputs = [heatmap, tagmap, heat_k, inds_k]
+ return outputs
+
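+# Flip-test note (explanatory sketch, not part of the original file): with
+# eval_flip enabled, _forward() batches the image with its horizontal mirror,
+# splits the two outputs, re-orders mirrored joints via flip_perm, flips them
+# back along the width axis, and averages only the heatmaps:
+#   heatmap = (output_orig + output_flipped_back) / 2.
+# The two tagmaps are kept separate, since tag values are identity embeddings
+# that are grouped later rather than averaged.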
+
+@register
+@serializable
+class HrHRNetPostProcess(object):
+ '''
+    HrHRNet postprocess contains:
+    1) get the topk keypoints in the output heatmap
+    2) sample the tagmap's value at each of the topk coordinates
+    3) match joints into people with the Hungarian algorithm
+    4) adjust each coordinate by +-0.25 to decrease the error std
+    5) salvage missing joints by checking positivity of heatmap - tagdiff_norm
+    Args:
+        max_num_people (int): max number of people supported in postprocess
+        heat_thresh (float): topk values below this threshold are ignored
+        tag_thresh (float): tag values sampled in the tagmap within this
+            threshold belong to the same person when initializing a cluster
+
+        inputs (list[heatmap]): the output list of the model: [heatmap,
+            heatmap_maxpool, tagmap]; heatmap_maxpool is used to get the topk
+        original_height, original_width (float): the original image size
+ '''
+
+ def __init__(self, max_num_people=30, heat_thresh=0.1, tag_thresh=1.):
+ self.max_num_people = max_num_people
+ self.heat_thresh = heat_thresh
+ self.tag_thresh = tag_thresh
+
+ def lerp(self, j, y, x, heatmap):
+ H, W = heatmap.shape[-2:]
+ left = np.clip(x - 1, 0, W - 1)
+ right = np.clip(x + 1, 0, W - 1)
+ up = np.clip(y - 1, 0, H - 1)
+ down = np.clip(y + 1, 0, H - 1)
+ offset_y = np.where(heatmap[j, down, x] > heatmap[j, up, x], 0.25,
+ -0.25)
+ offset_x = np.where(heatmap[j, y, right] > heatmap[j, y, left], 0.25,
+ -0.25)
+ return offset_y + 0.5, offset_x + 0.5
+
+ def __call__(self, heatmap, tagmap, heat_k, inds_k, original_height,
+ original_width):
+
+ N, J, H, W = heatmap.shape
+        assert N == 1, "HrHRNetPostProcess only supports batch size 1"
+ heatmap = heatmap[0].cpu().detach().numpy()
+ tagmap = tagmap[0].cpu().detach().numpy()
+ heats = heat_k[0].cpu().detach().numpy()
+ inds_np = inds_k[0].cpu().detach().numpy()
+ y = inds_np // W
+ x = inds_np % W
+ tags = tagmap[np.arange(J)[None, :].repeat(self.max_num_people),
+ y.flatten(), x.flatten()].reshape(J, -1, tagmap.shape[-1])
+ coords = np.stack((y, x), axis=2)
+ # threshold
+ mask = heats > self.heat_thresh
+ # cluster
+ cluster = defaultdict(lambda: {
+ 'coords': np.zeros((J, 2), dtype=np.float32),
+ 'scores': np.zeros(J, dtype=np.float32),
+ 'tags': []
+ })
+ for jid, m in enumerate(mask):
+ num_valid = m.sum()
+ if num_valid == 0:
+ continue
+ valid_inds = np.where(m)[0]
+ valid_tags = tags[jid, m, :]
+ if len(cluster) == 0: # initialize
+ for i in valid_inds:
+ tag = tags[jid, i]
+ key = tag[0]
+ cluster[key]['tags'].append(tag)
+ cluster[key]['scores'][jid] = heats[jid, i]
+ cluster[key]['coords'][jid] = coords[jid, i]
+ continue
+ candidates = list(cluster.keys())[:self.max_num_people]
+ centroids = [
+ np.mean(
+ cluster[k]['tags'], axis=0) for k in candidates
+ ]
+ num_clusters = len(centroids)
+ # shape is (num_valid, num_clusters, tag_dim)
+ dist = valid_tags[:, None, :] - np.array(centroids)[None, ...]
+ l2_dist = np.linalg.norm(dist, ord=2, axis=2)
+ # modulate dist with heat value, see `use_detection_val`
+ cost = np.round(l2_dist) * 100 - heats[jid, m, None]
+ # pad the cost matrix, otherwise new pose are ignored
+ if num_valid > num_clusters:
+ cost = np.pad(cost, ((0, 0), (0, num_valid - num_clusters)),
+ 'constant',
+ constant_values=((0, 0), (0, 1e-10)))
+ rows, cols = linear_sum_assignment(cost)
+ for y, x in zip(rows, cols):
+ tag = tags[jid, y]
+ if y < num_valid and x < num_clusters and \
+ l2_dist[y, x] < self.tag_thresh:
+ key = candidates[x] # merge to cluster
+ else:
+ key = tag[0] # initialize new cluster
+ cluster[key]['tags'].append(tag)
+ cluster[key]['scores'][jid] = heats[jid, y]
+ cluster[key]['coords'][jid] = coords[jid, y]
+
+ # shape is [k, J, 2] and [k, J]
+ pose_tags = np.array([cluster[k]['tags'] for k in cluster])
+ pose_coords = np.array([cluster[k]['coords'] for k in cluster])
+ pose_scores = np.array([cluster[k]['scores'] for k in cluster])
+ valid = pose_scores > 0
+
+ pose_kpts = np.zeros((pose_scores.shape[0], J, 3), dtype=np.float32)
+ if valid.sum() == 0:
+ return pose_kpts, pose_kpts
+
+ # refine coords
+ valid_coords = pose_coords[valid].astype(np.int32)
+ y = valid_coords[..., 0].flatten()
+ x = valid_coords[..., 1].flatten()
+ _, j = np.nonzero(valid)
+ offsets = self.lerp(j, y, x, heatmap)
+ pose_coords[valid, 0] += offsets[0]
+ pose_coords[valid, 1] += offsets[1]
+
+ # mean score before salvage
+ mean_score = pose_scores.mean(axis=1)
+ pose_kpts[valid, 2] = pose_scores[valid]
+
+        # salvage missing joints
+        for pid, coords in enumerate(pose_coords):
+            tag_mean = np.array(pose_tags[pid]).mean(axis=0)
+            norm = np.sum((tagmap - tag_mean)**2, axis=3)**0.5
+            score = heatmap - np.round(norm)  # (J, H, W)
+            flat_score = score.reshape(J, -1)
+            max_inds = np.argmax(flat_score, axis=1)
+            max_scores = np.max(flat_score, axis=1)
+            salvage_joints = (pose_scores[pid] == 0) & (max_scores > 0)
+            if salvage_joints.sum() == 0:
+                continue
+            y = max_inds[salvage_joints] // W
+            x = max_inds[salvage_joints] % W
+            offsets = self.lerp(salvage_joints.nonzero()[0], y, x, heatmap)
+            y = y.astype(np.float32) + offsets[0]
+            x = x.astype(np.float32) + offsets[1]
+            pose_coords[pid][salvage_joints, 0] = y
+            pose_coords[pid][salvage_joints, 1] = x
+            pose_kpts[pid][salvage_joints, 2] = max_scores[salvage_joints]
+ pose_kpts[..., :2] = transpred(pose_coords[..., :2][..., ::-1],
+ original_height, original_width,
+ min(H, W))
+ return pose_kpts, mean_score
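+
+# Grouping sketch (illustrative, not part of the original file): step 3 of
+# the postprocess runs scipy's Hungarian solver on a
+# (num_valid x num_clusters) cost matrix built from tag distances; padding
+# with near-zero columns lets unmatched joints start new clusters, e.g.:
+#   import numpy as np
+#   from scipy.optimize import linear_sum_assignment
+#   cost = np.array([[0.1, 5.0], [4.0, 0.2], [6.0, 7.0]])  # 3 joints, 2 poses
+#   cost = np.pad(cost, ((0, 0), (0, 1)), 'constant', constant_values=1e-10)
+#   rows, cols = linear_sum_assignment(cost)
+#   # -> joints 0 and 1 match poses 0 and 1; joint 2 lands on the padded
+#   #    column and therefore starts a new pose cluster.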
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrnet.py
new file mode 100644
index 000000000..914bd043c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/keypoint_hrnet.py
@@ -0,0 +1,267 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import numpy as np
+import math
+import cv2
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+from ..keypoint_utils import transform_preds
+from .. import layers as L
+
+__all__ = ['TopDownHRNet']
+
+
+@register
+class TopDownHRNet(BaseArch):
+ __category__ = 'architecture'
+ __inject__ = ['loss']
+
+ def __init__(self,
+ width,
+ num_joints,
+ backbone='HRNet',
+ loss='KeyPointMSELoss',
+ post_process='HRNetPostProcess',
+ flip_perm=None,
+ flip=True,
+ shift_heatmap=True,
+ use_dark=True):
+ """
+ HRNet network, see https://arxiv.org/abs/1902.09212
+
+ Args:
+ backbone (nn.Layer): backbone instance
+ post_process (object): `HRNetPostProcess` instance
+ flip_perm (list): The left-right joints exchange order list
+ use_dark(bool): Whether to use DARK in post processing
+ """
+ super(TopDownHRNet, self).__init__()
+ self.backbone = backbone
+ self.post_process = HRNetPostProcess(use_dark)
+ self.loss = loss
+ self.flip_perm = flip_perm
+ self.flip = flip
+ self.final_conv = L.Conv2d(width, num_joints, 1, 1, 0, bias=True)
+ self.shift_heatmap = shift_heatmap
+ self.deploy = False
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ # backbone
+ backbone = create(cfg['backbone'])
+
+ return {'backbone': backbone, }
+
+ def _forward(self):
+ feats = self.backbone(self.inputs)
+ hrnet_outputs = self.final_conv(feats[0])
+
+ if self.training:
+ return self.loss(hrnet_outputs, self.inputs)
+ elif self.deploy:
+ outshape = hrnet_outputs.shape
+ max_idx = paddle.argmax(
+ hrnet_outputs.reshape(
+ (outshape[0], outshape[1], outshape[2] * outshape[3])),
+ axis=-1)
+ return hrnet_outputs, max_idx
+ else:
+ if self.flip:
+ self.inputs['image'] = self.inputs['image'].flip([3])
+ feats = self.backbone(self.inputs)
+ output_flipped = self.final_conv(feats[0])
+ output_flipped = self.flip_back(output_flipped.numpy(),
+ self.flip_perm)
+ output_flipped = paddle.to_tensor(output_flipped.copy())
+ if self.shift_heatmap:
+ output_flipped[:, :, :, 1:] = output_flipped.clone(
+ )[:, :, :, 0:-1]
+ hrnet_outputs = (hrnet_outputs + output_flipped) * 0.5
+ imshape = (self.inputs['im_shape'].numpy()
+ )[:, ::-1] if 'im_shape' in self.inputs else None
+ center = self.inputs['center'].numpy(
+ ) if 'center' in self.inputs else np.round(imshape / 2.)
+ scale = self.inputs['scale'].numpy(
+ ) if 'scale' in self.inputs else imshape / 200.
+ outputs = self.post_process(hrnet_outputs, center, scale)
+ return outputs
+
+ def get_loss(self):
+ return self._forward()
+
+ def get_pred(self):
+ res_lst = self._forward()
+ outputs = {'keypoint': res_lst}
+ return outputs
+
+ def flip_back(self, output_flipped, matched_parts):
+ assert output_flipped.ndim == 4,\
+ 'output_flipped should be [batch_size, num_joints, height, width]'
+
+ output_flipped = output_flipped[:, :, :, ::-1]
+
+ for pair in matched_parts:
+ tmp = output_flipped[:, pair[0], :, :].copy()
+ output_flipped[:, pair[0], :, :] = output_flipped[:, pair[1], :, :]
+ output_flipped[:, pair[1], :, :] = tmp
+
+ return output_flipped
+
+
+class HRNetPostProcess(object):
+ def __init__(self, use_dark=True):
+ self.use_dark = use_dark
+
+ def get_max_preds(self, heatmaps):
+ '''get predictions from score maps
+
+ Args:
+ heatmaps: numpy.ndarray([batch_size, num_joints, height, width])
+
+ Returns:
+ preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
+            maxvals: numpy.ndarray([batch_size, num_joints, 1]), the maximum confidence of the keypoints
+ '''
+ assert isinstance(heatmaps,
+ np.ndarray), 'heatmaps should be numpy.ndarray'
+ assert heatmaps.ndim == 4, 'batch_images should be 4-ndim'
+
+ batch_size = heatmaps.shape[0]
+ num_joints = heatmaps.shape[1]
+ width = heatmaps.shape[3]
+ heatmaps_reshaped = heatmaps.reshape((batch_size, num_joints, -1))
+ idx = np.argmax(heatmaps_reshaped, 2)
+ maxvals = np.amax(heatmaps_reshaped, 2)
+
+ maxvals = maxvals.reshape((batch_size, num_joints, 1))
+ idx = idx.reshape((batch_size, num_joints, 1))
+
+ preds = np.tile(idx, (1, 1, 2)).astype(np.float32)
+
+ preds[:, :, 0] = (preds[:, :, 0]) % width
+ preds[:, :, 1] = np.floor((preds[:, :, 1]) / width)
+
+ pred_mask = np.tile(np.greater(maxvals, 0.0), (1, 1, 2))
+ pred_mask = pred_mask.astype(np.float32)
+
+ preds *= pred_mask
+
+ return preds, maxvals
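+
+    # Decoding sketch (illustrative, not part of the original file): the flat
+    # argmax index from get_max_preds is converted to (x, y) via modulo and
+    # floor-division by the heatmap width, e.g. with width W = 48:
+    #   idx = 100  ->  x = 100 % 48 = 4,  y = floor(100 / 48) = 2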
+
+ def gaussian_blur(self, heatmap, kernel):
+ border = (kernel - 1) // 2
+ batch_size = heatmap.shape[0]
+ num_joints = heatmap.shape[1]
+ height = heatmap.shape[2]
+ width = heatmap.shape[3]
+ for i in range(batch_size):
+ for j in range(num_joints):
+ origin_max = np.max(heatmap[i, j])
+ dr = np.zeros((height + 2 * border, width + 2 * border))
+ dr[border:-border, border:-border] = heatmap[i, j].copy()
+ dr = cv2.GaussianBlur(dr, (kernel, kernel), 0)
+ heatmap[i, j] = dr[border:-border, border:-border].copy()
+ heatmap[i, j] *= origin_max / np.max(heatmap[i, j])
+ return heatmap
+
+ def dark_parse(self, hm, coord):
+ heatmap_height = hm.shape[0]
+ heatmap_width = hm.shape[1]
+ px = int(coord[0])
+ py = int(coord[1])
+ if 1 < px < heatmap_width - 2 and 1 < py < heatmap_height - 2:
+ dx = 0.5 * (hm[py][px + 1] - hm[py][px - 1])
+ dy = 0.5 * (hm[py + 1][px] - hm[py - 1][px])
+ dxx = 0.25 * (hm[py][px + 2] - 2 * hm[py][px] + hm[py][px - 2])
+ dxy = 0.25 * (hm[py+1][px+1] - hm[py-1][px+1] - hm[py+1][px-1] \
+ + hm[py-1][px-1])
+            dyy = 0.25 * (
+                hm[py + 2][px] - 2 * hm[py][px] + hm[py - 2][px])
+ derivative = np.matrix([[dx], [dy]])
+ hessian = np.matrix([[dxx, dxy], [dxy, dyy]])
+ if dxx * dyy - dxy**2 != 0:
+ hessianinv = hessian.I
+ offset = -hessianinv * derivative
+ offset = np.squeeze(np.array(offset.T), axis=0)
+ coord += offset
+ return coord
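+
+    # DARK refinement sketch (explanatory, not part of the original file):
+    # dark_parse performs one Newton step around the integer peak. With the
+    # finite-difference gradient D = [dx, dy]^T and Hessian
+    # H = [[dxx, dxy], [dxy, dyy]], the sub-pixel offset is
+    #   offset = -H^{-1} D
+    # and the step is skipped when H is singular (dxx*dyy - dxy**2 == 0).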
+
+ def dark_postprocess(self, hm, coords, kernelsize):
+        '''DARK postprocessing, Zhang et al. Distribution-Aware Coordinate
+ Representation for Human Pose Estimation (CVPR 2020).
+ '''
+
+ hm = self.gaussian_blur(hm, kernelsize)
+ hm = np.maximum(hm, 1e-10)
+ hm = np.log(hm)
+ for n in range(coords.shape[0]):
+ for p in range(coords.shape[1]):
+ coords[n, p] = self.dark_parse(hm[n][p], coords[n][p])
+ return coords
+
+ def get_final_preds(self, heatmaps, center, scale, kernelsize=3):
+ """the highest heatvalue location with a quarter offset in the
+ direction from the highest response to the second highest response.
+
+ Args:
+ heatmaps (numpy.ndarray): The predicted heatmaps
+ center (numpy.ndarray): The boxes center
+ scale (numpy.ndarray): The scale factor
+
+ Returns:
+ preds: numpy.ndarray([batch_size, num_joints, 2]), keypoints coords
+ maxvals: numpy.ndarray([batch_size, num_joints, 1]), the maximum confidence of the keypoints
+ """
+ coords, maxvals = self.get_max_preds(heatmaps)
+
+ heatmap_height = heatmaps.shape[2]
+ heatmap_width = heatmaps.shape[3]
+
+ if self.use_dark:
+ coords = self.dark_postprocess(heatmaps, coords, kernelsize)
+ else:
+ for n in range(coords.shape[0]):
+ for p in range(coords.shape[1]):
+ hm = heatmaps[n][p]
+ px = int(math.floor(coords[n][p][0] + 0.5))
+ py = int(math.floor(coords[n][p][1] + 0.5))
+ if 1 < px < heatmap_width - 1 and 1 < py < heatmap_height - 1:
+ diff = np.array([
+ hm[py][px + 1] - hm[py][px - 1],
+ hm[py + 1][px] - hm[py - 1][px]
+ ])
+ coords[n][p] += np.sign(diff) * .25
+ preds = coords.copy()
+
+ # Transform back
+ for i in range(coords.shape[0]):
+ preds[i] = transform_preds(coords[i], center[i], scale[i],
+ [heatmap_width, heatmap_height])
+
+ return preds, maxvals
+
+ def __call__(self, output, center, scale):
+ preds, maxvals = self.get_final_preds(output.numpy(), center, scale)
+        outputs = [[
+            np.concatenate((preds, maxvals), axis=-1),
+            np.mean(maxvals, axis=1)
+        ]]
+ return outputs
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/mask_rcnn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/mask_rcnn.py
new file mode 100644
index 000000000..071a326f4
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/mask_rcnn.py
@@ -0,0 +1,135 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['MaskRCNN']
+
+
+@register
+class MaskRCNN(BaseArch):
+ """
+ Mask R-CNN network, see https://arxiv.org/abs/1703.06870
+
+ Args:
+ backbone (object): backbone instance
+ rpn_head (object): `RPNHead` instance
+ bbox_head (object): `BBoxHead` instance
+ mask_head (object): `MaskHead` instance
+ bbox_post_process (object): `BBoxPostProcess` instance
+ mask_post_process (object): `MaskPostProcess` instance
+ neck (object): 'FPN' instance
+ """
+
+ __category__ = 'architecture'
+ __inject__ = [
+ 'bbox_post_process',
+ 'mask_post_process',
+ ]
+
+ def __init__(self,
+ backbone,
+ rpn_head,
+ bbox_head,
+ mask_head,
+ bbox_post_process,
+ mask_post_process,
+ neck=None):
+ super(MaskRCNN, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.rpn_head = rpn_head
+ self.bbox_head = bbox_head
+ self.mask_head = mask_head
+
+ self.bbox_post_process = bbox_post_process
+ self.mask_post_process = mask_post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = cfg['neck'] and create(cfg['neck'], **kwargs)
+
+ out_shape = neck and neck.out_shape or backbone.out_shape
+ kwargs = {'input_shape': out_shape}
+ rpn_head = create(cfg['rpn_head'], **kwargs)
+ bbox_head = create(cfg['bbox_head'], **kwargs)
+
+ out_shape = neck and out_shape or bbox_head.get_head().out_shape
+ kwargs = {'input_shape': out_shape}
+ mask_head = create(cfg['mask_head'], **kwargs)
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "rpn_head": rpn_head,
+ "bbox_head": bbox_head,
+ "mask_head": mask_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ if self.neck is not None:
+ body_feats = self.neck(body_feats)
+
+ if self.training:
+ rois, rois_num, rpn_loss = self.rpn_head(body_feats, self.inputs)
+ bbox_loss, bbox_feat = self.bbox_head(body_feats, rois, rois_num,
+ self.inputs)
+ rois, rois_num = self.bbox_head.get_assigned_rois()
+ bbox_targets = self.bbox_head.get_assigned_targets()
+ # Mask Head needs bbox_feat in Mask RCNN
+ mask_loss = self.mask_head(body_feats, rois, rois_num, self.inputs,
+ bbox_targets, bbox_feat)
+ return rpn_loss, bbox_loss, mask_loss
+ else:
+ rois, rois_num, _ = self.rpn_head(body_feats, self.inputs)
+ preds, feat_func = self.bbox_head(body_feats, rois, rois_num, None)
+
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+
+ bbox, bbox_num = self.bbox_post_process(preds, (rois, rois_num),
+ im_shape, scale_factor)
+ mask_out = self.mask_head(
+ body_feats, bbox, bbox_num, self.inputs, feat_func=feat_func)
+
+ # rescale the prediction back to origin image
+ bbox_pred = self.bbox_post_process.get_pred(bbox, bbox_num,
+ im_shape, scale_factor)
+ origin_shape = self.bbox_post_process.get_origin_shape()
+ mask_pred = self.mask_post_process(mask_out[:, 0, :, :], bbox_pred,
+ bbox_num, origin_shape)
+ return bbox_pred, bbox_num, mask_pred
+
+ def get_loss(self, ):
+        rpn_loss, bbox_loss, mask_loss = self._forward()
+ loss = {}
+ loss.update(rpn_loss)
+ loss.update(bbox_loss)
+ loss.update(mask_loss)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num, mask_pred = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num, 'mask': mask_pred}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/meta_arch.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/meta_arch.py
new file mode 100644
index 000000000..d9875e183
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/meta_arch.py
@@ -0,0 +1,72 @@
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+from ppdet.core.workspace import register
+
+__all__ = ['BaseArch']
+
+
+@register
+class BaseArch(nn.Layer):
+ def __init__(self, data_format='NCHW'):
+ super(BaseArch, self).__init__()
+ self.data_format = data_format
+ self.inputs = {}
+ self.fuse_norm = False
+
+ def load_meanstd(self, cfg_transform):
+ self.scale = 1.
+ self.mean = paddle.to_tensor([0.485, 0.456, 0.406]).reshape(
+ (1, 3, 1, 1))
+ self.std = paddle.to_tensor([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1))
+ for item in cfg_transform:
+ if 'NormalizeImage' in item:
+ self.mean = paddle.to_tensor(item['NormalizeImage'][
+ 'mean']).reshape((1, 3, 1, 1))
+ self.std = paddle.to_tensor(item['NormalizeImage'][
+ 'std']).reshape((1, 3, 1, 1))
+ if item['NormalizeImage'].get('is_scale', True):
+ self.scale = 1. / 255.
+ break
+ if self.data_format == 'NHWC':
+ self.mean = self.mean.reshape(1, 1, 1, 3)
+ self.std = self.std.reshape(1, 1, 1, 3)
+
+ def forward(self, inputs):
+ if self.data_format == 'NHWC':
+ image = inputs['image']
+ inputs['image'] = paddle.transpose(image, [0, 2, 3, 1])
+
+ if self.fuse_norm:
+ image = inputs['image']
+ self.inputs['image'] = (image * self.scale - self.mean) / self.std
+ self.inputs['im_shape'] = inputs['im_shape']
+ self.inputs['scale_factor'] = inputs['scale_factor']
+ else:
+ self.inputs = inputs
+
+ self.model_arch()
+
+ if self.training:
+ out = self.get_loss()
+ else:
+ out = self.get_pred()
+ return out
+
+ def build_inputs(self, data, input_def):
+ inputs = {}
+ for i, k in enumerate(input_def):
+ inputs[k] = data[i]
+ return inputs
+
+ def model_arch(self, ):
+ pass
+
+ def get_loss(self, ):
+ raise NotImplementedError("Should implement get_loss method!")
+
+ def get_pred(self, ):
+ raise NotImplementedError("Should implement get_pred method!")
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/picodet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/picodet.py
new file mode 100644
index 000000000..cd807a9fa
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/picodet.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['PicoDet']
+
+
+@register
+class PicoDet(BaseArch):
+ """
+    PP-PicoDet network, a lightweight detector with a GFL-style head, see https://arxiv.org/abs/2006.04388
+
+ Args:
+ backbone (object): backbone instance
+ neck (object): 'FPN' instance
+ head (object): 'PicoHead' instance
+ """
+
+ __category__ = 'architecture'
+
+ def __init__(self, backbone, neck, head='PicoHead'):
+ super(PicoDet, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.head = head
+ self.deploy = False
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ head = create(cfg['head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "head": head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ fpn_feats = self.neck(body_feats)
+ head_outs = self.head(fpn_feats, self.deploy)
+ if self.training or self.deploy:
+ return head_outs, None
+ else:
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+ bboxes, bbox_num = self.head.post_process(head_outs, im_shape,
+ scale_factor)
+ return bboxes, bbox_num
+
+ def get_loss(self, ):
+ loss = {}
+
+ head_outs, _ = self._forward()
+ loss_gfl = self.head.get_loss(head_outs, self.inputs)
+ loss.update(loss_gfl)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ if self.deploy:
+ return {'picodet': self._forward()[0]}
+ else:
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
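+
+# Export note (sketch, not part of the original file): with self.deploy set
+# to True, get_pred() returns the raw head outputs under the 'picodet' key
+# and skips post_process, so NMS and box decoding can run outside the
+# exported inference graph.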
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/s2anet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/s2anet.py
new file mode 100644
index 000000000..ecfc987f9
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/s2anet.py
@@ -0,0 +1,102 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['S2ANet']
+
+
+@register
+class S2ANet(BaseArch):
+ __category__ = 'architecture'
+ __inject__ = [
+ 's2anet_head',
+ 's2anet_bbox_post_process',
+ ]
+
+ def __init__(self, backbone, neck, s2anet_head, s2anet_bbox_post_process):
+ """
+ S2ANet, see https://arxiv.org/pdf/2008.09397.pdf
+
+ Args:
+ backbone (object): backbone instance
+ neck (object): `FPN` instance
+ s2anet_head (object): `S2ANetHead` instance
+ s2anet_bbox_post_process (object): `S2ANetBBoxPostProcess` instance
+ """
+ super(S2ANet, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.s2anet_head = s2anet_head
+ self.s2anet_bbox_post_process = s2anet_bbox_post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = cfg['neck'] and create(cfg['neck'], **kwargs)
+
+ out_shape = neck and neck.out_shape or backbone.out_shape
+ kwargs = {'input_shape': out_shape}
+ s2anet_head = create(cfg['s2anet_head'], **kwargs)
+ s2anet_bbox_post_process = create(cfg['s2anet_bbox_post_process'],
+ **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "s2anet_head": s2anet_head,
+ "s2anet_bbox_post_process": s2anet_bbox_post_process,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ if self.neck is not None:
+ body_feats = self.neck(body_feats)
+ self.s2anet_head(body_feats)
+ if self.training:
+ loss = self.s2anet_head.get_loss(self.inputs)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+ else:
+ im_shape = self.inputs['im_shape']
+ scale_factor = self.inputs['scale_factor']
+ nms_pre = self.s2anet_bbox_post_process.nms_pre
+ pred_scores, pred_bboxes = self.s2anet_head.get_prediction(nms_pre)
+
+ # post_process
+ pred_bboxes, bbox_num = self.s2anet_bbox_post_process(pred_scores,
+ pred_bboxes)
+ # rescale the prediction back to origin image
+ pred_bboxes = self.s2anet_bbox_post_process.get_pred(
+ pred_bboxes, bbox_num, im_shape, scale_factor)
+
+ # output
+ output = {'bbox': pred_bboxes, 'bbox_num': bbox_num}
+ return output
+
+ def get_loss(self, ):
+ loss = self._forward()
+ return loss
+
+ def get_pred(self):
+ output = self._forward()
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/solov2.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/solov2.py
new file mode 100644
index 000000000..4e5fc2118
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/solov2.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['SOLOv2']
+
+
+@register
+class SOLOv2(BaseArch):
+ """
+ SOLOv2 network, see https://arxiv.org/abs/2003.10152
+
+ Args:
+        backbone (object): a backbone instance
+        solov2_head (object): a `SOLOv2Head` instance
+        mask_head (object): a `SOLOv2MaskHead` instance
+        neck (object): neck of the network, such as a feature pyramid network instance
+ """
+
+ __category__ = 'architecture'
+
+ def __init__(self, backbone, solov2_head, mask_head, neck=None):
+ super(SOLOv2, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.solov2_head = solov2_head
+ self.mask_head = mask_head
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ solov2_head = create(cfg['solov2_head'], **kwargs)
+ mask_head = create(cfg['mask_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ 'solov2_head': solov2_head,
+ 'mask_head': mask_head,
+ }
+
+ def model_arch(self):
+ body_feats = self.backbone(self.inputs)
+
+ body_feats = self.neck(body_feats)
+
+ self.seg_pred = self.mask_head(body_feats)
+
+ self.cate_pred_list, self.kernel_pred_list = self.solov2_head(
+ body_feats)
+
+ def get_loss(self, ):
+ loss = {}
+ # get gt_ins_labels, gt_cate_labels, etc.
+ gt_ins_labels, gt_cate_labels, gt_grid_orders = [], [], []
+ fg_num = self.inputs['fg_num']
+ for i in range(len(self.solov2_head.seg_num_grids)):
+ ins_label = 'ins_label{}'.format(i)
+ if ins_label in self.inputs:
+ gt_ins_labels.append(self.inputs[ins_label])
+ cate_label = 'cate_label{}'.format(i)
+ if cate_label in self.inputs:
+ gt_cate_labels.append(self.inputs[cate_label])
+ grid_order = 'grid_order{}'.format(i)
+ if grid_order in self.inputs:
+ gt_grid_orders.append(self.inputs[grid_order])
+
+ loss_solov2 = self.solov2_head.get_loss(
+ self.cate_pred_list, self.kernel_pred_list, self.seg_pred,
+ gt_ins_labels, gt_cate_labels, gt_grid_orders, fg_num)
+ loss.update(loss_solov2)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ seg_masks, cate_labels, cate_scores, bbox_num = self.solov2_head.get_prediction(
+ self.cate_pred_list, self.kernel_pred_list, self.seg_pred,
+ self.inputs['im_shape'], self.inputs['scale_factor'])
+ outs = {
+ "segm": seg_masks,
+ "bbox_num": bbox_num,
+ 'cate_label': cate_labels,
+ 'cate_score': cate_scores
+ }
+ return outs
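+
+# Input-key sketch (illustrative, not part of the original file): SOLOv2
+# targets are packed per FPN level under numbered keys, which get_loss()
+# gathers in grid order:
+#   inputs = {'ins_label0': ..., 'cate_label0': ..., 'grid_order0': ...,
+#             'ins_label1': ..., ..., 'fg_num': ...}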
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/sparse_rcnn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/sparse_rcnn.py
new file mode 100644
index 000000000..34c29498b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/sparse_rcnn.py
@@ -0,0 +1,99 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ["SparseRCNN"]
+
+
+@register
+class SparseRCNN(BaseArch):
+ __category__ = 'architecture'
+ __inject__ = ["postprocess"]
+
+ def __init__(self,
+ backbone,
+ neck,
+ head="SparsercnnHead",
+ postprocess="SparsePostProcess"):
+ super(SparseRCNN, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.head = head
+ self.postprocess = postprocess
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'roi_input_shape': neck.out_shape}
+ head = create(cfg['head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "head": head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ fpn_feats = self.neck(body_feats)
+ head_outs = self.head(fpn_feats, self.inputs["img_whwh"])
+
+ if not self.training:
+ bboxes = self.postprocess(
+ head_outs["pred_logits"], head_outs["pred_boxes"],
+ self.inputs["scale_factor_wh"], self.inputs["img_whwh"])
+ return bboxes
+ else:
+ return head_outs
+
+ def get_loss(self):
+ batch_gt_class = self.inputs["gt_class"]
+ batch_gt_box = self.inputs["gt_bbox"]
+ batch_whwh = self.inputs["img_whwh"]
+ targets = []
+
+ for i in range(len(batch_gt_class)):
+ boxes = batch_gt_box[i]
+ labels = batch_gt_class[i].squeeze(-1)
+ img_whwh = batch_whwh[i]
+ img_whwh_tgt = img_whwh.unsqueeze(0).tile([int(boxes.shape[0]), 1])
+ targets.append({
+ "boxes": boxes,
+ "labels": labels,
+ "img_whwh": img_whwh,
+ "img_whwh_tgt": img_whwh_tgt
+ })
+
+ outputs = self._forward()
+ loss_dict = self.head.get_loss(outputs, targets)
+ acc = loss_dict["acc"]
+ loss_dict.pop("acc")
+ total_loss = sum(loss_dict.values())
+ loss_dict.update({"loss": total_loss, "acc": acc})
+ return loss_dict
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ssd.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ssd.py
new file mode 100644
index 000000000..34bf24108
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ssd.py
@@ -0,0 +1,92 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['SSD']
+
+
+@register
+class SSD(BaseArch):
+ """
+ Single Shot MultiBox Detector, see https://arxiv.org/abs/1512.02325
+
+ Args:
+ backbone (nn.Layer): backbone instance
+ ssd_head (nn.Layer): `SSDHead` instance
+ post_process (object): `BBoxPostProcess` instance
+ """
+
+ __category__ = 'architecture'
+ __inject__ = ['post_process']
+
+ def __init__(self, backbone, ssd_head, post_process, r34_backbone=False):
+ super(SSD, self).__init__()
+ self.backbone = backbone
+ self.ssd_head = ssd_head
+ self.post_process = post_process
+ self.r34_backbone = r34_backbone
+ if self.r34_backbone:
+ from ppdet.modeling.backbones.resnet import ResNet
+ assert isinstance(self.backbone, ResNet) and \
+ self.backbone.depth == 34, \
+ "If you set r34_backbone=True, please use ResNet-34 as backbone."
+ self.backbone.res_layers[2].blocks[0].branch2a.conv._stride = [1, 1]
+ self.backbone.res_layers[2].blocks[0].short.conv._stride = [1, 1]
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ # backbone
+ backbone = create(cfg['backbone'])
+
+ # head
+ kwargs = {'input_shape': backbone.out_shape}
+ ssd_head = create(cfg['ssd_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ "ssd_head": ssd_head,
+ }
+
+ def _forward(self):
+ # Backbone
+ body_feats = self.backbone(self.inputs)
+
+ # SSD Head
+ if self.training:
+ return self.ssd_head(body_feats, self.inputs['image'],
+ self.inputs['gt_bbox'],
+ self.inputs['gt_class'])
+ else:
+ preds, anchors = self.ssd_head(body_feats, self.inputs['image'])
+ bbox, bbox_num = self.post_process(preds, anchors,
+ self.inputs['im_shape'],
+ self.inputs['scale_factor'])
+ return bbox, bbox_num
+
+ def get_loss(self, ):
+ return {"loss": self._forward()}
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {
+ "bbox": bbox_pred,
+ "bbox_num": bbox_num,
+ }
+ return output
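+
+# Backbone note (sketch, not part of the original file): with
+# r34_backbone=True, the first block of backbone.res_layers[2] has the
+# strides of its branch2a conv and shortcut conv forced to [1, 1], removing
+# that stage's downsampling so the feature map keeps a higher resolution --
+# the layout this SSD head expects.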
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/tood.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/tood.py
new file mode 100644
index 000000000..157ec6f3a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/tood.py
@@ -0,0 +1,77 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['TOOD']
+
+
+@register
+class TOOD(BaseArch):
+ """
+ TOOD: Task-aligned One-stage Object Detection, see https://arxiv.org/abs/2108.07755
+ Args:
+ backbone (nn.Layer): backbone instance
+ neck (nn.Layer): 'FPN' instance
+ head (nn.Layer): 'TOODHead' instance
+ """
+
+ __category__ = 'architecture'
+
+ def __init__(self, backbone, neck, head):
+ super(TOOD, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.head = head
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ head = create(cfg['head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "head": head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ fpn_feats = self.neck(body_feats)
+ head_outs = self.head(fpn_feats)
+ if not self.training:
+ bboxes, bbox_num = self.head.post_process(
+ head_outs, self.inputs['im_shape'], self.inputs['scale_factor'])
+ return bboxes, bbox_num
+ else:
+ loss = self.head.get_loss(head_outs, self.inputs)
+ return loss
+
+ def get_loss(self):
+ return self._forward()
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {'bbox': bbox_pred, 'bbox_num': bbox_num}
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ttfnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ttfnet.py
new file mode 100644
index 000000000..c3eb61c87
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/ttfnet.py
@@ -0,0 +1,98 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+
+__all__ = ['TTFNet']
+
+
+@register
+class TTFNet(BaseArch):
+ """
+ TTFNet network, see https://arxiv.org/abs/1909.00700
+
+ Args:
+ backbone (object): backbone instance
+ neck (object): 'TTFFPN' instance
+ ttf_head (object): 'TTFHead' instance
+ post_process (object): 'BBoxPostProcess' instance
+ """
+
+ __category__ = 'architecture'
+ __inject__ = ['post_process']
+
+ def __init__(self,
+ backbone='DarkNet',
+ neck='TTFFPN',
+ ttf_head='TTFHead',
+ post_process='BBoxPostProcess'):
+ super(TTFNet, self).__init__()
+ self.backbone = backbone
+ self.neck = neck
+ self.ttf_head = ttf_head
+ self.post_process = post_process
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ backbone = create(cfg['backbone'])
+
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ kwargs = {'input_shape': neck.out_shape}
+ ttf_head = create(cfg['ttf_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "ttf_head": ttf_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ body_feats = self.neck(body_feats)
+ hm, wh = self.ttf_head(body_feats)
+ if self.training:
+ return hm, wh
+ else:
+ bbox, bbox_num = self.post_process(hm, wh, self.inputs['im_shape'],
+ self.inputs['scale_factor'])
+ return bbox, bbox_num
+
+ def get_loss(self, ):
+ loss = {}
+ heatmap = self.inputs['ttf_heatmap']
+ box_target = self.inputs['ttf_box_target']
+ reg_weight = self.inputs['ttf_reg_weight']
+ hm, wh = self._forward()
+ head_loss = self.ttf_head.get_loss(hm, wh, heatmap, box_target,
+ reg_weight)
+ loss.update(head_loss)
+ total_loss = paddle.add_n(list(loss.values()))
+ loss.update({'loss': total_loss})
+ return loss
+
+ def get_pred(self):
+ bbox_pred, bbox_num = self._forward()
+ output = {
+ "bbox": bbox_pred,
+ "bbox_num": bbox_num,
+ }
+ return output
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/yolo.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/yolo.py
new file mode 100644
index 000000000..d5979e695
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/architectures/yolo.py
@@ -0,0 +1,124 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from ppdet.core.workspace import register, create
+from .meta_arch import BaseArch
+from ..post_process import JDEBBoxPostProcess
+
+__all__ = ['YOLOv3']
+
+
+@register
+class YOLOv3(BaseArch):
+ __category__ = 'architecture'
+ __shared__ = ['data_format']
+ __inject__ = ['post_process']
+
+ def __init__(self,
+ backbone='DarkNet',
+ neck='YOLOv3FPN',
+ yolo_head='YOLOv3Head',
+ post_process='BBoxPostProcess',
+ data_format='NCHW',
+ for_mot=False):
+ """
+ YOLOv3 network, see https://arxiv.org/abs/1804.02767
+
+ Args:
+ backbone (nn.Layer): backbone instance
+ neck (nn.Layer): neck instance
+ yolo_head (nn.Layer): anchor_head instance
+            post_process (object): `BBoxPostProcess` instance
+            data_format (str): data format, NCHW or NHWC
+            for_mot (bool): whether to return extra features for multi-object
+                tracking models; default False for pure object detection.
+ """
+ super(YOLOv3, self).__init__(data_format=data_format)
+ self.backbone = backbone
+ self.neck = neck
+ self.yolo_head = yolo_head
+ self.post_process = post_process
+ self.for_mot = for_mot
+ self.return_idx = isinstance(post_process, JDEBBoxPostProcess)
+
+ @classmethod
+ def from_config(cls, cfg, *args, **kwargs):
+ # backbone
+ backbone = create(cfg['backbone'])
+
+ # fpn
+ kwargs = {'input_shape': backbone.out_shape}
+ neck = create(cfg['neck'], **kwargs)
+
+ # head
+ kwargs = {'input_shape': neck.out_shape}
+ yolo_head = create(cfg['yolo_head'], **kwargs)
+
+ return {
+ 'backbone': backbone,
+ 'neck': neck,
+ "yolo_head": yolo_head,
+ }
+
+ def _forward(self):
+ body_feats = self.backbone(self.inputs)
+ neck_feats = self.neck(body_feats, self.for_mot)
+
+ if isinstance(neck_feats, dict):
+            assert self.for_mot
+ emb_feats = neck_feats['emb_feats']
+ neck_feats = neck_feats['yolo_feats']
+
+ if self.training:
+ yolo_losses = self.yolo_head(neck_feats, self.inputs)
+
+ if self.for_mot:
+ return {'det_losses': yolo_losses, 'emb_feats': emb_feats}
+ else:
+ return yolo_losses
+
+ else:
+ yolo_head_outs = self.yolo_head(neck_feats)
+
+ if self.for_mot:
+ boxes_idx, bbox, bbox_num, nms_keep_idx = self.post_process(
+ yolo_head_outs, self.yolo_head.mask_anchors)
+ output = {
+ 'bbox': bbox,
+ 'bbox_num': bbox_num,
+ 'boxes_idx': boxes_idx,
+ 'nms_keep_idx': nms_keep_idx,
+ 'emb_feats': emb_feats,
+ }
+ else:
+ if self.return_idx:
+ _, bbox, bbox_num, _ = self.post_process(
+ yolo_head_outs, self.yolo_head.mask_anchors)
+ else:
+ bbox, bbox_num = self.post_process(
+ yolo_head_outs, self.yolo_head.mask_anchors,
+ self.inputs['im_shape'], self.inputs['scale_factor'])
+ output = {'bbox': bbox, 'bbox_num': bbox_num}
+
+ return output
+
+ def get_loss(self):
+ return self._forward()
+
+ def get_pred(self):
+ return self._forward()
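+
+# Mode sketch (illustration only): one YOLOv3 graph serves two regimes.
+#   for_mot=False: post_process decodes boxes + NMS -> {'bbox', 'bbox_num'}
+#   for_mot=True:  the neck additionally emits 'emb_feats', and the JDE-style
+#                  post-process also returns boxes_idx/nms_keep_idx so a ReID
+#                  head can sample embeddings for the kept detections.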
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__init__.py
new file mode 100644
index 000000000..be5bb04d3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__init__.py
@@ -0,0 +1,23 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import utils
+from . import task_aligned_assigner
+from . import atss_assigner
+from . import simota_assigner
+
+from .utils import *
+from .task_aligned_assigner import *
+from .atss_assigner import *
+from .simota_assigner import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..ea40821c5
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/atss_assigner.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/atss_assigner.cpython-37.pyc
new file mode 100644
index 000000000..1cbaefccc
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/atss_assigner.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/simota_assigner.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/simota_assigner.cpython-37.pyc
new file mode 100644
index 000000000..359a50c2e
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/simota_assigner.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/task_aligned_assigner.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/task_aligned_assigner.cpython-37.pyc
new file mode 100644
index 000000000..55652436d
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/task_aligned_assigner.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/utils.cpython-37.pyc
new file mode 100644
index 000000000..e31eae2d2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/__pycache__/utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/atss_assigner.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/atss_assigner.py
new file mode 100644
index 000000000..43e6ae2ab
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/atss_assigner.py
@@ -0,0 +1,209 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ppdet.core.workspace import register
+from ..ops import iou_similarity
+from ..bbox_utils import bbox_center
+from .utils import (pad_gt, check_points_inside_bboxes, compute_max_iou_anchor,
+ compute_max_iou_gt)
+
+
+@register
+class ATSSAssigner(nn.Layer):
+ """Bridging the Gap Between Anchor-based and Anchor-free Detection
+ via Adaptive Training Sample Selection
+ """
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ topk=9,
+ num_classes=80,
+ force_gt_matching=False,
+ eps=1e-9):
+ super(ATSSAssigner, self).__init__()
+ self.topk = topk
+ self.num_classes = num_classes
+ self.force_gt_matching = force_gt_matching
+ self.eps = eps
+
+ def _gather_topk_pyramid(self, gt2anchor_distances, num_anchors_list,
+ pad_gt_mask):
+ pad_gt_mask = pad_gt_mask.tile([1, 1, self.topk]).astype(paddle.bool)
+ gt2anchor_distances_list = paddle.split(
+ gt2anchor_distances, num_anchors_list, axis=-1)
+ num_anchors_index = np.cumsum(num_anchors_list).tolist()
+ num_anchors_index = [0, ] + num_anchors_index[:-1]
+ is_in_topk_list = []
+ topk_idxs_list = []
+ for distances, anchors_index in zip(gt2anchor_distances_list,
+ num_anchors_index):
+ num_anchors = distances.shape[-1]
+ topk_metrics, topk_idxs = paddle.topk(
+ distances, self.topk, axis=-1, largest=False)
+ topk_idxs_list.append(topk_idxs + anchors_index)
+ topk_idxs = paddle.where(pad_gt_mask, topk_idxs,
+ paddle.zeros_like(topk_idxs))
+ is_in_topk = F.one_hot(topk_idxs, num_anchors).sum(axis=-2)
+ is_in_topk = paddle.where(is_in_topk > 1,
+ paddle.zeros_like(is_in_topk), is_in_topk)
+ is_in_topk_list.append(is_in_topk.astype(gt2anchor_distances.dtype))
+ is_in_topk_list = paddle.concat(is_in_topk_list, axis=-1)
+ topk_idxs_list = paddle.concat(topk_idxs_list, axis=-1)
+ return is_in_topk_list, topk_idxs_list
+
+ @paddle.no_grad()
+ def forward(self,
+ anchor_bboxes,
+ num_anchors_list,
+ gt_labels,
+ gt_bboxes,
+ bg_index,
+ gt_scores=None):
+ r"""This code is based on
+ https://github.com/fcjian/TOOD/blob/master/mmdet/core/bbox/assigners/atss_assigner.py
+
+ The assignment is done in the following steps:
+ 1. compute iou between all bboxes (bboxes of all pyramid levels) and gt
+ 2. compute center distance between all bboxes and gt
+ 3. on each pyramid level, for each gt, select the k bboxes whose centers
+ are closest to the gt center, so that k*l bboxes are selected in total
+ as candidates for each gt
+ 4. get the corresponding iou for these candidates, and compute the
+ mean and std, then set mean + std as the iou threshold
+ 5. select candidates whose iou is greater than or equal to
+ the threshold as positive
+ 6. limit the positive samples' centers to lie inside the gt
+ 7. if an anchor box is assigned to multiple gts, the one with the
+ highest iou is selected.
+ Args:
+ anchor_bboxes (Tensor, float32): pre-defined anchors, shape(L, 4),
+ "xmin, ymin, xmax, ymax" format
+ num_anchors_list (List): number of anchors in each level
+ gt_labels (Tensor|List[Tensor], int64): Label of gt_bboxes, shape(B, n, 1)
+ gt_bboxes (Tensor|List[Tensor], float32): Ground truth bboxes, shape(B, n, 4)
+ bg_index (int): background index
+ gt_scores (Tensor|List[Tensor]|None, float32): Score of gt_bboxes,
+ shape(B, n, 1); if None, it will be initialized with the one-hot label
+ Returns:
+ assigned_labels (Tensor): (B, L)
+ assigned_bboxes (Tensor): (B, L, 4)
+ assigned_scores (Tensor): (B, L, C)
+ """
+ gt_labels, gt_bboxes, pad_gt_scores, pad_gt_mask = pad_gt(
+ gt_labels, gt_bboxes, gt_scores)
+ assert gt_labels.ndim == gt_bboxes.ndim and \
+ gt_bboxes.ndim == 3
+
+ num_anchors, _ = anchor_bboxes.shape
+ batch_size, num_max_boxes, _ = gt_bboxes.shape
+
+ # negative batch
+ if num_max_boxes == 0:
+ assigned_labels = paddle.full([batch_size, num_anchors], bg_index)
+ assigned_bboxes = paddle.zeros([batch_size, num_anchors, 4])
+ assigned_scores = paddle.zeros(
+ [batch_size, num_anchors, self.num_classes])
+ return assigned_labels, assigned_bboxes, assigned_scores
+
+ # 1. compute iou between gt and anchor bbox, [B, n, L]
+ ious = iou_similarity(gt_bboxes.reshape([-1, 4]), anchor_bboxes)
+ ious = ious.reshape([batch_size, -1, num_anchors])
+
+ # 2. compute center distance between all anchors and gt, [B, n, L]
+ gt_centers = bbox_center(gt_bboxes.reshape([-1, 4])).unsqueeze(1)
+ anchor_centers = bbox_center(anchor_bboxes)
+ gt2anchor_distances = (gt_centers - anchor_centers.unsqueeze(0)) \
+ .norm(2, axis=-1).reshape([batch_size, -1, num_anchors])
+
+ # 3. on each pyramid level, selecting topk closest candidates
+ # based on the center distance, [B, n, L]
+ is_in_topk, topk_idxs = self._gather_topk_pyramid(
+ gt2anchor_distances, num_anchors_list, pad_gt_mask)
+
+ # 4. get the corresponding iou for these candidates, and compute the
+ # mean and std; 5. set mean + std as the iou threshold
+ iou_candidates = ious * is_in_topk
+ iou_threshold = paddle.index_sample(
+ iou_candidates.flatten(stop_axis=-2),
+ topk_idxs.flatten(stop_axis=-2))
+ iou_threshold = iou_threshold.reshape([batch_size, num_max_boxes, -1])
+ iou_threshold = iou_threshold.mean(axis=-1, keepdim=True) + \
+ iou_threshold.std(axis=-1, keepdim=True)
+ is_in_topk = paddle.where(
+ iou_candidates > iou_threshold.tile([1, 1, num_anchors]),
+ is_in_topk, paddle.zeros_like(is_in_topk))
+
+ # 6. check the positive sample's center in gt, [B, n, L]
+ is_in_gts = check_points_inside_bboxes(anchor_centers, gt_bboxes)
+
+ # select positive sample, [B, n, L]
+ mask_positive = is_in_topk * is_in_gts * pad_gt_mask
+
+ # 7. if an anchor box is assigned to multiple gts,
+ # the one with the highest iou will be selected.
+ mask_positive_sum = mask_positive.sum(axis=-2)
+ if mask_positive_sum.max() > 1:
+ mask_multiple_gts = (mask_positive_sum.unsqueeze(1) > 1).tile(
+ [1, num_max_boxes, 1])
+ is_max_iou = compute_max_iou_anchor(ious)
+ mask_positive = paddle.where(mask_multiple_gts, is_max_iou,
+ mask_positive)
+ mask_positive_sum = mask_positive.sum(axis=-2)
+ # 8. make sure every gt_bbox is matched to an anchor
+ if self.force_gt_matching:
+ is_max_iou = compute_max_iou_gt(ious) * pad_gt_mask
+ mask_max_iou = (is_max_iou.sum(-2, keepdim=True) == 1).tile(
+ [1, num_max_boxes, 1])
+ mask_positive = paddle.where(mask_max_iou, is_max_iou,
+ mask_positive)
+ mask_positive_sum = mask_positive.sum(axis=-2)
+ assigned_gt_index = mask_positive.argmax(axis=-2)
+ assert mask_positive_sum.max() == 1, \
+ ("one anchor just assign one gt, but received not equals 1. "
+ "Received: %f" % mask_positive_sum.max().item())
+
+ # assigned target
+ batch_ind = paddle.arange(
+ end=batch_size, dtype=gt_labels.dtype).unsqueeze(-1)
+ assigned_gt_index = assigned_gt_index + batch_ind * num_max_boxes
+ assigned_labels = paddle.gather(
+ gt_labels.flatten(), assigned_gt_index.flatten(), axis=0)
+ assigned_labels = assigned_labels.reshape([batch_size, num_anchors])
+ assigned_labels = paddle.where(
+ mask_positive_sum > 0, assigned_labels,
+ paddle.full_like(assigned_labels, bg_index))
+
+ assigned_bboxes = paddle.gather(
+ gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0)
+ assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4])
+
+ assigned_scores = F.one_hot(assigned_labels, self.num_classes)
+ if gt_scores is not None:
+ gather_scores = paddle.gather(
+ pad_gt_scores.flatten(), assigned_gt_index.flatten(), axis=0)
+ gather_scores = gather_scores.reshape([batch_size, num_anchors])
+ gather_scores = paddle.where(mask_positive_sum > 0, gather_scores,
+ paddle.zeros_like(gather_scores))
+ assigned_scores *= gather_scores.unsqueeze(-1)
+
+ return assigned_labels, assigned_bboxes, assigned_scores
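+
+# A minimal sketch of a standalone call (all shapes and values below are
+# assumptions for illustration, not taken from a real config):
+#
+#   assigner = ATSSAssigner(topk=9, num_classes=80)
+#   anchors = paddle.rand([100, 4])               # L = 100 anchors, xyxy
+#   gt_labels = paddle.randint(0, 80, [2, 5, 1])  # B = 2, n = 5
+#   gt_bboxes = paddle.rand([2, 5, 4])
+#   labels, bboxes, scores = assigner(
+#       anchors, [100], gt_labels, gt_bboxes, bg_index=80)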
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/simota_assigner.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/simota_assigner.py
new file mode 100644
index 000000000..4b34027e3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/simota_assigner.py
@@ -0,0 +1,262 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/assigners/sim_ota_assigner.py
+
+import paddle
+import numpy as np
+import paddle.nn.functional as F
+
+from ppdet.modeling.losses.varifocal_loss import varifocal_loss
+from ppdet.modeling.bbox_utils import batch_bbox_overlaps
+from ppdet.core.workspace import register
+
+
+@register
+class SimOTAAssigner(object):
+ """Computes matching between predictions and ground truth.
+ Args:
+ center_radius (int | float, optional): radius (in strides) of the
+ gt center region used to judge whether a prior is in the center. Default 2.5.
+ candidate_topk (int, optional): number of top-iou candidates used
+ to compute the dynamic-k for each gt. Default 10.
+ iou_weight (int | float, optional): The scale factor for regression
+ iou cost. Default 3.0.
+ cls_weight (int | float, optional): The scale factor for classification
+ cost. Default 1.0.
+ num_classes (int): The num_classes of dataset.
+ use_vfl (bool): whether to use varifocal_loss when calculating the cost matrix.
+ """
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ center_radius=2.5,
+ candidate_topk=10,
+ iou_weight=3.0,
+ cls_weight=1.0,
+ num_classes=80,
+ use_vfl=True):
+ self.center_radius = center_radius
+ self.candidate_topk = candidate_topk
+ self.iou_weight = iou_weight
+ self.cls_weight = cls_weight
+ self.num_classes = num_classes
+ self.use_vfl = use_vfl
+
+ def get_in_gt_and_in_center_info(self, flatten_center_and_stride,
+ gt_bboxes):
+ num_gt = gt_bboxes.shape[0]
+
+ flatten_x = flatten_center_and_stride[:, 0].unsqueeze(1).tile(
+ [1, num_gt])
+ flatten_y = flatten_center_and_stride[:, 1].unsqueeze(1).tile(
+ [1, num_gt])
+ flatten_stride_x = flatten_center_and_stride[:, 2].unsqueeze(1).tile(
+ [1, num_gt])
+ flatten_stride_y = flatten_center_and_stride[:, 3].unsqueeze(1).tile(
+ [1, num_gt])
+
+ # whether prior centers are in gt bboxes, shape: [n_center, n_gt]
+ l_ = flatten_x - gt_bboxes[:, 0]
+ t_ = flatten_y - gt_bboxes[:, 1]
+ r_ = gt_bboxes[:, 2] - flatten_x
+ b_ = gt_bboxes[:, 3] - flatten_y
+
+ deltas = paddle.stack([l_, t_, r_, b_], axis=1)
+ is_in_gts = deltas.min(axis=1) > 0
+ is_in_gts_all = is_in_gts.sum(axis=1) > 0
+
+ # whether prior centers are within the gt center regions
+ gt_center_xs = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0
+ gt_center_ys = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0
+ ct_bound_l = gt_center_xs - self.center_radius * flatten_stride_x
+ ct_bound_t = gt_center_ys - self.center_radius * flatten_stride_y
+ ct_bound_r = gt_center_xs + self.center_radius * flatten_stride_x
+ ct_bound_b = gt_center_ys + self.center_radius * flatten_stride_y
+
+ cl_ = flatten_x - ct_bound_l
+ ct_ = flatten_y - ct_bound_t
+ cr_ = ct_bound_r - flatten_x
+ cb_ = ct_bound_b - flatten_y
+
+ ct_deltas = paddle.stack([cl_, ct_, cr_, cb_], axis=1)
+ is_in_cts = ct_deltas.min(axis=1) > 0
+ is_in_cts_all = is_in_cts.sum(axis=1) > 0
+
+ # in any of gts or gt centers, shape: [n_center]
+ is_in_gts_or_centers_all = paddle.logical_or(is_in_gts_all,
+ is_in_cts_all)
+
+ is_in_gts_or_centers_all_inds = paddle.nonzero(
+ is_in_gts_or_centers_all).squeeze(1)
+
+ # both in gts and gt centers, shape: [num_fg, num_gt]
+ is_in_gts_and_centers = paddle.logical_and(
+ paddle.gather(
+ is_in_gts.cast('int'), is_in_gts_or_centers_all_inds,
+ axis=0).cast('bool'),
+ paddle.gather(
+ is_in_cts.cast('int'), is_in_gts_or_centers_all_inds,
+ axis=0).cast('bool'))
+ return is_in_gts_or_centers_all, is_in_gts_or_centers_all_inds, is_in_gts_and_centers
+
+ def dynamic_k_matching(self, cost_matrix, pairwise_ious, num_gt):
+ match_matrix = np.zeros_like(cost_matrix.numpy())
+ # select candidate topk ious for dynamic-k calculation
+ topk_ious, _ = paddle.topk(pairwise_ious, self.candidate_topk, axis=0)
+ # calculate dynamic k for each gt
+ dynamic_ks = paddle.clip(topk_ious.sum(0).cast('int'), min=1)
+ for gt_idx in range(num_gt):
+ _, pos_idx = paddle.topk(
+ cost_matrix[:, gt_idx], k=dynamic_ks[gt_idx], largest=False)
+ match_matrix[:, gt_idx][pos_idx.numpy()] = 1.0
+
+ del topk_ious, dynamic_ks, pos_idx
+
+ # handle points matched to more than one gt
+ extra_match_gts_mask = match_matrix.sum(1) > 1
+ if extra_match_gts_mask.sum() > 0:
+ cost_matrix = cost_matrix.numpy()
+ cost_argmin = np.argmin(
+ cost_matrix[extra_match_gts_mask, :], axis=1)
+ match_matrix[extra_match_gts_mask, :] *= 0.0
+ match_matrix[extra_match_gts_mask, cost_argmin] = 1.0
+ # get foreground mask
+ match_fg_mask_inmatrix = match_matrix.sum(1) > 0
+ match_gt_inds_to_fg = match_matrix[match_fg_mask_inmatrix, :].argmax(1)
+
+ return match_gt_inds_to_fg, match_fg_mask_inmatrix
+
+ def get_sample(self, assign_gt_inds, gt_bboxes):
+ pos_inds = np.unique(np.nonzero(assign_gt_inds > 0)[0])
+ neg_inds = np.unique(np.nonzero(assign_gt_inds == 0)[0])
+ pos_assigned_gt_inds = assign_gt_inds[pos_inds] - 1
+
+ if gt_bboxes.size == 0:
+ # hack for index error case
+ assert pos_assigned_gt_inds.size == 0
+ pos_gt_bboxes = np.empty_like(gt_bboxes).reshape(-1, 4)
+ else:
+ if len(gt_bboxes.shape) < 2:
+ gt_bboxes = gt_bboxes.reshape(-1, 4)
+ pos_gt_bboxes = gt_bboxes[pos_assigned_gt_inds, :]
+ return pos_inds, neg_inds, pos_gt_bboxes, pos_assigned_gt_inds
+
+ def __call__(self,
+ flatten_cls_pred_scores,
+ flatten_center_and_stride,
+ flatten_bboxes,
+ gt_bboxes,
+ gt_labels,
+ eps=1e-7):
+ """Assign gt to priors using SimOTA.
+ TODO: add comment.
+ Returns:
+ assign_result: The assigned result.
+ """
+ num_gt = gt_bboxes.shape[0]
+ num_bboxes = flatten_bboxes.shape[0]
+
+ if num_gt == 0 or num_bboxes == 0:
+ # No ground truth or boxes
+ label = np.ones([num_bboxes], dtype=np.int64) * self.num_classes
+ label_weight = np.ones([num_bboxes], dtype=np.float32)
+ bbox_target = np.zeros_like(flatten_center_and_stride)
+ return 0, label, label_weight, bbox_target
+
+ is_in_gts_or_centers_all, is_in_gts_or_centers_all_inds, is_in_boxes_and_center = self.get_in_gt_and_in_center_info(
+ flatten_center_and_stride, gt_bboxes)
+
+ # bboxes and scores to calculate matrix
+ valid_flatten_bboxes = flatten_bboxes[is_in_gts_or_centers_all_inds]
+ valid_cls_pred_scores = flatten_cls_pred_scores[
+ is_in_gts_or_centers_all_inds]
+ num_valid_bboxes = valid_flatten_bboxes.shape[0]
+
+ pairwise_ious = batch_bbox_overlaps(valid_flatten_bboxes,
+ gt_bboxes) # [num_points,num_gts]
+ if self.use_vfl:
+ gt_vfl_labels = gt_labels.squeeze(-1).unsqueeze(0).tile(
+ [num_valid_bboxes, 1]).reshape([-1])
+ valid_pred_scores = valid_cls_pred_scores.unsqueeze(1).tile(
+ [1, num_gt, 1]).reshape([-1, self.num_classes])
+ vfl_score = np.zeros(valid_pred_scores.shape)
+ vfl_score[np.arange(0, vfl_score.shape[0]), gt_vfl_labels.numpy(
+ )] = pairwise_ious.reshape([-1])
+ vfl_score = paddle.to_tensor(vfl_score)
+ losses_vfl = varifocal_loss(
+ valid_pred_scores, vfl_score,
+ use_sigmoid=False).reshape([num_valid_bboxes, num_gt])
+ losses_giou = batch_bbox_overlaps(
+ valid_flatten_bboxes, gt_bboxes, mode='giou')
+ cost_matrix = (
+ losses_vfl * self.cls_weight + losses_giou * self.iou_weight +
+ paddle.logical_not(is_in_boxes_and_center).cast('float32') *
+ 100000000)
+ else:
+ iou_cost = -paddle.log(pairwise_ious + eps)
+ gt_onehot_label = (F.one_hot(
+ gt_labels.squeeze(-1).cast(paddle.int64),
+ flatten_cls_pred_scores.shape[-1]).cast('float32').unsqueeze(0)
+ .tile([num_valid_bboxes, 1, 1]))
+
+ valid_pred_scores = valid_cls_pred_scores.unsqueeze(1).tile(
+ [1, num_gt, 1])
+ cls_cost = F.binary_cross_entropy(
+ valid_pred_scores, gt_onehot_label, reduction='none').sum(-1)
+
+ cost_matrix = (
+ cls_cost * self.cls_weight + iou_cost * self.iou_weight +
+ paddle.logical_not(is_in_boxes_and_center).cast('float32') *
+ 100000000)
+
+ match_gt_inds_to_fg, match_fg_mask_inmatrix = \
+ self.dynamic_k_matching(
+ cost_matrix, pairwise_ious, num_gt)
+
+ # sample and assign results
+ assigned_gt_inds = np.zeros([num_bboxes], dtype=np.int64)
+ match_fg_mask_inall = np.zeros_like(assigned_gt_inds)
+ match_fg_mask_inall[is_in_gts_or_centers_all.numpy(
+ )] = match_fg_mask_inmatrix
+
+ assigned_gt_inds[match_fg_mask_inall.astype(
+ bool)] = match_gt_inds_to_fg + 1
+
+ pos_inds, neg_inds, pos_gt_bboxes, pos_assigned_gt_inds \
+ = self.get_sample(assigned_gt_inds, gt_bboxes.numpy())
+
+ bbox_target = np.zeros_like(flatten_bboxes)
+ bbox_weight = np.zeros_like(flatten_bboxes)
+ label = np.ones([num_bboxes], dtype=np.int64) * self.num_classes
+ label_weight = np.zeros([num_bboxes], dtype=np.float32)
+
+ if len(pos_inds) > 0:
+ gt_labels = gt_labels.numpy()
+ pos_bbox_targets = pos_gt_bboxes
+ bbox_target[pos_inds, :] = pos_bbox_targets
+ bbox_weight[pos_inds, :] = 1.0
+ if not np.any(gt_labels):
+ label[pos_inds] = 0
+ else:
+ label[pos_inds] = gt_labels.squeeze(-1)[pos_assigned_gt_inds]
+
+ label_weight[pos_inds] = 1.0
+ if len(neg_inds) > 0:
+ label_weight[neg_inds] = 1.0
+
+ pos_num = max(pos_inds.size, 1)
+
+ return pos_num, label, label_weight, bbox_target
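+
+# Sketch of the dynamic-k rule implemented in dynamic_k_matching (numbers
+# are assumed for illustration): for each gt, its top-10 candidate IoUs are
+# summed and clipped to >= 1; e.g. IoUs summing to 3.7 yield dynamic_k = 3,
+# so the 3 lowest-cost predictions are matched to that gt.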
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/task_aligned_assigner.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/task_aligned_assigner.py
new file mode 100644
index 000000000..7e31c8afc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/task_aligned_assigner.py
@@ -0,0 +1,158 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ppdet.core.workspace import register
+from ..bbox_utils import iou_similarity
+from .utils import (pad_gt, gather_topk_anchors, check_points_inside_bboxes,
+ compute_max_iou_anchor)
+
+
+@register
+class TaskAlignedAssigner(nn.Layer):
+ """TOOD: Task-aligned One-stage Object Detection
+ """
+
+ def __init__(self, topk=13, alpha=1.0, beta=6.0, eps=1e-9):
+ super(TaskAlignedAssigner, self).__init__()
+ self.topk = topk
+ self.alpha = alpha
+ self.beta = beta
+ self.eps = eps
+
+ @paddle.no_grad()
+ def forward(self,
+ pred_scores,
+ pred_bboxes,
+ anchor_points,
+ gt_labels,
+ gt_bboxes,
+ bg_index,
+ gt_scores=None):
+ r"""This code is based on
+ https://github.com/fcjian/TOOD/blob/master/mmdet/core/bbox/assigners/task_aligned_assigner.py
+
+ The assignment is done in the following steps:
+ 1. compute the alignment metric between all bboxes (bboxes of all pyramid levels) and gt
+ 2. select top-k bboxes as candidates for each gt
+ 3. limit the positive sample's center in gt (because the anchor-free detector
+ can only predict positive distances)
+ 4. if an anchor box is assigned to multiple gts, the one with the
+ highest iou is selected.
+ Args:
+ pred_scores (Tensor, float32): predicted class probability, shape(B, L, C)
+ pred_bboxes (Tensor, float32): predicted bounding boxes, shape(B, L, 4)
+ anchor_points (Tensor, float32): pre-defined anchors, shape(L, 2), "cxcy" format
+ gt_labels (Tensor|List[Tensor], int64): Label of gt_bboxes, shape(B, n, 1)
+ gt_bboxes (Tensor|List[Tensor], float32): Ground truth bboxes, shape(B, n, 4)
+ bg_index (int): background index
+ gt_scores (Tensor|List[Tensor]|None, float32): Score of gt_bboxes,
+ shape(B, n, 1); if None, it will be initialized with the one-hot label
+ Returns:
+ assigned_labels (Tensor): (B, L)
+ assigned_bboxes (Tensor): (B, L, 4)
+ assigned_scores (Tensor): (B, L, C)
+ """
+ assert pred_scores.ndim == pred_bboxes.ndim
+
+ gt_labels, gt_bboxes, pad_gt_scores, pad_gt_mask = pad_gt(
+ gt_labels, gt_bboxes, gt_scores)
+ assert gt_labels.ndim == gt_bboxes.ndim and \
+ gt_bboxes.ndim == 3
+
+ batch_size, num_anchors, num_classes = pred_scores.shape
+ _, num_max_boxes, _ = gt_bboxes.shape
+
+ # negative batch
+ if num_max_boxes == 0:
+ assigned_labels = paddle.full([batch_size, num_anchors], bg_index)
+ assigned_bboxes = paddle.zeros([batch_size, num_anchors, 4])
+ assigned_scores = paddle.zeros(
+ [batch_size, num_anchors, num_classes])
+ return assigned_labels, assigned_bboxes, assigned_scores
+
+ # compute iou between gt and pred bbox, [B, n, L]
+ ious = iou_similarity(gt_bboxes, pred_bboxes)
+ # gather pred bboxes class score
+ pred_scores = pred_scores.transpose([0, 2, 1])
+ batch_ind = paddle.arange(
+ end=batch_size, dtype=gt_labels.dtype).unsqueeze(-1)
+ gt_labels_ind = paddle.stack(
+ [batch_ind.tile([1, num_max_boxes]), gt_labels.squeeze(-1)],
+ axis=-1)
+ bbox_cls_scores = paddle.gather_nd(pred_scores, gt_labels_ind)
+ # compute alignment metrics, [B, n, L]
+ alignment_metrics = bbox_cls_scores.pow(self.alpha) * ious.pow(
+ self.beta)
+
+ # check the positive sample's center in gt, [B, n, L]
+ is_in_gts = check_points_inside_bboxes(anchor_points, gt_bboxes)
+
+ # select topk largest alignment metrics pred bbox as candidates
+ # for each gt, [B, n, L]
+ is_in_topk = gather_topk_anchors(
+ alignment_metrics * is_in_gts,
+ self.topk,
+ topk_mask=pad_gt_mask.tile([1, 1, self.topk]).astype(paddle.bool))
+
+ # select positive sample, [B, n, L]
+ mask_positive = is_in_topk * is_in_gts * pad_gt_mask
+
+ # if an anchor box is assigned to multiple gts,
+ # the one with the highest iou will be selected, [B, n, L]
+ mask_positive_sum = mask_positive.sum(axis=-2)
+ if mask_positive_sum.max() > 1:
+ mask_multiple_gts = (mask_positive_sum.unsqueeze(1) > 1).tile(
+ [1, num_max_boxes, 1])
+ is_max_iou = compute_max_iou_anchor(ious)
+ mask_positive = paddle.where(mask_multiple_gts, is_max_iou,
+ mask_positive)
+ mask_positive_sum = mask_positive.sum(axis=-2)
+ assigned_gt_index = mask_positive.argmax(axis=-2)
+ assert mask_positive_sum.max() == 1, \
+ ("one anchor just assign one gt, but received not equals 1. "
+ "Received: %f" % mask_positive_sum.max().item())
+
+ # assigned target
+ assigned_gt_index = assigned_gt_index + batch_ind * num_max_boxes
+ assigned_labels = paddle.gather(
+ gt_labels.flatten(), assigned_gt_index.flatten(), axis=0)
+ assigned_labels = assigned_labels.reshape([batch_size, num_anchors])
+ assigned_labels = paddle.where(
+ mask_positive_sum > 0, assigned_labels,
+ paddle.full_like(assigned_labels, bg_index))
+
+ assigned_bboxes = paddle.gather(
+ gt_bboxes.reshape([-1, 4]), assigned_gt_index.flatten(), axis=0)
+ assigned_bboxes = assigned_bboxes.reshape([batch_size, num_anchors, 4])
+
+ assigned_scores = F.one_hot(assigned_labels, num_classes)
+ # rescale alignment metrics
+ alignment_metrics *= mask_positive
+ max_metrics_per_instance = alignment_metrics.max(axis=-1, keepdim=True)
+ max_ious_per_instance = (ious * mask_positive).max(axis=-1,
+ keepdim=True)
+ alignment_metrics = alignment_metrics / (
+ max_metrics_per_instance + self.eps) * max_ious_per_instance
+ alignment_metrics = alignment_metrics.max(-2).unsqueeze(-1)
+ assigned_scores = assigned_scores * alignment_metrics
+
+ return assigned_labels, assigned_bboxes, assigned_scores
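+
+# The alignment metric is s**alpha * u**beta with classification score s
+# and IoU u (alpha=1.0, beta=6.0 by default); e.g. with assumed s = 0.8
+# and u = 0.9 the metric is 0.8 * 0.9**6 ~= 0.425, so localization quality
+# dominates the ranking of candidates.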
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/utils.py
new file mode 100644
index 000000000..3448d9d8a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/assigners/utils.py
@@ -0,0 +1,149 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn.functional as F
+
+
+def pad_gt(gt_labels, gt_bboxes, gt_scores=None):
+ r""" Pad 0 in gt_labels and gt_bboxes.
+ Args:
+ gt_labels (Tensor|List[Tensor], int64): Label of gt_bboxes,
+ shape is [B, n, 1] or [[n_1, 1], [n_2, 1], ...], here n = sum(n_i)
+ gt_bboxes (Tensor|List[Tensor], float32): Ground truth bboxes,
+ shape is [B, n, 4] or [[n_1, 4], [n_2, 4], ...], here n = sum(n_i)
+ gt_scores (Tensor|List[Tensor]|None, float32): Score of gt_bboxes,
+ shape is [B, n, 1] or [[n_1, 1], [n_2, 1], ...], here n = sum(n_i)
+ Returns:
+ pad_gt_labels (Tensor, int64): shape[B, n, 1]
+ pad_gt_bboxes (Tensor, float32): shape[B, n, 4]
+ pad_gt_scores (Tensor, float32): shape[B, n, 1]
+ pad_gt_mask (Tensor, float32): shape[B, n, 1], 1 means bbox, 0 means no bbox
+ """
+ if isinstance(gt_labels, paddle.Tensor) and isinstance(gt_bboxes,
+ paddle.Tensor):
+ assert gt_labels.ndim == gt_bboxes.ndim and \
+ gt_bboxes.ndim == 3
+ pad_gt_mask = (
+ gt_bboxes.sum(axis=-1, keepdim=True) > 0).astype(gt_bboxes.dtype)
+ if gt_scores is None:
+ gt_scores = pad_gt_mask.clone()
+ assert gt_labels.ndim == gt_scores.ndim
+
+ return gt_labels, gt_bboxes, gt_scores, pad_gt_mask
+ elif isinstance(gt_labels, list) and isinstance(gt_bboxes, list):
+ assert len(gt_labels) == len(gt_bboxes), \
+ 'The number of `gt_labels` and `gt_bboxes` is not equal. '
+ num_max_boxes = max([len(a) for a in gt_bboxes])
+ batch_size = len(gt_bboxes)
+ # pad label and bbox
+ pad_gt_labels = paddle.zeros(
+ [batch_size, num_max_boxes, 1], dtype=gt_labels[0].dtype)
+ pad_gt_bboxes = paddle.zeros(
+ [batch_size, num_max_boxes, 4], dtype=gt_bboxes[0].dtype)
+ pad_gt_scores = paddle.zeros(
+ [batch_size, num_max_boxes, 1], dtype=gt_bboxes[0].dtype)
+ pad_gt_mask = paddle.zeros(
+ [batch_size, num_max_boxes, 1], dtype=gt_bboxes[0].dtype)
+ for i, (label, bbox) in enumerate(zip(gt_labels, gt_bboxes)):
+ if len(label) > 0 and len(bbox) > 0:
+ pad_gt_labels[i, :len(label)] = label
+ pad_gt_bboxes[i, :len(bbox)] = bbox
+ pad_gt_mask[i, :len(bbox)] = 1.
+ if gt_scores is not None:
+ pad_gt_scores[i, :len(gt_scores[i])] = gt_scores[i]
+ if gt_scores is None:
+ pad_gt_scores = pad_gt_mask.clone()
+ return pad_gt_labels, pad_gt_bboxes, pad_gt_scores, pad_gt_mask
+ else:
+ raise ValueError('The input `gt_labels` or `gt_bboxes` is invalid! ')
+
+
+def gather_topk_anchors(metrics, topk, largest=True, topk_mask=None, eps=1e-9):
+ r"""
+ Args:
+ metrics (Tensor, float32): shape[B, n, L], n: num_gts, L: num_anchors
+ topk (int): The number of top elements to look for along the axis.
+ largest (bool): if True, select the k largest elements (descending
+ order); otherwise the k smallest (ascending order). Default: True
+ topk_mask (Tensor, bool|None): shape[B, n, topk], ignore bbox mask,
+ Default: None
+ eps (float): Default: 1e-9
+ Returns:
+ is_in_topk (Tensor, float32): shape[B, n, L], value=1. means selected
+ """
+ num_anchors = metrics.shape[-1]
+ topk_metrics, topk_idxs = paddle.topk(
+ metrics, topk, axis=-1, largest=largest)
+ if topk_mask is None:
+ topk_mask = (topk_metrics.max(axis=-1, keepdim=True) > eps).tile(
+ [1, 1, topk])
+ topk_idxs = paddle.where(topk_mask, topk_idxs, paddle.zeros_like(topk_idxs))
+ is_in_topk = F.one_hot(topk_idxs, num_anchors).sum(axis=-2)
+ is_in_topk = paddle.where(is_in_topk > 1,
+ paddle.zeros_like(is_in_topk), is_in_topk)
+ return is_in_topk.astype(metrics.dtype)
+
+
+def check_points_inside_bboxes(points, bboxes, eps=1e-9):
+ r"""
+ Args:
+ points (Tensor, float32): shape[L, 2], "xy" format, L: num_anchors
+ bboxes (Tensor, float32): shape[B, n, 4], "xmin, ymin, xmax, ymax" format
+ eps (float): Default: 1e-9
+ Returns:
+ is_in_bboxes (Tensor, float32): shape[B, n, L], value=1. means selected
+ """
+ points = points.unsqueeze([0, 1])
+ x, y = points.chunk(2, axis=-1)
+ xmin, ymin, xmax, ymax = bboxes.unsqueeze(2).chunk(4, axis=-1)
+ l = x - xmin
+ t = y - ymin
+ r = xmax - x
+ b = ymax - y
+ bbox_ltrb = paddle.concat([l, t, r, b], axis=-1)
+ return (bbox_ltrb.min(axis=-1) > eps).astype(bboxes.dtype)
+
+
+def compute_max_iou_anchor(ious):
+ r"""
+ For each anchor, find the GT with the largest IOU.
+ Args:
+ ious (Tensor, float32): shape[B, n, L], n: num_gts, L: num_anchors
+ Returns:
+ is_max_iou (Tensor, float32): shape[B, n, L], value=1. means selected
+ """
+ num_max_boxes = ious.shape[-2]
+ max_iou_index = ious.argmax(axis=-2)
+ is_max_iou = F.one_hot(max_iou_index, num_max_boxes).transpose([0, 2, 1])
+ return is_max_iou.astype(ious.dtype)
+
+
+def compute_max_iou_gt(ious):
+ r"""
+ For each GT, find the anchor with the largest IOU.
+ Args:
+ ious (Tensor, float32): shape[B, n, L], n: num_gts, L: num_anchors
+ Returns:
+ is_max_iou (Tensor, float32): shape[B, n, L], value=1. means selected
+ """
+ num_anchors = ious.shape[-1]
+ max_iou_index = ious.argmax(axis=-1)
+ is_max_iou = F.one_hot(max_iou_index, num_anchors)
+ return is_max_iou.astype(ious.dtype)
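+
+# A minimal sketch of pad_gt with list inputs (shapes assumed for
+# illustration): two images with 1 and 3 boxes are padded to n = 3 and
+# pad_gt_mask marks the real boxes:
+#
+#   labels = [paddle.zeros([1, 1], 'int64'), paddle.zeros([3, 1], 'int64')]
+#   boxes = [paddle.rand([1, 4]), paddle.rand([3, 4])]
+#   pl, pb, ps, mask = pad_gt(labels, boxes)  # pl: [2, 3, 1], pb: [2, 3, 4]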
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__init__.py
new file mode 100644
index 000000000..3f415e6a5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__init__.py
@@ -0,0 +1,49 @@
+# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import vgg
+from . import resnet
+from . import darknet
+from . import mobilenet_v1
+from . import mobilenet_v3
+from . import hrnet
+from . import lite_hrnet
+from . import blazenet
+from . import ghostnet
+from . import senet
+from . import res2net
+from . import dla
+from . import shufflenet_v2
+from . import swin_transformer
+from . import lcnet
+from . import hardnet
+from . import esnet
+
+from .vgg import *
+from .resnet import *
+from .darknet import *
+from .mobilenet_v1 import *
+from .mobilenet_v3 import *
+from .hrnet import *
+from .lite_hrnet import *
+from .blazenet import *
+from .ghostnet import *
+from .senet import *
+from .res2net import *
+from .dla import *
+from .shufflenet_v2 import *
+from .swin_transformer import *
+from .lcnet import *
+from .hardnet import *
+from .esnet import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..065c620ad
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/blazenet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/blazenet.cpython-37.pyc
new file mode 100644
index 000000000..afc60e84c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/blazenet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/darknet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/darknet.cpython-37.pyc
new file mode 100644
index 000000000..626ca3c65
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/darknet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/dla.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/dla.cpython-37.pyc
new file mode 100644
index 000000000..2319927be
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/dla.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/esnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/esnet.cpython-37.pyc
new file mode 100644
index 000000000..dd0b7cd16
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/esnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/ghostnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/ghostnet.cpython-37.pyc
new file mode 100644
index 000000000..630894564
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/ghostnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hardnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hardnet.cpython-37.pyc
new file mode 100644
index 000000000..085823a44
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hardnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hrnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hrnet.cpython-37.pyc
new file mode 100644
index 000000000..c610b5d31
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/hrnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lcnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lcnet.cpython-37.pyc
new file mode 100644
index 000000000..c4294309c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lcnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lite_hrnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lite_hrnet.cpython-37.pyc
new file mode 100644
index 000000000..abbfe056c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/lite_hrnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v1.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v1.cpython-37.pyc
new file mode 100644
index 000000000..5c53f4dd3
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v1.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v3.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v3.cpython-37.pyc
new file mode 100644
index 000000000..4b589de4f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/mobilenet_v3.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/name_adapter.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/name_adapter.cpython-37.pyc
new file mode 100644
index 000000000..8654f5ebe
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/name_adapter.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/res2net.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/res2net.cpython-37.pyc
new file mode 100644
index 000000000..a418868ba
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/res2net.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/resnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/resnet.cpython-37.pyc
new file mode 100644
index 000000000..ba70f23d8
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/resnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/senet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/senet.cpython-37.pyc
new file mode 100644
index 000000000..4fe1193c7
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/senet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/shufflenet_v2.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/shufflenet_v2.cpython-37.pyc
new file mode 100644
index 000000000..7fa8928fe
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/shufflenet_v2.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/swin_transformer.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/swin_transformer.cpython-37.pyc
new file mode 100644
index 000000000..c0bbf2d3e
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/swin_transformer.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/vgg.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/vgg.cpython-37.pyc
new file mode 100644
index 000000000..ff8f56ec1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/__pycache__/vgg.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/blazenet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/blazenet.py
new file mode 100644
index 000000000..425f2a86e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/blazenet.py
@@ -0,0 +1,322 @@
+# copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import KaimingNormal
+from ppdet.core.workspace import register, serializable
+from ..shape_spec import ShapeSpec
+
+__all__ = ['BlazeNet']
+
+
+def hard_swish(x):
+ return x * F.relu6(x + 3) / 6.
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ num_groups=1,
+ act='relu',
+ conv_lr=0.1,
+ conv_decay=0.,
+ norm_decay=0.,
+ norm_type='bn',
+ name=None):
+ super(ConvBNLayer, self).__init__()
+ self.act = act
+ self._conv = nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ groups=num_groups,
+ weight_attr=ParamAttr(
+ learning_rate=conv_lr, initializer=KaimingNormal()),
+ bias_attr=False)
+
+ if norm_type == 'sync_bn':
+ self._batch_norm = nn.SyncBatchNorm(out_channels)
+ else:
+ self._batch_norm = nn.BatchNorm(
+ out_channels, act=None, use_global_stats=False)
+
+ def forward(self, x):
+ x = self._conv(x)
+ x = self._batch_norm(x)
+ if self.act == "relu":
+ x = F.relu(x)
+ elif self.act == "relu6":
+ x = F.relu6(x)
+ elif self.act == 'leaky':
+ x = F.leaky_relu(x)
+ elif self.act == 'hard_swish':
+ x = hard_swish(x)
+ return x
+
+
+class BlazeBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels1,
+ out_channels2,
+ double_channels=None,
+ stride=1,
+ use_5x5kernel=True,
+ act='relu',
+ name=None):
+ super(BlazeBlock, self).__init__()
+ assert stride in [1, 2]
+ self.use_pool = stride != 1
+ self.use_double_block = double_channels is not None
+ self.conv_dw = []
+ if use_5x5kernel:
+ self.conv_dw.append(
+ self.add_sublayer(
+ name + "1_dw",
+ ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=out_channels1,
+ kernel_size=5,
+ stride=stride,
+ padding=2,
+ num_groups=out_channels1,
+ name=name + "1_dw")))
+ else:
+ self.conv_dw.append(
+ self.add_sublayer(
+ name + "1_dw_1",
+ ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=out_channels1,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ num_groups=out_channels1,
+ name=name + "1_dw_1")))
+ self.conv_dw.append(
+ self.add_sublayer(
+ name + "1_dw_2",
+ ConvBNLayer(
+ in_channels=out_channels1,
+ out_channels=out_channels1,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ num_groups=out_channels1,
+ name=name + "1_dw_2")))
+ self.act = act if self.use_double_block else None
+ self.conv_pw = ConvBNLayer(
+ in_channels=out_channels1,
+ out_channels=out_channels2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ act=self.act,
+ name=name + "1_sep")
+ if self.use_double_block:
+ self.conv_dw2 = []
+ if use_5x5kernel:
+ self.conv_dw2.append(
+ self.add_sublayer(
+ name + "2_dw",
+ ConvBNLayer(
+ in_channels=out_channels2,
+ out_channels=out_channels2,
+ kernel_size=5,
+ stride=1,
+ padding=2,
+ num_groups=out_channels2,
+ name=name + "2_dw")))
+ else:
+ self.conv_dw2.append(
+ self.add_sublayer(
+ name + "2_dw_1",
+ ConvBNLayer(
+ in_channels=out_channels2,
+ out_channels=out_channels2,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ num_groups=out_channels2,
+ name=name + "1_dw_1")))
+ self.conv_dw2.append(
+ self.add_sublayer(
+ name + "2_dw_2",
+ ConvBNLayer(
+ in_channels=out_channels2,
+ out_channels=out_channels2,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ num_groups=out_channels2,
+ name=name + "2_dw_2")))
+ self.conv_pw2 = ConvBNLayer(
+ in_channels=out_channels2,
+ out_channels=double_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ name=name + "2_sep")
+ # shortcut
+ if self.use_pool:
+ shortcut_channel = double_channels or out_channels2
+ self._shortcut = []
+ self._shortcut.append(
+ self.add_sublayer(
+ name + '_shortcut_pool',
+ nn.MaxPool2D(
+ kernel_size=stride, stride=stride, ceil_mode=True)))
+ self._shortcut.append(
+ self.add_sublayer(
+ name + '_shortcut_conv',
+ ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=shortcut_channel,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ name="shortcut" + name)))
+
+ def forward(self, x):
+ y = x
+ for conv_dw_block in self.conv_dw:
+ y = conv_dw_block(y)
+ y = self.conv_pw(y)
+ if self.use_double_block:
+ for conv_dw2_block in self.conv_dw2:
+ y = conv_dw2_block(y)
+ y = self.conv_pw2(y)
+ if self.use_pool:
+ for shortcut in self._shortcut:
+ x = shortcut(x)
+ return F.relu(paddle.add(x, y))
+
+
+@register
+@serializable
+class BlazeNet(nn.Layer):
+ """
+ BlazeFace, see https://arxiv.org/abs/1907.05047
+
+ Args:
+ blaze_filters (list): number of filters for each blaze block.
+ double_blaze_filters (list): number of filters for each double_blaze block.
+ use_5x5kernel (bool): whether the depth-wise conv uses a 5x5 kernel.
+ """
+
+ def __init__(
+ self,
+ blaze_filters=[[24, 24], [24, 24], [24, 48, 2], [48, 48], [48, 48]],
+ double_blaze_filters=[[48, 24, 96, 2], [96, 24, 96], [96, 24, 96],
+ [96, 24, 96, 2], [96, 24, 96], [96, 24, 96]],
+ use_5x5kernel=True,
+ act=None):
+ super(BlazeNet, self).__init__()
+ conv1_num_filters = blaze_filters[0][0]
+ self.conv1 = ConvBNLayer(
+ in_channels=3,
+ out_channels=conv1_num_filters,
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ name="conv1")
+ in_channels = conv1_num_filters
+ self.blaze_block = []
+ self._out_channels = []
+ for k, v in enumerate(blaze_filters):
+ assert len(v) in [2, 3], \
+ "blaze_filters {} length not in [2, 3]".format(v)
+ if len(v) == 2:
+ self.blaze_block.append(
+ self.add_sublayer(
+ 'blaze_{}'.format(k),
+ BlazeBlock(
+ in_channels,
+ v[0],
+ v[1],
+ use_5x5kernel=use_5x5kernel,
+ act=act,
+ name='blaze_{}'.format(k))))
+ elif len(v) == 3:
+ self.blaze_block.append(
+ self.add_sublayer(
+ 'blaze_{}'.format(k),
+ BlazeBlock(
+ in_channels,
+ v[0],
+ v[1],
+ stride=v[2],
+ use_5x5kernel=use_5x5kernel,
+ act=act,
+ name='blaze_{}'.format(k))))
+ in_channels = v[1]
+
+ for k, v in enumerate(double_blaze_filters):
+ assert len(v) in [3, 4], \
+ "double_blaze_filters {} length not in [3, 4]".format(v)
+ if len(v) == 3:
+ self.blaze_block.append(
+ self.add_sublayer(
+ 'double_blaze_{}'.format(k),
+ BlazeBlock(
+ in_channels,
+ v[0],
+ v[1],
+ double_channels=v[2],
+ use_5x5kernel=use_5x5kernel,
+ act=act,
+ name='double_blaze_{}'.format(k))))
+ elif len(v) == 4:
+ self.blaze_block.append(
+ self.add_sublayer(
+ 'double_blaze_{}'.format(k),
+ BlazeBlock(
+ in_channels,
+ v[0],
+ v[1],
+ double_channels=v[2],
+ stride=v[3],
+ use_5x5kernel=use_5x5kernel,
+ act=act,
+ name='double_blaze_{}'.format(k))))
+ in_channels = v[2]
+ self._out_channels.append(in_channels)
+
+ def forward(self, inputs):
+ outs = []
+ y = self.conv1(inputs['image'])
+ for block in self.blaze_block:
+ y = block(y)
+ outs.append(y)
+ return [outs[-4], outs[-1]]
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(channels=c)
+ for c in [self._out_channels[-4], self._out_channels[-1]]
+ ]
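+
+# A minimal sketch of the backbone in isolation (input resolution is an
+# assumption; the detector normally feeds it via the data pipeline):
+#
+#   net = BlazeNet()
+#   feats = net({'image': paddle.rand([1, 3, 640, 640])})
+#   # two feature maps are returned for the head; their channel counts
+#   # are exposed through net.out_shape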
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/darknet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/darknet.py
new file mode 100644
index 000000000..246529699
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/darknet.py
@@ -0,0 +1,340 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.ops import batch_norm, mish
+from ..shape_spec import ShapeSpec
+
+__all__ = ['DarkNet', 'ConvBNLayer']
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size=3,
+ stride=1,
+ groups=1,
+ padding=0,
+ norm_type='bn',
+ norm_decay=0.,
+ act="leaky",
+ freeze_norm=False,
+ data_format='NCHW',
+ name=''):
+ """
+ conv + bn + activation layer
+
+ Args:
+ ch_in (int): input channel
+ ch_out (int): output channel
+ filter_size (int): filter size, default 3
+ stride (int): stride, default 1
+ groups (int): number of groups of conv layer, default 1
+ padding (int): padding size, default 0
+ norm_type (str): batch norm type, default bn
+ norm_decay (float): decay for weight and bias of batch norm layer, default 0.
+ act (str): activation function type, default 'leaky', which means leaky_relu
+ freeze_norm (bool): whether to freeze norm, default False
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(ConvBNLayer, self).__init__()
+
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=padding,
+ groups=groups,
+ data_format=data_format,
+ bias_attr=False)
+ self.batch_norm = batch_norm(
+ ch_out,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+ self.act = act
+
+ def forward(self, inputs):
+ out = self.conv(inputs)
+ out = self.batch_norm(out)
+ if self.act == 'leaky':
+ out = F.leaky_relu(out, 0.1)
+ elif self.act == 'mish':
+ out = mish(out)
+ return out
+
+
+class DownSample(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size=3,
+ stride=2,
+ padding=1,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ data_format='NCHW'):
+ """
+ downsample layer
+
+ Args:
+ ch_in (int): input channel
+ ch_out (int): output channel
+ filter_size (int): filter size, default 3
+ stride (int): stride, default 2
+ padding (int): padding size, default 1
+ norm_type (str): batch norm type, default bn
+            norm_decay (float): decay for weight and bias of batch norm layer, default 0.
+ freeze_norm (bool): whether to freeze norm, default False
+ data_format (str): data format, NCHW or NHWC
+ """
+
+ super(DownSample, self).__init__()
+
+ self.conv_bn_layer = ConvBNLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=filter_size,
+ stride=stride,
+ padding=padding,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+ self.ch_out = ch_out
+
+ def forward(self, inputs):
+ out = self.conv_bn_layer(inputs)
+ return out
+
+
+class BasicBlock(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ data_format='NCHW'):
+ """
+ BasicBlock layer of DarkNet
+
+ Args:
+ ch_in (int): input channel
+ ch_out (int): output channel
+ norm_type (str): batch norm type, default bn
+            norm_decay (float): decay for weight and bias of batch norm layer, default 0.
+ freeze_norm (bool): whether to freeze norm, default False
+ data_format (str): data format, NCHW or NHWC
+ """
+
+ super(BasicBlock, self).__init__()
+
+ self.conv1 = ConvBNLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+ self.conv2 = ConvBNLayer(
+ ch_in=ch_out,
+ ch_out=ch_out * 2,
+ filter_size=3,
+ stride=1,
+ padding=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+
+ def forward(self, inputs):
+ conv1 = self.conv1(inputs)
+ conv2 = self.conv2(conv1)
+ out = paddle.add(x=inputs, y=conv2)
+ return out
+
+
+class Blocks(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ count,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ name=None,
+ data_format='NCHW'):
+ """
+        Blocks layer, which consists of several BasicBlock layers
+
+        Args:
+            ch_in (int): input channel
+            ch_out (int): output channel
+            count (int): number of BasicBlock layers
+            norm_type (str): batch norm type, default bn
+            norm_decay (float): decay for weight and bias of batch norm layer, default 0.
+ freeze_norm (bool): whether to freeze norm, default False
+ name (str): layer name
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(Blocks, self).__init__()
+
+ self.basicblock0 = BasicBlock(
+ ch_in,
+ ch_out,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+ self.res_out_list = []
+ for i in range(1, count):
+ block_name = '{}.{}'.format(name, i)
+ res_out = self.add_sublayer(
+ block_name,
+ BasicBlock(
+ ch_out * 2,
+ ch_out,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format))
+ self.res_out_list.append(res_out)
+ self.ch_out = ch_out
+
+ def forward(self, inputs):
+ y = self.basicblock0(inputs)
+ for basic_block_i in self.res_out_list:
+ y = basic_block_i(y)
+ return y
+
+
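+# number of BasicBlocks in each of the five stages of DarkNet-53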
+DarkNet_cfg = {53: ([1, 2, 8, 8, 4])}
+
+
+@register
+@serializable
+class DarkNet(nn.Layer):
+ __shared__ = ['norm_type', 'data_format']
+
+ def __init__(self,
+ depth=53,
+ freeze_at=-1,
+ return_idx=[2, 3, 4],
+ num_stages=5,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ data_format='NCHW'):
+ """
+ Darknet, see https://pjreddie.com/darknet/yolo/
+
+ Args:
+ depth (int): depth of network
+            freeze_at (int): freeze the backbone at which stage
+            return_idx (list): index of stages whose feature maps are returned
+            num_stages (int): number of stages to build, default 5
+            norm_type (str): batch norm type, default bn
+            norm_decay (float): decay for weight and bias of batch norm layer, default 0.
+            freeze_norm (bool): whether to freeze norm, default False
+            data_format (str): data format, NCHW or NHWC
+ """
+ super(DarkNet, self).__init__()
+ self.depth = depth
+ self.freeze_at = freeze_at
+ self.return_idx = return_idx
+ self.num_stages = num_stages
+ self.stages = DarkNet_cfg[self.depth][0:num_stages]
+
+ self.conv0 = ConvBNLayer(
+ ch_in=3,
+ ch_out=32,
+ filter_size=3,
+ stride=1,
+ padding=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+
+ self.downsample0 = DownSample(
+ ch_in=32,
+ ch_out=32 * 2,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format)
+
+ self._out_channels = []
+ self.darknet_conv_block_list = []
+ self.downsample_list = []
+ ch_in = [64, 128, 256, 512, 1024]
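+        # stage i stacks Blocks built with 32 * 2**i filters; each BasicBlock
+        # doubles its channels back, so stage i outputs 64 * 2**i channels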
+ for i, stage in enumerate(self.stages):
+ name = 'stage.{}'.format(i)
+ conv_block = self.add_sublayer(
+ name,
+ Blocks(
+ int(ch_in[i]),
+ 32 * (2**i),
+ stage,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name))
+ self.darknet_conv_block_list.append(conv_block)
+ if i in return_idx:
+ self._out_channels.append(64 * (2**i))
+ for i in range(num_stages - 1):
+ down_name = 'stage.{}.downsample'.format(i)
+ downsample = self.add_sublayer(
+ down_name,
+ DownSample(
+ ch_in=32 * (2**(i + 1)),
+ ch_out=32 * (2**(i + 2)),
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ data_format=data_format))
+ self.downsample_list.append(downsample)
+
+ def forward(self, inputs):
+ x = inputs['image']
+
+ out = self.conv0(x)
+ out = self.downsample0(out)
+ blocks = []
+ for i, conv_block_i in enumerate(self.darknet_conv_block_list):
+ out = conv_block_i(out)
+ if i == self.freeze_at:
+ out.stop_gradient = True
+ if i in self.return_idx:
+ blocks.append(out)
+ if i < self.num_stages - 1:
+ out = self.downsample_list[i](out)
+ return blocks
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/dla.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/dla.py
new file mode 100644
index 000000000..4ab06ab7f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/dla.py
@@ -0,0 +1,243 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.layers import ConvNormLayer
+from ..shape_spec import ShapeSpec
+
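+# DLA-34 config: per-level aggregation depths and output channels for the six levels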
+DLA_cfg = {34: ([1, 1, 1, 2, 2, 1], [16, 32, 64, 128, 256, 512])}
+
+
+class BasicBlock(nn.Layer):
+ def __init__(self, ch_in, ch_out, stride=1):
+ super(BasicBlock, self).__init__()
+ self.conv1 = ConvNormLayer(
+ ch_in,
+ ch_out,
+ filter_size=3,
+ stride=stride,
+ bias_on=False,
+ norm_decay=None)
+ self.conv2 = ConvNormLayer(
+ ch_out,
+ ch_out,
+ filter_size=3,
+ stride=1,
+ bias_on=False,
+ norm_decay=None)
+
+ def forward(self, inputs, residual=None):
+ if residual is None:
+ residual = inputs
+
+ out = self.conv1(inputs)
+ out = F.relu(out)
+
+ out = self.conv2(out)
+
+ out = paddle.add(x=out, y=residual)
+ out = F.relu(out)
+
+ return out
+
+
+class Root(nn.Layer):
+ def __init__(self, ch_in, ch_out, kernel_size, residual):
+ super(Root, self).__init__()
+ self.conv = ConvNormLayer(
+ ch_in,
+ ch_out,
+ filter_size=1,
+ stride=1,
+ bias_on=False,
+ norm_decay=None)
+ self.residual = residual
+
+ def forward(self, inputs):
+ children = inputs
+ out = self.conv(paddle.concat(inputs, axis=1))
+ if self.residual:
+ out = paddle.add(x=out, y=children[0])
+ out = F.relu(out)
+
+ return out
+
+
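+# A Tree recursively aggregates features: at level 1 it runs two blocks (the
+# first possibly strided) and fuses them through a Root 1x1 conv; at deeper
+# levels it nests two sub-trees and passes intermediate outputs down as
+# children for the final Root fusion.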
+class Tree(nn.Layer):
+ def __init__(self,
+ level,
+ block,
+ ch_in,
+ ch_out,
+ stride=1,
+ level_root=False,
+ root_dim=0,
+ root_kernel_size=1,
+ root_residual=False):
+ super(Tree, self).__init__()
+ if root_dim == 0:
+ root_dim = 2 * ch_out
+ if level_root:
+ root_dim += ch_in
+ if level == 1:
+ self.tree1 = block(ch_in, ch_out, stride)
+ self.tree2 = block(ch_out, ch_out, 1)
+ else:
+ self.tree1 = Tree(
+ level - 1,
+ block,
+ ch_in,
+ ch_out,
+ stride,
+ root_dim=0,
+ root_kernel_size=root_kernel_size,
+ root_residual=root_residual)
+ self.tree2 = Tree(
+ level - 1,
+ block,
+ ch_out,
+ ch_out,
+ 1,
+ root_dim=root_dim + ch_out,
+ root_kernel_size=root_kernel_size,
+ root_residual=root_residual)
+
+ if level == 1:
+ self.root = Root(root_dim, ch_out, root_kernel_size, root_residual)
+ self.level_root = level_root
+ self.root_dim = root_dim
+ self.downsample = None
+ self.project = None
+ self.level = level
+ if stride > 1:
+ self.downsample = nn.MaxPool2D(stride, stride=stride)
+ if ch_in != ch_out:
+ self.project = ConvNormLayer(
+ ch_in,
+ ch_out,
+ filter_size=1,
+ stride=1,
+ bias_on=False,
+ norm_decay=None)
+
+ def forward(self, x, residual=None, children=None):
+ children = [] if children is None else children
+ bottom = self.downsample(x) if self.downsample else x
+ residual = self.project(bottom) if self.project else bottom
+ if self.level_root:
+ children.append(bottom)
+ x1 = self.tree1(x, residual)
+ if self.level == 1:
+ x2 = self.tree2(x1)
+ x = self.root([x2, x1] + children)
+ else:
+ children.append(x1)
+ x = self.tree2(x1, children=children)
+ return x
+
+
+@register
+@serializable
+class DLA(nn.Layer):
+ """
+ DLA, see https://arxiv.org/pdf/1707.06484.pdf
+
+ Args:
+ depth (int): DLA depth, should be 34.
+        residual_root (bool): whether to use a residual layer in the root block
+
+ """
+
+ def __init__(self, depth=34, residual_root=False):
+ super(DLA, self).__init__()
+ levels, channels = DLA_cfg[depth]
+ if depth == 34:
+ block = BasicBlock
+ self.channels = channels
+ self.base_layer = nn.Sequential(
+ ConvNormLayer(
+ 3,
+ channels[0],
+ filter_size=7,
+ stride=1,
+ bias_on=False,
+ norm_decay=None),
+ nn.ReLU())
+ self.level0 = self._make_conv_level(channels[0], channels[0], levels[0])
+ self.level1 = self._make_conv_level(
+ channels[0], channels[1], levels[1], stride=2)
+ self.level2 = Tree(
+ levels[2],
+ block,
+ channels[1],
+ channels[2],
+ 2,
+ level_root=False,
+ root_residual=residual_root)
+ self.level3 = Tree(
+ levels[3],
+ block,
+ channels[2],
+ channels[3],
+ 2,
+ level_root=True,
+ root_residual=residual_root)
+ self.level4 = Tree(
+ levels[4],
+ block,
+ channels[3],
+ channels[4],
+ 2,
+ level_root=True,
+ root_residual=residual_root)
+ self.level5 = Tree(
+ levels[5],
+ block,
+ channels[4],
+ channels[5],
+ 2,
+ level_root=True,
+ root_residual=residual_root)
+
+ def _make_conv_level(self, ch_in, ch_out, conv_num, stride=1):
+ modules = []
+ for i in range(conv_num):
+ modules.extend([
+ ConvNormLayer(
+ ch_in,
+ ch_out,
+ filter_size=3,
+ stride=stride if i == 0 else 1,
+ bias_on=False,
+ norm_decay=None), nn.ReLU()
+ ])
+ ch_in = ch_out
+ return nn.Sequential(*modules)
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.channels[i]) for i in range(6)]
+
+ def forward(self, inputs):
+ outs = []
+ im = inputs['image']
+ feats = self.base_layer(im)
+ for i in range(6):
+ feats = getattr(self, 'level{}'.format(i))(feats)
+ outs.append(feats)
+
+ return outs
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/esnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/esnet.py
new file mode 100644
index 000000000..2b3f3c54a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/esnet.py
@@ -0,0 +1,290 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn import Conv2D, MaxPool2D, AdaptiveAvgPool2D, BatchNorm
+from paddle.nn.initializer import KaimingNormal
+from paddle.regularizer import L2Decay
+
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+from ppdet.modeling.ops import channel_shuffle
+from ppdet.modeling.backbones.shufflenet_v2 import ConvBNLayer
+
+__all__ = ['ESNet']
+
+
+def make_divisible(v, divisor=16, min_value=None):
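+    # Round v to the nearest multiple of divisor, bumping up one step whenever
+    # rounding would drop more than 10% of v, e.g. make_divisible(23) -> 32
+    # (plain rounding gives 16, which loses over 10% of 23).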
+ if min_value is None:
+ min_value = divisor
+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+ if new_v < 0.9 * v:
+ new_v += divisor
+ return new_v
+
+
+class SEModule(nn.Layer):
+ def __init__(self, channel, reduction=4):
+ super(SEModule, self).__init__()
+ self.avg_pool = AdaptiveAvgPool2D(1)
+ self.conv1 = Conv2D(
+ in_channels=channel,
+ out_channels=channel // reduction,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(),
+ bias_attr=ParamAttr())
+ self.conv2 = Conv2D(
+ in_channels=channel // reduction,
+ out_channels=channel,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(),
+ bias_attr=ParamAttr())
+
+ def forward(self, inputs):
+ outputs = self.avg_pool(inputs)
+ outputs = self.conv1(outputs)
+ outputs = F.relu(outputs)
+ outputs = self.conv2(outputs)
+ outputs = F.hardsigmoid(outputs)
+ return paddle.multiply(x=inputs, y=outputs)
+
+
+class InvertedResidual(nn.Layer):
+ def __init__(self,
+ in_channels,
+ mid_channels,
+ out_channels,
+ stride,
+ act="relu"):
+ super(InvertedResidual, self).__init__()
+ self._conv_pw = ConvBNLayer(
+ in_channels=in_channels // 2,
+ out_channels=mid_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ self._conv_dw = ConvBNLayer(
+ in_channels=mid_channels // 2,
+ out_channels=mid_channels // 2,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=mid_channels // 2,
+ act=None)
+ self._se = SEModule(mid_channels)
+
+ self._conv_linear = ConvBNLayer(
+ in_channels=mid_channels,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+
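+    # ShuffleNet-style unit: half the channels pass through untouched; the other
+    # half go through a pw conv and a dw conv, the pw/dw outputs are concatenated,
+    # SE-reweighted and fused by a linear pw conv, then both halves are
+    # concatenated and channel-shuffled.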
+ def forward(self, inputs):
+ x1, x2 = paddle.split(
+ inputs,
+ num_or_sections=[inputs.shape[1] // 2, inputs.shape[1] // 2],
+ axis=1)
+ x2 = self._conv_pw(x2)
+ x3 = self._conv_dw(x2)
+ x3 = paddle.concat([x2, x3], axis=1)
+ x3 = self._se(x3)
+ x3 = self._conv_linear(x3)
+ out = paddle.concat([x1, x3], axis=1)
+ return channel_shuffle(out, 2)
+
+
+class InvertedResidualDS(nn.Layer):
+ def __init__(self,
+ in_channels,
+ mid_channels,
+ out_channels,
+ stride,
+ act="relu"):
+ super(InvertedResidualDS, self).__init__()
+
+ # branch1
+ self._conv_dw_1 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=in_channels,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=in_channels,
+ act=None)
+ self._conv_linear_1 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ # branch2
+ self._conv_pw_2 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=mid_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ self._conv_dw_2 = ConvBNLayer(
+ in_channels=mid_channels // 2,
+ out_channels=mid_channels // 2,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=mid_channels // 2,
+ act=None)
+ self._se = SEModule(mid_channels // 2)
+ self._conv_linear_2 = ConvBNLayer(
+ in_channels=mid_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ self._conv_dw_mv1 = ConvBNLayer(
+ in_channels=out_channels,
+ out_channels=out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ groups=out_channels,
+ act="hard_swish")
+ self._conv_pw_mv1 = ConvBNLayer(
+ in_channels=out_channels,
+ out_channels=out_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act="hard_swish")
+
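+    # Downsampling unit: branch 1 applies dw(stride) + pw, branch 2 applies
+    # pw + dw(stride) + SE + pw; the concatenated branches are then refined by a
+    # dw + pw pair with hard-swish activations.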
+ def forward(self, inputs):
+ x1 = self._conv_dw_1(inputs)
+ x1 = self._conv_linear_1(x1)
+ x2 = self._conv_pw_2(inputs)
+ x2 = self._conv_dw_2(x2)
+ x2 = self._se(x2)
+ x2 = self._conv_linear_2(x2)
+ out = paddle.concat([x1, x2], axis=1)
+ out = self._conv_dw_mv1(out)
+ out = self._conv_pw_mv1(out)
+
+ return out
+
+
+@register
+@serializable
+class ESNet(nn.Layer):
+ def __init__(self,
+ scale=1.0,
+ act="hard_swish",
+ feature_maps=[4, 11, 14],
+ channel_ratio=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]):
+ super(ESNet, self).__init__()
+ self.scale = scale
+ if isinstance(feature_maps, Integral):
+ feature_maps = [feature_maps]
+ self.feature_maps = feature_maps
+ stage_repeats = [3, 7, 3]
+
+ stage_out_channels = [
+ -1, 24, make_divisible(128 * scale), make_divisible(256 * scale),
+ make_divisible(512 * scale), 1024
+ ]
+
+ self._out_channels = []
+ self._feature_idx = 0
+ # 1. conv1
+ self._conv1 = ConvBNLayer(
+ in_channels=3,
+ out_channels=stage_out_channels[1],
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ act=act)
+ self._max_pool = MaxPool2D(kernel_size=3, stride=2, padding=1)
+ self._feature_idx += 1
+
+ # 2. bottleneck sequences
+ self._block_list = []
+ arch_idx = 0
+ for stage_id, num_repeat in enumerate(stage_repeats):
+ for i in range(num_repeat):
+ channels_scales = channel_ratio[arch_idx]
+ mid_c = make_divisible(
+ int(stage_out_channels[stage_id + 2] * channels_scales),
+ divisor=8)
+ if i == 0:
+ block = self.add_sublayer(
+ name=str(stage_id + 2) + '_' + str(i + 1),
+ sublayer=InvertedResidualDS(
+ in_channels=stage_out_channels[stage_id + 1],
+ mid_channels=mid_c,
+ out_channels=stage_out_channels[stage_id + 2],
+ stride=2,
+ act=act))
+ else:
+ block = self.add_sublayer(
+ name=str(stage_id + 2) + '_' + str(i + 1),
+ sublayer=InvertedResidual(
+ in_channels=stage_out_channels[stage_id + 2],
+ mid_channels=mid_c,
+ out_channels=stage_out_channels[stage_id + 2],
+ stride=1,
+ act=act))
+ self._block_list.append(block)
+ arch_idx += 1
+ self._feature_idx += 1
+ self._update_out_channels(stage_out_channels[stage_id + 2],
+ self._feature_idx, self.feature_maps)
+
+ def _update_out_channels(self, channel, feature_idx, feature_maps):
+ if feature_idx in feature_maps:
+ self._out_channels.append(channel)
+
+ def forward(self, inputs):
+ y = self._conv1(inputs['image'])
+ y = self._max_pool(y)
+ outs = []
+ for i, inv in enumerate(self._block_list):
+ y = inv(y)
+ if i + 2 in self.feature_maps:
+ outs.append(y)
+
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/ghostnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/ghostnet.py
new file mode 100644
index 000000000..cd333b4fe
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/ghostnet.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+import paddle
+from paddle import ParamAttr
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn import AdaptiveAvgPool2D, Linear
+from paddle.nn.initializer import Uniform
+
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+from .mobilenet_v3 import make_divisible, ConvBNLayer
+
+__all__ = ['GhostNet']
+
+
+class ExtraBlockDW(nn.Layer):
+ def __init__(self,
+ in_c,
+ ch_1,
+ ch_2,
+ stride,
+ lr_mult,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ name=None):
+ super(ExtraBlockDW, self).__init__()
+ self.pointwise_conv = ConvBNLayer(
+ in_c=in_c,
+ out_c=ch_1,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra1")
+ self.depthwise_conv = ConvBNLayer(
+ in_c=ch_1,
+ out_c=ch_2,
+ filter_size=3,
+ stride=stride,
+            padding=1,
+ num_groups=int(ch_1),
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra2_dw")
+ self.normal_conv = ConvBNLayer(
+ in_c=ch_2,
+ out_c=ch_2,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra2_sep")
+
+ def forward(self, inputs):
+ x = self.pointwise_conv(inputs)
+ x = self.depthwise_conv(x)
+ x = self.normal_conv(x)
+ return x
+
+
+class SEBlock(nn.Layer):
+ def __init__(self, num_channels, lr_mult, reduction_ratio=4, name=None):
+ super(SEBlock, self).__init__()
+ self.pool2d_gap = AdaptiveAvgPool2D(1)
+ self._num_channels = num_channels
+ stdv = 1.0 / math.sqrt(num_channels * 1.0)
+ med_ch = num_channels // reduction_ratio
+ self.squeeze = Linear(
+ num_channels,
+ med_ch,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult, initializer=Uniform(-stdv, stdv)),
+ bias_attr=ParamAttr(learning_rate=lr_mult))
+ stdv = 1.0 / math.sqrt(med_ch * 1.0)
+ self.excitation = Linear(
+ med_ch,
+ num_channels,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult, initializer=Uniform(-stdv, stdv)),
+ bias_attr=ParamAttr(learning_rate=lr_mult))
+
+ def forward(self, inputs):
+ pool = self.pool2d_gap(inputs)
+ pool = paddle.squeeze(pool, axis=[2, 3])
+ squeeze = self.squeeze(pool)
+ squeeze = F.relu(squeeze)
+ excitation = self.excitation(squeeze)
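+        # clipping to [0, 1] acts as a cheap hard-sigmoid gate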
+ excitation = paddle.clip(x=excitation, min=0, max=1)
+ excitation = paddle.unsqueeze(excitation, axis=[2, 3])
+ out = paddle.multiply(inputs, excitation)
+ return out
+
+
+class GhostModule(nn.Layer):
+ def __init__(self,
+ in_channels,
+ output_channels,
+ kernel_size=1,
+ ratio=2,
+ dw_size=3,
+ stride=1,
+ relu=True,
+ lr_mult=1.,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ name=None):
+ super(GhostModule, self).__init__()
+ init_channels = int(math.ceil(output_channels / ratio))
+ new_channels = int(init_channels * (ratio - 1))
+ self.primary_conv = ConvBNLayer(
+ in_c=in_channels,
+ out_c=init_channels,
+ filter_size=kernel_size,
+ stride=stride,
+ padding=int((kernel_size - 1) // 2),
+ num_groups=1,
+ act="relu" if relu else None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_primary_conv")
+ self.cheap_operation = ConvBNLayer(
+ in_c=init_channels,
+ out_c=new_channels,
+ filter_size=dw_size,
+ stride=1,
+ padding=int((dw_size - 1) // 2),
+ num_groups=init_channels,
+ act="relu" if relu else None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_cheap_operation")
+
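+    # Ghost module: the primary conv produces init_channels "intrinsic" features,
+    # the cheap depthwise conv derives new_channels "ghost" features from them,
+    # and their concatenation approximates a full convolution at lower cost.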
+ def forward(self, inputs):
+ x = self.primary_conv(inputs)
+ y = self.cheap_operation(x)
+ out = paddle.concat([x, y], axis=1)
+ return out
+
+
+class GhostBottleneck(nn.Layer):
+ def __init__(self,
+ in_channels,
+ hidden_dim,
+ output_channels,
+ kernel_size,
+ stride,
+ use_se,
+ lr_mult,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ return_list=False,
+ name=None):
+ super(GhostBottleneck, self).__init__()
+ self._stride = stride
+ self._use_se = use_se
+ self._num_channels = in_channels
+ self._output_channels = output_channels
+ self.return_list = return_list
+
+ self.ghost_module_1 = GhostModule(
+ in_channels=in_channels,
+ output_channels=hidden_dim,
+ kernel_size=1,
+ stride=1,
+ relu=True,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_ghost_module_1")
+ if stride == 2:
+ self.depthwise_conv = ConvBNLayer(
+ in_c=hidden_dim,
+ out_c=hidden_dim,
+ filter_size=kernel_size,
+ stride=stride,
+ padding=int((kernel_size - 1) // 2),
+ num_groups=hidden_dim,
+ act=None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name +
+ "_depthwise_depthwise" # looks strange due to an old typo, will be fixed later.
+ )
+ if use_se:
+ self.se_block = SEBlock(hidden_dim, lr_mult, name=name + "_se")
+ self.ghost_module_2 = GhostModule(
+ in_channels=hidden_dim,
+ output_channels=output_channels,
+ kernel_size=1,
+ relu=False,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_ghost_module_2")
+ if stride != 1 or in_channels != output_channels:
+ self.shortcut_depthwise = ConvBNLayer(
+ in_c=in_channels,
+ out_c=in_channels,
+ filter_size=kernel_size,
+ stride=stride,
+ padding=int((kernel_size - 1) // 2),
+ num_groups=in_channels,
+ act=None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name +
+ "_shortcut_depthwise_depthwise" # looks strange due to an old typo, will be fixed later.
+ )
+ self.shortcut_conv = ConvBNLayer(
+ in_c=in_channels,
+ out_c=output_channels,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ num_groups=1,
+ act=None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_shortcut_conv")
+
+ def forward(self, inputs):
+ y = self.ghost_module_1(inputs)
+ x = y
+ if self._stride == 2:
+ x = self.depthwise_conv(x)
+ if self._use_se:
+ x = self.se_block(x)
+ x = self.ghost_module_2(x)
+
+ if self._stride == 1 and self._num_channels == self._output_channels:
+ shortcut = inputs
+ else:
+ shortcut = self.shortcut_depthwise(inputs)
+ shortcut = self.shortcut_conv(shortcut)
+ x = paddle.add(x=x, y=shortcut)
+
+ if self.return_list:
+ return [y, x]
+ else:
+ return x
+
+
+@register
+@serializable
+class GhostNet(nn.Layer):
+ __shared__ = ['norm_type']
+
+ def __init__(
+ self,
+ scale=1.3,
+ feature_maps=[6, 12, 15],
+ with_extra_blocks=False,
+ extra_block_filters=[[256, 512], [128, 256], [128, 256], [64, 128]],
+ lr_mult_list=[1.0, 1.0, 1.0, 1.0, 1.0],
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.0,
+ freeze_norm=False):
+ super(GhostNet, self).__init__()
+ if isinstance(feature_maps, Integral):
+ feature_maps = [feature_maps]
+ if norm_type == 'sync_bn' and freeze_norm:
+ raise ValueError(
+ "The norm_type should not be sync_bn when freeze_norm is True")
+ self.feature_maps = feature_maps
+ self.with_extra_blocks = with_extra_blocks
+ self.extra_block_filters = extra_block_filters
+
+ inplanes = 16
+ self.cfgs = [
+ # k, t, c, SE, s
+ [3, 16, 16, 0, 1],
+ [3, 48, 24, 0, 2],
+ [3, 72, 24, 0, 1],
+ [5, 72, 40, 1, 2],
+ [5, 120, 40, 1, 1],
+ [3, 240, 80, 0, 2],
+ [3, 200, 80, 0, 1],
+ [3, 184, 80, 0, 1],
+ [3, 184, 80, 0, 1],
+ [3, 480, 112, 1, 1],
+ [3, 672, 112, 1, 1],
+ [5, 672, 160, 1, 2], # SSDLite output
+ [5, 960, 160, 0, 1],
+ [5, 960, 160, 1, 1],
+ [5, 960, 160, 0, 1],
+ [5, 960, 160, 1, 1]
+ ]
+ self.scale = scale
+ conv1_out_ch = int(make_divisible(inplanes * self.scale, 4))
+ self.conv1 = ConvBNLayer(
+ in_c=3,
+ out_c=conv1_out_ch,
+ filter_size=3,
+ stride=2,
+ padding=1,
+ num_groups=1,
+ act="relu",
+ lr_mult=1.,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="conv1")
+
+ # build inverted residual blocks
+ self._out_channels = []
+ self.ghost_bottleneck_list = []
+ idx = 0
+ inplanes = conv1_out_ch
+ for k, exp_size, c, use_se, s in self.cfgs:
+ lr_idx = min(idx // 3, len(lr_mult_list) - 1)
+ lr_mult = lr_mult_list[lr_idx]
+
+        # for SSD/SSDLite, the first head input is the expanded feature from
+        # ghost_module_1 inside the GhostBottleneck (hence return_list)
+ return_list = self.with_extra_blocks and idx + 2 in self.feature_maps
+
+ ghost_bottleneck = self.add_sublayer(
+ "_ghostbottleneck_" + str(idx),
+ sublayer=GhostBottleneck(
+ in_channels=inplanes,
+ hidden_dim=int(make_divisible(exp_size * self.scale, 4)),
+ output_channels=int(make_divisible(c * self.scale, 4)),
+ kernel_size=k,
+ stride=s,
+ use_se=use_se,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ return_list=return_list,
+ name="_ghostbottleneck_" + str(idx)))
+ self.ghost_bottleneck_list.append(ghost_bottleneck)
+ inplanes = int(make_divisible(c * self.scale, 4))
+ idx += 1
+ self._update_out_channels(
+ int(make_divisible(exp_size * self.scale, 4))
+ if return_list else inplanes, idx + 1, feature_maps)
+
+ if self.with_extra_blocks:
+ self.extra_block_list = []
+ extra_out_c = int(make_divisible(self.scale * self.cfgs[-1][1], 4))
+ lr_idx = min(idx // 3, len(lr_mult_list) - 1)
+ lr_mult = lr_mult_list[lr_idx]
+
+ conv_extra = self.add_sublayer(
+ "conv" + str(idx + 2),
+ sublayer=ConvBNLayer(
+ in_c=inplanes,
+ out_c=extra_out_c,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ num_groups=1,
+ act="relu6",
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="conv" + str(idx + 2)))
+ self.extra_block_list.append(conv_extra)
+ idx += 1
+ self._update_out_channels(extra_out_c, idx + 1, feature_maps)
+
+ for j, block_filter in enumerate(self.extra_block_filters):
+ in_c = extra_out_c if j == 0 else self.extra_block_filters[j -
+ 1][1]
+ conv_extra = self.add_sublayer(
+ "conv" + str(idx + 2),
+ sublayer=ExtraBlockDW(
+ in_c,
+ block_filter[0],
+ block_filter[1],
+ stride=2,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name='conv' + str(idx + 2)))
+ self.extra_block_list.append(conv_extra)
+ idx += 1
+ self._update_out_channels(block_filter[1], idx + 1,
+ feature_maps)
+
+ def _update_out_channels(self, channel, feature_idx, feature_maps):
+ if feature_idx in feature_maps:
+ self._out_channels.append(channel)
+
+ def forward(self, inputs):
+ x = self.conv1(inputs['image'])
+ outs = []
+ for idx, ghost_bottleneck in enumerate(self.ghost_bottleneck_list):
+ x = ghost_bottleneck(x)
+ if idx + 2 in self.feature_maps:
+ if isinstance(x, list):
+ outs.append(x[0])
+ x = x[1]
+ else:
+ outs.append(x)
+
+ if not self.with_extra_blocks:
+ return outs
+
+ for i, block in enumerate(self.extra_block_list):
+ idx = i + len(self.ghost_bottleneck_list)
+ x = block(x)
+ if idx + 2 in self.feature_maps:
+ outs.append(x)
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hardnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hardnet.py
new file mode 100644
index 000000000..14a1599df
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hardnet.py
@@ -0,0 +1,224 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+from ppdet.core.workspace import register
+from ..shape_spec import ShapeSpec
+
+__all__ = ['HarDNet']
+
+
+def ConvLayer(in_channels,
+ out_channels,
+ kernel_size=3,
+ stride=1,
+ bias_attr=False):
+ layer = nn.Sequential(
+ ('conv', nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=kernel_size // 2,
+ groups=1,
+ bias_attr=bias_attr)), ('norm', nn.BatchNorm2D(out_channels)),
+ ('relu', nn.ReLU6()))
+ return layer
+
+
+def DWConvLayer(in_channels,
+ out_channels,
+ kernel_size=3,
+ stride=1,
+ bias_attr=False):
+ layer = nn.Sequential(
+ ('dwconv', nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=1,
+ groups=out_channels,
+ bias_attr=bias_attr)), ('norm', nn.BatchNorm2D(out_channels)))
+ return layer
+
+
+def CombConvLayer(in_channels, out_channels, kernel_size=1, stride=1):
+ layer = nn.Sequential(
+ ('layer1', ConvLayer(
+ in_channels, out_channels, kernel_size=kernel_size)),
+ ('layer2', DWConvLayer(
+ out_channels, out_channels, stride=stride)))
+ return layer
+
+
+class HarDBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ growth_rate,
+ grmul,
+ n_layers,
+ keepBase=False,
+ residual_out=False,
+ dwconv=False):
+ super().__init__()
+ self.keepBase = keepBase
+ self.links = []
+ layers_ = []
+ self.out_channels = 0
+ for i in range(n_layers):
+ outch, inch, link = self.get_link(i + 1, in_channels, growth_rate,
+ grmul)
+ self.links.append(link)
+ if dwconv:
+ layers_.append(CombConvLayer(inch, outch))
+ else:
+ layers_.append(ConvLayer(inch, outch))
+
+ if (i % 2 == 0) or (i == n_layers - 1):
+ self.out_channels += outch
+ self.layers = nn.LayerList(layers_)
+
+ def get_out_ch(self):
+ return self.out_channels
+
+ def get_link(self, layer, base_ch, growth_rate, grmul):
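+        # Layer k links back to layers k - 2**i for every i with k % 2**i == 0,
+        # widening by grmul per extra link: e.g. layer 6 links to layers 5 and 4,
+        # so its out_channels = growth_rate * grmul, rounded to an even number.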
+ if layer == 0:
+ return base_ch, 0, []
+ out_channels = growth_rate
+
+ link = []
+ for i in range(10):
+ dv = 2**i
+ if layer % dv == 0:
+ k = layer - dv
+ link.append(k)
+ if i > 0:
+ out_channels *= grmul
+
+ out_channels = int(int(out_channels + 1) / 2) * 2
+ in_channels = 0
+
+ for i in link:
+ ch, _, _ = self.get_link(i, base_ch, growth_rate, grmul)
+ in_channels += ch
+
+ return out_channels, in_channels, link
+
+ def forward(self, x):
+ layers_ = [x]
+
+ for layer in range(len(self.layers)):
+ link = self.links[layer]
+ tin = []
+ for i in link:
+ tin.append(layers_[i])
+ if len(tin) > 1:
+ x = paddle.concat(tin, 1)
+ else:
+ x = tin[0]
+ out = self.layers[layer](x)
+ layers_.append(out)
+
+ t = len(layers_)
+ out_ = []
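+        # keep the base input (if requested), every odd-indexed layer, and the last layer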
+ for i in range(t):
+ if (i == 0 and self.keepBase) or (i == t - 1) or (i % 2 == 1):
+ out_.append(layers_[i])
+ out = paddle.concat(out_, 1)
+
+ return out
+
+
+@register
+class HarDNet(nn.Layer):
+ def __init__(self, depth_wise=False, return_idx=[1, 3, 8, 13], arch=85):
+ super(HarDNet, self).__init__()
+        assert arch in [68, 85], "HarDNet-{} is not supported.".format(arch)
+ if arch == 85:
+ first_ch = [48, 96]
+ second_kernel = 3
+ ch_list = [192, 256, 320, 480, 720]
+ grmul = 1.7
+ gr = [24, 24, 28, 36, 48]
+ n_layers = [8, 16, 16, 16, 16]
+ elif arch == 68:
+ first_ch = [32, 64]
+ second_kernel = 3
+ ch_list = [128, 256, 320, 640]
+ grmul = 1.7
+ gr = [14, 16, 20, 40]
+ n_layers = [8, 16, 16, 16]
+
+ self.return_idx = return_idx
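+        # note: these output channels appear to assume arch=85 with the default return_idx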
+ self._out_channels = [96, 214, 458, 784]
+
+ avg_pool = True
+ if depth_wise:
+ second_kernel = 1
+ avg_pool = False
+
+ blks = len(n_layers)
+ self.base = nn.LayerList([])
+
+ # First Layer: Standard Conv3x3, Stride=2
+ self.base.append(
+ ConvLayer(
+ in_channels=3,
+ out_channels=first_ch[0],
+ kernel_size=3,
+ stride=2,
+ bias_attr=False))
+
+ # Second Layer
+ self.base.append(
+ ConvLayer(
+ first_ch[0], first_ch[1], kernel_size=second_kernel))
+
+ # Avgpooling or DWConv3x3 downsampling
+ if avg_pool:
+ self.base.append(nn.AvgPool2D(kernel_size=3, stride=2, padding=1))
+ else:
+ self.base.append(DWConvLayer(first_ch[1], first_ch[1], stride=2))
+
+ # Build all HarDNet blocks
+ ch = first_ch[1]
+ for i in range(blks):
+ blk = HarDBlock(ch, gr[i], grmul, n_layers[i], dwconv=depth_wise)
+ ch = blk.out_channels
+ self.base.append(blk)
+
+ if i != blks - 1:
+ self.base.append(ConvLayer(ch, ch_list[i], kernel_size=1))
+ ch = ch_list[i]
+ if i == 0:
+ self.base.append(
+ nn.AvgPool2D(
+ kernel_size=2, stride=2, ceil_mode=True))
+ elif i != blks - 1 and i != 1 and i != 3:
+ self.base.append(nn.AvgPool2D(kernel_size=2, stride=2))
+
+ def forward(self, inputs):
+ x = inputs['image']
+ outs = []
+ for i, layer in enumerate(self.base):
+ x = layer(x)
+ if i in self.return_idx:
+ outs.append(x)
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self._out_channels[i]) for i in range(4)]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hrnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hrnet.py
new file mode 100644
index 000000000..d92aa95f5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/hrnet.py
@@ -0,0 +1,727 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn import AdaptiveAvgPool2D, Linear
+from paddle.regularizer import L2Decay
+from paddle import ParamAttr
+from paddle.nn.initializer import Normal, Uniform
+from numbers import Integral
+import math
+
+from ppdet.core.workspace import register
+from ..shape_spec import ShapeSpec
+
+__all__ = ['HRNet']
+
+
+class ConvNormLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ stride=1,
+ norm_type='bn',
+ norm_groups=32,
+ use_dcn=False,
+ norm_decay=0.,
+ freeze_norm=False,
+ act=None,
+ name=None):
+ super(ConvNormLayer, self).__init__()
+ assert norm_type in ['bn', 'sync_bn', 'gn']
+
+ self.act = act
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=False)
+
+ norm_lr = 0. if freeze_norm else 1.
+
+ param_attr = ParamAttr(
+ learning_rate=norm_lr, regularizer=L2Decay(norm_decay))
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr, regularizer=L2Decay(norm_decay))
+ global_stats = True if freeze_norm else False
+ if norm_type in ['bn', 'sync_bn']:
+ self.norm = nn.BatchNorm(
+ ch_out,
+ param_attr=param_attr,
+ bias_attr=bias_attr,
+ use_global_stats=global_stats)
+ elif norm_type == 'gn':
+ self.norm = nn.GroupNorm(
+ num_groups=norm_groups,
+ num_channels=ch_out,
+ weight_attr=param_attr,
+ bias_attr=bias_attr)
+ norm_params = self.norm.parameters()
+ if freeze_norm:
+ for param in norm_params:
+ param.stop_gradient = True
+
+ def forward(self, inputs):
+ out = self.conv(inputs)
+ out = self.norm(out)
+
+ if self.act == 'relu':
+ out = F.relu(out)
+ return out
+
+
+class Layer1(nn.Layer):
+ def __init__(self,
+ num_channels,
+ has_se=False,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(Layer1, self).__init__()
+
+ self.bottleneck_block_list = []
+
+ for i in range(4):
+ bottleneck_block = self.add_sublayer(
+ "block_{}_{}".format(name, i + 1),
+ BottleneckBlock(
+ num_channels=num_channels if i == 0 else 256,
+ num_filters=64,
+ has_se=has_se,
+ stride=1,
+ downsample=True if i == 0 else False,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + '_' + str(i + 1)))
+ self.bottleneck_block_list.append(bottleneck_block)
+
+ def forward(self, input):
+ conv = input
+ for block_func in self.bottleneck_block_list:
+ conv = block_func(conv)
+ return conv
+
+
+class TransitionLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(TransitionLayer, self).__init__()
+
+ num_in = len(in_channels)
+ num_out = len(out_channels)
+ out = []
+ self.conv_bn_func_list = []
+ for i in range(num_out):
+ residual = None
+ if i < num_in:
+ if in_channels[i] != out_channels[i]:
+ residual = self.add_sublayer(
+ "transition_{}_layer_{}".format(name, i + 1),
+ ConvNormLayer(
+ ch_in=in_channels[i],
+ ch_out=out_channels[i],
+ filter_size=3,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act='relu',
+ name=name + '_layer_' + str(i + 1)))
+ else:
+ residual = self.add_sublayer(
+ "transition_{}_layer_{}".format(name, i + 1),
+ ConvNormLayer(
+ ch_in=in_channels[-1],
+ ch_out=out_channels[i],
+ filter_size=3,
+ stride=2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act='relu',
+ name=name + '_layer_' + str(i + 1)))
+ self.conv_bn_func_list.append(residual)
+
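+    # Existing branches get a 3x3 conv only when their channel counts change;
+    # each extra output branch is derived from the last input with a stride-2 conv.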
+ def forward(self, input):
+ outs = []
+ for idx, conv_bn_func in enumerate(self.conv_bn_func_list):
+ if conv_bn_func is None:
+ outs.append(input[idx])
+ else:
+ if idx < len(input):
+ outs.append(conv_bn_func(input[idx]))
+ else:
+ outs.append(conv_bn_func(input[-1]))
+ return outs
+
+
+class Branches(nn.Layer):
+ def __init__(self,
+ block_num,
+ in_channels,
+ out_channels,
+ has_se=False,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(Branches, self).__init__()
+
+ self.basic_block_list = []
+ for i in range(len(out_channels)):
+ self.basic_block_list.append([])
+ for j in range(block_num):
+ in_ch = in_channels[i] if j == 0 else out_channels[i]
+ basic_block_func = self.add_sublayer(
+ "bb_{}_branch_layer_{}_{}".format(name, i + 1, j + 1),
+ BasicBlock(
+ num_channels=in_ch,
+ num_filters=out_channels[i],
+ has_se=has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + '_branch_layer_' + str(i + 1) + '_' +
+ str(j + 1)))
+ self.basic_block_list[i].append(basic_block_func)
+
+ def forward(self, inputs):
+ outs = []
+ for idx, input in enumerate(inputs):
+ conv = input
+ basic_block_list = self.basic_block_list[idx]
+ for basic_block_func in basic_block_list:
+ conv = basic_block_func(conv)
+ outs.append(conv)
+ return outs
+
+
+class BottleneckBlock(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ has_se,
+ stride=1,
+ downsample=False,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(BottleneckBlock, self).__init__()
+
+ self.has_se = has_se
+ self.downsample = downsample
+
+ self.conv1 = ConvNormLayer(
+ ch_in=num_channels,
+ ch_out=num_filters,
+ filter_size=1,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act="relu",
+ name=name + "_conv1")
+ self.conv2 = ConvNormLayer(
+ ch_in=num_filters,
+ ch_out=num_filters,
+ filter_size=3,
+ stride=stride,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act="relu",
+ name=name + "_conv2")
+ self.conv3 = ConvNormLayer(
+ ch_in=num_filters,
+ ch_out=num_filters * 4,
+ filter_size=1,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act=None,
+ name=name + "_conv3")
+
+ if self.downsample:
+ self.conv_down = ConvNormLayer(
+ ch_in=num_channels,
+ ch_out=num_filters * 4,
+ filter_size=1,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act=None,
+ name=name + "_downsample")
+
+ if self.has_se:
+ self.se = SELayer(
+ num_channels=num_filters * 4,
+ num_filters=num_filters * 4,
+ reduction_ratio=16,
+ name='fc' + name)
+
+ def forward(self, input):
+ residual = input
+ conv1 = self.conv1(input)
+ conv2 = self.conv2(conv1)
+ conv3 = self.conv3(conv2)
+
+ if self.downsample:
+ residual = self.conv_down(input)
+
+ if self.has_se:
+ conv3 = self.se(conv3)
+
+ y = paddle.add(x=residual, y=conv3)
+ y = F.relu(y)
+ return y
+
+
+class BasicBlock(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ stride=1,
+ has_se=False,
+ downsample=False,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(BasicBlock, self).__init__()
+
+ self.has_se = has_se
+ self.downsample = downsample
+ self.conv1 = ConvNormLayer(
+ ch_in=num_channels,
+ ch_out=num_filters,
+ filter_size=3,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ stride=stride,
+ act="relu",
+ name=name + "_conv1")
+ self.conv2 = ConvNormLayer(
+ ch_in=num_filters,
+ ch_out=num_filters,
+ filter_size=3,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ stride=1,
+ act=None,
+ name=name + "_conv2")
+
+ if self.downsample:
+ self.conv_down = ConvNormLayer(
+ ch_in=num_channels,
+ ch_out=num_filters * 4,
+ filter_size=1,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act=None,
+ name=name + "_downsample")
+
+ if self.has_se:
+ self.se = SELayer(
+ num_channels=num_filters,
+ num_filters=num_filters,
+ reduction_ratio=16,
+ name='fc' + name)
+
+ def forward(self, input):
+ residual = input
+ conv1 = self.conv1(input)
+ conv2 = self.conv2(conv1)
+
+ if self.downsample:
+ residual = self.conv_down(input)
+
+ if self.has_se:
+ conv2 = self.se(conv2)
+
+ y = paddle.add(x=residual, y=conv2)
+ y = F.relu(y)
+ return y
+
+
+class SELayer(nn.Layer):
+ def __init__(self, num_channels, num_filters, reduction_ratio, name=None):
+ super(SELayer, self).__init__()
+
+ self.pool2d_gap = AdaptiveAvgPool2D(1)
+
+ self._num_channels = num_channels
+
+ med_ch = int(num_channels / reduction_ratio)
+ stdv = 1.0 / math.sqrt(num_channels * 1.0)
+ self.squeeze = Linear(
+ num_channels,
+ med_ch,
+ weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
+
+ stdv = 1.0 / math.sqrt(med_ch * 1.0)
+ self.excitation = Linear(
+ med_ch,
+ num_filters,
+ weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
+
+ def forward(self, input):
+ pool = self.pool2d_gap(input)
+ pool = paddle.squeeze(pool, axis=[2, 3])
+ squeeze = self.squeeze(pool)
+ squeeze = F.relu(squeeze)
+ excitation = self.excitation(squeeze)
+ excitation = F.sigmoid(excitation)
+ excitation = paddle.unsqueeze(excitation, axis=[2, 3])
+ out = input * excitation
+ return out
+
+
+class Stage(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_modules,
+ num_filters,
+ has_se=False,
+ norm_decay=0.,
+ freeze_norm=True,
+ multi_scale_output=True,
+ name=None):
+ super(Stage, self).__init__()
+
+ self._num_modules = num_modules
+ self.stage_func_list = []
+ for i in range(num_modules):
+ if i == num_modules - 1 and not multi_scale_output:
+ stage_func = self.add_sublayer(
+ "stage_{}_{}".format(name, i + 1),
+ HighResolutionModule(
+ num_channels=num_channels,
+ num_filters=num_filters,
+ has_se=has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ multi_scale_output=False,
+ name=name + '_' + str(i + 1)))
+ else:
+ stage_func = self.add_sublayer(
+ "stage_{}_{}".format(name, i + 1),
+ HighResolutionModule(
+ num_channels=num_channels,
+ num_filters=num_filters,
+ has_se=has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + '_' + str(i + 1)))
+
+ self.stage_func_list.append(stage_func)
+
+ def forward(self, input):
+ out = input
+ for idx in range(self._num_modules):
+ out = self.stage_func_list[idx](out)
+ return out
+
+
+class HighResolutionModule(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ has_se=False,
+ multi_scale_output=True,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(HighResolutionModule, self).__init__()
+ self.branches_func = Branches(
+ block_num=4,
+ in_channels=num_channels,
+ out_channels=num_filters,
+ has_se=has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name)
+
+ self.fuse_func = FuseLayers(
+ in_channels=num_filters,
+ out_channels=num_filters,
+ multi_scale_output=multi_scale_output,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name)
+
+ def forward(self, input):
+ out = self.branches_func(input)
+ out = self.fuse_func(out)
+ return out
+
+
+class FuseLayers(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ multi_scale_output=True,
+ norm_decay=0.,
+ freeze_norm=True,
+ name=None):
+ super(FuseLayers, self).__init__()
+
+ self._actual_ch = len(in_channels) if multi_scale_output else 1
+ self._in_channels = in_channels
+
+ self.residual_func_list = []
+ for i in range(self._actual_ch):
+ for j in range(len(in_channels)):
+ residual_func = None
+ if j > i:
+ residual_func = self.add_sublayer(
+ "residual_{}_layer_{}_{}".format(name, i + 1, j + 1),
+ ConvNormLayer(
+ ch_in=in_channels[j],
+ ch_out=out_channels[i],
+ filter_size=1,
+ stride=1,
+ act=None,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + '_layer_' + str(i + 1) + '_' +
+ str(j + 1)))
+ self.residual_func_list.append(residual_func)
+ elif j < i:
+ pre_num_filters = in_channels[j]
+ for k in range(i - j):
+ if k == i - j - 1:
+ residual_func = self.add_sublayer(
+ "residual_{}_layer_{}_{}_{}".format(
+ name, i + 1, j + 1, k + 1),
+ ConvNormLayer(
+ ch_in=pre_num_filters,
+ ch_out=out_channels[i],
+ filter_size=3,
+ stride=2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act=None,
+ name=name + '_layer_' + str(i + 1) + '_' +
+ str(j + 1) + '_' + str(k + 1)))
+ pre_num_filters = out_channels[i]
+ else:
+ residual_func = self.add_sublayer(
+ "residual_{}_layer_{}_{}_{}".format(
+ name, i + 1, j + 1, k + 1),
+ ConvNormLayer(
+ ch_in=pre_num_filters,
+ ch_out=out_channels[j],
+ filter_size=3,
+ stride=2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act="relu",
+ name=name + '_layer_' + str(i + 1) + '_' +
+ str(j + 1) + '_' + str(k + 1)))
+ pre_num_filters = out_channels[j]
+ self.residual_func_list.append(residual_func)
+
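+    # Multi-scale fusion: for output branch i, lower-resolution inputs (j > i)
+    # are mapped by a 1x1 conv and upsampled by 2**(j - i); higher-resolution
+    # inputs (j < i) are downsampled by chains of stride-2 3x3 convs; the aligned
+    # features are summed and passed through ReLU.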
+ def forward(self, input):
+ outs = []
+ residual_func_idx = 0
+ for i in range(self._actual_ch):
+ residual = input[i]
+ for j in range(len(self._in_channels)):
+ if j > i:
+ y = self.residual_func_list[residual_func_idx](input[j])
+ residual_func_idx += 1
+ y = F.interpolate(y, scale_factor=2**(j - i))
+ residual = paddle.add(x=residual, y=y)
+ elif j < i:
+ y = input[j]
+ for k in range(i - j):
+ y = self.residual_func_list[residual_func_idx](y)
+ residual_func_idx += 1
+
+ residual = paddle.add(x=residual, y=y)
+ residual = F.relu(residual)
+ outs.append(residual)
+
+ return outs
+
+
+@register
+class HRNet(nn.Layer):
+ """
+ HRNet, see https://arxiv.org/abs/1908.07919
+
+ Args:
+ width (int): the width of HRNet
+ has_se (bool): whether to add SE block for each stage
+ freeze_at (int): the stage to freeze
+ freeze_norm (bool): whether to freeze norm in HRNet
+ norm_decay (float): weight decay for normalization layer weights
+        return_idx (List): the indices of the stages to return
+ upsample (bool): whether to upsample and concat the backbone feats
+ """
+
+ def __init__(self,
+ width=18,
+ has_se=False,
+ freeze_at=0,
+ freeze_norm=True,
+ norm_decay=0.,
+ return_idx=[0, 1, 2, 3],
+ upsample=False):
+ super(HRNet, self).__init__()
+
+ self.width = width
+ self.has_se = has_se
+ if isinstance(return_idx, Integral):
+ return_idx = [return_idx]
+
+        assert len(return_idx) > 0, "need at least one return index"
+ self.freeze_at = freeze_at
+ self.return_idx = return_idx
+ self.upsample = upsample
+
+ self.channels = {
+ 18: [[18, 36], [18, 36, 72], [18, 36, 72, 144]],
+ 30: [[30, 60], [30, 60, 120], [30, 60, 120, 240]],
+ 32: [[32, 64], [32, 64, 128], [32, 64, 128, 256]],
+ 40: [[40, 80], [40, 80, 160], [40, 80, 160, 320]],
+ 44: [[44, 88], [44, 88, 176], [44, 88, 176, 352]],
+ 48: [[48, 96], [48, 96, 192], [48, 96, 192, 384]],
+ 60: [[60, 120], [60, 120, 240], [60, 120, 240, 480]],
+ 64: [[64, 128], [64, 128, 256], [64, 128, 256, 512]]
+ }
+
+ channels_2, channels_3, channels_4 = self.channels[width]
+ num_modules_2, num_modules_3, num_modules_4 = 1, 4, 3
+ self._out_channels = [sum(channels_4)] if self.upsample else channels_4
+ self._out_strides = [4] if self.upsample else [4, 8, 16, 32]
+
+ self.conv_layer1_1 = ConvNormLayer(
+ ch_in=3,
+ ch_out=64,
+ filter_size=3,
+ stride=2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act='relu',
+ name="layer1_1")
+
+ self.conv_layer1_2 = ConvNormLayer(
+ ch_in=64,
+ ch_out=64,
+ filter_size=3,
+ stride=2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ act='relu',
+ name="layer1_2")
+
+ self.la1 = Layer1(
+ num_channels=64,
+ has_se=has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="layer2")
+
+ self.tr1 = TransitionLayer(
+ in_channels=[256],
+ out_channels=channels_2,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="tr1")
+
+ self.st2 = Stage(
+ num_channels=channels_2,
+ num_modules=num_modules_2,
+ num_filters=channels_2,
+ has_se=self.has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="st2")
+
+ self.tr2 = TransitionLayer(
+ in_channels=channels_2,
+ out_channels=channels_3,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="tr2")
+
+ self.st3 = Stage(
+ num_channels=channels_3,
+ num_modules=num_modules_3,
+ num_filters=channels_3,
+ has_se=self.has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="st3")
+
+ self.tr3 = TransitionLayer(
+ in_channels=channels_3,
+ out_channels=channels_4,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="tr3")
+ self.st4 = Stage(
+ num_channels=channels_4,
+ num_modules=num_modules_4,
+ num_filters=channels_4,
+ has_se=self.has_se,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ multi_scale_output=len(return_idx) > 1,
+ name="st4")
+
+ def forward(self, inputs):
+ x = inputs['image']
+ conv1 = self.conv_layer1_1(x)
+ conv2 = self.conv_layer1_2(conv1)
+
+ la1 = self.la1(conv2)
+ tr1 = self.tr1([la1])
+ st2 = self.st2(tr1)
+ tr2 = self.tr2(st2)
+
+ st3 = self.st3(tr2)
+ tr3 = self.tr3(st3)
+
+ st4 = self.st4(tr3)
+
+ if self.upsample:
+ # Upsampling
+ x0_h, x0_w = st4[0].shape[2:4]
+ x1 = F.upsample(st4[1], size=(x0_h, x0_w), mode='bilinear')
+ x2 = F.upsample(st4[2], size=(x0_h, x0_w), mode='bilinear')
+ x3 = F.upsample(st4[3], size=(x0_h, x0_w), mode='bilinear')
+ x = paddle.concat([st4[0], x1, x2, x3], 1)
+ return x
+
+ res = []
+ for i, layer in enumerate(st4):
+ if i == self.freeze_at:
+ layer.stop_gradient = True
+ if i in self.return_idx:
+ res.append(layer)
+
+ return res
+
+ @property
+ def out_shape(self):
+ if self.upsample:
+ self.return_idx = [0]
+ return [
+ ShapeSpec(
+ channels=self._out_channels[i], stride=self._out_strides[i])
+ for i in self.return_idx
+ ]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lcnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lcnet.py
new file mode 100644
index 000000000..fd8ad4e46
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lcnet.py
@@ -0,0 +1,258 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+from paddle import ParamAttr
+from paddle.nn import AdaptiveAvgPool2D, BatchNorm, Conv2D, Dropout, Linear
+from paddle.regularizer import L2Decay
+from paddle.nn.initializer import KaimingNormal
+
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+
+__all__ = ['LCNet']
+
+NET_CONFIG = {
+ "blocks2":
+ #k, in_c, out_c, s, use_se
+ [[3, 16, 32, 1, False], ],
+ "blocks3": [
+ [3, 32, 64, 2, False],
+ [3, 64, 64, 1, False],
+ ],
+ "blocks4": [
+ [3, 64, 128, 2, False],
+ [3, 128, 128, 1, False],
+ ],
+ "blocks5": [
+ [3, 128, 256, 2, False],
+ [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False],
+ ],
+ "blocks6": [[5, 256, 512, 2, True], [5, 512, 512, 1, True]]
+}
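+# Each row of NET_CONFIG expands to one DepthwiseSeparable block and reads as
+# [kernel, in_channels, out_channels, stride, use_se]; e.g. [3, 16, 32, 1, False]
+# is a 3x3 depthwise conv from 16 to 32 channels with stride 1 and no SE.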
+
+
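+# make_divisible rounds a channel count to the nearest multiple of `divisor`
+# (ties round up) and never returns less than `min_value`; if rounding would
+# drop below 90% of the requested value, the result is bumped up one step.
+# E.g. make_divisible(36) == 40 and make_divisible(11) == 16 (8 < 0.9 * 11).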
+def make_divisible(v, divisor=8, min_value=None):
+ if min_value is None:
+ min_value = divisor
+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+ if new_v < 0.9 * v:
+ new_v += divisor
+ return new_v
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ num_channels,
+ filter_size,
+ num_filters,
+ stride,
+ num_groups=1):
+ super().__init__()
+
+ self.conv = Conv2D(
+ in_channels=num_channels,
+ out_channels=num_filters,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=num_groups,
+ weight_attr=ParamAttr(initializer=KaimingNormal()),
+ bias_attr=False)
+
+ self.bn = BatchNorm(
+ num_filters,
+ param_attr=ParamAttr(regularizer=L2Decay(0.0)),
+ bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
+ self.hardswish = nn.Hardswish()
+
+ def forward(self, x):
+ x = self.conv(x)
+ x = self.bn(x)
+ x = self.hardswish(x)
+ return x
+
+
+class DepthwiseSeparable(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ stride,
+ dw_size=3,
+ use_se=False):
+ super().__init__()
+ self.use_se = use_se
+ self.dw_conv = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_channels,
+ filter_size=dw_size,
+ stride=stride,
+ num_groups=num_channels)
+ if use_se:
+ self.se = SEModule(num_channels)
+ self.pw_conv = ConvBNLayer(
+ num_channels=num_channels,
+ filter_size=1,
+ num_filters=num_filters,
+ stride=1)
+
+ def forward(self, x):
+ x = self.dw_conv(x)
+ if self.use_se:
+ x = self.se(x)
+ x = self.pw_conv(x)
+ return x
+
+
+class SEModule(nn.Layer):
+ def __init__(self, channel, reduction=4):
+ super().__init__()
+ self.avg_pool = AdaptiveAvgPool2D(1)
+ self.conv1 = Conv2D(
+ in_channels=channel,
+ out_channels=channel // reduction,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self.relu = nn.ReLU()
+ self.conv2 = Conv2D(
+ in_channels=channel // reduction,
+ out_channels=channel,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self.hardsigmoid = nn.Hardsigmoid()
+
+ def forward(self, x):
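+ # Squeeze-and-excite: global average pool to 1x1, bottleneck through two
+ # 1x1 convs (ReLU then hard-sigmoid), then rescale the input channel-wise.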
+ identity = x
+ x = self.avg_pool(x)
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.conv2(x)
+ x = self.hardsigmoid(x)
+ x = paddle.multiply(x=identity, y=x)
+ return x
+
+
+@register
+@serializable
+class LCNet(nn.Layer):
+ def __init__(self, scale=1.0, feature_maps=[3, 4, 5]):
+ super().__init__()
+ self.scale = scale
+ self.feature_maps = feature_maps
+
+ out_channels = []
+
+ self.conv1 = ConvBNLayer(
+ num_channels=3,
+ filter_size=3,
+ num_filters=make_divisible(16 * scale),
+ stride=2)
+
+ self.blocks2 = nn.Sequential(* [
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks2"])
+ ])
+
+ self.blocks3 = nn.Sequential(* [
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks3"])
+ ])
+
+ out_channels.append(
+ make_divisible(NET_CONFIG["blocks3"][-1][2] * scale))
+
+ self.blocks4 = nn.Sequential(* [
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks4"])
+ ])
+
+ out_channels.append(
+ make_divisible(NET_CONFIG["blocks4"][-1][2] * scale))
+
+ self.blocks5 = nn.Sequential(* [
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks5"])
+ ])
+
+ out_channels.append(
+ make_divisible(NET_CONFIG["blocks5"][-1][2] * scale))
+
+ self.blocks6 = nn.Sequential(* [
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks6"])
+ ])
+
+ out_channels.append(
+ make_divisible(NET_CONFIG["blocks6"][-1][2] * scale))
+ self._out_channels = [
+ ch for idx, ch in enumerate(out_channels) if idx + 2 in feature_maps
+ ]
+
+ def forward(self, inputs):
+ x = inputs['image']
+ outs = []
+
+ x = self.conv1(x)
+ x = self.blocks2(x)
+ x = self.blocks3(x)
+ outs.append(x)
+ x = self.blocks4(x)
+ outs.append(x)
+ x = self.blocks5(x)
+ outs.append(x)
+ x = self.blocks6(x)
+ outs.append(x)
+ outs = [o for i, o in enumerate(outs) if i + 2 in self.feature_maps]
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
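+
+
+# Minimal usage sketch (input size and resulting shapes are illustrative only):
+#
+#   import paddle
+#   net = LCNet(scale=1.0, feature_maps=[3, 4, 5])
+#   feats = net({'image': paddle.randn([1, 3, 320, 320])})
+#   # -> shapes [1, 128, 40, 40], [1, 256, 20, 20], [1, 512, 10, 10]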
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lite_hrnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lite_hrnet.py
new file mode 100644
index 000000000..f14aae8e2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/lite_hrnet.py
@@ -0,0 +1,881 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from numbers import Integral
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from paddle.nn.initializer import Normal, Constant
+from ppdet.core.workspace import register
+from ppdet.modeling.shape_spec import ShapeSpec
+from ppdet.modeling.ops import channel_shuffle
+from .. import layers as L
+
+__all__ = ['LiteHRNet']
+
+
+class ConvNormLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ stride=1,
+ groups=1,
+ norm_type=None,
+ norm_groups=32,
+ norm_decay=0.,
+ freeze_norm=False,
+ act=None):
+ super(ConvNormLayer, self).__init__()
+ self.act = act
+ norm_lr = 0. if freeze_norm else 1.
+ if norm_type is not None:
+ assert norm_type in ['bn', 'sync_bn', 'gn'],\
+ "norm_type should be one of ['bn', 'sync_bn', 'gn'], but got {}".format(norm_type)
+ param_attr = ParamAttr(
+ initializer=Constant(1.0),
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay), )
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr, regularizer=L2Decay(norm_decay))
+ global_stats = True if freeze_norm else False
+ if norm_type in ['bn', 'sync_bn']:
+ self.norm = nn.BatchNorm(
+ ch_out,
+ param_attr=param_attr,
+ bias_attr=bias_attr,
+ use_global_stats=global_stats, )
+ elif norm_type == 'gn':
+ self.norm = nn.GroupNorm(
+ num_groups=norm_groups,
+ num_channels=ch_out,
+ weight_attr=param_attr,
+ bias_attr=bias_attr)
+ norm_params = self.norm.parameters()
+ if freeze_norm:
+ for param in norm_params:
+ param.stop_gradient = True
+ conv_bias_attr = False
+ else:
+ conv_bias_attr = True
+ self.norm = None
+
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.001)),
+ bias_attr=conv_bias_attr)
+
+ def forward(self, inputs):
+ out = self.conv(inputs)
+ if self.norm is not None:
+ out = self.norm(out)
+
+ if self.act == 'relu':
+ out = F.relu(out)
+ elif self.act == 'sigmoid':
+ out = F.sigmoid(out)
+ return out
+
+
+class DepthWiseSeparableConvNormLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ stride=1,
+ dw_norm_type=None,
+ pw_norm_type=None,
+ norm_decay=0.,
+ freeze_norm=False,
+ dw_act=None,
+ pw_act=None):
+ super(DepthWiseSeparableConvNormLayer, self).__init__()
+ self.depthwise_conv = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_in,
+ filter_size=filter_size,
+ stride=stride,
+ groups=ch_in,
+ norm_type=dw_norm_type,
+ act=dw_act,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm, )
+ self.pointwise_conv = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ norm_type=pw_norm_type,
+ act=pw_act,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm, )
+
+ def forward(self, x):
+ x = self.depthwise_conv(x)
+ x = self.pointwise_conv(x)
+ return x
+
+
+class CrossResolutionWeightingModule(nn.Layer):
+ def __init__(self,
+ channels,
+ ratio=16,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(CrossResolutionWeightingModule, self).__init__()
+ self.channels = channels
+ total_channel = sum(channels)
+ self.conv1 = ConvNormLayer(
+ ch_in=total_channel,
+ ch_out=total_channel // ratio,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ self.conv2 = ConvNormLayer(
+ ch_in=total_channel // ratio,
+ ch_out=total_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='sigmoid',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+
+ def forward(self, x):
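+ # Pool every branch down to the smallest branch's spatial size, compute
+ # joint channel weights over the concatenation, then split the weights and
+ # upsample each share back to its branch's resolution before rescaling.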
+ mini_size = x[-1].shape[-2:]
+ out = [F.adaptive_avg_pool2d(s, mini_size) for s in x[:-1]] + [x[-1]]
+ out = paddle.concat(out, 1)
+ out = self.conv1(out)
+ out = self.conv2(out)
+ out = paddle.split(out, self.channels, 1)
+ out = [
+ s * F.interpolate(
+ a, s.shape[-2:], mode='nearest') for s, a in zip(x, out)
+ ]
+ return out
+
+
+class SpatialWeightingModule(nn.Layer):
+ def __init__(self, in_channel, ratio=16, freeze_norm=False, norm_decay=0.):
+ super(SpatialWeightingModule, self).__init__()
+ self.global_avgpooling = nn.AdaptiveAvgPool2D(1)
+ self.conv1 = ConvNormLayer(
+ ch_in=in_channel,
+ ch_out=in_channel // ratio,
+ filter_size=1,
+ stride=1,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ self.conv2 = ConvNormLayer(
+ ch_in=in_channel // ratio,
+ ch_out=in_channel,
+ filter_size=1,
+ stride=1,
+ act='sigmoid',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+
+ def forward(self, x):
+ out = self.global_avgpooling(x)
+ out = self.conv1(out)
+ out = self.conv2(out)
+ return x * out
+
+
+class ConditionalChannelWeightingBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ stride,
+ reduce_ratio,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(ConditionalChannelWeightingBlock, self).__init__()
+ assert stride in [1, 2]
+ branch_channels = [channel // 2 for channel in in_channels]
+
+ self.cross_resolution_weighting = CrossResolutionWeightingModule(
+ branch_channels,
+ ratio=reduce_ratio,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ self.depthwise_convs = nn.LayerList([
+ ConvNormLayer(
+ channel,
+ channel,
+ filter_size=3,
+ stride=stride,
+ groups=channel,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay) for channel in branch_channels
+ ])
+
+ self.spatial_weighting = nn.LayerList([
+ SpatialWeightingModule(
+ channel,
+ ratio=4,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay) for channel in branch_channels
+ ])
+
+ def forward(self, x):
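+ # Shuffle-style split: half of each branch's channels pass through
+ # unchanged (x1); the other half (x2) get cross-resolution weighting, a
+ # depthwise conv and spatial weighting before the halves are rejoined.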
+ x = [s.chunk(2, axis=1) for s in x]
+ x1 = [s[0] for s in x]
+ x2 = [s[1] for s in x]
+
+ x2 = self.cross_resolution_weighting(x2)
+ x2 = [dw(s) for s, dw in zip(x2, self.depthwise_convs)]
+ x2 = [sw(s) for s, sw in zip(x2, self.spatial_weighting)]
+
+ out = [paddle.concat([s1, s2], axis=1) for s1, s2 in zip(x1, x2)]
+ out = [channel_shuffle(s, groups=2) for s in out]
+ return out
+
+
+class ShuffleUnit(nn.Layer):
+ def __init__(self,
+ in_channel,
+ out_channel,
+ stride,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(ShuffleUnit, self).__init__()
+ branch_channel = out_channel // 2
+ self.stride = stride
+ if self.stride == 1:
+ assert in_channel == branch_channel * 2,\
+ "when stride=1, in_channel {} should equal to branch_channel*2 {}".format(in_channel, branch_channel * 2)
+ if stride > 1:
+ self.branch1 = nn.Sequential(
+ ConvNormLayer(
+ ch_in=in_channel,
+ ch_out=in_channel,
+ filter_size=3,
+ stride=self.stride,
+ groups=in_channel,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay),
+ ConvNormLayer(
+ ch_in=in_channel,
+ ch_out=branch_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay), )
+ self.branch2 = nn.Sequential(
+ ConvNormLayer(
+ ch_in=branch_channel if stride == 1 else in_channel,
+ ch_out=branch_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay),
+ ConvNormLayer(
+ ch_in=branch_channel,
+ ch_out=branch_channel,
+ filter_size=3,
+ stride=self.stride,
+ groups=branch_channel,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay),
+ ConvNormLayer(
+ ch_in=branch_channel,
+ ch_out=branch_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay), )
+
+ def forward(self, x):
+ if self.stride > 1:
+ x1 = self.branch1(x)
+ x2 = self.branch2(x)
+ else:
+ x1, x2 = x.chunk(2, axis=1)
+ x2 = self.branch2(x2)
+ out = paddle.concat([x1, x2], axis=1)
+ out = channel_shuffle(out, groups=2)
+ return out
+
+
+class IterativeHead(nn.Layer):
+ def __init__(self,
+ in_channels,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(IterativeHead, self).__init__()
+ num_branches = len(in_channels)
+ self.in_channels = in_channels[::-1]
+
+ projects = []
+ for i in range(num_branches):
+ if i != num_branches - 1:
+ projects.append(
+ DepthWiseSeparableConvNormLayer(
+ ch_in=self.in_channels[i],
+ ch_out=self.in_channels[i + 1],
+ filter_size=3,
+ stride=1,
+ dw_act=None,
+ pw_act='relu',
+ dw_norm_type=norm_type,
+ pw_norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay))
+ else:
+ projects.append(
+ DepthWiseSeparableConvNormLayer(
+ ch_in=self.in_channels[i],
+ ch_out=self.in_channels[i],
+ filter_size=3,
+ stride=1,
+ dw_act=None,
+ pw_act='relu',
+ dw_norm_type=norm_type,
+ pw_norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay))
+ self.projects = nn.LayerList(projects)
+
+ def forward(self, x):
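+ # Refine from the lowest-resolution branch upwards: each projected output
+ # is bilinearly upsampled and added to the next, higher-resolution branch.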
+ x = x[::-1]
+ y = []
+ last_x = None
+ for i, s in enumerate(x):
+ if last_x is not None:
+ last_x = F.interpolate(
+ last_x,
+ size=s.shape[-2:],
+ mode='bilinear',
+ align_corners=True)
+ s = s + last_x
+ s = self.projects[i](s)
+ y.append(s)
+ last_x = s
+
+ return y[::-1]
+
+
+class Stem(nn.Layer):
+ def __init__(self,
+ in_channel,
+ stem_channel,
+ out_channel,
+ expand_ratio,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(Stem, self).__init__()
+ self.conv1 = ConvNormLayer(
+ in_channel,
+ stem_channel,
+ filter_size=3,
+ stride=2,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ mid_channel = int(round(stem_channel * expand_ratio))
+ branch_channel = stem_channel // 2
+ if stem_channel == out_channel:
+ inc_channel = out_channel - branch_channel
+ else:
+ inc_channel = out_channel - stem_channel
+ self.branch1 = nn.Sequential(
+ ConvNormLayer(
+ ch_in=branch_channel,
+ ch_out=branch_channel,
+ filter_size=3,
+ stride=2,
+ groups=branch_channel,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay),
+ ConvNormLayer(
+ ch_in=branch_channel,
+ ch_out=inc_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay), )
+ self.expand_conv = ConvNormLayer(
+ ch_in=branch_channel,
+ ch_out=mid_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ self.depthwise_conv = ConvNormLayer(
+ ch_in=mid_channel,
+ ch_out=mid_channel,
+ filter_size=3,
+ stride=2,
+ groups=mid_channel,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ self.linear_conv = ConvNormLayer(
+ ch_in=mid_channel,
+ ch_out=branch_channel
+ if stem_channel == out_channel else stem_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ act='relu',
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+
+ def forward(self, x):
+ x = self.conv1(x)
+ x1, x2 = x.chunk(2, axis=1)
+ x1 = self.branch1(x1)
+ x2 = self.expand_conv(x2)
+ x2 = self.depthwise_conv(x2)
+ x2 = self.linear_conv(x2)
+ out = paddle.concat([x1, x2], axis=1)
+ out = channel_shuffle(out, groups=2)
+
+ return out
+
+
+class LiteHRNetModule(nn.Layer):
+ def __init__(self,
+ num_branches,
+ num_blocks,
+ in_channels,
+ reduce_ratio,
+ module_type,
+ multiscale_output=False,
+ with_fuse=True,
+ norm_type='bn',
+ freeze_norm=False,
+ norm_decay=0.):
+ super(LiteHRNetModule, self).__init__()
+ assert num_branches == len(in_channels),\
+ "num_branches {} should equal to num_in_channels {}".format(num_branches, len(in_channels))
+ assert module_type in ['LITE', 'NAIVE'],\
+ "module_type should be one of ['LITE', 'NAIVE']"
+ self.num_branches = num_branches
+ self.in_channels = in_channels
+ self.multiscale_output = multiscale_output
+ self.with_fuse = with_fuse
+ self.norm_type = 'bn'
+ self.module_type = module_type
+
+ if self.module_type == 'LITE':
+ self.layers = self._make_weighting_blocks(
+ num_blocks,
+ reduce_ratio,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+ elif self.module_type == 'NAIVE':
+ self.layers = self._make_naive_branches(
+ num_branches,
+ num_blocks,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay)
+
+ if self.with_fuse:
+ self.fuse_layers = self._make_fuse_layers(
+ freeze_norm=freeze_norm, norm_decay=norm_decay)
+ self.relu = nn.ReLU()
+
+ def _make_weighting_blocks(self,
+ num_blocks,
+ reduce_ratio,
+ stride=1,
+ freeze_norm=False,
+ norm_decay=0.):
+ layers = []
+ for i in range(num_blocks):
+ layers.append(
+ ConditionalChannelWeightingBlock(
+ self.in_channels,
+ stride=stride,
+ reduce_ratio=reduce_ratio,
+ norm_type=self.norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay))
+ return nn.Sequential(*layers)
+
+ def _make_naive_branches(self,
+ num_branches,
+ num_blocks,
+ freeze_norm=False,
+ norm_decay=0.):
+ branches = []
+ for branch_idx in range(num_branches):
+ layers = []
+ for i in range(num_blocks):
+ layers.append(
+ ShuffleUnit(
+ self.in_channels[branch_idx],
+ self.in_channels[branch_idx],
+ stride=1,
+ norm_type=self.norm_type,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay))
+ branches.append(nn.Sequential(*layers))
+ return nn.LayerList(branches)
+
+ def _make_fuse_layers(self, freeze_norm=False, norm_decay=0.):
+ if self.num_branches == 1:
+ return None
+ fuse_layers = []
+ num_out_branches = self.num_branches if self.multiscale_output else 1
+ for i in range(num_out_branches):
+ fuse_layer = []
+ for j in range(self.num_branches):
+ if j > i:
+ fuse_layer.append(
+ nn.Sequential(
+ L.Conv2d(
+ self.in_channels[j],
+ self.in_channels[i],
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=False, ),
+ nn.BatchNorm(self.in_channels[i]),
+ nn.Upsample(
+ scale_factor=2**(j - i), mode='nearest')))
+ elif j == i:
+ fuse_layer.append(None)
+ else:
+ conv_downsamples = []
+ for k in range(i - j):
+ if k == i - j - 1:
+ conv_downsamples.append(
+ nn.Sequential(
+ L.Conv2d(
+ self.in_channels[j],
+ self.in_channels[j],
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ groups=self.in_channels[j],
+ bias=False, ),
+ nn.BatchNorm(self.in_channels[j]),
+ L.Conv2d(
+ self.in_channels[j],
+ self.in_channels[i],
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=False, ),
+ nn.BatchNorm(self.in_channels[i])))
+ else:
+ conv_downsamples.append(
+ nn.Sequential(
+ L.Conv2d(
+ self.in_channels[j],
+ self.in_channels[j],
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ groups=self.in_channels[j],
+ bias=False, ),
+ nn.BatchNorm(self.in_channels[j]),
+ L.Conv2d(
+ self.in_channels[j],
+ self.in_channels[j],
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=False, ),
+ nn.BatchNorm(self.in_channels[j]),
+ nn.ReLU()))
+
+ fuse_layer.append(nn.Sequential(*conv_downsamples))
+ fuse_layers.append(nn.LayerList(fuse_layer))
+
+ return nn.LayerList(fuse_layers)
+
+ def forward(self, x):
+ if self.num_branches == 1:
+ return [self.layers[0](x[0])]
+ if self.module_type == 'LITE':
+ out = self.layers(x)
+ elif self.module_type == 'NAIVE':
+ for i in range(self.num_branches):
+ x[i] = self.layers[i](x[i])
+ out = x
+ if self.with_fuse:
+ out_fuse = []
+ for i in range(len(self.fuse_layers)):
+ y = out[0] if i == 0 else self.fuse_layers[i][0](out[0])
+ for j in range(self.num_branches):
+ if j == 0:
+ y += y
+ elif i == j:
+ y += out[j]
+ else:
+ y += self.fuse_layers[i][j](out[j])
+ if i == 0:
+ out[i] = y
+ out_fuse.append(self.relu(y))
+ out = out_fuse
+ elif not self.multiscale_output:
+ out = [out[0]]
+ return out
+
+
+@register
+class LiteHRNet(nn.Layer):
+ """
+ @inproceedings{Yulitehrnet21,
+ title={Lite-HRNet: A Lightweight High-Resolution Network},
+ author={Yu, Changqian and Xiao, Bin and Gao, Changxin and Yuan, Lu and Zhang, Lei and Sang, Nong and Wang, Jingdong},
+ booktitle={CVPR},year={2021}
+ }
+ Args:
+ network_type (str): the network type, one of ["lite_18", "lite_30", "naive", "wider_naive"].
+ "naive": simply combines the shuffle block of ShuffleNet with the high-resolution design pattern of HRNet.
+ "wider_naive": the naive network with wider channels in each block.
+ "lite_18": Lite-HRNet-18, which replaces the pointwise convolution in a shuffle block with conditional channel weighting.
+ "lite_30": Lite-HRNet-30, with more blocks than Lite-HRNet-18.
+ freeze_at (int): index of the output stage whose gradients are stopped
+ freeze_norm (bool): whether to freeze the normalization layers
+ norm_decay (float): weight decay for normalization layer weights
+ return_idx (List): indices of the stages whose feature maps are returned
+ """
+
+ def __init__(self,
+ network_type,
+ freeze_at=0,
+ freeze_norm=True,
+ norm_decay=0.,
+ return_idx=[0, 1, 2, 3]):
+ super(LiteHRNet, self).__init__()
+ if isinstance(return_idx, Integral):
+ return_idx = [return_idx]
+ assert network_type in ["lite_18", "lite_30", "naive", "wider_naive"],\
+ "the network_type should be one of [lite_18, lite_30, naive, wider_naive]"
+ assert len(return_idx) > 0, "need at least one return index"
+ self.freeze_at = freeze_at
+ self.freeze_norm = freeze_norm
+ self.norm_decay = norm_decay
+ self.return_idx = return_idx
+ self.norm_type = 'bn'
+
+ self.module_configs = {
+ "lite_18": {
+ "num_modules": [2, 4, 2],
+ "num_branches": [2, 3, 4],
+ "num_blocks": [2, 2, 2],
+ "module_type": ["LITE", "LITE", "LITE"],
+ "reduce_ratios": [8, 8, 8],
+ "num_channels": [[40, 80], [40, 80, 160], [40, 80, 160, 320]],
+ },
+ "lite_30": {
+ "num_modules": [3, 8, 3],
+ "num_branches": [2, 3, 4],
+ "num_blocks": [2, 2, 2],
+ "module_type": ["LITE", "LITE", "LITE"],
+ "reduce_ratios": [8, 8, 8],
+ "num_channels": [[40, 80], [40, 80, 160], [40, 80, 160, 320]],
+ },
+ "naive": {
+ "num_modules": [2, 4, 2],
+ "num_branches": [2, 3, 4],
+ "num_blocks": [2, 2, 2],
+ "module_type": ["NAIVE", "NAIVE", "NAIVE"],
+ "reduce_ratios": [1, 1, 1],
+ "num_channels": [[30, 60], [30, 60, 120], [30, 60, 120, 240]],
+ },
+ "wider_naive": {
+ "num_modules": [2, 4, 2],
+ "num_branches": [2, 3, 4],
+ "num_blocks": [2, 2, 2],
+ "module_type": ["NAIVE", "NAIVE", "NAIVE"],
+ "reduce_ratios": [1, 1, 1],
+ "num_channels": [[40, 80], [40, 80, 160], [40, 80, 160, 320]],
+ },
+ }
+
+ self.stages_config = self.module_configs[network_type]
+
+ self.stem = Stem(3, 32, 32, 1)
+ num_channels_pre_layer = [32]
+ for stage_idx in range(3):
+ num_channels = self.stages_config["num_channels"][stage_idx]
+ setattr(self, 'transition{}'.format(stage_idx),
+ self._make_transition_layer(num_channels_pre_layer,
+ num_channels, self.freeze_norm,
+ self.norm_decay))
+ stage, num_channels_pre_layer = self._make_stage(
+ self.stages_config, stage_idx, num_channels, True,
+ self.freeze_norm, self.norm_decay)
+ setattr(self, 'stage{}'.format(stage_idx), stage)
+ self.head_layer = IterativeHead(num_channels_pre_layer, 'bn',
+ self.freeze_norm, self.norm_decay)
+
+ def _make_transition_layer(self,
+ num_channels_pre_layer,
+ num_channels_cur_layer,
+ freeze_norm=False,
+ norm_decay=0.):
+ num_branches_pre = len(num_channels_pre_layer)
+ num_branches_cur = len(num_channels_cur_layer)
+ transition_layers = []
+ for i in range(num_branches_cur):
+ if i < num_branches_pre:
+ if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
+ transition_layers.append(
+ nn.Sequential(
+ L.Conv2d(
+ num_channels_pre_layer[i],
+ num_channels_pre_layer[i],
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ groups=num_channels_pre_layer[i],
+ bias=False),
+ nn.BatchNorm(num_channels_pre_layer[i]),
+ L.Conv2d(
+ num_channels_pre_layer[i],
+ num_channels_cur_layer[i],
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=False, ),
+ nn.BatchNorm(num_channels_cur_layer[i]),
+ nn.ReLU()))
+ else:
+ transition_layers.append(None)
+ else:
+ conv_downsamples = []
+ for j in range(i + 1 - num_branches_pre):
+ conv_downsamples.append(
+ nn.Sequential(
+ L.Conv2d(
+ num_channels_pre_layer[-1],
+ num_channels_pre_layer[-1],
+ groups=num_channels_pre_layer[-1],
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ bias=False, ),
+ nn.BatchNorm(num_channels_pre_layer[-1]),
+ L.Conv2d(
+ num_channels_pre_layer[-1],
+ num_channels_cur_layer[i]
+ if j == i - num_branches_pre else
+ num_channels_pre_layer[-1],
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=False, ),
+ nn.BatchNorm(num_channels_cur_layer[i]
+ if j == i - num_branches_pre else
+ num_channels_pre_layer[-1]),
+ nn.ReLU()))
+ transition_layers.append(nn.Sequential(*conv_downsamples))
+ return nn.LayerList(transition_layers)
+
+ def _make_stage(self,
+ stages_config,
+ stage_idx,
+ in_channels,
+ multiscale_output,
+ freeze_norm=False,
+ norm_decay=0.):
+ num_modules = stages_config["num_modules"][stage_idx]
+ num_branches = stages_config["num_branches"][stage_idx]
+ num_blocks = stages_config["num_blocks"][stage_idx]
+ reduce_ratio = stages_config['reduce_ratios'][stage_idx]
+ module_type = stages_config['module_type'][stage_idx]
+
+ modules = []
+ for i in range(num_modules):
+ if not multiscale_output and i == num_modules - 1:
+ reset_multiscale_output = False
+ else:
+ reset_multiscale_output = True
+ modules.append(
+ LiteHRNetModule(
+ num_branches,
+ num_blocks,
+ in_channels,
+ reduce_ratio,
+ module_type,
+ multiscale_output=reset_multiscale_output,
+ with_fuse=True,
+ freeze_norm=freeze_norm,
+ norm_decay=norm_decay))
+ in_channels = modules[-1].in_channels
+ return nn.Sequential(*modules), in_channels
+
+ def forward(self, inputs):
+ x = inputs['image']
+ x = self.stem(x)
+ y_list = [x]
+ for stage_idx in range(3):
+ x_list = []
+ transition = getattr(self, 'transition{}'.format(stage_idx))
+ for j in range(self.stages_config["num_branches"][stage_idx]):
+ if transition[j] is not None:
+ if j >= len(y_list):
+ x_list.append(transition[j](y_list[-1]))
+ else:
+ x_list.append(transition[j](y_list[j]))
+ else:
+ x_list.append(y_list[j])
+ y_list = getattr(self, 'stage{}'.format(stage_idx))(x_list)
+ x = self.head_layer(y_list)
+ res = []
+ for i, layer in enumerate(x):
+ if i == self.freeze_at:
+ layer.stop_gradient = True
+ if i in self.return_idx:
+ res.append(layer)
+ return res
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self._out_channels[i], stride=self._out_strides[i])
+ for i in self.return_idx
+ ]
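+
+
+# Usage sketch (hypothetical input size): the backbone takes a dict holding an
+# 'image' tensor and returns one feature map per entry of return_idx, highest
+# resolution first.
+#
+#   net = LiteHRNet(network_type='lite_18', return_idx=[0, 1, 2, 3])
+#   feats = net({'image': paddle.randn([1, 3, 256, 192])})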
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v1.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v1.py
new file mode 100644
index 000000000..8cf602832
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v1.py
@@ -0,0 +1,409 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from paddle.nn.initializer import KaimingNormal
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+
+__all__ = ['MobileNet']
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ num_groups=1,
+ act='relu',
+ conv_lr=1.,
+ conv_decay=0.,
+ norm_decay=0.,
+ norm_type='bn',
+ name=None):
+ super(ConvBNLayer, self).__init__()
+ self.act = act
+ self._conv = nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ groups=num_groups,
+ weight_attr=ParamAttr(
+ learning_rate=conv_lr,
+ initializer=KaimingNormal(),
+ regularizer=L2Decay(conv_decay)),
+ bias_attr=False)
+
+ param_attr = ParamAttr(regularizer=L2Decay(norm_decay))
+ bias_attr = ParamAttr(regularizer=L2Decay(norm_decay))
+ if norm_type == 'sync_bn':
+ self._batch_norm = nn.SyncBatchNorm(
+ out_channels, weight_attr=param_attr, bias_attr=bias_attr)
+ else:
+ self._batch_norm = nn.BatchNorm(
+ out_channels,
+ act=None,
+ param_attr=param_attr,
+ bias_attr=bias_attr,
+ use_global_stats=False)
+
+ def forward(self, x):
+ x = self._conv(x)
+ x = self._batch_norm(x)
+ if self.act == "relu":
+ x = F.relu(x)
+ elif self.act == "relu6":
+ x = F.relu6(x)
+ return x
+
+
+class DepthwiseSeparable(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels1,
+ out_channels2,
+ num_groups,
+ stride,
+ scale,
+ conv_lr=1.,
+ conv_decay=0.,
+ norm_decay=0.,
+ norm_type='bn',
+ name=None):
+ super(DepthwiseSeparable, self).__init__()
+
+ self._depthwise_conv = ConvBNLayer(
+ in_channels,
+ int(out_channels1 * scale),
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ num_groups=int(num_groups * scale),
+ conv_lr=conv_lr,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name=name + "_dw")
+
+ self._pointwise_conv = ConvBNLayer(
+ int(out_channels1 * scale),
+ int(out_channels2 * scale),
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ conv_lr=conv_lr,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name=name + "_sep")
+
+ def forward(self, x):
+ x = self._depthwise_conv(x)
+ x = self._pointwise_conv(x)
+ return x
+
+
+class ExtraBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels1,
+ out_channels2,
+ num_groups=1,
+ stride=2,
+ conv_lr=1.,
+ conv_decay=0.,
+ norm_decay=0.,
+ norm_type='bn',
+ name=None):
+ super(ExtraBlock, self).__init__()
+
+ self.pointwise_conv = ConvBNLayer(
+ in_channels,
+ int(out_channels1),
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ num_groups=int(num_groups),
+ act='relu6',
+ conv_lr=conv_lr,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name=name + "_extra1")
+
+ self.normal_conv = ConvBNLayer(
+ int(out_channels1),
+ int(out_channels2),
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ num_groups=int(num_groups),
+ act='relu6',
+ conv_lr=conv_lr,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name=name + "_extra2")
+
+ def forward(self, x):
+ x = self.pointwise_conv(x)
+ x = self.normal_conv(x)
+ return x
+
+
+@register
+@serializable
+class MobileNet(nn.Layer):
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ norm_type='bn',
+ norm_decay=0.,
+ conv_decay=0.,
+ scale=1,
+ conv_learning_rate=1.0,
+ feature_maps=[4, 6, 13],
+ with_extra_blocks=False,
+ extra_block_filters=[[256, 512], [128, 256], [128, 256],
+ [64, 128]]):
+ super(MobileNet, self).__init__()
+ if isinstance(feature_maps, Integral):
+ feature_maps = [feature_maps]
+ self.feature_maps = feature_maps
+ self.with_extra_blocks = with_extra_blocks
+ self.extra_block_filters = extra_block_filters
+
+ self._out_channels = []
+
+ self.conv1 = ConvBNLayer(
+ in_channels=3,
+ out_channels=int(32 * scale),
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv1")
+
+ self.dwsl = []
+ dws21 = self.add_sublayer(
+ "conv2_1",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(32 * scale),
+ out_channels1=32,
+ out_channels2=64,
+ num_groups=32,
+ stride=1,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv2_1"))
+ self.dwsl.append(dws21)
+ self._update_out_channels(64, len(self.dwsl), feature_maps)
+ dws22 = self.add_sublayer(
+ "conv2_2",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(64 * scale),
+ out_channels1=64,
+ out_channels2=128,
+ num_groups=64,
+ stride=2,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv2_2"))
+ self.dwsl.append(dws22)
+ self._update_out_channels(128, len(self.dwsl), feature_maps)
+ # 1/4
+ dws31 = self.add_sublayer(
+ "conv3_1",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(128 * scale),
+ out_channels1=128,
+ out_channels2=128,
+ num_groups=128,
+ stride=1,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv3_1"))
+ self.dwsl.append(dws31)
+ self._update_out_channels(128, len(self.dwsl), feature_maps)
+ dws32 = self.add_sublayer(
+ "conv3_2",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(128 * scale),
+ out_channels1=128,
+ out_channels2=256,
+ num_groups=128,
+ stride=2,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv3_2"))
+ self.dwsl.append(dws32)
+ self._update_out_channels(256, len(self.dwsl), feature_maps)
+ # 1/8
+ dws41 = self.add_sublayer(
+ "conv4_1",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(256 * scale),
+ out_channels1=256,
+ out_channels2=256,
+ num_groups=256,
+ stride=1,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv4_1"))
+ self.dwsl.append(dws41)
+ self._update_out_channels(256, len(self.dwsl), feature_maps)
+ dws42 = self.add_sublayer(
+ "conv4_2",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(256 * scale),
+ out_channels1=256,
+ out_channels2=512,
+ num_groups=256,
+ stride=2,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv4_2"))
+ self.dwsl.append(dws42)
+ self._update_out_channels(512, len(self.dwsl), feature_maps)
+ # 1/16
+ for i in range(5):
+ tmp = self.add_sublayer(
+ "conv5_" + str(i + 1),
+ sublayer=DepthwiseSeparable(
+ in_channels=512,
+ out_channels1=512,
+ out_channels2=512,
+ num_groups=512,
+ stride=1,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv5_" + str(i + 1)))
+ self.dwsl.append(tmp)
+ self._update_out_channels(512, len(self.dwsl), feature_maps)
+ dws56 = self.add_sublayer(
+ "conv5_6",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(512 * scale),
+ out_channels1=512,
+ out_channels2=1024,
+ num_groups=512,
+ stride=2,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv5_6"))
+ self.dwsl.append(dws56)
+ self._update_out_channels(1024, len(self.dwsl), feature_maps)
+ # 1/32
+ dws6 = self.add_sublayer(
+ "conv6",
+ sublayer=DepthwiseSeparable(
+ in_channels=int(1024 * scale),
+ out_channels1=1024,
+ out_channels2=1024,
+ num_groups=1024,
+ stride=1,
+ scale=scale,
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv6"))
+ self.dwsl.append(dws6)
+ self._update_out_channels(1024, len(self.dwsl), feature_maps)
+
+ if self.with_extra_blocks:
+ self.extra_blocks = []
+ for i, block_filter in enumerate(self.extra_block_filters):
+ in_c = 1024 if i == 0 else self.extra_block_filters[i - 1][1]
+ conv_extra = self.add_sublayer(
+ "conv7_" + str(i + 1),
+ sublayer=ExtraBlock(
+ in_c,
+ block_filter[0],
+ block_filter[1],
+ conv_lr=conv_learning_rate,
+ conv_decay=conv_decay,
+ norm_decay=norm_decay,
+ norm_type=norm_type,
+ name="conv7_" + str(i + 1)))
+ self.extra_blocks.append(conv_extra)
+ self._update_out_channels(
+ block_filter[1],
+ len(self.dwsl) + len(self.extra_blocks), feature_maps)
+
+ def _update_out_channels(self, channel, feature_idx, feature_maps):
+ if feature_idx in feature_maps:
+ self._out_channels.append(channel)
+
+ def forward(self, inputs):
+ outs = []
+ y = self.conv1(inputs['image'])
+ for i, block in enumerate(self.dwsl):
+ y = block(y)
+ if i + 1 in self.feature_maps:
+ outs.append(y)
+
+ if not self.with_extra_blocks:
+ return outs
+
+ y = outs[-1]
+ for i, block in enumerate(self.extra_blocks):
+ idx = i + len(self.dwsl)
+ y = block(y)
+ if idx + 1 in self.feature_maps:
+ outs.append(y)
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
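+
+
+# Feature indices count blocks from 1, so the default feature_maps=[4, 6, 13]
+# taps conv3_2 (stride 8), conv4_2 (stride 16) and conv6 (stride 32), e.g.:
+#
+#   net = MobileNet(scale=1.0, feature_maps=[4, 6, 13])
+#   outs = net({'image': paddle.randn([1, 3, 300, 300])})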
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v3.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v3.py
new file mode 100644
index 000000000..02021e87c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/mobilenet_v3.py
@@ -0,0 +1,482 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+
+__all__ = ['MobileNetV3']
+
+
+def make_divisible(v, divisor=8, min_value=None):
+ if min_value is None:
+ min_value = divisor
+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+ if new_v < 0.9 * v:
+ new_v += divisor
+ return new_v
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_c,
+ out_c,
+ filter_size,
+ stride,
+ padding,
+ num_groups=1,
+ act=None,
+ lr_mult=1.,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ name=""):
+ super(ConvBNLayer, self).__init__()
+ self.act = act
+ self.conv = nn.Conv2D(
+ in_channels=in_c,
+ out_channels=out_c,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=padding,
+ groups=num_groups,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult, regularizer=L2Decay(conv_decay)),
+ bias_attr=False)
+
+ norm_lr = 0. if freeze_norm else lr_mult
+ param_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+ global_stats = True if freeze_norm else False
+ if norm_type == 'sync_bn':
+ self.bn = nn.SyncBatchNorm(
+ out_c, weight_attr=param_attr, bias_attr=bias_attr)
+ else:
+ self.bn = nn.BatchNorm(
+ out_c,
+ act=None,
+ param_attr=param_attr,
+ bias_attr=bias_attr,
+ use_global_stats=global_stats)
+ norm_params = self.bn.parameters()
+ if freeze_norm:
+ for param in norm_params:
+ param.stop_gradient = True
+
+ def forward(self, x):
+ x = self.conv(x)
+ x = self.bn(x)
+ if self.act is not None:
+ if self.act == "relu":
+ x = F.relu(x)
+ elif self.act == "relu6":
+ x = F.relu6(x)
+ elif self.act == "hard_swish":
+ x = F.hardswish(x)
+ else:
+ raise NotImplementedError(
+ "The activation function is selected incorrectly.")
+ return x
+
+
+class ResidualUnit(nn.Layer):
+ def __init__(self,
+ in_c,
+ mid_c,
+ out_c,
+ filter_size,
+ stride,
+ use_se,
+ lr_mult,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ act=None,
+ return_list=False,
+ name=''):
+ super(ResidualUnit, self).__init__()
+ self.if_shortcut = stride == 1 and in_c == out_c
+ self.use_se = use_se
+ self.return_list = return_list
+
+ self.expand_conv = ConvBNLayer(
+ in_c=in_c,
+ out_c=mid_c,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act=act,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_expand")
+ self.bottleneck_conv = ConvBNLayer(
+ in_c=mid_c,
+ out_c=mid_c,
+ filter_size=filter_size,
+ stride=stride,
+ padding=int((filter_size - 1) // 2),
+ num_groups=mid_c,
+ act=act,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_depthwise")
+ if self.use_se:
+ self.mid_se = SEModule(
+ mid_c, lr_mult, conv_decay, name=name + "_se")
+ self.linear_conv = ConvBNLayer(
+ in_c=mid_c,
+ out_c=out_c,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act=None,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_linear")
+
+ def forward(self, inputs):
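+ # When return_list is set (SSDLite-style heads), the unit also exposes the
+ # expanded 1x1-conv features (y) alongside its regular output (x).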
+ y = self.expand_conv(inputs)
+ x = self.bottleneck_conv(y)
+ if self.use_se:
+ x = self.mid_se(x)
+ x = self.linear_conv(x)
+ if self.if_shortcut:
+ x = paddle.add(inputs, x)
+ if self.return_list:
+ return [y, x]
+ else:
+ return x
+
+
+class SEModule(nn.Layer):
+ def __init__(self, channel, lr_mult, conv_decay, reduction=4, name=""):
+ super(SEModule, self).__init__()
+ self.avg_pool = nn.AdaptiveAvgPool2D(1)
+ mid_channels = int(channel // reduction)
+ self.conv1 = nn.Conv2D(
+ in_channels=channel,
+ out_channels=mid_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult, regularizer=L2Decay(conv_decay)),
+ bias_attr=ParamAttr(
+ learning_rate=lr_mult, regularizer=L2Decay(conv_decay)))
+ self.conv2 = nn.Conv2D(
+ in_channels=mid_channels,
+ out_channels=channel,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult, regularizer=L2Decay(conv_decay)),
+ bias_attr=ParamAttr(
+ learning_rate=lr_mult, regularizer=L2Decay(conv_decay)))
+
+ def forward(self, inputs):
+ outputs = self.avg_pool(inputs)
+ outputs = self.conv1(outputs)
+ outputs = F.relu(outputs)
+ outputs = self.conv2(outputs)
+ outputs = F.hardsigmoid(outputs, slope=0.2, offset=0.5)
+ return paddle.multiply(x=inputs, y=outputs)
+
+
+class ExtraBlockDW(nn.Layer):
+ def __init__(self,
+ in_c,
+ ch_1,
+ ch_2,
+ stride,
+ lr_mult,
+ conv_decay=0.,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ name=None):
+ super(ExtraBlockDW, self).__init__()
+ self.pointwise_conv = ConvBNLayer(
+ in_c=in_c,
+ out_c=ch_1,
+ filter_size=1,
+ stride=1,
+ padding='SAME',
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra1")
+ self.depthwise_conv = ConvBNLayer(
+ in_c=ch_1,
+ out_c=ch_2,
+ filter_size=3,
+ stride=stride,
+ padding='SAME',
+ num_groups=int(ch_1),
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra2_dw")
+ self.normal_conv = ConvBNLayer(
+ in_c=ch_2,
+ out_c=ch_2,
+ filter_size=1,
+ stride=1,
+ padding='SAME',
+ act='relu6',
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name=name + "_extra2_sep")
+
+ def forward(self, inputs):
+ x = self.pointwise_conv(inputs)
+ x = self.depthwise_conv(x)
+ x = self.normal_conv(x)
+ return x
+
+
+@register
+@serializable
+class MobileNetV3(nn.Layer):
+ __shared__ = ['norm_type']
+
+ def __init__(
+ self,
+ scale=1.0,
+ model_name="large",
+ feature_maps=[6, 12, 15],
+ with_extra_blocks=False,
+ extra_block_filters=[[256, 512], [128, 256], [128, 256], [64, 128]],
+ lr_mult_list=[1.0, 1.0, 1.0, 1.0, 1.0],
+ conv_decay=0.0,
+ multiplier=1.0,
+ norm_type='bn',
+ norm_decay=0.0,
+ freeze_norm=False):
+ super(MobileNetV3, self).__init__()
+ if isinstance(feature_maps, Integral):
+ feature_maps = [feature_maps]
+ if norm_type == 'sync_bn' and freeze_norm:
+ raise ValueError(
+ "The norm_type should not be sync_bn when freeze_norm is True")
+ self.feature_maps = feature_maps
+ self.with_extra_blocks = with_extra_blocks
+ self.extra_block_filters = extra_block_filters
+
+ inplanes = 16
+ if model_name == "large":
+ self.cfg = [
+ # k, exp, c, se, nl, s,
+ [3, 16, 16, False, "relu", 1],
+ [3, 64, 24, False, "relu", 2],
+ [3, 72, 24, False, "relu", 1],
+ [5, 72, 40, True, "relu", 2], # RCNN output
+ [5, 120, 40, True, "relu", 1],
+ [5, 120, 40, True, "relu", 1], # YOLOv3 output
+ [3, 240, 80, False, "hard_swish", 2], # RCNN output
+ [3, 200, 80, False, "hard_swish", 1],
+ [3, 184, 80, False, "hard_swish", 1],
+ [3, 184, 80, False, "hard_swish", 1],
+ [3, 480, 112, True, "hard_swish", 1],
+ [3, 672, 112, True, "hard_swish", 1], # YOLOv3 output
+ [5, 672, 160, True, "hard_swish", 2], # SSD/SSDLite/RCNN output
+ [5, 960, 160, True, "hard_swish", 1],
+ [5, 960, 160, True, "hard_swish", 1], # YOLOv3 output
+ ]
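+ # e.g. [5, 72, 40, True, "relu", 2]: 5x5 depthwise kernel, expand to
+ # 72 channels, project to 40, with SE, ReLU activation, stride 2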
+ elif model_name == "small":
+ self.cfg = [
+ # k, exp, c, se, nl, s,
+ [3, 16, 16, True, "relu", 2],
+ [3, 72, 24, False, "relu", 2], # RCNN output
+ [3, 88, 24, False, "relu", 1], # YOLOv3 output
+ [5, 96, 40, True, "hard_swish", 2], # RCNN output
+ [5, 240, 40, True, "hard_swish", 1],
+ [5, 240, 40, True, "hard_swish", 1],
+ [5, 120, 48, True, "hard_swish", 1],
+ [5, 144, 48, True, "hard_swish", 1], # YOLOv3 output
+ [5, 288, 96, True, "hard_swish", 2], # SSD/SSDLite/RCNN output
+ [5, 576, 96, True, "hard_swish", 1],
+ [5, 576, 96, True, "hard_swish", 1], # YOLOv3 output
+ ]
+ else:
+ raise NotImplementedError(
+ "mode[{}_model] is not implemented!".format(model_name))
+
+ if multiplier != 1.0:
+ self.cfg[-3][2] = int(self.cfg[-3][2] * multiplier)
+ self.cfg[-2][1] = int(self.cfg[-2][1] * multiplier)
+ self.cfg[-2][2] = int(self.cfg[-2][2] * multiplier)
+ self.cfg[-1][1] = int(self.cfg[-1][1] * multiplier)
+ self.cfg[-1][2] = int(self.cfg[-1][2] * multiplier)
+
+ self.conv1 = ConvBNLayer(
+ in_c=3,
+ out_c=make_divisible(inplanes * scale),
+ filter_size=3,
+ stride=2,
+ padding=1,
+ num_groups=1,
+ act="hard_swish",
+ lr_mult=lr_mult_list[0],
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="conv1")
+
+ self._out_channels = []
+ self.block_list = []
+ i = 0
+ inplanes = make_divisible(inplanes * scale)
+ for (k, exp, c, se, nl, s) in self.cfg:
+ lr_idx = min(i // 3, len(lr_mult_list) - 1)
+ lr_mult = lr_mult_list[lr_idx]
+
+ # for SSD/SSDLite, first head input is after ResidualUnit expand_conv
+ return_list = self.with_extra_blocks and i + 2 in self.feature_maps
+
+ block = self.add_sublayer(
+ "conv" + str(i + 2),
+ sublayer=ResidualUnit(
+ in_c=inplanes,
+ mid_c=make_divisible(scale * exp),
+ out_c=make_divisible(scale * c),
+ filter_size=k,
+ stride=s,
+ use_se=se,
+ act=nl,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ return_list=return_list,
+ name="conv" + str(i + 2)))
+ self.block_list.append(block)
+ inplanes = make_divisible(scale * c)
+ i += 1
+ self._update_out_channels(
+ make_divisible(scale * exp)
+ if return_list else inplanes, i + 1, feature_maps)
+
+ if self.with_extra_blocks:
+ self.extra_block_list = []
+ extra_out_c = make_divisible(scale * self.cfg[-1][1])
+ lr_idx = min(i // 3, len(lr_mult_list) - 1)
+ lr_mult = lr_mult_list[lr_idx]
+
+ conv_extra = self.add_sublayer(
+ "conv" + str(i + 2),
+ sublayer=ConvBNLayer(
+ in_c=inplanes,
+ out_c=extra_out_c,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ num_groups=1,
+ act="hard_swish",
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name="conv" + str(i + 2)))
+ self.extra_block_list.append(conv_extra)
+ i += 1
+ self._update_out_channels(extra_out_c, i + 1, feature_maps)
+
+ for j, block_filter in enumerate(self.extra_block_filters):
+ in_c = extra_out_c if j == 0 else self.extra_block_filters[j - 1][1]
+ conv_extra = self.add_sublayer(
+ "conv" + str(i + 2),
+ sublayer=ExtraBlockDW(
+ in_c,
+ block_filter[0],
+ block_filter[1],
+ stride=2,
+ lr_mult=lr_mult,
+ conv_decay=conv_decay,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ name='conv' + str(i + 2)))
+ self.extra_block_list.append(conv_extra)
+ i += 1
+ self._update_out_channels(block_filter[1], i + 1, feature_maps)
+
+ def _update_out_channels(self, channel, feature_idx, feature_maps):
+ if feature_idx in feature_maps:
+ self._out_channels.append(channel)
+
+ def forward(self, inputs):
+ x = self.conv1(inputs['image'])
+ outs = []
+ for idx, block in enumerate(self.block_list):
+ x = block(x)
+ if idx + 2 in self.feature_maps:
+ if isinstance(x, list):
+ outs.append(x[0])
+ x = x[1]
+ else:
+ outs.append(x)
+
+ if not self.with_extra_blocks:
+ return outs
+
+ for i, block in enumerate(self.extra_block_list):
+ idx = i + len(self.block_list)
+ x = block(x)
+ if idx + 2 in self.feature_maps:
+ outs.append(x)
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/name_adapter.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/name_adapter.py
new file mode 100644
index 000000000..4afbb9b18
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/name_adapter.py
@@ -0,0 +1,69 @@
+class NameAdapter(object):
+ """Fix the backbones variable names for pretrained weight"""
+
+ def __init__(self, model):
+ super(NameAdapter, self).__init__()
+ self.model = model
+
+ @property
+ def model_type(self):
+ return getattr(self.model, '_model_type', '')
+
+ @property
+ def variant(self):
+ return getattr(self.model, 'variant', '')
+
+ def fix_conv_norm_name(self, name):
+ if name == "conv1":
+ bn_name = "bn_" + name
+ else:
+ bn_name = "bn" + name[3:]
+ # the naming rule is same as pretrained weight
+ if self.model_type == 'SEResNeXt':
+ bn_name = name + "_bn"
+ return bn_name
+
+ def fix_shortcut_name(self, name):
+ if self.model_type == 'SEResNeXt':
+ name = 'conv' + name + '_prj'
+ return name
+
+ def fix_bottleneck_name(self, name):
+ if self.model_type == 'SEResNeXt':
+ conv_name1 = 'conv' + name + '_x1'
+ conv_name2 = 'conv' + name + '_x2'
+ conv_name3 = 'conv' + name + '_x3'
+ shortcut_name = name
+ else:
+ conv_name1 = name + "_branch2a"
+ conv_name2 = name + "_branch2b"
+ conv_name3 = name + "_branch2c"
+ shortcut_name = name + "_branch1"
+ return conv_name1, conv_name2, conv_name3, shortcut_name
+
+ def fix_basicblock_name(self, name):
+ if self.model_type == 'SEResNeXt':
+ conv_name1 = 'conv' + name + '_x1'
+ conv_name2 = 'conv' + name + '_x2'
+ shortcut_name = name
+ else:
+ conv_name1 = name + "_branch2a"
+ conv_name2 = name + "_branch2b"
+ shortcut_name = name + "_branch1"
+ return conv_name1, conv_name2, shortcut_name
+
+ def fix_layer_warp_name(self, stage_num, count, i):
+ name = 'res' + str(stage_num)
+ if count > 10 and stage_num == 4:
+ if i == 0:
+ conv_name = name + "a"
+ else:
+ conv_name = name + "b" + str(i)
+ else:
+ conv_name = name + chr(ord("a") + i)
+ if self.model_type == 'SEResNeXt':
+ conv_name = str(stage_num + 2) + '_' + str(i + 1)
+ return conv_name
+
+ def fix_c1_stage_name(self):
+ return "res_conv1" if self.model_type == 'ResNeXt' else "conv1"
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/res2net.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/res2net.py
new file mode 100644
index 000000000..9e7677247
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/res2net.py
@@ -0,0 +1,357 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from numbers import Integral
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ..shape_spec import ShapeSpec
+from .resnet import ConvNormLayer
+
+__all__ = ['Res2Net', 'Res2NetC5']
+
+Res2Net_cfg = {
+ 50: [3, 4, 6, 3],
+ 101: [3, 4, 23, 3],
+ 152: [3, 8, 36, 3],
+ 200: [3, 12, 48, 3]
+}
+
+
+class BottleNeck(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ stride,
+ shortcut,
+ width,
+ scales=4,
+ variant='b',
+ groups=1,
+ lr=1.0,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ dcn_v2=False):
+ super(BottleNeck, self).__init__()
+
+ self.shortcut = shortcut
+ self.scales = scales
+ self.stride = stride
+ if not shortcut:
+ if variant == 'd' and stride == 2:
+ self.branch1 = nn.Sequential()
+ self.branch1.add_sublayer(
+ 'pool',
+ nn.AvgPool2D(
+ kernel_size=2, stride=2, padding=0, ceil_mode=True))
+ self.branch1.add_sublayer(
+ 'conv',
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr))
+ else:
+ self.branch1 = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=stride,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2a = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=width * scales,
+ filter_size=1,
+ stride=stride if variant == 'a' else 1,
+ groups=1,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2b = nn.LayerList([
+ ConvNormLayer(
+ ch_in=width,
+ ch_out=width,
+ filter_size=3,
+ stride=1 if variant == 'a' else stride,
+ groups=groups,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr,
+ dcn_v2=dcn_v2) for _ in range(self.scales - 1)
+ ])
+
+ self.branch2c = ConvNormLayer(
+ ch_in=width * scales,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ groups=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ def forward(self, inputs):
+
+ out = self.branch2a(inputs)
+ feature_split = paddle.split(out, self.scales, 1)
+ out_split = []
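+        # Res2Net hierarchy: at stride 1, every split after the first is summed
+        # with the previous branch output before its 3x3 conv; the final split
+        # bypasses convolution (it is average-pooled instead when downsampling).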
+ for i in range(self.scales - 1):
+ if i == 0 or self.stride == 2:
+ out_split.append(self.branch2b[i](feature_split[i]))
+ else:
+ out_split.append(self.branch2b[i](paddle.add(feature_split[i],
+ out_split[-1])))
+ if self.stride == 1:
+ out_split.append(feature_split[-1])
+ else:
+ out_split.append(F.avg_pool2d(feature_split[-1], 3, self.stride, 1))
+ out = self.branch2c(paddle.concat(out_split, 1))
+
+ if self.shortcut:
+ short = inputs
+ else:
+ short = self.branch1(inputs)
+
+ out = paddle.add(out, short)
+ out = F.relu(out)
+
+ return out
+
+
+class Blocks(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ count,
+ stage_num,
+ width,
+ scales=4,
+ variant='b',
+ groups=1,
+ lr=1.0,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ dcn_v2=False):
+ super(Blocks, self).__init__()
+
+ self.blocks = nn.Sequential()
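+        # The first block of each stage (except res2, which sits right after
+        # the stem max-pool) downsamples with stride 2 and projects the shortcut.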
+ for i in range(count):
+ self.blocks.add_sublayer(
+ str(i),
+ BottleNeck(
+ ch_in=ch_in if i == 0 else ch_out,
+ ch_out=ch_out,
+ stride=2 if i == 0 and stage_num != 2 else 1,
+ shortcut=False if i == 0 else True,
+ width=width * (2**(stage_num - 2)),
+ scales=scales,
+ variant=variant,
+ groups=groups,
+ lr=lr,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ dcn_v2=dcn_v2))
+
+ def forward(self, inputs):
+ return self.blocks(inputs)
+
+
+@register
+@serializable
+class Res2Net(nn.Layer):
+ """
+ Res2Net, see https://arxiv.org/abs/1904.01169
+ Args:
+ depth (int): Res2Net depth, should be 50, 101, 152, 200.
+ width (int): Res2Net width
+ scales (int): Res2Net scale
+ variant (str): Res2Net variant, supports 'a', 'b', 'c', 'd' currently
+        lr_mult_list (list): learning rate ratio of different backbone stages (2,3,4,5);
+                             a lower ratio is usually needed for pretrained models
+                             obtained via distillation (default: [1.0, 1.0, 1.0, 1.0]).
+ groups (int): The groups number of the Conv Layer.
+ norm_type (str): normalization type, 'bn' or 'sync_bn'
+ norm_decay (float): weight decay for normalization layer weights
+ freeze_norm (bool): freeze normalization layers
+ freeze_at (int): freeze the backbone at which stage
+ return_idx (list): index of stages whose feature maps are returned,
+ index 0 stands for res2
+        dcn_v2_stages (list): indices of stages that use deformable conv v2
+ num_stages (int): number of stages created
+
+ """
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ depth=50,
+ width=26,
+ scales=4,
+ variant='b',
+ lr_mult_list=[1.0, 1.0, 1.0, 1.0],
+ groups=1,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ freeze_at=0,
+ return_idx=[0, 1, 2, 3],
+ dcn_v2_stages=[-1],
+ num_stages=4):
+ super(Res2Net, self).__init__()
+
+ self._model_type = 'Res2Net' if groups == 1 else 'Res2NeXt'
+
+        assert depth in [50, 101, 152, 200], \
+            "depth {} not in [50, 101, 152, 200]".format(depth)
+ assert variant in ['a', 'b', 'c', 'd'], "invalid Res2Net variant"
+ assert num_stages >= 1 and num_stages <= 4
+
+ self.depth = depth
+ self.variant = variant
+ self.norm_type = norm_type
+ self.norm_decay = norm_decay
+ self.freeze_norm = freeze_norm
+ self.freeze_at = freeze_at
+ if isinstance(return_idx, Integral):
+ return_idx = [return_idx]
+        assert max(return_idx) < num_stages, \
+            'the maximum return index must be smaller than num_stages, ' \
+            'but received maximum return index is {} and num_stages ' \
+            'is {}'.format(max(return_idx), num_stages)
+ self.return_idx = return_idx
+ self.num_stages = num_stages
+ assert len(lr_mult_list) == 4, \
+ "lr_mult_list length must be 4 but got {}".format(len(lr_mult_list))
+ if isinstance(dcn_v2_stages, Integral):
+ dcn_v2_stages = [dcn_v2_stages]
+ assert max(dcn_v2_stages) < num_stages
+ self.dcn_v2_stages = dcn_v2_stages
+
+ block_nums = Res2Net_cfg[depth]
+
+ # C1 stage
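+        # Variants 'c' and 'd' use the ResNet-C/D deep stem: three stacked 3x3
+        # convs instead of the single 7x7 conv used by variants 'a' and 'b'.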
+ if self.variant in ['c', 'd']:
+ conv_def = [
+ [3, 32, 3, 2, "conv1_1"],
+ [32, 32, 3, 1, "conv1_2"],
+ [32, 64, 3, 1, "conv1_3"],
+ ]
+ else:
+ conv_def = [[3, 64, 7, 2, "conv1"]]
+ self.res1 = nn.Sequential()
+ for (c_in, c_out, k, s, _name) in conv_def:
+ self.res1.add_sublayer(
+ _name,
+ ConvNormLayer(
+ ch_in=c_in,
+ ch_out=c_out,
+ filter_size=k,
+ stride=s,
+ groups=1,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=1.0))
+
+ self._in_channels = [64, 256, 512, 1024]
+ self._out_channels = [256, 512, 1024, 2048]
+ self._out_strides = [4, 8, 16, 32]
+
+ # C2-C5 stages
+ self.res_layers = []
+ for i in range(num_stages):
+ lr_mult = lr_mult_list[i]
+ stage_num = i + 2
+ self.res_layers.append(
+ self.add_sublayer(
+ "res{}".format(stage_num),
+ Blocks(
+ self._in_channels[i],
+ self._out_channels[i],
+ count=block_nums[i],
+ stage_num=stage_num,
+ width=width,
+ scales=scales,
+ groups=groups,
+ lr=lr_mult,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ dcn_v2=(i in self.dcn_v2_stages))))
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self._out_channels[i], stride=self._out_strides[i])
+ for i in self.return_idx
+ ]
+
+ def forward(self, inputs):
+ x = inputs['image']
+ res1 = self.res1(x)
+ x = F.max_pool2d(res1, kernel_size=3, stride=2, padding=1)
+ outs = []
+ for idx, stage in enumerate(self.res_layers):
+ x = stage(x)
+ if idx == self.freeze_at:
+ x.stop_gradient = True
+ if idx in self.return_idx:
+ outs.append(x)
+ return outs
+
+
+@register
+class Res2NetC5(nn.Layer):
+ def __init__(self, depth=50, width=26, scales=4, variant='b'):
+ super(Res2NetC5, self).__init__()
+ feat_in, feat_out = [1024, 2048]
+ self.res5 = Blocks(
+ feat_in,
+ feat_out,
+ count=3,
+ stage_num=5,
+ width=width,
+ scales=scales,
+ variant=variant)
+ self.feat_out = feat_out
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(
+ channels=self.feat_out,
+ stride=32, )]
+
+ def forward(self, roi_feat, stage=0):
+ y = self.res5(roi_feat)
+ return y
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/resnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/resnet.py
new file mode 100644
index 000000000..d4bc878ea
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/resnet.py
@@ -0,0 +1,613 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+from numbers import Integral
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from paddle.regularizer import L2Decay
+from paddle.nn.initializer import Uniform
+from paddle import ParamAttr
+from paddle.nn.initializer import Constant
+from paddle.vision.ops import DeformConv2D
+from .name_adapter import NameAdapter
+from ..shape_spec import ShapeSpec
+
+__all__ = ['ResNet', 'Res5Head', 'Blocks', 'BasicBlock', 'BottleNeck']
+
+ResNet_cfg = {
+ 18: [2, 2, 2, 2],
+ 34: [3, 4, 6, 3],
+ 50: [3, 4, 6, 3],
+ 101: [3, 4, 23, 3],
+ 152: [3, 8, 36, 3],
+}
+
+
+class ConvNormLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ stride,
+ groups=1,
+ act=None,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ lr=1.0,
+ dcn_v2=False):
+ super(ConvNormLayer, self).__init__()
+ assert norm_type in ['bn', 'sync_bn']
+ self.norm_type = norm_type
+ self.act = act
+ self.dcn_v2 = dcn_v2
+
+ if not self.dcn_v2:
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ weight_attr=ParamAttr(learning_rate=lr),
+ bias_attr=False)
+ else:
+ self.offset_channel = 2 * filter_size**2
+ self.mask_channel = filter_size**2
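+            # DCNv2: a companion conv predicts 2 offsets (dx, dy) plus one
+            # modulation mask value per kernel position, hence 3 * k * k output
+            # channels; zero-init keeps offsets at 0 (mask at sigmoid(0) = 0.5).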
+
+ self.conv_offset = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=3 * filter_size**2,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ weight_attr=ParamAttr(initializer=Constant(0.)),
+ bias_attr=ParamAttr(initializer=Constant(0.)))
+ self.conv = DeformConv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ dilation=1,
+ groups=groups,
+ weight_attr=ParamAttr(learning_rate=lr),
+ bias_attr=False)
+
+ norm_lr = 0. if freeze_norm else lr
+ param_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+
+ global_stats = True if freeze_norm else False
+ if norm_type == 'sync_bn':
+ self.norm = nn.SyncBatchNorm(
+ ch_out, weight_attr=param_attr, bias_attr=bias_attr)
+ else:
+ self.norm = nn.BatchNorm(
+ ch_out,
+ act=None,
+ param_attr=param_attr,
+ bias_attr=bias_attr,
+ use_global_stats=global_stats)
+ norm_params = self.norm.parameters()
+
+ if freeze_norm:
+ for param in norm_params:
+ param.stop_gradient = True
+
+ def forward(self, inputs):
+ if not self.dcn_v2:
+ out = self.conv(inputs)
+ else:
+ offset_mask = self.conv_offset(inputs)
+ offset, mask = paddle.split(
+ offset_mask,
+ num_or_sections=[self.offset_channel, self.mask_channel],
+ axis=1)
+ mask = F.sigmoid(mask)
+ out = self.conv(inputs, offset, mask=mask)
+
+ if self.norm_type in ['bn', 'sync_bn']:
+ out = self.norm(out)
+ if self.act:
+ out = getattr(F, self.act)(out)
+ return out
+
+
+class SELayer(nn.Layer):
+ def __init__(self, ch, reduction_ratio=16):
+ super(SELayer, self).__init__()
+ self.pool = nn.AdaptiveAvgPool2D(1)
+ stdv = 1.0 / math.sqrt(ch)
+ c_ = ch // reduction_ratio
+ self.squeeze = nn.Linear(
+ ch,
+ c_,
+ weight_attr=paddle.ParamAttr(initializer=Uniform(-stdv, stdv)),
+ bias_attr=True)
+
+ stdv = 1.0 / math.sqrt(c_)
+ self.extract = nn.Linear(
+ c_,
+ ch,
+ weight_attr=paddle.ParamAttr(initializer=Uniform(-stdv, stdv)),
+ bias_attr=True)
+
+ def forward(self, inputs):
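+        # Squeeze: global average pooling reduces each channel to a scalar;
+        # excite: two FC layers produce per-channel gates in (0, 1) that
+        # rescale the input feature map channel-wise.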
+ out = self.pool(inputs)
+ out = paddle.squeeze(out, axis=[2, 3])
+ out = self.squeeze(out)
+ out = F.relu(out)
+ out = self.extract(out)
+ out = F.sigmoid(out)
+ out = paddle.unsqueeze(out, axis=[2, 3])
+ scale = out * inputs
+ return scale
+
+
+class BasicBlock(nn.Layer):
+
+ expansion = 1
+
+ def __init__(self,
+ ch_in,
+ ch_out,
+ stride,
+ shortcut,
+ variant='b',
+ groups=1,
+ base_width=64,
+ lr=1.0,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ dcn_v2=False,
+ std_senet=False):
+ super(BasicBlock, self).__init__()
+ assert groups == 1 and base_width == 64, 'BasicBlock only supports groups=1 and base_width=64'
+
+ self.shortcut = shortcut
+ if not shortcut:
+ if variant == 'd' and stride == 2:
+ self.short = nn.Sequential()
+ self.short.add_sublayer(
+ 'pool',
+ nn.AvgPool2D(
+ kernel_size=2, stride=2, padding=0, ceil_mode=True))
+ self.short.add_sublayer(
+ 'conv',
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr))
+ else:
+ self.short = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=stride,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2a = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=3,
+ stride=stride,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2b = ConvNormLayer(
+ ch_in=ch_out,
+ ch_out=ch_out,
+ filter_size=3,
+ stride=1,
+ act=None,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr,
+ dcn_v2=dcn_v2)
+
+ self.std_senet = std_senet
+ if self.std_senet:
+ self.se = SELayer(ch_out)
+
+ def forward(self, inputs):
+ out = self.branch2a(inputs)
+ out = self.branch2b(out)
+ if self.std_senet:
+ out = self.se(out)
+
+ if self.shortcut:
+ short = inputs
+ else:
+ short = self.short(inputs)
+
+ out = paddle.add(x=out, y=short)
+ out = F.relu(out)
+
+ return out
+
+
+class BottleNeck(nn.Layer):
+
+ expansion = 4
+
+ def __init__(self,
+ ch_in,
+ ch_out,
+ stride,
+ shortcut,
+ variant='b',
+ groups=1,
+ base_width=4,
+ lr=1.0,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ dcn_v2=False,
+ std_senet=False):
+ super(BottleNeck, self).__init__()
+ if variant == 'a':
+ stride1, stride2 = stride, 1
+ else:
+ stride1, stride2 = 1, stride
+
+ # ResNeXt
+ width = int(ch_out * (base_width / 64.)) * groups
+
+ self.shortcut = shortcut
+ if not shortcut:
+ if variant == 'd' and stride == 2:
+ self.short = nn.Sequential()
+ self.short.add_sublayer(
+ 'pool',
+ nn.AvgPool2D(
+ kernel_size=2, stride=2, padding=0, ceil_mode=True))
+ self.short.add_sublayer(
+ 'conv',
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out * self.expansion,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr))
+ else:
+ self.short = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out * self.expansion,
+ filter_size=1,
+ stride=stride,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2a = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=width,
+ filter_size=1,
+ stride=stride1,
+ groups=1,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.branch2b = ConvNormLayer(
+ ch_in=width,
+ ch_out=width,
+ filter_size=3,
+ stride=stride2,
+ groups=groups,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr,
+ dcn_v2=dcn_v2)
+
+ self.branch2c = ConvNormLayer(
+ ch_in=width,
+ ch_out=ch_out * self.expansion,
+ filter_size=1,
+ stride=1,
+ groups=1,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=lr)
+
+ self.std_senet = std_senet
+ if self.std_senet:
+ self.se = SELayer(ch_out * self.expansion)
+
+ def forward(self, inputs):
+
+ out = self.branch2a(inputs)
+ out = self.branch2b(out)
+ out = self.branch2c(out)
+
+ if self.std_senet:
+ out = self.se(out)
+
+ if self.shortcut:
+ short = inputs
+ else:
+ short = self.short(inputs)
+
+ out = paddle.add(x=out, y=short)
+ out = F.relu(out)
+
+ return out
+
+
+class Blocks(nn.Layer):
+ def __init__(self,
+ block,
+ ch_in,
+ ch_out,
+ count,
+ name_adapter,
+ stage_num,
+ variant='b',
+ groups=1,
+ base_width=64,
+ lr=1.0,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=True,
+ dcn_v2=False,
+ std_senet=False):
+ super(Blocks, self).__init__()
+
+ self.blocks = []
+ for i in range(count):
+ conv_name = name_adapter.fix_layer_warp_name(stage_num, count, i)
+ layer = self.add_sublayer(
+ conv_name,
+ block(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ stride=2 if i == 0 and stage_num != 2 else 1,
+ shortcut=False if i == 0 else True,
+ variant=variant,
+ groups=groups,
+ base_width=base_width,
+ lr=lr,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ dcn_v2=dcn_v2,
+ std_senet=std_senet))
+ self.blocks.append(layer)
+ if i == 0:
+ ch_in = ch_out * block.expansion
+
+ def forward(self, inputs):
+ block_out = inputs
+ for block in self.blocks:
+ block_out = block(block_out)
+ return block_out
+
+
+@register
+@serializable
+class ResNet(nn.Layer):
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ depth=50,
+ ch_in=64,
+ variant='b',
+ lr_mult_list=[1.0, 1.0, 1.0, 1.0],
+ groups=1,
+ base_width=64,
+ norm_type='bn',
+ norm_decay=0,
+ freeze_norm=True,
+ freeze_at=0,
+ return_idx=[0, 1, 2, 3],
+ dcn_v2_stages=[-1],
+ num_stages=4,
+ std_senet=False):
+ """
+ Residual Network, see https://arxiv.org/abs/1512.03385
+
+ Args:
+ depth (int): ResNet depth, should be 18, 34, 50, 101, 152.
+ ch_in (int): output channel of first stage, default 64
+ variant (str): ResNet variant, supports 'a', 'b', 'c', 'd' currently
+            lr_mult_list (list): learning rate ratio of different resnet stages (2,3,4,5);
+                a lower ratio is usually needed for pretrained models
+                obtained via distillation (default: [1.0, 1.0, 1.0, 1.0]).
+ groups (int): group convolution cardinality
+ base_width (int): base width of each group convolution
+            norm_type (str): normalization type, 'bn' or 'sync_bn'
+ norm_decay (float): weight decay for normalization layer weights
+ freeze_norm (bool): freeze normalization layers
+ freeze_at (int): freeze the backbone at which stage
+ return_idx (list): index of the stages whose feature maps are returned
+            dcn_v2_stages (list): indices of stages that use deformable conv v2
+ num_stages (int): total num of stages
+            std_senet (bool): whether to use SENet-style blocks, default False
+ """
+ super(ResNet, self).__init__()
+ self._model_type = 'ResNet' if groups == 1 else 'ResNeXt'
+ assert num_stages >= 1 and num_stages <= 4
+ self.depth = depth
+ self.variant = variant
+ self.groups = groups
+ self.base_width = base_width
+ self.norm_type = norm_type
+ self.norm_decay = norm_decay
+ self.freeze_norm = freeze_norm
+ self.freeze_at = freeze_at
+ if isinstance(return_idx, Integral):
+ return_idx = [return_idx]
+        assert max(return_idx) < num_stages, \
+            'the maximum return index must be smaller than num_stages, ' \
+            'but received maximum return index is {} and num_stages ' \
+            'is {}'.format(max(return_idx), num_stages)
+ self.return_idx = return_idx
+ self.num_stages = num_stages
+ assert len(lr_mult_list) == 4, \
+ "lr_mult_list length must be 4 but got {}".format(len(lr_mult_list))
+        if isinstance(dcn_v2_stages, Integral):
+            dcn_v2_stages = [dcn_v2_stages]
+        assert max(dcn_v2_stages) < num_stages
+        self.dcn_v2_stages = dcn_v2_stages
+
+ block_nums = ResNet_cfg[depth]
+ na = NameAdapter(self)
+
+ conv1_name = na.fix_c1_stage_name()
+ if variant in ['c', 'd']:
+ conv_def = [
+ [3, ch_in // 2, 3, 2, "conv1_1"],
+ [ch_in // 2, ch_in // 2, 3, 1, "conv1_2"],
+ [ch_in // 2, ch_in, 3, 1, "conv1_3"],
+ ]
+ else:
+ conv_def = [[3, ch_in, 7, 2, conv1_name]]
+ self.conv1 = nn.Sequential()
+ for (c_in, c_out, k, s, _name) in conv_def:
+ self.conv1.add_sublayer(
+ _name,
+ ConvNormLayer(
+ ch_in=c_in,
+ ch_out=c_out,
+ filter_size=k,
+ stride=s,
+ groups=1,
+ act='relu',
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ lr=1.0))
+
+ self.ch_in = ch_in
+ ch_out_list = [64, 128, 256, 512]
+ block = BottleNeck if depth >= 50 else BasicBlock
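+        # Depths below 50 use BasicBlock (expansion 1); deeper variants use
+        # BottleNeck (expansion 4), which scales the stage output widths below.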
+
+ self._out_channels = [block.expansion * v for v in ch_out_list]
+ self._out_strides = [4, 8, 16, 32]
+
+ self.res_layers = []
+ for i in range(num_stages):
+ lr_mult = lr_mult_list[i]
+ stage_num = i + 2
+ res_name = "res{}".format(stage_num)
+ res_layer = self.add_sublayer(
+ res_name,
+ Blocks(
+ block,
+ self.ch_in,
+ ch_out_list[i],
+ count=block_nums[i],
+ name_adapter=na,
+ stage_num=stage_num,
+ variant=variant,
+ groups=groups,
+ base_width=base_width,
+ lr=lr_mult,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ dcn_v2=(i in self.dcn_v2_stages),
+ std_senet=std_senet))
+ self.res_layers.append(res_layer)
+ self.ch_in = self._out_channels[i]
+
+ if freeze_at >= 0:
+ self._freeze_parameters(self.conv1)
+ for i in range(min(freeze_at + 1, num_stages)):
+ self._freeze_parameters(self.res_layers[i])
+
+ def _freeze_parameters(self, m):
+ for p in m.parameters():
+ p.stop_gradient = True
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self._out_channels[i], stride=self._out_strides[i])
+ for i in self.return_idx
+ ]
+
+ def forward(self, inputs):
+ x = inputs['image']
+ conv1 = self.conv1(x)
+ x = F.max_pool2d(conv1, kernel_size=3, stride=2, padding=1)
+ outs = []
+ for idx, stage in enumerate(self.res_layers):
+ x = stage(x)
+ if idx in self.return_idx:
+ outs.append(x)
+ return outs
+
+
+@register
+class Res5Head(nn.Layer):
+ def __init__(self, depth=50):
+ super(Res5Head, self).__init__()
+ feat_in, feat_out = [1024, 512]
+ if depth < 50:
+ feat_in = 256
+ na = NameAdapter(self)
+ block = BottleNeck if depth >= 50 else BasicBlock
+ self.res5 = Blocks(
+ block, feat_in, feat_out, count=3, name_adapter=na, stage_num=5)
+ self.feat_out = feat_out if depth < 50 else feat_out * 4
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(
+ channels=self.feat_out,
+ stride=16, )]
+
+ def forward(self, roi_feat, stage=0):
+ y = self.res5(roi_feat)
+ return y
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/senet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/senet.py
new file mode 100644
index 000000000..eb0bad33f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/senet.py
@@ -0,0 +1,139 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle.nn as nn
+
+from ppdet.core.workspace import register, serializable
+from .resnet import ResNet, Blocks, BasicBlock, BottleNeck
+from .name_adapter import NameAdapter
+from ..shape_spec import ShapeSpec
+
+__all__ = ['SENet', 'SERes5Head']
+
+
+@register
+@serializable
+class SENet(ResNet):
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ depth=50,
+ variant='b',
+ lr_mult_list=[1.0, 1.0, 1.0, 1.0],
+ groups=1,
+ base_width=64,
+ norm_type='bn',
+ norm_decay=0,
+ freeze_norm=True,
+ freeze_at=0,
+ return_idx=[0, 1, 2, 3],
+ dcn_v2_stages=[-1],
+ std_senet=True,
+ num_stages=4):
+ """
+ Squeeze-and-Excitation Networks, see https://arxiv.org/abs/1709.01507
+
+ Args:
+ depth (int): SENet depth, should be 50, 101, 152
+ variant (str): ResNet variant, supports 'a', 'b', 'c', 'd' currently
+            lr_mult_list (list): learning rate ratio of different resnet stages (2,3,4,5);
+                a lower ratio is usually needed for pretrained models
+                obtained via distillation (default: [1.0, 1.0, 1.0, 1.0]).
+ groups (int): group convolution cardinality
+ base_width (int): base width of each group convolution
+            norm_type (str): normalization type, 'bn' or 'sync_bn'
+ norm_decay (float): weight decay for normalization layer weights
+ freeze_norm (bool): freeze normalization layers
+ freeze_at (int): freeze the backbone at which stage
+ return_idx (list): index of the stages whose feature maps are returned
+            dcn_v2_stages (list): indices of stages that use deformable conv v2
+ std_senet (bool): whether use senet, default True
+ num_stages (int): total num of stages
+ """
+
+ super(SENet, self).__init__(
+ depth=depth,
+ variant=variant,
+ lr_mult_list=lr_mult_list,
+ ch_in=128,
+ groups=groups,
+ base_width=base_width,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ freeze_at=freeze_at,
+ return_idx=return_idx,
+ dcn_v2_stages=dcn_v2_stages,
+ std_senet=std_senet,
+ num_stages=num_stages)
+
+
+@register
+class SERes5Head(nn.Layer):
+ def __init__(self,
+ depth=50,
+ variant='b',
+ lr_mult=1.0,
+ groups=1,
+ base_width=64,
+ norm_type='bn',
+ norm_decay=0,
+ dcn_v2=False,
+ freeze_norm=False,
+ std_senet=True):
+ """
+ SERes5Head layer
+
+ Args:
+ depth (int): SENet depth, should be 50, 101, 152
+ variant (str): ResNet variant, supports 'a', 'b', 'c', 'd' currently
+            lr_mult (float): learning rate ratio of SERes5Head, default as 1.0.
+ groups (int): group convolution cardinality
+ base_width (int): base width of each group convolution
+            norm_type (str): normalization type, 'bn' or 'sync_bn'
+ norm_decay (float): weight decay for normalization layer weights
+            dcn_v2 (bool): whether to use deformable conv v2, default False
+ std_senet (bool): whether use senet, default True
+
+ """
+ super(SERes5Head, self).__init__()
+ ch_out = 512
+ ch_in = 256 if depth < 50 else 1024
+ na = NameAdapter(self)
+ block = BottleNeck if depth >= 50 else BasicBlock
+ self.res5 = Blocks(
+ block,
+ ch_in,
+ ch_out,
+ count=3,
+ name_adapter=na,
+ stage_num=5,
+ variant=variant,
+ groups=groups,
+ base_width=base_width,
+ lr=lr_mult,
+ norm_type=norm_type,
+ norm_decay=norm_decay,
+ freeze_norm=freeze_norm,
+ dcn_v2=dcn_v2,
+ std_senet=std_senet)
+ self.ch_out = ch_out * block.expansion
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(
+ channels=self.ch_out,
+ stride=16, )]
+
+ def forward(self, roi_feat):
+ y = self.res5(roi_feat)
+ return y
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/shufflenet_v2.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/shufflenet_v2.py
new file mode 100644
index 000000000..59b0502a1
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/shufflenet_v2.py
@@ -0,0 +1,246 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+from paddle import ParamAttr
+from paddle.nn import Conv2D, MaxPool2D, AdaptiveAvgPool2D, BatchNorm
+from paddle.nn.initializer import KaimingNormal
+from paddle.regularizer import L2Decay
+
+from ppdet.core.workspace import register, serializable
+from numbers import Integral
+from ..shape_spec import ShapeSpec
+from ppdet.modeling.ops import channel_shuffle
+
+__all__ = ['ShuffleNetV2']
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ groups=1,
+ act=None):
+ super(ConvBNLayer, self).__init__()
+ self._conv = Conv2D(
+ in_channels=in_channels,
+ out_channels=out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ groups=groups,
+ weight_attr=ParamAttr(initializer=KaimingNormal()),
+ bias_attr=False)
+
+ self._batch_norm = BatchNorm(
+ out_channels,
+ param_attr=ParamAttr(regularizer=L2Decay(0.0)),
+ bias_attr=ParamAttr(regularizer=L2Decay(0.0)),
+ act=act)
+
+ def forward(self, inputs):
+ y = self._conv(inputs)
+ y = self._batch_norm(y)
+ return y
+
+
+class InvertedResidual(nn.Layer):
+ def __init__(self, in_channels, out_channels, stride, act="relu"):
+ super(InvertedResidual, self).__init__()
+ self._conv_pw = ConvBNLayer(
+ in_channels=in_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ self._conv_dw = ConvBNLayer(
+ in_channels=out_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=out_channels // 2,
+ act=None)
+ self._conv_linear = ConvBNLayer(
+ in_channels=out_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+
+ def forward(self, inputs):
+ x1, x2 = paddle.split(
+ inputs,
+ num_or_sections=[inputs.shape[1] // 2, inputs.shape[1] // 2],
+ axis=1)
+ x2 = self._conv_pw(x2)
+ x2 = self._conv_dw(x2)
+ x2 = self._conv_linear(x2)
+ out = paddle.concat([x1, x2], axis=1)
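+        # The channel shuffle interleaves the untouched half x1 with the
+        # transformed half x2 so the next block's split mixes both branches.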
+ return channel_shuffle(out, 2)
+
+
+class InvertedResidualDS(nn.Layer):
+ def __init__(self, in_channels, out_channels, stride, act="relu"):
+ super(InvertedResidualDS, self).__init__()
+
+ # branch1
+ self._conv_dw_1 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=in_channels,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=in_channels,
+ act=None)
+ self._conv_linear_1 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ # branch2
+ self._conv_pw_2 = ConvBNLayer(
+ in_channels=in_channels,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+ self._conv_dw_2 = ConvBNLayer(
+ in_channels=out_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ groups=out_channels // 2,
+ act=None)
+ self._conv_linear_2 = ConvBNLayer(
+ in_channels=out_channels // 2,
+ out_channels=out_channels // 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ act=act)
+
+ def forward(self, inputs):
+ x1 = self._conv_dw_1(inputs)
+ x1 = self._conv_linear_1(x1)
+ x2 = self._conv_pw_2(inputs)
+ x2 = self._conv_dw_2(x2)
+ x2 = self._conv_linear_2(x2)
+ out = paddle.concat([x1, x2], axis=1)
+
+ return channel_shuffle(out, 2)
+
+
+@register
+@serializable
+class ShuffleNetV2(nn.Layer):
+ def __init__(self, scale=1.0, act="relu", feature_maps=[5, 13, 17]):
+ super(ShuffleNetV2, self).__init__()
+ self.scale = scale
+ if isinstance(feature_maps, Integral):
+ feature_maps = [feature_maps]
+ self.feature_maps = feature_maps
+ stage_repeats = [4, 8, 4]
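+        # Channel table layout: [placeholder, stem conv1, stage2, stage3,
+        # stage4, classifier conv5]; the -1 slot only pads the indexing, and
+        # the last entry is unused by this detection backbone.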
+
+ if scale == 0.25:
+ stage_out_channels = [-1, 24, 24, 48, 96, 512]
+ elif scale == 0.33:
+ stage_out_channels = [-1, 24, 32, 64, 128, 512]
+ elif scale == 0.5:
+ stage_out_channels = [-1, 24, 48, 96, 192, 1024]
+ elif scale == 1.0:
+ stage_out_channels = [-1, 24, 116, 232, 464, 1024]
+ elif scale == 1.5:
+ stage_out_channels = [-1, 24, 176, 352, 704, 1024]
+ elif scale == 2.0:
+ stage_out_channels = [-1, 24, 224, 488, 976, 2048]
+ else:
+ raise NotImplementedError("This scale size:[" + str(scale) +
+ "] is not implemented!")
+
+ self._out_channels = []
+ self._feature_idx = 0
+ # 1. conv1
+ self._conv1 = ConvBNLayer(
+ in_channels=3,
+ out_channels=stage_out_channels[1],
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ act=act)
+ self._max_pool = MaxPool2D(kernel_size=3, stride=2, padding=1)
+ self._feature_idx += 1
+
+ # 2. bottleneck sequences
+ self._block_list = []
+ for stage_id, num_repeat in enumerate(stage_repeats):
+ for i in range(num_repeat):
+ if i == 0:
+ block = self.add_sublayer(
+ name=str(stage_id + 2) + '_' + str(i + 1),
+ sublayer=InvertedResidualDS(
+ in_channels=stage_out_channels[stage_id + 1],
+ out_channels=stage_out_channels[stage_id + 2],
+ stride=2,
+ act=act))
+ else:
+ block = self.add_sublayer(
+ name=str(stage_id + 2) + '_' + str(i + 1),
+ sublayer=InvertedResidual(
+ in_channels=stage_out_channels[stage_id + 2],
+ out_channels=stage_out_channels[stage_id + 2],
+ stride=1,
+ act=act))
+ self._block_list.append(block)
+ self._feature_idx += 1
+ self._update_out_channels(stage_out_channels[stage_id + 2],
+ self._feature_idx, self.feature_maps)
+
+ def _update_out_channels(self, channel, feature_idx, feature_maps):
+ if feature_idx in feature_maps:
+ self._out_channels.append(channel)
+
+ def forward(self, inputs):
+ y = self._conv1(inputs['image'])
+ y = self._max_pool(y)
+ outs = []
+ for i, inv in enumerate(self._block_list):
+ y = inv(y)
+ if i + 2 in self.feature_maps:
+ outs.append(y)
+
+ return outs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/swin_transformer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/swin_transformer.py
new file mode 100644
index 000000000..027e4f67a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/swin_transformer.py
@@ -0,0 +1,740 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/microsoft/Swin-Transformer/blob/main/models/swin_transformer.py
+The copyright of microsoft/Swin-Transformer is as follows:
+MIT License [see LICENSE for details]
+"""
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import TruncatedNormal, Constant, Assign
+from ppdet.modeling.shape_spec import ShapeSpec
+from ppdet.core.workspace import register, serializable
+import numpy as np
+
+# Common initializations
+ones_ = Constant(value=1.)
+zeros_ = Constant(value=0.)
+trunc_normal_ = TruncatedNormal(std=.02)
+
+
+# Common Functions
+def to_2tuple(x):
+ return tuple([x] * 2)
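+# e.g. to_2tuple(7) -> (7, 7)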
+
+
+def add_parameter(layer, datas, name=None):
+ parameter = layer.create_parameter(
+ shape=(datas.shape), default_initializer=Assign(datas))
+ if name:
+ layer.add_parameter(name, parameter)
+ return parameter
+
+
+# Common Layers
+def drop_path(x, drop_prob=0., training=False):
+ """
+ Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
+ the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
+ See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ...
+ """
+ if drop_prob == 0. or not training:
+ return x
+ keep_prob = paddle.to_tensor(1 - drop_prob)
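+    # Per-sample Bernoulli mask; surviving residual paths are scaled by
+    # 1/keep_prob so the expected activation matches evaluation mode.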
+ shape = (paddle.shape(x)[0], ) + (1, ) * (x.ndim - 1)
+ random_tensor = keep_prob + paddle.rand(shape, dtype=x.dtype)
+ random_tensor = paddle.floor(random_tensor) # binarize
+ output = x.divide(keep_prob) * random_tensor
+ return output
+
+
+class DropPath(nn.Layer):
+ def __init__(self, drop_prob=None):
+ super(DropPath, self).__init__()
+ self.drop_prob = drop_prob
+
+ def forward(self, x):
+ return drop_path(x, self.drop_prob, self.training)
+
+
+class Identity(nn.Layer):
+ def __init__(self):
+ super(Identity, self).__init__()
+
+ def forward(self, input):
+ return input
+
+
+class Mlp(nn.Layer):
+ def __init__(self,
+ in_features,
+ hidden_features=None,
+ out_features=None,
+ act_layer=nn.GELU,
+ drop=0.):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.reshape(
+ [B, H // window_size, window_size, W // window_size, window_size, C])
+ windows = x.transpose([0, 1, 3, 2, 4, 5]).reshape(
+ [-1, window_size, window_size, C])
+ return windows
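+# e.g. a (2, 56, 56, 96) feature map with window_size=7 partitions into
+# (2*8*8, 7, 7, 96) = (128, 7, 7, 96) windows.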
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.reshape(
+ [B, H // window_size, W // window_size, window_size, window_size, -1])
+ x = x.transpose([0, 1, 3, 2, 4, 5]).reshape([B, H, W, -1])
+ return x
+
+
+class WindowAttention(nn.Layer):
+ """ Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both of shifted and non-shifted window.
+
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(self,
+ dim,
+ window_size,
+ num_heads,
+ qkv_bias=True,
+ qk_scale=None,
+ attn_drop=0.,
+ proj_drop=0.):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim**-0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = add_parameter(
+ self,
+ paddle.zeros(((2 * window_size[0] - 1) * (2 * window_size[1] - 1),
+ num_heads))) # 2*Wh-1 * 2*Ww-1, nH
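+        # Every ordered pair of tokens in a window shares a learned bias looked
+        # up by its relative (dh, dw) offset; (2*Wh-1)*(2*Ww-1) rows cover all
+        # possible offsets within a window.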
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = paddle.arange(self.window_size[0])
+ coords_w = paddle.arange(self.window_size[1])
+ coords = paddle.stack(paddle.meshgrid(
+ [coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = paddle.flatten(coords, 1) # 2, Wh*Ww
+ coords_flatten_1 = coords_flatten.unsqueeze(axis=2)
+ coords_flatten_2 = coords_flatten.unsqueeze(axis=1)
+ relative_coords = coords_flatten_1 - coords_flatten_2
+ relative_coords = relative_coords.transpose(
+ [1, 2, 0]) # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[
+ 0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
+        # register as a buffer so it is saved and moved with the layer but not
+        # trained (registering over an existing attribute would raise an error)
+        self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias_attr=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table)
+ self.softmax = nn.Softmax(axis=-1)
+
+ def forward(self, x, mask=None):
+ """ Forward function.
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = self.qkv(x).reshape(
+ [B_, N, 3, self.num_heads, C // self.num_heads]).transpose(
+ [2, 0, 3, 1, 4])
+ q, k, v = qkv[0], qkv[1], qkv[2]
+
+ q = q * self.scale
+ attn = paddle.mm(q, k.transpose([0, 1, 3, 2]))
+
+ index = self.relative_position_index.reshape([-1])
+
+ relative_position_bias = paddle.index_select(
+ self.relative_position_bias_table, index)
+ relative_position_bias = relative_position_bias.reshape([
+ self.window_size[0] * self.window_size[1],
+ self.window_size[0] * self.window_size[1], -1
+ ]) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.transpose(
+ [2, 0, 1]) # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.reshape([B_ // nW, nW, self.num_heads, N, N
+ ]) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.reshape([-1, self.num_heads, N, N])
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ # x = (attn @ v).transpose(1, 2).reshape([B_, N, C])
+ x = paddle.mm(attn, v).transpose([0, 2, 1, 3]).reshape([B_, N, C])
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+
+class SwinTransformerBlock(nn.Layer):
+ """ Swin Transformer Block.
+ Args:
+ dim (int): Number of input channels.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Layer, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Layer, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self,
+ dim,
+ num_heads,
+ window_size=7,
+ shift_size=0,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.,
+ attn_drop=0.,
+ drop_path=0.,
+ act_layer=nn.GELU,
+ norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim,
+ window_size=to_2tuple(self.window_size),
+ num_heads=num_heads,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ attn_drop=attn_drop,
+ proj_drop=drop)
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0. else Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(in_features=dim,
+ hidden_features=mlp_hidden_dim,
+ act_layer=act_layer,
+ drop=drop)
+
+ self.H = None
+ self.W = None
+
+ def forward(self, x, mask_matrix):
+ """ Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ mask_matrix: Attention mask for cyclic shift.
+ """
+ B, L, C = x.shape
+ H, W = self.H, self.W
+ assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.reshape([B, H, W, C])
+
+ # pad feature maps to multiples of window size
+ pad_l = pad_t = 0
+ pad_r = (self.window_size - W % self.window_size) % self.window_size
+ pad_b = (self.window_size - H % self.window_size) % self.window_size
+ x = F.pad(x, [0, pad_l, 0, pad_b, 0, pad_r, 0, pad_t])
+ _, Hp, Wp, _ = x.shape
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = paddle.roll(
+ x, shifts=(-self.shift_size, -self.shift_size), axis=(1, 2))
+ attn_mask = mask_matrix
+ else:
+ shifted_x = x
+ attn_mask = None
+
+ # partition windows
+ x_windows = window_partition(
+ shifted_x, self.window_size) # nW*B, window_size, window_size, C
+ x_windows = x_windows.reshape(
+ [-1, self.window_size * self.window_size,
+ C]) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA
+ attn_windows = self.attn(
+ x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
+
+ # merge windows
+ attn_windows = attn_windows.reshape(
+ [-1, self.window_size, self.window_size, C])
+ shifted_x = window_reverse(attn_windows, self.window_size, Hp,
+ Wp) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = paddle.roll(
+ shifted_x,
+ shifts=(self.shift_size, self.shift_size),
+ axis=(1, 2))
+ else:
+ x = shifted_x
+
+ if pad_r > 0 or pad_b > 0:
+ x = x[:, :H, :W, :]
+
+ x = x.reshape([B, H * W, C])
+
+ # FFN
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+
+class PatchMerging(nn.Layer):
+ r""" Patch Merging Layer.
+ Args:
+ dim (int): Number of input channels.
+ norm_layer (nn.Layer, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias_attr=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x, H, W):
+ """ Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+
+ x = x.reshape([B, H, W, C])
+
+ # padding
+ pad_input = (H % 2 == 1) or (W % 2 == 1)
+ if pad_input:
+ x = F.pad(x, [0, 0, 0, W % 2, 0, H % 2])
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = paddle.concat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.reshape([B, H * W // 4, 4 * C]) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
+
+class BasicLayer(nn.Layer):
+ """ A basic Swin Transformer layer for one stage.
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ depth (int): Number of blocks.
+ num_heads (int): Number of attention heads.
+ window_size (int): Local window size.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Layer, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Layer | None, optional): Downsample layer at the end of the layer. Default: None
+ """
+
+ def __init__(self,
+ dim,
+ depth,
+ num_heads,
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.,
+ attn_drop=0.,
+ drop_path=0.,
+ norm_layer=nn.LayerNorm,
+ downsample=None):
+ super().__init__()
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.depth = depth
+
+ # build blocks
+ self.blocks = nn.LayerList([
+ SwinTransformerBlock(
+ dim=dim,
+ num_heads=num_heads,
+ window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop,
+ attn_drop=attn_drop,
+ drop_path=drop_path[i]
+ if isinstance(drop_path, np.ndarray) else drop_path,
+ norm_layer=norm_layer) for i in range(depth)
+ ])
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+
+ def forward(self, x, H, W):
+ """ Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+        img_mask = paddle.zeros([1, Hp, Wp, 1], dtype='float32')  # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+ mask_windows = window_partition(
+ img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.reshape(
+ [-1, self.window_size * self.window_size])
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ huns = -100.0 * paddle.ones_like(attn_mask)
+ attn_mask = huns * (attn_mask != 0).astype("float32")
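+        # After the cyclic shift, tokens from different image regions can share
+        # a window; the -100 bias drives their attention weights to effectively
+        # zero after softmax.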
+
+ for blk in self.blocks:
+ blk.H, blk.W = H, W
+ x = blk(x, attn_mask)
+ if self.downsample is not None:
+ x_down = self.downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x, H, W, x_down, Wh, Ww
+ else:
+ return x, H, W, x, H, W
+
+
+class PatchEmbed(nn.Layer):
+ """ Image to Patch Embedding
+ Args:
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Layer, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ patch_size = to_2tuple(patch_size)
+ self.patch_size = patch_size
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ self.proj = nn.Conv2D(
+ in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ B, C, H, W = x.shape
+ # assert [H, W] == self.img_size[:2], "Input image size ({H}*{W}) doesn't match model ({}*{}).".format(H, W, self.img_size[0], self.img_size[1])
+ if W % self.patch_size[1] != 0:
+ x = F.pad(x, [0, self.patch_size[1] - W % self.patch_size[1], 0, 0])
+ if H % self.patch_size[0] != 0:
+ x = F.pad(x, [0, 0, 0, self.patch_size[0] - H % self.patch_size[0]])
+
+ x = self.proj(x)
+ if self.norm is not None:
+ _, _, Wh, Ww = x.shape
+ x = x.flatten(2).transpose([0, 2, 1])
+ x = self.norm(x)
+ x = x.transpose([0, 2, 1]).reshape([-1, self.embed_dim, Wh, Ww])
+
+ return x
+
+
+@register
+@serializable
+class SwinTransformer(nn.Layer):
+ """ Swin Transformer
+ A PaddlePaddle impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
+ https://arxiv.org/pdf/2103.14030
+
+ Args:
+        pretrain_img_size (int | tuple(int)): Input image size used in pretraining. Default: 224
+        patch_size (int | tuple(int)): Patch size. Default: 4
+        in_chans (int): Number of input image channels. Default: 3
+ embed_dim (int): Patch embedding dimension. Default: 96
+ depths (tuple(int)): Depth of each Swin Transformer layer.
+ num_heads (tuple(int)): Number of attention heads in different layers.
+ window_size (int): Window size. Default: 7
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
+ qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
+ drop_rate (float): Dropout rate. Default: 0
+ attn_drop_rate (float): Attention dropout rate. Default: 0
+ drop_path_rate (float): Stochastic depth rate. Default: 0.1
+ norm_layer (nn.Layer): Normalization layer. Default: nn.LayerNorm.
+        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
+        patch_norm (bool): If True, add normalization after patch embedding. Default: True
+        out_indices (tuple(int)): Indices of stages whose feature maps are returned. Default: (0, 1, 2, 3)
+        frozen_stages (int): Freeze stages up to this index (-1 means none). Default: -1
+        pretrained (str): URL or local path of pretrained weights to load. Default: None
+ """
+
+ def __init__(self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.,
+ attn_drop_rate=0.,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ pretrained=None):
+ super(SwinTransformer, self).__init__()
+
+ self.pretrain_img_size = pretrain_img_size
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.out_indices = out_indices
+ self.frozen_stages = frozen_stages
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ patch_size=patch_size,
+ in_chans=in_chans,
+ embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+
+ # absolute position embedding
+ if self.ape:
+ pretrain_img_size = to_2tuple(pretrain_img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [
+ pretrain_img_size[0] // patch_size[0],
+ pretrain_img_size[1] // patch_size[1]
+ ]
+
+ self.absolute_pos_embed = add_parameter(
+ self,
+ paddle.zeros((1, embed_dim, patches_resolution[0],
+ patches_resolution[1])))
+ trunc_normal_(self.absolute_pos_embed)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = np.linspace(0, drop_path_rate,
+ sum(depths)) # stochastic depth decay rule
+
+ # build layers
+ self.layers = nn.LayerList()
+ for i_layer in range(self.num_layers):
+ layer = BasicLayer(
+ dim=int(embed_dim * 2**i_layer),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop_rate,
+ attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
+ norm_layer=norm_layer,
+ downsample=PatchMerging
+ if (i_layer < self.num_layers - 1) else None)
+ self.layers.append(layer)
+
+ num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)]
+ self.num_features = num_features
+
+ # add a norm layer for each output
+ for i_layer in out_indices:
+ layer = norm_layer(num_features[i_layer])
+ layer_name = f'norm{i_layer}'
+ self.add_sublayer(layer_name, layer)
+
+ self.apply(self._init_weights)
+ self._freeze_stages()
+ if pretrained:
+ if 'http' in pretrained: #URL
+ path = paddle.utils.download.get_weights_path_from_url(
+ pretrained)
+ else: #model in local path
+ path = pretrained
+ self.set_state_dict(paddle.load(path))
+
+    def _freeze_stages(self):
+        if self.frozen_stages >= 0:
+            self.patch_embed.eval()
+            for param in self.patch_embed.parameters():
+                # Paddle freezes parameters via stop_gradient, not requires_grad
+                param.stop_gradient = True
+
+        if self.frozen_stages >= 1 and self.ape:
+            self.absolute_pos_embed.stop_gradient = True
+
+        if self.frozen_stages >= 2:
+            self.pos_drop.eval()
+            for i in range(0, self.frozen_stages - 1):
+                m = self.layers[i]
+                m.eval()
+                for param in m.parameters():
+                    param.stop_gradient = True
+
+ def _init_weights(self, m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ zeros_(m.bias)
+ elif isinstance(m, nn.LayerNorm):
+ zeros_(m.bias)
+ ones_(m.weight)
+
+ def forward(self, x):
+ """Forward function."""
+ x = self.patch_embed(x['image'])
+ _, _, Wh, Ww = x.shape
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(
+ self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
+ x = (x + absolute_pos_embed).flatten(2).transpose([0, 2, 1])
+ else:
+ x = x.flatten(2).transpose([0, 2, 1])
+ x = self.pos_drop(x)
+ outs = []
+ for i in range(self.num_layers):
+ layer = self.layers[i]
+ x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
+ if i in self.out_indices:
+ norm_layer = getattr(self, f'norm{i}')
+ x_out = norm_layer(x_out)
+ out = x_out.reshape((-1, H, W, self.num_features[i])).transpose(
+ (0, 3, 1, 2))
+ outs.append(out)
+
+ return tuple(outs)
+
+ @property
+ def out_shape(self):
+ out_strides = [4, 8, 16, 32]
+ return [
+ ShapeSpec(
+ channels=self.num_features[i], stride=out_strides[i])
+ for i in self.out_indices
+ ]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/vgg.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/vgg.py
new file mode 100644
index 000000000..e05753209
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/backbones/vgg.py
@@ -0,0 +1,210 @@
+from __future__ import division
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn import Conv2D, MaxPool2D
+from ppdet.core.workspace import register, serializable
+from ..shape_spec import ShapeSpec
+
+__all__ = ['VGG']
+
+VGG_cfg = {16: [2, 2, 3, 3, 3], 19: [2, 2, 4, 4, 4]}
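+# Number of stacked 3x3 convs in each of the five blocks:
+# VGG-16 uses 2-2-3-3-3 and VGG-19 uses 2-2-4-4-4.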
+
+
+class ConvBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ groups,
+ pool_size=2,
+ pool_stride=2,
+ pool_padding=0,
+ name=None):
+ super(ConvBlock, self).__init__()
+
+ self.groups = groups
+ self.conv0 = nn.Conv2D(
+ in_channels=in_channels,
+ out_channels=out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1)
+ self.conv_out_list = []
+ for i in range(1, groups):
+ conv_out = self.add_sublayer(
+ 'conv{}'.format(i),
+ Conv2D(
+ in_channels=out_channels,
+ out_channels=out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1))
+ self.conv_out_list.append(conv_out)
+
+ self.pool = MaxPool2D(
+ kernel_size=pool_size,
+ stride=pool_stride,
+ padding=pool_padding,
+ ceil_mode=True)
+
+ def forward(self, inputs):
+ out = self.conv0(inputs)
+ out = F.relu(out)
+ for conv_i in self.conv_out_list:
+ out = conv_i(out)
+ out = F.relu(out)
+ pool = self.pool(out)
+ return out, pool
+
+
+class ExtraBlock(nn.Layer):
+ def __init__(self,
+ in_channels,
+ mid_channels,
+ out_channels,
+ padding,
+ stride,
+ kernel_size,
+ name=None):
+ super(ExtraBlock, self).__init__()
+
+ self.conv0 = Conv2D(
+ in_channels=in_channels,
+ out_channels=mid_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self.conv1 = Conv2D(
+ in_channels=mid_channels,
+ out_channels=out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding)
+
+ def forward(self, inputs):
+ out = self.conv0(inputs)
+ out = F.relu(out)
+ out = self.conv1(out)
+ out = F.relu(out)
+ return out
+
+
+class L2NormScale(nn.Layer):
+ def __init__(self, num_channels, scale=1.0):
+ super(L2NormScale, self).__init__()
+ self.scale = self.create_parameter(
+ attr=ParamAttr(initializer=paddle.nn.initializer.Constant(scale)),
+ shape=[num_channels])
+
+ def forward(self, inputs):
+ out = F.normalize(inputs, axis=1, epsilon=1e-10)
+ out = self.scale.unsqueeze(0).unsqueeze(2).unsqueeze(3) * out
+ return out
+
+
+@register
+@serializable
+class VGG(nn.Layer):
+ def __init__(self,
+ depth=16,
+ normalizations=[20., -1, -1, -1, -1, -1],
+ extra_block_filters=[[256, 512, 1, 2, 3], [128, 256, 1, 2, 3],
+ [128, 256, 0, 1, 3],
+ [128, 256, 0, 1, 3]]):
+ super(VGG, self).__init__()
+
+ assert depth in [16, 19], \
+ "depth as 16/19 supported currently, but got {}".format(depth)
+ self.depth = depth
+ self.groups = VGG_cfg[depth]
+ self.normalizations = normalizations
+ self.extra_block_filters = extra_block_filters
+
+ self._out_channels = []
+
+ self.conv_block_0 = ConvBlock(
+ 3, 64, self.groups[0], 2, 2, 0, name="conv1_")
+ self.conv_block_1 = ConvBlock(
+ 64, 128, self.groups[1], 2, 2, 0, name="conv2_")
+ self.conv_block_2 = ConvBlock(
+ 128, 256, self.groups[2], 2, 2, 0, name="conv3_")
+ self.conv_block_3 = ConvBlock(
+ 256, 512, self.groups[3], 2, 2, 0, name="conv4_")
+ self.conv_block_4 = ConvBlock(
+ 512, 512, self.groups[4], 3, 1, 1, name="conv5_")
+ self._out_channels.append(512)
+
+ self.fc6 = Conv2D(
+ in_channels=512,
+ out_channels=1024,
+ kernel_size=3,
+ stride=1,
+ padding=6,
+ dilation=6)
+ self.fc7 = Conv2D(
+ in_channels=1024,
+ out_channels=1024,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self._out_channels.append(1024)
+
+ # extra block
+ self.extra_convs = []
+ last_channels = 1024
+ for i, v in enumerate(self.extra_block_filters):
+            assert len(v) == 5, "each extra_block_filters entry must have 5 elements"
+ extra_conv = self.add_sublayer("conv{}".format(6 + i),
+ ExtraBlock(last_channels, v[0], v[1],
+ v[2], v[3], v[4]))
+ last_channels = v[1]
+ self.extra_convs.append(extra_conv)
+ self._out_channels.append(last_channels)
+
+ self.norms = []
+ for i, n in enumerate(self.normalizations):
+ if n != -1:
+ norm = self.add_sublayer("norm{}".format(i),
+ L2NormScale(
+ self.extra_block_filters[i][1], n))
+ else:
+ norm = None
+ self.norms.append(norm)
+
+ def forward(self, inputs):
+ outputs = []
+
+ conv, pool = self.conv_block_0(inputs['image'])
+ conv, pool = self.conv_block_1(pool)
+ conv, pool = self.conv_block_2(pool)
+ conv, pool = self.conv_block_3(pool)
+ outputs.append(conv)
+
+ conv, pool = self.conv_block_4(pool)
+ out = self.fc6(pool)
+ out = F.relu(out)
+ out = self.fc7(out)
+ out = F.relu(out)
+ outputs.append(out)
+
+ if not self.extra_block_filters:
+ return outputs
+
+ # extra block
+ for extra_conv in self.extra_convs:
+ out = extra_conv(out)
+ outputs.append(out)
+
+ for i, n in enumerate(self.normalizations):
+ if n != -1:
+ outputs[i] = self.norms[i](outputs[i])
+
+ return outputs
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
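+
+
+# Design note (editor's reading of the code above): this VGG variant follows
+# the SSD layout -- the conv4_3 feature (512 channels, L2-normalized with a
+# learned scale, 20.0 by default via `normalizations`), fc6/fc7 reworked as
+# a dilated 3x3 conv (dilation=6) plus a 1x1 conv, and a chain of extra
+# blocks whose outputs are all returned as detection feature maps.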
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/bbox_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/bbox_utils.py
new file mode 100644
index 000000000..e040ba69b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/bbox_utils.py
@@ -0,0 +1,753 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+import paddle
+import numpy as np
+
+
+def bbox2delta(src_boxes, tgt_boxes, weights):
+ src_w = src_boxes[:, 2] - src_boxes[:, 0]
+ src_h = src_boxes[:, 3] - src_boxes[:, 1]
+ src_ctr_x = src_boxes[:, 0] + 0.5 * src_w
+ src_ctr_y = src_boxes[:, 1] + 0.5 * src_h
+
+ tgt_w = tgt_boxes[:, 2] - tgt_boxes[:, 0]
+ tgt_h = tgt_boxes[:, 3] - tgt_boxes[:, 1]
+ tgt_ctr_x = tgt_boxes[:, 0] + 0.5 * tgt_w
+ tgt_ctr_y = tgt_boxes[:, 1] + 0.5 * tgt_h
+
+ wx, wy, ww, wh = weights
+ dx = wx * (tgt_ctr_x - src_ctr_x) / src_w
+ dy = wy * (tgt_ctr_y - src_ctr_y) / src_h
+ dw = ww * paddle.log(tgt_w / src_w)
+ dh = wh * paddle.log(tgt_h / src_h)
+
+ deltas = paddle.stack((dx, dy, dw, dh), axis=1)
+ return deltas
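+
+
+# Illustrative sketch (not part of the library): encoding one box against
+# another with unit weights. Assuming:
+#   src = paddle.to_tensor([[0., 0., 10., 10.]])
+#   tgt = paddle.to_tensor([[1., 1., 11., 11.]])
+#   bbox2delta(src, tgt, [1., 1., 1., 1.])
+# both boxes are 10x10 and the center shifts by 1 pixel, so
+# dx = dy = 1/10 = 0.1 and dw = dh = log(10/10) = 0.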
+
+
+def delta2bbox(deltas, boxes, weights):
+ clip_scale = math.log(1000.0 / 16)
+
+ widths = boxes[:, 2] - boxes[:, 0]
+ heights = boxes[:, 3] - boxes[:, 1]
+ ctr_x = boxes[:, 0] + 0.5 * widths
+ ctr_y = boxes[:, 1] + 0.5 * heights
+
+ wx, wy, ww, wh = weights
+ dx = deltas[:, 0::4] / wx
+ dy = deltas[:, 1::4] / wy
+ dw = deltas[:, 2::4] / ww
+ dh = deltas[:, 3::4] / wh
+ # Prevent sending too large values into paddle.exp()
+ dw = paddle.clip(dw, max=clip_scale)
+ dh = paddle.clip(dh, max=clip_scale)
+
+ pred_ctr_x = dx * widths.unsqueeze(1) + ctr_x.unsqueeze(1)
+ pred_ctr_y = dy * heights.unsqueeze(1) + ctr_y.unsqueeze(1)
+ pred_w = paddle.exp(dw) * widths.unsqueeze(1)
+ pred_h = paddle.exp(dh) * heights.unsqueeze(1)
+
+ pred_boxes = []
+ pred_boxes.append(pred_ctr_x - 0.5 * pred_w)
+ pred_boxes.append(pred_ctr_y - 0.5 * pred_h)
+ pred_boxes.append(pred_ctr_x + 0.5 * pred_w)
+ pred_boxes.append(pred_ctr_y + 0.5 * pred_h)
+ pred_boxes = paddle.stack(pred_boxes, axis=-1)
+
+ return pred_boxes
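+
+
+# delta2bbox is the inverse mapping: feeding the deltas from the sketch
+# above together with the same source boxes and weights recovers the target
+# coordinates [1., 1., 11., 11.] (up to floating-point error), with dw/dh
+# clipped at log(1000/16) to keep paddle.exp() numerically stable.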
+
+
+def expand_bbox(bboxes, scale):
+ w_half = (bboxes[:, 2] - bboxes[:, 0]) * .5
+ h_half = (bboxes[:, 3] - bboxes[:, 1]) * .5
+ x_c = (bboxes[:, 2] + bboxes[:, 0]) * .5
+ y_c = (bboxes[:, 3] + bboxes[:, 1]) * .5
+
+ w_half *= scale
+ h_half *= scale
+
+ bboxes_exp = np.zeros(bboxes.shape, dtype=np.float32)
+ bboxes_exp[:, 0] = x_c - w_half
+ bboxes_exp[:, 2] = x_c + w_half
+ bboxes_exp[:, 1] = y_c - h_half
+ bboxes_exp[:, 3] = y_c + h_half
+
+ return bboxes_exp
+
+
+def clip_bbox(boxes, im_shape):
+ h, w = im_shape[0], im_shape[1]
+ x1 = boxes[:, 0].clip(0, w)
+ y1 = boxes[:, 1].clip(0, h)
+ x2 = boxes[:, 2].clip(0, w)
+ y2 = boxes[:, 3].clip(0, h)
+ return paddle.stack([x1, y1, x2, y2], axis=1)
+
+
+def nonempty_bbox(boxes, min_size=0, return_mask=False):
+ w = boxes[:, 2] - boxes[:, 0]
+ h = boxes[:, 3] - boxes[:, 1]
+ mask = paddle.logical_and(h > min_size, w > min_size)
+ if return_mask:
+ return mask
+ keep = paddle.nonzero(mask).flatten()
+ return keep
+
+
+def bbox_area(boxes):
+ return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
+
+
+def bbox_overlaps(boxes1, boxes2):
+ """
+ Calculate overlaps between boxes1 and boxes2
+
+ Args:
+ boxes1 (Tensor): boxes with shape [M, 4]
+ boxes2 (Tensor): boxes with shape [N, 4]
+
+ Return:
+ overlaps (Tensor): overlaps between boxes1 and boxes2 with shape [M, N]
+ """
+ M = boxes1.shape[0]
+ N = boxes2.shape[0]
+ if M * N == 0:
+ return paddle.zeros([M, N], dtype='float32')
+ area1 = bbox_area(boxes1)
+ area2 = bbox_area(boxes2)
+
+ xy_max = paddle.minimum(
+ paddle.unsqueeze(boxes1, 1)[:, :, 2:], boxes2[:, 2:])
+ xy_min = paddle.maximum(
+ paddle.unsqueeze(boxes1, 1)[:, :, :2], boxes2[:, :2])
+ width_height = xy_max - xy_min
+ width_height = width_height.clip(min=0)
+ inter = width_height.prod(axis=2)
+
+ overlaps = paddle.where(inter > 0, inter /
+ (paddle.unsqueeze(area1, 1) + area2 - inter),
+ paddle.zeros_like(inter))
+ return overlaps
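+
+
+# Usage sketch: two 2x2 boxes overlapping by half their width.
+#   b1 = paddle.to_tensor([[0., 0., 2., 2.]])
+#   b2 = paddle.to_tensor([[1., 0., 3., 2.]])
+#   bbox_overlaps(b1, b2)  # -> [[0.3333]]: inter = 2, union = 4 + 4 - 2 = 6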
+
+
+def batch_bbox_overlaps(bboxes1,
+ bboxes2,
+ mode='iou',
+ is_aligned=False,
+ eps=1e-6):
+ """Calculate overlap between two set of bboxes.
+ If ``is_aligned `` is ``False``, then calculate the overlaps between each
+ bbox of bboxes1 and bboxes2, otherwise the overlaps between each aligned
+ pair of bboxes1 and bboxes2.
+ Args:
+ bboxes1 (Tensor): shape (B, m, 4) in format or empty.
+ bboxes2 (Tensor): shape (B, n, 4) in format or empty.
+ B indicates the batch dim, in shape (B1, B2, ..., Bn).
+ If ``is_aligned `` is ``True``, then m and n must be equal.
+ mode (str): "iou" (intersection over union) or "iof" (intersection over
+ foreground).
+ is_aligned (bool, optional): If True, then m and n must be equal.
+ Default False.
+ eps (float, optional): A value added to the denominator for numerical
+ stability. Default 1e-6.
+ Returns:
+ Tensor: shape (m, n) if ``is_aligned `` is False else shape (m,)
+ """
+ assert mode in ['iou', 'iof', 'giou'], 'Unsupported mode {}'.format(mode)
+    # Either the boxes are empty or the length of the boxes' last dimension is 4
+ assert (bboxes1.shape[-1] == 4 or bboxes1.shape[0] == 0)
+ assert (bboxes2.shape[-1] == 4 or bboxes2.shape[0] == 0)
+
+ # Batch dim must be the same
+ # Batch dim: (B1, B2, ... Bn)
+ assert bboxes1.shape[:-2] == bboxes2.shape[:-2]
+ batch_shape = bboxes1.shape[:-2]
+
+ rows = bboxes1.shape[-2] if bboxes1.shape[0] > 0 else 0
+ cols = bboxes2.shape[-2] if bboxes2.shape[0] > 0 else 0
+ if is_aligned:
+ assert rows == cols
+
+ if rows * cols == 0:
+ if is_aligned:
+ return paddle.full(batch_shape + (rows, ), 1)
+ else:
+ return paddle.full(batch_shape + (rows, cols), 1)
+
+ area1 = (bboxes1[:, 2] - bboxes1[:, 0]) * (bboxes1[:, 3] - bboxes1[:, 1])
+ area2 = (bboxes2[:, 2] - bboxes2[:, 0]) * (bboxes2[:, 3] - bboxes2[:, 1])
+
+ if is_aligned:
+ lt = paddle.maximum(bboxes1[:, :2], bboxes2[:, :2]) # [B, rows, 2]
+ rb = paddle.minimum(bboxes1[:, 2:], bboxes2[:, 2:]) # [B, rows, 2]
+
+ wh = (rb - lt).clip(min=0) # [B, rows, 2]
+ overlap = wh[:, 0] * wh[:, 1]
+
+ if mode in ['iou', 'giou']:
+ union = area1 + area2 - overlap
+ else:
+ union = area1
+ if mode == 'giou':
+ enclosed_lt = paddle.minimum(bboxes1[:, :2], bboxes2[:, :2])
+ enclosed_rb = paddle.maximum(bboxes1[:, 2:], bboxes2[:, 2:])
+ else:
+ lt = paddle.maximum(bboxes1[:, :2].reshape([rows, 1, 2]),
+ bboxes2[:, :2]) # [B, rows, cols, 2]
+ rb = paddle.minimum(bboxes1[:, 2:].reshape([rows, 1, 2]),
+ bboxes2[:, 2:]) # [B, rows, cols, 2]
+
+ wh = (rb - lt).clip(min=0) # [B, rows, cols, 2]
+ overlap = wh[:, :, 0] * wh[:, :, 1]
+
+ if mode in ['iou', 'giou']:
+ union = area1.reshape([rows,1]) \
+ + area2.reshape([1,cols]) - overlap
+ else:
+ union = area1[:, None]
+ if mode == 'giou':
+ enclosed_lt = paddle.minimum(bboxes1[:, :2].reshape([rows, 1, 2]),
+ bboxes2[:, :2])
+ enclosed_rb = paddle.maximum(bboxes1[:, 2:].reshape([rows, 1, 2]),
+ bboxes2[:, 2:])
+
+ eps = paddle.to_tensor([eps])
+ union = paddle.maximum(union, eps)
+ ious = overlap / union
+ if mode in ['iou', 'iof']:
+ return ious
+ # calculate gious
+ enclose_wh = (enclosed_rb - enclosed_lt).clip(min=0)
+ enclose_area = enclose_wh[:, :, 0] * enclose_wh[:, :, 1]
+ enclose_area = paddle.maximum(enclose_area, eps)
+ gious = ious - (enclose_area - union) / enclose_area
+ return 1 - gious
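+
+
+# Editor's note on the return value: for mode='giou' this function returns
+# 1 - GIoU (a loss-style value in [0, 2]), while 'iou'/'iof' return the raw
+# overlap ratio. Also, as written, the giou branch indexes enclose_wh with
+# three dims, so it assumes is_aligned=False.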
+
+
+def xywh2xyxy(box):
+ x, y, w, h = box
+ x1 = x - w * 0.5
+ y1 = y - h * 0.5
+ x2 = x + w * 0.5
+ y2 = y + h * 0.5
+ return [x1, y1, x2, y2]
+
+
+def make_grid(h, w, dtype):
+ yv, xv = paddle.meshgrid([paddle.arange(h), paddle.arange(w)])
+ return paddle.stack((xv, yv), 2).cast(dtype=dtype)
+
+
+def decode_yolo(box, anchor, downsample_ratio):
+ """decode yolo box
+
+ Args:
+ box (list): [x, y, w, h], all have the shape [b, na, h, w, 1]
+ anchor (list): anchor with the shape [na, 2]
+ downsample_ratio (int): downsample ratio, default 32
+
+ Return:
+ box (list): decoded box, [x, y, w, h], all have the shape [b, na, h, w, 1]
+ """
+ x, y, w, h = box
+ na, grid_h, grid_w = x.shape[1:4]
+ grid = make_grid(grid_h, grid_w, x.dtype).reshape((1, 1, grid_h, grid_w, 2))
+ x1 = (x + grid[:, :, :, :, 0:1]) / grid_w
+ y1 = (y + grid[:, :, :, :, 1:2]) / grid_h
+
+ anchor = paddle.to_tensor(anchor)
+ anchor = paddle.cast(anchor, x.dtype)
+ anchor = anchor.reshape((1, na, 1, 1, 2))
+ w1 = paddle.exp(w) * anchor[:, :, :, :, 0:1] / (downsample_ratio * grid_w)
+ h1 = paddle.exp(h) * anchor[:, :, :, :, 1:2] / (downsample_ratio * grid_h)
+
+ return [x1, y1, w1, h1]
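+
+
+# Decoding sketch, assuming a 13x13 grid at downsample_ratio=32: a raw
+# prediction x = 0.5 in grid column j maps to x1 = (0.5 + j) / 13 in
+# normalized image coordinates, and w = 0 with anchor width 116 maps to
+# w1 = exp(0) * 116 / (32 * 13), i.e. 116/416 of the image width.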
+
+
+def iou_similarity(box1, box2, eps=1e-9):
+ """Calculate iou of box1 and box2
+
+ Args:
+ box1 (Tensor): box with the shape [N, M1, 4]
+ box2 (Tensor): box with the shape [N, M2, 4]
+
+ Return:
+ iou (Tensor): iou between box1 and box2 with the shape [N, M1, M2]
+ """
+ box1 = box1.unsqueeze(2) # [N, M1, 4] -> [N, M1, 1, 4]
+ box2 = box2.unsqueeze(1) # [N, M2, 4] -> [N, 1, M2, 4]
+ px1y1, px2y2 = box1[:, :, :, 0:2], box1[:, :, :, 2:4]
+ gx1y1, gx2y2 = box2[:, :, :, 0:2], box2[:, :, :, 2:4]
+ x1y1 = paddle.maximum(px1y1, gx1y1)
+ x2y2 = paddle.minimum(px2y2, gx2y2)
+ overlap = (x2y2 - x1y1).clip(0).prod(-1)
+ area1 = (px2y2 - px1y1).clip(0).prod(-1)
+ area2 = (gx2y2 - gx1y1).clip(0).prod(-1)
+ union = area1 + area2 - overlap + eps
+ return overlap / union
+
+
+def bbox_iou(box1, box2, giou=False, diou=False, ciou=False, eps=1e-9):
+ """calculate the iou of box1 and box2
+
+ Args:
+ box1 (list): [x, y, w, h], all have the shape [b, na, h, w, 1]
+ box2 (list): [x, y, w, h], all have the shape [b, na, h, w, 1]
+ giou (bool): whether use giou or not, default False
+ diou (bool): whether use diou or not, default False
+ ciou (bool): whether use ciou or not, default False
+ eps (float): epsilon to avoid divide by zero
+
+ Return:
+        iou (Tensor): iou of box1 and box2, with the shape [b, na, h, w, 1]
+ """
+ px1, py1, px2, py2 = box1
+ gx1, gy1, gx2, gy2 = box2
+ x1 = paddle.maximum(px1, gx1)
+ y1 = paddle.maximum(py1, gy1)
+ x2 = paddle.minimum(px2, gx2)
+ y2 = paddle.minimum(py2, gy2)
+
+ overlap = ((x2 - x1).clip(0)) * ((y2 - y1).clip(0))
+
+ area1 = (px2 - px1) * (py2 - py1)
+ area1 = area1.clip(0)
+
+ area2 = (gx2 - gx1) * (gy2 - gy1)
+ area2 = area2.clip(0)
+
+ union = area1 + area2 - overlap + eps
+ iou = overlap / union
+
+ if giou or ciou or diou:
+ # convex w, h
+ cw = paddle.maximum(px2, gx2) - paddle.minimum(px1, gx1)
+ ch = paddle.maximum(py2, gy2) - paddle.minimum(py1, gy1)
+ if giou:
+ c_area = cw * ch + eps
+ return iou - (c_area - union) / c_area
+ else:
+ # convex diagonal squared
+ c2 = cw**2 + ch**2 + eps
+ # center distance
+ rho2 = ((px1 + px2 - gx1 - gx2)**2 + (py1 + py2 - gy1 - gy2)**2) / 4
+ if diou:
+ return iou - rho2 / c2
+ else:
+ w1, h1 = px2 - px1, py2 - py1 + eps
+ w2, h2 = gx2 - gx1, gy2 - gy1 + eps
+ delta = paddle.atan(w1 / h1) - paddle.atan(w2 / h2)
+ v = (4 / math.pi**2) * paddle.pow(delta, 2)
+ alpha = v / (1 + eps - iou + v)
+ alpha.stop_gradient = True
+ return iou - (rho2 / c2 + v * alpha)
+ else:
+ return iou
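+
+
+# The variants above follow their standard definitions: GIoU subtracts the
+# normalized empty area of the smallest enclosing box, DIoU penalizes the
+# squared center distance (rho2 / c2), and CIoU adds the aspect-ratio term
+# v = (4 / pi^2) * (atan(w1/h1) - atan(w2/h2))^2, weighted by alpha.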
+
+
+def rect2rbox(bboxes):
+ """
+ :param bboxes: shape (n, 4) (xmin, ymin, xmax, ymax)
+ :return: dbboxes: shape (n, 5) (x_ctr, y_ctr, w, h, angle)
+ """
+ bboxes = bboxes.reshape(-1, 4)
+ num_boxes = bboxes.shape[0]
+
+ x_ctr = (bboxes[:, 2] + bboxes[:, 0]) / 2.0
+ y_ctr = (bboxes[:, 3] + bboxes[:, 1]) / 2.0
+ edges1 = np.abs(bboxes[:, 2] - bboxes[:, 0])
+ edges2 = np.abs(bboxes[:, 3] - bboxes[:, 1])
+ angles = np.zeros([num_boxes], dtype=bboxes.dtype)
+
+ inds = edges1 < edges2
+
+ rboxes = np.stack((x_ctr, y_ctr, edges1, edges2, angles), axis=1)
+ rboxes[inds, 2] = edges2[inds]
+ rboxes[inds, 3] = edges1[inds]
+ rboxes[inds, 4] = np.pi / 2.0
+ return rboxes
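+
+
+# Example: a 2x4 rect (taller than wide) gets its edges swapped and an
+# angle of pi/2, so the returned (w, h) always satisfies w >= h and the
+# angle records the orientation of the longer side.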
+
+
+def delta2rbox(rrois,
+ deltas,
+ means=[0, 0, 0, 0, 0],
+ stds=[1, 1, 1, 1, 1],
+ wh_ratio_clip=1e-6):
+ """
+ :param rrois: (cx, cy, w, h, theta)
+ :param deltas: (dx, dy, dw, dh, dtheta)
+ :param means:
+ :param stds:
+ :param wh_ratio_clip:
+ :return:
+ """
+ means = paddle.to_tensor(means)
+ stds = paddle.to_tensor(stds)
+ deltas = paddle.reshape(deltas, [-1, deltas.shape[-1]])
+ denorm_deltas = deltas * stds + means
+
+ dx = denorm_deltas[:, 0]
+ dy = denorm_deltas[:, 1]
+ dw = denorm_deltas[:, 2]
+ dh = denorm_deltas[:, 3]
+ dangle = denorm_deltas[:, 4]
+
+ max_ratio = np.abs(np.log(wh_ratio_clip))
+ dw = paddle.clip(dw, min=-max_ratio, max=max_ratio)
+ dh = paddle.clip(dh, min=-max_ratio, max=max_ratio)
+
+ rroi_x = rrois[:, 0]
+ rroi_y = rrois[:, 1]
+ rroi_w = rrois[:, 2]
+ rroi_h = rrois[:, 3]
+ rroi_angle = rrois[:, 4]
+
+ gx = dx * rroi_w * paddle.cos(rroi_angle) - dy * rroi_h * paddle.sin(
+ rroi_angle) + rroi_x
+ gy = dx * rroi_w * paddle.sin(rroi_angle) + dy * rroi_h * paddle.cos(
+ rroi_angle) + rroi_y
+ gw = rroi_w * dw.exp()
+ gh = rroi_h * dh.exp()
+ ga = np.pi * dangle + rroi_angle
+ ga = (ga + np.pi / 4) % np.pi - np.pi / 4
+ ga = paddle.to_tensor(ga)
+
+ gw = paddle.to_tensor(gw, dtype='float32')
+ gh = paddle.to_tensor(gh, dtype='float32')
+ bboxes = paddle.stack([gx, gy, gw, gh, ga], axis=-1)
+ return bboxes
+
+
+def rbox2delta(proposals, gt, means=[0, 0, 0, 0, 0], stds=[1, 1, 1, 1, 1]):
+ """
+
+ Args:
+ proposals:
+ gt:
+ means: 1x5
+ stds: 1x5
+
+ Returns:
+
+ """
+ proposals = proposals.astype(np.float64)
+
+ PI = np.pi
+
+ gt_widths = gt[..., 2]
+ gt_heights = gt[..., 3]
+ gt_angle = gt[..., 4]
+
+ proposals_widths = proposals[..., 2]
+ proposals_heights = proposals[..., 3]
+ proposals_angle = proposals[..., 4]
+
+ coord = gt[..., 0:2] - proposals[..., 0:2]
+ dx = (np.cos(proposals[..., 4]) * coord[..., 0] + np.sin(proposals[..., 4])
+ * coord[..., 1]) / proposals_widths
+ dy = (-np.sin(proposals[..., 4]) * coord[..., 0] + np.cos(proposals[..., 4])
+ * coord[..., 1]) / proposals_heights
+ dw = np.log(gt_widths / proposals_widths)
+ dh = np.log(gt_heights / proposals_heights)
+ da = (gt_angle - proposals_angle)
+
+ da = (da + PI / 4) % PI - PI / 4
+ da /= PI
+
+ deltas = np.stack([dx, dy, dw, dh, da], axis=-1)
+ means = np.array(means, dtype=deltas.dtype)
+ stds = np.array(stds, dtype=deltas.dtype)
+ deltas = (deltas - means) / stds
+ deltas = deltas.astype(np.float32)
+ return deltas
+
+
+def bbox_decode(bbox_preds,
+ anchors,
+ means=[0, 0, 0, 0, 0],
+ stds=[1, 1, 1, 1, 1]):
+ """decode bbox from deltas
+ Args:
+ bbox_preds: [N,H,W,5]
+ anchors: [H*W,5]
+ return:
+ bboxes: [N,H,W,5]
+ """
+ means = paddle.to_tensor(means)
+ stds = paddle.to_tensor(stds)
+ num_imgs, H, W, _ = bbox_preds.shape
+ bboxes_list = []
+ for img_id in range(num_imgs):
+ bbox_pred = bbox_preds[img_id]
+        # bbox_pred.shape = [H, W, 5]
+ bbox_delta = bbox_pred
+ anchors = paddle.to_tensor(anchors)
+ bboxes = delta2rbox(
+ anchors, bbox_delta, means, stds, wh_ratio_clip=1e-6)
+ bboxes = paddle.reshape(bboxes, [H, W, 5])
+ bboxes_list.append(bboxes)
+ return paddle.stack(bboxes_list, axis=0)
+
+
+def poly2rbox(polys):
+ """
+ poly:[x0,y0,x1,y1,x2,y2,x3,y3]
+ to
+ rotated_boxes:[x_ctr,y_ctr,w,h,angle]
+ """
+ rotated_boxes = []
+ for poly in polys:
+ poly = np.array(poly[:8], dtype=np.float32)
+
+ pt1 = (poly[0], poly[1])
+ pt2 = (poly[2], poly[3])
+ pt3 = (poly[4], poly[5])
+ pt4 = (poly[6], poly[7])
+
+ edge1 = np.sqrt((pt1[0] - pt2[0]) * (pt1[0] - pt2[0]) + (pt1[1] - pt2[
+ 1]) * (pt1[1] - pt2[1]))
+ edge2 = np.sqrt((pt2[0] - pt3[0]) * (pt2[0] - pt3[0]) + (pt2[1] - pt3[
+ 1]) * (pt2[1] - pt3[1]))
+
+ width = max(edge1, edge2)
+ height = min(edge1, edge2)
+
+ rbox_angle = 0
+ if edge1 > edge2:
+ rbox_angle = np.arctan2(
+ float(pt2[1] - pt1[1]), float(pt2[0] - pt1[0]))
+ elif edge2 >= edge1:
+ rbox_angle = np.arctan2(
+ float(pt4[1] - pt1[1]), float(pt4[0] - pt1[0]))
+
+ def norm_angle(angle, range=[-np.pi / 4, np.pi]):
+ return (angle - range[0]) % range[1] + range[0]
+
+ rbox_angle = norm_angle(rbox_angle)
+
+ x_ctr = float(pt1[0] + pt3[0]) / 2
+ y_ctr = float(pt1[1] + pt3[1]) / 2
+ rotated_box = np.array([x_ctr, y_ctr, width, height, rbox_angle])
+ rotated_boxes.append(rotated_box)
+ ret_rotated_boxes = np.array(rotated_boxes)
+ assert ret_rotated_boxes.shape[1] == 5
+ return ret_rotated_boxes
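+
+
+# Sketch: the axis-aligned 4x2 rectangle [0,0, 4,0, 4,2, 0,2] maps to
+# [x_ctr=2, y_ctr=1, w=4, h=2, angle=0]; the angle follows the longer edge
+# (pt1 -> pt2 here) and norm_angle folds it into [-pi/4, 3*pi/4).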
+
+
+def cal_line_length(point1, point2):
+ return math.sqrt(
+ math.pow(point1[0] - point2[0], 2) + math.pow(point1[1] - point2[1], 2))
+
+
+def get_best_begin_point_single(coordinate):
+ x1, y1, x2, y2, x3, y3, x4, y4 = coordinate
+ xmin = min(x1, x2, x3, x4)
+ ymin = min(y1, y2, y3, y4)
+ xmax = max(x1, x2, x3, x4)
+ ymax = max(y1, y2, y3, y4)
+ combinate = [[[x1, y1], [x2, y2], [x3, y3], [x4, y4]],
+ [[x4, y4], [x1, y1], [x2, y2], [x3, y3]],
+ [[x3, y3], [x4, y4], [x1, y1], [x2, y2]],
+ [[x2, y2], [x3, y3], [x4, y4], [x1, y1]]]
+ dst_coordinate = [[xmin, ymin], [xmax, ymin], [xmax, ymax], [xmin, ymax]]
+ force = 100000000.0
+ force_flag = 0
+ for i in range(4):
+ temp_force = cal_line_length(combinate[i][0], dst_coordinate[0]) \
+ + cal_line_length(combinate[i][1], dst_coordinate[1]) \
+ + cal_line_length(combinate[i][2], dst_coordinate[2]) \
+ + cal_line_length(combinate[i][3], dst_coordinate[3])
+ if temp_force < force:
+ force = temp_force
+ force_flag = i
+ return np.array(combinate[force_flag]).reshape(8)
+
+
+def rbox2poly_np(rrects):
+ """
+ rrect:[x_ctr,y_ctr,w,h,angle]
+ to
+ poly:[x0,y0,x1,y1,x2,y2,x3,y3]
+ """
+ polys = []
+ for i in range(rrects.shape[0]):
+ rrect = rrects[i]
+ # x_ctr, y_ctr, width, height, angle = rrect[:5]
+ x_ctr = rrect[0]
+ y_ctr = rrect[1]
+ width = rrect[2]
+ height = rrect[3]
+ angle = rrect[4]
+ tl_x, tl_y, br_x, br_y = -width / 2, -height / 2, width / 2, height / 2
+ rect = np.array([[tl_x, br_x, br_x, tl_x], [tl_y, tl_y, br_y, br_y]])
+ R = np.array([[np.cos(angle), -np.sin(angle)],
+ [np.sin(angle), np.cos(angle)]])
+ poly = R.dot(rect)
+ x0, x1, x2, x3 = poly[0, :4] + x_ctr
+ y0, y1, y2, y3 = poly[1, :4] + y_ctr
+ poly = np.array([x0, y0, x1, y1, x2, y2, x3, y3], dtype=np.float32)
+ poly = get_best_begin_point_single(poly)
+ polys.append(poly)
+ polys = np.array(polys)
+ return polys
+
+
+def rbox2poly(rrects):
+ """
+ rrect:[x_ctr,y_ctr,w,h,angle]
+ to
+ poly:[x0,y0,x1,y1,x2,y2,x3,y3]
+ """
+ N = paddle.shape(rrects)[0]
+
+ x_ctr = rrects[:, 0]
+ y_ctr = rrects[:, 1]
+ width = rrects[:, 2]
+ height = rrects[:, 3]
+ angle = rrects[:, 4]
+
+ tl_x, tl_y, br_x, br_y = -width * 0.5, -height * 0.5, width * 0.5, height * 0.5
+
+ normal_rects = paddle.stack(
+ [tl_x, br_x, br_x, tl_x, tl_y, tl_y, br_y, br_y], axis=0)
+ normal_rects = paddle.reshape(normal_rects, [2, 4, N])
+ normal_rects = paddle.transpose(normal_rects, [2, 0, 1])
+
+ sin, cos = paddle.sin(angle), paddle.cos(angle)
+ # M.shape=[N,2,2]
+ M = paddle.stack([cos, -sin, sin, cos], axis=0)
+ M = paddle.reshape(M, [2, 2, N])
+ M = paddle.transpose(M, [2, 0, 1])
+
+ # polys:[N,8]
+ polys = paddle.matmul(M, normal_rects)
+ polys = paddle.transpose(polys, [2, 1, 0])
+ polys = paddle.reshape(polys, [-1, N])
+ polys = paddle.transpose(polys, [1, 0])
+
+ tmp = paddle.stack(
+ [x_ctr, y_ctr, x_ctr, y_ctr, x_ctr, y_ctr, x_ctr, y_ctr], axis=1)
+ polys = polys + tmp
+ return polys
+
+
+def bbox_iou_np_expand(box1, box2, x1y1x2y2=True, eps=1e-16):
+ """
+ Calculate the iou of box1 and box2 with numpy.
+
+ Args:
+ box1 (ndarray): [N, 4]
+ box2 (ndarray): [M, 4], usually N != M
+        x1y1x2y2 (bool): whether boxes are in x1y1x2y2 style, default True
+ eps (float): epsilon to avoid divide by zero
+ Return:
+ iou (ndarray): iou of box1 and box2, [N, M]
+ """
+ N, M = len(box1), len(box2) # usually N != M
+ if x1y1x2y2:
+ b1_x1, b1_y1 = box1[:, 0], box1[:, 1]
+ b1_x2, b1_y2 = box1[:, 2], box1[:, 3]
+ b2_x1, b2_y1 = box2[:, 0], box2[:, 1]
+ b2_x2, b2_y2 = box2[:, 2], box2[:, 3]
+ else:
+ # cxcywh style
+ # Transform from center and width to exact coordinates
+ b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2
+ b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2
+ b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2
+ b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2
+
+ # get the coordinates of the intersection rectangle
+ inter_rect_x1 = np.zeros((N, M), dtype=np.float32)
+ inter_rect_y1 = np.zeros((N, M), dtype=np.float32)
+ inter_rect_x2 = np.zeros((N, M), dtype=np.float32)
+ inter_rect_y2 = np.zeros((N, M), dtype=np.float32)
+ for i in range(len(box2)):
+ inter_rect_x1[:, i] = np.maximum(b1_x1, b2_x1[i])
+ inter_rect_y1[:, i] = np.maximum(b1_y1, b2_y1[i])
+ inter_rect_x2[:, i] = np.minimum(b1_x2, b2_x2[i])
+ inter_rect_y2[:, i] = np.minimum(b1_y2, b2_y2[i])
+ # Intersection area
+ inter_area = np.maximum(inter_rect_x2 - inter_rect_x1, 0) * np.maximum(
+ inter_rect_y2 - inter_rect_y1, 0)
+ # Union Area
+ b1_area = np.repeat(
+ ((b1_x2 - b1_x1) * (b1_y2 - b1_y1)).reshape(-1, 1), M, axis=-1)
+ b2_area = np.repeat(
+ ((b2_x2 - b2_x1) * (b2_y2 - b2_y1)).reshape(1, -1), N, axis=0)
+
+ ious = inter_area / (b1_area + b2_area - inter_area + eps)
+ return ious
+
+
+def bbox2distance(points, bbox, max_dis=None, eps=0.1):
+ """Decode bounding box based on distances.
+ Args:
+ points (Tensor): Shape (n, 2), [x, y].
+ bbox (Tensor): Shape (n, 4), "xyxy" format
+ max_dis (float): Upper bound of the distance.
+        eps (float): a small value to ensure target < max_dis rather than <=
+    Returns:
+        Tensor: distances with shape (n, 4), (left, top, right, bottom).
+ """
+ left = points[:, 0] - bbox[:, 0]
+ top = points[:, 1] - bbox[:, 1]
+ right = bbox[:, 2] - points[:, 0]
+ bottom = bbox[:, 3] - points[:, 1]
+ if max_dis is not None:
+ left = left.clip(min=0, max=max_dis - eps)
+ top = top.clip(min=0, max=max_dis - eps)
+ right = right.clip(min=0, max=max_dis - eps)
+ bottom = bottom.clip(min=0, max=max_dis - eps)
+ return paddle.stack([left, top, right, bottom], -1)
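+
+
+# Worked example: the point (5, 5) inside the box [2, 3, 9, 8] encodes to
+# [left=3, top=2, right=4, bottom=3]; distance2bbox below is the inverse
+# and maps the point plus those distances back to the original box.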
+
+
+def distance2bbox(points, distance, max_shape=None):
+ """Decode distance prediction to bounding box.
+ Args:
+ points (Tensor): Shape (n, 2), [x, y].
+ distance (Tensor): Distance from the given point to 4
+ boundaries (left, top, right, bottom).
+ max_shape (tuple): Shape of the image.
+ Returns:
+ Tensor: Decoded bboxes.
+ """
+ x1 = points[:, 0] - distance[:, 0]
+ y1 = points[:, 1] - distance[:, 1]
+ x2 = points[:, 0] + distance[:, 2]
+ y2 = points[:, 1] + distance[:, 3]
+ if max_shape is not None:
+ x1 = x1.clip(min=0, max=max_shape[1])
+ y1 = y1.clip(min=0, max=max_shape[0])
+ x2 = x2.clip(min=0, max=max_shape[1])
+ y2 = y2.clip(min=0, max=max_shape[0])
+ return paddle.stack([x1, y1, x2, y2], -1)
+
+
+def bbox_center(boxes):
+ """Get bbox centers from boxes.
+ Args:
+ boxes (Tensor): boxes with shape (N, 4), "xmin, ymin, xmax, ymax" format.
+ Returns:
+ Tensor: boxes centers with shape (N, 2), "cx, cy" format.
+ """
+ boxes_cx = (boxes[:, 0] + boxes[:, 2]) / 2
+ boxes_cy = (boxes[:, 1] + boxes[:, 3]) / 2
+ return paddle.stack([boxes_cx, boxes_cy], axis=-1)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__init__.py
new file mode 100644
index 000000000..b6b928608
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__init__.py
@@ -0,0 +1,53 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import bbox_head
+from . import mask_head
+from . import yolo_head
+from . import roi_extractor
+from . import ssd_head
+from . import fcos_head
+from . import solov2_head
+from . import ttf_head
+from . import cascade_head
+from . import face_head
+from . import s2anet_head
+from . import keypoint_hrhrnet_head
+from . import centernet_head
+from . import gfl_head
+from . import simota_head
+from . import pico_head
+from . import detr_head
+from . import sparsercnn_head
+from . import tood_head
+
+from .bbox_head import *
+from .mask_head import *
+from .yolo_head import *
+from .roi_extractor import *
+from .ssd_head import *
+from .fcos_head import *
+from .solov2_head import *
+from .ttf_head import *
+from .cascade_head import *
+from .face_head import *
+from .s2anet_head import *
+from .keypoint_hrhrnet_head import *
+from .centernet_head import *
+from .gfl_head import *
+from .simota_head import *
+from .pico_head import *
+from .detr_head import *
+from .sparsercnn_head import *
+from .tood_head import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..9813bf8c0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/bbox_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/bbox_head.cpython-37.pyc
new file mode 100644
index 000000000..6e35d8b73
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/bbox_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/cascade_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/cascade_head.cpython-37.pyc
new file mode 100644
index 000000000..fc1fc24f8
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/cascade_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/centernet_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/centernet_head.cpython-37.pyc
new file mode 100644
index 000000000..78dfbeb45
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/centernet_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/detr_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/detr_head.cpython-37.pyc
new file mode 100644
index 000000000..d72d46bc6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/detr_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/face_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/face_head.cpython-37.pyc
new file mode 100644
index 000000000..c77948da9
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/face_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/fcos_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/fcos_head.cpython-37.pyc
new file mode 100644
index 000000000..b2efd5f47
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/fcos_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/gfl_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/gfl_head.cpython-37.pyc
new file mode 100644
index 000000000..2478604a1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/gfl_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/keypoint_hrhrnet_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/keypoint_hrhrnet_head.cpython-37.pyc
new file mode 100644
index 000000000..fff6a345f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/keypoint_hrhrnet_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/mask_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/mask_head.cpython-37.pyc
new file mode 100644
index 000000000..0dce48ccb
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/mask_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/pico_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/pico_head.cpython-37.pyc
new file mode 100644
index 000000000..ef672f171
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/pico_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/roi_extractor.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/roi_extractor.cpython-37.pyc
new file mode 100644
index 000000000..f432d300c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/roi_extractor.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/s2anet_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/s2anet_head.cpython-37.pyc
new file mode 100644
index 000000000..3cf65d2bd
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/s2anet_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/simota_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/simota_head.cpython-37.pyc
new file mode 100644
index 000000000..302f42806
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/simota_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/solov2_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/solov2_head.cpython-37.pyc
new file mode 100644
index 000000000..ed7b79f62
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/solov2_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/sparsercnn_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/sparsercnn_head.cpython-37.pyc
new file mode 100644
index 000000000..ab0357c75
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/sparsercnn_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ssd_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ssd_head.cpython-37.pyc
new file mode 100644
index 000000000..8d47ca572
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ssd_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/tood_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/tood_head.cpython-37.pyc
new file mode 100644
index 000000000..4fdc6a1d0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/tood_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ttf_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ttf_head.cpython-37.pyc
new file mode 100644
index 000000000..bc43a5621
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/ttf_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/yolo_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/yolo_head.cpython-37.pyc
new file mode 100644
index 000000000..6e73b0ef0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/__pycache__/yolo_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/bbox_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/bbox_head.py
new file mode 100644
index 000000000..e4d7d6878
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/bbox_head.py
@@ -0,0 +1,376 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, XavierUniform, KaimingNormal
+from paddle.regularizer import L2Decay
+
+from ppdet.core.workspace import register, create
+from .roi_extractor import RoIAlign
+from ..shape_spec import ShapeSpec
+from ..bbox_utils import bbox2delta
+from ppdet.modeling.layers import ConvNormLayer
+
+__all__ = ['TwoFCHead', 'XConvNormHead', 'BBoxHead']
+
+
+@register
+class TwoFCHead(nn.Layer):
+ """
+ RCNN bbox head with Two fc layers to extract feature
+
+ Args:
+ in_channel (int): Input channel which can be derived by from_config
+ out_channel (int): Output channel
+ resolution (int): Resolution of input feature map, default 7
+ """
+
+ def __init__(self, in_channel=256, out_channel=1024, resolution=7):
+ super(TwoFCHead, self).__init__()
+ self.in_channel = in_channel
+ self.out_channel = out_channel
+ fan = in_channel * resolution * resolution
+ self.fc6 = nn.Linear(
+ in_channel * resolution * resolution,
+ out_channel,
+ weight_attr=paddle.ParamAttr(
+ initializer=XavierUniform(fan_out=fan)))
+ self.fc6.skip_quant = True
+
+ self.fc7 = nn.Linear(
+ out_channel,
+ out_channel,
+ weight_attr=paddle.ParamAttr(initializer=XavierUniform()))
+ self.fc7.skip_quant = True
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ s = input_shape
+ s = s[0] if isinstance(s, (list, tuple)) else s
+ return {'in_channel': s.channels}
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, )]
+
+ def forward(self, rois_feat):
+ rois_feat = paddle.flatten(rois_feat, start_axis=1, stop_axis=-1)
+ fc6 = self.fc6(rois_feat)
+ fc6 = F.relu(fc6)
+ fc7 = self.fc7(fc6)
+ fc7 = F.relu(fc7)
+ return fc7
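+
+
+# Shape sketch: with the defaults, a RoI feature of [N, 256, 7, 7] is
+# flattened to [N, 12544] and mapped by fc6/fc7 (both followed by ReLU)
+# down to [N, 1024].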
+
+
+@register
+class XConvNormHead(nn.Layer):
+ __shared__ = ['norm_type', 'freeze_norm']
+ """
+    RCNN bbox head with several convolution layers
+
+ Args:
+ in_channel (int): Input channels which can be derived by from_config
+ num_convs (int): The number of conv layers
+ conv_dim (int): The number of channels for the conv layers
+ out_channel (int): Output channels
+ resolution (int): Resolution of input feature map
+ norm_type (string): Norm type, bn, gn, sync_bn are available,
+ default `gn`
+ freeze_norm (bool): Whether to freeze the norm
+ stage_name (string): Prefix name for conv layer, '' by default
+ """
+
+ def __init__(self,
+ in_channel=256,
+ num_convs=4,
+ conv_dim=256,
+ out_channel=1024,
+ resolution=7,
+ norm_type='gn',
+ freeze_norm=False,
+ stage_name=''):
+ super(XConvNormHead, self).__init__()
+ self.in_channel = in_channel
+ self.num_convs = num_convs
+ self.conv_dim = conv_dim
+ self.out_channel = out_channel
+ self.norm_type = norm_type
+ self.freeze_norm = freeze_norm
+
+ self.bbox_head_convs = []
+ fan = conv_dim * 3 * 3
+ initializer = KaimingNormal(fan_in=fan)
+ for i in range(self.num_convs):
+ in_c = in_channel if i == 0 else conv_dim
+ head_conv_name = stage_name + 'bbox_head_conv{}'.format(i)
+ head_conv = self.add_sublayer(
+ head_conv_name,
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=conv_dim,
+ filter_size=3,
+ stride=1,
+ norm_type=self.norm_type,
+ freeze_norm=self.freeze_norm,
+ initializer=initializer))
+ self.bbox_head_convs.append(head_conv)
+
+ fan = conv_dim * resolution * resolution
+ self.fc6 = nn.Linear(
+ conv_dim * resolution * resolution,
+ out_channel,
+ weight_attr=paddle.ParamAttr(
+ initializer=XavierUniform(fan_out=fan)),
+ bias_attr=paddle.ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.)))
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ s = input_shape
+ s = s[0] if isinstance(s, (list, tuple)) else s
+ return {'in_channel': s.channels}
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, )]
+
+ def forward(self, rois_feat):
+ for i in range(self.num_convs):
+ rois_feat = F.relu(self.bbox_head_convs[i](rois_feat))
+ rois_feat = paddle.flatten(rois_feat, start_axis=1, stop_axis=-1)
+ fc6 = F.relu(self.fc6(rois_feat))
+ return fc6
+
+
+@register
+class BBoxHead(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['bbox_assigner', 'bbox_loss']
+ """
+ RCNN bbox head
+
+ Args:
+ head (nn.Layer): Extract feature in bbox head
+ in_channel (int): Input channel after RoI extractor
+ roi_extractor (object): The module of RoI Extractor
+ bbox_assigner (object): The module of Box Assigner, label and sample the
+ box.
+ with_pool (bool): Whether to use pooling for the RoI feature.
+ num_classes (int): The number of classes
+ bbox_weight (List[float]): The weight to get the decode box
+ """
+
+ def __init__(self,
+ head,
+ in_channel,
+ roi_extractor=RoIAlign().__dict__,
+ bbox_assigner='BboxAssigner',
+ with_pool=False,
+ num_classes=80,
+ bbox_weight=[10., 10., 5., 5.],
+ bbox_loss=None):
+ super(BBoxHead, self).__init__()
+ self.head = head
+ self.roi_extractor = roi_extractor
+ if isinstance(roi_extractor, dict):
+ self.roi_extractor = RoIAlign(**roi_extractor)
+ self.bbox_assigner = bbox_assigner
+
+ self.with_pool = with_pool
+ self.num_classes = num_classes
+ self.bbox_weight = bbox_weight
+ self.bbox_loss = bbox_loss
+
+ self.bbox_score = nn.Linear(
+ in_channel,
+ self.num_classes + 1,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0.0, std=0.01)))
+ self.bbox_score.skip_quant = True
+
+ self.bbox_delta = nn.Linear(
+ in_channel,
+ 4 * self.num_classes,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0.0, std=0.001)))
+ self.bbox_delta.skip_quant = True
+ self.assigned_label = None
+ self.assigned_rois = None
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ roi_pooler = cfg['roi_extractor']
+ assert isinstance(roi_pooler, dict)
+ kwargs = RoIAlign.from_config(cfg, input_shape)
+ roi_pooler.update(kwargs)
+ kwargs = {'input_shape': input_shape}
+ head = create(cfg['head'], **kwargs)
+ return {
+ 'roi_extractor': roi_pooler,
+ 'head': head,
+ 'in_channel': head.out_shape[0].channels
+ }
+
+ def forward(self, body_feats=None, rois=None, rois_num=None, inputs=None):
+ """
+ body_feats (list[Tensor]): Feature maps from backbone
+ rois (list[Tensor]): RoIs generated from RPN module
+ rois_num (Tensor): The number of RoIs in each image
+ inputs (dict{Tensor}): The ground-truth of image
+ """
+ if self.training:
+ rois, rois_num, targets = self.bbox_assigner(rois, rois_num, inputs)
+ self.assigned_rois = (rois, rois_num)
+ self.assigned_targets = targets
+
+ rois_feat = self.roi_extractor(body_feats, rois, rois_num)
+ bbox_feat = self.head(rois_feat)
+ if self.with_pool:
+ feat = F.adaptive_avg_pool2d(bbox_feat, output_size=1)
+ feat = paddle.squeeze(feat, axis=[2, 3])
+ else:
+ feat = bbox_feat
+ scores = self.bbox_score(feat)
+ deltas = self.bbox_delta(feat)
+
+ if self.training:
+ loss = self.get_loss(scores, deltas, targets, rois,
+ self.bbox_weight)
+ return loss, bbox_feat
+ else:
+ pred = self.get_prediction(scores, deltas)
+ return pred, self.head
+
+ def get_loss(self, scores, deltas, targets, rois, bbox_weight):
+ """
+ scores (Tensor): scores from bbox head outputs
+ deltas (Tensor): deltas from bbox head outputs
+ targets (list[List[Tensor]]): bbox targets containing tgt_labels, tgt_bboxes and tgt_gt_inds
+ rois (List[Tensor]): RoIs generated in each batch
+ """
+ cls_name = 'loss_bbox_cls'
+ reg_name = 'loss_bbox_reg'
+ loss_bbox = {}
+
+ # TODO: better pass args
+ tgt_labels, tgt_bboxes, tgt_gt_inds = targets
+
+ # bbox cls
+ tgt_labels = paddle.concat(tgt_labels) if len(
+ tgt_labels) > 1 else tgt_labels[0]
+ valid_inds = paddle.nonzero(tgt_labels >= 0).flatten()
+ if valid_inds.shape[0] == 0:
+ loss_bbox[cls_name] = paddle.zeros([1], dtype='float32')
+ else:
+ tgt_labels = tgt_labels.cast('int64')
+ tgt_labels.stop_gradient = True
+ loss_bbox_cls = F.cross_entropy(
+ input=scores, label=tgt_labels, reduction='mean')
+ loss_bbox[cls_name] = loss_bbox_cls
+
+ # bbox reg
+
+ cls_agnostic_bbox_reg = deltas.shape[1] == 4
+
+ fg_inds = paddle.nonzero(
+ paddle.logical_and(tgt_labels >= 0, tgt_labels <
+ self.num_classes)).flatten()
+
+ if fg_inds.numel() == 0:
+ loss_bbox[reg_name] = paddle.zeros([1], dtype='float32')
+ return loss_bbox
+
+ if cls_agnostic_bbox_reg:
+ reg_delta = paddle.gather(deltas, fg_inds)
+ else:
+ fg_gt_classes = paddle.gather(tgt_labels, fg_inds)
+
+ reg_row_inds = paddle.arange(fg_gt_classes.shape[0]).unsqueeze(1)
+ reg_row_inds = paddle.tile(reg_row_inds, [1, 4]).reshape([-1, 1])
+
+ reg_col_inds = 4 * fg_gt_classes.unsqueeze(1) + paddle.arange(4)
+
+ reg_col_inds = reg_col_inds.reshape([-1, 1])
+ reg_inds = paddle.concat([reg_row_inds, reg_col_inds], axis=1)
+
+ reg_delta = paddle.gather(deltas, fg_inds)
+ reg_delta = paddle.gather_nd(reg_delta, reg_inds).reshape([-1, 4])
+ rois = paddle.concat(rois) if len(rois) > 1 else rois[0]
+ tgt_bboxes = paddle.concat(tgt_bboxes) if len(
+ tgt_bboxes) > 1 else tgt_bboxes[0]
+
+ reg_target = bbox2delta(rois, tgt_bboxes, bbox_weight)
+ reg_target = paddle.gather(reg_target, fg_inds)
+ reg_target.stop_gradient = True
+
+ if self.bbox_loss is not None:
+ reg_delta = self.bbox_transform(reg_delta)
+ reg_target = self.bbox_transform(reg_target)
+ loss_bbox_reg = self.bbox_loss(
+ reg_delta, reg_target).sum() / tgt_labels.shape[0]
+ loss_bbox_reg *= self.num_classes
+ else:
+ loss_bbox_reg = paddle.abs(reg_delta - reg_target).sum(
+ ) / tgt_labels.shape[0]
+
+ loss_bbox[reg_name] = loss_bbox_reg
+
+ return loss_bbox
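+
+
+    # In words (editor's note): classification is softmax cross-entropy over
+    # num_classes + 1 slots (the extra slot is background), and regression is
+    # an L1-style loss over foreground RoIs only, normalized by the total
+    # number of sampled RoIs (tgt_labels.shape[0]).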
+
+ def bbox_transform(self, deltas, weights=[0.1, 0.1, 0.2, 0.2]):
+ wx, wy, ww, wh = weights
+
+ deltas = paddle.reshape(deltas, shape=(0, -1, 4))
+
+ dx = paddle.slice(deltas, axes=[2], starts=[0], ends=[1]) * wx
+ dy = paddle.slice(deltas, axes=[2], starts=[1], ends=[2]) * wy
+ dw = paddle.slice(deltas, axes=[2], starts=[2], ends=[3]) * ww
+ dh = paddle.slice(deltas, axes=[2], starts=[3], ends=[4]) * wh
+
+ dw = paddle.clip(dw, -1.e10, np.log(1000. / 16))
+ dh = paddle.clip(dh, -1.e10, np.log(1000. / 16))
+
+ pred_ctr_x = dx
+ pred_ctr_y = dy
+ pred_w = paddle.exp(dw)
+ pred_h = paddle.exp(dh)
+
+ x1 = pred_ctr_x - 0.5 * pred_w
+ y1 = pred_ctr_y - 0.5 * pred_h
+ x2 = pred_ctr_x + 0.5 * pred_w
+ y2 = pred_ctr_y + 0.5 * pred_h
+
+ x1 = paddle.reshape(x1, shape=(-1, ))
+ y1 = paddle.reshape(y1, shape=(-1, ))
+ x2 = paddle.reshape(x2, shape=(-1, ))
+ y2 = paddle.reshape(y2, shape=(-1, ))
+
+ return paddle.concat([x1, y1, x2, y2])
+
+ def get_prediction(self, score, delta):
+ bbox_prob = F.softmax(score)
+ return delta, bbox_prob
+
+ def get_head(self, ):
+ return self.head
+
+ def get_assigned_targets(self, ):
+ return self.assigned_targets
+
+ def get_assigned_rois(self, ):
+ return self.assigned_rois
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/cascade_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/cascade_head.py
new file mode 100644
index 000000000..935642bd6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/cascade_head.py
@@ -0,0 +1,283 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal
+
+from ppdet.core.workspace import register
+from .bbox_head import BBoxHead, TwoFCHead, XConvNormHead
+from .roi_extractor import RoIAlign
+from ..shape_spec import ShapeSpec
+from ..bbox_utils import delta2bbox, clip_bbox, nonempty_bbox
+
+__all__ = ['CascadeTwoFCHead', 'CascadeXConvNormHead', 'CascadeHead']
+
+
+@register
+class CascadeTwoFCHead(nn.Layer):
+ __shared__ = ['num_cascade_stage']
+ """
+    Cascade RCNN bbox head with two FC layers to extract features
+
+ Args:
+ in_channel (int): Input channel which can be derived by from_config
+ out_channel (int): Output channel
+ resolution (int): Resolution of input feature map, default 7
+ num_cascade_stage (int): The number of cascade stage, default 3
+ """
+
+ def __init__(self,
+ in_channel=256,
+ out_channel=1024,
+ resolution=7,
+ num_cascade_stage=3):
+ super(CascadeTwoFCHead, self).__init__()
+
+ self.in_channel = in_channel
+ self.out_channel = out_channel
+
+ self.head_list = []
+ for stage in range(num_cascade_stage):
+ head_per_stage = self.add_sublayer(
+ str(stage), TwoFCHead(in_channel, out_channel, resolution))
+ self.head_list.append(head_per_stage)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ s = input_shape
+ s = s[0] if isinstance(s, (list, tuple)) else s
+ return {'in_channel': s.channels}
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, )]
+
+ def forward(self, rois_feat, stage=0):
+ out = self.head_list[stage](rois_feat)
+ return out
+
+
+@register
+class CascadeXConvNormHead(nn.Layer):
+ __shared__ = ['norm_type', 'freeze_norm', 'num_cascade_stage']
+ """
+    Cascade RCNN bbox head with several convolution layers
+
+ Args:
+ in_channel (int): Input channels which can be derived by from_config
+ num_convs (int): The number of conv layers
+ conv_dim (int): The number of channels for the conv layers
+ out_channel (int): Output channels
+ resolution (int): Resolution of input feature map
+ norm_type (string): Norm type, bn, gn, sync_bn are available,
+ default `gn`
+ freeze_norm (bool): Whether to freeze the norm
+ num_cascade_stage (int): The number of cascade stage, default 3
+ """
+
+ def __init__(self,
+ in_channel=256,
+ num_convs=4,
+ conv_dim=256,
+ out_channel=1024,
+ resolution=7,
+ norm_type='gn',
+ freeze_norm=False,
+ num_cascade_stage=3):
+ super(CascadeXConvNormHead, self).__init__()
+ self.in_channel = in_channel
+ self.out_channel = out_channel
+
+ self.head_list = []
+ for stage in range(num_cascade_stage):
+ head_per_stage = self.add_sublayer(
+ str(stage),
+ XConvNormHead(
+ in_channel,
+ num_convs,
+ conv_dim,
+ out_channel,
+ resolution,
+ norm_type,
+ freeze_norm,
+ stage_name='stage{}_'.format(stage)))
+ self.head_list.append(head_per_stage)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ s = input_shape
+ s = s[0] if isinstance(s, (list, tuple)) else s
+ return {'in_channel': s.channels}
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, )]
+
+ def forward(self, rois_feat, stage=0):
+ out = self.head_list[stage](rois_feat)
+ return out
+
+
+@register
+class CascadeHead(BBoxHead):
+ __shared__ = ['num_classes', 'num_cascade_stages']
+ __inject__ = ['bbox_assigner', 'bbox_loss']
+ """
+ Cascade RCNN bbox head
+
+ Args:
+ head (nn.Layer): Extract feature in bbox head
+ in_channel (int): Input channel after RoI extractor
+ roi_extractor (object): The module of RoI Extractor
+ bbox_assigner (object): The module of Box Assigner, label and sample the
+ box.
+ num_classes (int): The number of classes
+ bbox_weight (List[List[float]]): The weight to get the decode box and the
+ length of weight is the number of cascade stage
+        num_cascade_stages (int): The number of stages used to refine the box
+ """
+
+ def __init__(self,
+ head,
+ in_channel,
+ roi_extractor=RoIAlign().__dict__,
+ bbox_assigner='BboxAssigner',
+ num_classes=80,
+ bbox_weight=[[10., 10., 5., 5.], [20.0, 20.0, 10.0, 10.0],
+ [30.0, 30.0, 15.0, 15.0]],
+ num_cascade_stages=3,
+ bbox_loss=None):
+ nn.Layer.__init__(self, )
+ self.head = head
+ self.roi_extractor = roi_extractor
+ if isinstance(roi_extractor, dict):
+ self.roi_extractor = RoIAlign(**roi_extractor)
+ self.bbox_assigner = bbox_assigner
+
+ self.num_classes = num_classes
+ self.bbox_weight = bbox_weight
+ self.num_cascade_stages = num_cascade_stages
+ self.bbox_loss = bbox_loss
+
+ self.bbox_score_list = []
+ self.bbox_delta_list = []
+ for i in range(num_cascade_stages):
+ score_name = 'bbox_score_stage{}'.format(i)
+ delta_name = 'bbox_delta_stage{}'.format(i)
+ bbox_score = self.add_sublayer(
+ score_name,
+ nn.Linear(
+ in_channel,
+ self.num_classes + 1,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0.0, std=0.01))))
+
+ bbox_delta = self.add_sublayer(
+ delta_name,
+ nn.Linear(
+ in_channel,
+ 4,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0.0, std=0.001))))
+ self.bbox_score_list.append(bbox_score)
+ self.bbox_delta_list.append(bbox_delta)
+ self.assigned_label = None
+ self.assigned_rois = None
+
+ def forward(self, body_feats=None, rois=None, rois_num=None, inputs=None):
+ """
+ body_feats (list[Tensor]): Feature maps from backbone
+ rois (Tensor): RoIs generated from RPN module
+ rois_num (Tensor): The number of RoIs in each image
+ inputs (dict{Tensor}): The ground-truth of image
+ """
+ targets = []
+ if self.training:
+ rois, rois_num, targets = self.bbox_assigner(rois, rois_num, inputs)
+ targets_list = [targets]
+ self.assigned_rois = (rois, rois_num)
+ self.assigned_targets = targets
+
+ pred_bbox = None
+ head_out_list = []
+ for i in range(self.num_cascade_stages):
+ if i > 0:
+ rois, rois_num = self._get_rois_from_boxes(pred_bbox,
+ inputs['im_shape'])
+ if self.training:
+ rois, rois_num, targets = self.bbox_assigner(
+ rois, rois_num, inputs, i, is_cascade=True)
+ targets_list.append(targets)
+
+ rois_feat = self.roi_extractor(body_feats, rois, rois_num)
+ bbox_feat = self.head(rois_feat, i)
+ scores = self.bbox_score_list[i](bbox_feat)
+ deltas = self.bbox_delta_list[i](bbox_feat)
+ head_out_list.append([scores, deltas, rois])
+ pred_bbox = self._get_pred_bbox(deltas, rois, self.bbox_weight[i])
+
+ if self.training:
+ loss = {}
+ for stage, value in enumerate(zip(head_out_list, targets_list)):
+ (scores, deltas, rois), targets = value
+ loss_stage = self.get_loss(scores, deltas, targets, rois,
+ self.bbox_weight[stage])
+ for k, v in loss_stage.items():
+ loss[k + "_stage{}".format(
+ stage)] = v / self.num_cascade_stages
+
+ return loss, bbox_feat
+ else:
+ scores, deltas, self.refined_rois = self.get_prediction(
+ head_out_list)
+ return (deltas, scores), self.head
+
+ def _get_rois_from_boxes(self, boxes, im_shape):
+ rois = []
+ for i, boxes_per_image in enumerate(boxes):
+ clip_box = clip_bbox(boxes_per_image, im_shape[i])
+ if self.training:
+ keep = nonempty_bbox(clip_box)
+ if keep.shape[0] == 0:
+ keep = paddle.zeros([1], dtype='int32')
+ clip_box = paddle.gather(clip_box, keep)
+ rois.append(clip_box)
+ rois_num = paddle.concat([paddle.shape(r)[0] for r in rois])
+ return rois, rois_num
+
+ def _get_pred_bbox(self, deltas, proposals, weights):
+ pred_proposals = paddle.concat(proposals) if len(
+ proposals) > 1 else proposals[0]
+ pred_bbox = delta2bbox(deltas, pred_proposals, weights)
+ pred_bbox = paddle.reshape(pred_bbox, [-1, deltas.shape[-1]])
+ num_prop = []
+ for p in proposals:
+ num_prop.append(p.shape[0])
+ return pred_bbox.split(num_prop)
+
+ def get_prediction(self, head_out_list):
+ """
+ head_out_list(List[Tensor]): scores, deltas, rois
+ """
+ pred_list = []
+ scores_list = [F.softmax(head[0]) for head in head_out_list]
+ scores = paddle.add_n(scores_list) / self.num_cascade_stages
+ # Get deltas and rois from the last stage
+ _, deltas, rois = head_out_list[-1]
+ return scores, deltas, rois
+
+ def get_refined_rois(self, ):
+ return self.refined_rois
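+
+
+# --- Illustrative sketch (editorial note, not part of the upstream file) ---
+# The cascade loop in CascadeHead.forward, reduced to pseudocode under the
+# default three-stage setup (rpn_rois, decode and stage_heads are placeholder
+# names used only in this sketch):
+#
+#   pred_bbox = None
+#   for i in range(3):                       # num_cascade_stages
+#       rois = rpn_rois if i == 0 else decode(pred_bbox)
+#       feat = roi_extractor(body_feats, rois)
+#       scores, deltas = stage_heads[i](feat)
+#       pred_bbox = delta2bbox(deltas, rois, bbox_weight[i])
+#
+# bbox_weight grows across stages ([10, ...] -> [20, ...] -> [30, ...]), so
+# the same raw delta moves a box less in later stages: each stage makes a
+# finer correction to already-refined proposals.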
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/centernet_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/centernet_head.py
new file mode 100644
index 000000000..ce8b5c15d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/centernet_head.py
@@ -0,0 +1,291 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Constant, Uniform
+from ppdet.core.workspace import register
+from ppdet.modeling.losses import CTFocalLoss, GIoULoss
+
+
+class ConvLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ kernel_size,
+ stride=1,
+ padding=0,
+ dilation=1,
+ groups=1,
+ bias=False):
+ super(ConvLayer, self).__init__()
+ bias_attr = False
+ fan_in = ch_in * kernel_size**2
+ bound = 1 / math.sqrt(fan_in)
+ param_attr = paddle.ParamAttr(initializer=Uniform(-bound, bound))
+ if bias:
+ bias_attr = paddle.ParamAttr(initializer=Constant(0.))
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ dilation=dilation,
+ groups=groups,
+ weight_attr=param_attr,
+ bias_attr=bias_attr)
+
+ def forward(self, inputs):
+ out = self.conv(inputs)
+ return out
+
+
+@register
+class CenterNetHead(nn.Layer):
+ """
+ Args:
+ in_channels (int): the channel number of input to CenterNetHead.
+ num_classes (int): the number of classes, 80 (COCO dataset) by default.
+ head_planes (int): the channel number in all head, 256 by default.
+ heatmap_weight (float): the weight of heatmap loss, 1 by default.
+ regress_ltrb (bool): whether to regress left/top/right/bottom or
+ width/height for a box, true by default
+ size_weight (float): the weight of box size loss, 0.1 by default.
+        size_loss (str): the type of size regression loss, 'L1' by default.
+ offset_weight (float): the weight of center offset loss, 1 by default.
+ iou_weight (float): the weight of iou head loss, 0 by default.
+ """
+
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ in_channels,
+ num_classes=80,
+ head_planes=256,
+ heatmap_weight=1,
+ regress_ltrb=True,
+ size_weight=0.1,
+ size_loss='L1',
+ offset_weight=1,
+ iou_weight=0):
+ super(CenterNetHead, self).__init__()
+ self.regress_ltrb = regress_ltrb
+ self.weights = {
+ 'heatmap': heatmap_weight,
+ 'size': size_weight,
+ 'offset': offset_weight,
+ 'iou': iou_weight
+ }
+
+ # heatmap head
+ self.heatmap = nn.Sequential(
+ ConvLayer(
+ in_channels, head_planes, kernel_size=3, padding=1, bias=True),
+ nn.ReLU(),
+ ConvLayer(
+ head_planes,
+ num_classes,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=True))
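+        # Editorial comment (assumed rationale): -2.19 makes the initial
+        # sigmoid(heatmap) output roughly 0.1, since -log((1-0.1)/0.1) ~ -2.197,
+        # so the focal-style heatmap loss starts from a low positive prior.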
+ with paddle.no_grad():
+ self.heatmap[2].conv.bias[:] = -2.19
+
+ # size(ltrb or wh) head
+ self.size = nn.Sequential(
+ ConvLayer(
+ in_channels, head_planes, kernel_size=3, padding=1, bias=True),
+ nn.ReLU(),
+ ConvLayer(
+ head_planes,
+ 4 if regress_ltrb else 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=True))
+ self.size_loss = size_loss
+
+ # offset head
+ self.offset = nn.Sequential(
+ ConvLayer(
+ in_channels, head_planes, kernel_size=3, padding=1, bias=True),
+ nn.ReLU(),
+ ConvLayer(
+ head_planes, 2, kernel_size=1, stride=1, padding=0, bias=True))
+
+        # iou head (optional)
+ if iou_weight > 0:
+ self.iou = nn.Sequential(
+ ConvLayer(
+ in_channels,
+ head_planes,
+ kernel_size=3,
+ padding=1,
+ bias=True),
+ nn.ReLU(),
+ ConvLayer(
+ head_planes,
+ 4 if regress_ltrb else 2,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias=True))
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ if isinstance(input_shape, (list, tuple)):
+ input_shape = input_shape[0]
+ return {'in_channels': input_shape.channels}
+
+ def forward(self, feat, inputs):
+ heatmap = self.heatmap(feat)
+ size = self.size(feat)
+ offset = self.offset(feat)
+        # the optional iou head only exists when iou_weight > 0 (see __init__)
+        iou = self.iou(feat) if hasattr(self, 'iou') else None
+
+ if self.training:
+ loss = self.get_loss(
+ inputs, self.weights, heatmap, size, offset, iou=iou)
+ return loss
+ else:
+ heatmap = F.sigmoid(heatmap)
+ head_outs = {'heatmap': heatmap, 'size': size, 'offset': offset}
+ if iou is not None:
+ head_outs.update({'iou': iou})
+ return head_outs
+
+ def get_loss(self, inputs, weights, heatmap, size, offset, iou=None):
+ # heatmap head loss: CTFocalLoss
+ heatmap_target = inputs['heatmap']
+ heatmap = paddle.clip(F.sigmoid(heatmap), 1e-4, 1 - 1e-4)
+ ctfocal_loss = CTFocalLoss()
+ heatmap_loss = ctfocal_loss(heatmap, heatmap_target)
+
+ # size head loss: L1 loss or GIoU loss
+ index = inputs['index']
+ mask = inputs['index_mask']
+ size = paddle.transpose(size, perm=[0, 2, 3, 1])
+ size_n, size_h, size_w, size_c = size.shape
+ size = paddle.reshape(size, shape=[size_n, -1, size_c])
+ index = paddle.unsqueeze(index, 2)
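+        # Editorial comment: prepend each image's batch id to the flattened
+        # spatial index so paddle.gather_nd can select the positive positions
+        # per image; the result is an index of shape [bs, max_objs, 2].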
+ batch_inds = list()
+ for i in range(size_n):
+ batch_ind = paddle.full(
+ shape=[1, index.shape[1], 1], fill_value=i, dtype='int64')
+ batch_inds.append(batch_ind)
+ batch_inds = paddle.concat(batch_inds, axis=0)
+ index = paddle.concat(x=[batch_inds, index], axis=2)
+ pos_size = paddle.gather_nd(size, index=index)
+ mask = paddle.unsqueeze(mask, axis=2)
+ size_mask = paddle.expand_as(mask, pos_size)
+ size_mask = paddle.cast(size_mask, dtype=pos_size.dtype)
+ pos_num = size_mask.sum()
+ size_mask.stop_gradient = True
+ if self.size_loss == 'L1':
+ if self.regress_ltrb:
+ size_target = inputs['size']
+ # shape: [bs, max_per_img, 4]
+ else:
+ if inputs['size'].shape[-1] == 2:
+ # inputs['size'] is wh, and regress as wh
+ # shape: [bs, max_per_img, 2]
+ size_target = inputs['size']
+ else:
+ # inputs['size'] is ltrb, but regress as wh
+ # shape: [bs, max_per_img, 4]
+ size_target = inputs['size'][:, :, 0:2] + inputs['size'][:, :, 2:]
+
+ size_target.stop_gradient = True
+ size_loss = F.l1_loss(
+ pos_size * size_mask, size_target * size_mask, reduction='sum')
+ size_loss = size_loss / (pos_num + 1e-4)
+ elif self.size_loss == 'giou':
+ size_target = inputs['bbox_xys']
+ size_target.stop_gradient = True
+ centers_x = (size_target[:, :, 0:1] + size_target[:, :, 2:3]) / 2.0
+ centers_y = (size_target[:, :, 1:2] + size_target[:, :, 3:4]) / 2.0
+ x1 = centers_x - pos_size[:, :, 0:1]
+ y1 = centers_y - pos_size[:, :, 1:2]
+ x2 = centers_x + pos_size[:, :, 2:3]
+ y2 = centers_y + pos_size[:, :, 3:4]
+ pred_boxes = paddle.concat([x1, y1, x2, y2], axis=-1)
+ giou_loss = GIoULoss(reduction='sum')
+ size_loss = giou_loss(
+ pred_boxes * size_mask,
+ size_target * size_mask,
+ iou_weight=size_mask,
+ loc_reweight=None)
+ size_loss = size_loss / (pos_num + 1e-4)
+
+ # offset head loss: L1 loss
+ offset_target = inputs['offset']
+ offset = paddle.transpose(offset, perm=[0, 2, 3, 1])
+ offset_n, offset_h, offset_w, offset_c = offset.shape
+ offset = paddle.reshape(offset, shape=[offset_n, -1, offset_c])
+ pos_offset = paddle.gather_nd(offset, index=index)
+ offset_mask = paddle.expand_as(mask, pos_offset)
+ offset_mask = paddle.cast(offset_mask, dtype=pos_offset.dtype)
+ pos_num = offset_mask.sum()
+ offset_mask.stop_gradient = True
+ offset_target.stop_gradient = True
+ offset_loss = F.l1_loss(
+ pos_offset * offset_mask,
+ offset_target * offset_mask,
+ reduction='sum')
+ offset_loss = offset_loss / (pos_num + 1e-4)
+
+ # iou head loss: GIoU loss
+ if iou is not None:
+ iou = paddle.transpose(iou, perm=[0, 2, 3, 1])
+ iou_n, iou_h, iou_w, iou_c = iou.shape
+ iou = paddle.reshape(iou, shape=[iou_n, -1, iou_c])
+ pos_iou = paddle.gather_nd(iou, index=index)
+ iou_mask = paddle.expand_as(mask, pos_iou)
+ iou_mask = paddle.cast(iou_mask, dtype=pos_iou.dtype)
+ pos_num = iou_mask.sum()
+ iou_mask.stop_gradient = True
+ gt_bbox_xys = inputs['bbox_xys']
+ gt_bbox_xys.stop_gradient = True
+ centers_x = (gt_bbox_xys[:, :, 0:1] + gt_bbox_xys[:, :, 2:3]) / 2.0
+ centers_y = (gt_bbox_xys[:, :, 1:2] + gt_bbox_xys[:, :, 3:4]) / 2.0
+ x1 = centers_x - pos_size[:, :, 0:1]
+ y1 = centers_y - pos_size[:, :, 1:2]
+ x2 = centers_x + pos_size[:, :, 2:3]
+ y2 = centers_y + pos_size[:, :, 3:4]
+ pred_boxes = paddle.concat([x1, y1, x2, y2], axis=-1)
+ giou_loss = GIoULoss(reduction='sum')
+ iou_loss = giou_loss(
+ pred_boxes * iou_mask,
+ gt_bbox_xys * iou_mask,
+ iou_weight=iou_mask,
+ loc_reweight=None)
+ iou_loss = iou_loss / (pos_num + 1e-4)
+
+ losses = {
+ 'heatmap_loss': heatmap_loss,
+ 'size_loss': size_loss,
+ 'offset_loss': offset_loss,
+ }
+ det_loss = weights['heatmap'] * heatmap_loss + weights[
+ 'size'] * size_loss + weights['offset'] * offset_loss
+
+ if iou is not None:
+ losses.update({'iou_loss': iou_loss})
+ det_loss = det_loss + weights['iou'] * iou_loss
+ losses.update({'det_loss': det_loss})
+ return losses
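+
+
+# --- Illustrative sketch (editorial note, not part of the upstream file) ---
+# With the default weights (heatmap=1, size=0.1, offset=1, iou=0) the total
+# returned above reduces to:
+#
+#   det_loss = 1.0 * heatmap_loss + 0.1 * size_loss + 1.0 * offset_loss
+#
+# i.e. the size branch is deliberately down-weighted, and the optional IoU
+# branch contributes only when iou_weight > 0 (so that self.iou exists).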
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/detr_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/detr_head.py
new file mode 100644
index 000000000..6ca3499b9
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/detr_head.py
@@ -0,0 +1,364 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+import pycocotools.mask as mask_util
+from ..initializer import linear_init_, constant_
+from ..transformers.utils import inverse_sigmoid
+
+__all__ = ['DETRHead', 'DeformableDETRHead']
+
+
+class MLP(nn.Layer):
+ """This code is based on
+ https://github.com/facebookresearch/detr/blob/main/models/detr.py
+ """
+
+ def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
+ super().__init__()
+ self.num_layers = num_layers
+ h = [hidden_dim] * (num_layers - 1)
+ self.layers = nn.LayerList(
+ nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for l in self.layers:
+ linear_init_(l)
+
+ def forward(self, x):
+ for i, layer in enumerate(self.layers):
+ x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
+ return x
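+
+    # Illustrative usage (editorial note, not part of the upstream file):
+    # with num_layers=3 this builds Linear -> ReLU -> Linear -> ReLU -> Linear,
+    # e.g. the 3-layer box head instantiated below in DETRHead:
+    #
+    #   bbox_head = MLP(input_dim=256, hidden_dim=256, output_dim=4, num_layers=3)
+    #   boxes = F.sigmoid(bbox_head(paddle.rand([2, 100, 256])))  # [2, 100, 4]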
+
+
+class MultiHeadAttentionMap(nn.Layer):
+ """This code is based on
+ https://github.com/facebookresearch/detr/blob/main/models/segmentation.py
+
+ This is a 2D attention module, which only returns the attention softmax (no multiplication by value)
+ """
+
+ def __init__(self, query_dim, hidden_dim, num_heads, dropout=0.0,
+ bias=True):
+ super().__init__()
+ self.num_heads = num_heads
+ self.hidden_dim = hidden_dim
+ self.dropout = nn.Dropout(dropout)
+
+ weight_attr = paddle.ParamAttr(
+ initializer=paddle.nn.initializer.XavierUniform())
+ bias_attr = paddle.framework.ParamAttr(
+ initializer=paddle.nn.initializer.Constant()) if bias else False
+
+ self.q_proj = nn.Linear(query_dim, hidden_dim, weight_attr, bias_attr)
+ self.k_proj = nn.Conv2D(
+ query_dim,
+ hidden_dim,
+ 1,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr)
+
+ self.normalize_fact = float(hidden_dim / self.num_heads)**-0.5
+
+ def forward(self, q, k, mask=None):
+ q = self.q_proj(q)
+ k = self.k_proj(k)
+ bs, num_queries, n, c, h, w = q.shape[0], q.shape[1], self.num_heads,\
+ self.hidden_dim // self.num_heads, k.shape[-2], k.shape[-1]
+ qh = q.reshape([bs, num_queries, n, c])
+ kh = k.reshape([bs, n, c, h, w])
+ # weights = paddle.einsum("bqnc,bnchw->bqnhw", qh * self.normalize_fact, kh)
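+        # Editorial comment: the einsum above (kept for reference) is expanded
+        # into a batched matmul below by folding the head dim into the batch:
+        # [bs*n, q, c] x [bs*n, c, h*w] -> [bs*n, q, h*w], then reshaped back.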
+ qh = qh.transpose([0, 2, 1, 3]).reshape([-1, num_queries, c])
+ kh = kh.reshape([-1, c, h * w])
+ weights = paddle.bmm(qh * self.normalize_fact, kh).reshape(
+ [bs, n, num_queries, h, w]).transpose([0, 2, 1, 3, 4])
+
+ if mask is not None:
+ weights += mask
+        # fix a potential bug: https://github.com/facebookresearch/detr/issues/247
+ weights = F.softmax(weights.flatten(3), axis=-1).reshape(weights.shape)
+ weights = self.dropout(weights)
+ return weights
+
+
+class MaskHeadFPNConv(nn.Layer):
+ """This code is based on
+ https://github.com/facebookresearch/detr/blob/main/models/segmentation.py
+
+ Simple convolutional head, using group norm.
+    Upsampling is done using an FPN approach.
+ """
+
+ def __init__(self, input_dim, fpn_dims, context_dim, num_groups=8):
+ super().__init__()
+
+ inter_dims = [input_dim,
+ ] + [context_dim // (2**i) for i in range(1, 5)]
+ weight_attr = paddle.ParamAttr(
+ initializer=paddle.nn.initializer.KaimingUniform())
+ bias_attr = paddle.framework.ParamAttr(
+ initializer=paddle.nn.initializer.Constant())
+
+ self.conv0 = self._make_layers(input_dim, input_dim, 3, num_groups,
+ weight_attr, bias_attr)
+ self.conv_inter = nn.LayerList()
+ for in_dims, out_dims in zip(inter_dims[:-1], inter_dims[1:]):
+ self.conv_inter.append(
+ self._make_layers(in_dims, out_dims, 3, num_groups, weight_attr,
+ bias_attr))
+
+ self.conv_out = nn.Conv2D(
+ inter_dims[-1],
+ 1,
+ 3,
+ padding=1,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr)
+
+ self.adapter = nn.LayerList()
+ for i in range(len(fpn_dims)):
+ self.adapter.append(
+ nn.Conv2D(
+ fpn_dims[i],
+ inter_dims[i + 1],
+ 1,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr))
+
+ def _make_layers(self,
+ in_dims,
+ out_dims,
+ kernel_size,
+ num_groups,
+ weight_attr=None,
+ bias_attr=None):
+ return nn.Sequential(
+ nn.Conv2D(
+ in_dims,
+ out_dims,
+ kernel_size,
+ padding=kernel_size // 2,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr),
+ nn.GroupNorm(num_groups, out_dims),
+ nn.ReLU())
+
+ def forward(self, x, bbox_attention_map, fpns):
+ x = paddle.concat([
+ x.tile([bbox_attention_map.shape[1], 1, 1, 1]),
+ bbox_attention_map.flatten(0, 1)
+ ], 1)
+ x = self.conv0(x)
+ for inter_layer, adapter_layer, feat in zip(self.conv_inter[:-1],
+ self.adapter, fpns):
+ feat = adapter_layer(feat).tile(
+ [bbox_attention_map.shape[1], 1, 1, 1])
+ x = inter_layer(x)
+ x = feat + F.interpolate(x, size=feat.shape[-2:])
+
+ x = self.conv_inter[-1](x)
+ x = self.conv_out(x)
+ return x
+
+
+@register
+class DETRHead(nn.Layer):
+ __shared__ = ['num_classes', 'hidden_dim', 'use_focal_loss']
+ __inject__ = ['loss']
+
+ def __init__(self,
+ num_classes=80,
+ hidden_dim=256,
+ nhead=8,
+ num_mlp_layers=3,
+ loss='DETRLoss',
+ fpn_dims=[1024, 512, 256],
+ with_mask_head=False,
+ use_focal_loss=False):
+ super(DETRHead, self).__init__()
+ # add background class
+ self.num_classes = num_classes if use_focal_loss else num_classes + 1
+ self.hidden_dim = hidden_dim
+ self.loss = loss
+ self.with_mask_head = with_mask_head
+ self.use_focal_loss = use_focal_loss
+
+ self.score_head = nn.Linear(hidden_dim, self.num_classes)
+ self.bbox_head = MLP(hidden_dim,
+ hidden_dim,
+ output_dim=4,
+ num_layers=num_mlp_layers)
+ if self.with_mask_head:
+ self.bbox_attention = MultiHeadAttentionMap(hidden_dim, hidden_dim,
+ nhead)
+ self.mask_head = MaskHeadFPNConv(hidden_dim + nhead, fpn_dims,
+ hidden_dim)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.score_head)
+
+ @classmethod
+ def from_config(cls, cfg, hidden_dim, nhead, input_shape):
+
+ return {
+ 'hidden_dim': hidden_dim,
+ 'nhead': nhead,
+ 'fpn_dims': [i.channels for i in input_shape[::-1]][1:]
+ }
+
+ @staticmethod
+ def get_gt_mask_from_polygons(gt_poly, pad_mask):
+ out_gt_mask = []
+ for polygons, padding in zip(gt_poly, pad_mask):
+ height, width = int(padding[:, 0].sum()), int(padding[0, :].sum())
+ masks = []
+ for obj_poly in polygons:
+ rles = mask_util.frPyObjects(obj_poly, height, width)
+ rle = mask_util.merge(rles)
+ masks.append(
+ paddle.to_tensor(mask_util.decode(rle)).astype('float32'))
+ masks = paddle.stack(masks)
+ masks_pad = paddle.zeros(
+ [masks.shape[0], pad_mask.shape[1], pad_mask.shape[2]])
+ masks_pad[:, :height, :width] = masks
+ out_gt_mask.append(masks_pad)
+ return out_gt_mask
+
+ def forward(self, out_transformer, body_feats, inputs=None):
+ r"""
+ Args:
+ out_transformer (Tuple): (feats: [num_levels, batch_size,
+ num_queries, hidden_dim],
+ memory: [batch_size, hidden_dim, h, w],
+ src_proj: [batch_size, h*w, hidden_dim],
+ src_mask: [batch_size, 1, 1, h, w])
+ body_feats (List(Tensor)): list[[B, C, H, W]]
+ inputs (dict): dict(inputs)
+ """
+ feats, memory, src_proj, src_mask = out_transformer
+ outputs_logit = self.score_head(feats)
+ outputs_bbox = F.sigmoid(self.bbox_head(feats))
+ outputs_seg = None
+ if self.with_mask_head:
+ bbox_attention_map = self.bbox_attention(feats[-1], memory,
+ src_mask)
+ fpn_feats = [a for a in body_feats[::-1]][1:]
+ outputs_seg = self.mask_head(src_proj, bbox_attention_map,
+ fpn_feats)
+ outputs_seg = outputs_seg.reshape([
+ feats.shape[1], feats.shape[2], outputs_seg.shape[-2],
+ outputs_seg.shape[-1]
+ ])
+
+ if self.training:
+ assert inputs is not None
+ assert 'gt_bbox' in inputs and 'gt_class' in inputs
+ gt_mask = self.get_gt_mask_from_polygons(
+ inputs['gt_poly'],
+ inputs['pad_mask']) if 'gt_poly' in inputs else None
+ return self.loss(
+ outputs_bbox,
+ outputs_logit,
+ inputs['gt_bbox'],
+ inputs['gt_class'],
+ masks=outputs_seg,
+ gt_mask=gt_mask)
+ else:
+ return (outputs_bbox[-1], outputs_logit[-1], outputs_seg)
+
+
+@register
+class DeformableDETRHead(nn.Layer):
+ __shared__ = ['num_classes', 'hidden_dim']
+ __inject__ = ['loss']
+
+ def __init__(self,
+ num_classes=80,
+ hidden_dim=512,
+ nhead=8,
+ num_mlp_layers=3,
+ loss='DETRLoss'):
+ super(DeformableDETRHead, self).__init__()
+ self.num_classes = num_classes
+ self.hidden_dim = hidden_dim
+ self.nhead = nhead
+ self.loss = loss
+
+ self.score_head = nn.Linear(hidden_dim, self.num_classes)
+ self.bbox_head = MLP(hidden_dim,
+ hidden_dim,
+ output_dim=4,
+ num_layers=num_mlp_layers)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.score_head)
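+        # Editorial comment (assumed rationale): -4.595 = -log((1 - 0.01) / 0.01),
+        # the usual focal-loss bias init for a class prior probability of 0.01.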
+ constant_(self.score_head.bias, -4.595)
+ constant_(self.bbox_head.layers[-1].weight)
+
+ with paddle.no_grad():
+ bias = paddle.zeros_like(self.bbox_head.layers[-1].bias)
+ bias[2:] = -2.0
+ self.bbox_head.layers[-1].bias.set_value(bias)
+
+ @classmethod
+ def from_config(cls, cfg, hidden_dim, nhead, input_shape):
+ return {'hidden_dim': hidden_dim, 'nhead': nhead}
+
+ def forward(self, out_transformer, body_feats, inputs=None):
+ r"""
+ Args:
+ out_transformer (Tuple): (feats: [num_levels, batch_size,
+ num_queries, hidden_dim],
+ memory: [batch_size,
+ \sum_{l=0}^{L-1} H_l \cdot W_l, hidden_dim],
+ reference_points: [batch_size, num_queries, 2])
+ body_feats (List(Tensor)): list[[B, C, H, W]]
+ inputs (dict): dict(inputs)
+ """
+ feats, memory, reference_points = out_transformer
+ reference_points = inverse_sigmoid(reference_points.unsqueeze(0))
+ outputs_bbox = self.bbox_head(feats)
+
+ # It's equivalent to "outputs_bbox[:, :, :, :2] += reference_points",
+ # but the gradient is wrong in paddle.
+ outputs_bbox = paddle.concat(
+ [
+ outputs_bbox[:, :, :, :2] + reference_points,
+ outputs_bbox[:, :, :, 2:]
+ ],
+ axis=-1)
+
+ outputs_bbox = F.sigmoid(outputs_bbox)
+ outputs_logit = self.score_head(feats)
+
+ if self.training:
+ assert inputs is not None
+ assert 'gt_bbox' in inputs and 'gt_class' in inputs
+
+ return self.loss(outputs_bbox, outputs_logit, inputs['gt_bbox'],
+ inputs['gt_class'])
+ else:
+ return (outputs_bbox[-1], outputs_logit[-1], None)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/face_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/face_head.py
new file mode 100644
index 000000000..bb51f2eb9
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/face_head.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register
+from ..layers import AnchorGeneratorSSD
+
+
+@register
+class FaceHead(nn.Layer):
+ """
+ Head block for Face detection network
+
+ Args:
+ num_classes (int): Number of output classes.
+ in_channels (int): Number of input channels.
+        anchor_generator (object): instance of an anchor generator.
+ kernel_size (int): kernel size of Conv2D in FaceHead.
+ padding (int): padding of Conv2D in FaceHead.
+        conv_decay (float): weight decay for conv layer weights.
+ loss (object): loss of face detection model.
+ """
+ __shared__ = ['num_classes']
+ __inject__ = ['anchor_generator', 'loss']
+
+ def __init__(self,
+ num_classes=80,
+ in_channels=[96, 96],
+ anchor_generator=AnchorGeneratorSSD().__dict__,
+ kernel_size=3,
+ padding=1,
+ conv_decay=0.,
+ loss='SSDLoss'):
+ super(FaceHead, self).__init__()
+ # add background class
+ self.num_classes = num_classes + 1
+ self.in_channels = in_channels
+ self.anchor_generator = anchor_generator
+ self.loss = loss
+
+ if isinstance(anchor_generator, dict):
+ self.anchor_generator = AnchorGeneratorSSD(**anchor_generator)
+
+ self.num_priors = self.anchor_generator.num_priors
+ self.box_convs = []
+ self.score_convs = []
+ for i, num_prior in enumerate(self.num_priors):
+ box_conv_name = "boxes{}".format(i)
+ box_conv = self.add_sublayer(
+ box_conv_name,
+ nn.Conv2D(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * 4,
+ kernel_size=kernel_size,
+ padding=padding))
+ self.box_convs.append(box_conv)
+
+ score_conv_name = "scores{}".format(i)
+ score_conv = self.add_sublayer(
+ score_conv_name,
+ nn.Conv2D(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * self.num_classes,
+ kernel_size=kernel_size,
+ padding=padding))
+ self.score_convs.append(score_conv)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ def forward(self, feats, image, gt_bbox=None, gt_class=None):
+ box_preds = []
+ cls_scores = []
+ prior_boxes = []
+ for feat, box_conv, score_conv in zip(feats, self.box_convs,
+ self.score_convs):
+ box_pred = box_conv(feat)
+ box_pred = paddle.transpose(box_pred, [0, 2, 3, 1])
+ box_pred = paddle.reshape(box_pred, [0, -1, 4])
+ box_preds.append(box_pred)
+
+ cls_score = score_conv(feat)
+ cls_score = paddle.transpose(cls_score, [0, 2, 3, 1])
+ cls_score = paddle.reshape(cls_score, [0, -1, self.num_classes])
+ cls_scores.append(cls_score)
+
+ prior_boxes = self.anchor_generator(feats, image)
+
+ if self.training:
+ return self.get_loss(box_preds, cls_scores, gt_bbox, gt_class,
+ prior_boxes)
+ else:
+ return (box_preds, cls_scores), prior_boxes
+
+ def get_loss(self, boxes, scores, gt_bbox, gt_class, prior_boxes):
+ return self.loss(boxes, scores, gt_bbox, gt_class, prior_boxes)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/fcos_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/fcos_head.py
new file mode 100644
index 000000000..1d61feed6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/fcos_head.py
@@ -0,0 +1,258 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Normal, Constant
+
+from ppdet.core.workspace import register
+from ppdet.modeling.layers import ConvNormLayer
+
+
+class ScaleReg(nn.Layer):
+ """
+ Parameter for scaling the regression outputs.
+ """
+
+ def __init__(self):
+ super(ScaleReg, self).__init__()
+ self.scale_reg = self.create_parameter(
+ shape=[1],
+ attr=ParamAttr(initializer=Constant(value=1.)),
+ dtype="float32")
+
+ def forward(self, inputs):
+ out = inputs * self.scale_reg
+ return out
+
+
+@register
+class FCOSFeat(nn.Layer):
+ """
+ FCOSFeat of FCOS
+
+ Args:
+ feat_in (int): The channel number of input Tensor.
+ feat_out (int): The channel number of output Tensor.
+ num_convs (int): The convolution number of the FCOSFeat.
+ norm_type (str): Normalization type, 'bn'/'sync_bn'/'gn'.
+ use_dcn (bool): Whether to use dcn in tower or not.
+ """
+
+ def __init__(self,
+ feat_in=256,
+ feat_out=256,
+ num_convs=4,
+ norm_type='bn',
+ use_dcn=False):
+ super(FCOSFeat, self).__init__()
+ self.num_convs = num_convs
+ self.norm_type = norm_type
+ self.cls_subnet_convs = []
+ self.reg_subnet_convs = []
+ for i in range(self.num_convs):
+ in_c = feat_in if i == 0 else feat_out
+
+ cls_conv_name = 'fcos_head_cls_tower_conv_{}'.format(i)
+ cls_conv = self.add_sublayer(
+ cls_conv_name,
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=3,
+ stride=1,
+ norm_type=norm_type,
+ use_dcn=use_dcn,
+ bias_on=True,
+ lr_scale=2.))
+ self.cls_subnet_convs.append(cls_conv)
+
+ reg_conv_name = 'fcos_head_reg_tower_conv_{}'.format(i)
+ reg_conv = self.add_sublayer(
+ reg_conv_name,
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=3,
+ stride=1,
+ norm_type=norm_type,
+ use_dcn=use_dcn,
+ bias_on=True,
+ lr_scale=2.))
+ self.reg_subnet_convs.append(reg_conv)
+
+ def forward(self, fpn_feat):
+ cls_feat = fpn_feat
+ reg_feat = fpn_feat
+ for i in range(self.num_convs):
+ cls_feat = F.relu(self.cls_subnet_convs[i](cls_feat))
+ reg_feat = F.relu(self.reg_subnet_convs[i](reg_feat))
+ return cls_feat, reg_feat
+
+
+@register
+class FCOSHead(nn.Layer):
+ """
+ FCOSHead
+ Args:
+ fcos_feat (object): Instance of 'FCOSFeat'
+ num_classes (int): Number of classes
+ fpn_stride (list): The stride of each FPN Layer
+ prior_prob (float): Used to set the bias init for the class prediction layer
+ fcos_loss (object): Instance of 'FCOSLoss'
+        norm_reg_targets (bool): Normalize the regression targets if True
+        centerness_on_reg (bool): Whether to predict centerness on the
+            regression branch instead of the classification branch
+ """
+ __inject__ = ['fcos_feat', 'fcos_loss']
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ fcos_feat,
+ num_classes=80,
+ fpn_stride=[8, 16, 32, 64, 128],
+ prior_prob=0.01,
+ fcos_loss='FCOSLoss',
+ norm_reg_targets=True,
+ centerness_on_reg=True):
+ super(FCOSHead, self).__init__()
+ self.fcos_feat = fcos_feat
+ self.num_classes = num_classes
+ self.fpn_stride = fpn_stride
+ self.prior_prob = prior_prob
+ self.fcos_loss = fcos_loss
+ self.norm_reg_targets = norm_reg_targets
+ self.centerness_on_reg = centerness_on_reg
+
+ conv_cls_name = "fcos_head_cls"
+ bias_init_value = -math.log((1 - self.prior_prob) / self.prior_prob)
+ self.fcos_head_cls = self.add_sublayer(
+ conv_cls_name,
+ nn.Conv2D(
+ in_channels=256,
+ out_channels=self.num_classes,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(
+ initializer=Constant(value=bias_init_value))))
+
+ conv_reg_name = "fcos_head_reg"
+ self.fcos_head_reg = self.add_sublayer(
+ conv_reg_name,
+ nn.Conv2D(
+ in_channels=256,
+ out_channels=4,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+
+ conv_centerness_name = "fcos_head_centerness"
+ self.fcos_head_centerness = self.add_sublayer(
+ conv_centerness_name,
+ nn.Conv2D(
+ in_channels=256,
+ out_channels=1,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+
+ self.scales_regs = []
+ for i in range(len(self.fpn_stride)):
+ lvl = int(math.log(int(self.fpn_stride[i]), 2))
+ feat_name = 'p{}_feat'.format(lvl)
+ scale_reg = self.add_sublayer(feat_name, ScaleReg())
+ self.scales_regs.append(scale_reg)
+
+ def _compute_locations_by_level(self, fpn_stride, feature):
+ """
+ Compute locations of anchor points of each FPN layer
+ Args:
+ fpn_stride (int): The stride of current FPN feature map
+ feature (Tensor): Tensor of current FPN feature map
+ Return:
+ Anchor points locations of current FPN feature map
+ """
+ shape_fm = paddle.shape(feature)
+ shape_fm.stop_gradient = True
+ h, w = shape_fm[2], shape_fm[3]
+ shift_x = paddle.arange(0, w * fpn_stride, fpn_stride)
+ shift_y = paddle.arange(0, h * fpn_stride, fpn_stride)
+ shift_x = paddle.unsqueeze(shift_x, axis=0)
+ shift_y = paddle.unsqueeze(shift_y, axis=1)
+ shift_x = paddle.expand(shift_x, shape=[h, w])
+ shift_y = paddle.expand(shift_y, shape=[h, w])
+ shift_x.stop_gradient = True
+ shift_y.stop_gradient = True
+ shift_x = paddle.reshape(shift_x, shape=[-1])
+ shift_y = paddle.reshape(shift_y, shape=[-1])
+ location = paddle.stack(
+ [shift_x, shift_y], axis=-1) + float(fpn_stride) / 2
+ location.stop_gradient = True
+ return location
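+        # Editorial example: for a 2x2 feature map with fpn_stride=8 this
+        # returns [[4, 4], [12, 4], [4, 12], [12, 12]], the cell centers in
+        # input-image coordinates (each top-left corner offset by stride / 2).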
+
+ def forward(self, fpn_feats, is_training):
+ assert len(fpn_feats) == len(
+ self.fpn_stride
+ ), "The size of fpn_feats is not equal to size of fpn_stride"
+ cls_logits_list = []
+ bboxes_reg_list = []
+ centerness_list = []
+ for scale_reg, fpn_stride, fpn_feat in zip(self.scales_regs,
+ self.fpn_stride, fpn_feats):
+ fcos_cls_feat, fcos_reg_feat = self.fcos_feat(fpn_feat)
+ cls_logits = self.fcos_head_cls(fcos_cls_feat)
+ bbox_reg = scale_reg(self.fcos_head_reg(fcos_reg_feat))
+ if self.centerness_on_reg:
+ centerness = self.fcos_head_centerness(fcos_reg_feat)
+ else:
+ centerness = self.fcos_head_centerness(fcos_cls_feat)
+ if self.norm_reg_targets:
+ bbox_reg = F.relu(bbox_reg)
+ if not is_training:
+ bbox_reg = bbox_reg * fpn_stride
+ else:
+ bbox_reg = paddle.exp(bbox_reg)
+ cls_logits_list.append(cls_logits)
+ bboxes_reg_list.append(bbox_reg)
+ centerness_list.append(centerness)
+
+ if not is_training:
+ locations_list = []
+ for fpn_stride, feature in zip(self.fpn_stride, fpn_feats):
+ location = self._compute_locations_by_level(fpn_stride, feature)
+ locations_list.append(location)
+
+ return locations_list, cls_logits_list, bboxes_reg_list, centerness_list
+ else:
+ return cls_logits_list, bboxes_reg_list, centerness_list
+
+ def get_loss(self, fcos_head_outs, tag_labels, tag_bboxes, tag_centerness):
+ cls_logits, bboxes_reg, centerness = fcos_head_outs
+ return self.fcos_loss(cls_logits, bboxes_reg, centerness, tag_labels,
+ tag_bboxes, tag_centerness)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/gfl_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/gfl_head.py
new file mode 100644
index 000000000..17e87a4ef
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/gfl_head.py
@@ -0,0 +1,480 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/gfl_head.py
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Normal, Constant
+
+from ppdet.core.workspace import register
+from ppdet.modeling.layers import ConvNormLayer
+from ppdet.modeling.bbox_utils import distance2bbox, bbox2distance
+from ppdet.data.transform.atss_assigner import bbox_overlaps
+
+
+class ScaleReg(nn.Layer):
+ """
+ Parameter for scaling the regression outputs.
+ """
+
+ def __init__(self):
+ super(ScaleReg, self).__init__()
+ self.scale_reg = self.create_parameter(
+ shape=[1],
+ attr=ParamAttr(initializer=Constant(value=1.)),
+ dtype="float32")
+
+ def forward(self, inputs):
+ out = inputs * self.scale_reg
+ return out
+
+
+class Integral(nn.Layer):
+ """A fixed layer for calculating integral result from distribution.
+ This layer calculates the target location by :math: `sum{P(y_i) * y_i}`,
+ P(y_i) denotes the softmax vector that represents the discrete distribution
+ y_i denotes the discrete set, usually {0, 1, 2, ..., reg_max}
+
+ Args:
+ reg_max (int): The maximal value of the discrete set. Default: 16. You
+ may want to reset it according to your new dataset or related
+ settings.
+ """
+
+ def __init__(self, reg_max=16):
+ super(Integral, self).__init__()
+ self.reg_max = reg_max
+ self.register_buffer('project',
+ paddle.linspace(0, self.reg_max, self.reg_max + 1))
+
+ def forward(self, x):
+ """Forward feature from the regression head to get integral result of
+ bounding box location.
+ Args:
+ x (Tensor): Features of the regression head, shape (N, 4*(n+1)),
+ n is self.reg_max.
+ Returns:
+ x (Tensor): Integral result of box locations, i.e., distance
+ offsets from the box center in four directions, shape (N, 4).
+ """
+ x = F.softmax(x.reshape([-1, self.reg_max + 1]), axis=1)
+ x = F.linear(x, self.project).reshape([-1, 4])
+ return x
+
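+# Illustrative example (editorial note, not part of the upstream file): with
+# reg_max=2 the project buffer is [0, 1, 2]; a per-side softmax distribution
+# of [0.1, 0.2, 0.7] integrates to 0*0.1 + 1*0.2 + 2*0.7 = 1.6, the expected
+# distance offset for that box side:
+#
+#   integral = Integral(reg_max=2)
+#   logits = paddle.log(paddle.to_tensor([[0.1, 0.2, 0.7]] * 4)).reshape([1, 12])
+#   print(integral(logits))  # approximately [[1.6, 1.6, 1.6, 1.6]]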
+
+@register
+class DGQP(nn.Layer):
+ """Distribution-Guided Quality Predictor of GFocal head
+
+ Args:
+ reg_topk (int): top-k statistics of distribution to guide LQE
+ reg_channels (int): hidden layer unit to generate LQE
+ add_mean (bool): Whether to calculate the mean of top-k statistics
+ """
+
+ def __init__(self, reg_topk=4, reg_channels=64, add_mean=True):
+ super(DGQP, self).__init__()
+ self.reg_topk = reg_topk
+ self.reg_channels = reg_channels
+ self.add_mean = add_mean
+ self.total_dim = reg_topk
+ if add_mean:
+ self.total_dim += 1
+ self.reg_conv1 = self.add_sublayer(
+ 'dgqp_reg_conv1',
+ nn.Conv2D(
+ in_channels=4 * self.total_dim,
+ out_channels=self.reg_channels,
+ kernel_size=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+ self.reg_conv2 = self.add_sublayer(
+ 'dgqp_reg_conv2',
+ nn.Conv2D(
+ in_channels=self.reg_channels,
+ out_channels=1,
+ kernel_size=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+
+ def forward(self, x):
+ """Forward feature from the regression head to get integral result of
+ bounding box location.
+ Args:
+ x (Tensor): Features of the regression head, shape (N, 4*(n+1)),
+ n is self.reg_max.
+ Returns:
+ x (Tensor): Integral result of box locations, i.e., distance
+ offsets from the box center in four directions, shape (N, 4).
+ """
+ N, _, H, W = x.shape[:]
+ prob = F.softmax(x.reshape([N, 4, -1, H, W]), axis=2)
+ prob_topk, _ = prob.topk(self.reg_topk, axis=2)
+ if self.add_mean:
+ stat = paddle.concat(
+ [prob_topk, prob_topk.mean(
+ axis=2, keepdim=True)], axis=2)
+ else:
+ stat = prob_topk
+ y = F.relu(self.reg_conv1(stat.reshape([N, -1, H, W])))
+ y = F.sigmoid(self.reg_conv2(y))
+ return y
+
+
+@register
+class GFLHead(nn.Layer):
+ """
+ GFLHead
+ Args:
+ conv_feat (object): Instance of 'FCOSFeat'
+ num_classes (int): Number of classes
+ fpn_stride (list): The stride of each FPN Layer
+ prior_prob (float): Used to set the bias init for the class prediction layer
+ loss_class (object): Instance of QualityFocalLoss.
+ loss_dfl (object): Instance of DistributionFocalLoss.
+ loss_bbox (object): Instance of bbox loss.
+        reg_max (int): Max value of the integral set :math:`\{0, ..., reg_max\}`
+            in QFL setting. Default: 16.
+ """
+ __inject__ = [
+ 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox', 'nms'
+ ]
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ conv_feat='FCOSFeat',
+ dgqp_module=None,
+ num_classes=80,
+ fpn_stride=[8, 16, 32, 64, 128],
+ prior_prob=0.01,
+ loss_class='QualityFocalLoss',
+ loss_dfl='DistributionFocalLoss',
+ loss_bbox='GIoULoss',
+ reg_max=16,
+ feat_in_chan=256,
+ nms=None,
+ nms_pre=1000,
+ cell_offset=0):
+ super(GFLHead, self).__init__()
+ self.conv_feat = conv_feat
+ self.dgqp_module = dgqp_module
+ self.num_classes = num_classes
+ self.fpn_stride = fpn_stride
+ self.prior_prob = prior_prob
+ self.loss_qfl = loss_class
+ self.loss_dfl = loss_dfl
+ self.loss_bbox = loss_bbox
+ self.reg_max = reg_max
+ self.feat_in_chan = feat_in_chan
+ self.nms = nms
+ self.nms_pre = nms_pre
+ self.cell_offset = cell_offset
+ self.use_sigmoid = self.loss_qfl.use_sigmoid
+ if self.use_sigmoid:
+ self.cls_out_channels = self.num_classes
+ else:
+ self.cls_out_channels = self.num_classes + 1
+
+ conv_cls_name = "gfl_head_cls"
+ bias_init_value = -math.log((1 - self.prior_prob) / self.prior_prob)
+ self.gfl_head_cls = self.add_sublayer(
+ conv_cls_name,
+ nn.Conv2D(
+ in_channels=self.feat_in_chan,
+ out_channels=self.cls_out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(
+ initializer=Constant(value=bias_init_value))))
+
+ conv_reg_name = "gfl_head_reg"
+ self.gfl_head_reg = self.add_sublayer(
+ conv_reg_name,
+ nn.Conv2D(
+ in_channels=self.feat_in_chan,
+ out_channels=4 * (self.reg_max + 1),
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+
+ self.scales_regs = []
+ for i in range(len(self.fpn_stride)):
+ lvl = int(math.log(int(self.fpn_stride[i]), 2))
+ feat_name = 'p{}_feat'.format(lvl)
+ scale_reg = self.add_sublayer(feat_name, ScaleReg())
+ self.scales_regs.append(scale_reg)
+
+ self.distribution_project = Integral(self.reg_max)
+
+ def forward(self, fpn_feats):
+ assert len(fpn_feats) == len(
+ self.fpn_stride
+ ), "The size of fpn_feats is not equal to size of fpn_stride"
+ cls_logits_list = []
+ bboxes_reg_list = []
+ for scale_reg, fpn_feat in zip(self.scales_regs, fpn_feats):
+ conv_cls_feat, conv_reg_feat = self.conv_feat(fpn_feat)
+ cls_logits = self.gfl_head_cls(conv_cls_feat)
+ bbox_reg = scale_reg(self.gfl_head_reg(conv_reg_feat))
+ if self.dgqp_module:
+ quality_score = self.dgqp_module(bbox_reg)
+ cls_logits = F.sigmoid(cls_logits) * quality_score
+ if not self.training:
+ cls_logits = F.sigmoid(cls_logits.transpose([0, 2, 3, 1]))
+ bbox_reg = bbox_reg.transpose([0, 2, 3, 1])
+ cls_logits_list.append(cls_logits)
+ bboxes_reg_list.append(bbox_reg)
+
+ return (cls_logits_list, bboxes_reg_list)
+
+ def _images_to_levels(self, target, num_level_anchors):
+ """
+ Convert targets by image to targets by feature level.
+ """
+ level_targets = []
+ start = 0
+ for n in num_level_anchors:
+ end = start + n
+ level_targets.append(target[:, start:end].squeeze(0))
+ start = end
+ return level_targets
+
+ def _grid_cells_to_center(self, grid_cells):
+ """
+        Get the center location of each grid cell
+ Args:
+ grid_cells: grid cells of a feature map
+ Returns:
+ center points
+ """
+ cells_cx = (grid_cells[:, 2] + grid_cells[:, 0]) / 2
+ cells_cy = (grid_cells[:, 3] + grid_cells[:, 1]) / 2
+ return paddle.stack([cells_cx, cells_cy], axis=-1)
+
+ def get_loss(self, gfl_head_outs, gt_meta):
+ cls_logits, bboxes_reg = gfl_head_outs
+ num_level_anchors = [
+ featmap.shape[-2] * featmap.shape[-1] for featmap in cls_logits
+ ]
+ grid_cells_list = self._images_to_levels(gt_meta['grid_cells'],
+ num_level_anchors)
+ labels_list = self._images_to_levels(gt_meta['labels'],
+ num_level_anchors)
+ label_weights_list = self._images_to_levels(gt_meta['label_weights'],
+ num_level_anchors)
+ bbox_targets_list = self._images_to_levels(gt_meta['bbox_targets'],
+ num_level_anchors)
+ num_total_pos = sum(gt_meta['pos_num'])
+ try:
+ num_total_pos = paddle.distributed.all_reduce(num_total_pos.clone(
+ )) / paddle.distributed.get_world_size()
+ except:
+ num_total_pos = max(num_total_pos, 1)
+
+ loss_bbox_list, loss_dfl_list, loss_qfl_list, avg_factor = [], [], [], []
+ for cls_score, bbox_pred, grid_cells, labels, label_weights, bbox_targets, stride in zip(
+ cls_logits, bboxes_reg, grid_cells_list, labels_list,
+ label_weights_list, bbox_targets_list, self.fpn_stride):
+ grid_cells = grid_cells.reshape([-1, 4])
+ cls_score = cls_score.transpose([0, 2, 3, 1]).reshape(
+ [-1, self.cls_out_channels])
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1]).reshape(
+ [-1, 4 * (self.reg_max + 1)])
+ bbox_targets = bbox_targets.reshape([-1, 4])
+ labels = labels.reshape([-1])
+ label_weights = label_weights.reshape([-1])
+
+ bg_class_ind = self.num_classes
+ pos_inds = paddle.nonzero(
+ paddle.logical_and((labels >= 0), (labels < bg_class_ind)),
+ as_tuple=False).squeeze(1)
+ score = np.zeros(labels.shape)
+ if len(pos_inds) > 0:
+ pos_bbox_targets = paddle.gather(bbox_targets, pos_inds, axis=0)
+ pos_bbox_pred = paddle.gather(bbox_pred, pos_inds, axis=0)
+ pos_grid_cells = paddle.gather(grid_cells, pos_inds, axis=0)
+ pos_grid_cell_centers = self._grid_cells_to_center(
+ pos_grid_cells) / stride
+
+ weight_targets = F.sigmoid(cls_score.detach())
+ weight_targets = paddle.gather(
+ weight_targets.max(axis=1, keepdim=True), pos_inds, axis=0)
+ pos_bbox_pred_corners = self.distribution_project(pos_bbox_pred)
+ pos_decode_bbox_pred = distance2bbox(pos_grid_cell_centers,
+ pos_bbox_pred_corners)
+ pos_decode_bbox_targets = pos_bbox_targets / stride
+ bbox_iou = bbox_overlaps(
+ pos_decode_bbox_pred.detach().numpy(),
+ pos_decode_bbox_targets.detach().numpy(),
+ is_aligned=True)
+ score[pos_inds.numpy()] = bbox_iou
+ pred_corners = pos_bbox_pred.reshape([-1, self.reg_max + 1])
+ target_corners = bbox2distance(pos_grid_cell_centers,
+ pos_decode_bbox_targets,
+ self.reg_max).reshape([-1])
+ # regression loss
+ loss_bbox = paddle.sum(
+ self.loss_bbox(pos_decode_bbox_pred,
+ pos_decode_bbox_targets) * weight_targets)
+
+ # dfl loss
+ loss_dfl = self.loss_dfl(
+ pred_corners,
+ target_corners,
+ weight=weight_targets.expand([-1, 4]).reshape([-1]),
+ avg_factor=4.0)
+ else:
+ loss_bbox = bbox_pred.sum() * 0
+ loss_dfl = bbox_pred.sum() * 0
+ weight_targets = paddle.to_tensor([0], dtype='float32')
+
+ # qfl loss
+ score = paddle.to_tensor(score)
+ loss_qfl = self.loss_qfl(
+ cls_score, (labels, score),
+ weight=label_weights,
+ avg_factor=num_total_pos)
+ loss_bbox_list.append(loss_bbox)
+ loss_dfl_list.append(loss_dfl)
+ loss_qfl_list.append(loss_qfl)
+ avg_factor.append(weight_targets.sum())
+
+ avg_factor = sum(avg_factor)
+ try:
+ avg_factor = paddle.distributed.all_reduce(avg_factor.clone())
+ avg_factor = paddle.clip(
+ avg_factor / paddle.distributed.get_world_size(), min=1)
+ except:
+ avg_factor = max(avg_factor.item(), 1)
+ if avg_factor <= 0:
+ loss_qfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ loss_bbox = paddle.to_tensor(
+ 0, dtype='float32', stop_gradient=False)
+ loss_dfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ else:
+ losses_bbox = list(map(lambda x: x / avg_factor, loss_bbox_list))
+ losses_dfl = list(map(lambda x: x / avg_factor, loss_dfl_list))
+ loss_qfl = sum(loss_qfl_list)
+ loss_bbox = sum(losses_bbox)
+ loss_dfl = sum(losses_dfl)
+
+ loss_states = dict(
+ loss_qfl=loss_qfl, loss_bbox=loss_bbox, loss_dfl=loss_dfl)
+
+ return loss_states
+
+ def get_single_level_center_point(self, featmap_size, stride,
+ cell_offset=0):
+ """
+ Generate pixel centers of a single stage feature map.
+ Args:
+ featmap_size: height and width of the feature map
+            stride: downsample stride of the feature map
+ Returns:
+ y and x of the center points
+ """
+ h, w = featmap_size
+ x_range = (paddle.arange(w, dtype='float32') + cell_offset) * stride
+ y_range = (paddle.arange(h, dtype='float32') + cell_offset) * stride
+ y, x = paddle.meshgrid(y_range, x_range)
+ y = y.flatten()
+ x = x.flatten()
+ return y, x
+
+ def get_bboxes_single(self,
+ cls_scores,
+ bbox_preds,
+ img_shape,
+ scale_factor,
+ rescale=True,
+ cell_offset=0):
+ assert len(cls_scores) == len(bbox_preds)
+ mlvl_bboxes = []
+ mlvl_scores = []
+ for stride, cls_score, bbox_pred in zip(self.fpn_stride, cls_scores,
+ bbox_preds):
+ featmap_size = [
+ paddle.shape(cls_score)[0], paddle.shape(cls_score)[1]
+ ]
+ y, x = self.get_single_level_center_point(
+ featmap_size, stride, cell_offset=cell_offset)
+ center_points = paddle.stack([x, y], axis=-1)
+ scores = cls_score.reshape([-1, self.cls_out_channels])
+ bbox_pred = self.distribution_project(bbox_pred) * stride
+
+ if scores.shape[0] > self.nms_pre:
+ max_scores = scores.max(axis=1)
+ _, topk_inds = max_scores.topk(self.nms_pre)
+ center_points = center_points.gather(topk_inds)
+ bbox_pred = bbox_pred.gather(topk_inds)
+ scores = scores.gather(topk_inds)
+
+ bboxes = distance2bbox(
+ center_points, bbox_pred, max_shape=img_shape)
+ mlvl_bboxes.append(bboxes)
+ mlvl_scores.append(scores)
+ mlvl_bboxes = paddle.concat(mlvl_bboxes)
+ if rescale:
+ # [h_scale, w_scale] to [w_scale, h_scale, w_scale, h_scale]
+ im_scale = paddle.concat([scale_factor[::-1], scale_factor[::-1]])
+ mlvl_bboxes /= im_scale
+ mlvl_scores = paddle.concat(mlvl_scores)
+ mlvl_scores = mlvl_scores.transpose([1, 0])
+ return mlvl_bboxes, mlvl_scores
+
+ def decode(self, cls_scores, bbox_preds, im_shape, scale_factor,
+ cell_offset):
+ batch_bboxes = []
+ batch_scores = []
+ for img_id in range(cls_scores[0].shape[0]):
+ num_levels = len(cls_scores)
+ cls_score_list = [cls_scores[i][img_id] for i in range(num_levels)]
+ bbox_pred_list = [bbox_preds[i][img_id] for i in range(num_levels)]
+ bboxes, scores = self.get_bboxes_single(
+ cls_score_list,
+ bbox_pred_list,
+ im_shape[img_id],
+ scale_factor[img_id],
+ cell_offset=cell_offset)
+ batch_bboxes.append(bboxes)
+ batch_scores.append(scores)
+ batch_bboxes = paddle.stack(batch_bboxes, axis=0)
+ batch_scores = paddle.stack(batch_scores, axis=0)
+
+ return batch_bboxes, batch_scores
+
+ def post_process(self, gfl_head_outs, im_shape, scale_factor):
+ cls_scores, bboxes_reg = gfl_head_outs
+ bboxes, score = self.decode(cls_scores, bboxes_reg, im_shape,
+ scale_factor, self.cell_offset)
+ bbox_pred, bbox_num, _ = self.nms(bboxes, score)
+ return bbox_pred, bbox_num
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/keypoint_hrhrnet_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/keypoint_hrhrnet_head.py
new file mode 100644
index 000000000..869b1816e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/keypoint_hrhrnet_head.py
@@ -0,0 +1,108 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register
+from .. import layers as L
+from ..backbones.hrnet import BasicBlock
+
+
+@register
+class HrHRNetHead(nn.Layer):
+ __inject__ = ['loss']
+
+ def __init__(self, num_joints, loss='HrHRNetLoss', swahr=False, width=32):
+ """
+ Head for HigherHRNet network
+
+ Args:
+ num_joints (int): number of keypoints
+            loss (object): HrHRNetLoss instance
+ swahr (bool): whether to use swahr
+ width (int): hrnet channel width
+ """
+ super(HrHRNetHead, self).__init__()
+ self.loss = loss
+
+ self.num_joints = num_joints
+ num_featout1 = num_joints * 2
+ num_featout2 = num_joints
+ self.swahr = swahr
+ self.conv1 = L.Conv2d(width, num_featout1, 1, 1, 0, bias=True)
+ self.conv2 = L.Conv2d(width, num_featout2, 1, 1, 0, bias=True)
+ self.deconv = nn.Sequential(
+ L.ConvTranspose2d(
+ num_featout1 + width, width, 4, 2, 1, 0, bias=False),
+ L.BatchNorm2d(width),
+ L.ReLU())
+ self.blocks = nn.Sequential(*(BasicBlock(
+ num_channels=width,
+ num_filters=width,
+ has_se=False,
+ freeze_norm=False,
+ name='HrHRNetHead_{}'.format(i)) for i in range(4)))
+
+ self.interpolate = L.Upsample(2, mode='bilinear')
+ self.concat = L.Concat(dim=1)
+ if swahr:
+ self.scalelayer0 = nn.Sequential(
+ L.Conv2d(
+ width, num_joints, 1, 1, 0, bias=True),
+ L.BatchNorm2d(num_joints),
+ L.ReLU(),
+ L.Conv2d(
+ num_joints,
+ num_joints,
+ 9,
+ 1,
+ 4,
+ groups=num_joints,
+ bias=True))
+ self.scalelayer1 = nn.Sequential(
+ L.Conv2d(
+ width, num_joints, 1, 1, 0, bias=True),
+ L.BatchNorm2d(num_joints),
+ L.ReLU(),
+ L.Conv2d(
+ num_joints,
+ num_joints,
+ 9,
+ 1,
+ 4,
+ groups=num_joints,
+ bias=True))
+
+ def forward(self, feats, targets=None):
+ x1 = feats[0]
+ xo1 = self.conv1(x1)
+ x2 = self.blocks(self.deconv(self.concat((x1, xo1))))
+ xo2 = self.conv2(x2)
+ num_joints = self.num_joints
+ if self.training:
+ heatmap1, tagmap = paddle.split(xo1, 2, axis=1)
+ if self.swahr:
+ so1 = self.scalelayer0(x1)
+ so2 = self.scalelayer1(x2)
+ hrhrnet_outputs = ([heatmap1, so1], [xo2, so2], tagmap)
+ return self.loss(hrhrnet_outputs, targets)
+ else:
+ hrhrnet_outputs = (heatmap1, xo2, tagmap)
+ return self.loss(hrhrnet_outputs, targets)
+
+ # averaged heatmap, upsampled tagmap
+ upsampled = self.interpolate(xo1)
+ avg = (upsampled[:, :num_joints] + xo2[:, :num_joints]) / 2
+ return avg, upsampled[:, num_joints:]
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/mask_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/mask_head.py
new file mode 100644
index 000000000..bfce2dc5b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/mask_head.py
@@ -0,0 +1,250 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import KaimingNormal
+
+from ppdet.core.workspace import register, create
+from ppdet.modeling.layers import ConvNormLayer
+from .roi_extractor import RoIAlign
+
+
+@register
+class MaskFeat(nn.Layer):
+ """
+ Feature extraction in Mask head
+
+ Args:
+ in_channel (int): Input channels
+ out_channel (int): Output channels
+ num_convs (int): The number of conv layers, default 4
+ norm_type (string | None): Norm type, bn, gn, sync_bn are available,
+ default None
+ """
+
+ def __init__(self,
+ in_channel=256,
+ out_channel=256,
+ num_convs=4,
+ norm_type=None):
+ super(MaskFeat, self).__init__()
+ self.num_convs = num_convs
+ self.in_channel = in_channel
+ self.out_channel = out_channel
+ self.norm_type = norm_type
+ fan_conv = out_channel * 3 * 3
+ fan_deconv = out_channel * 2 * 2
+
+ mask_conv = nn.Sequential()
+ if norm_type == 'gn':
+ for i in range(self.num_convs):
+ conv_name = 'mask_inter_feat_{}'.format(i + 1)
+ mask_conv.add_sublayer(
+ conv_name,
+ ConvNormLayer(
+ ch_in=in_channel if i == 0 else out_channel,
+ ch_out=out_channel,
+ filter_size=3,
+ stride=1,
+ norm_type=self.norm_type,
+ initializer=KaimingNormal(fan_in=fan_conv),
+ skip_quant=True))
+ mask_conv.add_sublayer(conv_name + 'act', nn.ReLU())
+ else:
+ for i in range(self.num_convs):
+ conv_name = 'mask_inter_feat_{}'.format(i + 1)
+ conv = nn.Conv2D(
+ in_channels=in_channel if i == 0 else out_channel,
+ out_channels=out_channel,
+ kernel_size=3,
+ padding=1,
+ weight_attr=paddle.ParamAttr(
+ initializer=KaimingNormal(fan_in=fan_conv)))
+ conv.skip_quant = True
+ mask_conv.add_sublayer(conv_name, conv)
+ mask_conv.add_sublayer(conv_name + 'act', nn.ReLU())
+ mask_conv.add_sublayer(
+ 'conv5_mask',
+ nn.Conv2DTranspose(
+ in_channels=self.in_channel,
+ out_channels=self.out_channel,
+ kernel_size=2,
+ stride=2,
+ weight_attr=paddle.ParamAttr(
+ initializer=KaimingNormal(fan_in=fan_deconv))))
+ mask_conv.add_sublayer('conv5_mask' + 'act', nn.ReLU())
+ self.upsample = mask_conv
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ if isinstance(input_shape, (list, tuple)):
+ input_shape = input_shape[0]
+ return {'in_channel': input_shape.channels, }
+
+ def out_channels(self):
+ return self.out_channel
+
+ def forward(self, feats):
+ return self.upsample(feats)
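+# Shape note (sketch, default settings assumed): a [N, 256, 14, 14] RoI
+# feature passes through num_convs 3x3 convs at 256 channels and one
+# stride-2 transposed conv, giving [N, 256, 28, 28] mask features.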
+
+
+@register
+class MaskHead(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['mask_assigner']
+ """
+ RCNN mask head
+
+ Args:
+ head (nn.Layer): Extract feature in mask head
+ roi_extractor (object): The module of RoI Extractor
+ mask_assigner (object): The module of Mask Assigner,
+ label and sample the mask
+ num_classes (int): The number of classes
+ share_bbox_feat (bool): Whether to share the feature from bbox head,
+ default false
+ """
+
+ def __init__(self,
+ head,
+ roi_extractor=RoIAlign().__dict__,
+ mask_assigner='MaskAssigner',
+ num_classes=80,
+ share_bbox_feat=False):
+ super(MaskHead, self).__init__()
+ self.num_classes = num_classes
+
+ self.roi_extractor = roi_extractor
+ if isinstance(roi_extractor, dict):
+ self.roi_extractor = RoIAlign(**roi_extractor)
+ self.head = head
+ self.in_channels = head.out_channels()
+ self.mask_assigner = mask_assigner
+ self.share_bbox_feat = share_bbox_feat
+ self.bbox_head = None
+
+ self.mask_fcn_logits = nn.Conv2D(
+ in_channels=self.in_channels,
+ out_channels=self.num_classes,
+ kernel_size=1,
+ weight_attr=paddle.ParamAttr(initializer=KaimingNormal(
+ fan_in=self.num_classes)))
+ self.mask_fcn_logits.skip_quant = True
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ roi_pooler = cfg['roi_extractor']
+ assert isinstance(roi_pooler, dict)
+ kwargs = RoIAlign.from_config(cfg, input_shape)
+ roi_pooler.update(kwargs)
+ kwargs = {'input_shape': input_shape}
+ head = create(cfg['head'], **kwargs)
+ return {
+ 'roi_extractor': roi_pooler,
+ 'head': head,
+ }
+
+ def get_loss(self, mask_logits, mask_label, mask_target, mask_weight):
+ mask_label = F.one_hot(mask_label, self.num_classes).unsqueeze([2, 3])
+ mask_label = paddle.expand_as(mask_label, mask_logits)
+ mask_label.stop_gradient = True
+ mask_pred = paddle.gather_nd(mask_logits, paddle.nonzero(mask_label))
+ shape = mask_logits.shape
+ mask_pred = paddle.reshape(mask_pred, [shape[0], shape[2], shape[3]])
+
+ mask_target = mask_target.cast('float32')
+ mask_weight = mask_weight.unsqueeze([1, 2])
+ loss_mask = F.binary_cross_entropy_with_logits(
+ mask_pred, mask_target, weight=mask_weight, reduction="mean")
+ return loss_mask
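+    # Worked example (hypothetical shapes): with 3 sampled RoIs of classes
+    # [2, 0, 5] and mask_logits of shape [3, 80, 28, 28], the one-hot +
+    # gather_nd above keeps one [28, 28] logit map per RoI, so mask_pred is
+    # [3, 28, 28] and is matched against the binarized GT crops.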
+
+ def forward_train(self, body_feats, rois, rois_num, inputs, targets,
+ bbox_feat):
+ """
+ body_feats (list[Tensor]): Multi-level backbone features
+ rois (list[Tensor]): Proposals for each batch with shape [N, 4]
+ rois_num (Tensor): The number of proposals for each batch
+ inputs (dict): ground truth info
+ """
+ tgt_labels, _, tgt_gt_inds = targets
+ rois, rois_num, tgt_classes, tgt_masks, mask_index, tgt_weights = self.mask_assigner(
+ rois, tgt_labels, tgt_gt_inds, inputs)
+
+ if self.share_bbox_feat:
+ rois_feat = paddle.gather(bbox_feat, mask_index)
+ else:
+ rois_feat = self.roi_extractor(body_feats, rois, rois_num)
+ mask_feat = self.head(rois_feat)
+ mask_logits = self.mask_fcn_logits(mask_feat)
+
+ loss_mask = self.get_loss(mask_logits, tgt_classes, tgt_masks,
+ tgt_weights)
+ return {'loss_mask': loss_mask}
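+    # Note (sketch of the intended data flow): mask_assigner keeps positive
+    # RoIs only and rasterizes the GT masks onto each RoI grid, so
+    # mask_logits and tgt_masks correspond one-to-one in the loss above.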
+
+ def forward_test(self,
+ body_feats,
+ rois,
+ rois_num,
+ scale_factor,
+ feat_func=None):
+ """
+ body_feats (list[Tensor]): Multi-level backbone features
+ rois (Tensor): Prediction from bbox head with shape [N, 6]
+ rois_num (Tensor): The number of prediction for each batch
+ scale_factor (Tensor): The scale factor from origin size to input size
+ """
+ if rois.shape[0] == 0:
+ mask_out = paddle.full([1, 1, 1, 1], -1)
+ else:
+ bbox = [rois[:, 2:]]
+ labels = rois[:, 0].cast('int32')
+ rois_feat = self.roi_extractor(body_feats, bbox, rois_num)
+ if self.share_bbox_feat:
+ assert feat_func is not None
+ rois_feat = feat_func(rois_feat)
+
+ mask_feat = self.head(rois_feat)
+ mask_logit = self.mask_fcn_logits(mask_feat)
+ mask_num_class = mask_logit.shape[1]
+ if mask_num_class == 1:
+ mask_out = F.sigmoid(mask_logit)
+ else:
+                num_masks = mask_logit.shape[0]
+                mask_out = []
+                # TODO: need to optimize gather
+                for i in range(num_masks):
+ pred_masks = paddle.unsqueeze(
+ mask_logit[i, :, :, :], axis=0)
+ mask = paddle.gather(pred_masks, labels[i], axis=1)
+ mask_out.append(mask)
+ mask_out = F.sigmoid(paddle.concat(mask_out))
+ return mask_out
+
+ def forward(self,
+ body_feats,
+ rois,
+ rois_num,
+ inputs,
+ targets=None,
+ bbox_feat=None,
+ feat_func=None):
+ if self.training:
+ return self.forward_train(body_feats, rois, rois_num, inputs,
+ targets, bbox_feat)
+ else:
+ im_scale = inputs['scale_factor']
+ return self.forward_test(body_feats, rois, rois_num, im_scale,
+ feat_func)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/pico_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/pico_head.py
new file mode 100644
index 000000000..7cfd24c3c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/pico_head.py
@@ -0,0 +1,277 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Normal, Constant
+
+from ppdet.core.workspace import register
+from ppdet.modeling.layers import ConvNormLayer
+from .simota_head import OTAVFLHead
+
+
+@register
+class PicoFeat(nn.Layer):
+ """
+ PicoFeat of PicoDet
+
+ Args:
+ feat_in (int): The channel number of input Tensor.
+ feat_out (int): The channel number of output Tensor.
+        num_convs (int): The number of conv (depthwise + pointwise) blocks per branch.
+ norm_type (str): Normalization type, 'bn'/'sync_bn'/'gn'.
+ """
+
+ def __init__(self,
+ feat_in=256,
+ feat_out=96,
+ num_fpn_stride=3,
+ num_convs=2,
+ norm_type='bn',
+ share_cls_reg=False,
+ act='hard_swish'):
+ super(PicoFeat, self).__init__()
+ self.num_convs = num_convs
+ self.norm_type = norm_type
+ self.share_cls_reg = share_cls_reg
+ self.act = act
+ self.cls_convs = []
+ self.reg_convs = []
+ for stage_idx in range(num_fpn_stride):
+ cls_subnet_convs = []
+ reg_subnet_convs = []
+ for i in range(self.num_convs):
+ in_c = feat_in if i == 0 else feat_out
+ cls_conv_dw = self.add_sublayer(
+ 'cls_conv_dw{}.{}'.format(stage_idx, i),
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=5,
+ stride=1,
+ groups=feat_out,
+ norm_type=norm_type,
+ bias_on=False,
+ lr_scale=2.))
+ cls_subnet_convs.append(cls_conv_dw)
+ cls_conv_pw = self.add_sublayer(
+ 'cls_conv_pw{}.{}'.format(stage_idx, i),
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ bias_on=False,
+ lr_scale=2.))
+ cls_subnet_convs.append(cls_conv_pw)
+
+ if not self.share_cls_reg:
+ reg_conv_dw = self.add_sublayer(
+ 'reg_conv_dw{}.{}'.format(stage_idx, i),
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=5,
+ stride=1,
+ groups=feat_out,
+ norm_type=norm_type,
+ bias_on=False,
+ lr_scale=2.))
+ reg_subnet_convs.append(reg_conv_dw)
+ reg_conv_pw = self.add_sublayer(
+ 'reg_conv_pw{}.{}'.format(stage_idx, i),
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=feat_out,
+ filter_size=1,
+ stride=1,
+ norm_type=norm_type,
+ bias_on=False,
+ lr_scale=2.))
+ reg_subnet_convs.append(reg_conv_pw)
+ self.cls_convs.append(cls_subnet_convs)
+ self.reg_convs.append(reg_subnet_convs)
+
+ def act_func(self, x):
+ if self.act == "leaky_relu":
+ x = F.leaky_relu(x)
+ elif self.act == "hard_swish":
+ x = F.hardswish(x)
+ return x
+
+ def forward(self, fpn_feat, stage_idx):
+ assert stage_idx < len(self.cls_convs)
+ cls_feat = fpn_feat
+ reg_feat = fpn_feat
+ for i in range(len(self.cls_convs[stage_idx])):
+ cls_feat = self.act_func(self.cls_convs[stage_idx][i](cls_feat))
+ if not self.share_cls_reg:
+ reg_feat = self.act_func(self.reg_convs[stage_idx][i](reg_feat))
+ return cls_feat, reg_feat
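+# Tower layout (note): each of the num_convs steps above is a 5x5 depthwise
+# conv followed by a 1x1 pointwise conv, so the default num_convs=2 gives a
+# dw-pw-dw-pw stack per FPN level; with share_cls_reg=True the reg branch
+# reuses the cls tower.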
+
+
+@register
+class PicoHead(OTAVFLHead):
+ """
+ PicoHead
+ Args:
+ conv_feat (object): Instance of 'PicoFeat'
+ num_classes (int): Number of classes
+ fpn_stride (list): The stride of each FPN Layer
+ prior_prob (float): Used to set the bias init for the class prediction layer
+ loss_class (object): Instance of VariFocalLoss.
+ loss_dfl (object): Instance of DistributionFocalLoss.
+ loss_bbox (object): Instance of bbox loss.
+ assigner (object): Instance of label assigner.
+        reg_max (int): Max value of the integral set :math:`{0, ..., reg_max}`
+            in QFL setting. Default: 16.
+ """
+ __inject__ = [
+ 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox',
+ 'assigner', 'nms'
+ ]
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ conv_feat='PicoFeat',
+ dgqp_module=None,
+ num_classes=80,
+ fpn_stride=[8, 16, 32],
+ prior_prob=0.01,
+ loss_class='VariFocalLoss',
+ loss_dfl='DistributionFocalLoss',
+ loss_bbox='GIoULoss',
+ assigner='SimOTAAssigner',
+ reg_max=16,
+ feat_in_chan=96,
+ nms=None,
+ nms_pre=1000,
+ cell_offset=0):
+ super(PicoHead, self).__init__(
+ conv_feat=conv_feat,
+ dgqp_module=dgqp_module,
+ num_classes=num_classes,
+ fpn_stride=fpn_stride,
+ prior_prob=prior_prob,
+ loss_class=loss_class,
+ loss_dfl=loss_dfl,
+ loss_bbox=loss_bbox,
+ assigner=assigner,
+ reg_max=reg_max,
+ feat_in_chan=feat_in_chan,
+ nms=nms,
+ nms_pre=nms_pre,
+ cell_offset=cell_offset)
+ self.conv_feat = conv_feat
+ self.num_classes = num_classes
+ self.fpn_stride = fpn_stride
+ self.prior_prob = prior_prob
+ self.loss_vfl = loss_class
+ self.loss_dfl = loss_dfl
+ self.loss_bbox = loss_bbox
+ self.assigner = assigner
+ self.reg_max = reg_max
+ self.feat_in_chan = feat_in_chan
+ self.nms = nms
+ self.nms_pre = nms_pre
+ self.cell_offset = cell_offset
+
+ self.use_sigmoid = self.loss_vfl.use_sigmoid
+ if self.use_sigmoid:
+ self.cls_out_channels = self.num_classes
+ else:
+ self.cls_out_channels = self.num_classes + 1
+ bias_init_value = -math.log((1 - self.prior_prob) / self.prior_prob)
+ # Clear the super class initialization
+ self.gfl_head_cls = None
+ self.gfl_head_reg = None
+ self.scales_regs = None
+
+ self.head_cls_list = []
+ self.head_reg_list = []
+ for i in range(len(fpn_stride)):
+ head_cls = self.add_sublayer(
+ "head_cls" + str(i),
+ nn.Conv2D(
+ in_channels=self.feat_in_chan,
+ out_channels=self.cls_out_channels + 4 * (self.reg_max + 1)
+ if self.conv_feat.share_cls_reg else self.cls_out_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(
+ initializer=Constant(value=bias_init_value))))
+ self.head_cls_list.append(head_cls)
+ if not self.conv_feat.share_cls_reg:
+ head_reg = self.add_sublayer(
+ "head_reg" + str(i),
+ nn.Conv2D(
+ in_channels=self.feat_in_chan,
+ out_channels=4 * (self.reg_max + 1),
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(value=0))))
+ self.head_reg_list.append(head_reg)
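+        # Head layout (sketch): with share_cls_reg=True the single 1x1 conv
+        # emits cls_out_channels + 4 * (reg_max + 1) channels that forward()
+        # splits into scores and four per-side distributions over
+        # {0, ..., reg_max}; e.g. 80 classes with reg_max=7 -> 80 + 32 = 112.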
+
+ def forward(self, fpn_feats, deploy=False):
+ assert len(fpn_feats) == len(
+ self.fpn_stride
+ ), "The size of fpn_feats is not equal to size of fpn_stride"
+ cls_logits_list = []
+ bboxes_reg_list = []
+ for i, fpn_feat in enumerate(fpn_feats):
+ conv_cls_feat, conv_reg_feat = self.conv_feat(fpn_feat, i)
+ if self.conv_feat.share_cls_reg:
+ cls_logits = self.head_cls_list[i](conv_cls_feat)
+ cls_score, bbox_pred = paddle.split(
+ cls_logits,
+ [self.cls_out_channels, 4 * (self.reg_max + 1)],
+ axis=1)
+ else:
+ cls_score = self.head_cls_list[i](conv_cls_feat)
+ bbox_pred = self.head_reg_list[i](conv_reg_feat)
+
+ if self.dgqp_module:
+ quality_score = self.dgqp_module(bbox_pred)
+ cls_score = F.sigmoid(cls_score) * quality_score
+
+ if deploy:
+ # Now only supports batch size = 1 in deploy
+ # TODO(ygh): support batch size > 1
+ cls_score = F.sigmoid(cls_score).reshape(
+ [1, self.cls_out_channels, -1]).transpose([0, 2, 1])
+ bbox_pred = bbox_pred.reshape([1, (self.reg_max + 1) * 4,
+ -1]).transpose([0, 2, 1])
+ elif not self.training:
+ cls_score = F.sigmoid(cls_score.transpose([0, 2, 3, 1]))
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1])
+
+ cls_logits_list.append(cls_score)
+ bboxes_reg_list.append(bbox_pred)
+
+ return (cls_logits_list, bboxes_reg_list)
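+# Deploy-path sketch (comments only): with deploy=True and a single image,
+# each level returns scores of shape [1, H*W, cls_out_channels] and box
+# distributions of shape [1, H*W, 4 * (reg_max + 1)], which the exported
+# post-processing step decodes into boxes.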
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/roi_extractor.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/roi_extractor.py
new file mode 100644
index 000000000..35c3924e3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/roi_extractor.py
@@ -0,0 +1,111 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+from ppdet.core.workspace import register
+from ppdet.modeling import ops
+
+
+def _to_list(v):
+ if not isinstance(v, (list, tuple)):
+ return [v]
+ return v
+
+
+@register
+class RoIAlign(object):
+ """
+ RoI Align module
+
+    For more details, please refer to the documentation of roi_align
+    in ppdet/modeling/ops.py
+
+ Args:
+ resolution (int): The output size, default 14
+ spatial_scale (float): Multiplicative spatial scale factor to translate
+ ROI coords from their input scale to the scale used when pooling.
+ default 0.0625
+ sampling_ratio (int): The number of sampling points in the interpolation
+ grid, default 0
+        canconical_level (int): The canonical FPN level used when assigning
+            proposals to levels by scale (the parameter name keeps the
+            upstream spelling), default 4
+        canonical_size (int): The canonical box size that corresponds to
+            canconical_level, default 224
+ start_level (int): The start level of FPN layer to extract RoI feature,
+ default 0
+ end_level (int): The end level of FPN layer to extract RoI feature,
+ default 3
+ aligned (bool): Whether to add offset to rois' coord in roi_align.
+ default false
+ """
+
+ def __init__(self,
+ resolution=14,
+ spatial_scale=0.0625,
+ sampling_ratio=0,
+ canconical_level=4,
+ canonical_size=224,
+ start_level=0,
+ end_level=3,
+ aligned=False):
+ super(RoIAlign, self).__init__()
+ self.resolution = resolution
+ self.spatial_scale = _to_list(spatial_scale)
+ self.sampling_ratio = sampling_ratio
+ self.canconical_level = canconical_level
+ self.canonical_size = canonical_size
+ self.start_level = start_level
+ self.end_level = end_level
+ self.aligned = aligned
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'spatial_scale': [1. / i.stride for i in input_shape]}
+
+ def __call__(self, feats, roi, rois_num):
+ roi = paddle.concat(roi) if len(roi) > 1 else roi[0]
+ if len(feats) == 1:
+ rois_feat = ops.roi_align(
+ feats[self.start_level],
+ roi,
+ self.resolution,
+ self.spatial_scale[0],
+ rois_num=rois_num,
+ aligned=self.aligned)
+ else:
+ offset = 2
+ k_min = self.start_level + offset
+ k_max = self.end_level + offset
+ rois_dist, restore_index, rois_num_dist = ops.distribute_fpn_proposals(
+ roi,
+ k_min,
+ k_max,
+ self.canconical_level,
+ self.canonical_size,
+ rois_num=rois_num)
+ rois_feat_list = []
+ for lvl in range(self.start_level, self.end_level + 1):
+ roi_feat = ops.roi_align(
+ feats[lvl],
+ rois_dist[lvl],
+ self.resolution,
+ self.spatial_scale[lvl],
+ sampling_ratio=self.sampling_ratio,
+ rois_num=rois_num_dist[lvl],
+ aligned=self.aligned)
+ rois_feat_list.append(roi_feat)
+ rois_feat_shuffle = paddle.concat(rois_feat_list)
+ rois_feat = paddle.gather(rois_feat_shuffle, restore_index)
+
+ return rois_feat
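+# Level-routing sketch (upstream FPN rule, paraphrased): a proposal of size
+# w x h is routed to FPN level k = floor(canconical_level +
+# log2(sqrt(w * h) / canonical_size)), clipped to the range spanned by
+# start_level..end_level; e.g. a 224x224 box maps to level 4 (feats index 2
+# here) under the defaults above.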
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/s2anet_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/s2anet_head.py
new file mode 100644
index 000000000..7910379c4
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/s2anet_head.py
@@ -0,0 +1,1048 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# The code is based on https://github.com/csuhan/s2anet/blob/master/mmdet/models/anchor_heads_rotated/s2anet_head.py
+
+import sys
+
+import paddle
+from paddle import ParamAttr
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, Constant
+from ppdet.core.workspace import register
+from ppdet.modeling import ops
+from ppdet.modeling import bbox_utils
+from ppdet.modeling.proposal_generator.target_layer import RBoxAssigner
+import numpy as np
+
+
+class S2ANetAnchorGenerator(nn.Layer):
+ """
+ AnchorGenerator by paddle
+ """
+
+ def __init__(self, base_size, scales, ratios, scale_major=True, ctr=None):
+ super(S2ANetAnchorGenerator, self).__init__()
+ self.base_size = base_size
+ self.scales = paddle.to_tensor(scales)
+ self.ratios = paddle.to_tensor(ratios)
+ self.scale_major = scale_major
+ self.ctr = ctr
+ self.base_anchors = self.gen_base_anchors()
+
+ @property
+ def num_base_anchors(self):
+ return self.base_anchors.shape[0]
+
+ def gen_base_anchors(self):
+ w = self.base_size
+ h = self.base_size
+ if self.ctr is None:
+ x_ctr = 0.5 * (w - 1)
+ y_ctr = 0.5 * (h - 1)
+ else:
+ x_ctr, y_ctr = self.ctr
+
+ h_ratios = paddle.sqrt(self.ratios)
+ w_ratios = 1 / h_ratios
+ if self.scale_major:
+ ws = (w * w_ratios[:] * self.scales[:]).reshape([-1])
+ hs = (h * h_ratios[:] * self.scales[:]).reshape([-1])
+ else:
+ ws = (w * self.scales[:] * w_ratios[:]).reshape([-1])
+ hs = (h * self.scales[:] * h_ratios[:]).reshape([-1])
+
+ base_anchors = paddle.stack(
+ [
+ x_ctr - 0.5 * (ws - 1), y_ctr - 0.5 * (hs - 1),
+ x_ctr + 0.5 * (ws - 1), y_ctr + 0.5 * (hs - 1)
+ ],
+ axis=-1)
+ base_anchors = paddle.round(base_anchors)
+ return base_anchors
+
+ def _meshgrid(self, x, y, row_major=True):
+ yy, xx = paddle.meshgrid(y, x)
+ yy = yy.reshape([-1])
+ xx = xx.reshape([-1])
+ if row_major:
+ return xx, yy
+ else:
+ return yy, xx
+
+ def forward(self, featmap_size, stride=16):
+        # featmap_size * stride projects the anchor grid back onto the input image
+
+ feat_h = featmap_size[0]
+ feat_w = featmap_size[1]
+ shift_x = paddle.arange(0, feat_w, 1, 'int32') * stride
+ shift_y = paddle.arange(0, feat_h, 1, 'int32') * stride
+ shift_xx, shift_yy = self._meshgrid(shift_x, shift_y)
+ shifts = paddle.stack([shift_xx, shift_yy, shift_xx, shift_yy], axis=-1)
+
+ all_anchors = self.base_anchors[:, :] + shifts[:, :]
+ all_anchors = all_anchors.reshape([feat_h * feat_w, 4])
+ return all_anchors
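+    # Sketch: with the S2ANet defaults (one scale, one ratio) a 64x64 feature
+    # map at stride 16 yields 4096 axis-aligned (x1, y1, x2, y2) anchors,
+    # which rect2rbox later converts to (xc, yc, w, h, angle) form.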
+
+ def valid_flags(self, featmap_size, valid_size):
+ feat_h, feat_w = featmap_size
+ valid_h, valid_w = valid_size
+ assert valid_h <= feat_h and valid_w <= feat_w
+ valid_x = paddle.zeros([feat_w], dtype='int32')
+ valid_y = paddle.zeros([feat_h], dtype='int32')
+ valid_x[:valid_w] = 1
+ valid_y[:valid_h] = 1
+ valid_xx, valid_yy = self._meshgrid(valid_x, valid_y)
+ valid = valid_xx & valid_yy
+ valid = paddle.reshape(valid, [-1, 1])
+ valid = paddle.expand(valid, [-1, self.num_base_anchors]).reshape([-1])
+ return valid
+
+
+class AlignConv(nn.Layer):
+ def __init__(self, in_channels, out_channels, kernel_size=3, groups=1):
+ super(AlignConv, self).__init__()
+ self.kernel_size = kernel_size
+ self.align_conv = paddle.vision.ops.DeformConv2D(
+ in_channels,
+ out_channels,
+ kernel_size=self.kernel_size,
+ padding=(self.kernel_size - 1) // 2,
+ groups=groups,
+ weight_attr=ParamAttr(initializer=Normal(0, 0.01)),
+ bias_attr=None)
+
+ @paddle.no_grad()
+ def get_offset(self, anchors, featmap_size, stride):
+ """
+ Args:
+ anchors: [M,5] xc,yc,w,h,angle
+ featmap_size: (feat_h, feat_w)
+ stride: 8
+ Returns:
+
+ """
+ anchors = paddle.reshape(anchors, [-1, 5]) # (NA,5)
+ dtype = anchors.dtype
+ feat_h = featmap_size[0]
+ feat_w = featmap_size[1]
+ pad = (self.kernel_size - 1) // 2
+ idx = paddle.arange(-pad, pad + 1, dtype=dtype)
+
+ yy, xx = paddle.meshgrid(idx, idx)
+ xx = paddle.reshape(xx, [-1])
+ yy = paddle.reshape(yy, [-1])
+
+ # get sampling locations of default conv
+ xc = paddle.arange(0, feat_w, dtype=dtype)
+ yc = paddle.arange(0, feat_h, dtype=dtype)
+ yc, xc = paddle.meshgrid(yc, xc)
+
+ xc = paddle.reshape(xc, [-1, 1])
+ yc = paddle.reshape(yc, [-1, 1])
+ x_conv = xc + xx
+ y_conv = yc + yy
+
+ # get sampling locations of anchors
+ # x_ctr, y_ctr, w, h, a = np.unbind(anchors, dim=1)
+ x_ctr = anchors[:, 0]
+ y_ctr = anchors[:, 1]
+ w = anchors[:, 2]
+ h = anchors[:, 3]
+ a = anchors[:, 4]
+
+ x_ctr = paddle.reshape(x_ctr, [-1, 1])
+ y_ctr = paddle.reshape(y_ctr, [-1, 1])
+ w = paddle.reshape(w, [-1, 1])
+ h = paddle.reshape(h, [-1, 1])
+ a = paddle.reshape(a, [-1, 1])
+
+ x_ctr = x_ctr / stride
+ y_ctr = y_ctr / stride
+ w_s = w / stride
+ h_s = h / stride
+ cos, sin = paddle.cos(a), paddle.sin(a)
+ dw, dh = w_s / self.kernel_size, h_s / self.kernel_size
+ x, y = dw * xx, dh * yy
+ xr = cos * x - sin * y
+ yr = sin * x + cos * y
+ x_anchor, y_anchor = xr + x_ctr, yr + y_ctr
+ # get offset filed
+ offset_x = x_anchor - x_conv
+ offset_y = y_anchor - y_conv
+ offset = paddle.stack([offset_y, offset_x], axis=-1)
+ offset = paddle.reshape(
+ offset, [feat_h * feat_w, self.kernel_size * self.kernel_size * 2])
+ offset = paddle.transpose(offset, [1, 0])
+ offset = paddle.reshape(
+ offset,
+ [1, self.kernel_size * self.kernel_size * 2, feat_h, feat_w])
+ return offset
+
+ def forward(self, x, refine_anchors, featmap_size, stride):
+ offset = self.get_offset(refine_anchors, featmap_size, stride)
+ x = F.relu(self.align_conv(x, offset))
+ return x
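+# AlignConv idea (sketch): get_offset() moves the deformable conv's k x k
+# sampling grid onto each refined rotated anchor (scaled by 1/stride and
+# rotated by the anchor angle), so the ODM branch sees features resampled in
+# anchor-aligned coordinates.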
+
+
+@register
+class S2ANetHead(nn.Layer):
+ """
+ S2Anet head
+ Args:
+ stacked_convs (int): number of stacked_convs
+ feat_in (int): input channels of feat
+ feat_out (int): output channels of feat
+ num_classes (int): num_classes
+ anchor_strides (list): stride of anchors
+ anchor_scales (list): scale of anchors
+ anchor_ratios (list): ratios of anchors
+ target_means (list): target_means
+ target_stds (list): target_stds
+ align_conv_type (str): align_conv_type ['Conv', 'AlignConv']
+ align_conv_size (int): kernel size of align_conv
+ use_sigmoid_cls (bool): use sigmoid_cls or not
+ reg_loss_weight (list): loss weight for regression
+ """
+ __shared__ = ['num_classes']
+ __inject__ = ['anchor_assign']
+
+ def __init__(self,
+ stacked_convs=2,
+ feat_in=256,
+ feat_out=256,
+ num_classes=15,
+ anchor_strides=[8, 16, 32, 64, 128],
+ anchor_scales=[4],
+ anchor_ratios=[1.0],
+ target_means=0.0,
+ target_stds=1.0,
+ align_conv_type='AlignConv',
+ align_conv_size=3,
+ use_sigmoid_cls=True,
+ anchor_assign=RBoxAssigner().__dict__,
+ reg_loss_weight=[1.0, 1.0, 1.0, 1.0, 1.1],
+ cls_loss_weight=[1.1, 1.05],
+ reg_loss_type='l1'):
+ super(S2ANetHead, self).__init__()
+ self.stacked_convs = stacked_convs
+ self.feat_in = feat_in
+ self.feat_out = feat_out
+ self.anchor_list = None
+ self.anchor_scales = anchor_scales
+ self.anchor_ratios = anchor_ratios
+ self.anchor_strides = anchor_strides
+ self.anchor_strides = paddle.to_tensor(anchor_strides)
+ self.anchor_base_sizes = list(anchor_strides)
+ self.means = paddle.ones(shape=[5]) * target_means
+ self.stds = paddle.ones(shape=[5]) * target_stds
+ assert align_conv_type in ['AlignConv', 'Conv', 'DCN']
+ self.align_conv_type = align_conv_type
+ self.align_conv_size = align_conv_size
+
+ self.use_sigmoid_cls = use_sigmoid_cls
+ self.cls_out_channels = num_classes if self.use_sigmoid_cls else 1
+ self.sampling = False
+ self.anchor_assign = anchor_assign
+ self.reg_loss_weight = reg_loss_weight
+ self.cls_loss_weight = cls_loss_weight
+ self.alpha = 1.0
+ self.beta = 1.0
+ self.reg_loss_type = reg_loss_type
+ self.s2anet_head_out = None
+
+ # anchor
+ self.anchor_generators = []
+ for anchor_base in self.anchor_base_sizes:
+ self.anchor_generators.append(
+ S2ANetAnchorGenerator(anchor_base, anchor_scales,
+ anchor_ratios))
+
+ self.anchor_generators = nn.LayerList(self.anchor_generators)
+ self.fam_cls_convs = nn.Sequential()
+ self.fam_reg_convs = nn.Sequential()
+
+ for i in range(self.stacked_convs):
+ chan_in = self.feat_in if i == 0 else self.feat_out
+
+ self.fam_cls_convs.add_sublayer(
+ 'fam_cls_conv_{}'.format(i),
+ nn.Conv2D(
+ in_channels=chan_in,
+ out_channels=self.feat_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0))))
+
+ self.fam_cls_convs.add_sublayer('fam_cls_conv_{}_act'.format(i),
+ nn.ReLU())
+
+ self.fam_reg_convs.add_sublayer(
+ 'fam_reg_conv_{}'.format(i),
+ nn.Conv2D(
+ in_channels=chan_in,
+ out_channels=self.feat_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0))))
+
+ self.fam_reg_convs.add_sublayer('fam_reg_conv_{}_act'.format(i),
+ nn.ReLU())
+
+ self.fam_reg = nn.Conv2D(
+ self.feat_out,
+ 5,
+ 1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0)))
+ prior_prob = 0.01
+ bias_init = float(-np.log((1 - prior_prob) / prior_prob))
+ self.fam_cls = nn.Conv2D(
+ self.feat_out,
+ self.cls_out_channels,
+ 1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(bias_init)))
+
+ if self.align_conv_type == "AlignConv":
+ self.align_conv = AlignConv(self.feat_out, self.feat_out,
+ self.align_conv_size)
+ elif self.align_conv_type == "Conv":
+ self.align_conv = nn.Conv2D(
+ self.feat_out,
+ self.feat_out,
+ self.align_conv_size,
+ padding=(self.align_conv_size - 1) // 2,
+ bias_attr=ParamAttr(initializer=Constant(0)))
+
+ elif self.align_conv_type == "DCN":
+ self.align_conv_offset = nn.Conv2D(
+ self.feat_out,
+ 2 * self.align_conv_size**2,
+ 1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0)))
+
+ self.align_conv = paddle.vision.ops.DeformConv2D(
+ self.feat_out,
+ self.feat_out,
+ self.align_conv_size,
+ padding=(self.align_conv_size - 1) // 2,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=False)
+
+ self.or_conv = nn.Conv2D(
+ self.feat_out,
+ self.feat_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0)))
+
+ # ODM
+ self.odm_cls_convs = nn.Sequential()
+ self.odm_reg_convs = nn.Sequential()
+
+ for i in range(self.stacked_convs):
+ ch_in = self.feat_out
+ # ch_in = int(self.feat_out / 8) if i == 0 else self.feat_out
+
+ self.odm_cls_convs.add_sublayer(
+ 'odm_cls_conv_{}'.format(i),
+ nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=self.feat_out,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0))))
+
+ self.odm_cls_convs.add_sublayer('odm_cls_conv_{}_act'.format(i),
+ nn.ReLU())
+
+ self.odm_reg_convs.add_sublayer(
+ 'odm_reg_conv_{}'.format(i),
+ nn.Conv2D(
+ in_channels=self.feat_out,
+ out_channels=self.feat_out,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0))))
+
+ self.odm_reg_convs.add_sublayer('odm_reg_conv_{}_act'.format(i),
+ nn.ReLU())
+
+ self.odm_cls = nn.Conv2D(
+ self.feat_out,
+ self.cls_out_channels,
+ 3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(bias_init)))
+ self.odm_reg = nn.Conv2D(
+ self.feat_out,
+ 5,
+ 3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0.0, 0.01)),
+ bias_attr=ParamAttr(initializer=Constant(0)))
+
+ self.featmap_sizes = []
+ self.base_anchors_list = []
+ self.refine_anchor_list = []
+
+ def forward(self, feats):
+ fam_reg_branch_list = []
+ fam_cls_branch_list = []
+
+ odm_reg_branch_list = []
+ odm_cls_branch_list = []
+
+ self.featmap_sizes_list = []
+ self.base_anchors_list = []
+ self.refine_anchor_list = []
+
+ for feat_idx in range(len(feats)):
+ feat = feats[feat_idx]
+ fam_cls_feat = self.fam_cls_convs(feat)
+
+ fam_cls = self.fam_cls(fam_cls_feat)
+ # [N, CLS, H, W] --> [N, H, W, CLS]
+ fam_cls = fam_cls.transpose([0, 2, 3, 1])
+ fam_cls_reshape = paddle.reshape(
+ fam_cls, [fam_cls.shape[0], -1, self.cls_out_channels])
+ fam_cls_branch_list.append(fam_cls_reshape)
+
+ fam_reg_feat = self.fam_reg_convs(feat)
+
+ fam_reg = self.fam_reg(fam_reg_feat)
+ # [N, 5, H, W] --> [N, H, W, 5]
+ fam_reg = fam_reg.transpose([0, 2, 3, 1])
+ fam_reg_reshape = paddle.reshape(fam_reg, [fam_reg.shape[0], -1, 5])
+ fam_reg_branch_list.append(fam_reg_reshape)
+
+ # prepare anchor
+ featmap_size = (paddle.shape(feat)[2], paddle.shape(feat)[3])
+ self.featmap_sizes_list.append(featmap_size)
+ init_anchors = self.anchor_generators[feat_idx](
+ featmap_size, self.anchor_strides[feat_idx])
+
+ init_anchors = paddle.to_tensor(init_anchors, dtype='float32')
+ NA = featmap_size[0] * featmap_size[1]
+ init_anchors = paddle.reshape(init_anchors, [NA, 4])
+ init_anchors = self.rect2rbox(init_anchors)
+ self.base_anchors_list.append(init_anchors)
+
+ if self.training:
+ refine_anchor = self.bbox_decode(fam_reg.detach(), init_anchors)
+ else:
+ refine_anchor = self.bbox_decode(fam_reg, init_anchors)
+
+ self.refine_anchor_list.append(refine_anchor)
+
+ if self.align_conv_type == 'AlignConv':
+ align_feat = self.align_conv(feat,
+ refine_anchor.clone(),
+ featmap_size,
+ self.anchor_strides[feat_idx])
+ elif self.align_conv_type == 'DCN':
+ align_offset = self.align_conv_offset(feat)
+ align_feat = self.align_conv(feat, align_offset)
+ elif self.align_conv_type == 'Conv':
+ align_feat = self.align_conv(feat)
+
+ or_feat = self.or_conv(align_feat)
+ odm_reg_feat = or_feat
+ odm_cls_feat = or_feat
+
+ odm_reg_feat = self.odm_reg_convs(odm_reg_feat)
+ odm_cls_feat = self.odm_cls_convs(odm_cls_feat)
+
+ odm_cls_score = self.odm_cls(odm_cls_feat)
+ # [N, CLS, H, W] --> [N, H, W, CLS]
+ odm_cls_score = odm_cls_score.transpose([0, 2, 3, 1])
+ odm_cls_score_shape = odm_cls_score.shape
+ odm_cls_score_reshape = paddle.reshape(odm_cls_score, [
+ odm_cls_score_shape[0], odm_cls_score_shape[1] *
+ odm_cls_score_shape[2], self.cls_out_channels
+ ])
+
+ odm_cls_branch_list.append(odm_cls_score_reshape)
+
+ odm_bbox_pred = self.odm_reg(odm_reg_feat)
+ # [N, 5, H, W] --> [N, H, W, 5]
+ odm_bbox_pred = odm_bbox_pred.transpose([0, 2, 3, 1])
+ odm_bbox_pred_reshape = paddle.reshape(odm_bbox_pred, [-1, 5])
+ odm_bbox_pred_reshape = paddle.unsqueeze(
+ odm_bbox_pred_reshape, axis=0)
+ odm_reg_branch_list.append(odm_bbox_pred_reshape)
+
+ self.s2anet_head_out = (fam_cls_branch_list, fam_reg_branch_list,
+ odm_cls_branch_list, odm_reg_branch_list)
+ return self.s2anet_head_out
+
+ def get_prediction(self, nms_pre=2000):
+ refine_anchors = self.refine_anchor_list
+ fam_cls_branch_list = self.s2anet_head_out[0]
+ fam_reg_branch_list = self.s2anet_head_out[1]
+ odm_cls_branch_list = self.s2anet_head_out[2]
+ odm_reg_branch_list = self.s2anet_head_out[3]
+ pred_scores, pred_bboxes = self.get_bboxes(
+ odm_cls_branch_list, odm_reg_branch_list, refine_anchors, nms_pre,
+ self.cls_out_channels, self.use_sigmoid_cls)
+ return pred_scores, pred_bboxes
+
+ def smooth_l1_loss(self, pred, label, delta=1.0 / 9.0):
+ """
+ Args:
+            pred: predicted regression values, same shape as label
+            label: regression targets
+            delta: transition point between the quadratic and linear regions
+        Returns: elementwise smooth-L1 loss
+ """
+ assert pred.shape == label.shape and label.numel() > 0
+ assert delta > 0
+ diff = paddle.abs(pred - label)
+ loss = paddle.where(diff < delta, 0.5 * diff * diff / delta,
+ diff - 0.5 * delta)
+ return loss
+
+ def get_fam_loss(self, fam_target, s2anet_head_out, reg_loss_type='gwd'):
+ (labels, label_weights, bbox_targets, bbox_weights, bbox_gt_bboxes,
+ pos_inds, neg_inds) = fam_target
+ fam_cls_branch_list, fam_reg_branch_list, odm_cls_branch_list, odm_reg_branch_list = s2anet_head_out
+
+ fam_cls_losses = []
+ fam_bbox_losses = []
+ st_idx = 0
+ num_total_samples = len(pos_inds) + len(
+ neg_inds) if self.sampling else len(pos_inds)
+ num_total_samples = max(1, num_total_samples)
+
+ for idx, feat_size in enumerate(self.featmap_sizes_list):
+ feat_anchor_num = feat_size[0] * feat_size[1]
+
+ # step1: get data
+ feat_labels = labels[st_idx:st_idx + feat_anchor_num]
+ feat_label_weights = label_weights[st_idx:st_idx + feat_anchor_num]
+
+ feat_bbox_targets = bbox_targets[st_idx:st_idx + feat_anchor_num, :]
+ feat_bbox_weights = bbox_weights[st_idx:st_idx + feat_anchor_num, :]
+
+ # step2: calc cls loss
+ feat_labels = feat_labels.reshape(-1)
+ feat_label_weights = feat_label_weights.reshape(-1)
+
+ fam_cls_score = fam_cls_branch_list[idx]
+ fam_cls_score = paddle.squeeze(fam_cls_score, axis=0)
+ fam_cls_score1 = fam_cls_score
+
+ feat_labels = paddle.to_tensor(feat_labels)
+ feat_labels_one_hot = paddle.nn.functional.one_hot(
+ feat_labels, self.cls_out_channels + 1)
+ feat_labels_one_hot = feat_labels_one_hot[:, 1:]
+ feat_labels_one_hot.stop_gradient = True
+
+ num_total_samples = paddle.to_tensor(
+ num_total_samples, dtype='float32', stop_gradient=True)
+
+ fam_cls = F.sigmoid_focal_loss(
+ fam_cls_score1,
+ feat_labels_one_hot,
+ normalizer=num_total_samples,
+ reduction='none')
+
+ feat_label_weights = feat_label_weights.reshape(
+ feat_label_weights.shape[0], 1)
+ feat_label_weights = np.repeat(
+ feat_label_weights, self.cls_out_channels, axis=1)
+ feat_label_weights = paddle.to_tensor(
+ feat_label_weights, stop_gradient=True)
+
+ fam_cls = fam_cls * feat_label_weights
+ fam_cls_total = paddle.sum(fam_cls)
+ fam_cls_losses.append(fam_cls_total)
+
+ # step3: regression loss
+ feat_bbox_targets = paddle.to_tensor(
+ feat_bbox_targets, dtype='float32', stop_gradient=True)
+ feat_bbox_targets = paddle.reshape(feat_bbox_targets, [-1, 5])
+
+ fam_bbox_pred = fam_reg_branch_list[idx]
+ fam_bbox_pred = paddle.squeeze(fam_bbox_pred, axis=0)
+ fam_bbox_pred = paddle.reshape(fam_bbox_pred, [-1, 5])
+ fam_bbox = self.smooth_l1_loss(fam_bbox_pred, feat_bbox_targets)
+ loss_weight = paddle.to_tensor(
+ self.reg_loss_weight, dtype='float32', stop_gradient=True)
+ fam_bbox = paddle.multiply(fam_bbox, loss_weight)
+ feat_bbox_weights = paddle.to_tensor(
+ feat_bbox_weights, stop_gradient=True)
+
+ if reg_loss_type == 'l1':
+ fam_bbox = fam_bbox * feat_bbox_weights
+ fam_bbox_total = paddle.sum(fam_bbox) / num_total_samples
+ elif reg_loss_type == 'iou' or reg_loss_type == 'gwd':
+ fam_bbox = paddle.sum(fam_bbox, axis=-1)
+ feat_bbox_weights = paddle.sum(feat_bbox_weights, axis=-1)
+ try:
+ from rbox_iou_ops import rbox_iou
+ except Exception as e:
+ print("import custom_ops error, try install rbox_iou_ops " \
+ "following ppdet/ext_op/README.md", e)
+ sys.stdout.flush()
+ sys.exit(-1)
+ # calc iou
+ fam_bbox_decode = self.delta2rbox(self.base_anchors_list[idx],
+ fam_bbox_pred)
+ bbox_gt_bboxes = paddle.to_tensor(
+ bbox_gt_bboxes,
+ dtype=fam_bbox_decode.dtype,
+ place=fam_bbox_decode.place)
+ bbox_gt_bboxes.stop_gradient = True
+ iou = rbox_iou(fam_bbox_decode, bbox_gt_bboxes)
+ iou = paddle.diag(iou)
+
+ if reg_loss_type == 'gwd':
+ bbox_gt_bboxes_level = bbox_gt_bboxes[st_idx:st_idx +
+ feat_anchor_num, :]
+ fam_bbox_total = self.gwd_loss(fam_bbox_decode,
+ bbox_gt_bboxes_level)
+ fam_bbox_total = fam_bbox_total * feat_bbox_weights
+ fam_bbox_total = paddle.sum(
+ fam_bbox_total) / num_total_samples
+
+ fam_bbox_losses.append(fam_bbox_total)
+ st_idx += feat_anchor_num
+
+ fam_cls_loss = paddle.add_n(fam_cls_losses)
+ fam_cls_loss_weight = paddle.to_tensor(
+ self.cls_loss_weight[0], dtype='float32', stop_gradient=True)
+ fam_cls_loss = fam_cls_loss * fam_cls_loss_weight
+ fam_reg_loss = paddle.add_n(fam_bbox_losses)
+ return fam_cls_loss, fam_reg_loss
+
+ def get_odm_loss(self, odm_target, s2anet_head_out, reg_loss_type='gwd'):
+ (labels, label_weights, bbox_targets, bbox_weights, bbox_gt_bboxes,
+ pos_inds, neg_inds) = odm_target
+ fam_cls_branch_list, fam_reg_branch_list, odm_cls_branch_list, odm_reg_branch_list = s2anet_head_out
+
+ odm_cls_losses = []
+ odm_bbox_losses = []
+ st_idx = 0
+ num_total_samples = len(pos_inds) + len(
+ neg_inds) if self.sampling else len(pos_inds)
+ num_total_samples = max(1, num_total_samples)
+
+ for idx, feat_size in enumerate(self.featmap_sizes_list):
+ feat_anchor_num = feat_size[0] * feat_size[1]
+
+ # step1: get data
+ feat_labels = labels[st_idx:st_idx + feat_anchor_num]
+ feat_label_weights = label_weights[st_idx:st_idx + feat_anchor_num]
+
+ feat_bbox_targets = bbox_targets[st_idx:st_idx + feat_anchor_num, :]
+ feat_bbox_weights = bbox_weights[st_idx:st_idx + feat_anchor_num, :]
+
+ # step2: calc cls loss
+ feat_labels = feat_labels.reshape(-1)
+ feat_label_weights = feat_label_weights.reshape(-1)
+
+ odm_cls_score = odm_cls_branch_list[idx]
+ odm_cls_score = paddle.squeeze(odm_cls_score, axis=0)
+ odm_cls_score1 = odm_cls_score
+
+ feat_labels = paddle.to_tensor(feat_labels)
+ feat_labels_one_hot = paddle.nn.functional.one_hot(
+ feat_labels, self.cls_out_channels + 1)
+ feat_labels_one_hot = feat_labels_one_hot[:, 1:]
+ feat_labels_one_hot.stop_gradient = True
+
+ num_total_samples = paddle.to_tensor(
+ num_total_samples, dtype='float32', stop_gradient=True)
+ odm_cls = F.sigmoid_focal_loss(
+ odm_cls_score1,
+ feat_labels_one_hot,
+ normalizer=num_total_samples,
+ reduction='none')
+
+ feat_label_weights = feat_label_weights.reshape(
+ feat_label_weights.shape[0], 1)
+ feat_label_weights = np.repeat(
+ feat_label_weights, self.cls_out_channels, axis=1)
+ feat_label_weights = paddle.to_tensor(feat_label_weights)
+ feat_label_weights.stop_gradient = True
+
+ odm_cls = odm_cls * feat_label_weights
+ odm_cls_total = paddle.sum(odm_cls)
+ odm_cls_losses.append(odm_cls_total)
+
+ # # step3: regression loss
+ feat_bbox_targets = paddle.to_tensor(
+ feat_bbox_targets, dtype='float32')
+ feat_bbox_targets = paddle.reshape(feat_bbox_targets, [-1, 5])
+ feat_bbox_targets.stop_gradient = True
+
+ odm_bbox_pred = odm_reg_branch_list[idx]
+ odm_bbox_pred = paddle.squeeze(odm_bbox_pred, axis=0)
+ odm_bbox_pred = paddle.reshape(odm_bbox_pred, [-1, 5])
+ odm_bbox = self.smooth_l1_loss(odm_bbox_pred, feat_bbox_targets)
+
+ loss_weight = paddle.to_tensor(
+ self.reg_loss_weight, dtype='float32', stop_gradient=True)
+ odm_bbox = paddle.multiply(odm_bbox, loss_weight)
+ feat_bbox_weights = paddle.to_tensor(
+ feat_bbox_weights, stop_gradient=True)
+
+ if reg_loss_type == 'l1':
+ odm_bbox = odm_bbox * feat_bbox_weights
+ odm_bbox_total = paddle.sum(odm_bbox) / num_total_samples
+ elif reg_loss_type == 'iou' or reg_loss_type == 'gwd':
+ odm_bbox = paddle.sum(odm_bbox, axis=-1)
+ feat_bbox_weights = paddle.sum(feat_bbox_weights, axis=-1)
+ try:
+ from rbox_iou_ops import rbox_iou
+ except Exception as e:
+ print("import custom_ops error, try install rbox_iou_ops " \
+ "following ppdet/ext_op/README.md", e)
+ sys.stdout.flush()
+ sys.exit(-1)
+ # calc iou
+ odm_bbox_decode = self.delta2rbox(self.refine_anchor_list[idx],
+ odm_bbox_pred)
+ bbox_gt_bboxes = paddle.to_tensor(
+ bbox_gt_bboxes,
+ dtype=odm_bbox_decode.dtype,
+ place=odm_bbox_decode.place)
+ bbox_gt_bboxes.stop_gradient = True
+ iou = rbox_iou(odm_bbox_decode, bbox_gt_bboxes)
+ iou = paddle.diag(iou)
+
+ if reg_loss_type == 'gwd':
+ bbox_gt_bboxes_level = bbox_gt_bboxes[st_idx:st_idx +
+ feat_anchor_num, :]
+ odm_bbox_total = self.gwd_loss(odm_bbox_decode,
+ bbox_gt_bboxes_level)
+ odm_bbox_total = odm_bbox_total * feat_bbox_weights
+ odm_bbox_total = paddle.sum(
+ odm_bbox_total) / num_total_samples
+
+ odm_bbox_losses.append(odm_bbox_total)
+ st_idx += feat_anchor_num
+
+ odm_cls_loss = paddle.add_n(odm_cls_losses)
+ odm_cls_loss_weight = paddle.to_tensor(
+ self.cls_loss_weight[1], dtype='float32', stop_gradient=True)
+ odm_cls_loss = odm_cls_loss * odm_cls_loss_weight
+ odm_reg_loss = paddle.add_n(odm_bbox_losses)
+ return odm_cls_loss, odm_reg_loss
+
+ def get_loss(self, inputs):
+ # inputs: im_id image im_shape scale_factor gt_bbox gt_class is_crowd
+
+ # compute loss
+ fam_cls_loss_lst = []
+ fam_reg_loss_lst = []
+ odm_cls_loss_lst = []
+ odm_reg_loss_lst = []
+
+ im_shape = inputs['im_shape']
+ for im_id in range(im_shape.shape[0]):
+ np_im_shape = inputs['im_shape'][im_id].numpy()
+ np_scale_factor = inputs['scale_factor'][im_id].numpy()
+ # data_format: (xc, yc, w, h, theta)
+ gt_bboxes = inputs['gt_rbox'][im_id].numpy()
+ gt_labels = inputs['gt_class'][im_id].numpy()
+ is_crowd = inputs['is_crowd'][im_id].numpy()
+ gt_labels = gt_labels + 1
+
+ # featmap_sizes
+ anchors_list_all = np.concatenate(self.base_anchors_list)
+
+ # get im_feat
+ fam_cls_feats_list = [e[im_id] for e in self.s2anet_head_out[0]]
+ fam_reg_feats_list = [e[im_id] for e in self.s2anet_head_out[1]]
+ odm_cls_feats_list = [e[im_id] for e in self.s2anet_head_out[2]]
+ odm_reg_feats_list = [e[im_id] for e in self.s2anet_head_out[3]]
+ im_s2anet_head_out = (fam_cls_feats_list, fam_reg_feats_list,
+ odm_cls_feats_list, odm_reg_feats_list)
+
+ # FAM
+ im_fam_target = self.anchor_assign(anchors_list_all, gt_bboxes,
+ gt_labels, is_crowd)
+ if im_fam_target is not None:
+ im_fam_cls_loss, im_fam_reg_loss = self.get_fam_loss(
+ im_fam_target, im_s2anet_head_out, self.reg_loss_type)
+ fam_cls_loss_lst.append(im_fam_cls_loss)
+ fam_reg_loss_lst.append(im_fam_reg_loss)
+
+ # ODM
+ np_refine_anchors_list = paddle.concat(
+ self.refine_anchor_list).numpy()
+ np_refine_anchors_list = np.concatenate(np_refine_anchors_list)
+ np_refine_anchors_list = np_refine_anchors_list.reshape(-1, 5)
+ im_odm_target = self.anchor_assign(np_refine_anchors_list,
+ gt_bboxes, gt_labels, is_crowd)
+
+ if im_odm_target is not None:
+ im_odm_cls_loss, im_odm_reg_loss = self.get_odm_loss(
+ im_odm_target, im_s2anet_head_out, self.reg_loss_type)
+ odm_cls_loss_lst.append(im_odm_cls_loss)
+ odm_reg_loss_lst.append(im_odm_reg_loss)
+ fam_cls_loss = paddle.add_n(fam_cls_loss_lst)
+ fam_reg_loss = paddle.add_n(fam_reg_loss_lst)
+ odm_cls_loss = paddle.add_n(odm_cls_loss_lst)
+ odm_reg_loss = paddle.add_n(odm_reg_loss_lst)
+ return {
+ 'fam_cls_loss': fam_cls_loss,
+ 'fam_reg_loss': fam_reg_loss,
+ 'odm_cls_loss': odm_cls_loss,
+ 'odm_reg_loss': odm_reg_loss
+ }
+
+ def get_bboxes(self, cls_score_list, bbox_pred_list, mlvl_anchors, nms_pre,
+ cls_out_channels, use_sigmoid_cls):
+ assert len(cls_score_list) == len(bbox_pred_list) == len(mlvl_anchors)
+
+ mlvl_bboxes = []
+ mlvl_scores = []
+
+ for cls_score, bbox_pred, anchors in zip(cls_score_list, bbox_pred_list,
+ mlvl_anchors):
+ cls_score = paddle.reshape(cls_score, [-1, cls_out_channels])
+ if use_sigmoid_cls:
+ scores = F.sigmoid(cls_score)
+ else:
+ scores = F.softmax(cls_score, axis=-1)
+
+ # bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 5)
+ bbox_pred = paddle.transpose(bbox_pred, [1, 2, 0])
+ bbox_pred = paddle.reshape(bbox_pred, [-1, 5])
+ anchors = paddle.reshape(anchors, [-1, 5])
+
+ if scores.shape[0] > nms_pre:
+ # Get maximum scores for foreground classes.
+ if use_sigmoid_cls:
+ max_scores = paddle.max(scores, axis=1)
+ else:
+ max_scores = paddle.max(scores[:, 1:], axis=1)
+
+ topk_val, topk_inds = paddle.topk(max_scores, nms_pre)
+ anchors = paddle.gather(anchors, topk_inds)
+ bbox_pred = paddle.gather(bbox_pred, topk_inds)
+ scores = paddle.gather(scores, topk_inds)
+
+ bbox_delta = paddle.reshape(bbox_pred, [-1, 5])
+ bboxes = self.delta2rbox(anchors, bbox_delta)
+ mlvl_bboxes.append(bboxes)
+ mlvl_scores.append(scores)
+
+
+ mlvl_bboxes = paddle.concat(mlvl_bboxes, axis=0)
+ mlvl_scores = paddle.concat(mlvl_scores)
+
+ return mlvl_scores, mlvl_bboxes
+
+ def rect2rbox(self, bboxes):
+ """
+ :param bboxes: shape (n, 4) (xmin, ymin, xmax, ymax)
+ :return: dbboxes: shape (n, 5) (x_ctr, y_ctr, w, h, angle)
+ """
+ bboxes = paddle.reshape(bboxes, [-1, 4])
+ num_boxes = paddle.shape(bboxes)[0]
+ x_ctr = (bboxes[:, 2] + bboxes[:, 0]) / 2.0
+ y_ctr = (bboxes[:, 3] + bboxes[:, 1]) / 2.0
+ edges1 = paddle.abs(bboxes[:, 2] - bboxes[:, 0])
+ edges2 = paddle.abs(bboxes[:, 3] - bboxes[:, 1])
+
+ rbox_w = paddle.maximum(edges1, edges2)
+ rbox_h = paddle.minimum(edges1, edges2)
+
+ # set angle
+ inds = edges1 < edges2
+ inds = paddle.cast(inds, 'int32')
+ rboxes_angle = inds * np.pi / 2.0
+
+ rboxes = paddle.stack(
+ (x_ctr, y_ctr, rbox_w, rbox_h, rboxes_angle), axis=1)
+ return rboxes
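+    # Worked example: (xmin, ymin, xmax, ymax) = (0, 0, 100, 40) becomes
+    # (50, 20, 100, 40, 0), while the tall box (0, 0, 40, 100) becomes
+    # (20, 50, 100, 40, pi/2): the long edge is always w and the angle
+    # carries the orientation.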
+
+ # deltas to rbox
+ def delta2rbox(self, rrois, deltas, wh_ratio_clip=1e-6):
+ """
+ :param rrois: (cx, cy, w, h, theta)
+ :param deltas: (dx, dy, dw, dh, dtheta)
+ :param means: means of anchor
+ :param stds: stds of anchor
+ :param wh_ratio_clip: clip threshold of wh_ratio
+ :return:
+ """
+ deltas = paddle.reshape(deltas, [-1, 5])
+ rrois = paddle.reshape(rrois, [-1, 5])
+        # computed with paddle.add/paddle.multiply rather than
+        # "deltas * self.stds + self.means" to work around a
+        # dynamic-to-static (dy2st) conversion bug
+ denorm_deltas = paddle.add(
+ paddle.multiply(deltas, self.stds), self.means)
+
+ dx = denorm_deltas[:, 0]
+ dy = denorm_deltas[:, 1]
+ dw = denorm_deltas[:, 2]
+ dh = denorm_deltas[:, 3]
+ dangle = denorm_deltas[:, 4]
+ max_ratio = np.abs(np.log(wh_ratio_clip))
+ dw = paddle.clip(dw, min=-max_ratio, max=max_ratio)
+ dh = paddle.clip(dh, min=-max_ratio, max=max_ratio)
+
+ rroi_x = rrois[:, 0]
+ rroi_y = rrois[:, 1]
+ rroi_w = rrois[:, 2]
+ rroi_h = rrois[:, 3]
+ rroi_angle = rrois[:, 4]
+
+ gx = dx * rroi_w * paddle.cos(rroi_angle) - dy * rroi_h * paddle.sin(
+ rroi_angle) + rroi_x
+ gy = dx * rroi_w * paddle.sin(rroi_angle) + dy * rroi_h * paddle.cos(
+ rroi_angle) + rroi_y
+ gw = rroi_w * dw.exp()
+ gh = rroi_h * dh.exp()
+ ga = np.pi * dangle + rroi_angle
+ ga = (ga + np.pi / 4) % np.pi - np.pi / 4
+ ga = paddle.to_tensor(ga)
+ gw = paddle.to_tensor(gw, dtype='float32')
+ gh = paddle.to_tensor(gh, dtype='float32')
+ bboxes = paddle.stack([gx, gy, gw, gh, ga], axis=-1)
+ return bboxes
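+    # Decode sketch: zero deltas return the anchors unchanged; with
+    # rroi = (50, 50, 100, 40, 0) and deltas = (0, 0, 0, 0, 0.25) only the
+    # angle moves, to 0.25 * pi, and angles are wrapped into [-pi/4, 3*pi/4).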
+
+ def bbox_decode(self, bbox_preds, anchors):
+ """decode bbox from deltas
+ Args:
+ bbox_preds: [N,H,W,5]
+ anchors: [H*W,5]
+ return:
+ bboxes: [N,H,W,5]
+ """
+ num_imgs, H, W, _ = bbox_preds.shape
+ bbox_delta = paddle.reshape(bbox_preds, [-1, 5])
+ bboxes = self.delta2rbox(anchors, bbox_delta)
+ return bboxes
+
+ def trace(self, A):
+ tr = paddle.diagonal(A, axis1=-2, axis2=-1)
+ tr = paddle.sum(tr, axis=-1)
+ return tr
+
+ def sqrt_newton_schulz_autograd(self, A, numIters):
+ A_shape = A.shape
+ batchSize = A_shape[0]
+ dim = A_shape[1]
+
+ normA = A * A
+ normA = paddle.sum(normA, axis=1)
+ normA = paddle.sum(normA, axis=1)
+ normA = paddle.sqrt(normA)
+ normA1 = normA.reshape([batchSize, 1, 1])
+ Y = paddle.divide(A, paddle.expand_as(normA1, A))
+ I = paddle.eye(dim, dim).reshape([1, dim, dim])
+ l0 = []
+ for i in range(batchSize):
+ l0.append(I)
+ I = paddle.concat(l0, axis=0)
+ I.stop_gradient = False
+ Z = paddle.eye(dim, dim).reshape([1, dim, dim])
+ l1 = []
+ for i in range(batchSize):
+ l1.append(Z)
+ Z = paddle.concat(l1, axis=0)
+ Z.stop_gradient = False
+
+ for i in range(numIters):
+ T = 0.5 * (3.0 * I - Z.bmm(Y))
+ Y = Y.bmm(T)
+ Z = T.bmm(Z)
+ sA = Y * paddle.sqrt(normA1).reshape([batchSize, 1, 1])
+ sA = paddle.expand_as(sA, A)
+ return sA
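+    # Newton-Schulz sketch: with T_k = (3I - Z_k Y_k) / 2, the updates
+    # Y_{k+1} = Y_k T_k and Z_{k+1} = T_k Z_k converge to A^(1/2) and
+    # A^(-1/2) for the normalized input, giving a differentiable matrix
+    # square root without an eigendecomposition.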
+
+    def wasserstein_distance_sigma(self, sigma1, sigma2):
+ wasserstein_distance_item2 = paddle.matmul(
+ sigma1, sigma1) + paddle.matmul(
+ sigma2, sigma2) - 2 * self.sqrt_newton_schulz_autograd(
+ paddle.matmul(
+ paddle.matmul(sigma1, paddle.matmul(sigma2, sigma2)),
+ sigma1), 10)
+ wasserstein_distance_item2 = self.trace(wasserstein_distance_item2)
+
+ return wasserstein_distance_item2
+
+ def xywhr2xyrs(self, xywhr):
+ xywhr = paddle.reshape(xywhr, [-1, 5])
+ xy = xywhr[:, :2]
+ wh = paddle.clip(xywhr[:, 2:4], min=1e-7, max=1e7)
+ r = xywhr[:, 4]
+ cos_r = paddle.cos(r)
+ sin_r = paddle.sin(r)
+ R = paddle.stack(
+ (cos_r, -sin_r, sin_r, cos_r), axis=-1).reshape([-1, 2, 2])
+ S = 0.5 * paddle.nn.functional.diag_embed(wh)
+ return xy, R, S
+
+ def gwd_loss(self,
+ pred,
+ target,
+ fun='log',
+ tau=1.0,
+ alpha=1.0,
+ normalize=False):
+
+ xy_p, R_p, S_p = self.xywhr2xyrs(pred)
+ xy_t, R_t, S_t = self.xywhr2xyrs(target)
+
+ xy_distance = (xy_p - xy_t).square().sum(axis=-1)
+
+ Sigma_p = R_p.matmul(S_p.square()).matmul(R_p.transpose([0, 2, 1]))
+ Sigma_t = R_t.matmul(S_t.square()).matmul(R_t.transpose([0, 2, 1]))
+
+ whr_distance = paddle.diagonal(
+ S_p, axis1=-2, axis2=-1).square().sum(axis=-1)
+
+ whr_distance = whr_distance + paddle.diagonal(
+ S_t, axis1=-2, axis2=-1).square().sum(axis=-1)
+ _t = Sigma_p.matmul(Sigma_t)
+
+ _t_tr = paddle.diagonal(_t, axis1=-2, axis2=-1).sum(axis=-1)
+ _t_det_sqrt = paddle.diagonal(S_p, axis1=-2, axis2=-1).prod(axis=-1)
+ _t_det_sqrt = _t_det_sqrt * paddle.diagonal(
+ S_t, axis1=-2, axis2=-1).prod(axis=-1)
+ whr_distance = whr_distance + (-2) * (
+ (_t_tr + 2 * _t_det_sqrt).clip(0).sqrt())
+
+ distance = (xy_distance + alpha * alpha * whr_distance).clip(0)
+
+ if normalize:
+ wh_p = pred[..., 2:4].clip(min=1e-7, max=1e7)
+ wh_t = target[..., 2:4].clip(min=1e-7, max=1e7)
+            scale = ((wh_p.log() + wh_t.log()).sum(axis=-1) / 4).exp()
+ distance = distance / scale
+
+ if fun == 'log':
+ distance = paddle.log1p(distance)
+
+ if tau >= 1.0:
+ return 1 - 1 / (tau + distance)
+
+ return distance
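+# GWD sketch: a rotated box (xc, yc, w, h, r) is viewed as the Gaussian
+# N(xy, R S^2 R^T) built in xywhr2xyrs(); the squared Wasserstein distance is
+# the center term ||xy_p - xy_t||^2 plus the covariance (whr) term above, and
+# the log1p and 1 - 1/(tau + distance) transforms keep the loss bounded.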
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/simota_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/simota_head.py
new file mode 100644
index 000000000..a1485f390
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/simota_head.py
@@ -0,0 +1,498 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/yolox_head.py
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+from functools import partial
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Normal, Constant
+
+from ppdet.core.workspace import register
+
+from ppdet.modeling.bbox_utils import distance2bbox, bbox2distance
+from ppdet.data.transform.atss_assigner import bbox_overlaps
+
+from .gfl_head import GFLHead
+
+
+@register
+class OTAHead(GFLHead):
+ """
+ OTAHead
+ Args:
+ conv_feat (object): Instance of 'FCOSFeat'
+ num_classes (int): Number of classes
+ fpn_stride (list): The stride of each FPN Layer
+ prior_prob (float): Used to set the bias init for the class prediction layer
+ loss_qfl (object): Instance of QualityFocalLoss.
+ loss_dfl (object): Instance of DistributionFocalLoss.
+ loss_bbox (object): Instance of bbox loss.
+ assigner (object): Instance of label assigner.
+ reg_max: Max value of integral set :math: `{0, ..., reg_max}`
+            in QFL setting. Default: 16.
+ """
+ __inject__ = [
+ 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox',
+ 'assigner', 'nms'
+ ]
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ conv_feat='FCOSFeat',
+ dgqp_module=None,
+ num_classes=80,
+ fpn_stride=[8, 16, 32, 64, 128],
+ prior_prob=0.01,
+ loss_class='QualityFocalLoss',
+ loss_dfl='DistributionFocalLoss',
+ loss_bbox='GIoULoss',
+ assigner='SimOTAAssigner',
+ reg_max=16,
+ feat_in_chan=256,
+ nms=None,
+ nms_pre=1000,
+ cell_offset=0):
+ super(OTAHead, self).__init__(
+ conv_feat=conv_feat,
+ dgqp_module=dgqp_module,
+ num_classes=num_classes,
+ fpn_stride=fpn_stride,
+ prior_prob=prior_prob,
+ loss_class=loss_class,
+ loss_dfl=loss_dfl,
+ loss_bbox=loss_bbox,
+ reg_max=reg_max,
+ feat_in_chan=feat_in_chan,
+ nms=nms,
+ nms_pre=nms_pre,
+ cell_offset=cell_offset)
+ self.conv_feat = conv_feat
+ self.dgqp_module = dgqp_module
+ self.num_classes = num_classes
+ self.fpn_stride = fpn_stride
+ self.prior_prob = prior_prob
+ self.loss_qfl = loss_class
+ self.loss_dfl = loss_dfl
+ self.loss_bbox = loss_bbox
+ self.reg_max = reg_max
+ self.feat_in_chan = feat_in_chan
+ self.nms = nms
+ self.nms_pre = nms_pre
+ self.cell_offset = cell_offset
+ self.use_sigmoid = self.loss_qfl.use_sigmoid
+
+ self.assigner = assigner
+
+ def _get_target_single(self, flatten_cls_pred, flatten_center_and_stride,
+ flatten_bbox, gt_bboxes, gt_labels):
+ """Compute targets for priors in a single image.
+ """
+ pos_num, label, label_weight, bbox_target = self.assigner(
+ F.sigmoid(flatten_cls_pred), flatten_center_and_stride,
+ flatten_bbox, gt_bboxes, gt_labels)
+
+ return (pos_num, label, label_weight, bbox_target)
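+    # SimOTA sketch: the assigner scores priors with a combined cls + IoU
+    # cost inside each GT's center region and dynamically picks the top-k
+    # lowest-cost priors as positives; inputs arrive detached from the
+    # caller, so the assignment itself carries no gradient.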
+
+ def get_loss(self, head_outs, gt_meta):
+ cls_scores, bbox_preds = head_outs
+ num_level_anchors = [
+ featmap.shape[-2] * featmap.shape[-1] for featmap in cls_scores
+ ]
+ num_imgs = gt_meta['im_id'].shape[0]
+ featmap_sizes = [[featmap.shape[-2], featmap.shape[-1]]
+ for featmap in cls_scores]
+
+ decode_bbox_preds = []
+ center_and_strides = []
+ for featmap_size, stride, bbox_pred in zip(featmap_sizes,
+ self.fpn_stride, bbox_preds):
+
+ # center in origin image
+ yy, xx = self.get_single_level_center_point(featmap_size, stride,
+ self.cell_offset)
+
+ # paddle.stack expects Tensors, so broadcast the scalar stride first
+ strides = paddle.full((len(xx), ), stride)
+ center_and_stride = paddle.stack([xx, yy, strides, strides],
+ -1).tile([num_imgs, 1, 1])
+ center_and_strides.append(center_and_stride)
+ center_in_feature = center_and_stride.reshape(
+ [-1, 4])[:, :-2] / stride
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1]).reshape(
+ [num_imgs, -1, 4 * (self.reg_max + 1)])
+ pred_distances = self.distribution_project(bbox_pred)
+ decode_bbox_pred_wo_stride = distance2bbox(
+ center_in_feature, pred_distances).reshape([num_imgs, -1, 4])
+ decode_bbox_preds.append(decode_bbox_pred_wo_stride * stride)
+
+ flatten_cls_preds = [
+ cls_pred.transpose([0, 2, 3, 1]).reshape(
+ [num_imgs, -1, self.cls_out_channels])
+ for cls_pred in cls_scores
+ ]
+ flatten_cls_preds = paddle.concat(flatten_cls_preds, axis=1)
+ flatten_bboxes = paddle.concat(decode_bbox_preds, axis=1)
+ flatten_center_and_strides = paddle.concat(center_and_strides, axis=1)
+
+ gt_boxes, gt_labels = gt_meta['gt_bbox'], gt_meta['gt_class']
+ pos_num_l, label_l, label_weight_l, bbox_target_l = [], [], [], []
+ for flatten_cls_pred, flatten_center_and_stride, flatten_bbox, gt_box, gt_label \
+ in zip(flatten_cls_preds.detach(), flatten_center_and_strides.detach(), \
+ flatten_bboxes.detach(), gt_boxes, gt_labels):
+ pos_num, label, label_weight, bbox_target = self._get_target_single(
+ flatten_cls_pred, flatten_center_and_stride, flatten_bbox,
+ gt_box, gt_label)
+ pos_num_l.append(pos_num)
+ label_l.append(label)
+ label_weight_l.append(label_weight)
+ bbox_target_l.append(bbox_target)
+
+ labels = paddle.to_tensor(np.stack(label_l, axis=0))
+ label_weights = paddle.to_tensor(np.stack(label_weight_l, axis=0))
+ bbox_targets = paddle.to_tensor(np.stack(bbox_target_l, axis=0))
+
+ center_and_strides_list = self._images_to_levels(
+ flatten_center_and_strides, num_level_anchors)
+ labels_list = self._images_to_levels(labels, num_level_anchors)
+ label_weights_list = self._images_to_levels(label_weights,
+ num_level_anchors)
+ bbox_targets_list = self._images_to_levels(bbox_targets,
+ num_level_anchors)
+ num_total_pos = sum(pos_num_l)
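+ # Under distributed training, average the positive count across cards;
+ # on single-card runs the all_reduce fails and a floor of 1 is used.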
+ try:
+ num_total_pos = paddle.distributed.all_reduce(num_total_pos.clone(
+ )) / paddle.distributed.get_world_size()
+ except Exception:
+ num_total_pos = max(num_total_pos, 1)
+
+ loss_bbox_list, loss_dfl_list, loss_qfl_list, avg_factor = [], [], [], []
+ for cls_score, bbox_pred, center_and_strides, labels, label_weights, bbox_targets, stride in zip(
+ cls_scores, bbox_preds, center_and_strides_list, labels_list,
+ label_weights_list, bbox_targets_list, self.fpn_stride):
+ center_and_strides = center_and_strides.reshape([-1, 4])
+ cls_score = cls_score.transpose([0, 2, 3, 1]).reshape(
+ [-1, self.cls_out_channels])
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1]).reshape(
+ [-1, 4 * (self.reg_max + 1)])
+ bbox_targets = bbox_targets.reshape([-1, 4])
+ labels = labels.reshape([-1])
+ label_weights = label_weights.reshape([-1])
+
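+ # Foreground anchors carry labels in [0, num_classes); the value
+ # num_classes marks background.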
+ bg_class_ind = self.num_classes
+ pos_inds = paddle.nonzero(
+ paddle.logical_and((labels >= 0), (labels < bg_class_ind)),
+ as_tuple=False).squeeze(1)
+ score = np.zeros(labels.shape)
+
+ if len(pos_inds) > 0:
+ pos_bbox_targets = paddle.gather(bbox_targets, pos_inds, axis=0)
+ pos_bbox_pred = paddle.gather(bbox_pred, pos_inds, axis=0)
+ pos_centers = paddle.gather(
+ center_and_strides[:, :-2], pos_inds, axis=0) / stride
+
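+ # Quality-aware re-weighting: scale the box losses of each positive
+ # anchor by its highest predicted class score.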
+ weight_targets = F.sigmoid(cls_score.detach())
+ weight_targets = paddle.gather(
+ weight_targets.max(axis=1, keepdim=True), pos_inds, axis=0)
+ pos_bbox_pred_corners = self.distribution_project(pos_bbox_pred)
+ pos_decode_bbox_pred = distance2bbox(pos_centers,
+ pos_bbox_pred_corners)
+ pos_decode_bbox_targets = pos_bbox_targets / stride
+ bbox_iou = bbox_overlaps(
+ pos_decode_bbox_pred.detach().numpy(),
+ pos_decode_bbox_targets.detach().numpy(),
+ is_aligned=True)
+ score[pos_inds.numpy()] = bbox_iou
+
+ pred_corners = pos_bbox_pred.reshape([-1, self.reg_max + 1])
+ target_corners = bbox2distance(pos_centers,
+ pos_decode_bbox_targets,
+ self.reg_max).reshape([-1])
+ # regression loss
+ loss_bbox = paddle.sum(
+ self.loss_bbox(pos_decode_bbox_pred,
+ pos_decode_bbox_targets) * weight_targets)
+
+ # dfl loss
+ loss_dfl = self.loss_dfl(
+ pred_corners,
+ target_corners,
+ weight=weight_targets.expand([-1, 4]).reshape([-1]),
+ avg_factor=4.0)
+ else:
+ loss_bbox = bbox_pred.sum() * 0
+ loss_dfl = bbox_pred.sum() * 0
+ weight_targets = paddle.to_tensor([0], dtype='float32')
+
+ # qfl loss
+ score = paddle.to_tensor(score)
+ loss_qfl = self.loss_qfl(
+ cls_score, (labels, score),
+ weight=label_weights,
+ avg_factor=num_total_pos)
+ loss_bbox_list.append(loss_bbox)
+ loss_dfl_list.append(loss_dfl)
+ loss_qfl_list.append(loss_qfl)
+ avg_factor.append(weight_targets.sum())
+
+ avg_factor = sum(avg_factor)
+ try:
+ avg_factor = paddle.distributed.all_reduce(avg_factor.clone())
+ avg_factor = paddle.clip(
+ avg_factor / paddle.distributed.get_world_size(), min=1)
+ except Exception:
+ avg_factor = max(avg_factor.item(), 1)
+ if avg_factor <= 0:
+ loss_qfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ loss_bbox = paddle.to_tensor(
+ 0, dtype='float32', stop_gradient=False)
+ loss_dfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ else:
+ losses_bbox = list(map(lambda x: x / avg_factor, loss_bbox_list))
+ losses_dfl = list(map(lambda x: x / avg_factor, loss_dfl_list))
+ loss_qfl = sum(loss_qfl_list)
+ loss_bbox = sum(losses_bbox)
+ loss_dfl = sum(losses_dfl)
+
+ loss_states = dict(
+ loss_qfl=loss_qfl, loss_bbox=loss_bbox, loss_dfl=loss_dfl)
+
+ return loss_states
+
+
+@register
+class OTAVFLHead(OTAHead):
+ __inject__ = [
+ 'conv_feat', 'dgqp_module', 'loss_class', 'loss_dfl', 'loss_bbox',
+ 'assigner', 'nms'
+ ]
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ conv_feat='FCOSFeat',
+ dgqp_module=None,
+ num_classes=80,
+ fpn_stride=[8, 16, 32, 64, 128],
+ prior_prob=0.01,
+ loss_class='VarifocalLoss',
+ loss_dfl='DistributionFocalLoss',
+ loss_bbox='GIoULoss',
+ assigner='SimOTAAssigner',
+ reg_max=16,
+ feat_in_chan=256,
+ nms=None,
+ nms_pre=1000,
+ cell_offset=0):
+ super(OTAVFLHead, self).__init__(
+ conv_feat=conv_feat,
+ dgqp_module=dgqp_module,
+ num_classes=num_classes,
+ fpn_stride=fpn_stride,
+ prior_prob=prior_prob,
+ loss_class=loss_class,
+ loss_dfl=loss_dfl,
+ loss_bbox=loss_bbox,
+ reg_max=reg_max,
+ feat_in_chan=feat_in_chan,
+ nms=nms,
+ nms_pre=nms_pre,
+ cell_offset=cell_offset)
+ self.conv_feat = conv_feat
+ self.dgqp_module = dgqp_module
+ self.num_classes = num_classes
+ self.fpn_stride = fpn_stride
+ self.prior_prob = prior_prob
+ self.loss_vfl = loss_class
+ self.loss_dfl = loss_dfl
+ self.loss_bbox = loss_bbox
+ self.reg_max = reg_max
+ self.feat_in_chan = feat_in_chan
+ self.nms = nms
+ self.nms_pre = nms_pre
+ self.cell_offset = cell_offset
+ self.use_sigmoid = self.loss_vfl.use_sigmoid
+
+ self.assigner = assigner
+
+ def get_loss(self, head_outs, gt_meta):
+ cls_scores, bbox_preds = head_outs
+ num_level_anchors = [
+ featmap.shape[-2] * featmap.shape[-1] for featmap in cls_scores
+ ]
+ num_imgs = gt_meta['im_id'].shape[0]
+ featmap_sizes = [[featmap.shape[-2], featmap.shape[-1]]
+ for featmap in cls_scores]
+
+ decode_bbox_preds = []
+ center_and_strides = []
+ for featmap_size, stride, bbox_pred in zip(featmap_sizes,
+ self.fpn_stride, bbox_preds):
+ # center in origin image
+ yy, xx = self.get_single_level_center_point(featmap_size, stride,
+ self.cell_offset)
+ strides = paddle.full((len(xx), ), stride)
+ center_and_stride = paddle.stack([xx, yy, strides, strides],
+ -1).tile([num_imgs, 1, 1])
+ center_and_strides.append(center_and_stride)
+ center_in_feature = center_and_stride.reshape(
+ [-1, 4])[:, :-2] / stride
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1]).reshape(
+ [num_imgs, -1, 4 * (self.reg_max + 1)])
+ pred_distances = self.distribution_project(bbox_pred)
+ decode_bbox_pred_wo_stride = distance2bbox(
+ center_in_feature, pred_distances).reshape([num_imgs, -1, 4])
+ decode_bbox_preds.append(decode_bbox_pred_wo_stride * stride)
+
+ flatten_cls_preds = [
+ cls_pred.transpose([0, 2, 3, 1]).reshape(
+ [num_imgs, -1, self.cls_out_channels])
+ for cls_pred in cls_scores
+ ]
+ flatten_cls_preds = paddle.concat(flatten_cls_preds, axis=1)
+ flatten_bboxes = paddle.concat(decode_bbox_preds, axis=1)
+ flatten_center_and_strides = paddle.concat(center_and_strides, axis=1)
+
+ gt_boxes, gt_labels = gt_meta['gt_bbox'], gt_meta['gt_class']
+ pos_num_l, label_l, label_weight_l, bbox_target_l = [], [], [], []
+ for flatten_cls_pred, flatten_center_and_stride, flatten_bbox, gt_box, gt_label \
+ in zip(flatten_cls_preds.detach(), flatten_center_and_strides.detach(), \
+ flatten_bboxes.detach(), gt_boxes, gt_labels):
+ pos_num, label, label_weight, bbox_target = self._get_target_single(
+ flatten_cls_pred, flatten_center_and_stride, flatten_bbox,
+ gt_box, gt_label)
+ pos_num_l.append(pos_num)
+ label_l.append(label)
+ label_weight_l.append(label_weight)
+ bbox_target_l.append(bbox_target)
+
+ labels = paddle.to_tensor(np.stack(label_l, axis=0))
+ label_weights = paddle.to_tensor(np.stack(label_weight_l, axis=0))
+ bbox_targets = paddle.to_tensor(np.stack(bbox_target_l, axis=0))
+
+ center_and_strides_list = self._images_to_levels(
+ flatten_center_and_strides, num_level_anchors)
+ labels_list = self._images_to_levels(labels, num_level_anchors)
+ label_weights_list = self._images_to_levels(label_weights,
+ num_level_anchors)
+ bbox_targets_list = self._images_to_levels(bbox_targets,
+ num_level_anchors)
+ num_total_pos = sum(pos_num_l)
+ try:
+ num_total_pos = paddle.distributed.all_reduce(num_total_pos.clone(
+ )) / paddle.distributed.get_world_size()
+ except Exception:
+ num_total_pos = max(num_total_pos, 1)
+
+ loss_bbox_list, loss_dfl_list, loss_vfl_list, avg_factor = [], [], [], []
+ for cls_score, bbox_pred, center_and_strides, labels, label_weights, bbox_targets, stride in zip(
+ cls_scores, bbox_preds, center_and_strides_list, labels_list,
+ label_weights_list, bbox_targets_list, self.fpn_stride):
+ center_and_strides = center_and_strides.reshape([-1, 4])
+ cls_score = cls_score.transpose([0, 2, 3, 1]).reshape(
+ [-1, self.cls_out_channels])
+ bbox_pred = bbox_pred.transpose([0, 2, 3, 1]).reshape(
+ [-1, 4 * (self.reg_max + 1)])
+ bbox_targets = bbox_targets.reshape([-1, 4])
+ labels = labels.reshape([-1])
+
+ bg_class_ind = self.num_classes
+ pos_inds = paddle.nonzero(
+ paddle.logical_and((labels >= 0), (labels < bg_class_ind)),
+ as_tuple=False).squeeze(1)
+ # vfl
+ vfl_score = np.zeros(cls_score.shape)
+
+ if len(pos_inds) > 0:
+ pos_bbox_targets = paddle.gather(bbox_targets, pos_inds, axis=0)
+ pos_bbox_pred = paddle.gather(bbox_pred, pos_inds, axis=0)
+ pos_centers = paddle.gather(
+ center_and_strides[:, :-2], pos_inds, axis=0) / stride
+
+ weight_targets = F.sigmoid(cls_score.detach())
+ weight_targets = paddle.gather(
+ weight_targets.max(axis=1, keepdim=True), pos_inds, axis=0)
+ pos_bbox_pred_corners = self.distribution_project(pos_bbox_pred)
+ pos_decode_bbox_pred = distance2bbox(pos_centers,
+ pos_bbox_pred_corners)
+ pos_decode_bbox_targets = pos_bbox_targets / stride
+ bbox_iou = bbox_overlaps(
+ pos_decode_bbox_pred.detach().numpy(),
+ pos_decode_bbox_targets.detach().numpy(),
+ is_aligned=True)
+
+ # vfl
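+ # Varifocal target: the score at the assigned class equals the IoU
+ # between the predicted box and its ground-truth box.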
+ pos_labels = paddle.gather(labels, pos_inds, axis=0)
+ vfl_score[pos_inds.numpy(), pos_labels] = bbox_iou
+
+ pred_corners = pos_bbox_pred.reshape([-1, self.reg_max + 1])
+ target_corners = bbox2distance(pos_centers,
+ pos_decode_bbox_targets,
+ self.reg_max).reshape([-1])
+ # regression loss
+ loss_bbox = paddle.sum(
+ self.loss_bbox(pos_decode_bbox_pred,
+ pos_decode_bbox_targets) * weight_targets)
+
+ # dfl loss
+ loss_dfl = self.loss_dfl(
+ pred_corners,
+ target_corners,
+ weight=weight_targets.expand([-1, 4]).reshape([-1]),
+ avg_factor=4.0)
+ else:
+ loss_bbox = bbox_pred.sum() * 0
+ loss_dfl = bbox_pred.sum() * 0
+ weight_targets = paddle.to_tensor([0], dtype='float32')
+
+ # vfl loss
+ num_pos_avg_per_gpu = num_total_pos
+ vfl_score = paddle.to_tensor(vfl_score)
+ loss_vfl = self.loss_vfl(
+ cls_score, vfl_score, avg_factor=num_pos_avg_per_gpu)
+
+ loss_bbox_list.append(loss_bbox)
+ loss_dfl_list.append(loss_dfl)
+ loss_vfl_list.append(loss_vfl)
+ avg_factor.append(weight_targets.sum())
+
+ avg_factor = sum(avg_factor)
+ try:
+ avg_factor = paddle.distributed.all_reduce(avg_factor.clone())
+ avg_factor = paddle.clip(
+ avg_factor / paddle.distributed.get_world_size(), min=1)
+ except Exception:
+ avg_factor = max(avg_factor.item(), 1)
+ if avg_factor <= 0:
+ loss_vfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ loss_bbox = paddle.to_tensor(
+ 0, dtype='float32', stop_gradient=False)
+ loss_dfl = paddle.to_tensor(0, dtype='float32', stop_gradient=False)
+ else:
+ losses_bbox = list(map(lambda x: x / avg_factor, loss_bbox_list))
+ losses_dfl = list(map(lambda x: x / avg_factor, loss_dfl_list))
+ loss_vfl = sum(loss_vfl_list)
+ loss_bbox = sum(losses_bbox)
+ loss_dfl = sum(losses_dfl)
+
+ loss_states = dict(
+ loss_vfl=loss_vfl, loss_bbox=loss_bbox, loss_dfl=loss_dfl)
+
+ return loss_states
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/solov2_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/solov2_head.py
new file mode 100644
index 000000000..6989abb3a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/solov2_head.py
@@ -0,0 +1,554 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from paddle import ParamAttr
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, Constant
+
+from ppdet.modeling.layers import ConvNormLayer, MaskMatrixNMS, DropBlock
+from ppdet.core.workspace import register
+
+from six.moves import zip
+import numpy as np
+
+__all__ = ['SOLOv2Head']
+
+
+@register
+class SOLOv2MaskHead(nn.Layer):
+ """
+ MaskHead of SOLOv2.
+ The code of this function is based on:
+ https://github.com/WXinlong/SOLO/blob/master/mmdet/models/mask_heads/mask_feat_head.py
+
+ Args:
+ in_channels (int): The channel number of input Tensor.
+ out_channels (int): The channel number of output Tensor.
+ start_level (int): The position where the input starts.
+ end_level (int): The position where the input ends.
+ use_dcn_in_tower (bool): Whether to use dcn in tower or not.
+ """
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ in_channels=256,
+ mid_channels=128,
+ out_channels=256,
+ start_level=0,
+ end_level=3,
+ use_dcn_in_tower=False,
+ norm_type='gn'):
+ super(SOLOv2MaskHead, self).__init__()
+ assert start_level >= 0 and end_level >= start_level
+ self.in_channels = in_channels
+ self.out_channels = out_channels
+ self.mid_channels = mid_channels
+ self.use_dcn_in_tower = use_dcn_in_tower
+ self.range_level = end_level - start_level + 1
+ self.use_dcn = True if self.use_dcn_in_tower else False
+ self.convs_all_levels = []
+ self.norm_type = norm_type
+ for i in range(start_level, end_level + 1):
+ conv_feat_name = 'mask_feat_head.convs_all_levels.{}'.format(i)
+ conv_pre_feat = nn.Sequential()
+ if i == start_level:
+ conv_pre_feat.add_sublayer(
+ conv_feat_name + '.conv' + str(i),
+ ConvNormLayer(
+ ch_in=self.in_channels,
+ ch_out=self.mid_channels,
+ filter_size=3,
+ stride=1,
+ use_dcn=self.use_dcn,
+ norm_type=self.norm_type))
+ self.add_sublayer('conv_pre_feat' + str(i), conv_pre_feat)
+ self.convs_all_levels.append(conv_pre_feat)
+ else:
+ for j in range(i):
+ ch_in = 0
+ if j == 0:
+ ch_in = self.in_channels + 2 if i == end_level else self.in_channels
+ else:
+ ch_in = self.mid_channels
+ conv_pre_feat.add_sublayer(
+ conv_feat_name + '.conv' + str(j),
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=self.mid_channels,
+ filter_size=3,
+ stride=1,
+ use_dcn=self.use_dcn,
+ norm_type=self.norm_type))
+ conv_pre_feat.add_sublayer(
+ conv_feat_name + '.conv' + str(j) + 'act', nn.ReLU())
+ conv_pre_feat.add_sublayer(
+ 'upsample' + str(i) + str(j),
+ nn.Upsample(
+ scale_factor=2, mode='bilinear'))
+ self.add_sublayer('conv_pre_feat' + str(i), conv_pre_feat)
+ self.convs_all_levels.append(conv_pre_feat)
+
+ conv_pred_name = 'mask_feat_head.conv_pred.0'
+ self.conv_pred = self.add_sublayer(
+ conv_pred_name,
+ ConvNormLayer(
+ ch_in=self.mid_channels,
+ ch_out=self.out_channels,
+ filter_size=1,
+ stride=1,
+ use_dcn=self.use_dcn,
+ norm_type=self.norm_type))
+
+ def forward(self, inputs):
+ """
+ Get SOLOv2MaskHead output.
+
+ Args:
+ inputs (list[Tensor]): feature maps from the neck, each with shape [N, C, H, W]
+ Returns:
+ ins_pred(Tensor): Output of SOLOv2MaskHead head
+ """
+ feat_all_level = F.relu(self.convs_all_levels[0](inputs[0]))
+ for i in range(1, self.range_level):
+ input_p = inputs[i]
+ if i == (self.range_level - 1):
+ input_feat = input_p
+ x_range = paddle.linspace(
+ -1, 1, paddle.shape(input_feat)[-1], dtype='float32')
+ y_range = paddle.linspace(
+ -1, 1, paddle.shape(input_feat)[-2], dtype='float32')
+ y, x = paddle.meshgrid([y_range, x_range])
+ x = paddle.unsqueeze(x, [0, 1])
+ y = paddle.unsqueeze(y, [0, 1])
+ y = paddle.expand(
+ y, shape=[paddle.shape(input_feat)[0], 1, -1, -1])
+ x = paddle.expand(
+ x, shape=[paddle.shape(input_feat)[0], 1, -1, -1])
+ coord_feat = paddle.concat([x, y], axis=1)
+ input_p = paddle.concat([input_p, coord_feat], axis=1)
+ feat_all_level = paddle.add(feat_all_level,
+ self.convs_all_levels[i](input_p))
+ ins_pred = F.relu(self.conv_pred(feat_all_level))
+
+ return ins_pred
+
+
+@register
+class SOLOv2Head(nn.Layer):
+ """
+ Head block for SOLOv2 network
+
+ Args:
+ num_classes (int): Number of output classes.
+ in_channels (int): Number of input channels.
+ seg_feat_channels (int): Number of filters for the kernel & category branch convolutions.
+ stacked_convs (int): Times of convolution operation.
+ num_grids (list[int]): List of feature map grids size.
+ kernel_out_channels (int): Number of output channels in kernel branch.
+ dcn_v2_stages (list): Which stage use dcn v2 in tower. It is between [0, stacked_convs).
+ segm_strides (list[int]): List of segmentation area stride.
+ solov2_loss (object): SOLOv2Loss instance.
+ score_threshold (float): Threshold of category score.
+ mask_nms (object): MaskMatrixNMS instance.
+ """
+ __inject__ = ['solov2_loss', 'mask_nms']
+ __shared__ = ['norm_type', 'num_classes']
+
+ def __init__(self,
+ num_classes=80,
+ in_channels=256,
+ seg_feat_channels=256,
+ stacked_convs=4,
+ num_grids=[40, 36, 24, 16, 12],
+ kernel_out_channels=256,
+ dcn_v2_stages=[],
+ segm_strides=[8, 8, 16, 32, 32],
+ solov2_loss=None,
+ score_threshold=0.1,
+ mask_threshold=0.5,
+ mask_nms=None,
+ norm_type='gn',
+ drop_block=False):
+ super(SOLOv2Head, self).__init__()
+ self.num_classes = num_classes
+ self.in_channels = in_channels
+ self.seg_num_grids = num_grids
+ self.cate_out_channels = self.num_classes
+ self.seg_feat_channels = seg_feat_channels
+ self.stacked_convs = stacked_convs
+ self.kernel_out_channels = kernel_out_channels
+ self.dcn_v2_stages = dcn_v2_stages
+ self.segm_strides = segm_strides
+ self.solov2_loss = solov2_loss
+ self.mask_nms = mask_nms
+ self.score_threshold = score_threshold
+ self.mask_threshold = mask_threshold
+ self.norm_type = norm_type
+ self.drop_block = drop_block
+
+ self.kernel_pred_convs = []
+ self.cate_pred_convs = []
+ for i in range(self.stacked_convs):
+ use_dcn = True if i in self.dcn_v2_stages else False
+ ch_in = self.in_channels + 2 if i == 0 else self.seg_feat_channels
+ kernel_conv = self.add_sublayer(
+ 'bbox_head.kernel_convs.' + str(i),
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=self.seg_feat_channels,
+ filter_size=3,
+ stride=1,
+ use_dcn=use_dcn,
+ norm_type=self.norm_type))
+ self.kernel_pred_convs.append(kernel_conv)
+ ch_in = self.in_channels if i == 0 else self.seg_feat_channels
+ cate_conv = self.add_sublayer(
+ 'bbox_head.cate_convs.' + str(i),
+ ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=self.seg_feat_channels,
+ filter_size=3,
+ stride=1,
+ use_dcn=use_dcn,
+ norm_type=self.norm_type))
+ self.cate_pred_convs.append(cate_conv)
+
+ self.solo_kernel = self.add_sublayer(
+ 'bbox_head.solo_kernel',
+ nn.Conv2D(
+ self.seg_feat_channels,
+ self.kernel_out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=True))
+ self.solo_cate = self.add_sublayer(
+ 'bbox_head.solo_cate',
+ nn.Conv2D(
+ self.seg_feat_channels,
+ self.cate_out_channels,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.01)),
+ bias_attr=ParamAttr(initializer=Constant(
+ value=float(-np.log((1 - 0.01) / 0.01))))))
+
+ if self.drop_block and self.training:
+ self.drop_block_fun = DropBlock(
+ block_size=3, keep_prob=0.9, name='solo_cate.dropblock')
+
+ def _points_nms(self, heat, kernel_size=2):
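+ # Point-level NMS on the category heatmap: a location survives only if
+ # it equals the maximum of its local 2x2 max-pooling window.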
+ hmax = F.max_pool2d(heat, kernel_size=kernel_size, stride=1, padding=1)
+ keep = paddle.cast((hmax[:, :, :-1, :-1] == heat), 'float32')
+ return heat * keep
+
+ def _split_feats(self, feats):
+ return (F.interpolate(
+ feats[0],
+ scale_factor=0.5,
+ align_corners=False,
+ align_mode=0,
+ mode='bilinear'), feats[1], feats[2], feats[3], F.interpolate(
+ feats[4],
+ size=paddle.shape(feats[3])[-2:],
+ mode='bilinear',
+ align_corners=False,
+ align_mode=0))
+
+ def forward(self, input):
+ """
+ Get SOLOv2 head output
+
+ Args:
+ input (list): List of Tensors, output of backbone or neck stages
+ Returns:
+ cate_pred_list (list): Tensors of each category branch layer
+ kernel_pred_list (list): Tensors of each kernel branch layer
+ """
+ feats = self._split_feats(input)
+ cate_pred_list = []
+ kernel_pred_list = []
+ for idx in range(len(self.seg_num_grids)):
+ cate_pred, kernel_pred = self._get_output_single(feats[idx], idx)
+ cate_pred_list.append(cate_pred)
+ kernel_pred_list.append(kernel_pred)
+
+ return cate_pred_list, kernel_pred_list
+
+ def _get_output_single(self, input, idx):
+ ins_kernel_feat = input
+ # CoordConv
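+ # Concatenate normalized x/y coordinate maps so the following convs are
+ # position-aware (the CoordConv trick).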
+ x_range = paddle.linspace(
+ -1, 1, paddle.shape(ins_kernel_feat)[-1], dtype='float32')
+ y_range = paddle.linspace(
+ -1, 1, paddle.shape(ins_kernel_feat)[-2], dtype='float32')
+ y, x = paddle.meshgrid([y_range, x_range])
+ x = paddle.unsqueeze(x, [0, 1])
+ y = paddle.unsqueeze(y, [0, 1])
+ y = paddle.expand(
+ y, shape=[paddle.shape(ins_kernel_feat)[0], 1, -1, -1])
+ x = paddle.expand(
+ x, shape=[paddle.shape(ins_kernel_feat)[0], 1, -1, -1])
+ coord_feat = paddle.concat([x, y], axis=1)
+ ins_kernel_feat = paddle.concat([ins_kernel_feat, coord_feat], axis=1)
+
+ # kernel branch
+ kernel_feat = ins_kernel_feat
+ seg_num_grid = self.seg_num_grids[idx]
+ kernel_feat = F.interpolate(
+ kernel_feat,
+ size=[seg_num_grid, seg_num_grid],
+ mode='bilinear',
+ align_corners=False,
+ align_mode=0)
+ cate_feat = kernel_feat[:, :-2, :, :]
+
+ for kernel_layer in self.kernel_pred_convs:
+ kernel_feat = F.relu(kernel_layer(kernel_feat))
+ if self.drop_block and self.training:
+ kernel_feat = self.drop_block_fun(kernel_feat)
+ kernel_pred = self.solo_kernel(kernel_feat)
+ # cate branch
+ for cate_layer in self.cate_pred_convs:
+ cate_feat = F.relu(cate_layer(cate_feat))
+ if self.drop_block and self.training:
+ cate_feat = self.drop_block_fun(cate_feat)
+ cate_pred = self.solo_cate(cate_feat)
+
+ if not self.training:
+ cate_pred = self._points_nms(F.sigmoid(cate_pred), kernel_size=2)
+ cate_pred = paddle.transpose(cate_pred, [0, 2, 3, 1])
+ return cate_pred, kernel_pred
+
+ def get_loss(self, cate_preds, kernel_preds, ins_pred, ins_labels,
+ cate_labels, grid_order_list, fg_num):
+ """
+ Get loss of network of SOLOv2.
+
+ Args:
+ cate_preds (list): Tensor list of category branch output.
+ kernel_preds (list): Tensor list of kernel branch output.
+ ins_pred (list): Tensor list of instance branch output.
+ ins_labels (list): List of instance labels per batch.
+ cate_labels (list): List of category labels per batch.
+ grid_order_list (list): List of sample indices per grid.
+ fg_num (int): Number of positive samples in a mini-batch.
+ Returns:
+ loss_ins (Tensor): The instance loss Tensor of SOLOv2 network.
+ loss_cate (Tensor): The category loss Tensor of SOLOv2 network.
+ """
+ batch_size = paddle.shape(grid_order_list[0])[0]
+ ins_pred_list = []
+ for kernel_preds_level, grid_orders_level in zip(kernel_preds,
+ grid_order_list):
+ if grid_orders_level.shape[1] == 0:
+ ins_pred_list.append(None)
+ continue
+ grid_orders_level = paddle.reshape(grid_orders_level, [-1])
+ reshape_pred = paddle.reshape(
+ kernel_preds_level,
+ shape=(paddle.shape(kernel_preds_level)[0],
+ paddle.shape(kernel_preds_level)[1], -1))
+ reshape_pred = paddle.transpose(reshape_pred, [0, 2, 1])
+ reshape_pred = paddle.reshape(
+ reshape_pred, shape=(-1, paddle.shape(reshape_pred)[2]))
+ gathered_pred = paddle.gather(reshape_pred, index=grid_orders_level)
+ gathered_pred = paddle.reshape(
+ gathered_pred,
+ shape=[batch_size, -1, paddle.shape(gathered_pred)[1]])
+ cur_ins_pred = ins_pred
+ cur_ins_pred = paddle.reshape(
+ cur_ins_pred,
+ shape=(paddle.shape(cur_ins_pred)[0],
+ paddle.shape(cur_ins_pred)[1], -1))
+ ins_pred_conv = paddle.matmul(gathered_pred, cur_ins_pred)
+ cur_ins_pred = paddle.reshape(
+ ins_pred_conv,
+ shape=(-1, paddle.shape(ins_pred)[-2],
+ paddle.shape(ins_pred)[-1]))
+ ins_pred_list.append(cur_ins_pred)
+
+ num_ins = paddle.sum(fg_num)
+ cate_preds = [
+ paddle.reshape(
+ paddle.transpose(cate_pred, [0, 2, 3, 1]),
+ shape=(-1, self.cate_out_channels)) for cate_pred in cate_preds
+ ]
+ flatten_cate_preds = paddle.concat(cate_preds)
+ new_cate_labels = []
+ for cate_label in cate_labels:
+ new_cate_labels.append(paddle.reshape(cate_label, shape=[-1]))
+ cate_labels = paddle.concat(new_cate_labels)
+
+ loss_ins, loss_cate = self.solov2_loss(
+ ins_pred_list, ins_labels, flatten_cate_preds, cate_labels, num_ins)
+
+ return {'loss_ins': loss_ins, 'loss_cate': loss_cate}
+
+ def get_prediction(self, cate_preds, kernel_preds, seg_pred, im_shape,
+ scale_factor):
+ """
+ Get prediction result of SOLOv2 network
+
+ Args:
+ cate_preds (list): List of Variables, output of category branch.
+ kernel_preds (list): List of Variables, output of kernel branch.
+ seg_pred (list): List of Variables, output of mask head stages.
+ im_shape (Variables): [h, w] for input images.
+ scale_factor (Variables): [scale, scale] for input images.
+ Returns:
+ seg_masks (Tensor): The prediction segmentation.
+ cate_labels (Tensor): The prediction category label of each segmentation.
+ cate_scores (Tensor): The prediction score of each segmentation.
+ """
+ num_levels = len(cate_preds)
+ featmap_size = paddle.shape(seg_pred)[-2:]
+ seg_masks_list = []
+ cate_labels_list = []
+ cate_scores_list = []
+ cate_preds = [cate_pred * 1.0 for cate_pred in cate_preds]
+ kernel_preds = [kernel_pred * 1.0 for kernel_pred in kernel_preds]
+ # Currently only supports batch size == 1
+ for idx in range(1):
+ cate_pred_list = [
+ paddle.reshape(
+ cate_preds[i][idx], shape=(-1, self.cate_out_channels))
+ for i in range(num_levels)
+ ]
+ seg_pred_list = seg_pred
+ kernel_pred_list = [
+ paddle.reshape(
+ paddle.transpose(kernel_preds[i][idx], [1, 2, 0]),
+ shape=(-1, self.kernel_out_channels))
+ for i in range(num_levels)
+ ]
+ cate_pred_list = paddle.concat(cate_pred_list, axis=0)
+ kernel_pred_list = paddle.concat(kernel_pred_list, axis=0)
+
+ seg_masks, cate_labels, cate_scores = self.get_seg_single(
+ cate_pred_list, seg_pred_list, kernel_pred_list, featmap_size,
+ im_shape[idx], scale_factor[idx][0])
+ bbox_num = paddle.shape(cate_labels)[0]
+ return seg_masks, cate_labels, cate_scores, bbox_num
+
+ def get_seg_single(self, cate_preds, seg_preds, kernel_preds, featmap_size,
+ im_shape, scale_factor):
+ """
+ The code of this function is based on:
+ https://github.com/WXinlong/SOLO/blob/master/mmdet/models/anchor_heads/solov2_head.py#L385
+ """
+ h = paddle.cast(im_shape[0], 'int32')[0]
+ w = paddle.cast(im_shape[1], 'int32')[0]
+ upsampled_size_out = [featmap_size[0] * 4, featmap_size[1] * 4]
+
+ y = paddle.zeros(shape=paddle.shape(cate_preds), dtype='float32')
+ inds = paddle.where(cate_preds > self.score_threshold, cate_preds, y)
+ inds = paddle.nonzero(inds)
+ cate_preds = paddle.reshape(cate_preds, shape=[-1])
+ # Prevent empty results: append one fake candidate so the gathers below
+ # never operate on an empty tensor
+ ind_a = paddle.cast(paddle.shape(kernel_preds)[0], 'int64')
+ ind_b = paddle.zeros(shape=[1], dtype='int64')
+ inds_end = paddle.unsqueeze(paddle.concat([ind_a, ind_b]), 0)
+ inds = paddle.concat([inds, inds_end])
+ kernel_preds_end = paddle.ones(
+ shape=[1, self.kernel_out_channels], dtype='float32')
+ kernel_preds = paddle.concat([kernel_preds, kernel_preds_end])
+ cate_preds = paddle.concat(
+ [cate_preds, paddle.zeros(
+ shape=[1], dtype='float32')])
+
+ # cate_labels & kernel_preds
+ cate_labels = inds[:, 1]
+ kernel_preds = paddle.gather(kernel_preds, index=inds[:, 0])
+ cate_score_idx = paddle.add(inds[:, 0] * self.cate_out_channels,
+ cate_labels)
+ cate_scores = paddle.gather(cate_preds, index=cate_score_idx)
+
+ size_trans = np.power(self.seg_num_grids, 2)
+ strides = []
+ for _ind in range(len(self.segm_strides)):
+ strides.append(
+ paddle.full(
+ shape=[int(size_trans[_ind])],
+ fill_value=self.segm_strides[_ind],
+ dtype="int32"))
+ strides = paddle.concat(strides)
+ strides = paddle.concat(
+ [strides, paddle.zeros(
+ shape=[1], dtype='int32')])
+ strides = paddle.gather(strides, index=inds[:, 0])
+
+ # mask encoding.
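+ # Dynamic convolution: each predicted kernel acts as a 1x1 conv over the
+ # shared mask features, yielding one mask per candidate.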
+ kernel_preds = paddle.unsqueeze(kernel_preds, [2, 3])
+ seg_preds = F.conv2d(seg_preds, kernel_preds)
+ seg_preds = F.sigmoid(paddle.squeeze(seg_preds, [0]))
+ seg_masks = seg_preds > self.mask_threshold
+ seg_masks = paddle.cast(seg_masks, 'float32')
+ sum_masks = paddle.sum(seg_masks, axis=[1, 2])
+
+ y = paddle.zeros(shape=paddle.shape(sum_masks), dtype='float32')
+ keep = paddle.where(sum_masks > strides, sum_masks, y)
+ keep = paddle.nonzero(keep)
+ keep = paddle.squeeze(keep, axis=[1])
+ # Prevent empty results: append one fake candidate so the gathers below
+ # never operate on an empty tensor
+ keep_other = paddle.concat(
+ [keep, paddle.cast(paddle.shape(sum_masks)[0] - 1, 'int64')])
+ keep_scores = paddle.concat(
+ [keep, paddle.cast(paddle.shape(sum_masks)[0], 'int64')])
+ cate_scores_end = paddle.zeros(shape=[1], dtype='float32')
+ cate_scores = paddle.concat([cate_scores, cate_scores_end])
+
+ seg_masks = paddle.gather(seg_masks, index=keep_other)
+ seg_preds = paddle.gather(seg_preds, index=keep_other)
+ sum_masks = paddle.gather(sum_masks, index=keep_other)
+ cate_labels = paddle.gather(cate_labels, index=keep_other)
+ cate_scores = paddle.gather(cate_scores, index=keep_scores)
+
+ # mask scoring.
+ seg_mul = paddle.cast(seg_preds * seg_masks, 'float32')
+ seg_scores = paddle.sum(seg_mul, axis=[1, 2]) / sum_masks
+ cate_scores *= seg_scores
+ # Matrix NMS
+ seg_preds, cate_scores, cate_labels = self.mask_nms(
+ seg_preds, seg_masks, cate_labels, cate_scores, sum_masks=sum_masks)
+ ori_shape = im_shape[:2] / scale_factor + 0.5
+ ori_shape = paddle.cast(ori_shape, 'int32')
+ seg_preds = F.interpolate(
+ paddle.unsqueeze(seg_preds, 0),
+ size=upsampled_size_out,
+ mode='bilinear',
+ align_corners=False,
+ align_mode=0)
+ seg_preds = paddle.slice(
+ seg_preds, axes=[2, 3], starts=[0, 0], ends=[h, w])
+ seg_masks = paddle.squeeze(
+ F.interpolate(
+ seg_preds,
+ size=ori_shape[:2],
+ mode='bilinear',
+ align_corners=False,
+ align_mode=0),
+ axis=[0])
+ seg_masks = paddle.cast(seg_masks > self.mask_threshold, 'uint8')
+ return seg_masks, cate_labels, cate_scores
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/sparsercnn_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/sparsercnn_head.py
new file mode 100644
index 000000000..377cf27fc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/sparsercnn_head.py
@@ -0,0 +1,375 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/PeizeSun/SparseR-CNN/blob/main/projects/SparseRCNN/sparsercnn/head.py
+The copyright of PeizeSun/SparseR-CNN is as follows:
+MIT License [see LICENSE for details]
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import copy
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register
+from ppdet.modeling.heads.roi_extractor import RoIAlign
+from ppdet.modeling.bbox_utils import delta2bbox
+from .. import initializer as init
+
+_DEFAULT_SCALE_CLAMP = math.log(100000. / 16)
+
+
+class DynamicConv(nn.Layer):
+ def __init__(
+ self,
+ head_hidden_dim,
+ head_dim_dynamic,
+ head_num_dynamic, ):
+ super().__init__()
+
+ self.hidden_dim = head_hidden_dim
+ self.dim_dynamic = head_dim_dynamic
+ self.num_dynamic = head_num_dynamic
+ self.num_params = self.hidden_dim * self.dim_dynamic
+ self.dynamic_layer = nn.Linear(self.hidden_dim,
+ self.num_dynamic * self.num_params)
+
+ self.norm1 = nn.LayerNorm(self.dim_dynamic)
+ self.norm2 = nn.LayerNorm(self.hidden_dim)
+
+ self.activation = nn.ReLU()
+
+ pooler_resolution = 7
+ num_output = self.hidden_dim * pooler_resolution**2
+ self.out_layer = nn.Linear(num_output, self.hidden_dim)
+ self.norm3 = nn.LayerNorm(self.hidden_dim)
+
+ def forward(self, pro_features, roi_features):
+ '''
+ pro_features: (1, N * nr_boxes, self.d_model)
+ roi_features: (49, N * nr_boxes, self.d_model)
+ '''
+ features = roi_features.transpose(perm=[1, 0, 2])
+ parameters = self.dynamic_layer(pro_features).transpose(perm=[1, 0, 2])
+
+ param1 = parameters[:, :, :self.num_params].reshape(
+ [-1, self.hidden_dim, self.dim_dynamic])
+ param2 = parameters[:, :, self.num_params:].reshape(
+ [-1, self.dim_dynamic, self.hidden_dim])
+
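+ # The generated parameters form two data-dependent 1x1 convs that are
+ # applied to the RoI features via batched matmuls.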
+ features = paddle.bmm(features, param1)
+ features = self.norm1(features)
+ features = self.activation(features)
+
+ features = paddle.bmm(features, param2)
+ features = self.norm2(features)
+ features = self.activation(features)
+
+ features = features.flatten(1)
+ features = self.out_layer(features)
+ features = self.norm3(features)
+ features = self.activation(features)
+
+ return features
+
+
+class RCNNHead(nn.Layer):
+ def __init__(
+ self,
+ d_model,
+ num_classes,
+ dim_feedforward,
+ nhead,
+ dropout,
+ head_cls,
+ head_reg,
+ head_dim_dynamic,
+ head_num_dynamic,
+ scale_clamp: float=_DEFAULT_SCALE_CLAMP,
+ bbox_weights=(2.0, 2.0, 1.0, 1.0), ):
+ super().__init__()
+
+ self.d_model = d_model
+
+ # dynamic.
+ self.self_attn = nn.MultiHeadAttention(d_model, nhead, dropout=dropout)
+ self.inst_interact = DynamicConv(d_model, head_dim_dynamic,
+ head_num_dynamic)
+
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.norm3 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout)
+ self.dropout2 = nn.Dropout(dropout)
+ self.dropout3 = nn.Dropout(dropout)
+
+ self.activation = nn.ReLU()
+
+ # cls.
+ num_cls = head_cls
+ cls_module = list()
+ for _ in range(num_cls):
+ cls_module.append(nn.Linear(d_model, d_model, bias_attr=False))
+ cls_module.append(nn.LayerNorm(d_model))
+ cls_module.append(nn.ReLU())
+ self.cls_module = nn.LayerList(cls_module)
+
+ # reg.
+ num_reg = head_reg
+ reg_module = list()
+ for _ in range(num_reg):
+ reg_module.append(nn.Linear(d_model, d_model, bias_attr=False))
+ reg_module.append(nn.LayerNorm(d_model))
+ reg_module.append(nn.ReLU())
+ self.reg_module = nn.LayerList(reg_module)
+
+ # pred.
+ self.class_logits = nn.Linear(d_model, num_classes)
+ self.bboxes_delta = nn.Linear(d_model, 4)
+ self.scale_clamp = scale_clamp
+ self.bbox_weights = bbox_weights
+
+ def forward(self, features, bboxes, pro_features, pooler):
+ """
+ :param bboxes: (N, nr_boxes, 4)
+ :param pro_features: (N, nr_boxes, d_model)
+ """
+
+ N, nr_boxes = bboxes.shape[:2]
+
+ proposal_boxes = list()
+ for b in range(N):
+ proposal_boxes.append(bboxes[b])
+ roi_num = paddle.full([N], nr_boxes).astype("int32")
+
+ roi_features = pooler(features, proposal_boxes, roi_num)
+ roi_features = roi_features.reshape(
+ [N * nr_boxes, self.d_model, -1]).transpose(perm=[2, 0, 1])
+
+ # self_att.
+ pro_features = pro_features.reshape([N, nr_boxes, self.d_model])
+ pro_features2 = self.self_attn(
+ pro_features, pro_features, value=pro_features)
+ pro_features = pro_features.transpose(perm=[1, 0, 2]) + self.dropout1(
+ pro_features2.transpose(perm=[1, 0, 2]))
+ pro_features = self.norm1(pro_features)
+
+ # inst_interact.
+ pro_features = pro_features.reshape(
+ [nr_boxes, N, self.d_model]).transpose(perm=[1, 0, 2]).reshape(
+ [1, N * nr_boxes, self.d_model])
+ pro_features2 = self.inst_interact(pro_features, roi_features)
+ pro_features = pro_features + self.dropout2(pro_features2)
+ obj_features = self.norm2(pro_features)
+
+ # obj_feature.
+ obj_features2 = self.linear2(
+ self.dropout(self.activation(self.linear1(obj_features))))
+ obj_features = obj_features + self.dropout3(obj_features2)
+ obj_features = self.norm3(obj_features)
+
+ fc_feature = obj_features.transpose(perm=[1, 0, 2]).reshape(
+ [N * nr_boxes, -1])
+ cls_feature = fc_feature.clone()
+ reg_feature = fc_feature.clone()
+ for cls_layer in self.cls_module:
+ cls_feature = cls_layer(cls_feature)
+ for reg_layer in self.reg_module:
+ reg_feature = reg_layer(reg_feature)
+ class_logits = self.class_logits(cls_feature)
+ bboxes_deltas = self.bboxes_delta(reg_feature)
+ pred_bboxes = delta2bbox(bboxes_deltas,
+ bboxes.reshape([-1, 4]), self.bbox_weights)
+
+ return class_logits.reshape([N, nr_boxes, -1]), pred_bboxes.reshape(
+ [N, nr_boxes, -1]), obj_features
+
+
+@register
+class SparseRCNNHead(nn.Layer):
+ '''
+ SparseRCNNHead
+ Args:
+ roi_input_shape (list[ShapeSpec]): The output shape of fpn
+ num_classes (int): Number of classes,
+ head_hidden_dim (int): Hidden size of the attention and dynamic-conv features,
+ head_dim_feedforward (int): Hidden size of the feed-forward layer,
+ nhead (int): Number of heads in MultiHeadAttention,
+ head_dropout (float): The dropout probability,
+ head_cls (int): The number of layers in the classification head,
+ head_reg (int): The number of layers in the regression head,
+ head_dim_dynamic (int): The channel dim of DynamicConv's dynamic params,
+ head_num_dynamic (int): The number of dynamic layers in DynamicConv,
+ head_num_heads (int): The number of stacked RCNNHead stages,
+ deep_supervision (bool): whether to supervise the intermediate results,
+ num_proposals (int): the number of proposal boxes and features
+ '''
+ __inject__ = ['loss_func']
+ __shared__ = ['num_classes']
+
+ def __init__(
+ self,
+ head_hidden_dim,
+ head_dim_feedforward,
+ nhead,
+ head_dropout,
+ head_cls,
+ head_reg,
+ head_dim_dynamic,
+ head_num_dynamic,
+ head_num_heads,
+ deep_supervision,
+ num_proposals,
+ num_classes=80,
+ loss_func="SparseRCNNLoss",
+ roi_input_shape=None, ):
+ super().__init__()
+
+ # Build RoI.
+ box_pooler = self._init_box_pooler(roi_input_shape)
+ self.box_pooler = box_pooler
+
+ # Build heads.
+ rcnn_head = RCNNHead(
+ head_hidden_dim,
+ num_classes,
+ head_dim_feedforward,
+ nhead,
+ head_dropout,
+ head_cls,
+ head_reg,
+ head_dim_dynamic,
+ head_num_dynamic, )
+ self.head_series = nn.LayerList(
+ [copy.deepcopy(rcnn_head) for i in range(head_num_heads)])
+ self.return_intermediate = deep_supervision
+
+ self.num_classes = num_classes
+
+ # build init proposal
+ self.init_proposal_features = nn.Embedding(num_proposals,
+ head_hidden_dim)
+ self.init_proposal_boxes = nn.Embedding(num_proposals, 4)
+
+ self.lossfunc = loss_func
+
+ # Init parameters.
+ init.reset_initialized_parameter(self)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ # init all parameters.
+ prior_prob = 0.01
+ bias_value = -math.log((1 - prior_prob) / prior_prob)
+
+ for m in self.sublayers():
+ if isinstance(m, nn.Linear):
+ init.xavier_normal_(m.weight, reverse=True)
+ elif not isinstance(m, nn.Embedding) and hasattr(
+ m, "weight") and m.weight.dim() > 1:
+ init.xavier_normal_(m.weight, reverse=False)
+
+ if hasattr(m, "bias") and m.bias is not None and m.bias.shape[
+ -1] == self.num_classes:
+ init.constant_(m.bias, bias_value)
+
+ init_bboxes = paddle.empty_like(self.init_proposal_boxes.weight)
+ init_bboxes[:, :2] = 0.5
+ init_bboxes[:, 2:] = 1.0
+ self.init_proposal_boxes.weight.set_value(init_bboxes)
+
+ @staticmethod
+ def _init_box_pooler(input_shape):
+
+ pooler_resolution = 7
+ sampling_ratio = 2
+
+ if input_shape is not None:
+ pooler_scales = tuple(1.0 / input_shape[k].stride
+ for k in range(len(input_shape)))
+ in_channels = [
+ input_shape[f].channels for f in range(len(input_shape))
+ ]
+ end_level = len(input_shape) - 1
+ # Check all channel counts are equal
+ assert len(set(in_channels)) == 1, in_channels
+ else:
+ pooler_scales = [1.0 / 4.0, 1.0 / 8.0, 1.0 / 16.0, 1.0 / 32.0]
+ end_level = 3
+
+ box_pooler = RoIAlign(
+ resolution=pooler_resolution,
+ spatial_scale=pooler_scales,
+ sampling_ratio=sampling_ratio,
+ end_level=end_level,
+ aligned=True)
+ return box_pooler
+
+ def forward(self, features, input_whwh):
+
+ bs = len(features[0])
+ bboxes = box_cxcywh_to_xyxy(self.init_proposal_boxes.weight.clone(
+ )).unsqueeze(0)
+ bboxes = bboxes * input_whwh.unsqueeze(-2)
+
+ init_features = self.init_proposal_features.weight.unsqueeze(0).tile(
+ [1, bs, 1])
+ proposal_features = init_features.clone()
+
+ inter_class_logits = []
+ inter_pred_bboxes = []
+
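+ # Cascade refinement: each stage consumes the boxes predicted by the
+ # previous stage; detach() stops gradients from crossing stages.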
+ for rcnn_head in self.head_series:
+ class_logits, pred_bboxes, proposal_features = rcnn_head(
+ features, bboxes, proposal_features, self.box_pooler)
+
+ if self.return_intermediate:
+ inter_class_logits.append(class_logits)
+ inter_pred_bboxes.append(pred_bboxes)
+ bboxes = pred_bboxes.detach()
+
+ output = {
+ 'pred_logits': inter_class_logits[-1],
+ 'pred_boxes': inter_pred_bboxes[-1]
+ }
+ if self.return_intermediate:
+ output['aux_outputs'] = [{
+ 'pred_logits': a,
+ 'pred_boxes': b
+ } for a, b in zip(inter_class_logits[:-1], inter_pred_bboxes[:-1])]
+
+ return output
+
+ def get_loss(self, outputs, targets):
+ losses = self.lossfunc(outputs, targets)
+ weight_dict = self.lossfunc.weight_dict
+
+ for k in losses.keys():
+ if k in weight_dict:
+ losses[k] *= weight_dict[k]
+
+ return losses
+
+
+def box_cxcywh_to_xyxy(x):
+ x_c, y_c, w, h = x.unbind(-1)
+ b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)]
+ return paddle.stack(b, axis=-1)
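+
+# Example with hypothetical values: a 10x10 box centred at (5, 5)
+# box_cxcywh_to_xyxy(paddle.to_tensor([5., 5., 10., 10.])) -> [0., 0., 10., 10.]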
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ssd_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ssd_head.py
new file mode 100644
index 000000000..07e7e92f9
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ssd_head.py
@@ -0,0 +1,215 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+from paddle.regularizer import L2Decay
+from paddle import ParamAttr
+
+from ..layers import AnchorGeneratorSSD
+
+
+class SepConvLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size=3,
+ padding=1,
+ conv_decay=0.):
+ super(SepConvLayer, self).__init__()
+ self.dw_conv = nn.Conv2D(
+ in_channels=in_channels,
+ out_channels=in_channels,
+ kernel_size=kernel_size,
+ stride=1,
+ padding=padding,
+ groups=in_channels,
+ weight_attr=ParamAttr(regularizer=L2Decay(conv_decay)),
+ bias_attr=False)
+
+ self.bn = nn.BatchNorm2D(
+ in_channels,
+ weight_attr=ParamAttr(regularizer=L2Decay(0.)),
+ bias_attr=ParamAttr(regularizer=L2Decay(0.)))
+
+ self.pw_conv = nn.Conv2D(
+ in_channels=in_channels,
+ out_channels=out_channels,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ weight_attr=ParamAttr(regularizer=L2Decay(conv_decay)),
+ bias_attr=False)
+
+ def forward(self, x):
+ x = self.dw_conv(x)
+ x = F.relu6(self.bn(x))
+ x = self.pw_conv(x)
+ return x
+
+
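+# Extra downsampling stages that grow a single backbone feature map into the
+# multi-scale pyramid SSD expects (used with single-output backbones such as
+# ResNet34, see `use_extra_head`).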
+class SSDExtraHead(nn.Layer):
+ def __init__(self,
+ in_channels=256,
+ out_channels=([256, 512], [256, 512], [128, 256], [128, 256],
+ [128, 256]),
+ strides=(2, 2, 2, 1, 1),
+ paddings=(1, 1, 1, 0, 0)):
+ super(SSDExtraHead, self).__init__()
+ self.convs = nn.LayerList()
+ for out_channel, stride, padding in zip(out_channels, strides,
+ paddings):
+ self.convs.append(
+ self._make_layers(in_channels, out_channel[0], out_channel[1],
+ stride, padding))
+ in_channels = out_channel[-1]
+
+ def _make_layers(self, c_in, c_hidden, c_out, stride_3x3, padding_3x3):
+ return nn.Sequential(
+ nn.Conv2D(c_in, c_hidden, 1),
+ nn.ReLU(),
+ nn.Conv2D(c_hidden, c_out, 3, stride_3x3, padding_3x3), nn.ReLU())
+
+ def forward(self, x):
+ out = [x]
+ for conv_layer in self.convs:
+ out.append(conv_layer(out[-1]))
+ return out
+
+
+@register
+class SSDHead(nn.Layer):
+ """
+ SSDHead
+
+ Args:
+ num_classes (int): Number of classes
+ in_channels (list): Number of channels per input feature
+ anchor_generator (dict): Configuration of 'AnchorGeneratorSSD' instance
+ kernel_size (int): Conv kernel size
+ padding (int): Conv padding
+ use_sepconv (bool): Use SepConvLayer if true
+ conv_decay (float): Conv regularization coeff
+ loss (object): 'SSDLoss' instance
+ use_extra_head (bool): If ResNet34 is used as the backbone, `use_extra_head` should be set to True
+ """
+
+ __shared__ = ['num_classes']
+ __inject__ = ['anchor_generator', 'loss']
+
+ def __init__(self,
+ num_classes=80,
+ in_channels=(512, 1024, 512, 256, 256, 256),
+ anchor_generator=AnchorGeneratorSSD().__dict__,
+ kernel_size=3,
+ padding=1,
+ use_sepconv=False,
+ conv_decay=0.,
+ loss='SSDLoss',
+ use_extra_head=False):
+ super(SSDHead, self).__init__()
+ # add background class
+ self.num_classes = num_classes + 1
+ self.in_channels = in_channels
+ self.anchor_generator = anchor_generator
+ self.loss = loss
+ self.use_extra_head = use_extra_head
+
+ if self.use_extra_head:
+ self.ssd_extra_head = SSDExtraHead()
+ self.in_channels = [256, 512, 512, 256, 256, 256]
+
+ if isinstance(anchor_generator, dict):
+ self.anchor_generator = AnchorGeneratorSSD(**anchor_generator)
+
+ self.num_priors = self.anchor_generator.num_priors
+ self.box_convs = []
+ self.score_convs = []
+ for i, num_prior in enumerate(self.num_priors):
+ box_conv_name = "boxes{}".format(i)
+ if not use_sepconv:
+ box_conv = self.add_sublayer(
+ box_conv_name,
+ nn.Conv2D(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * 4,
+ kernel_size=kernel_size,
+ padding=padding))
+ else:
+ box_conv = self.add_sublayer(
+ box_conv_name,
+ SepConvLayer(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * 4,
+ kernel_size=kernel_size,
+ padding=padding,
+ conv_decay=conv_decay))
+ self.box_convs.append(box_conv)
+
+ score_conv_name = "scores{}".format(i)
+ if not use_sepconv:
+ score_conv = self.add_sublayer(
+ score_conv_name,
+ nn.Conv2D(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * self.num_classes,
+ kernel_size=kernel_size,
+ padding=padding))
+ else:
+ score_conv = self.add_sublayer(
+ score_conv_name,
+ SepConvLayer(
+ in_channels=self.in_channels[i],
+ out_channels=num_prior * self.num_classes,
+ kernel_size=kernel_size,
+ padding=padding,
+ conv_decay=conv_decay))
+ self.score_convs.append(score_conv)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ def forward(self, feats, image, gt_bbox=None, gt_class=None):
+ if self.use_extra_head:
+ assert len(feats) == 1, \
+ ("If you set use_extra_head=True, backbone feature "
+ "list length should be 1.")
+ feats = self.ssd_extra_head(feats[0])
+ box_preds = []
+ cls_scores = []
+ for feat, box_conv, score_conv in zip(feats, self.box_convs,
+ self.score_convs):
+ box_pred = box_conv(feat)
+ box_pred = paddle.transpose(box_pred, [0, 2, 3, 1])
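+ # In paddle.reshape a 0 keeps that dimension, giving [N, num_priors, 4]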
+ box_pred = paddle.reshape(box_pred, [0, -1, 4])
+ box_preds.append(box_pred)
+
+ cls_score = score_conv(feat)
+ cls_score = paddle.transpose(cls_score, [0, 2, 3, 1])
+ cls_score = paddle.reshape(cls_score, [0, -1, self.num_classes])
+ cls_scores.append(cls_score)
+
+ prior_boxes = self.anchor_generator(feats, image)
+
+ if self.training:
+ return self.get_loss(box_preds, cls_scores, gt_bbox, gt_class,
+ prior_boxes)
+ else:
+ return (box_preds, cls_scores), prior_boxes
+
+ def get_loss(self, boxes, scores, gt_bbox, gt_class, prior_boxes):
+ return self.loss(boxes, scores, gt_bbox, gt_class, prior_boxes)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/tood_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/tood_head.py
new file mode 100644
index 000000000..b9dbd17e3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/tood_head.py
@@ -0,0 +1,425 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Constant
+
+from ppdet.core.workspace import register
+from ..initializer import normal_, constant_, bias_init_with_prob
+from ppdet.modeling.bbox_utils import bbox_center
+from ..losses import GIoULoss
+from paddle.vision.ops import deform_conv2d
+from ppdet.modeling.layers import ConvNormLayer
+
+
+class ScaleReg(nn.Layer):
+ """
+ Parameter for scaling the regression outputs.
+ """
+
+ def __init__(self, init_scale=1.):
+ super(ScaleReg, self).__init__()
+ self.scale_reg = self.create_parameter(
+ shape=[1],
+ attr=ParamAttr(initializer=Constant(value=init_scale)),
+ dtype="float32")
+
+ def forward(self, inputs):
+ out = inputs * self.scale_reg
+ return out
+
+
+class TaskDecomposition(nn.Layer):
+ """This code is based on
+ https://github.com/fcjian/TOOD/blob/master/mmdet/models/dense_heads/tood_head.py
+ """
+
+ def __init__(
+ self,
+ feat_channels,
+ stacked_convs,
+ la_down_rate=8,
+ norm_type='gn',
+ norm_groups=32, ):
+ super(TaskDecomposition, self).__init__()
+ self.feat_channels = feat_channels
+ self.stacked_convs = stacked_convs
+ self.norm_type = norm_type
+ self.norm_groups = norm_groups
+ self.in_channels = self.feat_channels * self.stacked_convs
+ self.la_conv1 = nn.Conv2D(self.in_channels,
+ self.in_channels // la_down_rate, 1)
+ self.la_conv2 = nn.Conv2D(self.in_channels // la_down_rate,
+ self.stacked_convs, 1)
+
+ self.reduction_conv = ConvNormLayer(
+ self.in_channels,
+ self.feat_channels,
+ filter_size=1,
+ stride=1,
+ norm_type=self.norm_type,
+ norm_groups=self.norm_groups)
+
+ self._init_weights()
+
+ def _init_weights(self):
+ normal_(self.la_conv1.weight, std=0.001)
+ normal_(self.la_conv2.weight, std=0.001)
+
+ def forward(self, feat, avg_feat=None):
+ b, _, h, w = feat.shape
+ if avg_feat is None:
+ avg_feat = F.adaptive_avg_pool2d(feat, (1, 1))
+ weight = F.relu(self.la_conv1(avg_feat))
+ weight = F.sigmoid(self.la_conv2(weight))
+
+ # Fold the layer-attention weights into the reduction conv weight
+ # (new_conv_weight = layer_attention_weight * conv_weight) to save
+ # memory and FLOPs.
+ conv_weight = weight.reshape([b, 1, self.stacked_convs, 1]) * \
+ self.reduction_conv.conv.weight.reshape(
+ [1, self.feat_channels, self.stacked_convs, self.feat_channels])
+ conv_weight = conv_weight.reshape(
+ [b, self.feat_channels, self.in_channels])
+ feat = feat.reshape([b, self.in_channels, h * w])
+ feat = paddle.bmm(conv_weight, feat).reshape(
+ [b, self.feat_channels, h, w])
+ if self.norm_type is not None:
+ feat = self.reduction_conv.norm(feat)
+ feat = F.relu(feat)
+ return feat
+
+
+@register
+class TOODHead(nn.Layer):
+ """This code is based on
+ https://github.com/fcjian/TOOD/blob/master/mmdet/models/dense_heads/tood_head.py
+ """
+ __inject__ = ['nms', 'static_assigner', 'assigner']
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ num_classes=80,
+ feat_channels=256,
+ stacked_convs=6,
+ fpn_strides=(8, 16, 32, 64, 128),
+ grid_cell_scale=8,
+ grid_cell_offset=0.5,
+ norm_type='gn',
+ norm_groups=32,
+ static_assigner_epoch=4,
+ use_align_head=True,
+ loss_weight={
+ 'class': 1.0,
+ 'bbox': 1.0,
+ 'iou': 2.0,
+ },
+ nms='MultiClassNMS',
+ static_assigner='ATSSAssigner',
+ assigner='TaskAlignedAssigner'):
+ super(TOODHead, self).__init__()
+ self.num_classes = num_classes
+ self.feat_channels = feat_channels
+ self.stacked_convs = stacked_convs
+ self.fpn_strides = fpn_strides
+ self.grid_cell_scale = grid_cell_scale
+ self.grid_cell_offset = grid_cell_offset
+ self.static_assigner_epoch = static_assigner_epoch
+ self.use_align_head = use_align_head
+ self.nms = nms
+ self.static_assigner = static_assigner
+ self.assigner = assigner
+ self.loss_weight = loss_weight
+ self.giou_loss = GIoULoss()
+
+ self.inter_convs = nn.LayerList()
+ for i in range(self.stacked_convs):
+ self.inter_convs.append(
+ ConvNormLayer(
+ self.feat_channels,
+ self.feat_channels,
+ filter_size=3,
+ stride=1,
+ norm_type=norm_type,
+ norm_groups=norm_groups))
+
+ self.cls_decomp = TaskDecomposition(
+ self.feat_channels,
+ self.stacked_convs,
+ self.stacked_convs * 8,
+ norm_type=norm_type,
+ norm_groups=norm_groups)
+ self.reg_decomp = TaskDecomposition(
+ self.feat_channels,
+ self.stacked_convs,
+ self.stacked_convs * 8,
+ norm_type=norm_type,
+ norm_groups=norm_groups)
+
+ self.tood_cls = nn.Conv2D(
+ self.feat_channels, self.num_classes, 3, padding=1)
+ self.tood_reg = nn.Conv2D(self.feat_channels, 4, 3, padding=1)
+
+ if self.use_align_head:
+ self.cls_prob_conv1 = nn.Conv2D(self.feat_channels *
+ self.stacked_convs,
+ self.feat_channels // 4, 1)
+ self.cls_prob_conv2 = nn.Conv2D(
+ self.feat_channels // 4, 1, 3, padding=1)
+ self.reg_offset_conv1 = nn.Conv2D(self.feat_channels *
+ self.stacked_convs,
+ self.feat_channels // 4, 1)
+ self.reg_offset_conv2 = nn.Conv2D(
+ self.feat_channels // 4, 4 * 2, 3, padding=1)
+
+ self.scales_regs = nn.LayerList([ScaleReg() for _ in self.fpn_strides])
+
+ self._init_weights()
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {
+ 'feat_channels': input_shape[0].channels,
+ 'fpn_strides': [i.stride for i in input_shape],
+ }
+
+ def _init_weights(self):
+ bias_cls = bias_init_with_prob(0.01)
+ normal_(self.tood_cls.weight, std=0.01)
+ constant_(self.tood_cls.bias, bias_cls)
+ normal_(self.tood_reg.weight, std=0.01)
+
+ if self.use_align_head:
+ normal_(self.cls_prob_conv1.weight, std=0.01)
+ normal_(self.cls_prob_conv2.weight, std=0.01)
+ constant_(self.cls_prob_conv2.bias, bias_cls)
+ normal_(self.reg_offset_conv1.weight, std=0.001)
+ normal_(self.reg_offset_conv2.weight, std=0.001)
+ constant_(self.reg_offset_conv2.bias)
+
+ def _generate_anchors(self, feats):
+ anchors, num_anchors_list = [], []
+ stride_tensor_list = []
+ for feat, stride in zip(feats, self.fpn_strides):
+ _, _, h, w = feat.shape
+ cell_half_size = self.grid_cell_scale * stride * 0.5
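+            # each anchor is a square cell with side grid_cell_scale * stride,
+            # centered on the offset grid points mapped back to input pixels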
+ shift_x = (paddle.arange(end=w) + self.grid_cell_offset) * stride
+ shift_y = (paddle.arange(end=h) + self.grid_cell_offset) * stride
+ shift_y, shift_x = paddle.meshgrid(shift_y, shift_x)
+ anchor = paddle.stack(
+ [
+ shift_x - cell_half_size, shift_y - cell_half_size,
+ shift_x + cell_half_size, shift_y + cell_half_size
+ ],
+ axis=-1)
+ anchors.append(anchor.reshape([-1, 4]))
+ num_anchors_list.append(len(anchors[-1]))
+ stride_tensor_list.append(
+ paddle.full([num_anchors_list[-1], 1], stride))
+ return anchors, num_anchors_list, stride_tensor_list
+
+ @staticmethod
+ def _batch_distance2bbox(points, distance, max_shapes=None):
+ """Decode distance prediction to bounding box.
+ Args:
+ points (Tensor): [B, l, 2]
+ distance (Tensor): [B, l, 4]
+            max_shapes (tuple): [B, 2], shape of the images in "h w" format.
+ Returns:
+ Tensor: Decoded bboxes.
+ """
+ x1 = points[:, :, 0] - distance[:, :, 0]
+ y1 = points[:, :, 1] - distance[:, :, 1]
+ x2 = points[:, :, 0] + distance[:, :, 2]
+ y2 = points[:, :, 1] + distance[:, :, 3]
+ bboxes = paddle.stack([x1, y1, x2, y2], -1)
+ if max_shapes is not None:
+ out_bboxes = []
+ for bbox, max_shape in zip(bboxes, max_shapes):
+ bbox[:, 0] = bbox[:, 0].clip(min=0, max=max_shape[1])
+ bbox[:, 1] = bbox[:, 1].clip(min=0, max=max_shape[0])
+ bbox[:, 2] = bbox[:, 2].clip(min=0, max=max_shape[1])
+ bbox[:, 3] = bbox[:, 3].clip(min=0, max=max_shape[0])
+ out_bboxes.append(bbox)
+ out_bboxes = paddle.stack(out_bboxes)
+ return out_bboxes
+ return bboxes
+
+ @staticmethod
+ def _deform_sampling(feat, offset):
+ """ Sampling the feature according to offset.
+ Args:
+ feat (Tensor): Feature
+ offset (Tensor): Spatial offset for for feature sampliing
+ """
+        # this is an equivalent implementation of bilinear sampling;
+        # F.grid_sample could be used instead
+ c = feat.shape[1]
+ weight = paddle.ones([c, 1, 1, 1])
+ y = deform_conv2d(feat, offset, weight, deformable_groups=c, groups=c)
+ return y
+
+ def forward(self, feats):
+ assert len(feats) == len(self.fpn_strides), \
+ "The size of feats is not equal to size of fpn_strides"
+
+ anchors, num_anchors_list, stride_tensor_list = self._generate_anchors(
+ feats)
+ cls_score_list, bbox_pred_list = [], []
+ for feat, scale_reg, anchor, stride in zip(feats, self.scales_regs,
+ anchors, self.fpn_strides):
+ b, _, h, w = feat.shape
+ inter_feats = []
+ for inter_conv in self.inter_convs:
+ feat = F.relu(inter_conv(feat))
+ inter_feats.append(feat)
+ feat = paddle.concat(inter_feats, axis=1)
+
+ # task decomposition
+ avg_feat = F.adaptive_avg_pool2d(feat, (1, 1))
+ cls_feat = self.cls_decomp(feat, avg_feat)
+ reg_feat = self.reg_decomp(feat, avg_feat)
+
+ # cls prediction and alignment
+ cls_logits = self.tood_cls(cls_feat)
+ if self.use_align_head:
+ cls_prob = F.relu(self.cls_prob_conv1(feat))
+ cls_prob = F.sigmoid(self.cls_prob_conv2(cls_prob))
+ cls_score = (F.sigmoid(cls_logits) * cls_prob).sqrt()
+ else:
+ cls_score = F.sigmoid(cls_logits)
+ cls_score_list.append(cls_score.flatten(2).transpose([0, 2, 1]))
+
+ # reg prediction and alignment
+ reg_dist = scale_reg(self.tood_reg(reg_feat).exp())
+ reg_dist = reg_dist.transpose([0, 2, 3, 1]).reshape([b, -1, 4])
+ anchor_centers = bbox_center(anchor).unsqueeze(0) / stride
+ reg_bbox = self._batch_distance2bbox(
+ anchor_centers.tile([b, 1, 1]), reg_dist)
+ if self.use_align_head:
+ reg_bbox = reg_bbox.reshape([b, h, w, 4]).transpose(
+ [0, 3, 1, 2])
+ reg_offset = F.relu(self.reg_offset_conv1(feat))
+ reg_offset = self.reg_offset_conv2(reg_offset)
+ bbox_pred = self._deform_sampling(reg_bbox, reg_offset)
+ bbox_pred = bbox_pred.flatten(2).transpose([0, 2, 1])
+ else:
+ bbox_pred = reg_bbox
+
+ if not self.training:
+ bbox_pred *= stride
+ bbox_pred_list.append(bbox_pred)
+ cls_score_list = paddle.concat(cls_score_list, axis=1)
+ bbox_pred_list = paddle.concat(bbox_pred_list, axis=1)
+ anchors = paddle.concat(anchors)
+ anchors.stop_gradient = True
+ stride_tensor_list = paddle.concat(stride_tensor_list).unsqueeze(0)
+ stride_tensor_list.stop_gradient = True
+
+ return cls_score_list, bbox_pred_list, anchors, num_anchors_list, stride_tensor_list
+
+ @staticmethod
+ def _focal_loss(score, label, alpha=0.25, gamma=2.0):
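+        # soft-label focal variant: `label` may be a continuous alignment score
+        # (cf. Quality Focal Loss), with (score - label) ** gamma as the
+        # modulating factor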
+ weight = (score - label).pow(gamma)
+ if alpha > 0:
+ alpha_t = alpha * label + (1 - alpha) * (1 - label)
+ weight *= alpha_t
+ loss = F.binary_cross_entropy(
+ score, label, weight=weight, reduction='sum')
+ return loss
+
+ def get_loss(self, head_outs, gt_meta):
+ pred_scores, pred_bboxes, anchors, num_anchors_list, stride_tensor_list = head_outs
+ gt_labels = gt_meta['gt_class']
+ gt_bboxes = gt_meta['gt_bbox']
+ # label assignment
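+        # warm up with the static ATSS assigner for the first
+        # static_assigner_epoch epochs, then switch to task-aligned assignment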
+ if gt_meta['epoch_id'] < self.static_assigner_epoch:
+ assigned_labels, assigned_bboxes, assigned_scores = self.static_assigner(
+ anchors,
+ num_anchors_list,
+ gt_labels,
+ gt_bboxes,
+ bg_index=self.num_classes)
+ alpha_l = 0.25
+ else:
+ assigned_labels, assigned_bboxes, assigned_scores = self.assigner(
+ pred_scores.detach(),
+ pred_bboxes.detach() * stride_tensor_list,
+ bbox_center(anchors),
+ gt_labels,
+ gt_bboxes,
+ bg_index=self.num_classes)
+ alpha_l = -1
+
+ # rescale bbox
+ assigned_bboxes /= stride_tensor_list
+ # classification loss
+ loss_cls = self._focal_loss(pred_scores, assigned_scores, alpha=alpha_l)
+ # select positive samples mask
+ mask_positive = (assigned_labels != self.num_classes)
+ num_pos = mask_positive.astype(paddle.float32).sum()
+ # bbox regression loss
+ if num_pos > 0:
+ bbox_mask = mask_positive.unsqueeze(-1).tile([1, 1, 4])
+ pred_bboxes_pos = paddle.masked_select(pred_bboxes,
+ bbox_mask).reshape([-1, 4])
+ assigned_bboxes_pos = paddle.masked_select(
+ assigned_bboxes, bbox_mask).reshape([-1, 4])
+ bbox_weight = paddle.masked_select(
+ assigned_scores.sum(-1), mask_positive).unsqueeze(-1)
+ # iou loss
+ loss_iou = self.giou_loss(pred_bboxes_pos,
+ assigned_bboxes_pos) * bbox_weight
+ loss_iou = loss_iou.sum() / bbox_weight.sum()
+ # l1 loss
+ loss_l1 = F.l1_loss(pred_bboxes_pos, assigned_bboxes_pos)
+ else:
+ loss_iou = paddle.zeros([1])
+ loss_l1 = paddle.zeros([1])
+
+ loss_cls /= assigned_scores.sum().clip(min=1)
+ loss = self.loss_weight['class'] * loss_cls + self.loss_weight[
+ 'iou'] * loss_iou
+
+ return {
+ 'loss': loss,
+ 'loss_class': loss_cls,
+ 'loss_iou': loss_iou,
+ 'loss_l1': loss_l1
+ }
+
+ def post_process(self, head_outs, img_shape, scale_factor):
+ pred_scores, pred_bboxes, _, _, _ = head_outs
+ pred_scores = pred_scores.transpose([0, 2, 1])
+
+ for i in range(len(pred_bboxes)):
+ pred_bboxes[i, :, 0] = pred_bboxes[i, :, 0].clip(
+ min=0, max=img_shape[i, 1])
+ pred_bboxes[i, :, 1] = pred_bboxes[i, :, 1].clip(
+ min=0, max=img_shape[i, 0])
+ pred_bboxes[i, :, 2] = pred_bboxes[i, :, 2].clip(
+ min=0, max=img_shape[i, 1])
+ pred_bboxes[i, :, 3] = pred_bboxes[i, :, 3].clip(
+ min=0, max=img_shape[i, 0])
+ # scale bbox to origin
+ scale_factor = scale_factor.flip([1]).tile([1, 2]).unsqueeze(1)
+ pred_bboxes /= scale_factor
+ bbox_pred, bbox_num, _ = self.nms(pred_bboxes, pred_scores)
+ return bbox_pred, bbox_num
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ttf_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ttf_head.py
new file mode 100644
index 000000000..dfe97bdb7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/ttf_head.py
@@ -0,0 +1,311 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Constant, Normal
+from paddle.regularizer import L2Decay
+from ppdet.core.workspace import register
+from ppdet.modeling.layers import DeformableConvV2, LiteConv
+import numpy as np
+
+
+@register
+class HMHead(nn.Layer):
+ """
+ Args:
+ ch_in (int): The channel number of input Tensor.
+ ch_out (int): The channel number of output Tensor.
+ num_classes (int): Number of classes.
+        conv_num (int): The number of convolutions in hm_feat.
+        dcn_head (bool): whether to use DCN in the head. False by default.
+        lite_head (bool): whether to use the lite version. False by default.
+        norm_type (string): norm type; 'sync_bn', 'bn' and 'gn' are optional.
+            'bn' by default
+
+ Return:
+ Heatmap head output
+ """
+ __shared__ = ['num_classes', 'norm_type']
+
+ def __init__(
+ self,
+ ch_in,
+ ch_out=128,
+ num_classes=80,
+ conv_num=2,
+ dcn_head=False,
+ lite_head=False,
+ norm_type='bn', ):
+ super(HMHead, self).__init__()
+ head_conv = nn.Sequential()
+ for i in range(conv_num):
+ name = 'conv.{}'.format(i)
+ if lite_head:
+ lite_name = 'hm.' + name
+ head_conv.add_sublayer(
+ lite_name,
+ LiteConv(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ norm_type=norm_type))
+ else:
+ if dcn_head:
+ head_conv.add_sublayer(
+ name,
+ DeformableConvV2(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ kernel_size=3,
+ weight_attr=ParamAttr(initializer=Normal(0, 0.01))))
+ else:
+ head_conv.add_sublayer(
+ name,
+ nn.Conv2D(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0, 0.01)),
+ bias_attr=ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.))))
+ head_conv.add_sublayer(name + '.act', nn.ReLU())
+ self.feat = head_conv
+ bias_init = float(-np.log((1 - 0.01) / 0.01))
+ weight_attr = None if lite_head else ParamAttr(initializer=Normal(0,
+ 0.01))
+ self.head = nn.Conv2D(
+ in_channels=ch_out,
+ out_channels=num_classes,
+ kernel_size=1,
+ weight_attr=weight_attr,
+ bias_attr=ParamAttr(
+ learning_rate=2.,
+ regularizer=L2Decay(0.),
+ initializer=Constant(bias_init)))
+
+ def forward(self, feat):
+ out = self.feat(feat)
+ out = self.head(out)
+ return out
+
+
+@register
+class WHHead(nn.Layer):
+ """
+ Args:
+ ch_in (int): The channel number of input Tensor.
+ ch_out (int): The channel number of output Tensor.
+        conv_num (int): The number of convolutions in wh_feat.
+        dcn_head (bool): whether to use DCN in the head. False by default.
+        lite_head (bool): whether to use the lite version. False by default.
+        norm_type (string): norm type; 'sync_bn', 'bn' and 'gn' are optional.
+            'bn' by default
+ Return:
+ Width & Height head output
+ """
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ ch_in,
+ ch_out=64,
+ conv_num=2,
+ dcn_head=False,
+ lite_head=False,
+ norm_type='bn'):
+ super(WHHead, self).__init__()
+ head_conv = nn.Sequential()
+ for i in range(conv_num):
+ name = 'conv.{}'.format(i)
+ if lite_head:
+ lite_name = 'wh.' + name
+ head_conv.add_sublayer(
+ lite_name,
+ LiteConv(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ norm_type=norm_type))
+ else:
+ if dcn_head:
+ head_conv.add_sublayer(
+ name,
+ DeformableConvV2(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ kernel_size=3,
+ weight_attr=ParamAttr(initializer=Normal(0, 0.01))))
+ else:
+ head_conv.add_sublayer(
+ name,
+ nn.Conv2D(
+ in_channels=ch_in if i == 0 else ch_out,
+ out_channels=ch_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0, 0.01)),
+ bias_attr=ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.))))
+ head_conv.add_sublayer(name + '.act', nn.ReLU())
+
+ weight_attr = None if lite_head else ParamAttr(initializer=Normal(0,
+ 0.01))
+ self.feat = head_conv
+ self.head = nn.Conv2D(
+ in_channels=ch_out,
+ out_channels=4,
+ kernel_size=1,
+ weight_attr=weight_attr,
+ bias_attr=ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.)))
+
+ def forward(self, feat):
+ out = self.feat(feat)
+ out = self.head(out)
+ out = F.relu(out)
+ return out
+
+
+@register
+class TTFHead(nn.Layer):
+ """
+ TTFHead
+ Args:
+ in_channels (int): the channel number of input to TTFHead.
+ num_classes (int): the number of classes, 80 by default.
+ hm_head_planes (int): the channel number in heatmap head,
+ 128 by default.
+ wh_head_planes (int): the channel number in width & height head,
+ 64 by default.
+        hm_head_conv_num (int): the number of convolutions in the heatmap head,
+            2 by default.
+        wh_head_conv_num (int): the number of convolutions in the width &
+            height head, 2 by default.
+ hm_loss (object): Instance of 'CTFocalLoss'.
+ wh_loss (object): Instance of 'GIoULoss'.
+ wh_offset_base (float): the base offset of width and height,
+ 16.0 by default.
+ down_ratio (int): the actual down_ratio is calculated by base_down_ratio
+ (default 16) and the number of upsample layers.
+        lite_head (bool): whether to use the lite version. False by default.
+        norm_type (string): norm type; 'sync_bn', 'bn' and 'gn' are optional.
+            'bn' by default
+        ags_module (bool): whether to use the AGS module to reweight location
+            features. False by default.
+
+ """
+
+ __shared__ = ['num_classes', 'down_ratio', 'norm_type']
+ __inject__ = ['hm_loss', 'wh_loss']
+
+ def __init__(self,
+ in_channels,
+ num_classes=80,
+ hm_head_planes=128,
+ wh_head_planes=64,
+ hm_head_conv_num=2,
+ wh_head_conv_num=2,
+ hm_loss='CTFocalLoss',
+ wh_loss='GIoULoss',
+ wh_offset_base=16.,
+ down_ratio=4,
+ dcn_head=False,
+ lite_head=False,
+ norm_type='bn',
+ ags_module=False):
+ super(TTFHead, self).__init__()
+ self.in_channels = in_channels
+ self.hm_head = HMHead(in_channels, hm_head_planes, num_classes,
+ hm_head_conv_num, dcn_head, lite_head, norm_type)
+ self.wh_head = WHHead(in_channels, wh_head_planes, wh_head_conv_num,
+ dcn_head, lite_head, norm_type)
+ self.hm_loss = hm_loss
+ self.wh_loss = wh_loss
+
+ self.wh_offset_base = wh_offset_base
+ self.down_ratio = down_ratio
+ self.ags_module = ags_module
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ if isinstance(input_shape, (list, tuple)):
+ input_shape = input_shape[0]
+ return {'in_channels': input_shape.channels, }
+
+ def forward(self, feats):
+ hm = self.hm_head(feats)
+ wh = self.wh_head(feats) * self.wh_offset_base
+ return hm, wh
+
+ def filter_box_by_weight(self, pred, target, weight):
+ """
+        Filter out boxes whose ttf_reg_weight is 0, keeping only positive samples.
+ """
+ index = paddle.nonzero(weight > 0)
+ index.stop_gradient = True
+ weight = paddle.gather_nd(weight, index)
+ pred = paddle.gather_nd(pred, index)
+ target = paddle.gather_nd(target, index)
+ return pred, target, weight
+
+ def filter_loc_by_weight(self, score, weight):
+ index = paddle.nonzero(weight > 0)
+ index.stop_gradient = True
+ score = paddle.gather_nd(score, index)
+ return score
+
+ def get_loss(self, pred_hm, pred_wh, target_hm, box_target, target_weight):
+ pred_hm = paddle.clip(F.sigmoid(pred_hm), 1e-4, 1 - 1e-4)
+ hm_loss = self.hm_loss(pred_hm, target_hm)
+ H, W = target_hm.shape[2:]
+ mask = paddle.reshape(target_weight, [-1, H, W])
+ avg_factor = paddle.sum(mask) + 1e-4
+
+ base_step = self.down_ratio
+ shifts_x = paddle.arange(0, W * base_step, base_step, dtype='int32')
+ shifts_y = paddle.arange(0, H * base_step, base_step, dtype='int32')
+ shift_y, shift_x = paddle.tensor.meshgrid([shifts_y, shifts_x])
+ base_loc = paddle.stack([shift_x, shift_y], axis=0)
+ base_loc.stop_gradient = True
+
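+        # the wh head predicts (l, t, r, b) distances in input pixels; decode to
+        # corner boxes around each grid location: (x - l, y - t, x + r, y + b)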
+ pred_boxes = paddle.concat(
+ [0 - pred_wh[:, 0:2, :, :] + base_loc, pred_wh[:, 2:4] + base_loc],
+ axis=1)
+ pred_boxes = paddle.transpose(pred_boxes, [0, 2, 3, 1])
+ boxes = paddle.transpose(box_target, [0, 2, 3, 1])
+ boxes.stop_gradient = True
+
+ if self.ags_module:
+ pred_hm_max = paddle.max(pred_hm, axis=1, keepdim=True)
+ pred_hm_max_softmax = F.softmax(pred_hm_max, axis=1)
+ pred_hm_max_softmax = paddle.transpose(pred_hm_max_softmax,
+ [0, 2, 3, 1])
+ pred_hm_max_softmax = self.filter_loc_by_weight(pred_hm_max_softmax,
+ mask)
+ else:
+ pred_hm_max_softmax = None
+
+ pred_boxes, boxes, mask = self.filter_box_by_weight(pred_boxes, boxes,
+ mask)
+ mask.stop_gradient = True
+ wh_loss = self.wh_loss(
+ pred_boxes,
+ boxes,
+ iou_weight=mask.unsqueeze(1),
+ loc_reweight=pred_hm_max_softmax)
+ wh_loss = wh_loss / avg_factor
+
+ ttf_loss = {'hm_loss': hm_loss, 'wh_loss': wh_loss}
+ return ttf_loss
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/yolo_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/yolo_head.py
new file mode 100644
index 000000000..7b4e9bc33
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/heads/yolo_head.py
@@ -0,0 +1,124 @@
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from ppdet.core.workspace import register
+
+
+def _de_sigmoid(x, eps=1e-7):
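+    # numerically stable inverse sigmoid (logit): clip x away from {0, 1},
+    # then logit(x) = -log(1/x - 1)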
+ x = paddle.clip(x, eps, 1. / eps)
+ x = paddle.clip(1. / x - 1., eps, 1. / eps)
+ x = -paddle.log(x)
+ return x
+
+
+@register
+class YOLOv3Head(nn.Layer):
+ __shared__ = ['num_classes', 'data_format']
+ __inject__ = ['loss']
+
+ def __init__(self,
+ in_channels=[1024, 512, 256],
+ anchors=[[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
+ [59, 119], [116, 90], [156, 198], [373, 326]],
+ anchor_masks=[[6, 7, 8], [3, 4, 5], [0, 1, 2]],
+ num_classes=80,
+ loss='YOLOv3Loss',
+ iou_aware=False,
+ iou_aware_factor=0.4,
+ data_format='NCHW'):
+ """
+ Head for YOLOv3 network
+
+ Args:
+ num_classes (int): number of foreground classes
+ anchors (list): anchors
+ anchor_masks (list): anchor masks
+ loss (object): YOLOv3Loss instance
+ iou_aware (bool): whether to use iou_aware
+ iou_aware_factor (float): iou aware factor
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(YOLOv3Head, self).__init__()
+        assert len(in_channels) > 0, "in_channels length should be > 0"
+ self.in_channels = in_channels
+ self.num_classes = num_classes
+ self.loss = loss
+
+ self.iou_aware = iou_aware
+ self.iou_aware_factor = iou_aware_factor
+
+ self.parse_anchor(anchors, anchor_masks)
+ self.num_outputs = len(self.anchors)
+ self.data_format = data_format
+
+ self.yolo_outputs = []
+ for i in range(len(self.anchors)):
+
+ if self.iou_aware:
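+                # one extra channel per anchor for the IoU-awareness branch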
+ num_filters = len(self.anchors[i]) * (self.num_classes + 6)
+ else:
+ num_filters = len(self.anchors[i]) * (self.num_classes + 5)
+ name = 'yolo_output.{}'.format(i)
+ conv = nn.Conv2D(
+ in_channels=self.in_channels[i],
+ out_channels=num_filters,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ data_format=data_format,
+ bias_attr=ParamAttr(regularizer=L2Decay(0.)))
+ conv.skip_quant = True
+ yolo_output = self.add_sublayer(name, conv)
+ self.yolo_outputs.append(yolo_output)
+
+ def parse_anchor(self, anchors, anchor_masks):
+ self.anchors = [[anchors[i] for i in mask] for mask in anchor_masks]
+ self.mask_anchors = []
+ anchor_num = len(anchors)
+ for masks in anchor_masks:
+ self.mask_anchors.append([])
+ for mask in masks:
+ assert mask < anchor_num, "anchor mask index overflow"
+ self.mask_anchors[-1].extend(anchors[mask])
+
+ def forward(self, feats, targets=None):
+ assert len(feats) == len(self.anchors)
+ yolo_outputs = []
+ for i, feat in enumerate(feats):
+ yolo_output = self.yolo_outputs[i](feat)
+ if self.data_format == 'NHWC':
+ yolo_output = paddle.transpose(yolo_output, [0, 3, 1, 2])
+ yolo_outputs.append(yolo_output)
+
+ if self.training:
+ return self.loss(yolo_outputs, targets, self.anchors)
+ else:
+ if self.iou_aware:
+ y = []
+ for i, out in enumerate(yolo_outputs):
+ na = len(self.anchors[i])
+ ioup, x = out[:, 0:na, :, :], out[:, na:, :, :]
+ b, c, h, w = x.shape
+ no = c // na
+ x = x.reshape((b, na, no, h * w))
+ ioup = ioup.reshape((b, na, 1, h * w))
+ obj = x[:, :, 4:5, :]
+ ioup = F.sigmoid(ioup)
+ obj = F.sigmoid(obj)
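+                    # fuse objectness and predicted IoU as a power-weighted
+                    # geometric mean, then map back to logit space so the
+                    # decoder can apply sigmoid as usual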
+ obj_t = (obj**(1 - self.iou_aware_factor)) * (
+ ioup**self.iou_aware_factor)
+ obj_t = _de_sigmoid(obj_t)
+ loc_t = x[:, :, :4, :]
+ cls_t = x[:, :, 5:, :]
+ y_t = paddle.concat([loc_t, obj_t, cls_t], axis=2)
+ y_t = y_t.reshape((b, c, h, w))
+ y.append(y_t)
+ return y
+ else:
+ return yolo_outputs
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/initializer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/initializer.py
new file mode 100644
index 000000000..b7a135dcc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/initializer.py
@@ -0,0 +1,317 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py
+The copyright of pytorch/pytorch is a BSD-style license, as found in the LICENSE file.
+"""
+
+import math
+import numpy as np
+
+import paddle
+import paddle.nn as nn
+
+__all__ = [
+ 'uniform_',
+ 'normal_',
+ 'constant_',
+ 'ones_',
+ 'zeros_',
+ 'xavier_uniform_',
+ 'xavier_normal_',
+ 'kaiming_uniform_',
+ 'kaiming_normal_',
+ 'linear_init_',
+ 'conv_init_',
+ 'reset_initialized_parameter',
+]
+
+
+def _no_grad_uniform_(tensor, a, b):
+ with paddle.no_grad():
+ tensor.set_value(
+ paddle.uniform(
+ shape=tensor.shape, dtype=tensor.dtype, min=a, max=b))
+ return tensor
+
+
+def _no_grad_normal_(tensor, mean=0., std=1.):
+ with paddle.no_grad():
+ tensor.set_value(paddle.normal(mean=mean, std=std, shape=tensor.shape))
+ return tensor
+
+
+def _no_grad_fill_(tensor, value=0.):
+ with paddle.no_grad():
+ tensor.set_value(paddle.full_like(tensor, value, dtype=tensor.dtype))
+ return tensor
+
+
+def uniform_(tensor, a, b):
+ """
+    Modify tensor in place using uniform_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+ a (float|int): min value.
+ b (float|int): max value.
+ Return:
+ tensor
+ """
+ return _no_grad_uniform_(tensor, a, b)
+
+
+def normal_(tensor, mean=0., std=1.):
+ """
+    Modify tensor in place using normal_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+ mean (float|int): mean value.
+ std (float|int): std value.
+ Return:
+ tensor
+ """
+ return _no_grad_normal_(tensor, mean, std)
+
+
+def constant_(tensor, value=0.):
+ """
+    Modify tensor in place using constant_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+ value (float|int): value to fill tensor.
+ Return:
+ tensor
+ """
+ return _no_grad_fill_(tensor, value)
+
+
+def ones_(tensor):
+ """
+    Modify tensor in place using ones_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+ Return:
+ tensor
+ """
+ return _no_grad_fill_(tensor, 1)
+
+
+def zeros_(tensor):
+ """
+    Modify tensor in place using zeros_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+ Return:
+ tensor
+ """
+ return _no_grad_fill_(tensor, 0)
+
+
+def _calculate_fan_in_and_fan_out(tensor, reverse=False):
+ """
+    Calculate (fan_in, fan_out) for tensor
+
+ Args:
+ tensor (Tensor): paddle.Tensor
+        reverse (bool): tensor data format order; False by default means [fout, fin, ...], e.g. conv.weight [cout, cin, kh, kw] is False and linear.weight [cin, cout] is True
+
+ Return:
+ Tuple[fan_in, fan_out]
+ """
+ if tensor.ndim < 2:
+ raise ValueError(
+ "Fan in and fan out can not be computed for tensor with fewer than 2 dimensions"
+ )
+
+ if reverse:
+ num_input_fmaps, num_output_fmaps = tensor.shape[0], tensor.shape[1]
+ else:
+ num_input_fmaps, num_output_fmaps = tensor.shape[1], tensor.shape[0]
+
+ receptive_field_size = 1
+ if tensor.ndim > 2:
+ receptive_field_size = np.prod(tensor.shape[2:])
+
+ fan_in = num_input_fmaps * receptive_field_size
+ fan_out = num_output_fmaps * receptive_field_size
+
+ return fan_in, fan_out
+
+
+def xavier_uniform_(tensor, gain=1., reverse=False):
+ """
+    Modify tensor in place using xavier_uniform_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+        gain (float): scaling factor, 1. by default.
+        reverse (bool): tensor data format order, False by default as [fout, fin, ...].
+ Return:
+ tensor
+ """
+ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor, reverse=reverse)
+ std = gain * math.sqrt(2.0 / float(fan_in + fan_out))
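+    # U(-k, k) has std k / sqrt(3), so k = sqrt(3) * std matches the target std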
+ k = math.sqrt(3.0) * std
+ return _no_grad_uniform_(tensor, -k, k)
+
+
+def xavier_normal_(tensor, gain=1., reverse=False):
+ """
+    Modify tensor in place using xavier_normal_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+        gain (float): scaling factor, 1. by default.
+        reverse (bool): tensor data format order, False by default as [fout, fin, ...].
+ Return:
+ tensor
+ """
+ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor, reverse=reverse)
+ std = gain * math.sqrt(2.0 / float(fan_in + fan_out))
+ return _no_grad_normal_(tensor, 0, std)
+
+
+# reference: https://pytorch.org/docs/stable/_modules/torch/nn/init.html
+def _calculate_correct_fan(tensor, mode, reverse=False):
+ mode = mode.lower()
+ valid_modes = ['fan_in', 'fan_out']
+ if mode not in valid_modes:
+ raise ValueError("Mode {} not supported, please use one of {}".format(
+ mode, valid_modes))
+
+ fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor, reverse)
+
+ return fan_in if mode == 'fan_in' else fan_out
+
+
+def _calculate_gain(nonlinearity, param=None):
+ linear_fns = [
+ 'linear', 'conv1d', 'conv2d', 'conv3d', 'conv_transpose1d',
+ 'conv_transpose2d', 'conv_transpose3d'
+ ]
+ if nonlinearity in linear_fns or nonlinearity == 'sigmoid':
+ return 1
+ elif nonlinearity == 'tanh':
+ return 5.0 / 3
+ elif nonlinearity == 'relu':
+ return math.sqrt(2.0)
+ elif nonlinearity == 'leaky_relu':
+ if param is None:
+ negative_slope = 0.01
+ elif not isinstance(param, bool) and isinstance(
+ param, int) or isinstance(param, float):
+ # True/False are instances of int, hence check above
+ negative_slope = param
+ else:
+ raise ValueError("negative_slope {} not a valid number".format(
+ param))
+ return math.sqrt(2.0 / (1 + negative_slope**2))
+ elif nonlinearity == 'selu':
+ return 3.0 / 4
+ else:
+ raise ValueError("Unsupported nonlinearity {}".format(nonlinearity))
+
+
+def kaiming_uniform_(tensor,
+ a=0,
+ mode='fan_in',
+ nonlinearity='leaky_relu',
+ reverse=False):
+ """
+    Modify tensor in place using the kaiming_uniform method
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+        mode (str): ['fan_in', 'fan_out'], 'fan_in' by default
+        nonlinearity (str): nonlinearity method name
+        reverse (bool): tensor data format order, False by default as [fout, fin, ...].
+ Return:
+ tensor
+ """
+ fan = _calculate_correct_fan(tensor, mode, reverse)
+ gain = _calculate_gain(nonlinearity, a)
+ std = gain / math.sqrt(fan)
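+    # same uniform bound conversion as in xavier_uniform_: k = sqrt(3) * std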
+ k = math.sqrt(3.0) * std
+ return _no_grad_uniform_(tensor, -k, k)
+
+
+def kaiming_normal_(tensor,
+ a=0,
+ mode='fan_in',
+ nonlinearity='leaky_relu',
+ reverse=False):
+ """
+    Modify tensor in place using kaiming_normal_
+ Args:
+ tensor (paddle.Tensor): paddle Tensor
+        mode (str): ['fan_in', 'fan_out'], 'fan_in' by default
+        nonlinearity (str): nonlinearity method name
+        reverse (bool): tensor data format order, False by default as [fout, fin, ...].
+ Return:
+ tensor
+ """
+ fan = _calculate_correct_fan(tensor, mode, reverse)
+ gain = _calculate_gain(nonlinearity, a)
+ std = gain / math.sqrt(fan)
+ return _no_grad_normal_(tensor, 0, std)
+
+
+def linear_init_(module):
+ bound = 1 / math.sqrt(module.weight.shape[0])
+ uniform_(module.weight, -bound, bound)
+ uniform_(module.bias, -bound, bound)
+
+
+def conv_init_(module):
+ bound = 1 / np.sqrt(np.prod(module.weight.shape[1:]))
+ uniform_(module.weight, -bound, bound)
+ uniform_(module.bias, -bound, bound)
+
+
+def bias_init_with_prob(prior_prob=0.01):
+ """initialize conv/fc bias value according to a given probability value."""
+ bias_init = float(-np.log((1 - prior_prob) / prior_prob))
+ return bias_init
+
+
+@paddle.no_grad()
+def reset_initialized_parameter(model, include_self=True):
+ """
+    Reset initialized parameters with the default schemes below for [conv, linear, embedding, bn]
+
+ Args:
+ model (paddle.Layer): paddle Layer
+        include_self (bool: True): passed to Layer.named_sublayers; whether to include the model itself
+ Return:
+ None
+ """
+ for _, m in model.named_sublayers(include_self=include_self):
+ if isinstance(m, nn.Conv2D):
+ k = float(m._groups) / (m._in_channels * m._kernel_size[0] *
+ m._kernel_size[1])
+ k = math.sqrt(k)
+ _no_grad_uniform_(m.weight, -k, k)
+ if hasattr(m, 'bias') and getattr(m, 'bias') is not None:
+ _no_grad_uniform_(m.bias, -k, k)
+
+ elif isinstance(m, nn.Linear):
+ k = math.sqrt(1. / m.weight.shape[0])
+ _no_grad_uniform_(m.weight, -k, k)
+ if hasattr(m, 'bias') and getattr(m, 'bias') is not None:
+ _no_grad_uniform_(m.bias, -k, k)
+
+ elif isinstance(m, nn.Embedding):
+ _no_grad_normal_(m.weight, mean=0., std=1.)
+
+ elif isinstance(m, (nn.BatchNorm2D, nn.LayerNorm)):
+ _no_grad_fill_(m.weight, 1.)
+ if hasattr(m, 'bias') and getattr(m, 'bias') is not None:
+ _no_grad_fill_(m.bias, 0)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/keypoint_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/keypoint_utils.py
new file mode 100644
index 000000000..b3f84da7d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/keypoint_utils.py
@@ -0,0 +1,336 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import numpy as np
+
+
+def get_affine_mat_kernel(h, w, s, inv=False):
+ if w < h:
+ w_ = s
+ h_ = int(np.ceil((s / w * h) / 64.) * 64)
+ scale_w = w
+ scale_h = h_ / w_ * w
+
+ else:
+ h_ = s
+ w_ = int(np.ceil((s / h * w) / 64.) * 64)
+ scale_h = h
+ scale_w = w_ / h_ * h
+
+ center = np.array([np.round(w / 2.), np.round(h / 2.)])
+
+ size_resized = (w_, h_)
+ trans = get_affine_transform(
+ center, np.array([scale_w, scale_h]), 0, size_resized, inv=inv)
+
+ return trans, size_resized
+
+
+def get_affine_transform(center,
+ input_size,
+ rot,
+ output_size,
+ shift=(0., 0.),
+ inv=False):
+ """Get the affine transform matrix, given the center/scale/rot/output_size.
+
+ Args:
+ center (np.ndarray[2, ]): Center of the bounding box (x, y).
+ input_size (np.ndarray[2, ]): Size of input feature (width, height).
+ rot (float): Rotation angle (degree).
+ output_size (np.ndarray[2, ]): Size of the destination heatmaps.
+ shift (0-100%): Shift translation ratio wrt the width/height.
+ Default (0., 0.).
+ inv (bool): Option to inverse the affine transform direction.
+ (inv=False: src->dst or inv=True: dst->src)
+
+ Returns:
+ np.ndarray: The transform matrix.
+ """
+ assert len(center) == 2
+ assert len(output_size) == 2
+ assert len(shift) == 2
+
+ if not isinstance(input_size, (np.ndarray, list)):
+ input_size = np.array([input_size, input_size], dtype=np.float32)
+ scale_tmp = input_size
+
+ shift = np.array(shift)
+ src_w = scale_tmp[0]
+ dst_w = output_size[0]
+ dst_h = output_size[1]
+
+ rot_rad = np.pi * rot / 180
+ src_dir = rotate_point([0., src_w * -0.5], rot_rad)
+ dst_dir = np.array([0., dst_w * -0.5])
+
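+    # an affine transform is fixed by three point pairs: the box center, a
+    # point offset along the (rotated) vertical direction, and a third point
+    # completing the right angle (see _get_3rd_point)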
+ src = np.zeros((3, 2), dtype=np.float32)
+
+ src[0, :] = center + scale_tmp * shift
+ src[1, :] = center + src_dir + scale_tmp * shift
+ src[2, :] = _get_3rd_point(src[0, :], src[1, :])
+
+ dst = np.zeros((3, 2), dtype=np.float32)
+ dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
+ dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
+ dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :])
+
+ if inv:
+ trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))
+ else:
+ trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))
+
+ return trans
+
+
+def get_warp_matrix(theta, size_input, size_dst, size_target):
+ """Calculate the transformation matrix under the constraint of unbiased.
+ Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased
+ Data Processing for Human Pose Estimation (CVPR 2020).
+
+ Args:
+ theta (float): Rotation angle in degrees.
+ size_input (np.ndarray): Size of input image [w, h].
+ size_dst (np.ndarray): Size of output image [w, h].
+ size_target (np.ndarray): Size of ROI in input plane [w, h].
+
+ Returns:
+ matrix (np.ndarray): A matrix for transformation.
+ """
+ theta = np.deg2rad(theta)
+ matrix = np.zeros((2, 3), dtype=np.float32)
+ scale_x = size_dst[0] / size_target[0]
+ scale_y = size_dst[1] / size_target[1]
+ matrix[0, 0] = np.cos(theta) * scale_x
+ matrix[0, 1] = -np.sin(theta) * scale_x
+ matrix[0, 2] = scale_x * (
+ -0.5 * size_input[0] * np.cos(theta) + 0.5 * size_input[1] *
+ np.sin(theta) + 0.5 * size_target[0])
+ matrix[1, 0] = np.sin(theta) * scale_y
+ matrix[1, 1] = np.cos(theta) * scale_y
+ matrix[1, 2] = scale_y * (
+ -0.5 * size_input[0] * np.sin(theta) - 0.5 * size_input[1] *
+ np.cos(theta) + 0.5 * size_target[1])
+ return matrix
+
+
+def _get_3rd_point(a, b):
+ """To calculate the affine matrix, three pairs of points are required. This
+ function is used to get the 3rd point, given 2D points a & b.
+
+ The 3rd point is defined by rotating vector `a - b` by 90 degrees
+ anticlockwise, using b as the rotation center.
+
+ Args:
+ a (np.ndarray): point(x,y)
+ b (np.ndarray): point(x,y)
+
+ Returns:
+ np.ndarray: The 3rd point.
+ """
+ assert len(
+ a) == 2, 'input of _get_3rd_point should be point with length of 2'
+ assert len(
+ b) == 2, 'input of _get_3rd_point should be point with length of 2'
+ direction = a - b
+ third_pt = b + np.array([-direction[1], direction[0]], dtype=np.float32)
+
+ return third_pt
+
+
+def rotate_point(pt, angle_rad):
+ """Rotate a point by an angle.
+
+ Args:
+ pt (list[float]): 2 dimensional point to be rotated
+ angle_rad (float): rotation angle by radian
+
+ Returns:
+ list[float]: Rotated point.
+ """
+ assert len(pt) == 2
+ sn, cs = np.sin(angle_rad), np.cos(angle_rad)
+ new_x = pt[0] * cs - pt[1] * sn
+ new_y = pt[0] * sn + pt[1] * cs
+ rotated_pt = [new_x, new_y]
+
+ return rotated_pt
+
+
+def transpred(kpts, h, w, s):
+ trans, _ = get_affine_mat_kernel(h, w, s, inv=True)
+
+ return warp_affine_joints(kpts[..., :2].copy(), trans)
+
+
+def warp_affine_joints(joints, mat):
+ """Apply affine transformation defined by the transform matrix on the
+ joints.
+
+ Args:
+ joints (np.ndarray[..., 2]): Origin coordinate of joints.
+        mat (np.ndarray[2, 3]): The affine matrix.
+
+ Returns:
+ matrix (np.ndarray[..., 2]): Result coordinate of joints.
+ """
+ joints = np.array(joints)
+ shape = joints.shape
+ joints = joints.reshape(-1, 2)
+ return np.dot(np.concatenate(
+ (joints, joints[:, 0:1] * 0 + 1), axis=1),
+ mat.T).reshape(shape)
+
+
+def affine_transform(pt, t):
+ new_pt = np.array([pt[0], pt[1], 1.]).T
+ new_pt = np.dot(t, new_pt)
+ return new_pt[:2]
+
+
+def transform_preds(coords, center, scale, output_size):
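+    # the factor 200 follows the common COCO keypoint convention in which
+    # `scale` is stored as box size divided by 200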
+ target_coords = np.zeros(coords.shape)
+ trans = get_affine_transform(center, scale * 200, 0, output_size, inv=1)
+ for p in range(coords.shape[0]):
+ target_coords[p, 0:2] = affine_transform(coords[p, 0:2], trans)
+ return target_coords
+
+
+def oks_iou(g, d, a_g, a_d, sigmas=None, in_vis_thre=None):
+ if not isinstance(sigmas, np.ndarray):
+ sigmas = np.array([
+ .26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07,
+ .87, .87, .89, .89
+ ]) / 10.0
+ vars = (sigmas * 2)**2
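+    # per-keypoint OKS term: exp(-d_i^2 / (2 * s^2 * k_i^2)) with k_i = 2 * sigma_i
+    # and object scale s^2 taken as the mean of the two areas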
+ xg = g[0::3]
+ yg = g[1::3]
+ vg = g[2::3]
+ ious = np.zeros((d.shape[0]))
+ for n_d in range(0, d.shape[0]):
+ xd = d[n_d, 0::3]
+ yd = d[n_d, 1::3]
+ vd = d[n_d, 2::3]
+ dx = xd - xg
+ dy = yd - yg
+ e = (dx**2 + dy**2) / vars / ((a_g + a_d[n_d]) / 2 + np.spacing(1)) / 2
+ if in_vis_thre is not None:
+            # element-wise AND: a plain `and` between two Python lists would
+            # simply return the second list, silently dropping the vg condition
+            ind = np.logical_and(vg > in_vis_thre, vd > in_vis_thre)
+ e = e[ind]
+ ious[n_d] = np.sum(np.exp(-e)) / e.shape[0] if e.shape[0] != 0 else 0.0
+ return ious
+
+
+def oks_nms(kpts_db, thresh, sigmas=None, in_vis_thre=None):
+ """greedily select boxes with high confidence and overlap with current maximum <= thresh
+ rule out overlap >= thresh
+
+ Args:
+ kpts_db (list): The predicted keypoints within the image
+ thresh (float): The threshold to select the boxes
+ sigmas (np.array): The variance to calculate the oks iou
+ Default: None
+ in_vis_thre (float): The threshold to select the high confidence boxes
+ Default: None
+
+ Return:
+ keep (list): indexes to keep
+ """
+
+ if len(kpts_db) == 0:
+ return []
+
+ scores = np.array([kpts_db[i]['score'] for i in range(len(kpts_db))])
+ kpts = np.array(
+ [kpts_db[i]['keypoints'].flatten() for i in range(len(kpts_db))])
+ areas = np.array([kpts_db[i]['area'] for i in range(len(kpts_db))])
+
+ order = scores.argsort()[::-1]
+
+ keep = []
+ while order.size > 0:
+ i = order[0]
+ keep.append(i)
+
+ oks_ovr = oks_iou(kpts[i], kpts[order[1:]], areas[i], areas[order[1:]],
+ sigmas, in_vis_thre)
+
+ inds = np.where(oks_ovr <= thresh)[0]
+ order = order[inds + 1]
+
+ return keep
+
+
+def rescore(overlap, scores, thresh, type='gaussian'):
+ assert overlap.shape[0] == scores.shape[0]
+ if type == 'linear':
+ inds = np.where(overlap >= thresh)[0]
+ scores[inds] = scores[inds] * (1 - overlap[inds])
+ else:
+ scores = scores * np.exp(-overlap**2 / thresh)
+
+ return scores
+
+
+def soft_oks_nms(kpts_db, thresh, sigmas=None, in_vis_thre=None):
+ """greedily select boxes with high confidence and overlap with current maximum <= thresh
+ rule out overlap >= thresh
+
+ Args:
+ kpts_db (list): The predicted keypoints within the image
+ thresh (float): The threshold to select the boxes
+ sigmas (np.array): The variance to calculate the oks iou
+ Default: None
+ in_vis_thre (float): The threshold to select the high confidence boxes
+ Default: None
+
+ Return:
+ keep (list): indexes to keep
+ """
+
+ if len(kpts_db) == 0:
+ return []
+
+ scores = np.array([kpts_db[i]['score'] for i in range(len(kpts_db))])
+ kpts = np.array(
+ [kpts_db[i]['keypoints'].flatten() for i in range(len(kpts_db))])
+ areas = np.array([kpts_db[i]['area'] for i in range(len(kpts_db))])
+
+ order = scores.argsort()[::-1]
+ scores = scores[order]
+
+ # max_dets = order.size
+ max_dets = 20
+ keep = np.zeros(max_dets, dtype=np.intp)
+ keep_cnt = 0
+ while order.size > 0 and keep_cnt < max_dets:
+ i = order[0]
+
+ oks_ovr = oks_iou(kpts[i], kpts[order[1:]], areas[i], areas[order[1:]],
+ sigmas, in_vis_thre)
+
+ order = order[1:]
+ scores = rescore(oks_ovr, scores[1:], thresh)
+
+ tmp = scores.argsort()[::-1]
+ order = order[tmp]
+ scores = scores[tmp]
+
+ keep[keep_cnt] = i
+ keep_cnt += 1
+
+ keep = keep[:keep_cnt]
+
+ return keep
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/layers.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/layers.py
new file mode 100644
index 000000000..73da16a14
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/layers.py
@@ -0,0 +1,1424 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import math
+import six
+import numpy as np
+from numbers import Integral
+
+import paddle
+import paddle.nn as nn
+from paddle import ParamAttr
+from paddle import to_tensor
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, Constant, XavierUniform
+from paddle.regularizer import L2Decay
+
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.bbox_utils import delta2bbox
+from . import ops
+from .initializer import xavier_uniform_, constant_
+
+from paddle.vision.ops import DeformConv2D
+
+
+def _to_list(l):
+ if isinstance(l, (list, tuple)):
+ return list(l)
+ return [l]
+
+
+class DeformableConvV2(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride=1,
+ padding=0,
+ dilation=1,
+ groups=1,
+ weight_attr=None,
+ bias_attr=None,
+ lr_scale=1,
+ regularizer=None,
+ skip_quant=False,
+ dcn_bias_regularizer=L2Decay(0.),
+ dcn_bias_lr_scale=2.):
+ super(DeformableConvV2, self).__init__()
+ self.offset_channel = 2 * kernel_size**2
+ self.mask_channel = kernel_size**2
+
+ if lr_scale == 1 and regularizer is None:
+ offset_bias_attr = ParamAttr(initializer=Constant(0.))
+ else:
+ offset_bias_attr = ParamAttr(
+ initializer=Constant(0.),
+ learning_rate=lr_scale,
+ regularizer=regularizer)
+ self.conv_offset = nn.Conv2D(
+ in_channels,
+ 3 * kernel_size**2,
+ kernel_size,
+ stride=stride,
+ padding=(kernel_size - 1) // 2,
+ weight_attr=ParamAttr(initializer=Constant(0.0)),
+ bias_attr=offset_bias_attr)
+ if skip_quant:
+ self.conv_offset.skip_quant = True
+
+ if bias_attr:
+ # in FCOS-DCN head, specifically need learning_rate and regularizer
+ dcn_bias_attr = ParamAttr(
+ initializer=Constant(value=0),
+ regularizer=dcn_bias_regularizer,
+ learning_rate=dcn_bias_lr_scale)
+ else:
+ # in ResNet backbone, do not need bias
+ dcn_bias_attr = False
+ self.conv_dcn = DeformConv2D(
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride=stride,
+ padding=(kernel_size - 1) // 2 * dilation,
+ dilation=dilation,
+ groups=groups,
+ weight_attr=weight_attr,
+ bias_attr=dcn_bias_attr)
+
+ def forward(self, x):
+ offset_mask = self.conv_offset(x)
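+        # conv_offset predicts 3*k*k channels per position: 2*k*k (x, y)
+        # offsets plus k*k modulation masks (DCNv2); the masks pass through
+        # a sigmoid below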
+ offset, mask = paddle.split(
+ offset_mask,
+ num_or_sections=[self.offset_channel, self.mask_channel],
+ axis=1)
+ mask = F.sigmoid(mask)
+ y = self.conv_dcn(x, offset, mask=mask)
+ return y
+
+
+class ConvNormLayer(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ stride,
+ groups=1,
+ norm_type='bn',
+ norm_decay=0.,
+ norm_groups=32,
+ use_dcn=False,
+ bias_on=False,
+ lr_scale=1.,
+ freeze_norm=False,
+ initializer=Normal(
+ mean=0., std=0.01),
+ skip_quant=False,
+ dcn_lr_scale=2.,
+ dcn_regularizer=L2Decay(0.)):
+ super(ConvNormLayer, self).__init__()
+ assert norm_type in ['bn', 'sync_bn', 'gn']
+
+ if bias_on:
+ bias_attr = ParamAttr(
+ initializer=Constant(value=0.), learning_rate=lr_scale)
+ else:
+ bias_attr = False
+
+ if not use_dcn:
+ self.conv = nn.Conv2D(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ weight_attr=ParamAttr(
+ initializer=initializer, learning_rate=1.),
+ bias_attr=bias_attr)
+ if skip_quant:
+ self.conv.skip_quant = True
+ else:
+ # in FCOS-DCN head, specifically need learning_rate and regularizer
+ self.conv = DeformableConvV2(
+ in_channels=ch_in,
+ out_channels=ch_out,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ weight_attr=ParamAttr(
+ initializer=initializer, learning_rate=1.),
+ bias_attr=True,
+ lr_scale=dcn_lr_scale,
+ regularizer=dcn_regularizer,
+ dcn_bias_regularizer=dcn_regularizer,
+ dcn_bias_lr_scale=dcn_lr_scale,
+ skip_quant=skip_quant)
+
+ norm_lr = 0. if freeze_norm else 1.
+ param_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay) if norm_decay is not None else None)
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay) if norm_decay is not None else None)
+ if norm_type == 'bn':
+ self.norm = nn.BatchNorm2D(
+ ch_out, weight_attr=param_attr, bias_attr=bias_attr)
+ elif norm_type == 'sync_bn':
+ self.norm = nn.SyncBatchNorm(
+ ch_out, weight_attr=param_attr, bias_attr=bias_attr)
+ elif norm_type == 'gn':
+ self.norm = nn.GroupNorm(
+ num_groups=norm_groups,
+ num_channels=ch_out,
+ weight_attr=param_attr,
+ bias_attr=bias_attr)
+
+ def forward(self, inputs):
+ out = self.conv(inputs)
+ out = self.norm(out)
+ return out
+
+
+class LiteConv(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ stride=1,
+ with_act=True,
+ norm_type='sync_bn',
+ name=None):
+ super(LiteConv, self).__init__()
+ self.lite_conv = nn.Sequential()
+ conv1 = ConvNormLayer(
+ in_channels,
+ in_channels,
+ filter_size=5,
+ stride=stride,
+ groups=in_channels,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+ conv2 = ConvNormLayer(
+ in_channels,
+ out_channels,
+ filter_size=1,
+ stride=stride,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+ conv3 = ConvNormLayer(
+ out_channels,
+ out_channels,
+ filter_size=1,
+ stride=stride,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+ conv4 = ConvNormLayer(
+ out_channels,
+ out_channels,
+ filter_size=5,
+ stride=stride,
+ groups=out_channels,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+ conv_list = [conv1, conv2, conv3, conv4]
+ self.lite_conv.add_sublayer('conv1', conv1)
+ self.lite_conv.add_sublayer('relu6_1', nn.ReLU6())
+ self.lite_conv.add_sublayer('conv2', conv2)
+ if with_act:
+ self.lite_conv.add_sublayer('relu6_2', nn.ReLU6())
+ self.lite_conv.add_sublayer('conv3', conv3)
+ self.lite_conv.add_sublayer('relu6_3', nn.ReLU6())
+ self.lite_conv.add_sublayer('conv4', conv4)
+ if with_act:
+ self.lite_conv.add_sublayer('relu6_4', nn.ReLU6())
+
+ def forward(self, inputs):
+ out = self.lite_conv(inputs)
+ return out
+
+
+class DropBlock(nn.Layer):
+ def __init__(self, block_size, keep_prob, name, data_format='NCHW'):
+ """
+ DropBlock layer, see https://arxiv.org/abs/1810.12890
+
+ Args:
+ block_size (int): block size
+            keep_prob (float): keep probability
+ name (str): layer name
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(DropBlock, self).__init__()
+ self.block_size = block_size
+ self.keep_prob = keep_prob
+ self.name = name
+ self.data_format = data_format
+
+ def forward(self, x):
+ if not self.training or self.keep_prob == 1:
+ return x
+ else:
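+            # gamma is the per-position seed rate chosen so that, once every
+            # seed is expanded to a block_size x block_size block, the expected
+            # dropped fraction is roughly (1 - keep_prob); the loop below
+            # corrects for blocks clipped at the feature-map borders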
+ gamma = (1. - self.keep_prob) / (self.block_size**2)
+ if self.data_format == 'NCHW':
+ shape = x.shape[2:]
+ else:
+ shape = x.shape[1:3]
+ for s in shape:
+ gamma *= s / (s - self.block_size + 1)
+
+ matrix = paddle.cast(paddle.rand(x.shape) < gamma, x.dtype)
+ mask_inv = F.max_pool2d(
+ matrix,
+ self.block_size,
+ stride=1,
+ padding=self.block_size // 2,
+ data_format=self.data_format)
+ mask = 1. - mask_inv
+ y = x * mask * (mask.numel() / mask.sum())
+ return y
+
+
+@register
+@serializable
+class AnchorGeneratorSSD(object):
+ def __init__(self,
+ steps=[8, 16, 32, 64, 100, 300],
+ aspect_ratios=[[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]],
+ min_ratio=15,
+ max_ratio=90,
+ base_size=300,
+ min_sizes=[30.0, 60.0, 111.0, 162.0, 213.0, 264.0],
+ max_sizes=[60.0, 111.0, 162.0, 213.0, 264.0, 315.0],
+ offset=0.5,
+ flip=True,
+ clip=False,
+ min_max_aspect_ratios_order=False):
+ self.steps = steps
+ self.aspect_ratios = aspect_ratios
+ self.min_ratio = min_ratio
+ self.max_ratio = max_ratio
+ self.base_size = base_size
+ self.min_sizes = min_sizes
+ self.max_sizes = max_sizes
+ self.offset = offset
+ self.flip = flip
+ self.clip = clip
+ self.min_max_aspect_ratios_order = min_max_aspect_ratios_order
+
+ if self.min_sizes == [] and self.max_sizes == []:
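+            # follows the SSD prior-scale rule: spread box sizes evenly between
+            # min_ratio% and max_ratio% of base_size across layers, reserving
+            # base_size * 0.1 / 0.2 for the first (finest) layer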
+ num_layer = len(aspect_ratios)
+ step = int(
+ math.floor(((self.max_ratio - self.min_ratio)) / (num_layer - 2
+ )))
+ for ratio in six.moves.range(self.min_ratio, self.max_ratio + 1,
+ step):
+ self.min_sizes.append(self.base_size * ratio / 100.)
+ self.max_sizes.append(self.base_size * (ratio + step) / 100.)
+ self.min_sizes = [self.base_size * .10] + self.min_sizes
+ self.max_sizes = [self.base_size * .20] + self.max_sizes
+
+ self.num_priors = []
+ for aspect_ratio, min_size, max_size in zip(
+ aspect_ratios, self.min_sizes, self.max_sizes):
+ if isinstance(min_size, (list, tuple)):
+ self.num_priors.append(
+ len(_to_list(min_size)) + len(_to_list(max_size)))
+ else:
+ self.num_priors.append((len(aspect_ratio) * 2 + 1) * len(
+ _to_list(min_size)) + len(_to_list(max_size)))
+
+ def __call__(self, inputs, image):
+ boxes = []
+ for input, min_size, max_size, aspect_ratio, step in zip(
+ inputs, self.min_sizes, self.max_sizes, self.aspect_ratios,
+ self.steps):
+ box, _ = ops.prior_box(
+ input=input,
+ image=image,
+ min_sizes=_to_list(min_size),
+ max_sizes=_to_list(max_size),
+ aspect_ratios=aspect_ratio,
+ flip=self.flip,
+ clip=self.clip,
+ steps=[step, step],
+ offset=self.offset,
+ min_max_aspect_ratios_order=self.min_max_aspect_ratios_order)
+ boxes.append(paddle.reshape(box, [-1, 4]))
+ return boxes
+
+
+@register
+@serializable
+class RCNNBox(object):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ prior_box_var=[10., 10., 5., 5.],
+ code_type="decode_center_size",
+ box_normalized=False,
+ num_classes=80):
+ super(RCNNBox, self).__init__()
+ self.prior_box_var = prior_box_var
+ self.code_type = code_type
+ self.box_normalized = box_normalized
+ self.num_classes = num_classes
+
+ def __call__(self, bbox_head_out, rois, im_shape, scale_factor):
+ bbox_pred = bbox_head_out[0]
+ cls_prob = bbox_head_out[1]
+ roi = rois[0]
+ rois_num = rois[1]
+
+ origin_shape = paddle.floor(im_shape / scale_factor + 0.5)
+ scale_list = []
+ origin_shape_list = []
+
+ batch_size = 1
+ if isinstance(roi, list):
+ batch_size = len(roi)
+ else:
+ batch_size = paddle.slice(paddle.shape(im_shape), [0], [0], [1])
+ # bbox_pred.shape: [N, C*4]
+ for idx in range(batch_size):
+ roi_per_im = roi[idx]
+ rois_num_per_im = rois_num[idx]
+ expand_im_shape = paddle.expand(im_shape[idx, :],
+ [rois_num_per_im, 2])
+ origin_shape_list.append(expand_im_shape)
+
+ origin_shape = paddle.concat(origin_shape_list)
+
+ # bbox_pred.shape: [N, C*4]
+ # C=num_classes in faster/mask rcnn(bbox_head), C=1 in cascade rcnn(cascade_head)
+ bbox = paddle.concat(roi)
+ if bbox.shape[0] == 0:
+ bbox = paddle.zeros([0, bbox_pred.shape[1]], dtype='float32')
+ else:
+ bbox = delta2bbox(bbox_pred, bbox, self.prior_box_var)
+ scores = cls_prob[:, :-1]
+
+ # bbox.shape: [N, C, 4]
+ # bbox.shape[1] must be equal to scores.shape[1]
+ bbox_num_class = bbox.shape[1]
+ if bbox_num_class == 1:
+ bbox = paddle.tile(bbox, [1, self.num_classes, 1])
+
+ origin_h = paddle.unsqueeze(origin_shape[:, 0], axis=1)
+ origin_w = paddle.unsqueeze(origin_shape[:, 1], axis=1)
+ zeros = paddle.zeros_like(origin_h)
+ x1 = paddle.maximum(paddle.minimum(bbox[:, :, 0], origin_w), zeros)
+ y1 = paddle.maximum(paddle.minimum(bbox[:, :, 1], origin_h), zeros)
+ x2 = paddle.maximum(paddle.minimum(bbox[:, :, 2], origin_w), zeros)
+ y2 = paddle.maximum(paddle.minimum(bbox[:, :, 3], origin_h), zeros)
+ bbox = paddle.stack([x1, y1, x2, y2], axis=-1)
+ bboxes = (bbox, rois_num)
+ return bboxes, scores
+
+
+@register
+@serializable
+class MultiClassNMS(object):
+ def __init__(self,
+ score_threshold=.05,
+ nms_top_k=-1,
+ keep_top_k=100,
+ nms_threshold=.5,
+ normalized=True,
+ nms_eta=1.0,
+ return_index=False,
+ return_rois_num=True):
+ super(MultiClassNMS, self).__init__()
+ self.score_threshold = score_threshold
+ self.nms_top_k = nms_top_k
+ self.keep_top_k = keep_top_k
+ self.nms_threshold = nms_threshold
+ self.normalized = normalized
+ self.nms_eta = nms_eta
+ self.return_index = return_index
+ self.return_rois_num = return_rois_num
+
+ def __call__(self, bboxes, score, background_label=-1):
+ """
+ bboxes (Tensor|List[Tensor]): 1. (Tensor) Predicted bboxes with shape
+ [N, M, 4], N is the batch size and M
+ is the number of bboxes
+ 2. (List[Tensor]) bboxes and bbox_num,
+ bboxes have shape of [M, C, 4], C
+ is the class number and bbox_num means
+ the number of bboxes of each batch with
+ shape [N,]
+ score (Tensor): Predicted scores with shape [N, C, M] or [M, C]
+            background_label (int): The background label to ignore; for
+                                    example, num_classes for RCNN and -1 for YOLO.
+ """
+ kwargs = self.__dict__.copy()
+ if isinstance(bboxes, tuple):
+ bboxes, bbox_num = bboxes
+ kwargs.update({'rois_num': bbox_num})
+ if background_label > -1:
+ kwargs.update({'background_label': background_label})
+ return ops.multiclass_nms(bboxes, score, **kwargs)
+
+
+@register
+@serializable
+class MatrixNMS(object):
+ __append_doc__ = True
+
+ def __init__(self,
+ score_threshold=.05,
+ post_threshold=.05,
+ nms_top_k=-1,
+ keep_top_k=100,
+ use_gaussian=False,
+ gaussian_sigma=2.,
+ normalized=False,
+ background_label=0):
+ super(MatrixNMS, self).__init__()
+ self.score_threshold = score_threshold
+ self.post_threshold = post_threshold
+ self.nms_top_k = nms_top_k
+ self.keep_top_k = keep_top_k
+ self.normalized = normalized
+ self.use_gaussian = use_gaussian
+ self.gaussian_sigma = gaussian_sigma
+ self.background_label = background_label
+
+ def __call__(self, bbox, score, *args):
+ return ops.matrix_nms(
+ bboxes=bbox,
+ scores=score,
+ score_threshold=self.score_threshold,
+ post_threshold=self.post_threshold,
+ nms_top_k=self.nms_top_k,
+ keep_top_k=self.keep_top_k,
+ use_gaussian=self.use_gaussian,
+ gaussian_sigma=self.gaussian_sigma,
+ background_label=self.background_label,
+ normalized=self.normalized)
+
+
+@register
+@serializable
+class YOLOBox(object):
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ num_classes=80,
+ conf_thresh=0.005,
+ downsample_ratio=32,
+ clip_bbox=True,
+ scale_x_y=1.):
+ self.num_classes = num_classes
+ self.conf_thresh = conf_thresh
+ self.downsample_ratio = downsample_ratio
+ self.clip_bbox = clip_bbox
+ self.scale_x_y = scale_x_y
+
+ def __call__(self,
+ yolo_head_out,
+ anchors,
+ im_shape,
+ scale_factor,
+ var_weight=None):
+ boxes_list = []
+ scores_list = []
+ origin_shape = im_shape / scale_factor
+ origin_shape = paddle.cast(origin_shape, 'int32')
+ for i, head_out in enumerate(yolo_head_out):
+ boxes, scores = ops.yolo_box(head_out, origin_shape, anchors[i],
+ self.num_classes, self.conf_thresh,
+ self.downsample_ratio // 2**i,
+ self.clip_bbox, self.scale_x_y)
+ boxes_list.append(boxes)
+ scores_list.append(paddle.transpose(scores, perm=[0, 2, 1]))
+ yolo_boxes = paddle.concat(boxes_list, axis=1)
+ yolo_scores = paddle.concat(scores_list, axis=2)
+ return yolo_boxes, yolo_scores
+
+
+@register
+@serializable
+class SSDBox(object):
+ def __init__(self, is_normalized=True):
+ self.is_normalized = is_normalized
+ self.norm_delta = float(not self.is_normalized)
+
+ def __call__(self,
+ preds,
+ prior_boxes,
+ im_shape,
+ scale_factor,
+ var_weight=None):
+ boxes, scores = preds
+ outputs = []
+ for box, score, prior_box in zip(boxes, scores, prior_boxes):
+ pb_w = prior_box[:, 2] - prior_box[:, 0] + self.norm_delta
+ pb_h = prior_box[:, 3] - prior_box[:, 1] + self.norm_delta
+ pb_x = prior_box[:, 0] + pb_w * 0.5
+ pb_y = prior_box[:, 1] + pb_h * 0.5
+ out_x = pb_x + box[:, :, 0] * pb_w * 0.1
+ out_y = pb_y + box[:, :, 1] * pb_h * 0.1
+ out_w = paddle.exp(box[:, :, 2] * 0.2) * pb_w
+ out_h = paddle.exp(box[:, :, 3] * 0.2) * pb_h
+
+ if self.is_normalized:
+ h = paddle.unsqueeze(
+ im_shape[:, 0] / scale_factor[:, 0], axis=-1)
+ w = paddle.unsqueeze(
+ im_shape[:, 1] / scale_factor[:, 1], axis=-1)
+ output = paddle.stack(
+ [(out_x - out_w / 2.) * w, (out_y - out_h / 2.) * h,
+ (out_x + out_w / 2.) * w, (out_y + out_h / 2.) * h],
+ axis=-1)
+ else:
+ output = paddle.stack(
+ [
+ out_x - out_w / 2., out_y - out_h / 2.,
+ out_x + out_w / 2. - 1., out_y + out_h / 2. - 1.
+ ],
+ axis=-1)
+ outputs.append(output)
+ boxes = paddle.concat(outputs, axis=1)
+
+ scores = F.softmax(paddle.concat(scores, axis=1))
+ scores = paddle.transpose(scores, [0, 2, 1])
+
+ return boxes, scores
+
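+# Decoding note (added comment): SSDBox applies the standard SSD box
+# regression with prior variances 0.1 (center) and 0.2 (size) hard-coded
+# above, i.e. roughly:
+#   cx = pb_x + dx * 0.1 * pb_w,   cy = pb_y + dy * 0.1 * pb_h
+#   w  = pb_w * exp(0.2 * dw),     h  = pb_h * exp(0.2 * dh)
+# and then converts (cx, cy, w, h) to corner form, rescaling to the
+# original image when boxes are normalized.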
+
+@register
+@serializable
+class AnchorGrid(object):
+ """Generate anchor grid
+
+ Args:
+ image_size (int or list): input image size, may be a single integer or
+ list of [h, w]. Default: 512
+ min_level (int): min level of the feature pyramid. Default: 3
+ max_level (int): max level of the feature pyramid. Default: 7
+ anchor_base_scale: base anchor scale. Default: 4
+ num_scales: number of anchor scales. Default: 3
+ aspect_ratios: aspect ratios. default: [[1, 1], [1.4, 0.7], [0.7, 1.4]]
+ """
+
+ def __init__(self,
+ image_size=512,
+ min_level=3,
+ max_level=7,
+ anchor_base_scale=4,
+ num_scales=3,
+ aspect_ratios=[[1, 1], [1.4, 0.7], [0.7, 1.4]]):
+ super(AnchorGrid, self).__init__()
+ if isinstance(image_size, Integral):
+ self.image_size = [image_size, image_size]
+ else:
+ self.image_size = image_size
+ for dim in self.image_size:
+ assert dim % 2 ** max_level == 0, \
+ "image size should be multiple of the max level stride"
+ self.min_level = min_level
+ self.max_level = max_level
+ self.anchor_base_scale = anchor_base_scale
+ self.num_scales = num_scales
+ self.aspect_ratios = aspect_ratios
+
+ @property
+ def base_cell(self):
+ if not hasattr(self, '_base_cell'):
+ self._base_cell = self.make_cell()
+ return self._base_cell
+
+ def make_cell(self):
+ scales = [2**(i / self.num_scales) for i in range(self.num_scales)]
+ scales = np.array(scales)
+ ratios = np.array(self.aspect_ratios)
+ ws = np.outer(scales, ratios[:, 0]).reshape(-1, 1)
+ hs = np.outer(scales, ratios[:, 1]).reshape(-1, 1)
+ anchors = np.hstack((-0.5 * ws, -0.5 * hs, 0.5 * ws, 0.5 * hs))
+ return anchors
+
+ def make_grid(self, stride):
+ cell = self.base_cell * stride * self.anchor_base_scale
+ x_steps = np.arange(stride // 2, self.image_size[1], stride)
+ y_steps = np.arange(stride // 2, self.image_size[0], stride)
+ offset_x, offset_y = np.meshgrid(x_steps, y_steps)
+ offset_x = offset_x.flatten()
+ offset_y = offset_y.flatten()
+ offsets = np.stack((offset_x, offset_y, offset_x, offset_y), axis=-1)
+ offsets = offsets[:, np.newaxis, :]
+ return (cell + offsets).reshape(-1, 4)
+
+ def generate(self):
+ return [
+ self.make_grid(2**l)
+ for l in range(self.min_level, self.max_level + 1)
+ ]
+
+ def __call__(self):
+ if not hasattr(self, '_anchor_vars'):
+ anchor_vars = []
+ helper = LayerHelper('anchor_grid')
+ for idx, l in enumerate(range(self.min_level, self.max_level + 1)):
+ stride = 2**l
+ anchors = self.make_grid(stride)
+ var = helper.create_parameter(
+ attr=ParamAttr(name='anchors_{}'.format(idx)),
+ shape=anchors.shape,
+ dtype='float32',
+ stop_gradient=True,
+ default_initializer=NumpyArrayInitializer(anchors))
+ anchor_vars.append(var)
+ var.persistable = True
+ self._anchor_vars = anchor_vars
+
+ return self._anchor_vars
+
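+# Illustrative check (assumed default settings): with image_size=512,
+# min_level=3, max_level=7, num_scales=3 and 3 aspect ratios, each level l
+# contributes (512 / 2**l)**2 cells * 9 anchors, e.g. 64*64*9 at level 3:
+#
+#   grid = AnchorGrid(image_size=512)
+#   anchors_per_level = [a.shape for a in grid.generate()]
+#   # [(36864, 4), (9216, 4), (2304, 4), (576, 4), (144, 4)]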
+
+@register
+@serializable
+class FCOSBox(object):
+ __shared__ = ['num_classes']
+
+ def __init__(self, num_classes=80):
+ super(FCOSBox, self).__init__()
+ self.num_classes = num_classes
+
+ def _merge_hw(self, inputs, ch_type="channel_first"):
+ """
+ Merge h and w of the feature map into one dimension.
+ Args:
+ inputs (Tensor): Tensor of the input feature map
+ ch_type (str): "channel_first" or "channel_last" style
+ Return:
+ new_shape (Tensor): The new shape after h and w merged
+ """
+ shape_ = paddle.shape(inputs)
+ bs, ch, hi, wi = shape_[0], shape_[1], shape_[2], shape_[3]
+ img_size = hi * wi
+ img_size.stop_gradient = True
+ if ch_type == "channel_first":
+ new_shape = paddle.concat([bs, ch, img_size])
+ elif ch_type == "channel_last":
+ new_shape = paddle.concat([bs, img_size, ch])
+ else:
+ raise KeyError("Wrong ch_type %s" % ch_type)
+ new_shape.stop_gradient = True
+ return new_shape
+
+ def _postprocessing_by_level(self, locations, box_cls, box_reg, box_ctn,
+ scale_factor):
+ """
+ Postprocess each layer of the output with corresponding locations.
+ Args:
+ locations (Tensor): anchor points for current layer, [H*W, 2]
+ box_cls (Tensor): categories prediction, [N, C, H, W],
+ C is the number of classes
+ box_reg (Tensor): bounding box prediction, [N, 4, H, W]
+ box_ctn (Tensor): centerness prediction, [N, 1, H, W]
+ scale_factor (Tensor): [h_scale, w_scale] for input images
+ Return:
+ box_cls_ch_last (Tensor): score for each category, in [N, C, M]
+ C is the number of classes and M is the number of anchor points
+ box_reg_decoding (Tensor): decoded bounding box, in [N, M, 4]
+ last dimension is [x1, y1, x2, y2]
+ """
+ act_shape_cls = self._merge_hw(box_cls)
+ box_cls_ch_last = paddle.reshape(x=box_cls, shape=act_shape_cls)
+ box_cls_ch_last = F.sigmoid(box_cls_ch_last)
+
+ act_shape_reg = self._merge_hw(box_reg)
+ box_reg_ch_last = paddle.reshape(x=box_reg, shape=act_shape_reg)
+ box_reg_ch_last = paddle.transpose(box_reg_ch_last, perm=[0, 2, 1])
+ box_reg_decoding = paddle.stack(
+ [
+ locations[:, 0] - box_reg_ch_last[:, :, 0],
+ locations[:, 1] - box_reg_ch_last[:, :, 1],
+ locations[:, 0] + box_reg_ch_last[:, :, 2],
+ locations[:, 1] + box_reg_ch_last[:, :, 3]
+ ],
+ axis=1)
+ box_reg_decoding = paddle.transpose(box_reg_decoding, perm=[0, 2, 1])
+
+ act_shape_ctn = self._merge_hw(box_ctn)
+ box_ctn_ch_last = paddle.reshape(x=box_ctn, shape=act_shape_ctn)
+ box_ctn_ch_last = F.sigmoid(box_ctn_ch_last)
+
+ # recover the location to original image
+ im_scale = paddle.concat([scale_factor, scale_factor], axis=1)
+ im_scale = paddle.expand(im_scale, [box_reg_decoding.shape[0], 4])
+ im_scale = paddle.reshape(im_scale, [box_reg_decoding.shape[0], -1, 4])
+ box_reg_decoding = box_reg_decoding / im_scale
+ box_cls_ch_last = box_cls_ch_last * box_ctn_ch_last
+ return box_cls_ch_last, box_reg_decoding
+
+ def __call__(self, locations, cls_logits, bboxes_reg, centerness,
+ scale_factor):
+ pred_boxes_ = []
+ pred_scores_ = []
+ for pts, cls, box, ctn in zip(locations, cls_logits, bboxes_reg,
+ centerness):
+ pred_scores_lvl, pred_boxes_lvl = self._postprocessing_by_level(
+ pts, cls, box, ctn, scale_factor)
+ pred_boxes_.append(pred_boxes_lvl)
+ pred_scores_.append(pred_scores_lvl)
+ pred_boxes = paddle.concat(pred_boxes_, axis=1)
+ pred_scores = paddle.concat(pred_scores_, axis=2)
+ return pred_boxes, pred_scores
+
+
+@register
+class TTFBox(object):
+ __shared__ = ['down_ratio']
+
+ def __init__(self, max_per_img=100, score_thresh=0.01, down_ratio=4):
+ super(TTFBox, self).__init__()
+ self.max_per_img = max_per_img
+ self.score_thresh = score_thresh
+ self.down_ratio = down_ratio
+
+ def _simple_nms(self, heat, kernel=3):
+ """
+        Use max pooling to keep only local peaks of the score map.
+ """
+ pad = (kernel - 1) // 2
+ hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad)
+ keep = paddle.cast(hmax == heat, 'float32')
+ return heat * keep
+
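+    # Peak-picking note for _simple_nms above: only local maxima survive,
+    # e.g. for a row [0.1, 0.9, 0.3] a 3x3 max pool reproduces 0.9 at its
+    # own position only, so `hmax == heat` holds there and fails elsewhere.
+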
+ def _topk(self, scores):
+ """
+ Select top k scores and decode to get xy coordinates.
+ """
+ k = self.max_per_img
+ shape_fm = paddle.shape(scores)
+ shape_fm.stop_gradient = True
+ cat, height, width = shape_fm[1], shape_fm[2], shape_fm[3]
+ # batch size is 1
+ scores_r = paddle.reshape(scores, [cat, -1])
+ topk_scores, topk_inds = paddle.topk(scores_r, k)
+ topk_ys = topk_inds // width
+ topk_xs = topk_inds % width
+
+ topk_score_r = paddle.reshape(topk_scores, [-1])
+ topk_score, topk_ind = paddle.topk(topk_score_r, k)
+ k_t = paddle.full(paddle.shape(topk_ind), k, dtype='int64')
+ topk_clses = paddle.cast(paddle.floor_divide(topk_ind, k_t), 'float32')
+
+ topk_inds = paddle.reshape(topk_inds, [-1])
+ topk_ys = paddle.reshape(topk_ys, [-1, 1])
+ topk_xs = paddle.reshape(topk_xs, [-1, 1])
+ topk_inds = paddle.gather(topk_inds, topk_ind)
+ topk_ys = paddle.gather(topk_ys, topk_ind)
+ topk_xs = paddle.gather(topk_xs, topk_ind)
+
+ return topk_score, topk_inds, topk_clses, topk_ys, topk_xs
+
+ def _decode(self, hm, wh, im_shape, scale_factor):
+ heatmap = F.sigmoid(hm)
+ heat = self._simple_nms(heatmap)
+ scores, inds, clses, ys, xs = self._topk(heat)
+ ys = paddle.cast(ys, 'float32') * self.down_ratio
+ xs = paddle.cast(xs, 'float32') * self.down_ratio
+ scores = paddle.tensor.unsqueeze(scores, [1])
+ clses = paddle.tensor.unsqueeze(clses, [1])
+
+ wh_t = paddle.transpose(wh, [0, 2, 3, 1])
+ wh = paddle.reshape(wh_t, [-1, paddle.shape(wh_t)[-1]])
+ wh = paddle.gather(wh, inds)
+
+ x1 = xs - wh[:, 0:1]
+ y1 = ys - wh[:, 1:2]
+ x2 = xs + wh[:, 2:3]
+ y2 = ys + wh[:, 3:4]
+
+ bboxes = paddle.concat([x1, y1, x2, y2], axis=1)
+
+ scale_y = scale_factor[:, 0:1]
+ scale_x = scale_factor[:, 1:2]
+ scale_expand = paddle.concat(
+ [scale_x, scale_y, scale_x, scale_y], axis=1)
+ boxes_shape = paddle.shape(bboxes)
+ boxes_shape.stop_gradient = True
+ scale_expand = paddle.expand(scale_expand, shape=boxes_shape)
+ bboxes = paddle.divide(bboxes, scale_expand)
+ results = paddle.concat([clses, scores, bboxes], axis=1)
+        # hack: append a result with cls=-1 and score=1. so that at least one
+        # box survives score_thresh; an empty index would make gather fail.
+ fill_r = paddle.to_tensor(np.array([[-1, 1, 0, 0, 0, 0]]))
+ fill_r = paddle.cast(fill_r, results.dtype)
+ results = paddle.concat([results, fill_r])
+ scores = results[:, 1]
+ valid_ind = paddle.nonzero(scores > self.score_thresh)
+ results = paddle.gather(results, valid_ind)
+ return results, paddle.shape(results)[0:1]
+
+ def __call__(self, hm, wh, im_shape, scale_factor):
+ results = []
+ results_num = []
+ for i in range(scale_factor.shape[0]):
+ result, num = self._decode(hm[i:i + 1, ], wh[i:i + 1, ],
+ im_shape[i:i + 1, ],
+ scale_factor[i:i + 1, ])
+ results.append(result)
+ results_num.append(num)
+ results = paddle.concat(results, axis=0)
+ results_num = paddle.concat(results_num, axis=0)
+ return results, results_num
+
+
+@register
+@serializable
+class JDEBox(object):
+ __shared__ = ['num_classes']
+
+ def __init__(self, num_classes=1, conf_thresh=0.3, downsample_ratio=32):
+ self.num_classes = num_classes
+ self.conf_thresh = conf_thresh
+ self.downsample_ratio = downsample_ratio
+
+ def generate_anchor(self, nGh, nGw, anchor_wh):
+ nA = len(anchor_wh)
+ yv, xv = paddle.meshgrid([paddle.arange(nGh), paddle.arange(nGw)])
+ mesh = paddle.stack(
+ (xv, yv), axis=0).cast(dtype='float32') # 2 x nGh x nGw
+ meshs = paddle.tile(mesh, [nA, 1, 1, 1])
+
+ anchor_offset_mesh = anchor_wh[:, :, None][:, :, :, None].repeat(
+ int(nGh), axis=-2).repeat(
+ int(nGw), axis=-1)
+ anchor_offset_mesh = paddle.to_tensor(
+ anchor_offset_mesh.astype(np.float32))
+ # nA x 2 x nGh x nGw
+
+ anchor_mesh = paddle.concat([meshs, anchor_offset_mesh], axis=1)
+        anchor_mesh = paddle.transpose(anchor_mesh,
+                                       [0, 2, 3, 1])  # nA x nGh x nGw x 4
+ return anchor_mesh
+
+ def decode_delta(self, delta, fg_anchor_list):
+        px, py, pw, ph = fg_anchor_list[:, 0], fg_anchor_list[:, 1], \
+                         fg_anchor_list[:, 2], fg_anchor_list[:, 3]
+ dx, dy, dw, dh = delta[:, 0], delta[:, 1], delta[:, 2], delta[:, 3]
+ gx = pw * dx + px
+ gy = ph * dy + py
+ gw = pw * paddle.exp(dw)
+ gh = ph * paddle.exp(dh)
+ gx1 = gx - gw * 0.5
+ gy1 = gy - gh * 0.5
+ gx2 = gx + gw * 0.5
+ gy2 = gy + gh * 0.5
+ return paddle.stack([gx1, gy1, gx2, gy2], axis=1)
+
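+    # Worked example for decode_delta above (toy numbers): an anchor
+    # (px, py, pw, ph) = (8, 8, 4, 4) with delta (0.5, 0, 0, 0) decodes to
+    # center (10, 8) and size (4, 4), i.e. corners (8, 6, 12, 10).
+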
+ def decode_delta_map(self, nA, nGh, nGw, delta_map, anchor_vec):
+ anchor_mesh = self.generate_anchor(nGh, nGw, anchor_vec)
+ anchor_mesh = paddle.unsqueeze(anchor_mesh, 0)
+ pred_list = self.decode_delta(
+ paddle.reshape(
+ delta_map, shape=[-1, 4]),
+ paddle.reshape(
+ anchor_mesh, shape=[-1, 4]))
+ pred_map = paddle.reshape(pred_list, shape=[nA * nGh * nGw, 4])
+ return pred_map
+
+ def _postprocessing_by_level(self, nA, stride, head_out, anchor_vec):
+ boxes_shape = head_out.shape # [nB, nA*6, nGh, nGw]
+ nGh, nGw = boxes_shape[-2], boxes_shape[-1]
+ nB = 1 # TODO: only support bs=1 now
+ boxes_list, scores_list = [], []
+ for idx in range(nB):
+ p = paddle.reshape(
+ head_out[idx], shape=[nA, self.num_classes + 5, nGh, nGw])
+ p = paddle.transpose(p, perm=[0, 2, 3, 1]) # [nA, nGh, nGw, 6]
+ delta_map = p[:, :, :, :4]
+ boxes = self.decode_delta_map(nA, nGh, nGw, delta_map, anchor_vec)
+ # [nA * nGh * nGw, 4]
+ boxes_list.append(boxes * stride)
+
+ p_conf = paddle.transpose(
+ p[:, :, :, 4:6], perm=[3, 0, 1, 2]) # [2, nA, nGh, nGw]
+ p_conf = F.softmax(
+ p_conf, axis=0)[1, :, :, :].unsqueeze(-1) # [nA, nGh, nGw, 1]
+ scores = paddle.reshape(p_conf, shape=[nA * nGh * nGw, 1])
+ scores_list.append(scores)
+
+ boxes_results = paddle.stack(boxes_list)
+ scores_results = paddle.stack(scores_list)
+ return boxes_results, scores_results
+
+ def __call__(self, yolo_head_out, anchors):
+ bbox_pred_list = []
+ for i, head_out in enumerate(yolo_head_out):
+ stride = self.downsample_ratio // 2**i
+ anc_w, anc_h = anchors[i][0::2], anchors[i][1::2]
+ anchor_vec = np.stack((anc_w, anc_h), axis=1) / stride
+ nA = len(anc_w)
+ boxes, scores = self._postprocessing_by_level(nA, stride, head_out,
+ anchor_vec)
+ bbox_pred_list.append(paddle.concat([boxes, scores], axis=-1))
+
+ yolo_boxes_scores = paddle.concat(bbox_pred_list, axis=1)
+ boxes_idx_over_conf_thr = paddle.nonzero(
+ yolo_boxes_scores[:, :, -1] > self.conf_thresh)
+ boxes_idx_over_conf_thr.stop_gradient = True
+
+ return boxes_idx_over_conf_thr, yolo_boxes_scores
+
+
+@register
+@serializable
+class MaskMatrixNMS(object):
+ """
+ Matrix NMS for multi-class masks.
+ Args:
+        update_threshold (float): Score threshold used to filter instances
+            after the matrix NMS decay step.
+        pre_nms_top_n (int): Number of instances kept per image before NMS.
+        post_nms_top_n (int): Number of instances kept per image after NMS.
+        kernel (str): 'linear' or 'gaussian'.
+        sigma (float): std of the gaussian kernel.
+    Input:
+        seg_preds (Variable): shape (n, h, w), segmentation mask predictions
+        seg_masks (Variable): shape (n, h, w), binarized segmentation masks
+        cate_labels (Variable): shape (n), mask labels, ordered by descending score
+        cate_scores (Variable): shape (n), mask scores in descending order
+        sum_masks (Variable): shape (n), float sums of the pixels of seg_masks
+    Returns:
+        seg_preds, cate_scores, cate_labels (Variable): kept mask predictions,
+            their decayed scores and labels, each with leading dimension n
+ """
+
+ def __init__(self,
+ update_threshold=0.05,
+ pre_nms_top_n=500,
+ post_nms_top_n=100,
+ kernel='gaussian',
+ sigma=2.0):
+ super(MaskMatrixNMS, self).__init__()
+ self.update_threshold = update_threshold
+ self.pre_nms_top_n = pre_nms_top_n
+ self.post_nms_top_n = post_nms_top_n
+ self.kernel = kernel
+ self.sigma = sigma
+
+ def _sort_score(self, scores, top_num):
+ if paddle.shape(scores)[0] > top_num:
+ return paddle.topk(scores, top_num)[1]
+ else:
+ return paddle.argsort(scores, descending=True)
+
+ def __call__(self,
+ seg_preds,
+ seg_masks,
+ cate_labels,
+ cate_scores,
+ sum_masks=None):
+ # sort and keep top nms_pre
+ sort_inds = self._sort_score(cate_scores, self.pre_nms_top_n)
+ seg_masks = paddle.gather(seg_masks, index=sort_inds)
+ seg_preds = paddle.gather(seg_preds, index=sort_inds)
+ sum_masks = paddle.gather(sum_masks, index=sort_inds)
+ cate_scores = paddle.gather(cate_scores, index=sort_inds)
+ cate_labels = paddle.gather(cate_labels, index=sort_inds)
+
+ seg_masks = paddle.flatten(seg_masks, start_axis=1, stop_axis=-1)
+ # inter.
+ inter_matrix = paddle.mm(seg_masks, paddle.transpose(seg_masks, [1, 0]))
+ n_samples = paddle.shape(cate_labels)
+ # union.
+ sum_masks_x = paddle.expand(sum_masks, shape=[n_samples, n_samples])
+ # iou.
+ iou_matrix = (inter_matrix / (
+ sum_masks_x + paddle.transpose(sum_masks_x, [1, 0]) - inter_matrix))
+ iou_matrix = paddle.triu(iou_matrix, diagonal=1)
+ # label_specific matrix.
+ cate_labels_x = paddle.expand(cate_labels, shape=[n_samples, n_samples])
+ label_matrix = paddle.cast(
+ (cate_labels_x == paddle.transpose(cate_labels_x, [1, 0])),
+ 'float32')
+ label_matrix = paddle.triu(label_matrix, diagonal=1)
+
+ # IoU compensation
+ compensate_iou = paddle.max((iou_matrix * label_matrix), axis=0)
+ compensate_iou = paddle.expand(
+ compensate_iou, shape=[n_samples, n_samples])
+ compensate_iou = paddle.transpose(compensate_iou, [1, 0])
+
+ # IoU decay
+ decay_iou = iou_matrix * label_matrix
+
+ # matrix nms
+ if self.kernel == 'gaussian':
+ decay_matrix = paddle.exp(-1 * self.sigma * (decay_iou**2))
+ compensate_matrix = paddle.exp(-1 * self.sigma *
+ (compensate_iou**2))
+ decay_coefficient = paddle.min(decay_matrix / compensate_matrix,
+ axis=0)
+ elif self.kernel == 'linear':
+ decay_matrix = (1 - decay_iou) / (1 - compensate_iou)
+ decay_coefficient = paddle.min(decay_matrix, axis=0)
+ else:
+ raise NotImplementedError
+
+ # update the score.
+ cate_scores = cate_scores * decay_coefficient
+ y = paddle.zeros(shape=paddle.shape(cate_scores), dtype='float32')
+ keep = paddle.where(cate_scores >= self.update_threshold, cate_scores,
+ y)
+ keep = paddle.nonzero(keep)
+ keep = paddle.squeeze(keep, axis=[1])
+ # Prevent empty and increase fake data
+ keep = paddle.concat(
+ [keep, paddle.cast(paddle.shape(cate_scores)[0] - 1, 'int64')])
+
+ seg_preds = paddle.gather(seg_preds, index=keep)
+ cate_scores = paddle.gather(cate_scores, index=keep)
+ cate_labels = paddle.gather(cate_labels, index=keep)
+
+ # sort and keep top_k
+ sort_inds = self._sort_score(cate_scores, self.post_nms_top_n)
+ seg_preds = paddle.gather(seg_preds, index=sort_inds)
+ cate_scores = paddle.gather(cate_scores, index=sort_inds)
+ cate_labels = paddle.gather(cate_labels, index=sort_inds)
+ return seg_preds, cate_scores, cate_labels
+
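+# Sketch of the call contract (shapes illustrative; inputs are assumed to
+# be pre-sorted by score, as the docstring requires):
+#
+#   nms = MaskMatrixNMS(kernel='gaussian', sigma=2.0)
+#   seg_preds, cate_scores, cate_labels = nms(
+#       seg_preds, seg_masks, cate_labels, cate_scores,
+#       sum_masks=seg_masks.sum(axis=[1, 2]))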
+
+def Conv2d(in_channels,
+ out_channels,
+ kernel_size,
+ stride=1,
+ padding=0,
+ dilation=1,
+ groups=1,
+ bias=True,
+ weight_init=Normal(std=0.001),
+ bias_init=Constant(0.)):
+ weight_attr = paddle.framework.ParamAttr(initializer=weight_init)
+ if bias:
+ bias_attr = paddle.framework.ParamAttr(initializer=bias_init)
+ else:
+ bias_attr = False
+ conv = nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ dilation,
+ groups,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr)
+ return conv
+
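+# Hedged example: the wrapper above mirrors a torch-style Conv2d signature
+# on top of paddle.nn.Conv2D, so a 3x3 convolution can be built as:
+#
+#   conv = Conv2d(64, 128, 3, stride=1, padding=1, bias=True)
+#   y = conv(paddle.rand([1, 64, 32, 32]))   # -> [1, 128, 32, 32]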
+
+def ConvTranspose2d(in_channels,
+ out_channels,
+ kernel_size,
+ stride=1,
+ padding=0,
+ output_padding=0,
+ groups=1,
+ bias=True,
+ dilation=1,
+ weight_init=Normal(std=0.001),
+ bias_init=Constant(0.)):
+ weight_attr = paddle.framework.ParamAttr(initializer=weight_init)
+ if bias:
+ bias_attr = paddle.framework.ParamAttr(initializer=bias_init)
+ else:
+ bias_attr = False
+ conv = nn.Conv2DTranspose(
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ output_padding,
+ dilation,
+ groups,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr)
+ return conv
+
+
+def BatchNorm2d(num_features, eps=1e-05, momentum=0.9, affine=True):
+ if not affine:
+ weight_attr = False
+ bias_attr = False
+ else:
+ weight_attr = None
+ bias_attr = None
+ batchnorm = nn.BatchNorm2D(
+ num_features,
+ momentum,
+ eps,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr)
+ return batchnorm
+
+
+def ReLU():
+ return nn.ReLU()
+
+
+def Upsample(scale_factor=None, mode='nearest', align_corners=False):
+ return nn.Upsample(None, scale_factor, mode, align_corners)
+
+
+def MaxPool(kernel_size, stride, padding, ceil_mode=False):
+ return nn.MaxPool2D(kernel_size, stride, padding, ceil_mode=ceil_mode)
+
+
+class Concat(nn.Layer):
+ def __init__(self, dim=0):
+ super(Concat, self).__init__()
+ self.dim = dim
+
+ def forward(self, inputs):
+ return paddle.concat(inputs, axis=self.dim)
+
+ def extra_repr(self):
+ return 'dim={}'.format(self.dim)
+
+
+def _convert_attention_mask(attn_mask, dtype):
+ """
+ Convert the attention mask to the target dtype we expect.
+ Parameters:
+        attn_mask (Tensor, optional): A tensor used in multi-head attention
+            to prevent attention to some unwanted positions, usually the
+            paddings or the subsequent positions. Its shape is broadcast
+            to `[batch_size, n_head, sequence_length, sequence_length]`.
+            When the data type is bool, the unwanted positions have `False`
+            values and the others have `True` values. When the data type is
+            int, the unwanted positions have 0 values and the others have 1
+            values. When the data type is float, the unwanted positions have
+            `-INF` values and the others have 0 values. It can be None when
+            no positions need to be masked. Default None.
+ dtype (VarType): The target type of `attn_mask` we expect.
+ Returns:
+ Tensor: A Tensor with shape same as input `attn_mask`, with data type `dtype`.
+ """
+ return nn.layer.transformer._convert_attention_mask(attn_mask, dtype)
+
+
+class MultiHeadAttention(nn.Layer):
+ """
+    Attention maps queries and a set of key-value pairs to outputs, and
+    multi-head attention runs several attention functions in parallel to
+    jointly attend to information from different representation subspaces.
+
+    Please refer to `Attention Is All You Need <https://arxiv.org/abs/1706.03762>`_
+ for more details.
+
+ Parameters:
+ embed_dim (int): The expected feature size in the input and output.
+ num_heads (int): The number of heads in multi-head attention.
+ dropout (float, optional): The dropout probability used on attention
+ weights to drop some attention targets. 0 for no dropout. Default 0
+ kdim (int, optional): The feature size in key. If None, assumed equal to
+ `embed_dim`. Default None.
+ vdim (int, optional): The feature size in value. If None, assumed equal to
+ `embed_dim`. Default None.
+ need_weights (bool, optional): Indicate whether to return the attention
+ weights. Default False.
+
+ Examples:
+
+ .. code-block:: python
+
+ import paddle
+
+ # encoder input: [batch_size, sequence_length, d_model]
+ query = paddle.rand((2, 4, 128))
+ # self attention mask: [batch_size, num_heads, query_len, query_len]
+ attn_mask = paddle.rand((2, 2, 4, 4))
+ multi_head_attn = paddle.nn.MultiHeadAttention(128, 2)
+ output = multi_head_attn(query, None, None, attn_mask=attn_mask) # [2, 4, 128]
+ """
+
+ def __init__(self,
+ embed_dim,
+ num_heads,
+ dropout=0.,
+ kdim=None,
+ vdim=None,
+ need_weights=False):
+ super(MultiHeadAttention, self).__init__()
+ self.embed_dim = embed_dim
+ self.kdim = kdim if kdim is not None else embed_dim
+ self.vdim = vdim if vdim is not None else embed_dim
+ self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
+
+ self.num_heads = num_heads
+ self.dropout = dropout
+ self.need_weights = need_weights
+
+ self.head_dim = embed_dim // num_heads
+ assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
+
+ if self._qkv_same_embed_dim:
+ self.in_proj_weight = self.create_parameter(
+ shape=[embed_dim, 3 * embed_dim],
+ attr=None,
+ dtype=self._dtype,
+ is_bias=False)
+ self.in_proj_bias = self.create_parameter(
+ shape=[3 * embed_dim],
+ attr=None,
+ dtype=self._dtype,
+ is_bias=True)
+ else:
+ self.q_proj = nn.Linear(embed_dim, embed_dim)
+ self.k_proj = nn.Linear(self.kdim, embed_dim)
+ self.v_proj = nn.Linear(self.vdim, embed_dim)
+
+ self.out_proj = nn.Linear(embed_dim, embed_dim)
+ self._type_list = ('q_proj', 'k_proj', 'v_proj')
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ xavier_uniform_(p)
+ else:
+ constant_(p)
+
+ def compute_qkv(self, tensor, index):
+ if self._qkv_same_embed_dim:
+ tensor = F.linear(
+ x=tensor,
+ weight=self.in_proj_weight[:, index * self.embed_dim:(index + 1)
+ * self.embed_dim],
+ bias=self.in_proj_bias[index * self.embed_dim:(index + 1) *
+ self.embed_dim]
+ if self.in_proj_bias is not None else None)
+ else:
+ tensor = getattr(self, self._type_list[index])(tensor)
+ tensor = tensor.reshape(
+ [0, 0, self.num_heads, self.head_dim]).transpose([0, 2, 1, 3])
+ return tensor
+
+ def forward(self, query, key=None, value=None, attn_mask=None):
+ r"""
+ Applies multi-head attention to map queries and a set of key-value pairs
+ to outputs.
+
+ Parameters:
+ query (Tensor): The queries for multi-head attention. It is a
+ tensor with shape `[batch_size, query_length, embed_dim]`. The
+ data type should be float32 or float64.
+ key (Tensor, optional): The keys for multi-head attention. It is
+ a tensor with shape `[batch_size, key_length, kdim]`. The
+ data type should be float32 or float64. If None, use `query` as
+ `key`. Default None.
+ value (Tensor, optional): The values for multi-head attention. It
+ is a tensor with shape `[batch_size, value_length, vdim]`.
+ The data type should be float32 or float64. If None, use `query` as
+ `value`. Default None.
+            attn_mask (Tensor, optional): A tensor used in multi-head attention
+                to prevent attention to some unwanted positions, usually the
+                paddings or the subsequent positions. Its shape is broadcast
+                to `[batch_size, n_head, sequence_length, sequence_length]`.
+                When the data type is bool, the unwanted positions have `False`
+                values and the others have `True` values. When the data type is
+                int, the unwanted positions have 0 values and the others have 1
+                values. When the data type is float, the unwanted positions have
+                `-INF` values and the others have 0 values. It can be None when
+                no positions need to be masked. Default None.
+
+ Returns:
+            Tensor|tuple: A tensor with the same shape and data type as \
+                `query`, representing the attention output; or, if \
+                `need_weights` is True, a tuple whose second element is the \
+                attention weights tensor shaped \
+                `[batch_size, num_heads, query_length, key_length]`.
+ """
+ key = query if key is None else key
+ value = query if value is None else value
+        # compute q, k, v
+ q, k, v = (self.compute_qkv(t, i)
+ for i, t in enumerate([query, key, value]))
+
+ # scale dot product attention
+ product = paddle.matmul(x=q, y=k, transpose_y=True)
+ scaling = float(self.head_dim)**-0.5
+ product = product * scaling
+
+ if attn_mask is not None:
+ # Support bool or int mask
+ attn_mask = _convert_attention_mask(attn_mask, product.dtype)
+ product = product + attn_mask
+ weights = F.softmax(product)
+ if self.dropout:
+ weights = F.dropout(
+ weights,
+ self.dropout,
+ training=self.training,
+ mode="upscale_in_train")
+
+ out = paddle.matmul(weights, v)
+
+ # combine heads
+ out = paddle.transpose(out, perm=[0, 2, 1, 3])
+ out = paddle.reshape(x=out, shape=[0, 0, out.shape[2] * out.shape[3]])
+
+ # project to output
+ out = self.out_proj(out)
+
+ outs = [out]
+ if self.need_weights:
+ outs.append(weights)
+ return out if len(outs) == 1 else tuple(outs)
+
+
+@register
+class ConvMixer(nn.Layer):
+ def __init__(
+ self,
+ dim,
+ depth,
+ kernel_size=3, ):
+ super().__init__()
+ self.dim = dim
+ self.depth = depth
+ self.kernel_size = kernel_size
+
+ self.mixer = self.conv_mixer(dim, depth, kernel_size)
+
+ def forward(self, x):
+ return self.mixer(x)
+
+ @staticmethod
+ def conv_mixer(
+ dim,
+ depth,
+ kernel_size, ):
+ Seq, ActBn = nn.Sequential, lambda x: Seq(x, nn.GELU(), nn.BatchNorm2D(dim))
+ Residual = type('Residual', (Seq, ),
+ {'forward': lambda self, x: self[0](x) + x})
+ return Seq(*[
+ Seq(Residual(
+ ActBn(
+ nn.Conv2D(
+ dim, dim, kernel_size, groups=dim, padding="same"))),
+ ActBn(nn.Conv2D(dim, dim, 1))) for i in range(depth)
+ ])
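+# Usage sketch (hypothetical sizes): a ConvMixer stack operates on a
+# [N, dim, H, W] feature map and preserves its shape:
+#
+#   mixer = ConvMixer(dim=256, depth=4, kernel_size=3)
+#   y = mixer(paddle.rand([1, 256, 32, 32]))   # -> [1, 256, 32, 32]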
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__init__.py
new file mode 100644
index 000000000..83389c08e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__init__.py
@@ -0,0 +1,41 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import yolo_loss
+from . import iou_aware_loss
+from . import iou_loss
+from . import ssd_loss
+from . import fcos_loss
+from . import solov2_loss
+from . import ctfocal_loss
+from . import keypoint_loss
+from . import jde_loss
+from . import fairmot_loss
+from . import gfocal_loss
+from . import detr_loss
+from . import sparsercnn_loss
+
+from .yolo_loss import *
+from .iou_aware_loss import *
+from .iou_loss import *
+from .ssd_loss import *
+from .fcos_loss import *
+from .solov2_loss import *
+from .ctfocal_loss import *
+from .keypoint_loss import *
+from .jde_loss import *
+from .fairmot_loss import *
+from .gfocal_loss import *
+from .detr_loss import *
+from .sparsercnn_loss import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..f19224ba0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ctfocal_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ctfocal_loss.cpython-37.pyc
new file mode 100644
index 000000000..093a6a7a2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ctfocal_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/detr_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/detr_loss.cpython-37.pyc
new file mode 100644
index 000000000..998b42ff4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/detr_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fairmot_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fairmot_loss.cpython-37.pyc
new file mode 100644
index 000000000..1b031f7a9
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fairmot_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fcos_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fcos_loss.cpython-37.pyc
new file mode 100644
index 000000000..4aa367a0f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/fcos_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/gfocal_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/gfocal_loss.cpython-37.pyc
new file mode 100644
index 000000000..bea48b9bc
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/gfocal_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_aware_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_aware_loss.cpython-37.pyc
new file mode 100644
index 000000000..9f06e288c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_aware_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_loss.cpython-37.pyc
new file mode 100644
index 000000000..524270db2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/iou_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/jde_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/jde_loss.cpython-37.pyc
new file mode 100644
index 000000000..aae1c7aef
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/jde_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/keypoint_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/keypoint_loss.cpython-37.pyc
new file mode 100644
index 000000000..696189238
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/keypoint_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/solov2_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/solov2_loss.cpython-37.pyc
new file mode 100644
index 000000000..84bc2fcda
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/solov2_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/sparsercnn_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/sparsercnn_loss.cpython-37.pyc
new file mode 100644
index 000000000..c791e49cb
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/sparsercnn_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ssd_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ssd_loss.cpython-37.pyc
new file mode 100644
index 000000000..f2fd2b9c3
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/ssd_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/varifocal_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/varifocal_loss.cpython-37.pyc
new file mode 100644
index 000000000..bf1a6509e
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/varifocal_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/yolo_loss.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/yolo_loss.cpython-37.pyc
new file mode 100644
index 000000000..18a2ae010
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/__pycache__/yolo_loss.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ctfocal_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ctfocal_loss.py
new file mode 100644
index 000000000..dd00eb854
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ctfocal_loss.py
@@ -0,0 +1,68 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['CTFocalLoss']
+
+
+@register
+@serializable
+class CTFocalLoss(object):
+ """
+ CTFocalLoss: CornerNet & CenterNet Focal Loss
+ Args:
+ loss_weight (float): loss weight
+ gamma (float): gamma parameter for Focal Loss
+ """
+
+ def __init__(self, loss_weight=1., gamma=2.0):
+ self.loss_weight = loss_weight
+ self.gamma = gamma
+
+ def __call__(self, pred, target):
+ """
+        Calculate the loss.
+        Args:
+            pred (Tensor): heatmap prediction
+            target (Tensor): target heatmap for positive samples
+        Return:
+            ct_focal_loss (Tensor): Focal Loss used in CornerNet & CenterNet.
+                Note that the values in target lie in [0, 1]: a gaussian
+                kernel softens the penalty near each positive, and every
+                value in [0, 1) is treated as a negative example.
+ """
+ fg_map = paddle.cast(target == 1, 'float32')
+ fg_map.stop_gradient = True
+ bg_map = paddle.cast(target < 1, 'float32')
+ bg_map.stop_gradient = True
+
+ neg_weights = paddle.pow(1 - target, 4)
+ pos_loss = 0 - paddle.log(pred) * paddle.pow(1 - pred,
+ self.gamma) * fg_map
+
+ neg_loss = 0 - paddle.log(1 - pred) * paddle.pow(
+ pred, self.gamma) * neg_weights * bg_map
+ pos_loss = paddle.sum(pos_loss)
+ neg_loss = paddle.sum(neg_loss)
+
+ fg_num = paddle.sum(fg_map)
+ ct_focal_loss = (pos_loss + neg_loss) / (
+ fg_num + paddle.cast(fg_num == 0, 'float32'))
+ return ct_focal_loss * self.loss_weight
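+# Toy check (assumed values): for a single positive with pred = 0.9 and
+# gamma = 2, the positive term is -log(0.9) * (1 - 0.9)**2 ~= 0.00105;
+# negatives near a positive are further down-weighted by (1 - target)**4.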
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/detr_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/detr_loss.py
new file mode 100644
index 000000000..5a589d4a2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/detr_loss.py
@@ -0,0 +1,230 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+from .iou_loss import GIoULoss
+from ..transformers import bbox_cxcywh_to_xyxy, sigmoid_focal_loss
+
+__all__ = ['DETRLoss']
+
+
+@register
+class DETRLoss(nn.Layer):
+ __shared__ = ['num_classes', 'use_focal_loss']
+ __inject__ = ['matcher']
+
+ def __init__(self,
+ num_classes=80,
+ matcher='HungarianMatcher',
+ loss_coeff={
+ 'class': 1,
+ 'bbox': 5,
+ 'giou': 2,
+ 'no_object': 0.1,
+ 'mask': 1,
+ 'dice': 1
+ },
+ aux_loss=True,
+ use_focal_loss=False):
+ r"""
+ Args:
+ num_classes (int): The number of classes.
+ matcher (HungarianMatcher): It computes an assignment between the targets
+ and the predictions of the network.
+ loss_coeff (dict): The coefficient of loss.
+            aux_loss (bool): If True, losses from every decoder layer are also computed.
+ use_focal_loss (bool): Use focal loss or not.
+ """
+ super(DETRLoss, self).__init__()
+ self.num_classes = num_classes
+
+ self.matcher = matcher
+ self.loss_coeff = loss_coeff
+ self.aux_loss = aux_loss
+ self.use_focal_loss = use_focal_loss
+
+ if not self.use_focal_loss:
+ self.loss_coeff['class'] = paddle.full([num_classes + 1],
+ loss_coeff['class'])
+ self.loss_coeff['class'][-1] = loss_coeff['no_object']
+ self.giou_loss = GIoULoss()
+
+ def _get_loss_class(self, logits, gt_class, match_indices, bg_index,
+ num_gts):
+ # logits: [b, query, num_classes], gt_class: list[[n, 1]]
+ target_label = paddle.full(logits.shape[:2], bg_index, dtype='int64')
+ bs, num_query_objects = target_label.shape
+ if sum(len(a) for a in gt_class) > 0:
+ index, updates = self._get_index_updates(num_query_objects,
+ gt_class, match_indices)
+ target_label = paddle.scatter(
+ target_label.reshape([-1, 1]), index, updates.astype('int64'))
+ target_label = target_label.reshape([bs, num_query_objects])
+ if self.use_focal_loss:
+ target_label = F.one_hot(target_label,
+ self.num_classes + 1)[:, :, :-1]
+ return {
+ 'loss_class': self.loss_coeff['class'] * sigmoid_focal_loss(
+ logits, target_label, num_gts / num_query_objects)
+ if self.use_focal_loss else F.cross_entropy(
+ logits, target_label, weight=self.loss_coeff['class'])
+ }
+
+ def _get_loss_bbox(self, boxes, gt_bbox, match_indices, num_gts):
+ # boxes: [b, query, 4], gt_bbox: list[[n, 4]]
+ loss = dict()
+ if sum(len(a) for a in gt_bbox) == 0:
+ loss['loss_bbox'] = paddle.to_tensor([0.])
+ loss['loss_giou'] = paddle.to_tensor([0.])
+ return loss
+
+ src_bbox, target_bbox = self._get_src_target_assign(boxes, gt_bbox,
+ match_indices)
+ loss['loss_bbox'] = self.loss_coeff['bbox'] * F.l1_loss(
+ src_bbox, target_bbox, reduction='sum') / num_gts
+ loss['loss_giou'] = self.giou_loss(
+ bbox_cxcywh_to_xyxy(src_bbox), bbox_cxcywh_to_xyxy(target_bbox))
+ loss['loss_giou'] = loss['loss_giou'].sum() / num_gts
+ loss['loss_giou'] = self.loss_coeff['giou'] * loss['loss_giou']
+ return loss
+
+ def _get_loss_mask(self, masks, gt_mask, match_indices, num_gts):
+ # masks: [b, query, h, w], gt_mask: list[[n, H, W]]
+ loss = dict()
+ if sum(len(a) for a in gt_mask) == 0:
+ loss['loss_mask'] = paddle.to_tensor([0.])
+ loss['loss_dice'] = paddle.to_tensor([0.])
+ return loss
+
+ src_masks, target_masks = self._get_src_target_assign(masks, gt_mask,
+ match_indices)
+ src_masks = F.interpolate(
+ src_masks.unsqueeze(0),
+ size=target_masks.shape[-2:],
+ mode="bilinear")[0]
+ loss['loss_mask'] = self.loss_coeff['mask'] * F.sigmoid_focal_loss(
+ src_masks,
+ target_masks,
+ paddle.to_tensor(
+ [num_gts], dtype='float32'))
+ loss['loss_dice'] = self.loss_coeff['dice'] * self._dice_loss(
+ src_masks, target_masks, num_gts)
+ return loss
+
+ def _dice_loss(self, inputs, targets, num_gts):
+ inputs = F.sigmoid(inputs)
+ inputs = inputs.flatten(1)
+ targets = targets.flatten(1)
+ numerator = 2 * (inputs * targets).sum(1)
+ denominator = inputs.sum(-1) + targets.sum(-1)
+ loss = 1 - (numerator + 1) / (denominator + 1)
+ return loss.sum() / num_gts
+
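+    # Dice note for _dice_loss above: for flattened masks the loss is
+    #   1 - (2 * sum(pred * target) + 1) / (sum(pred) + sum(target) + 1)
+    # summed over instances and divided by num_gts; the +1 smoothing keeps
+    # the ratio defined for empty masks.
+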
+ def _get_loss_aux(self, boxes, logits, gt_bbox, gt_class, bg_index,
+ num_gts):
+ loss_class = []
+ loss_bbox = []
+ loss_giou = []
+ for aux_boxes, aux_logits in zip(boxes, logits):
+ match_indices = self.matcher(aux_boxes, aux_logits, gt_bbox,
+ gt_class)
+ loss_class.append(
+ self._get_loss_class(aux_logits, gt_class, match_indices,
+ bg_index, num_gts)['loss_class'])
+ loss_ = self._get_loss_bbox(aux_boxes, gt_bbox, match_indices,
+ num_gts)
+ loss_bbox.append(loss_['loss_bbox'])
+ loss_giou.append(loss_['loss_giou'])
+ loss = {
+ 'loss_class_aux': paddle.add_n(loss_class),
+ 'loss_bbox_aux': paddle.add_n(loss_bbox),
+ 'loss_giou_aux': paddle.add_n(loss_giou)
+ }
+ return loss
+
+ def _get_index_updates(self, num_query_objects, target, match_indices):
+ batch_idx = paddle.concat([
+ paddle.full_like(src, i) for i, (src, _) in enumerate(match_indices)
+ ])
+ src_idx = paddle.concat([src for (src, _) in match_indices])
+ src_idx += (batch_idx * num_query_objects)
+ target_assign = paddle.concat([
+ paddle.gather(
+ t, dst, axis=0) for t, (_, dst) in zip(target, match_indices)
+ ])
+ return src_idx, target_assign
+
+ def _get_src_target_assign(self, src, target, match_indices):
+ src_assign = paddle.concat([
+ paddle.gather(
+ t, I, axis=0) if len(I) > 0 else paddle.zeros([0, t.shape[-1]])
+ for t, (I, _) in zip(src, match_indices)
+ ])
+ target_assign = paddle.concat([
+ paddle.gather(
+ t, J, axis=0) if len(J) > 0 else paddle.zeros([0, t.shape[-1]])
+ for t, (_, J) in zip(target, match_indices)
+ ])
+ return src_assign, target_assign
+
+ def forward(self,
+ boxes,
+ logits,
+ gt_bbox,
+ gt_class,
+ masks=None,
+ gt_mask=None):
+ r"""
+ Args:
+ boxes (Tensor): [l, b, query, 4]
+ logits (Tensor): [l, b, query, num_classes]
+ gt_bbox (List(Tensor)): list[[n, 4]]
+ gt_class (List(Tensor)): list[[n, 1]]
+ masks (Tensor, optional): [b, query, h, w]
+ gt_mask (List(Tensor), optional): list[[n, H, W]]
+ """
+ match_indices = self.matcher(boxes[-1].detach(), logits[-1].detach(),
+ gt_bbox, gt_class)
+ num_gts = sum(len(a) for a in gt_bbox)
+ try:
+ # TODO: Paddle does not have a "paddle.distributed.is_initialized()"
+ num_gts = paddle.to_tensor([num_gts], dtype=paddle.float32)
+ paddle.distributed.all_reduce(num_gts)
+ num_gts = paddle.clip(
+ num_gts / paddle.distributed.get_world_size(), min=1).item()
+ except:
+ num_gts = max(num_gts.item(), 1)
+ total_loss = dict()
+ total_loss.update(
+ self._get_loss_class(logits[-1], gt_class, match_indices,
+ self.num_classes, num_gts))
+ total_loss.update(
+ self._get_loss_bbox(boxes[-1], gt_bbox, match_indices, num_gts))
+ if masks is not None and gt_mask is not None:
+ total_loss.update(
+ self._get_loss_mask(masks, gt_mask, match_indices, num_gts))
+
+ if self.aux_loss:
+ total_loss.update(
+ self._get_loss_aux(boxes[:-1], logits[:-1], gt_bbox, gt_class,
+ self.num_classes, num_gts))
+
+ return total_loss
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fairmot_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fairmot_loss.py
new file mode 100644
index 000000000..e24ff33fe
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fairmot_loss.py
@@ -0,0 +1,41 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+from paddle.nn.initializer import Constant
+from ppdet.core.workspace import register
+
+__all__ = ['FairMOTLoss']
+
+
+@register
+class FairMOTLoss(nn.Layer):
+ def __init__(self):
+ super(FairMOTLoss, self).__init__()
+ self.det_weight = self.create_parameter(
+ shape=[1], default_initializer=Constant(-1.85))
+ self.reid_weight = self.create_parameter(
+ shape=[1], default_initializer=Constant(-1.05))
+
+ def forward(self, det_loss, reid_loss):
+        loss = paddle.exp(-self.det_weight) * det_loss + \
+            paddle.exp(-self.reid_weight) * reid_loss + \
+            (self.det_weight + self.reid_weight)
+ loss *= 0.5
+ return {'loss': loss}
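+# Design note (added comment): the two learned scalars implement
+# uncertainty-based task weighting (Kendall et al., CVPR 2018): each task
+# loss is scaled by exp(-s), while the additive s term keeps the learned
+# weights from collapsing to zero.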
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fcos_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fcos_loss.py
new file mode 100644
index 000000000..c8d600573
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/fcos_loss.py
@@ -0,0 +1,225 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+from ppdet.modeling import ops
+
+__all__ = ['FCOSLoss']
+
+
+def flatten_tensor(inputs, channel_first=False):
+ """
+ Flatten a Tensor
+ Args:
+ inputs (Tensor): 4-D Tensor with shape [N, C, H, W] or [N, H, W, C]
+        channel_first (bool): If True, the dimension order of the input is
+            [N, C, H, W]; otherwise it is [N, H, W, C]
+ Return:
+ output_channel_last (Tensor): The flattened Tensor in channel_last style
+ """
+ if channel_first:
+ input_channel_last = paddle.transpose(inputs, perm=[0, 2, 3, 1])
+ else:
+ input_channel_last = inputs
+ output_channel_last = paddle.flatten(
+ input_channel_last, start_axis=0, stop_axis=2)
+ return output_channel_last
+
+
+@register
+class FCOSLoss(nn.Layer):
+ """
+ FCOSLoss
+ Args:
+ loss_alpha (float): alpha in focal loss
+ loss_gamma (float): gamma in focal loss
+ iou_loss_type (str): location loss type, IoU/GIoU/LINEAR_IoU
+ reg_weights (float): weight for location loss
+ """
+
+ def __init__(self,
+ loss_alpha=0.25,
+ loss_gamma=2.0,
+ iou_loss_type="giou",
+ reg_weights=1.0):
+ super(FCOSLoss, self).__init__()
+ self.loss_alpha = loss_alpha
+ self.loss_gamma = loss_gamma
+ self.iou_loss_type = iou_loss_type
+ self.reg_weights = reg_weights
+
+ def __iou_loss(self, pred, targets, positive_mask, weights=None):
+ """
+ Calculate the loss for location prediction
+ Args:
+ pred (Tensor): bounding boxes prediction
+ targets (Tensor): targets for positive samples
+ positive_mask (Tensor): mask of positive samples
+            weights (Tensor): weights for each positive sample
+ Return:
+ loss (Tensor): location loss
+ """
+ plw = pred[:, 0] * positive_mask
+ pth = pred[:, 1] * positive_mask
+ prw = pred[:, 2] * positive_mask
+ pbh = pred[:, 3] * positive_mask
+
+ tlw = targets[:, 0] * positive_mask
+ tth = targets[:, 1] * positive_mask
+ trw = targets[:, 2] * positive_mask
+ tbh = targets[:, 3] * positive_mask
+ tlw.stop_gradient = True
+ trw.stop_gradient = True
+ tth.stop_gradient = True
+ tbh.stop_gradient = True
+
+ ilw = paddle.minimum(plw, tlw)
+ irw = paddle.minimum(prw, trw)
+ ith = paddle.minimum(pth, tth)
+ ibh = paddle.minimum(pbh, tbh)
+
+ clw = paddle.maximum(plw, tlw)
+ crw = paddle.maximum(prw, trw)
+ cth = paddle.maximum(pth, tth)
+ cbh = paddle.maximum(pbh, tbh)
+
+ area_predict = (plw + prw) * (pth + pbh)
+ area_target = (tlw + trw) * (tth + tbh)
+ area_inter = (ilw + irw) * (ith + ibh)
+ ious = (area_inter + 1.0) / (
+ area_predict + area_target - area_inter + 1.0)
+ ious = ious * positive_mask
+
+ if self.iou_loss_type.lower() == "linear_iou":
+ loss = 1.0 - ious
+ elif self.iou_loss_type.lower() == "giou":
+ area_uniou = area_predict + area_target - area_inter
+ area_circum = (clw + crw) * (cth + cbh) + 1e-7
+ giou = ious - (area_circum - area_uniou) / area_circum
+ loss = 1.0 - giou
+ elif self.iou_loss_type.lower() == "iou":
+ loss = 0.0 - paddle.log(ious)
+ else:
+ raise KeyError
+ if weights is not None:
+ loss = loss * weights
+ return loss
+
+ def forward(self, cls_logits, bboxes_reg, centerness, tag_labels,
+ tag_bboxes, tag_center):
+ """
+ Calculate the loss for classification, location and centerness
+ Args:
+ cls_logits (list): list of Tensor, which is predicted
+ score for all anchor points with shape [N, M, C]
+ bboxes_reg (list): list of Tensor, which is predicted
+ offsets for all anchor points with shape [N, M, 4]
+ centerness (list): list of Tensor, which is predicted
+ centerness for all anchor points with shape [N, M, 1]
+ tag_labels (list): list of Tensor, which is category
+ targets for each anchor point
+ tag_bboxes (list): list of Tensor, which is bounding
+ boxes targets for positive samples
+ tag_center (list): list of Tensor, which is centerness
+ targets for positive samples
+ Return:
+            loss (dict): dict containing classification, bounding box and
+                centerness losses
+ """
+ cls_logits_flatten_list = []
+ bboxes_reg_flatten_list = []
+ centerness_flatten_list = []
+ tag_labels_flatten_list = []
+ tag_bboxes_flatten_list = []
+ tag_center_flatten_list = []
+ num_lvl = len(cls_logits)
+ for lvl in range(num_lvl):
+ cls_logits_flatten_list.append(
+ flatten_tensor(cls_logits[lvl], True))
+ bboxes_reg_flatten_list.append(
+ flatten_tensor(bboxes_reg[lvl], True))
+ centerness_flatten_list.append(
+ flatten_tensor(centerness[lvl], True))
+
+ tag_labels_flatten_list.append(
+ flatten_tensor(tag_labels[lvl], False))
+ tag_bboxes_flatten_list.append(
+ flatten_tensor(tag_bboxes[lvl], False))
+ tag_center_flatten_list.append(
+ flatten_tensor(tag_center[lvl], False))
+
+ cls_logits_flatten = paddle.concat(cls_logits_flatten_list, axis=0)
+ bboxes_reg_flatten = paddle.concat(bboxes_reg_flatten_list, axis=0)
+ centerness_flatten = paddle.concat(centerness_flatten_list, axis=0)
+
+ tag_labels_flatten = paddle.concat(tag_labels_flatten_list, axis=0)
+ tag_bboxes_flatten = paddle.concat(tag_bboxes_flatten_list, axis=0)
+ tag_center_flatten = paddle.concat(tag_center_flatten_list, axis=0)
+ tag_labels_flatten.stop_gradient = True
+ tag_bboxes_flatten.stop_gradient = True
+ tag_center_flatten.stop_gradient = True
+
+ mask_positive_bool = tag_labels_flatten > 0
+ mask_positive_bool.stop_gradient = True
+ mask_positive_float = paddle.cast(mask_positive_bool, dtype="float32")
+ mask_positive_float.stop_gradient = True
+
+ num_positive_fp32 = paddle.sum(mask_positive_float)
+ num_positive_fp32.stop_gradient = True
+ num_positive_int32 = paddle.cast(num_positive_fp32, dtype="int32")
+ num_positive_int32 = num_positive_int32 * 0 + 1
+ num_positive_int32.stop_gradient = True
+
+ normalize_sum = paddle.sum(tag_center_flatten * mask_positive_float)
+ normalize_sum.stop_gradient = True
+
+ # 1. cls_logits: sigmoid_focal_loss
+ # expand onehot labels
+ num_classes = cls_logits_flatten.shape[-1]
+ tag_labels_flatten = paddle.squeeze(tag_labels_flatten, axis=-1)
+ tag_labels_flatten_bin = F.one_hot(
+ tag_labels_flatten, num_classes=1 + num_classes)
+ tag_labels_flatten_bin = tag_labels_flatten_bin[:, 1:]
+ # sigmoid_focal_loss
+ cls_loss = F.sigmoid_focal_loss(
+ cls_logits_flatten, tag_labels_flatten_bin) / num_positive_fp32
+
+ # 2. bboxes_reg: giou_loss
+ mask_positive_float = paddle.squeeze(mask_positive_float, axis=-1)
+ tag_center_flatten = paddle.squeeze(tag_center_flatten, axis=-1)
+ reg_loss = self.__iou_loss(
+ bboxes_reg_flatten,
+ tag_bboxes_flatten,
+ mask_positive_float,
+ weights=tag_center_flatten)
+ reg_loss = reg_loss * mask_positive_float / normalize_sum
+
+ # 3. centerness: sigmoid_cross_entropy_with_logits_loss
+ centerness_flatten = paddle.squeeze(centerness_flatten, axis=-1)
+ ctn_loss = ops.sigmoid_cross_entropy_with_logits(centerness_flatten,
+ tag_center_flatten)
+ ctn_loss = ctn_loss * mask_positive_float / num_positive_fp32
+
+ loss_all = {
+ "loss_centerness": paddle.sum(ctn_loss),
+ "loss_cls": paddle.sum(cls_loss),
+ "loss_box": paddle.sum(reg_loss)
+ }
+ return loss_all
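
For intuition, here is a minimal sketch of the LTRB-style IoU that `__iou_loss` computes above: predictions and targets are (left, top, right, bottom) distances from a shared anchor point, so the intersection comes from the elementwise minimum of the four sides. The numbers below are toy values, not model outputs.

```
import numpy as np

pred = np.array([2.0, 2.0, 2.0, 2.0])    # predicted l, t, r, b distances
target = np.array([1.0, 3.0, 3.0, 1.0])  # target l, t, r, b distances

inter = np.minimum(pred, target)          # per-side overlap extents
area_pred = (pred[0] + pred[2]) * (pred[1] + pred[3])            # (l+r)*(t+b) = 16
area_target = (target[0] + target[2]) * (target[1] + target[3])  # 16
area_inter = (inter[0] + inter[2]) * (inter[1] + inter[3])       # 3*3 = 9
iou = (area_inter + 1.0) / (area_pred + area_target - area_inter + 1.0)
print(iou)  # 10/24 ~= 0.417, so the "linear_iou" loss is 1 - 0.417
```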
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/gfocal_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/gfocal_loss.py
new file mode 100644
index 000000000..37e27f084
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/gfocal_loss.py
@@ -0,0 +1,217 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/losses/gfocal_loss.py
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling import ops
+
+__all__ = ['QualityFocalLoss', 'DistributionFocalLoss']
+
+
+def quality_focal_loss(pred, target, beta=2.0, use_sigmoid=True):
+ """
+    Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning
+    Qualified and Distributed Bounding Boxes for Dense Object Detection
+    <https://arxiv.org/abs/2006.04388>`_.
+ Args:
+ pred (Tensor): Predicted joint representation of classification
+ and quality (IoU) estimation with shape (N, C), C is the number of
+ classes.
+ target (tuple([Tensor])): Target category label with shape (N,)
+ and target quality label with shape (N,).
+ beta (float): The beta parameter for calculating the modulating factor.
+ Defaults to 2.0.
+ Returns:
+ Tensor: Loss tensor with shape (N,).
+ """
+ assert len(target) == 2, """target for QFL must be a tuple of two elements,
+ including category label and quality label, respectively"""
+ # label denotes the category id, score denotes the quality score
+ label, score = target
+ if use_sigmoid:
+ func = F.binary_cross_entropy_with_logits
+ else:
+ func = F.binary_cross_entropy
+
+ # negatives are supervised by 0 quality score
+ pred_sigmoid = F.sigmoid(pred) if use_sigmoid else pred
+ scale_factor = pred_sigmoid
+ zerolabel = paddle.zeros(pred.shape, dtype='float32')
+ loss = func(pred, zerolabel, reduction='none') * scale_factor.pow(beta)
+
+ # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
+ bg_class_ind = pred.shape[1]
+ pos = paddle.logical_and((label >= 0),
+ (label < bg_class_ind)).nonzero().squeeze(1)
+ if pos.shape[0] == 0:
+ return loss.sum(axis=1)
+ pos_label = paddle.gather(label, pos, axis=0)
+ pos_mask = np.zeros(pred.shape, dtype=np.int32)
+ pos_mask[pos.numpy(), pos_label.numpy()] = 1
+ pos_mask = paddle.to_tensor(pos_mask, dtype='bool')
+ score = score.unsqueeze(-1).expand([-1, pred.shape[1]]).cast('float32')
+ # positives are supervised by bbox quality (IoU) score
+ scale_factor_new = score - pred_sigmoid
+
+ loss_pos = func(
+ pred, score, reduction='none') * scale_factor_new.abs().pow(beta)
+ loss = loss * paddle.logical_not(pos_mask) + loss_pos * pos_mask
+ loss = loss.sum(axis=1)
+ return loss
+
+
+def distribution_focal_loss(pred, label):
+ """Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning
+ Qualified and Distributed Bounding Boxes for Dense Object Detection
+ `_.
+ Args:
+ pred (Tensor): Predicted general distribution of bounding boxes
+ (before softmax) with shape (N, n+1), n is the max value of the
+ integral set `{0, ..., n}` in paper.
+ label (Tensor): Target distance label for bounding boxes with
+ shape (N,).
+ Returns:
+ Tensor: Loss tensor with shape (N,).
+ """
+ dis_left = label.cast('int64')
+ dis_right = dis_left + 1
+ weight_left = dis_right.cast('float32') - label
+ weight_right = label - dis_left.cast('float32')
+ loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
+ + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
+ return loss
+
+
+@register
+@serializable
+class QualityFocalLoss(nn.Layer):
+ r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss:
+ Learning Qualified and Distributed Bounding Boxes for Dense Object
+ Detection `_.
+ Args:
+ use_sigmoid (bool): Whether sigmoid operation is conducted in QFL.
+ Defaults to True.
+ beta (float): The beta parameter for calculating the modulating factor.
+ Defaults to 2.0.
+ reduction (str): Options are "none", "mean" and "sum".
+ loss_weight (float): Loss weight of current loss.
+ """
+
+ def __init__(self,
+ use_sigmoid=True,
+ beta=2.0,
+ reduction='mean',
+ loss_weight=1.0):
+ super(QualityFocalLoss, self).__init__()
+ self.use_sigmoid = use_sigmoid
+ self.beta = beta
+ assert reduction in ('none', 'mean', 'sum')
+ self.reduction = reduction
+ self.loss_weight = loss_weight
+
+ def forward(self, pred, target, weight=None, avg_factor=None):
+ """Forward function.
+ Args:
+ pred (Tensor): Predicted joint representation of
+ classification and quality (IoU) estimation with shape (N, C),
+ C is the number of classes.
+ target (tuple([Tensor])): Target category label with shape
+ (N,) and target quality label with shape (N,).
+ weight (Tensor, optional): The weight of loss for each
+ prediction. Defaults to None.
+ avg_factor (int, optional): Average factor that is used to average
+ the loss. Defaults to None.
+ """
+
+ loss = self.loss_weight * quality_focal_loss(
+ pred, target, beta=self.beta, use_sigmoid=self.use_sigmoid)
+
+ if weight is not None:
+ loss = loss * weight
+ if avg_factor is None:
+ if self.reduction == 'none':
+ return loss
+ elif self.reduction == 'mean':
+ return loss.mean()
+ elif self.reduction == 'sum':
+ return loss.sum()
+ else:
+ # if reduction is mean, then average the loss by avg_factor
+ if self.reduction == 'mean':
+ loss = loss.sum() / avg_factor
+ # if reduction is 'none', then do nothing, otherwise raise an error
+ elif self.reduction != 'none':
+ raise ValueError(
+ 'avg_factor can not be used with reduction="sum"')
+ return loss
+
+
+@register
+@serializable
+class DistributionFocalLoss(nn.Layer):
+ """Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss:
+ Learning Qualified and Distributed Bounding Boxes for Dense Object
+ Detection `_.
+ Args:
+ reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
+ loss_weight (float): Loss weight of current loss.
+ """
+
+ def __init__(self, reduction='mean', loss_weight=1.0):
+ super(DistributionFocalLoss, self).__init__()
+ assert reduction in ('none', 'mean', 'sum')
+ self.reduction = reduction
+ self.loss_weight = loss_weight
+
+ def forward(self, pred, target, weight=None, avg_factor=None):
+ """Forward function.
+ Args:
+ pred (Tensor): Predicted general distribution of bounding
+ boxes (before softmax) with shape (N, n+1), n is the max value
+ of the integral set `{0, ..., n}` in paper.
+ target (Tensor): Target distance label for bounding boxes
+ with shape (N,).
+ weight (Tensor, optional): The weight of loss for each
+ prediction. Defaults to None.
+ avg_factor (int, optional): Average factor that is used to average
+ the loss. Defaults to None.
+ """
+ loss = self.loss_weight * distribution_focal_loss(pred, target)
+ if weight is not None:
+ loss = loss * weight
+ if avg_factor is None:
+ if self.reduction == 'none':
+ return loss
+ elif self.reduction == 'mean':
+ return loss.mean()
+ elif self.reduction == 'sum':
+ return loss.sum()
+ else:
+ # if reduction is mean, then average the loss by avg_factor
+ if self.reduction == 'mean':
+ loss = loss.sum() / avg_factor
+ # if reduction is 'none', then do nothing, otherwise raise an error
+ elif self.reduction != 'none':
+ raise ValueError(
+ 'avg_factor can not be used with reduction="sum"')
+ return loss
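
As a quick illustration of `distribution_focal_loss` above: a continuous regression target is split between its two neighboring integer bins, and each bin's cross entropy is weighted by the target's distance to the other bin. A minimal sketch with toy shapes and values:

```
import paddle
import paddle.nn.functional as F

pred = paddle.randn([1, 8])          # logits over the integral set {0, ..., 7}
label = paddle.to_tensor([4.3])      # continuous target between bins 4 and 5

dis_left = label.cast('int64')                # bin 4
dis_right = dis_left + 1                      # bin 5
w_left = dis_right.cast('float32') - label    # 0.7
w_right = label - dis_left.cast('float32')    # 0.3
loss = F.cross_entropy(pred, dis_left, reduction='none') * w_left \
    + F.cross_entropy(pred, dis_right, reduction='none') * w_right
print(loss.shape)  # [1]
```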
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_aware_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_aware_loss.py
new file mode 100644
index 000000000..4a9e904dd
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_aware_loss.py
@@ -0,0 +1,47 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from .iou_loss import IouLoss
+from ..bbox_utils import bbox_iou
+
+
+@register
+@serializable
+class IouAwareLoss(IouLoss):
+ """
+    iou aware loss, see https://arxiv.org/abs/1912.05992
+    Args:
+        loss_weight (float): iou aware loss weight, default is 1.0
+        giou (bool): whether giou is used when computing the target iou
+        diou (bool): whether diou is used when computing the target iou
+        ciou (bool): whether ciou is used when computing the target iou
+ """
+
+ def __init__(self, loss_weight=1.0, giou=False, diou=False, ciou=False):
+ super(IouAwareLoss, self).__init__(
+ loss_weight=loss_weight, giou=giou, diou=diou, ciou=ciou)
+
+ def __call__(self, ioup, pbox, gbox):
+ iou = bbox_iou(
+ pbox, gbox, giou=self.giou, diou=self.diou, ciou=self.ciou)
+ iou.stop_gradient = True
+ loss_iou_aware = F.binary_cross_entropy_with_logits(
+ ioup, iou, reduction='none')
+ loss_iou_aware = loss_iou_aware * self.loss_weight
+ return loss_iou_aware
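
The IoU-aware branch above regresses an extra logit toward the IoU between each predicted box and its ground truth. A minimal sketch with stand-in tensors (in the real loss, `iou` comes from `bbox_iou(pbox, gbox)`):

```
import paddle
import paddle.nn.functional as F

ioup = paddle.randn([4, 1])                     # predicted IoU logits
iou = paddle.uniform([4, 1], min=0.0, max=1.0)  # stand-in for bbox_iou(pbox, gbox)
iou.stop_gradient = True                        # the IoU is a target, not a gradient path
loss_iou_aware = F.binary_cross_entropy_with_logits(ioup, iou, reduction='none')
```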
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_loss.py
new file mode 100644
index 000000000..9b8da6c05
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/iou_loss.py
@@ -0,0 +1,210 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import numpy as np
+
+import paddle
+
+from ppdet.core.workspace import register, serializable
+from ..bbox_utils import bbox_iou
+
+__all__ = ['IouLoss', 'GIoULoss', 'DIouLoss']
+
+
+@register
+@serializable
+class IouLoss(object):
+ """
+ iou loss, see https://arxiv.org/abs/1908.03851
+ loss = 1.0 - iou * iou
+ Args:
+        loss_weight (float): iou loss weight, default is 2.5
+        giou (bool): whether to use giou loss, default is False
+        diou (bool): whether to use diou loss, default is False
+        ciou (bool): whether to use ciou loss, default is False
+        loss_square (bool): whether to square the iou term, default is True
+ """
+
+ def __init__(self,
+ loss_weight=2.5,
+ giou=False,
+ diou=False,
+ ciou=False,
+ loss_square=True):
+ self.loss_weight = loss_weight
+ self.giou = giou
+ self.diou = diou
+ self.ciou = ciou
+ self.loss_square = loss_square
+
+ def __call__(self, pbox, gbox):
+ iou = bbox_iou(
+ pbox, gbox, giou=self.giou, diou=self.diou, ciou=self.ciou)
+ if self.loss_square:
+ loss_iou = 1 - iou * iou
+ else:
+ loss_iou = 1 - iou
+
+ loss_iou = loss_iou * self.loss_weight
+ return loss_iou
+
+
+@register
+@serializable
+class GIoULoss(object):
+ """
+ Generalized Intersection over Union, see https://arxiv.org/abs/1902.09630
+ Args:
+ loss_weight (float): giou loss weight, default as 1
+ eps (float): epsilon to avoid divide by zero, default as 1e-10
+ reduction (string): Options are "none", "mean" and "sum". default as none
+ """
+
+ def __init__(self, loss_weight=1., eps=1e-10, reduction='none'):
+ self.loss_weight = loss_weight
+ self.eps = eps
+ assert reduction in ('none', 'mean', 'sum')
+ self.reduction = reduction
+
+ def bbox_overlap(self, box1, box2, eps=1e-10):
+ """calculate the iou of box1 and box2
+ Args:
+ box1 (Tensor): box1 with the shape (..., 4)
+            box2 (Tensor): box2 with the shape (..., 4)
+ eps (float): epsilon to avoid divide by zero
+ Return:
+ iou (Tensor): iou of box1 and box2
+ overlap (Tensor): overlap of box1 and box2
+ union (Tensor): union of box1 and box2
+ """
+ x1, y1, x2, y2 = box1
+ x1g, y1g, x2g, y2g = box2
+
+ xkis1 = paddle.maximum(x1, x1g)
+ ykis1 = paddle.maximum(y1, y1g)
+ xkis2 = paddle.minimum(x2, x2g)
+ ykis2 = paddle.minimum(y2, y2g)
+ w_inter = (xkis2 - xkis1).clip(0)
+ h_inter = (ykis2 - ykis1).clip(0)
+ overlap = w_inter * h_inter
+
+ area1 = (x2 - x1) * (y2 - y1)
+ area2 = (x2g - x1g) * (y2g - y1g)
+ union = area1 + area2 - overlap + eps
+ iou = overlap / union
+
+ return iou, overlap, union
+
+ def __call__(self, pbox, gbox, iou_weight=1., loc_reweight=None):
+ x1, y1, x2, y2 = paddle.split(pbox, num_or_sections=4, axis=-1)
+ x1g, y1g, x2g, y2g = paddle.split(gbox, num_or_sections=4, axis=-1)
+ box1 = [x1, y1, x2, y2]
+ box2 = [x1g, y1g, x2g, y2g]
+ iou, overlap, union = self.bbox_overlap(box1, box2, self.eps)
+ xc1 = paddle.minimum(x1, x1g)
+ yc1 = paddle.minimum(y1, y1g)
+ xc2 = paddle.maximum(x2, x2g)
+ yc2 = paddle.maximum(y2, y2g)
+
+ area_c = (xc2 - xc1) * (yc2 - yc1) + self.eps
+ miou = iou - ((area_c - union) / area_c)
+ if loc_reweight is not None:
+ loc_reweight = paddle.reshape(loc_reweight, shape=(-1, 1))
+ loc_thresh = 0.9
+ giou = 1 - (1 - loc_thresh
+ ) * miou - loc_thresh * miou * loc_reweight
+ else:
+ giou = 1 - miou
+ if self.reduction == 'none':
+ loss = giou
+ elif self.reduction == 'sum':
+ loss = paddle.sum(giou * iou_weight)
+ else:
+ loss = paddle.mean(giou * iou_weight)
+ return loss * self.loss_weight
+
+
+@register
+@serializable
+class DIouLoss(GIoULoss):
+ """
+ Distance-IoU Loss, see https://arxiv.org/abs/1911.08287
+ Args:
+ loss_weight (float): giou loss weight, default as 1
+ eps (float): epsilon to avoid divide by zero, default as 1e-10
+ use_complete_iou_loss (bool): whether to use complete iou loss
+ """
+
+ def __init__(self, loss_weight=1., eps=1e-10, use_complete_iou_loss=True):
+ super(DIouLoss, self).__init__(loss_weight=loss_weight, eps=eps)
+ self.use_complete_iou_loss = use_complete_iou_loss
+
+ def __call__(self, pbox, gbox, iou_weight=1.):
+ x1, y1, x2, y2 = paddle.split(pbox, num_or_sections=4, axis=-1)
+ x1g, y1g, x2g, y2g = paddle.split(gbox, num_or_sections=4, axis=-1)
+ cx = (x1 + x2) / 2
+ cy = (y1 + y2) / 2
+ w = x2 - x1
+ h = y2 - y1
+
+ cxg = (x1g + x2g) / 2
+ cyg = (y1g + y2g) / 2
+ wg = x2g - x1g
+ hg = y2g - y1g
+
+ x2 = paddle.maximum(x1, x2)
+ y2 = paddle.maximum(y1, y2)
+
+ # A and B
+ xkis1 = paddle.maximum(x1, x1g)
+ ykis1 = paddle.maximum(y1, y1g)
+ xkis2 = paddle.minimum(x2, x2g)
+ ykis2 = paddle.minimum(y2, y2g)
+
+ # A or B
+ xc1 = paddle.minimum(x1, x1g)
+ yc1 = paddle.minimum(y1, y1g)
+ xc2 = paddle.maximum(x2, x2g)
+ yc2 = paddle.maximum(y2, y2g)
+
+ intsctk = (xkis2 - xkis1) * (ykis2 - ykis1)
+ intsctk = intsctk * paddle.greater_than(
+ xkis2, xkis1) * paddle.greater_than(ykis2, ykis1)
+ unionk = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g
+ ) - intsctk + self.eps
+ iouk = intsctk / unionk
+
+ # DIOU term
+ dist_intersection = (cx - cxg) * (cx - cxg) + (cy - cyg) * (cy - cyg)
+ dist_union = (xc2 - xc1) * (xc2 - xc1) + (yc2 - yc1) * (yc2 - yc1)
+ diou_term = (dist_intersection + self.eps) / (dist_union + self.eps)
+
+ # CIOU term
+ ciou_term = 0
+ if self.use_complete_iou_loss:
+ ar_gt = wg / hg
+ ar_pred = w / h
+ arctan = paddle.atan(ar_gt) - paddle.atan(ar_pred)
+ ar_loss = 4. / np.pi / np.pi * arctan * arctan
+ alpha = ar_loss / (1 - iouk + ar_loss + self.eps)
+ alpha.stop_gradient = True
+ ciou_term = alpha * ar_loss
+
+ diou = paddle.mean((1 - iouk + ciou_term + diou_term) * iou_weight)
+
+ return diou * self.loss_weight
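
A worked example of the GIoU term used by `GIoULoss` above, on two hand-picked xyxy boxes (toy numbers only):

```
import numpy as np

b1 = np.array([0.0, 0.0, 2.0, 2.0])   # area 4
b2 = np.array([1.0, 1.0, 3.0, 3.0])   # area 4

w_inter = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))  # 1.0
h_inter = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))  # 1.0
overlap = w_inter * h_inter                                # 1.0
union = 4.0 + 4.0 - overlap                                # 7.0
iou = overlap / union                                      # ~0.143
area_c = (3.0 - 0.0) * (3.0 - 0.0)       # smallest enclosing box, area 9
giou = iou - (area_c - union) / area_c   # ~0.143 - 0.222 = -0.079
print(1.0 - giou)                        # GIoU loss ~1.079
```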
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/jde_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/jde_loss.py
new file mode 100644
index 000000000..5c3b5a615
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/jde_loss.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+
+__all__ = ['JDEDetectionLoss', 'JDEEmbeddingLoss', 'JDELoss']
+
+
+@register
+class JDEDetectionLoss(nn.Layer):
+ __shared__ = ['num_classes']
+
+ def __init__(self, num_classes=1, for_mot=True):
+ super(JDEDetectionLoss, self).__init__()
+ self.num_classes = num_classes
+ self.for_mot = for_mot
+
+ def det_loss(self, p_det, anchor, t_conf, t_box):
+ pshape = paddle.shape(p_det)
+ pshape.stop_gradient = True
+ nB, nGh, nGw = pshape[0], pshape[-2], pshape[-1]
+ nA = len(anchor)
+ p_det = paddle.reshape(
+ p_det, [nB, nA, self.num_classes + 5, nGh, nGw]).transpose(
+ (0, 1, 3, 4, 2))
+
+ # 1. loss_conf: cross_entropy
+ p_conf = p_det[:, :, :, :, 4:6]
+ p_conf_flatten = paddle.reshape(p_conf, [-1, 2])
+ t_conf_flatten = t_conf.flatten()
+ t_conf_flatten = paddle.cast(t_conf_flatten, dtype="int64")
+ t_conf_flatten.stop_gradient = True
+ loss_conf = F.cross_entropy(
+ p_conf_flatten, t_conf_flatten, ignore_index=-1, reduction='mean')
+ loss_conf.stop_gradient = False
+
+ # 2. loss_box: smooth_l1_loss
+ p_box = p_det[:, :, :, :, :4]
+ p_box_flatten = paddle.reshape(p_box, [-1, 4])
+ t_box_flatten = paddle.reshape(t_box, [-1, 4])
+ fg_inds = paddle.nonzero(t_conf_flatten > 0).flatten()
+ if fg_inds.numel() > 0:
+ reg_delta = paddle.gather(p_box_flatten, fg_inds)
+ reg_target = paddle.gather(t_box_flatten, fg_inds)
+ else:
+ reg_delta = paddle.to_tensor([0, 0, 0, 0], dtype='float32')
+ reg_delta.stop_gradient = False
+ reg_target = paddle.to_tensor([0, 0, 0, 0], dtype='float32')
+ reg_target.stop_gradient = True
+ loss_box = F.smooth_l1_loss(
+ reg_delta, reg_target, reduction='mean', delta=1.0)
+ loss_box.stop_gradient = False
+
+ return loss_conf, loss_box
+
+ def forward(self, det_outs, targets, anchors):
+ """
+ Args:
+ det_outs (list[Tensor]): output from detection head, each one
+ is a 4-D Tensor with shape [N, C, H, W].
+ targets (dict): contains 'im_id', 'gt_bbox', 'gt_ide', 'image',
+ 'im_shape', 'scale_factor' and 'tbox', 'tconf', 'tide' of
+ each FPN level.
+            anchors (list[list]): anchor settings of the JDE model, with N rows
+                and M columns, where N is the number of anchor levels (FPN
+                levels) and M is the number of anchor scales per level.
+ """
+ assert len(det_outs) == len(anchors)
+ loss_confs = []
+ loss_boxes = []
+ for i, (p_det, anchor) in enumerate(zip(det_outs, anchors)):
+ t_conf = targets['tconf{}'.format(i)]
+ t_box = targets['tbox{}'.format(i)]
+
+ loss_conf, loss_box = self.det_loss(p_det, anchor, t_conf, t_box)
+ loss_confs.append(loss_conf)
+ loss_boxes.append(loss_box)
+ if self.for_mot:
+ return {'loss_confs': loss_confs, 'loss_boxes': loss_boxes}
+ else:
+ jde_conf_losses = sum(loss_confs)
+ jde_box_losses = sum(loss_boxes)
+ jde_det_losses = {
+ "loss_conf": jde_conf_losses,
+ "loss_box": jde_box_losses,
+ "loss": jde_conf_losses + jde_box_losses,
+ }
+ return jde_det_losses
+
+
+@register
+class JDEEmbeddingLoss(nn.Layer):
+ def __init__(self, ):
+ super(JDEEmbeddingLoss, self).__init__()
+ self.phony = self.create_parameter(shape=[1], dtype="float32")
+
+ def emb_loss(self, p_ide, t_conf, t_ide, emb_scale, classifier):
+ emb_dim = p_ide.shape[1]
+ p_ide = p_ide.transpose((0, 2, 3, 1))
+ p_ide_flatten = paddle.reshape(p_ide, [-1, emb_dim])
+ mask = t_conf > 0
+ mask = paddle.cast(mask, dtype="int64")
+ mask.stop_gradient = True
+ emb_mask = mask.max(1).flatten()
+ emb_mask_inds = paddle.nonzero(emb_mask > 0).flatten()
+ emb_mask_inds.stop_gradient = True
+        # use max(1) to decide the id, TODO: more reasonable strategy
+ t_ide_flatten = t_ide.max(1).flatten()
+ t_ide_flatten = paddle.cast(t_ide_flatten, dtype="int64")
+ valid_inds = paddle.nonzero(t_ide_flatten != -1).flatten()
+
+ if emb_mask_inds.numel() == 0 or valid_inds.numel() == 0:
+            # paddle.to_tensor([0]) would fail in the backward pass here,
+            # so keep the graph valid by multiplying a phony parameter by zero
+            loss_ide = self.phony * 0
+ else:
+ embedding = paddle.gather(p_ide_flatten, emb_mask_inds)
+ embedding = emb_scale * F.normalize(embedding)
+ logits = classifier(embedding)
+
+ ide_target = paddle.gather(t_ide_flatten, emb_mask_inds)
+
+ loss_ide = F.cross_entropy(
+ logits, ide_target, ignore_index=-1, reduction='mean')
+ loss_ide.stop_gradient = False
+
+ return loss_ide
+
+ def forward(self, ide_outs, targets, emb_scale, classifier):
+ loss_ides = []
+ for i, p_ide in enumerate(ide_outs):
+ t_conf = targets['tconf{}'.format(i)]
+ t_ide = targets['tide{}'.format(i)]
+
+ loss_ide = self.emb_loss(p_ide, t_conf, t_ide, emb_scale,
+ classifier)
+ loss_ides.append(loss_ide)
+ return loss_ides
+
+
+@register
+class JDELoss(nn.Layer):
+ def __init__(self):
+ super(JDELoss, self).__init__()
+
+ def forward(self, loss_confs, loss_boxes, loss_ides, loss_params_cls,
+ loss_params_reg, loss_params_ide, targets):
+ assert len(loss_confs) == len(loss_boxes) == len(loss_ides)
+ assert len(loss_params_cls) == len(loss_params_reg) == len(
+ loss_params_ide)
+ assert len(loss_confs) == len(loss_params_cls)
+
+ batchsize = targets['gt_bbox'].shape[0]
+ nTargets = paddle.nonzero(paddle.sum(targets['gt_bbox'], axis=2)).shape[
+ 0] / batchsize
+ nTargets = paddle.to_tensor(nTargets, dtype='float32')
+ nTargets.stop_gradient = True
+
+ jde_losses = []
+ for i, (loss_conf, loss_box, loss_ide, l_conf_p, l_box_p,
+ l_ide_p) in enumerate(
+ zip(loss_confs, loss_boxes, loss_ides, loss_params_cls,
+ loss_params_reg, loss_params_ide)):
+
+ jde_loss = l_conf_p(loss_conf) + l_box_p(loss_box) + l_ide_p(
+ loss_ide)
+ jde_losses.append(jde_loss)
+
+ loss_all = {
+ "loss_conf": sum(loss_confs),
+ "loss_box": sum(loss_boxes),
+ "loss_ide": sum(loss_ides),
+ "loss": sum(jde_losses),
+ "nTargets": nTargets,
+ }
+ return loss_all
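
The detection-confidence term in `JDEDetectionLoss.det_loss` above is a plain two-class (background/foreground) cross entropy over flattened anchors, with label -1 marking ignored anchors. A minimal sketch with toy shapes:

```
import paddle
import paddle.nn.functional as F

p_conf = paddle.randn([6, 2])                    # per-anchor bg/fg logits
t_conf = paddle.to_tensor([0, 1, -1, 1, 0, -1])  # -1 anchors are ignored
loss_conf = F.cross_entropy(
    p_conf, t_conf, ignore_index=-1, reduction='mean')
```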
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/keypoint_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/keypoint_loss.py
new file mode 100644
index 000000000..9c3c113db
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/keypoint_loss.py
@@ -0,0 +1,228 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from itertools import cycle, islice
+from collections import abc
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['HrHRNetLoss', 'KeyPointMSELoss']
+
+
+@register
+@serializable
+class KeyPointMSELoss(nn.Layer):
+ def __init__(self, use_target_weight=True, loss_scale=0.5):
+ """
+ KeyPointMSELoss layer
+
+        Args:
+            use_target_weight (bool): whether to use target weight
+            loss_scale (float): scale applied to each joint's MSE, 0.5 by default
+        """
+ super(KeyPointMSELoss, self).__init__()
+ self.criterion = nn.MSELoss(reduction='mean')
+ self.use_target_weight = use_target_weight
+ self.loss_scale = loss_scale
+
+ def forward(self, output, records):
+ target = records['target']
+ target_weight = records['target_weight']
+ batch_size = output.shape[0]
+ num_joints = output.shape[1]
+ heatmaps_pred = output.reshape(
+ (batch_size, num_joints, -1)).split(num_joints, 1)
+ heatmaps_gt = target.reshape(
+ (batch_size, num_joints, -1)).split(num_joints, 1)
+ loss = 0
+ for idx in range(num_joints):
+ heatmap_pred = heatmaps_pred[idx].squeeze()
+ heatmap_gt = heatmaps_gt[idx].squeeze()
+ if self.use_target_weight:
+ loss += self.loss_scale * self.criterion(
+ heatmap_pred.multiply(target_weight[:, idx]),
+ heatmap_gt.multiply(target_weight[:, idx]))
+ else:
+ loss += self.loss_scale * self.criterion(heatmap_pred,
+ heatmap_gt)
+ keypoint_losses = dict()
+ keypoint_losses['loss'] = loss / num_joints
+ return keypoint_losses
+
+
+@register
+@serializable
+class HrHRNetLoss(nn.Layer):
+ def __init__(self, num_joints, swahr):
+ """
+ HrHRNetLoss layer
+
+        Args:
+            num_joints (int): number of keypoints
+            swahr (bool): whether to use the scale-adaptive (SWAHR) heatmap loss
+        """
+ super(HrHRNetLoss, self).__init__()
+ if swahr:
+ self.heatmaploss = HeatMapSWAHRLoss(num_joints)
+ else:
+ self.heatmaploss = HeatMapLoss()
+ self.aeloss = AELoss()
+ self.ziploss = ZipLoss(
+ [self.heatmaploss, self.heatmaploss, self.aeloss])
+
+ def forward(self, inputs, records):
+ targets = []
+ targets.append([records['heatmap_gt1x'], records['mask_1x']])
+ targets.append([records['heatmap_gt2x'], records['mask_2x']])
+ targets.append(records['tagmap'])
+ keypoint_losses = dict()
+ loss = self.ziploss(inputs, targets)
+ keypoint_losses['heatmap_loss'] = loss[0] + loss[1]
+ keypoint_losses['pull_loss'] = loss[2][0]
+ keypoint_losses['push_loss'] = loss[2][1]
+ keypoint_losses['loss'] = recursive_sum(loss)
+ return keypoint_losses
+
+
+class HeatMapLoss(object):
+ def __init__(self, loss_factor=1.0):
+ super(HeatMapLoss, self).__init__()
+ self.loss_factor = loss_factor
+
+ def __call__(self, preds, targets):
+ heatmap, mask = targets
+ loss = ((preds - heatmap)**2 * mask.cast('float').unsqueeze(1))
+ loss = paddle.clip(loss, min=0, max=2).mean()
+ loss *= self.loss_factor
+ return loss
+
+
+class HeatMapSWAHRLoss(object):
+ def __init__(self, num_joints, loss_factor=1.0):
+ super(HeatMapSWAHRLoss, self).__init__()
+ self.loss_factor = loss_factor
+ self.num_joints = num_joints
+
+ def __call__(self, preds, targets):
+ heatmaps_gt, mask = targets
+ heatmaps_pred = preds[0]
+ scalemaps_pred = preds[1]
+
+ heatmaps_scaled_gt = paddle.where(heatmaps_gt > 0, 0.5 * heatmaps_gt * (
+ 1 + (1 +
+ (scalemaps_pred - 1.) * paddle.log(heatmaps_gt + 1e-10))**2),
+ heatmaps_gt)
+
+ regularizer_loss = paddle.mean(
+ paddle.pow((scalemaps_pred - 1.) * (heatmaps_gt > 0).astype(float),
+ 2))
+        omega = 0.01
+        # thres = 2**(-1/omega), threshold for positive weight
+        hm_weight = heatmaps_scaled_gt**(
+            omega
+        ) * paddle.abs(1 - heatmaps_pred) + paddle.abs(heatmaps_pred) * (
+            1 - heatmaps_scaled_gt**(omega))
+
+ loss = (((heatmaps_pred - heatmaps_scaled_gt)**2) *
+ mask.cast('float').unsqueeze(1)) * hm_weight
+ loss = loss.mean()
+ loss = self.loss_factor * (loss + 1.0 * regularizer_loss)
+ return loss
+
+
+class AELoss(object):
+ def __init__(self, pull_factor=0.001, push_factor=0.001):
+ super(AELoss, self).__init__()
+ self.pull_factor = pull_factor
+ self.push_factor = push_factor
+
+ def apply_single(self, pred, tagmap):
+ if tagmap.numpy()[:, :, 3].sum() == 0:
+ return (paddle.zeros([1]), paddle.zeros([1]))
+ nonzero = paddle.nonzero(tagmap[:, :, 3] > 0)
+ if nonzero.shape[0] == 0:
+ return (paddle.zeros([1]), paddle.zeros([1]))
+ p_inds = paddle.unique(nonzero[:, 0])
+ num_person = p_inds.shape[0]
+ if num_person == 0:
+ return (paddle.zeros([1]), paddle.zeros([1]))
+
+ pull = 0
+ tagpull_num = 0
+ embs_all = []
+ person_unvalid = 0
+ for person_idx in p_inds.numpy():
+ valid_single = tagmap[person_idx.item()]
+ validkpts = paddle.nonzero(valid_single[:, 3] > 0)
+ valid_single = paddle.index_select(valid_single, validkpts)
+ emb = paddle.gather_nd(pred, valid_single[:, :3])
+ if emb.shape[0] == 1:
+ person_unvalid += 1
+ mean = paddle.mean(emb, axis=0)
+ embs_all.append(mean)
+ pull += paddle.mean(paddle.pow(emb - mean, 2), axis=0)
+ tagpull_num += emb.shape[0]
+ pull /= max(num_person - person_unvalid, 1)
+ if num_person < 2:
+ return pull, paddle.zeros([1])
+
+ embs_all = paddle.stack(embs_all)
+ A = embs_all.expand([num_person, num_person])
+ B = A.transpose([1, 0])
+ diff = A - B
+
+ diff = paddle.pow(diff, 2)
+ push = paddle.exp(-diff)
+ push = paddle.sum(push) - num_person
+
+ push /= 2 * num_person * (num_person - 1)
+ return pull, push
+
+ def __call__(self, preds, tagmaps):
+ bs = preds.shape[0]
+ losses = [
+ self.apply_single(preds[i:i + 1].squeeze(),
+ tagmaps[i:i + 1].squeeze()) for i in range(bs)
+ ]
+ pull = self.pull_factor * sum(loss[0] for loss in losses) / len(losses)
+ push = self.push_factor * sum(loss[1] for loss in losses) / len(losses)
+ return pull, push
+
+
+class ZipLoss(object):
+ def __init__(self, loss_funcs):
+ super(ZipLoss, self).__init__()
+ self.loss_funcs = loss_funcs
+
+ def __call__(self, inputs, targets):
+ assert len(self.loss_funcs) == len(targets) >= len(inputs)
+
+ def zip_repeat(*args):
+ longest = max(map(len, args))
+ filled = [islice(cycle(x), longest) for x in args]
+ return zip(*filled)
+
+ return tuple(
+ fn(x, y)
+ for x, y, fn in zip_repeat(inputs, targets, self.loss_funcs))
+
+
+def recursive_sum(inputs):
+ if isinstance(inputs, abc.Sequence):
+ return sum([recursive_sum(x) for x in inputs])
+ return inputs
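
`ZipLoss.zip_repeat` above cycles the shorter argument lists so every loss function receives an input; this is how the two heatmap outputs can reuse one `HeatMapLoss` while `AELoss` consumes the tag map. A self-contained illustration:

```
from itertools import cycle, islice

def zip_repeat(*args):
    longest = max(map(len, args))
    filled = [islice(cycle(x), longest) for x in args]
    return zip(*filled)

print(list(zip_repeat([1, 2], ['a', 'b', 'c'])))
# [(1, 'a'), (2, 'b'), (1, 'c')] -- the shorter list wraps around
```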
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/solov2_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/solov2_loss.py
new file mode 100644
index 000000000..ef97a7707
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/solov2_loss.py
@@ -0,0 +1,101 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['SOLOv2Loss']
+
+
+@register
+@serializable
+class SOLOv2Loss(object):
+ """
+ SOLOv2Loss
+ Args:
+ ins_loss_weight (float): Weight of instance loss.
+ focal_loss_gamma (float): Gamma parameter for focal loss.
+ focal_loss_alpha (float): Alpha parameter for focal loss.
+ """
+
+ def __init__(self,
+ ins_loss_weight=3.0,
+ focal_loss_gamma=2.0,
+ focal_loss_alpha=0.25):
+ self.ins_loss_weight = ins_loss_weight
+ self.focal_loss_gamma = focal_loss_gamma
+ self.focal_loss_alpha = focal_loss_alpha
+
+ def _dice_loss(self, input, target):
+ input = paddle.reshape(input, shape=(paddle.shape(input)[0], -1))
+ target = paddle.reshape(target, shape=(paddle.shape(target)[0], -1))
+ a = paddle.sum(input * target, axis=1)
+ b = paddle.sum(input * input, axis=1) + 0.001
+ c = paddle.sum(target * target, axis=1) + 0.001
+ d = (2 * a) / (b + c)
+ return 1 - d
+
+ def __call__(self, ins_pred_list, ins_label_list, cate_preds, cate_labels,
+ num_ins):
+ """
+ Get loss of network of SOLOv2.
+ Args:
+ ins_pred_list (list): Variable list of instance branch output.
+            ins_label_list (list): List of instance labels per batch.
+            cate_preds (list): Concat Variable list of category branch output.
+            cate_labels (list): Concat list of category labels per batch.
+ num_ins (int): Number of positive samples in a mini-batch.
+ Returns:
+ loss_ins (Variable): The instance loss Variable of SOLOv2 network.
+ loss_cate (Variable): The category loss Variable of SOLOv2 network.
+ """
+
+        # 1. Use dice_loss to calculate instance loss
+ loss_ins = []
+ total_weights = paddle.zeros(shape=[1], dtype='float32')
+ for input, target in zip(ins_pred_list, ins_label_list):
+ if input is None:
+ continue
+ target = paddle.cast(target, 'float32')
+ target = paddle.reshape(
+ target,
+ shape=[-1, paddle.shape(input)[-2], paddle.shape(input)[-1]])
+ weights = paddle.cast(
+ paddle.sum(target, axis=[1, 2]) > 0, 'float32')
+ input = F.sigmoid(input)
+ dice_out = paddle.multiply(self._dice_loss(input, target), weights)
+ total_weights += paddle.sum(weights)
+ loss_ins.append(dice_out)
+ loss_ins = paddle.sum(paddle.concat(loss_ins)) / total_weights
+ loss_ins = loss_ins * self.ins_loss_weight
+
+        # 2. Use sigmoid_focal_loss to calculate category loss
+ # expand onehot labels
+ num_classes = cate_preds.shape[-1]
+ cate_labels_bin = F.one_hot(cate_labels, num_classes=num_classes + 1)
+ cate_labels_bin = cate_labels_bin[:, 1:]
+
+ loss_cate = F.sigmoid_focal_loss(
+ cate_preds,
+ label=cate_labels_bin,
+ normalizer=num_ins + 1.,
+ gamma=self.focal_loss_gamma,
+ alpha=self.focal_loss_alpha)
+
+ return loss_ins, loss_cate
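
A worked example of `_dice_loss` above on a tiny flattened mask pair: the loss is one minus the dice coefficient 2a / (b + c), where a is the overlap and b, c are the (squared) mask sums. Toy values only:

```
import paddle

pred = paddle.to_tensor([[1.0, 1.0, 0.0, 0.0]])  # flattened predicted mask
gt = paddle.to_tensor([[1.0, 0.0, 0.0, 0.0]])    # flattened ground-truth mask

a = paddle.sum(pred * gt, axis=1)            # 1.0
b = paddle.sum(pred * pred, axis=1) + 0.001  # 2.001
c = paddle.sum(gt * gt, axis=1) + 0.001      # 1.001
print(1 - (2 * a) / (b + c))                 # ~0.334
```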
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/sparsercnn_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/sparsercnn_loss.py
new file mode 100644
index 000000000..2d36b21a2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/sparsercnn_loss.py
@@ -0,0 +1,425 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/PeizeSun/SparseR-CNN/blob/main/projects/SparseRCNN/sparsercnn/loss.py
+The copyright of PeizeSun/SparseR-CNN is as follows:
+MIT License [see LICENSE for details]
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from scipy.optimize import linear_sum_assignment
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.metric import accuracy
+from ppdet.core.workspace import register
+from ppdet.modeling.losses.iou_loss import GIoULoss
+
+__all__ = ["SparseRCNNLoss"]
+
+
+@register
+class SparseRCNNLoss(nn.Layer):
+ """ This class computes the loss for SparseRCNN.
+ The process happens in two steps:
+ 1) we compute hungarian assignment between ground truth boxes and the outputs of the model
+ 2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
+ """
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ losses,
+ focal_loss_alpha,
+ focal_loss_gamma,
+ num_classes=80,
+ class_weight=2.,
+ l1_weight=5.,
+ giou_weight=2.):
+ """ Create the criterion.
+ Parameters:
+ num_classes: number of object categories, omitting the special no-object category
+ weight_dict: dict containing as key the names of the losses and as values their relative weight.
+ losses: list of all the losses to be applied. See get_loss for list of available losses.
+ matcher: module able to compute a matching between targets and proposals
+ """
+ super().__init__()
+ self.num_classes = num_classes
+ weight_dict = {
+ "loss_ce": class_weight,
+ "loss_bbox": l1_weight,
+ "loss_giou": giou_weight
+ }
+ self.weight_dict = weight_dict
+ self.losses = losses
+ self.giou_loss = GIoULoss(reduction="sum")
+
+ self.focal_loss_alpha = focal_loss_alpha
+ self.focal_loss_gamma = focal_loss_gamma
+
+ self.matcher = HungarianMatcher(focal_loss_alpha, focal_loss_gamma,
+ class_weight, l1_weight, giou_weight)
+
+ def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
+ """Classification loss (NLL)
+ targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
+ """
+ assert 'pred_logits' in outputs
+ src_logits = outputs['pred_logits']
+
+ idx = self._get_src_permutation_idx(indices)
+ target_classes_o = paddle.concat([
+ paddle.gather(
+ t["labels"], J, axis=0) for t, (_, J) in zip(targets, indices)
+ ])
+ target_classes = paddle.full(
+ src_logits.shape[:2], self.num_classes, dtype="int32")
+ for i, ind in enumerate(zip(idx[0], idx[1])):
+ target_classes[int(ind[0]), int(ind[1])] = target_classes_o[i]
+ target_classes.stop_gradient = True
+
+ src_logits = src_logits.flatten(start_axis=0, stop_axis=1)
+
+ # prepare one_hot target.
+ target_classes = target_classes.flatten(start_axis=0, stop_axis=1)
+ class_ids = paddle.arange(0, self.num_classes)
+ labels = (target_classes.unsqueeze(-1) == class_ids).astype("float32")
+ labels.stop_gradient = True
+
+ # comp focal loss.
+ class_loss = sigmoid_focal_loss(
+ src_logits,
+ labels,
+ alpha=self.focal_loss_alpha,
+ gamma=self.focal_loss_gamma,
+ reduction="sum", ) / num_boxes
+ losses = {'loss_ce': class_loss}
+
+ if log:
+ label_acc = target_classes_o.unsqueeze(-1)
+ src_idx = [src for (src, _) in indices]
+
+ pred_list = []
+ for i in range(outputs["pred_logits"].shape[0]):
+ pred_list.append(
+ paddle.gather(
+ outputs["pred_logits"][i], src_idx[i], axis=0))
+
+ pred = F.sigmoid(paddle.concat(pred_list, axis=0))
+ acc = accuracy(pred, label_acc.astype("int64"))
+ losses["acc"] = acc
+
+ return losses
+
+ def loss_boxes(self, outputs, targets, indices, num_boxes):
+ """Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
+ targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
+ The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size.
+ """
+ assert 'pred_boxes' in outputs # [batch_size, num_proposals, 4]
+ src_idx = [src for (src, _) in indices]
+ src_boxes_list = []
+
+ for i in range(outputs["pred_boxes"].shape[0]):
+ src_boxes_list.append(
+ paddle.gather(
+ outputs["pred_boxes"][i], src_idx[i], axis=0))
+
+ src_boxes = paddle.concat(src_boxes_list, axis=0)
+
+ target_boxes = paddle.concat(
+ [
+ paddle.gather(
+ t['boxes'], I, axis=0)
+ for t, (_, I) in zip(targets, indices)
+ ],
+ axis=0)
+ target_boxes.stop_gradient = True
+ losses = {}
+
+ losses['loss_giou'] = self.giou_loss(src_boxes,
+ target_boxes) / num_boxes
+
+ image_size = paddle.concat([v["img_whwh_tgt"] for v in targets])
+ src_boxes_ = src_boxes / image_size
+ target_boxes_ = target_boxes / image_size
+
+ loss_bbox = F.l1_loss(src_boxes_, target_boxes_, reduction='sum')
+ losses['loss_bbox'] = loss_bbox / num_boxes
+
+ return losses
+
+ def _get_src_permutation_idx(self, indices):
+ # permute predictions following indices
+ batch_idx = paddle.concat(
+ [paddle.full_like(src, i) for i, (src, _) in enumerate(indices)])
+ src_idx = paddle.concat([src for (src, _) in indices])
+ return batch_idx, src_idx
+
+ def _get_tgt_permutation_idx(self, indices):
+ # permute targets following indices
+ batch_idx = paddle.concat(
+ [paddle.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
+ tgt_idx = paddle.concat([tgt for (_, tgt) in indices])
+ return batch_idx, tgt_idx
+
+ def get_loss(self, loss, outputs, targets, indices, num_boxes, **kwargs):
+ loss_map = {
+ 'labels': self.loss_labels,
+ 'boxes': self.loss_boxes,
+ }
+ assert loss in loss_map, f'do you really want to compute {loss} loss?'
+ return loss_map[loss](outputs, targets, indices, num_boxes, **kwargs)
+
+ def forward(self, outputs, targets):
+ """ This performs the loss computation.
+ Parameters:
+ outputs: dict of tensors, see the output specification of the model for the format
+ targets: list of dicts, such that len(targets) == batch_size.
+ The expected keys in each dict depends on the losses applied, see each loss' doc
+ """
+ outputs_without_aux = {
+ k: v
+ for k, v in outputs.items() if k != 'aux_outputs'
+ }
+
+ # Retrieve the matching between the outputs of the last layer and the targets
+ indices = self.matcher(outputs_without_aux, targets)
+
+        # Compute the average number of target boxes across all nodes, for normalization purposes
+ num_boxes = sum(len(t["labels"]) for t in targets)
+ num_boxes = paddle.to_tensor(
+ [num_boxes],
+ dtype="float32",
+ place=next(iter(outputs.values())).place)
+
+ # Compute all the requested losses
+ losses = {}
+ for loss in self.losses:
+ losses.update(
+ self.get_loss(loss, outputs, targets, indices, num_boxes))
+
+ # In case of auxiliary losses, we repeat this process with the output of each intermediate layer.
+ if 'aux_outputs' in outputs:
+ for i, aux_outputs in enumerate(outputs['aux_outputs']):
+ indices = self.matcher(aux_outputs, targets)
+ for loss in self.losses:
+ kwargs = {}
+ if loss == 'labels':
+ # Logging is enabled only for the last layer
+ kwargs = {'log': False}
+ l_dict = self.get_loss(loss, aux_outputs, targets, indices,
+ num_boxes, **kwargs)
+
+ w_dict = {}
+ for k in l_dict.keys():
+ if k in self.weight_dict:
+ w_dict[k + f'_{i}'] = l_dict[k] * self.weight_dict[
+ k]
+ else:
+ w_dict[k + f'_{i}'] = l_dict[k]
+ losses.update(w_dict)
+
+ return losses
+
+
+class HungarianMatcher(nn.Layer):
+ """This class computes an assignment between the targets and the predictions of the network
+ For efficiency reasons, the targets don't include the no_object. Because of this, in general,
+ there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
+ while the others are un-matched (and thus treated as non-objects).
+ """
+
+ def __init__(self,
+ focal_loss_alpha,
+ focal_loss_gamma,
+ cost_class: float=1,
+ cost_bbox: float=1,
+ cost_giou: float=1):
+ """Creates the matcher
+ Params:
+ cost_class: This is the relative weight of the classification error in the matching cost
+ cost_bbox: This is the relative weight of the L1 error of the bounding box coordinates in the matching cost
+ cost_giou: This is the relative weight of the giou loss of the bounding box in the matching cost
+ """
+ super().__init__()
+ self.cost_class = cost_class
+ self.cost_bbox = cost_bbox
+ self.cost_giou = cost_giou
+ self.focal_loss_alpha = focal_loss_alpha
+ self.focal_loss_gamma = focal_loss_gamma
+        assert cost_class != 0 or cost_bbox != 0 or cost_giou != 0, "all costs can't be 0"
+
+ @paddle.no_grad()
+ def forward(self, outputs, targets):
+ """ Performs the matching
+ Args:
+ outputs: This is a dict that contains at least these entries:
+ "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
+ "pred_boxes": Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates
+ eg. outputs = {"pred_logits": pred_logits, "pred_boxes": pred_boxes}
+ targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:
+ "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth
+ objects in the target) containing the class labels
+ "boxes": Tensor of dim [num_target_boxes, 4] containing the target box coordinates
+ eg. targets = [{"labels":labels, "boxes": boxes}, ...,{"labels":labels, "boxes": boxes}]
+ Returns:
+ A list of size batch_size, containing tuples of (index_i, index_j) where:
+ - index_i is the indices of the selected predictions (in order)
+ - index_j is the indices of the corresponding selected targets (in order)
+ For each batch element, it holds:
+ len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
+ """
+ bs, num_queries = outputs["pred_logits"].shape[:2]
+
+ # We flatten to compute the cost matrices in a batch
+ out_prob = F.sigmoid(outputs["pred_logits"].flatten(
+ start_axis=0, stop_axis=1))
+ out_bbox = outputs["pred_boxes"].flatten(start_axis=0, stop_axis=1)
+
+ # Also concat the target labels and boxes
+ tgt_ids = paddle.concat([v["labels"] for v in targets])
+ assert (tgt_ids > -1).all()
+ tgt_bbox = paddle.concat([v["boxes"] for v in targets])
+
+ # Compute the classification cost. Contrary to the loss, we don't use the NLL,
+ # but approximate it in 1 - proba[target class].
+        # The 1 is a constant that doesn't change the matching, it can be omitted.
+
+ # Compute the classification cost.
+ alpha = self.focal_loss_alpha
+ gamma = self.focal_loss_gamma
+
+ neg_cost_class = (1 - alpha) * (out_prob**gamma) * (-(
+ 1 - out_prob + 1e-8).log())
+ pos_cost_class = alpha * ((1 - out_prob)
+ **gamma) * (-(out_prob + 1e-8).log())
+
+ cost_class = paddle.gather(
+ pos_cost_class, tgt_ids, axis=1) - paddle.gather(
+ neg_cost_class, tgt_ids, axis=1)
+
+ # Compute the L1 cost between boxes
+ image_size_out = paddle.concat(
+ [v["img_whwh"].unsqueeze(0) for v in targets])
+ image_size_out = image_size_out.unsqueeze(1).tile(
+ [1, num_queries, 1]).flatten(
+ start_axis=0, stop_axis=1)
+ image_size_tgt = paddle.concat([v["img_whwh_tgt"] for v in targets])
+
+ out_bbox_ = out_bbox / image_size_out
+ tgt_bbox_ = tgt_bbox / image_size_tgt
+ cost_bbox = F.l1_loss(
+ out_bbox_.unsqueeze(-2), tgt_bbox_,
+ reduction='none').sum(-1) # [batch_size * num_queries, num_tgts]
+
+        # Compute the giou cost between boxes
+ cost_giou = -get_bboxes_giou(out_bbox, tgt_bbox)
+
+ # Final cost matrix
+ C = self.cost_bbox * cost_bbox + self.cost_class * cost_class + self.cost_giou * cost_giou
+ C = C.reshape([bs, num_queries, -1])
+
+ sizes = [len(v["boxes"]) for v in targets]
+
+ indices = [
+ linear_sum_assignment(c[i].numpy())
+ for i, c in enumerate(C.split(sizes, -1))
+ ]
+ return [(paddle.to_tensor(
+ i, dtype="int32"), paddle.to_tensor(
+ j, dtype="int32")) for i, j in indices]
+
+
+def box_area(boxes):
+ assert (boxes[:, 2:] >= boxes[:, :2]).all()
+ wh = boxes[:, 2:] - boxes[:, :2]
+ return wh[:, 0] * wh[:, 1]
+
+
+def boxes_iou(boxes1, boxes2):
+ '''
+ Compute iou
+
+ Args:
+ boxes1 (paddle.tensor) shape (N, 4)
+ boxes2 (paddle.tensor) shape (M, 4)
+
+ Return:
+        iou, union (paddle.Tensor): each with shape (N, M)
+ '''
+ area1 = box_area(boxes1)
+ area2 = box_area(boxes2)
+
+ lt = paddle.maximum(boxes1.unsqueeze(-2)[:, :, :2], boxes2[:, :2])
+ rb = paddle.minimum(boxes1.unsqueeze(-2)[:, :, 2:], boxes2[:, 2:])
+
+ wh = (rb - lt).astype("float32").clip(min=1e-9)
+ inter = wh[:, :, 0] * wh[:, :, 1]
+
+ union = area1.unsqueeze(-1) + area2 - inter + 1e-9
+
+ iou = inter / union
+ return iou, union
+
+
+def get_bboxes_giou(boxes1, boxes2, eps=1e-9):
+ """calculate the ious of boxes1 and boxes2
+
+ Args:
+ boxes1 (Tensor): shape [N, 4]
+ boxes2 (Tensor): shape [M, 4]
+ eps (float): epsilon to avoid divide by zero
+
+ Return:
+ ious (Tensor): ious of boxes1 and boxes2, with the shape [N, M]
+ """
+ assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
+ assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
+
+ iou, union = boxes_iou(boxes1, boxes2)
+
+ lt = paddle.minimum(boxes1.unsqueeze(-2)[:, :, :2], boxes2[:, :2])
+ rb = paddle.maximum(boxes1.unsqueeze(-2)[:, :, 2:], boxes2[:, 2:])
+
+ wh = (rb - lt).astype("float32").clip(min=eps)
+ enclose_area = wh[:, :, 0] * wh[:, :, 1]
+
+ giou = iou - (enclose_area - union) / enclose_area
+
+ return giou
+
+
+def sigmoid_focal_loss(inputs, targets, alpha, gamma, reduction="sum"):
+
+ assert reduction in ["sum", "mean"
+ ], f'do not support this {reduction} reduction?'
+
+ p = F.sigmoid(inputs)
+ ce_loss = F.binary_cross_entropy_with_logits(
+ inputs, targets, reduction="none")
+ p_t = p * targets + (1 - p) * (1 - targets)
+ loss = ce_loss * ((1 - p_t)**gamma)
+
+ if alpha >= 0:
+ alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
+ loss = alpha_t * loss
+
+ if reduction == "mean":
+ loss = loss.mean()
+ elif reduction == "sum":
+ loss = loss.sum()
+
+ return loss
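
The assignment step inside `HungarianMatcher.forward` hands the per-image cost matrix to scipy's `linear_sum_assignment`, which returns the one-to-one pairing with the lowest total cost. A toy 3x2 cost matrix (3 proposals, 2 targets):

```
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.9, 0.1],
                 [0.4, 0.5],
                 [0.2, 0.8]])
rows, cols = linear_sum_assignment(cost)
print(rows, cols)  # [0 2] [1 0]: proposal 0 -> target 1, proposal 2 -> target 0
```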
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ssd_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ssd_loss.py
new file mode 100644
index 000000000..62aecc1f3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/ssd_loss.py
@@ -0,0 +1,169 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+from ..ops import iou_similarity
+from ..bbox_utils import bbox2delta
+
+__all__ = ['SSDLoss']
+
+
+@register
+class SSDLoss(nn.Layer):
+ """
+ SSDLoss
+
+ Args:
+        overlap_threshold (float32, optional): IoU threshold used to decide
+            whether a prior box is positive or negative, 0.5 by default.
+ neg_pos_ratio (float): The ratio of negative samples / positive samples.
+ loc_loss_weight (float): The weight of loc_loss.
+ conf_loss_weight (float): The weight of conf_loss.
+ prior_box_var (list): Variances corresponding to prior box coord, [0.1,
+ 0.1, 0.2, 0.2] by default.
+ """
+
+ def __init__(self,
+ overlap_threshold=0.5,
+ neg_pos_ratio=3.0,
+ loc_loss_weight=1.0,
+ conf_loss_weight=1.0,
+ prior_box_var=[0.1, 0.1, 0.2, 0.2]):
+ super(SSDLoss, self).__init__()
+ self.overlap_threshold = overlap_threshold
+ self.neg_pos_ratio = neg_pos_ratio
+ self.loc_loss_weight = loc_loss_weight
+ self.conf_loss_weight = conf_loss_weight
+ self.prior_box_var = [1. / a for a in prior_box_var]
+
+ def _bipartite_match_for_batch(self, gt_bbox, gt_label, prior_boxes,
+ bg_index):
+ """
+ Args:
+ gt_bbox (Tensor): [B, N, 4]
+ gt_label (Tensor): [B, N, 1]
+ prior_boxes (Tensor): [A, 4]
+ bg_index (int): Background class index
+ """
+ batch_size, num_priors = gt_bbox.shape[0], prior_boxes.shape[0]
+ ious = iou_similarity(gt_bbox.reshape((-1, 4)), prior_boxes).reshape(
+ (batch_size, -1, num_priors))
+
+ # For each prior box, get the max IoU of all GTs.
+ prior_max_iou, prior_argmax_iou = ious.max(axis=1), ious.argmax(axis=1)
+ # For each GT, get the max IoU of all prior boxes.
+ gt_max_iou, gt_argmax_iou = ious.max(axis=2), ious.argmax(axis=2)
+
+ # Gather target bbox and label according to 'prior_argmax_iou' index.
+ batch_ind = paddle.arange(end=batch_size, dtype='int64').unsqueeze(-1)
+ prior_argmax_iou = paddle.stack(
+ [batch_ind.tile([1, num_priors]), prior_argmax_iou], axis=-1)
+ targets_bbox = paddle.gather_nd(gt_bbox, prior_argmax_iou)
+ targets_label = paddle.gather_nd(gt_label, prior_argmax_iou)
+ # Assign negative
+ bg_index_tensor = paddle.full([batch_size, num_priors, 1], bg_index,
+ 'int64')
+ targets_label = paddle.where(
+ prior_max_iou.unsqueeze(-1) < self.overlap_threshold,
+ bg_index_tensor, targets_label)
+
+ # Ensure each GT can match the max IoU prior box.
+ batch_ind = (batch_ind * num_priors + gt_argmax_iou).flatten()
+ targets_bbox = paddle.scatter(
+ targets_bbox.reshape([-1, 4]), batch_ind,
+ gt_bbox.reshape([-1, 4])).reshape([batch_size, -1, 4])
+ targets_label = paddle.scatter(
+ targets_label.reshape([-1, 1]), batch_ind,
+ gt_label.reshape([-1, 1])).reshape([batch_size, -1, 1])
+ targets_label[:, :1] = bg_index
+
+ # Encode box
+ prior_boxes = prior_boxes.unsqueeze(0).tile([batch_size, 1, 1])
+ targets_bbox = bbox2delta(
+ prior_boxes.reshape([-1, 4]),
+ targets_bbox.reshape([-1, 4]), self.prior_box_var)
+ targets_bbox = targets_bbox.reshape([batch_size, -1, 4])
+
+ return targets_bbox, targets_label
+
+ def _mine_hard_example(self,
+ conf_loss,
+ targets_label,
+ bg_index,
+ mine_neg_ratio=0.01):
+ pos = (targets_label != bg_index).astype(conf_loss.dtype)
+ num_pos = pos.sum(axis=1, keepdim=True)
+ neg = (targets_label == bg_index).astype(conf_loss.dtype)
+
+ conf_loss = conf_loss.detach() * neg
+ loss_idx = conf_loss.argsort(axis=1, descending=True)
+ idx_rank = loss_idx.argsort(axis=1)
+ num_negs = []
+ for i in range(conf_loss.shape[0]):
+ cur_num_pos = num_pos[i]
+ num_neg = paddle.clip(
+ cur_num_pos * self.neg_pos_ratio, max=pos.shape[1])
+ num_neg = num_neg if num_neg > 0 else paddle.to_tensor(
+ [pos.shape[1] * mine_neg_ratio])
+ num_negs.append(num_neg)
+ num_negs = paddle.stack(num_negs).expand_as(idx_rank)
+ neg_mask = (idx_rank < num_negs).astype(conf_loss.dtype)
+
+ return (neg_mask + pos).astype('bool')
+
+ def forward(self, boxes, scores, gt_bbox, gt_label, prior_boxes):
+ boxes = paddle.concat(boxes, axis=1)
+ scores = paddle.concat(scores, axis=1)
+ gt_label = gt_label.unsqueeze(-1).astype('int64')
+ prior_boxes = paddle.concat(prior_boxes, axis=0)
+ bg_index = scores.shape[-1] - 1
+
+ # Match bbox and get targets.
+ targets_bbox, targets_label = \
+ self._bipartite_match_for_batch(gt_bbox, gt_label, prior_boxes, bg_index)
+ targets_bbox.stop_gradient = True
+ targets_label.stop_gradient = True
+
+ # Compute regression loss.
+ # Select positive samples.
+ bbox_mask = paddle.tile(targets_label != bg_index, [1, 1, 4])
+ if bbox_mask.astype(boxes.dtype).sum() > 0:
+ location = paddle.masked_select(boxes, bbox_mask)
+ targets_bbox = paddle.masked_select(targets_bbox, bbox_mask)
+ loc_loss = F.smooth_l1_loss(location, targets_bbox, reduction='sum')
+ loc_loss = loc_loss * self.loc_loss_weight
+ else:
+ loc_loss = paddle.zeros([1])
+
+ # Compute confidence loss.
+ conf_loss = F.cross_entropy(scores, targets_label, reduction="none")
+ # Mining hard examples.
+ label_mask = self._mine_hard_example(
+ conf_loss.squeeze(-1), targets_label.squeeze(-1), bg_index)
+ conf_loss = paddle.masked_select(conf_loss, label_mask.unsqueeze(-1))
+ conf_loss = conf_loss.sum() * self.conf_loss_weight
+
+ # Compute overall weighted loss.
+ normalizer = (targets_label != bg_index).astype('float32').sum().clip(
+ min=1)
+ loss = (conf_loss + loc_loss) / normalizer
+
+ return loss
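+
+
+if __name__ == '__main__':
+    # Hedged smoke test, not part of the original module: drives SSDLoss with
+    # random, well-formed tensors so the shapes used above are easy to see.
+    # All sizes here (2 images, 8 GT boxes, 100 priors, 21 classes) are
+    # illustrative assumptions, not values mandated by the model. Because this
+    # file uses relative imports, run it as a module, e.g.
+    # `python -m ppdet.modeling.losses.ssd_loss`.
+    paddle.seed(0)
+    num_classes = 21  # 20 foreground classes + 1 background slot
+
+    def _rand_xyxy(*shape):
+        # Random boxes in (x1, y1, x2, y2) form with positive width/height.
+        xy = paddle.rand(list(shape) + [2]) * 0.5
+        wh = paddle.rand(list(shape) + [2]) * 0.4 + 0.05
+        return paddle.concat([xy, xy + wh], axis=-1)
+
+    loss_fn = SSDLoss()
+    boxes = [paddle.randn([2, 100, 4])]             # predicted box deltas
+    scores = [paddle.randn([2, 100, num_classes])]  # classification logits
+    gt_bbox = _rand_xyxy(2, 8)                      # ground-truth boxes
+    gt_label = paddle.randint(0, num_classes - 1, [2, 8])
+    priors = [_rand_xyxy(100)]                      # prior boxes, [A, 4]
+    print('ssd loss:', loss_fn(boxes, scores, gt_bbox, gt_label, priors))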
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/varifocal_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/varifocal_loss.py
new file mode 100644
index 000000000..42d18a659
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/varifocal_loss.py
@@ -0,0 +1,152 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/losses/varifocal_loss.py
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling import ops
+
+__all__ = ['VarifocalLoss']
+
+
+def varifocal_loss(pred,
+ target,
+ alpha=0.75,
+ gamma=2.0,
+ iou_weighted=True,
+ use_sigmoid=True):
+ """`Varifocal Loss `_
+
+ Args:
+ pred (Tensor): The prediction with shape (N, C), C is the
+ number of classes
+ target (Tensor): The learning target of the iou-aware
+ classification score with shape (N, C), C is the number of classes.
+ alpha (float, optional): A balance factor for the negative part of
+ Varifocal Loss, which is different from the alpha of Focal Loss.
+ Defaults to 0.75.
+ gamma (float, optional): The gamma for calculating the modulating
+ factor. Defaults to 2.0.
+        iou_weighted (bool, optional): Whether to weight the loss of the
+            positive example with the iou target. Defaults to True.
+        use_sigmoid (bool, optional): Whether `pred` holds raw logits to be
+            passed through sigmoid before weighting. Defaults to True.
+    """
+ # pred and target should be of the same size
+ assert pred.shape == target.shape
+ if use_sigmoid:
+ pred_new = F.sigmoid(pred)
+ else:
+ pred_new = pred
+ target = target.cast(pred.dtype)
+ if iou_weighted:
+ focal_weight = target * (target > 0.0).cast('float32') + \
+ alpha * (pred_new - target).abs().pow(gamma) * \
+ (target <= 0.0).cast('float32')
+ else:
+ focal_weight = (target > 0.0).cast('float32') + \
+ alpha * (pred_new - target).abs().pow(gamma) * \
+ (target <= 0.0).cast('float32')
+
+ if use_sigmoid:
+ loss = F.binary_cross_entropy_with_logits(
+ pred, target, reduction='none') * focal_weight
+ else:
+ loss = F.binary_cross_entropy(
+ pred, target, reduction='none') * focal_weight
+ loss = loss.sum(axis=1)
+ return loss
+
+
+@register
+@serializable
+class VarifocalLoss(nn.Layer):
+ def __init__(self,
+ use_sigmoid=True,
+ alpha=0.75,
+ gamma=2.0,
+ iou_weighted=True,
+ reduction='mean',
+ loss_weight=1.0):
+ """`Varifocal Loss `_
+
+ Args:
+ use_sigmoid (bool, optional): Whether the prediction is
+ used for sigmoid or softmax. Defaults to True.
+ alpha (float, optional): A balance factor for the negative part of
+ Varifocal Loss, which is different from the alpha of Focal
+ Loss. Defaults to 0.75.
+ gamma (float, optional): The gamma for calculating the modulating
+ factor. Defaults to 2.0.
+ iou_weighted (bool, optional): Whether to weight the loss of the
+ positive examples with the iou target. Defaults to True.
+ reduction (str, optional): The method used to reduce the loss into
+ a scalar. Defaults to 'mean'. Options are "none", "mean" and
+ "sum".
+ loss_weight (float, optional): Weight of loss. Defaults to 1.0.
+ """
+ super(VarifocalLoss, self).__init__()
+ assert alpha >= 0.0
+ self.use_sigmoid = use_sigmoid
+ self.alpha = alpha
+ self.gamma = gamma
+ self.iou_weighted = iou_weighted
+ self.reduction = reduction
+ self.loss_weight = loss_weight
+
+ def forward(self, pred, target, weight=None, avg_factor=None):
+ """Forward function.
+
+ Args:
+ pred (Tensor): The prediction.
+ target (Tensor): The learning target of the prediction.
+ weight (Tensor, optional): The weight of loss for each
+ prediction. Defaults to None.
+ avg_factor (int, optional): Average factor that is used to average
+ the loss. Defaults to None.
+ Returns:
+ Tensor: The calculated loss
+ """
+ loss = self.loss_weight * varifocal_loss(
+ pred,
+ target,
+ alpha=self.alpha,
+ gamma=self.gamma,
+ iou_weighted=self.iou_weighted,
+ use_sigmoid=self.use_sigmoid)
+
+ if weight is not None:
+ loss = loss * weight
+ if avg_factor is None:
+ if self.reduction == 'none':
+ return loss
+ elif self.reduction == 'mean':
+ return loss.mean()
+ elif self.reduction == 'sum':
+ return loss.sum()
+ else:
+ # if reduction is mean, then average the loss by avg_factor
+ if self.reduction == 'mean':
+ loss = loss.sum() / avg_factor
+ # if reduction is 'none', then do nothing, otherwise raise an error
+ elif self.reduction != 'none':
+ raise ValueError(
+                    'avg_factor cannot be used with reduction="sum"')
+ return loss
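+
+
+if __name__ == '__main__':
+    # Hedged example, not part of the original module: random logits against
+    # an IoU-aware target where a single entry is positive. The shapes
+    # (4 samples, 80 classes) and the 0.7 IoU score are illustrative only.
+    paddle.seed(0)
+    pred = paddle.randn([4, 80])   # raw logits, matching use_sigmoid=True
+    target = paddle.zeros([4, 80])
+    target[0, 3] = 0.7             # one positive with IoU-aware score 0.7
+    loss_fn = VarifocalLoss(reduction='mean')
+    print('varifocal loss:', loss_fn(pred, target))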
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/yolo_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/yolo_loss.py
new file mode 100644
index 000000000..657959cd7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/losses/yolo_loss.py
@@ -0,0 +1,206 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+
+from ..bbox_utils import decode_yolo, xywh2xyxy, iou_similarity
+
+__all__ = ['YOLOv3Loss']
+
+
+def bbox_transform(pbox, anchor, downsample):
+ pbox = decode_yolo(pbox, anchor, downsample)
+ pbox = xywh2xyxy(pbox)
+ return pbox
+
+
+@register
+class YOLOv3Loss(nn.Layer):
+
+ __inject__ = ['iou_loss', 'iou_aware_loss']
+ __shared__ = ['num_classes']
+
+ def __init__(self,
+ num_classes=80,
+ ignore_thresh=0.7,
+ label_smooth=False,
+ downsample=[32, 16, 8],
+ scale_x_y=1.,
+ iou_loss=None,
+ iou_aware_loss=None):
+ """
+ YOLOv3Loss layer
+
+ Args:
+            num_classes (int): number of foreground classes
+ ignore_thresh (float): threshold to ignore confidence loss
+ label_smooth (bool): whether to use label smoothing
+ downsample (list): downsample ratio for each detection block
+ scale_x_y (float): scale_x_y factor
+ iou_loss (object): IoULoss instance
+ iou_aware_loss (object): IouAwareLoss instance
+ """
+ super(YOLOv3Loss, self).__init__()
+ self.num_classes = num_classes
+ self.ignore_thresh = ignore_thresh
+ self.label_smooth = label_smooth
+ self.downsample = downsample
+ self.scale_x_y = scale_x_y
+ self.iou_loss = iou_loss
+ self.iou_aware_loss = iou_aware_loss
+ self.distill_pairs = []
+
+ def obj_loss(self, pbox, gbox, pobj, tobj, anchor, downsample):
+ # pbox
+ pbox = decode_yolo(pbox, anchor, downsample)
+ pbox = xywh2xyxy(pbox)
+ pbox = paddle.concat(pbox, axis=-1)
+ b = pbox.shape[0]
+ pbox = pbox.reshape((b, -1, 4))
+ # gbox
+        # Convert gbox from (cx, cy, w, h) to corner form: gxy is the top-left
+        # corner, gwh (despite its name) the bottom-right corner.
+        gxy = gbox[:, :, 0:2] - gbox[:, :, 2:4] * 0.5
+        gwh = gbox[:, :, 0:2] + gbox[:, :, 2:4] * 0.5
+ gbox = paddle.concat([gxy, gwh], axis=-1)
+
+ iou = iou_similarity(pbox, gbox)
+ iou.stop_gradient = True
+ iou_max = iou.max(2) # [N, M1]
+ iou_mask = paddle.cast(iou_max <= self.ignore_thresh, dtype=pbox.dtype)
+ iou_mask.stop_gradient = True
+
+ pobj = pobj.reshape((b, -1))
+ tobj = tobj.reshape((b, -1))
+ obj_mask = paddle.cast(tobj > 0, dtype=pbox.dtype)
+ obj_mask.stop_gradient = True
+
+ loss_obj = F.binary_cross_entropy_with_logits(
+ pobj, obj_mask, reduction='none')
+ loss_obj_pos = (loss_obj * tobj)
+ loss_obj_neg = (loss_obj * (1 - obj_mask) * iou_mask)
+ return loss_obj_pos + loss_obj_neg
+
+ def cls_loss(self, pcls, tcls):
+ if self.label_smooth:
+ delta = min(1. / self.num_classes, 1. / 40)
+ pos, neg = 1 - delta, delta
+ # 1 for positive, 0 for negative
+ tcls = pos * paddle.cast(
+ tcls > 0., dtype=tcls.dtype) + neg * paddle.cast(
+ tcls <= 0., dtype=tcls.dtype)
+
+ loss_cls = F.binary_cross_entropy_with_logits(
+ pcls, tcls, reduction='none')
+ return loss_cls
+
+ def yolov3_loss(self, p, t, gt_box, anchor, downsample, scale=1.,
+ eps=1e-10):
+ na = len(anchor)
+ b, c, h, w = p.shape
+ if self.iou_aware_loss:
+ ioup, p = p[:, 0:na, :, :], p[:, na:, :, :]
+ ioup = ioup.unsqueeze(-1)
+ p = p.reshape((b, na, -1, h, w)).transpose((0, 1, 3, 4, 2))
+ x, y = p[:, :, :, :, 0:1], p[:, :, :, :, 1:2]
+ w, h = p[:, :, :, :, 2:3], p[:, :, :, :, 3:4]
+ obj, pcls = p[:, :, :, :, 4:5], p[:, :, :, :, 5:]
+ self.distill_pairs.append([x, y, w, h, obj, pcls])
+
+ t = t.transpose((0, 1, 3, 4, 2))
+ tx, ty = t[:, :, :, :, 0:1], t[:, :, :, :, 1:2]
+ tw, th = t[:, :, :, :, 2:3], t[:, :, :, :, 3:4]
+ tscale = t[:, :, :, :, 4:5]
+ tobj, tcls = t[:, :, :, :, 5:6], t[:, :, :, :, 6:]
+
+ tscale_obj = tscale * tobj
+ loss = dict()
+
+ x = scale * F.sigmoid(x) - 0.5 * (scale - 1.)
+ y = scale * F.sigmoid(y) - 0.5 * (scale - 1.)
+
+ if abs(scale - 1.) < eps:
+ loss_x = F.binary_cross_entropy(x, tx, reduction='none')
+ loss_y = F.binary_cross_entropy(y, ty, reduction='none')
+ loss_xy = tscale_obj * (loss_x + loss_y)
+ else:
+ loss_x = paddle.abs(x - tx)
+ loss_y = paddle.abs(y - ty)
+ loss_xy = tscale_obj * (loss_x + loss_y)
+
+ loss_xy = loss_xy.sum([1, 2, 3, 4]).mean()
+
+ loss_w = paddle.abs(w - tw)
+ loss_h = paddle.abs(h - th)
+ loss_wh = tscale_obj * (loss_w + loss_h)
+ loss_wh = loss_wh.sum([1, 2, 3, 4]).mean()
+
+ loss['loss_xy'] = loss_xy
+ loss['loss_wh'] = loss_wh
+
+ if self.iou_loss is not None:
+ # warn: do not modify x, y, w, h in place
+ box, tbox = [x, y, w, h], [tx, ty, tw, th]
+ pbox = bbox_transform(box, anchor, downsample)
+ gbox = bbox_transform(tbox, anchor, downsample)
+ loss_iou = self.iou_loss(pbox, gbox)
+ loss_iou = loss_iou * tscale_obj
+ loss_iou = loss_iou.sum([1, 2, 3, 4]).mean()
+ loss['loss_iou'] = loss_iou
+
+ if self.iou_aware_loss is not None:
+ box, tbox = [x, y, w, h], [tx, ty, tw, th]
+ pbox = bbox_transform(box, anchor, downsample)
+ gbox = bbox_transform(tbox, anchor, downsample)
+ loss_iou_aware = self.iou_aware_loss(ioup, pbox, gbox)
+ loss_iou_aware = loss_iou_aware * tobj
+ loss_iou_aware = loss_iou_aware.sum([1, 2, 3, 4]).mean()
+ loss['loss_iou_aware'] = loss_iou_aware
+
+ box = [x, y, w, h]
+ loss_obj = self.obj_loss(box, gt_box, obj, tobj, anchor, downsample)
+ loss_obj = loss_obj.sum(-1).mean()
+ loss['loss_obj'] = loss_obj
+ loss_cls = self.cls_loss(pcls, tcls) * tobj
+ loss_cls = loss_cls.sum([1, 2, 3, 4]).mean()
+ loss['loss_cls'] = loss_cls
+ return loss
+
+ def forward(self, inputs, targets, anchors):
+ np = len(inputs)
+ gt_targets = [targets['target{}'.format(i)] for i in range(np)]
+ gt_box = targets['gt_bbox']
+ yolo_losses = dict()
+ self.distill_pairs.clear()
+ for x, t, anchor, downsample in zip(inputs, gt_targets, anchors,
+ self.downsample):
+ yolo_loss = self.yolov3_loss(x, t, gt_box, anchor, downsample,
+ self.scale_x_y)
+ for k, v in yolo_loss.items():
+ if k in yolo_losses:
+ yolo_losses[k] += v
+ else:
+ yolo_losses[k] = v
+
+ loss = 0
+ for k, v in yolo_losses.items():
+ loss += v
+
+ yolo_losses['loss'] = loss
+ return yolo_losses
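+
+
+if __name__ == '__main__':
+    # Hedged sketch, not part of the original module: exercises cls_loss with
+    # label smoothing on random logits. The grid layout (2 images, 3 anchors,
+    # 8x8 cells, 80 classes) is an illustrative assumption; a full forward()
+    # additionally needs encoded targets and anchor definitions. Because this
+    # file uses relative imports, run it as a module, e.g.
+    # `python -m ppdet.modeling.losses.yolo_loss`.
+    paddle.seed(0)
+    loss_fn = YOLOv3Loss(num_classes=80, label_smooth=True)
+    pcls = paddle.randn([2, 3, 8, 8, 80])  # class logits per anchor/cell
+    tcls = paddle.cast(paddle.rand([2, 3, 8, 8, 80]) > 0.95, 'float32')
+    print('cls loss shape:', loss_fn.cls_loss(pcls, tcls).shape)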
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__init__.py
new file mode 100644
index 000000000..258e4c901
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__init__.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import matching
+from . import tracker
+from . import motion
+from . import visualization
+from . import utils
+
+from .matching import *
+from .tracker import *
+from .motion import *
+from .visualization import *
+from .utils import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..2ee33f2d9
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/utils.cpython-37.pyc
new file mode 100644
index 000000000..e00b5655a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/visualization.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/visualization.cpython-37.pyc
new file mode 100644
index 000000000..5b2c5807a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/__pycache__/visualization.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__init__.py
new file mode 100644
index 000000000..54c6680f7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__init__.py
@@ -0,0 +1,19 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import jde_matching
+from . import deepsort_matching
+
+from .jde_matching import *
+from .deepsort_matching import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..2ea1093b4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/deepsort_matching.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/deepsort_matching.cpython-37.pyc
new file mode 100644
index 000000000..c5cb999da
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/deepsort_matching.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/jde_matching.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/jde_matching.cpython-37.pyc
new file mode 100644
index 000000000..37ac0e28c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/__pycache__/jde_matching.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/deepsort_matching.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/deepsort_matching.py
new file mode 100644
index 000000000..3859ccfbd
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/deepsort_matching.py
@@ -0,0 +1,379 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/nwojke/deep_sort/tree/master/deep_sort
+"""
+
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from ..motion import kalman_filter
+
+INFTY_COST = 1e+5
+
+__all__ = [
+ 'iou_1toN',
+ 'iou_cost',
+ '_nn_euclidean_distance',
+ '_nn_cosine_distance',
+ 'NearestNeighborDistanceMetric',
+ 'min_cost_matching',
+ 'matching_cascade',
+ 'gate_cost_matrix',
+]
+
+
+def iou_1toN(bbox, candidates):
+ """
+    Compute intersection over union (IoU) between one box and N candidates.
+
+ Args:
+ bbox (ndarray): A bounding box in format `(top left x, top left y, width, height)`.
+ candidates (ndarray): A matrix of candidate bounding boxes (one per row) in the
+ same format as `bbox`.
+
+ Returns:
+ ious (ndarray): The intersection over union in [0, 1] between the `bbox`
+ and each candidate. A higher score means a larger fraction of the
+ `bbox` is occluded by the candidate.
+ """
+ bbox_tl = bbox[:2]
+ bbox_br = bbox[:2] + bbox[2:]
+ candidates_tl = candidates[:, :2]
+ candidates_br = candidates[:, :2] + candidates[:, 2:]
+
+ tl = np.c_[np.maximum(bbox_tl[0], candidates_tl[:, 0])[:, np.newaxis],
+ np.maximum(bbox_tl[1], candidates_tl[:, 1])[:, np.newaxis]]
+ br = np.c_[np.minimum(bbox_br[0], candidates_br[:, 0])[:, np.newaxis],
+ np.minimum(bbox_br[1], candidates_br[:, 1])[:, np.newaxis]]
+ wh = np.maximum(0., br - tl)
+
+ area_intersection = wh.prod(axis=1)
+ area_bbox = bbox[2:].prod()
+ area_candidates = candidates[:, 2:].prod(axis=1)
+ ious = area_intersection / (area_bbox + area_candidates - area_intersection)
+ return ious
+
+
+def iou_cost(tracks, detections, track_indices=None, detection_indices=None):
+ """
+ IoU distance metric.
+
+ Args:
+ tracks (list[Track]): A list of tracks.
+ detections (list[Detection]): A list of detections.
+ track_indices (Optional[list[int]]): A list of indices to tracks that
+ should be matched. Defaults to all `tracks`.
+ detection_indices (Optional[list[int]]): A list of indices to detections
+ that should be matched. Defaults to all `detections`.
+
+ Returns:
+ cost_matrix (ndarray): A cost matrix of shape len(track_indices),
+ len(detection_indices) where entry (i, j) is
+ `1 - iou(tracks[track_indices[i]], detections[detection_indices[j]])`.
+ """
+ if track_indices is None:
+ track_indices = np.arange(len(tracks))
+ if detection_indices is None:
+ detection_indices = np.arange(len(detections))
+
+ cost_matrix = np.zeros((len(track_indices), len(detection_indices)))
+ for row, track_idx in enumerate(track_indices):
+ if tracks[track_idx].time_since_update > 1:
+ cost_matrix[row, :] = 1e+5
+ continue
+
+ bbox = tracks[track_idx].to_tlwh()
+ candidates = np.asarray([detections[i].tlwh for i in detection_indices])
+ cost_matrix[row, :] = 1. - iou_1toN(bbox, candidates)
+ return cost_matrix
+
+
+def _nn_euclidean_distance(s, q):
+ """
+ Compute pair-wise squared (Euclidean) distance between points in `s` and `q`.
+
+ Args:
+ s (ndarray): Sample points: an NxM matrix of N samples of dimensionality M.
+ q (ndarray): Query points: an LxM matrix of L samples of dimensionality M.
+
+ Returns:
+        distances (ndarray): A vector of length L that contains, for each
+            entry in `q`, the smallest squared Euclidean distance to a sample
+            in `s`.
+ """
+ s, q = np.asarray(s), np.asarray(q)
+ if len(s) == 0 or len(q) == 0:
+ return np.zeros((len(s), len(q)))
+ s2, q2 = np.square(s).sum(axis=1), np.square(q).sum(axis=1)
+ distances = -2. * np.dot(s, q.T) + s2[:, None] + q2[None, :]
+ distances = np.clip(distances, 0., float(np.inf))
+
+ return np.maximum(0.0, distances.min(axis=0))
+
+
+def _nn_cosine_distance(s, q):
+ """
+ Compute pair-wise cosine distance between points in `s` and `q`.
+
+ Args:
+ s (ndarray): Sample points: an NxM matrix of N samples of dimensionality M.
+ q (ndarray): Query points: an LxM matrix of L samples of dimensionality M.
+
+ Returns:
+        distances (ndarray): A vector of length L that contains, for each
+            entry in `q`, the smallest cosine distance to a sample in `s`.
+ """
+ s = np.asarray(s) / np.linalg.norm(s, axis=1, keepdims=True)
+ q = np.asarray(q) / np.linalg.norm(q, axis=1, keepdims=True)
+ distances = 1. - np.dot(s, q.T)
+
+ return distances.min(axis=0)
+
+
+class NearestNeighborDistanceMetric(object):
+ """
+ A nearest neighbor distance metric that, for each target, returns
+ the closest distance to any sample that has been observed so far.
+
+ Args:
+ metric (str): Either "euclidean" or "cosine".
+ matching_threshold (float): The matching threshold. Samples with larger
+ distance are considered an invalid match.
+ budget (Optional[int]): If not None, fix samples per class to at most
+ this number. Removes the oldest samples when the budget is reached.
+
+ Attributes:
+ samples (Dict[int -> List[ndarray]]): A dictionary that maps from target
+ identities to the list of samples that have been observed so far.
+ """
+
+ def __init__(self, metric, matching_threshold, budget=None):
+ if metric == "euclidean":
+ self._metric = _nn_euclidean_distance
+ elif metric == "cosine":
+ self._metric = _nn_cosine_distance
+ else:
+ raise ValueError(
+ "Invalid metric; must be either 'euclidean' or 'cosine'")
+ self.matching_threshold = matching_threshold
+ self.budget = budget
+ self.samples = {}
+
+ def partial_fit(self, features, targets, active_targets):
+ """
+ Update the distance metric with new data.
+
+ Args:
+ features (ndarray): An NxM matrix of N features of dimensionality M.
+ targets (ndarray): An integer array of associated target identities.
+ active_targets (List[int]): A list of targets that are currently
+ present in the scene.
+ """
+ for feature, target in zip(features, targets):
+ self.samples.setdefault(target, []).append(feature)
+ if self.budget is not None:
+ self.samples[target] = self.samples[target][-self.budget:]
+ self.samples = {k: self.samples[k] for k in active_targets}
+
+ def distance(self, features, targets):
+ """
+ Compute distance between features and targets.
+
+ Args:
+ features (ndarray): An NxM matrix of N features of dimensionality M.
+ targets (list[int]): A list of targets to match the given `features` against.
+
+ Returns:
+ cost_matrix (ndarray): a cost matrix of shape len(targets), len(features),
+ where element (i, j) contains the closest squared distance between
+ `targets[i]` and `features[j]`.
+ """
+ cost_matrix = np.zeros((len(targets), len(features)))
+ for i, target in enumerate(targets):
+ cost_matrix[i, :] = self._metric(self.samples[target], features)
+ return cost_matrix
+
+
+def min_cost_matching(distance_metric,
+ max_distance,
+ tracks,
+ detections,
+ track_indices=None,
+ detection_indices=None):
+ """
+ Solve linear assignment problem.
+
+ Args:
+ distance_metric :
+ Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
+ The distance metric is given a list of tracks and detections as
+ well as a list of N track indices and M detection indices. The
+ metric should return the NxM dimensional cost matrix, where element
+ (i, j) is the association cost between the i-th track in the given
+ track indices and the j-th detection in the given detection_indices.
+ max_distance (float): Gating threshold. Associations with cost larger
+ than this value are disregarded.
+ tracks (list[Track]): A list of predicted tracks at the current time
+ step.
+ detections (list[Detection]): A list of detections at the current time
+ step.
+ track_indices (list[int]): List of track indices that maps rows in
+ `cost_matrix` to tracks in `tracks`.
+ detection_indices (List[int]): List of detection indices that maps
+ columns in `cost_matrix` to detections in `detections`.
+
+ Returns:
+ A tuple (List[(int, int)], List[int], List[int]) with the following
+ three entries:
+ * A list of matched track and detection indices.
+ * A list of unmatched track indices.
+ * A list of unmatched detection indices.
+ """
+ if track_indices is None:
+ track_indices = np.arange(len(tracks))
+ if detection_indices is None:
+ detection_indices = np.arange(len(detections))
+
+ if len(detection_indices) == 0 or len(track_indices) == 0:
+ return [], track_indices, detection_indices # Nothing to match.
+
+ cost_matrix = distance_metric(tracks, detections, track_indices,
+ detection_indices)
+
+ cost_matrix[cost_matrix > max_distance] = max_distance + 1e-5
+ indices = linear_sum_assignment(cost_matrix)
+
+ matches, unmatched_tracks, unmatched_detections = [], [], []
+ for col, detection_idx in enumerate(detection_indices):
+ if col not in indices[1]:
+ unmatched_detections.append(detection_idx)
+ for row, track_idx in enumerate(track_indices):
+ if row not in indices[0]:
+ unmatched_tracks.append(track_idx)
+ for row, col in zip(indices[0], indices[1]):
+ track_idx = track_indices[row]
+ detection_idx = detection_indices[col]
+ if cost_matrix[row, col] > max_distance:
+ unmatched_tracks.append(track_idx)
+ unmatched_detections.append(detection_idx)
+ else:
+ matches.append((track_idx, detection_idx))
+ return matches, unmatched_tracks, unmatched_detections
+
+
+def matching_cascade(distance_metric,
+ max_distance,
+ cascade_depth,
+ tracks,
+ detections,
+ track_indices=None,
+ detection_indices=None):
+ """
+ Run matching cascade.
+
+ Args:
+ distance_metric :
+ Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
+ The distance metric is given a list of tracks and detections as
+ well as a list of N track indices and M detection indices. The
+ metric should return the NxM dimensional cost matrix, where element
+ (i, j) is the association cost between the i-th track in the given
+ track indices and the j-th detection in the given detection_indices.
+ max_distance (float): Gating threshold. Associations with cost larger
+ than this value are disregarded.
+        cascade_depth (int): The cascade depth, should be set to the maximum
+ track age.
+ tracks (list[Track]): A list of predicted tracks at the current time
+ step.
+ detections (list[Detection]): A list of detections at the current time
+ step.
+ track_indices (list[int]): List of track indices that maps rows in
+ `cost_matrix` to tracks in `tracks`.
+ detection_indices (List[int]): List of detection indices that maps
+ columns in `cost_matrix` to detections in `detections`.
+
+ Returns:
+ A tuple (List[(int, int)], List[int], List[int]) with the following
+ three entries:
+ * A list of matched track and detection indices.
+ * A list of unmatched track indices.
+ * A list of unmatched detection indices.
+ """
+ if track_indices is None:
+ track_indices = list(range(len(tracks)))
+ if detection_indices is None:
+ detection_indices = list(range(len(detections)))
+
+ unmatched_detections = detection_indices
+ matches = []
+ for level in range(cascade_depth):
+ if len(unmatched_detections) == 0: # No detections left
+ break
+
+ track_indices_l = [
+ k for k in track_indices if tracks[k].time_since_update == 1 + level
+ ]
+ if len(track_indices_l) == 0: # Nothing to match at this level
+ continue
+
+ matches_l, _, unmatched_detections = \
+ min_cost_matching(
+ distance_metric, max_distance, tracks, detections,
+ track_indices_l, unmatched_detections)
+ matches += matches_l
+ unmatched_tracks = list(set(track_indices) - set(k for k, _ in matches))
+ return matches, unmatched_tracks, unmatched_detections
+
+
+def gate_cost_matrix(kf,
+ cost_matrix,
+ tracks,
+ detections,
+ track_indices,
+ detection_indices,
+ gated_cost=INFTY_COST,
+ only_position=False):
+ """
+ Invalidate infeasible entries in cost matrix based on the state
+ distributions obtained by Kalman filtering.
+
+ Args:
+ kf (object): The Kalman filter.
+ cost_matrix (ndarray): The NxM dimensional cost matrix, where N is the
+ number of track indices and M is the number of detection indices,
+ such that entry (i, j) is the association cost between
+ `tracks[track_indices[i]]` and `detections[detection_indices[j]]`.
+ tracks (list[Track]): A list of predicted tracks at the current time
+ step.
+ detections (list[Detection]): A list of detections at the current time
+ step.
+ track_indices (List[int]): List of track indices that maps rows in
+ `cost_matrix` to tracks in `tracks`.
+ detection_indices (List[int]): List of detection indices that maps
+ columns in `cost_matrix` to detections in `detections`.
+        gated_cost (Optional[float]): Entries in the cost matrix corresponding
+            to infeasible associations are set to this value. Defaults to a very
+ large value.
+ only_position (Optional[bool]): If True, only the x, y position of the
+ state distribution is considered during gating. Default False.
+ """
+ gating_dim = 2 if only_position else 4
+ gating_threshold = kalman_filter.chi2inv95[gating_dim]
+ measurements = np.asarray(
+ [detections[i].to_xyah() for i in detection_indices])
+ for row, track_idx in enumerate(track_indices):
+ track = tracks[track_idx]
+ gating_distance = kf.gating_distance(track.mean, track.covariance,
+ measurements, only_position)
+ cost_matrix[row, gating_distance > gating_threshold] = gated_cost
+ return cost_matrix
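+
+
+if __name__ == '__main__':
+    # Hedged example, not part of the original module: checks iou_1toN on two
+    # hand-built candidates and runs the nearest-neighbor metric on random
+    # features. All numbers are illustrative assumptions. Because this file
+    # uses relative imports, run it as a module, e.g.
+    # `python -m ppdet.modeling.mot.matching.deepsort_matching`.
+    bbox = np.array([0., 0., 10., 10.])            # (top-left x, y, w, h)
+    candidates = np.array([[5., 5., 10., 10.],     # 25 px^2 of overlap
+                           [20., 20., 10., 10.]])  # disjoint
+    print('ious:', iou_1toN(bbox, candidates))     # approx [0.143, 0.0]
+
+    metric = NearestNeighborDistanceMetric('cosine', matching_threshold=0.4)
+    feats = np.random.RandomState(0).rand(5, 128)
+    metric.partial_fit(feats, targets=np.array([1, 1, 2, 2, 2]),
+                       active_targets=[1, 2])
+    print('cost matrix shape:', metric.distance(feats[:2], [1, 2]).shape)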
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/jde_matching.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/jde_matching.py
new file mode 100644
index 000000000..e9c40dba4
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/matching/jde_matching.py
@@ -0,0 +1,144 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/matching.py
+"""
+
+import lap
+import scipy
+import numpy as np
+from scipy.spatial.distance import cdist
+from ..motion import kalman_filter
+import warnings
+warnings.filterwarnings("ignore")
+
+__all__ = [
+ 'merge_matches',
+ 'linear_assignment',
+ 'cython_bbox_ious',
+ 'iou_distance',
+ 'embedding_distance',
+ 'fuse_motion',
+]
+
+
+def merge_matches(m1, m2, shape):
+ O, P, Q = shape
+ m1 = np.asarray(m1)
+ m2 = np.asarray(m2)
+
+ M1 = scipy.sparse.coo_matrix(
+ (np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P))
+ M2 = scipy.sparse.coo_matrix(
+ (np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q))
+
+ mask = M1 * M2
+ match = mask.nonzero()
+ match = list(zip(match[0], match[1]))
+ unmatched_O = tuple(set(range(O)) - set([i for i, j in match]))
+ unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match]))
+
+ return match, unmatched_O, unmatched_Q
+
+
+def linear_assignment(cost_matrix, thresh):
+ if cost_matrix.size == 0:
+ return np.empty(
+ (0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(
+ range(cost_matrix.shape[1]))
+ matches, unmatched_a, unmatched_b = [], [], []
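+    # lap.lapjv solves the assignment: x[i] is the column matched to row i
+    # (or -1 when row i stays unmatched under cost_limit), and y is the
+    # inverse mapping for columns.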
+ cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh)
+ for ix, mx in enumerate(x):
+ if mx >= 0:
+ matches.append([ix, mx])
+ unmatched_a = np.where(x < 0)[0]
+ unmatched_b = np.where(y < 0)[0]
+ matches = np.asarray(matches)
+ return matches, unmatched_a, unmatched_b
+
+
+def cython_bbox_ious(atlbrs, btlbrs):
+    ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float64)
+ if ious.size == 0:
+ return ious
+ try:
+ import cython_bbox
+ except Exception as e:
+        print('cython_bbox not found, please install cython_bbox, '
+              'for example: `pip install cython_bbox`.')
+ raise e
+
+    ious = cython_bbox.bbox_overlaps(
+        np.ascontiguousarray(
+            atlbrs, dtype=np.float64),
+        np.ascontiguousarray(
+            btlbrs, dtype=np.float64))
+ return ious
+
+
+def iou_distance(atracks, btracks):
+ """
+ Compute cost based on IoU between two list[STrack].
+ """
+ if (len(atracks) > 0 and isinstance(atracks[0], np.ndarray)) or (
+ len(btracks) > 0 and isinstance(btracks[0], np.ndarray)):
+ atlbrs = atracks
+ btlbrs = btracks
+ else:
+ atlbrs = [track.tlbr for track in atracks]
+ btlbrs = [track.tlbr for track in btracks]
+ _ious = cython_bbox_ious(atlbrs, btlbrs)
+ cost_matrix = 1 - _ious
+
+ return cost_matrix
+
+
+def embedding_distance(tracks, detections, metric='euclidean'):
+ """
+ Compute cost based on features between two list[STrack].
+ """
+    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float64)
+ if cost_matrix.size == 0:
+ return cost_matrix
+    det_features = np.asarray(
+        [track.curr_feat for track in detections], dtype=np.float64)
+    track_features = np.asarray(
+        [track.smooth_feat for track in tracks], dtype=np.float64)
+    cost_matrix = np.maximum(0.0, cdist(track_features, det_features,
+                                        metric))  # Normalized features
+ return cost_matrix
+
+
+def fuse_motion(kf,
+ cost_matrix,
+ tracks,
+ detections,
+ only_position=False,
+ lambda_=0.98):
+ if cost_matrix.size == 0:
+ return cost_matrix
+ gating_dim = 2 if only_position else 4
+ gating_threshold = kalman_filter.chi2inv95[gating_dim]
+ measurements = np.asarray([det.to_xyah() for det in detections])
+ for row, track in enumerate(tracks):
+ gating_distance = kf.gating_distance(
+ track.mean,
+ track.covariance,
+ measurements,
+ only_position,
+ metric='maha')
+ cost_matrix[row, gating_distance > gating_threshold] = np.inf
+ cost_matrix[row] = lambda_ * cost_matrix[row] + (1 - lambda_
+ ) * gating_distance
+ return cost_matrix
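+
+
+if __name__ == '__main__':
+    # Hedged example, not part of the original module (assumes the `lap`
+    # package is installed): a 2x2 cost matrix with an obvious optimum.
+    # Because this file uses relative imports, run it as a module, e.g.
+    # `python -m ppdet.modeling.mot.matching.jde_matching`.
+    cost = np.array([[0.1, 0.9],
+                     [0.8, 0.2]])
+    matches, unmatched_a, unmatched_b = linear_assignment(cost, thresh=0.5)
+    print('matches:', matches)  # expected: [[0 0], [1 1]]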
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__init__.py
new file mode 100644
index 000000000..e42dd0b01
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__init__.py
@@ -0,0 +1,17 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import kalman_filter
+
+from .kalman_filter import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..710870d0f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/kalman_filter.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/kalman_filter.cpython-37.pyc
new file mode 100644
index 000000000..6df67d811
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/__pycache__/kalman_filter.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/kalman_filter.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/kalman_filter.py
new file mode 100644
index 000000000..e3d42ea14
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/motion/kalman_filter.py
@@ -0,0 +1,270 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/nwojke/deep_sort/blob/master/deep_sort/kalman_filter.py
+"""
+
+import numpy as np
+import scipy.linalg
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['KalmanFilter']
+"""
+Table for the 0.95 quantile of the chi-square distribution with N degrees of
+freedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv
+function and used as Mahalanobis gating threshold.
+"""
+
+chi2inv95 = {
+ 1: 3.8415,
+ 2: 5.9915,
+ 3: 7.8147,
+ 4: 9.4877,
+ 5: 11.070,
+ 6: 12.592,
+ 7: 14.067,
+ 8: 15.507,
+ 9: 16.919
+}
+
+
+@register
+@serializable
+class KalmanFilter(object):
+ """
+ A simple Kalman filter for tracking bounding boxes in image space.
+
+ The 8-dimensional state space
+
+ x, y, a, h, vx, vy, va, vh
+
+ contains the bounding box center position (x, y), aspect ratio a, height h,
+ and their respective velocities.
+
+ Object motion follows a constant velocity model. The bounding box location
+ (x, y, a, h) is taken as direct observation of the state space (linear
+ observation model).
+
+ """
+
+ def __init__(self):
+ ndim, dt = 4, 1.
+
+ # Create Kalman filter model matrices.
+ self._motion_mat = np.eye(2 * ndim, 2 * ndim)
+ for i in range(ndim):
+ self._motion_mat[i, ndim + i] = dt
+ self._update_mat = np.eye(ndim, 2 * ndim)
+
+ # Motion and observation uncertainty are chosen relative to the current
+ # state estimate. These weights control the amount of uncertainty in
+ # the model. This is a bit hacky.
+ self._std_weight_position = 1. / 20
+ self._std_weight_velocity = 1. / 160
+
+ def initiate(self, measurement):
+ """
+ Create track from unassociated measurement.
+
+ Args:
+ measurement (ndarray): Bounding box coordinates (x, y, a, h) with
+ center position (x, y), aspect ratio a, and height h.
+
+ Returns:
+ The mean vector (8 dimensional) and covariance matrix (8x8
+ dimensional) of the new track. Unobserved velocities are
+ initialized to 0 mean.
+ """
+ mean_pos = measurement
+ mean_vel = np.zeros_like(mean_pos)
+ mean = np.r_[mean_pos, mean_vel]
+
+ std = [
+ 2 * self._std_weight_position * measurement[3],
+ 2 * self._std_weight_position * measurement[3], 1e-2,
+ 2 * self._std_weight_position * measurement[3],
+ 10 * self._std_weight_velocity * measurement[3],
+ 10 * self._std_weight_velocity * measurement[3], 1e-5,
+ 10 * self._std_weight_velocity * measurement[3]
+ ]
+ covariance = np.diag(np.square(std))
+ return mean, covariance
+
+ def predict(self, mean, covariance):
+ """
+ Run Kalman filter prediction step.
+
+ Args:
+ mean (ndarray): The 8 dimensional mean vector of the object state
+ at the previous time step.
+ covariance (ndarray): The 8x8 dimensional covariance matrix of the
+ object state at the previous time step.
+
+ Returns:
+ The mean vector and covariance matrix of the predicted state.
+ Unobserved velocities are initialized to 0 mean.
+ """
+ std_pos = [
+ self._std_weight_position * mean[3], self._std_weight_position *
+ mean[3], 1e-2, self._std_weight_position * mean[3]
+ ]
+ std_vel = [
+ self._std_weight_velocity * mean[3], self._std_weight_velocity *
+ mean[3], 1e-5, self._std_weight_velocity * mean[3]
+ ]
+ motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))
+
+ #mean = np.dot(self._motion_mat, mean)
+ mean = np.dot(mean, self._motion_mat.T)
+ covariance = np.linalg.multi_dot(
+ (self._motion_mat, covariance, self._motion_mat.T)) + motion_cov
+
+ return mean, covariance
+
+ def project(self, mean, covariance):
+ """
+ Project state distribution to measurement space.
+
+        Args:
+ mean (ndarray): The state's mean vector (8 dimensional array).
+ covariance (ndarray): The state's covariance matrix (8x8 dimensional).
+
+ Returns:
+ The projected mean and covariance matrix of the given state estimate.
+ """
+ std = [
+ self._std_weight_position * mean[3], self._std_weight_position *
+ mean[3], 1e-1, self._std_weight_position * mean[3]
+ ]
+ innovation_cov = np.diag(np.square(std))
+
+ mean = np.dot(self._update_mat, mean)
+ covariance = np.linalg.multi_dot((self._update_mat, covariance,
+ self._update_mat.T))
+ return mean, covariance + innovation_cov
+
+ def multi_predict(self, mean, covariance):
+ """
+ Run Kalman filter prediction step (Vectorized version).
+
+ Args:
+ mean (ndarray): The Nx8 dimensional mean matrix of the object states
+ at the previous time step.
+            covariance (ndarray): The Nx8x8 dimensional covariance matrices of the
+ object states at the previous time step.
+
+ Returns:
+ The mean vector and covariance matrix of the predicted state.
+ Unobserved velocities are initialized to 0 mean.
+ """
+ std_pos = [
+ self._std_weight_position * mean[:, 3], self._std_weight_position *
+ mean[:, 3], 1e-2 * np.ones_like(mean[:, 3]),
+ self._std_weight_position * mean[:, 3]
+ ]
+ std_vel = [
+ self._std_weight_velocity * mean[:, 3], self._std_weight_velocity *
+ mean[:, 3], 1e-5 * np.ones_like(mean[:, 3]),
+ self._std_weight_velocity * mean[:, 3]
+ ]
+ sqr = np.square(np.r_[std_pos, std_vel]).T
+
+ motion_cov = []
+ for i in range(len(mean)):
+ motion_cov.append(np.diag(sqr[i]))
+ motion_cov = np.asarray(motion_cov)
+
+ mean = np.dot(mean, self._motion_mat.T)
+ left = np.dot(self._motion_mat, covariance).transpose((1, 0, 2))
+ covariance = np.dot(left, self._motion_mat.T) + motion_cov
+
+ return mean, covariance
+
+ def update(self, mean, covariance, measurement):
+ """
+ Run Kalman filter correction step.
+
+ Args:
+ mean (ndarray): The predicted state's mean vector (8 dimensional).
+ covariance (ndarray): The state's covariance matrix (8x8 dimensional).
+ measurement (ndarray): The 4 dimensional measurement vector
+ (x, y, a, h), where (x, y) is the center position, a the aspect
+ ratio, and h the height of the bounding box.
+
+ Returns:
+ The measurement-corrected state distribution.
+ """
+ projected_mean, projected_cov = self.project(mean, covariance)
+
+ chol_factor, lower = scipy.linalg.cho_factor(
+ projected_cov, lower=True, check_finite=False)
+ kalman_gain = scipy.linalg.cho_solve(
+ (chol_factor, lower),
+ np.dot(covariance, self._update_mat.T).T,
+ check_finite=False).T
+ innovation = measurement - projected_mean
+
+ new_mean = mean + np.dot(innovation, kalman_gain.T)
+ new_covariance = covariance - np.linalg.multi_dot(
+ (kalman_gain, projected_cov, kalman_gain.T))
+ return new_mean, new_covariance
+
+ def gating_distance(self,
+ mean,
+ covariance,
+ measurements,
+ only_position=False,
+ metric='maha'):
+ """
+ Compute gating distance between state distribution and measurements.
+ A suitable distance threshold can be obtained from `chi2inv95`. If
+ `only_position` is False, the chi-square distribution has 4 degrees of
+ freedom, otherwise 2.
+
+ Args:
+ mean (ndarray): Mean vector over the state distribution (8
+ dimensional).
+ covariance (ndarray): Covariance of the state distribution (8x8
+ dimensional).
+ measurements (ndarray): An Nx4 dimensional matrix of N measurements,
+ each in format (x, y, a, h) where (x, y) is the bounding box center
+ position, a the aspect ratio, and h the height.
+ only_position (Optional[bool]): If True, distance computation is
+ done with respect to the bounding box center position only.
+ metric (str): Metric type, 'gaussian' or 'maha'.
+
+        Returns:
+ An array of length N, where the i-th element contains the squared
+ Mahalanobis distance between (mean, covariance) and `measurements[i]`.
+ """
+ mean, covariance = self.project(mean, covariance)
+ if only_position:
+ mean, covariance = mean[:2], covariance[:2, :2]
+ measurements = measurements[:, :2]
+
+ d = measurements - mean
+ if metric == 'gaussian':
+ return np.sum(d * d, axis=1)
+ elif metric == 'maha':
+ cholesky_factor = np.linalg.cholesky(covariance)
+ z = scipy.linalg.solve_triangular(
+ cholesky_factor,
+ d.T,
+ lower=True,
+ check_finite=False,
+ overwrite_b=True)
+ squared_maha = np.sum(z * z, axis=0)
+ return squared_maha
+ else:
+ raise ValueError('invalid distance metric')
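+
+
+if __name__ == '__main__':
+    # Hedged example, not part of the original module: one initiate/predict/
+    # update cycle on synthetic (x, y, a, h) measurements, followed by a
+    # gating-distance query. The numbers are illustrative assumptions only.
+    kf = KalmanFilter()
+    mean, cov = kf.initiate(np.array([50., 60., 0.5, 100.]))
+    mean, cov = kf.predict(mean, cov)
+    mean, cov = kf.update(mean, cov, np.array([52., 61., 0.5, 101.]))
+    print('corrected (x, y, a, h):', mean[:4])
+    d = kf.gating_distance(mean, cov, np.array([[52., 61., 0.5, 101.]]))
+    print('squared Mahalanobis distance:', d)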
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__init__.py
new file mode 100644
index 000000000..b74593b41
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__init__.py
@@ -0,0 +1,23 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import base_jde_tracker
+from . import base_sde_tracker
+from . import jde_tracker
+from . import deepsort_tracker
+
+from .base_jde_tracker import *
+from .base_sde_tracker import *
+from .jde_tracker import *
+from .deepsort_tracker import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..23093785a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_jde_tracker.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_jde_tracker.cpython-37.pyc
new file mode 100644
index 000000000..9faefa723
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_jde_tracker.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_sde_tracker.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_sde_tracker.cpython-37.pyc
new file mode 100644
index 000000000..7a16be8f4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/base_sde_tracker.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/deepsort_tracker.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/deepsort_tracker.cpython-37.pyc
new file mode 100644
index 000000000..ab2dffa68
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/deepsort_tracker.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/jde_tracker.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/jde_tracker.cpython-37.pyc
new file mode 100644
index 000000000..8c3fe9ad6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/__pycache__/jde_tracker.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_jde_tracker.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_jde_tracker.py
new file mode 100644
index 000000000..8e2ef38bc
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_jde_tracker.py
@@ -0,0 +1,297 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/multitracker.py
+"""
+
+import numpy as np
+from collections import defaultdict
+from collections import deque, OrderedDict
+from ..matching import jde_matching as matching
+from ppdet.core.workspace import register, serializable
+import warnings
+warnings.filterwarnings("ignore")
+
+__all__ = [
+ 'TrackState',
+ 'BaseTrack',
+ 'STrack',
+ 'joint_stracks',
+ 'sub_stracks',
+ 'remove_duplicate_stracks',
+]
+
+
+class TrackState(object):
+ New = 0
+ Tracked = 1
+ Lost = 2
+ Removed = 3
+
+
+@register
+@serializable
+class BaseTrack(object):
+ _count_dict = defaultdict(int) # support single class and multi classes
+
+ track_id = 0
+ is_activated = False
+ state = TrackState.New
+
+ history = OrderedDict()
+ features = []
+ curr_feature = None
+ score = 0
+ start_frame = 0
+ frame_id = 0
+ time_since_update = 0
+
+ # multi-camera
+ location = (np.inf, np.inf)
+
+ @property
+ def end_frame(self):
+ return self.frame_id
+
+ @staticmethod
+ def next_id(cls_id):
+ BaseTrack._count_dict[cls_id] += 1
+ return BaseTrack._count_dict[cls_id]
+
+ # @even: reset track id
+ @staticmethod
+ def init_count(num_classes):
+ """
+ Initiate _count for all object classes
+ :param num_classes:
+ """
+ for cls_id in range(num_classes):
+ BaseTrack._count_dict[cls_id] = 0
+
+ @staticmethod
+ def reset_track_count(cls_id):
+ BaseTrack._count_dict[cls_id] = 0
+
+ def activate(self, *args):
+ raise NotImplementedError
+
+ def predict(self):
+ raise NotImplementedError
+
+ def update(self, *args, **kwargs):
+ raise NotImplementedError
+
+ def mark_lost(self):
+ self.state = TrackState.Lost
+
+ def mark_removed(self):
+ self.state = TrackState.Removed
+
+
+@register
+@serializable
+class STrack(BaseTrack):
+ def __init__(self,
+ tlwh,
+ score,
+ temp_feat,
+ num_classes,
+ cls_id,
+ buff_size=30):
+ # object class id
+ self.cls_id = cls_id
+ # wait activate
+        self._tlwh = np.asarray(tlwh, dtype=np.float64)
+ self.kalman_filter = None
+ self.mean, self.covariance = None, None
+ self.is_activated = False
+
+ self.score = score
+ self.track_len = 0
+
+ self.smooth_feat = None
+ self.update_features(temp_feat)
+ self.features = deque([], maxlen=buff_size)
+ self.alpha = 0.9
+
+ def update_features(self, feat):
+ # L2 normalizing
+ feat /= np.linalg.norm(feat)
+ self.curr_feat = feat
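+        # exponential moving average over embeddings: alpha = 0.9 keeps 90%
+        # of the running feature, damping frame-to-frame appearance jitter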
+ if self.smooth_feat is None:
+ self.smooth_feat = feat
+ else:
+ self.smooth_feat = self.alpha * self.smooth_feat + (1.0 - self.alpha
+ ) * feat
+ self.features.append(feat)
+ self.smooth_feat /= np.linalg.norm(self.smooth_feat)
+
+ def predict(self):
+ mean_state = self.mean.copy()
+ if self.state != TrackState.Tracked:
+ mean_state[7] = 0
+ self.mean, self.covariance = self.kalman_filter.predict(mean_state,
+ self.covariance)
+
+ @staticmethod
+ def multi_predict(tracks, kalman_filter):
+ if len(tracks) > 0:
+ multi_mean = np.asarray([track.mean.copy() for track in tracks])
+ multi_covariance = np.asarray(
+ [track.covariance for track in tracks])
+ for i, st in enumerate(tracks):
+ if st.state != TrackState.Tracked:
+ multi_mean[i][7] = 0
+ multi_mean, multi_covariance = kalman_filter.multi_predict(
+ multi_mean, multi_covariance)
+ for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
+ tracks[i].mean = mean
+ tracks[i].covariance = cov
+
+ def reset_track_id(self):
+ self.reset_track_count(self.cls_id)
+
+ def activate(self, kalman_filter, frame_id):
+ """Start a new track"""
+ self.kalman_filter = kalman_filter
+ # update track id for the object class
+ self.track_id = self.next_id(self.cls_id)
+ self.mean, self.covariance = self.kalman_filter.initiate(
+ self.tlwh_to_xyah(self._tlwh))
+
+ self.track_len = 0
+ self.state = TrackState.Tracked # set flag 'tracked'
+
+ if frame_id == 1: # to record the first frame's detection result
+ self.is_activated = True
+
+ self.frame_id = frame_id
+ self.start_frame = frame_id
+
+ def re_activate(self, new_track, frame_id, new_id=False):
+ self.mean, self.covariance = self.kalman_filter.update(
+ self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh))
+ self.update_features(new_track.curr_feat)
+ self.track_len = 0
+ self.state = TrackState.Tracked
+ self.is_activated = True
+ self.frame_id = frame_id
+ if new_id: # update track id for the object class
+ self.track_id = self.next_id(self.cls_id)
+
+ def update(self, new_track, frame_id, update_feature=True):
+ self.frame_id = frame_id
+ self.track_len += 1
+
+ new_tlwh = new_track.tlwh
+ self.mean, self.covariance = self.kalman_filter.update(
+ self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh))
+ self.state = TrackState.Tracked # set flag 'tracked'
+ self.is_activated = True # set flag 'activated'
+
+ self.score = new_track.score
+ if update_feature:
+ self.update_features(new_track.curr_feat)
+
+ @property
+ def tlwh(self):
+ """Get current position in bounding box format `(top left x, top left y,
+ width, height)`.
+ """
+ if self.mean is None:
+ return self._tlwh.copy()
+
+ ret = self.mean[:4].copy()
+ ret[2] *= ret[3]
+ ret[:2] -= ret[2:] / 2
+ return ret
+
+ @property
+ def tlbr(self):
+ """Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,
+ `(top left, bottom right)`.
+ """
+ ret = self.tlwh.copy()
+ ret[2:] += ret[:2]
+ return ret
+
+ @staticmethod
+ def tlwh_to_xyah(tlwh):
+ """Convert bounding box to format `(center x, center y, aspect ratio,
+ height)`, where the aspect ratio is `width / height`.
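+
+        Example (illustrative): tlwh_to_xyah([10, 20, 30, 60]) returns
+        [25., 50., 0.5, 60.]: center (25, 50), aspect 30/60 = 0.5, height 60.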
+ """
+ ret = np.asarray(tlwh).copy()
+ ret[:2] += ret[2:] / 2
+ ret[2] /= ret[3]
+ return ret
+
+ def to_xyah(self):
+ return self.tlwh_to_xyah(self.tlwh)
+
+ @staticmethod
+ def tlbr_to_tlwh(tlbr):
+ ret = np.asarray(tlbr).copy()
+ ret[2:] -= ret[:2]
+ return ret
+
+ @staticmethod
+ def tlwh_to_tlbr(tlwh):
+ ret = np.asarray(tlwh).copy()
+ ret[2:] += ret[:2]
+ return ret
+
+ def __repr__(self):
+ return 'OT_({}-{})_({}-{})'.format(self.cls_id, self.track_id,
+ self.start_frame, self.end_frame)
+
+
+def joint_stracks(tlista, tlistb):
+ exists = {}
+ res = []
+ for t in tlista:
+ exists[t.track_id] = 1
+ res.append(t)
+ for t in tlistb:
+ tid = t.track_id
+ if not exists.get(tid, 0):
+ exists[tid] = 1
+ res.append(t)
+ return res
+
+
+def sub_stracks(tlista, tlistb):
+ stracks = {}
+ for t in tlista:
+ stracks[t.track_id] = t
+ for t in tlistb:
+ tid = t.track_id
+ if stracks.get(tid, 0):
+ del stracks[tid]
+ return list(stracks.values())
+
+
+def remove_duplicate_stracks(stracksa, stracksb):
+ pdist = matching.iou_distance(stracksa, stracksb)
+ pairs = np.where(pdist < 0.15)
+ dupa, dupb = list(), list()
+ for p, q in zip(*pairs):
+ timep = stracksa[p].frame_id - stracksa[p].start_frame
+ timeq = stracksb[q].frame_id - stracksb[q].start_frame
+ if timep > timeq:
+ dupb.append(q)
+ else:
+ dupa.append(p)
+    resa = [t for i, t in enumerate(stracksa) if i not in dupa]
+    resb = [t for i, t in enumerate(stracksb) if i not in dupb]
+ return resa, resb
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_sde_tracker.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_sde_tracker.py
new file mode 100644
index 000000000..accc2016f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/base_sde_tracker.py
@@ -0,0 +1,156 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/nwojke/deep_sort/blob/master/deep_sort/track.py
+"""
+
+import datetime
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['TrackState', 'Track']
+
+
+class TrackState(object):
+ """
+ Enumeration type for the single target track state. Newly created tracks are
+ classified as `tentative` until enough evidence has been collected. Then,
+ the track state is changed to `confirmed`. Tracks that are no longer alive
+ are classified as `deleted` to mark them for removal from the set of active
+ tracks.
+ """
+ Tentative = 1
+ Confirmed = 2
+ Deleted = 3
+
+
+@register
+@serializable
+class Track(object):
+ """
+ A single target track with state space `(x, y, a, h)` and associated
+ velocities, where `(x, y)` is the center of the bounding box, `a` is the
+ aspect ratio and `h` is the height.
+
+ Args:
+ mean (ndarray): Mean vector of the initial state distribution.
+ covariance (ndarray): Covariance matrix of the initial state distribution.
+ track_id (int): A unique track identifier.
+ n_init (int): Number of consecutive detections before the track is confirmed.
+ The track state is set to `Deleted` if a miss occurs within the first
+ `n_init` frames.
+ max_age (int): The maximum number of consecutive misses before the track
+ state is set to `Deleted`.
+ cls_id (int): The category id of the tracked box.
+ score (float): The confidence score of the tracked box.
+ feature (Optional[ndarray]): Feature vector of the detection this track
+ originates from. If not None, this feature is added to the `features` cache.
+
+ Attributes:
+ hits (int): Total number of measurement updates.
+        age (int): Total number of frames since first occurrence.
+ time_since_update (int): Total number of frames since last measurement
+ update.
+ state (TrackState): The current track state.
+ features (List[ndarray]): A cache of features. On each measurement update,
+ the associated feature vector is added to this list.
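+
+    Example (illustrative sketch; `kf` is an assumed KalmanFilter and `det`
+    an assumed Detection from this package):
+        mean, cov = kf.initiate(det.to_xyah())
+        track = Track(mean, cov, track_id=1, n_init=3, max_age=70,
+                      cls_id=det.cls_id, score=det.score, feature=det.feature)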
+ """
+
+ def __init__(self,
+ mean,
+ covariance,
+ track_id,
+ n_init,
+ max_age,
+ cls_id,
+ score,
+ feature=None):
+ self.mean = mean
+ self.covariance = covariance
+ self.track_id = track_id
+ self.hits = 1
+ self.age = 1
+ self.time_since_update = 0
+ self.cls_id = cls_id
+ self.score = score
+ self.start_time = datetime.datetime.now()
+
+ self.state = TrackState.Tentative
+ self.features = []
+ self.feat = feature
+ if feature is not None:
+ self.features.append(feature)
+
+ self._n_init = n_init
+ self._max_age = max_age
+
+ def to_tlwh(self):
+ """Get position in format `(top left x, top left y, width, height)`."""
+ ret = self.mean[:4].copy()
+ ret[2] *= ret[3]
+ ret[:2] -= ret[2:] / 2
+ return ret
+
+ def to_tlbr(self):
+ """Get position in bounding box format `(min x, miny, max x, max y)`."""
+ ret = self.to_tlwh()
+ ret[2:] = ret[:2] + ret[2:]
+ return ret
+
+ def predict(self, kalman_filter):
+ """
+ Propagate the state distribution to the current time step using a Kalman
+ filter prediction step.
+ """
+ self.mean, self.covariance = kalman_filter.predict(self.mean,
+ self.covariance)
+ self.age += 1
+ self.time_since_update += 1
+
+ def update(self, kalman_filter, detection):
+ """
+ Perform Kalman filter measurement update step and update the associated
+ detection feature cache.
+ """
+ self.mean, self.covariance = kalman_filter.update(self.mean,
+ self.covariance,
+ detection.to_xyah())
+ self.features.append(detection.feature)
+ self.feat = detection.feature
+ self.cls_id = detection.cls_id
+ self.score = detection.score
+
+ self.hits += 1
+ self.time_since_update = 0
+ if self.state == TrackState.Tentative and self.hits >= self._n_init:
+ self.state = TrackState.Confirmed
+
+ def mark_missed(self):
+ """Mark this track as missed (no association at the current time step).
+ """
+ if self.state == TrackState.Tentative:
+ self.state = TrackState.Deleted
+ elif self.time_since_update > self._max_age:
+ self.state = TrackState.Deleted
+
+ def is_tentative(self):
+ """Returns True if this track is tentative (unconfirmed)."""
+ return self.state == TrackState.Tentative
+
+ def is_confirmed(self):
+ """Returns True if this track is confirmed."""
+ return self.state == TrackState.Confirmed
+
+ def is_deleted(self):
+ """Returns True if this track is dead and should be deleted."""
+ return self.state == TrackState.Deleted
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/deepsort_tracker.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/deepsort_tracker.py
new file mode 100644
index 000000000..ef38a67f9
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/deepsort_tracker.py
@@ -0,0 +1,188 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/nwojke/deep_sort/blob/master/deep_sort/tracker.py
+"""
+
+import numpy as np
+
+from ..motion import KalmanFilter
+from ..matching.deepsort_matching import NearestNeighborDistanceMetric
+from ..matching.deepsort_matching import iou_cost, min_cost_matching, matching_cascade, gate_cost_matrix
+from .base_sde_tracker import Track
+from ..utils import Detection
+
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['DeepSORTTracker']
+
+
+@register
+@serializable
+class DeepSORTTracker(object):
+ """
+ DeepSORT tracker
+
+ Args:
+ input_size (list): input feature map size to reid model, [h, w] format,
+ [64, 192] as default.
+ min_box_area (int): min box area to filter out low quality boxes
+        vertical_ratio (float): w/h aspect ratio threshold used to filter out
+            bad boxes; 1.6 is the usual setting for pedestrian tracking. A
+            value <= 0 disables the filter.
+ budget (int): If not None, fix samples per class to at most this number.
+ Removes the oldest samples when the budget is reached.
+        max_age (int): maximum number of consecutive misses before a track
+            is deleted
+        n_init (int): Number of frames that a track remains in initialization
+ phase. Number of consecutive detections before the track is confirmed.
+ The track state is set to `Deleted` if a miss occurs within the first
+ `n_init` frames.
+ metric_type (str): either "euclidean" or "cosine", the distance metric
+ used for measurement to track association.
+ matching_threshold (float): samples with larger distance are
+ considered an invalid match.
+ max_iou_distance (float): max iou distance threshold
+ motion (object): KalmanFilter instance
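+
+    Example (illustrative sketch; `pred_dets` [N, 6] and `pred_embs` [N, 128]
+    are assumed numpy arrays for a single frame):
+        tracker = DeepSORTTracker()
+        tracker.predict()  # propagate all tracks one step
+        online_tracks = tracker.update(pred_dets, pred_embs)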
+ """
+
+ def __init__(self,
+ input_size=[64, 192],
+ min_box_area=0,
+ vertical_ratio=-1,
+ budget=100,
+ max_age=70,
+ n_init=3,
+ metric_type='cosine',
+ matching_threshold=0.2,
+ max_iou_distance=0.9,
+ motion='KalmanFilter'):
+ self.input_size = input_size
+ self.min_box_area = min_box_area
+ self.vertical_ratio = vertical_ratio
+ self.max_age = max_age
+ self.n_init = n_init
+ self.metric = NearestNeighborDistanceMetric(metric_type,
+ matching_threshold, budget)
+ self.max_iou_distance = max_iou_distance
+ if motion == 'KalmanFilter':
+ self.motion = KalmanFilter()
+
+ self.tracks = []
+ self._next_id = 1
+
+ def predict(self):
+ """
+ Propagate track state distributions one time step forward.
+ This function should be called once every time step, before `update`.
+ """
+ for track in self.tracks:
+ track.predict(self.motion)
+
+ def update(self, pred_dets, pred_embs):
+ """
+ Perform measurement update and track management.
+ Args:
+ pred_dets (np.array): Detection results of the image, the shape is
+ [N, 6], means 'x0, y0, x1, y1, score, cls_id'.
+ pred_embs (np.array): Embedding results of the image, the shape is
+ [N, 128], usually pred_embs.shape[1] is a multiple of 128.
+ """
+ pred_tlwhs = pred_dets[:, :4]
+ pred_scores = pred_dets[:, 4:5]
+ pred_cls_ids = pred_dets[:, 5:]
+
+ detections = [
+ Detection(tlwh, score, feat, cls_id)
+ for tlwh, score, feat, cls_id in zip(pred_tlwhs, pred_scores,
+ pred_embs, pred_cls_ids)
+ ]
+
+ # Run matching cascade.
+ matches, unmatched_tracks, unmatched_detections = \
+ self._match(detections)
+
+ # Update track set.
+ for track_idx, detection_idx in matches:
+ self.tracks[track_idx].update(self.motion,
+ detections[detection_idx])
+ for track_idx in unmatched_tracks:
+ self.tracks[track_idx].mark_missed()
+ for detection_idx in unmatched_detections:
+ self._initiate_track(detections[detection_idx])
+ self.tracks = [t for t in self.tracks if not t.is_deleted()]
+
+ # Update distance metric.
+ active_targets = [t.track_id for t in self.tracks if t.is_confirmed()]
+ features, targets = [], []
+ for track in self.tracks:
+ if not track.is_confirmed():
+ continue
+ features += track.features
+ targets += [track.track_id for _ in track.features]
+ track.features = []
+ self.metric.partial_fit(
+ np.asarray(features), np.asarray(targets), active_targets)
+ output_stracks = self.tracks
+ return output_stracks
+
+ def _match(self, detections):
+ def gated_metric(tracks, dets, track_indices, detection_indices):
+ features = np.array([dets[i].feature for i in detection_indices])
+ targets = np.array([tracks[i].track_id for i in track_indices])
+ cost_matrix = self.metric.distance(features, targets)
+ cost_matrix = gate_cost_matrix(self.motion, cost_matrix, tracks,
+ dets, track_indices,
+ detection_indices)
+ return cost_matrix
+
+ # Split track set into confirmed and unconfirmed tracks.
+ confirmed_tracks = [
+ i for i, t in enumerate(self.tracks) if t.is_confirmed()
+ ]
+ unconfirmed_tracks = [
+ i for i, t in enumerate(self.tracks) if not t.is_confirmed()
+ ]
+
+ # Associate confirmed tracks using appearance features.
+ matches_a, unmatched_tracks_a, unmatched_detections = \
+ matching_cascade(
+ gated_metric, self.metric.matching_threshold, self.max_age,
+ self.tracks, detections, confirmed_tracks)
+
+ # Associate remaining tracks together with unconfirmed tracks using IOU.
+ iou_track_candidates = unconfirmed_tracks + [
+ k for k in unmatched_tracks_a
+ if self.tracks[k].time_since_update == 1
+ ]
+ unmatched_tracks_a = [
+ k for k in unmatched_tracks_a
+ if self.tracks[k].time_since_update != 1
+ ]
+ matches_b, unmatched_tracks_b, unmatched_detections = \
+ min_cost_matching(
+ iou_cost, self.max_iou_distance, self.tracks,
+ detections, iou_track_candidates, unmatched_detections)
+
+ matches = matches_a + matches_b
+ unmatched_tracks = list(set(unmatched_tracks_a + unmatched_tracks_b))
+ return matches, unmatched_tracks, unmatched_detections
+
+ def _initiate_track(self, detection):
+ mean, covariance = self.motion.initiate(detection.to_xyah())
+ self.tracks.append(
+ Track(mean, covariance, self._next_id, self.n_init, self.max_age,
+ detection.cls_id, detection.score, detection.feature))
+ self._next_id += 1
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/jde_tracker.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/jde_tracker.py
new file mode 100644
index 000000000..af5411a26
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/tracker/jde_tracker.py
@@ -0,0 +1,273 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+This code is based on https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/tracker/multitracker.py
+"""
+
+import numpy as np
+from collections import defaultdict
+
+from ..matching import jde_matching as matching
+from ..motion import KalmanFilter
+from .base_jde_tracker import TrackState, STrack
+from .base_jde_tracker import joint_stracks, sub_stracks, remove_duplicate_stracks
+
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['JDETracker']
+
+
+@register
+@serializable
+class JDETracker(object):
+ __shared__ = ['num_classes']
+ """
+ JDE tracker, support single class and multi classes
+
+ Args:
+ num_classes (int): the number of classes
+ det_thresh (float): threshold of detection score
+        track_buffer (int): number of frames a lost track is buffered before
+            removal (scaled by frame rate / 30 to obtain max_time_lost)
+ min_box_area (int): min box area to filter out low quality boxes
+        vertical_ratio (float): w/h aspect ratio threshold used to filter out
+            bad boxes. A value < 0 disables the filter; 1.6 is usually used
+            for pedestrian tracking.
+ tracked_thresh (float): linear assignment threshold of tracked
+ stracks and detections
+ r_tracked_thresh (float): linear assignment threshold of
+ tracked stracks and unmatched detections
+ unconfirmed_thresh (float): linear assignment threshold of
+ unconfirmed stracks and unmatched detections
+ motion (str): motion model, KalmanFilter as default
+ conf_thres (float): confidence threshold for tracking
+ metric_type (str): either "euclidean" or "cosine", the distance metric
+ used for measurement to track association.
+ """
+
+ def __init__(self,
+ num_classes=1,
+ det_thresh=0.3,
+ track_buffer=30,
+ min_box_area=200,
+ vertical_ratio=1.6,
+ tracked_thresh=0.7,
+ r_tracked_thresh=0.5,
+ unconfirmed_thresh=0.7,
+ motion='KalmanFilter',
+ conf_thres=0,
+ metric_type='euclidean'):
+ self.num_classes = num_classes
+ self.det_thresh = det_thresh
+ self.track_buffer = track_buffer
+ self.min_box_area = min_box_area
+ self.vertical_ratio = vertical_ratio
+
+ self.tracked_thresh = tracked_thresh
+ self.r_tracked_thresh = r_tracked_thresh
+ self.unconfirmed_thresh = unconfirmed_thresh
+ if motion == 'KalmanFilter':
+ self.motion = KalmanFilter()
+ self.conf_thres = conf_thres
+ self.metric_type = metric_type
+
+ self.frame_id = 0
+ self.tracked_tracks_dict = defaultdict(list) # dict(list[STrack])
+ self.lost_tracks_dict = defaultdict(list) # dict(list[STrack])
+ self.removed_tracks_dict = defaultdict(list) # dict(list[STrack])
+
+ self.max_time_lost = 0
+        # max_time_lost is set by the caller as int(frame_rate / 30.0 * track_buffer)
+
+ def update(self, pred_dets, pred_embs):
+ """
+        Processes one image frame and its detections: associates the
+        detections with the corresponding tracklets and also handles
+        lost, removed, re-found and active tracklets.
+
+ Args:
+ pred_dets (np.array): Detection results of the image, the shape is
+ [N, 6], means 'x0, y0, x1, y1, score, cls_id'.
+ pred_embs (np.array): Embedding results of the image, the shape is
+ [N, 128] or [N, 512].
+
+        Returns:
+            output_tracks_dict (dict(list)): A dict mapping each class id to
+                the list of online tracklets for the received frame.
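+
+        Example (illustrative sketch; `dets` [N, 6] and `embs` [N, 128] are
+        assumed numpy arrays for one frame):
+            tracker = JDETracker(num_classes=1)
+            online_targets_dict = tracker.update(dets, embs)
+            tlwhs = [t.tlwh for t in online_targets_dict[0]]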
+ """
+ self.frame_id += 1
+ if self.frame_id == 1:
+ STrack.init_count(self.num_classes)
+ activated_tracks_dict = defaultdict(list)
+ refined_tracks_dict = defaultdict(list)
+ lost_tracks_dict = defaultdict(list)
+ removed_tracks_dict = defaultdict(list)
+ output_tracks_dict = defaultdict(list)
+
+ pred_dets_dict = defaultdict(list)
+ pred_embs_dict = defaultdict(list)
+
+ # unify single and multi classes detection and embedding results
+ for cls_id in range(self.num_classes):
+ cls_idx = (pred_dets[:, 5:] == cls_id).squeeze(-1)
+ pred_dets_dict[cls_id] = pred_dets[cls_idx]
+ pred_embs_dict[cls_id] = pred_embs[cls_idx]
+
+ for cls_id in range(self.num_classes):
+ """ Step 1: Get detections by class"""
+ pred_dets_cls = pred_dets_dict[cls_id]
+ pred_embs_cls = pred_embs_dict[cls_id]
+ remain_inds = (pred_dets_cls[:, 4:5] > self.conf_thres).squeeze(-1)
+ if remain_inds.sum() > 0:
+ pred_dets_cls = pred_dets_cls[remain_inds]
+ pred_embs_cls = pred_embs_cls[remain_inds]
+ detections = [
+ STrack(
+ STrack.tlbr_to_tlwh(tlbrs[:4]), tlbrs[4], f,
+ self.num_classes, cls_id, 30)
+ for (tlbrs, f) in zip(pred_dets_cls, pred_embs_cls)
+ ]
+ else:
+ detections = []
+ ''' Add newly detected tracklets to tracked_stracks'''
+ unconfirmed_dict = defaultdict(list)
+ tracked_tracks_dict = defaultdict(list)
+ for track in self.tracked_tracks_dict[cls_id]:
+ if not track.is_activated:
+                    # previous tracks that are not active in the current
+                    # frame are added to the unconfirmed list
+                    unconfirmed_dict[cls_id].append(track)
+                else:
+                    # active tracks are added to the local 'tracked_tracks_dict'
+                    tracked_tracks_dict[cls_id].append(track)
+ """ Step 2: First association, with embedding"""
+ # building tracking pool for the current frame
+ track_pool_dict = defaultdict(list)
+ track_pool_dict[cls_id] = joint_stracks(
+ tracked_tracks_dict[cls_id], self.lost_tracks_dict[cls_id])
+
+ # Predict the current location with KalmanFilter
+ STrack.multi_predict(track_pool_dict[cls_id], self.motion)
+
+ dists = matching.embedding_distance(
+ track_pool_dict[cls_id], detections, metric=self.metric_type)
+ dists = matching.fuse_motion(self.motion, dists,
+ track_pool_dict[cls_id], detections)
+ matches, u_track, u_detection = matching.linear_assignment(
+ dists, thresh=self.tracked_thresh)
+
+ for i_tracked, idet in matches:
+                # i_tracked is the index of the track, idet the index of the detection
+ track = track_pool_dict[cls_id][i_tracked]
+ det = detections[idet]
+ if track.state == TrackState.Tracked:
+ # If the track is active, add the detection to the track
+ track.update(detections[idet], self.frame_id)
+ activated_tracks_dict[cls_id].append(track)
+ else:
+                    # the track was inactive but matched a detection, so
+                    # re-activate it and add it to the refined tracks list
+ track.re_activate(det, self.frame_id, new_id=False)
+ refined_tracks_dict[cls_id].append(track)
+
+            # The steps below only apply to tracks and detections left unmatched above.
+ """ Step 3: Second association, with IOU"""
+ detections = [detections[i] for i in u_detection]
+ r_tracked_stracks = []
+ for i in u_track:
+ if track_pool_dict[cls_id][i].state == TrackState.Tracked:
+ r_tracked_stracks.append(track_pool_dict[cls_id][i])
+
+ dists = matching.iou_distance(r_tracked_stracks, detections)
+ matches, u_track, u_detection = matching.linear_assignment(
+ dists, thresh=self.r_tracked_thresh)
+
+ for i_tracked, idet in matches:
+ track = r_tracked_stracks[i_tracked]
+ det = detections[idet]
+ if track.state == TrackState.Tracked:
+ track.update(det, self.frame_id)
+ activated_tracks_dict[cls_id].append(track)
+ else:
+ track.re_activate(det, self.frame_id, new_id=False)
+ refined_tracks_dict[cls_id].append(track)
+
+ for it in u_track:
+ track = r_tracked_stracks[it]
+ if not track.state == TrackState.Lost:
+ track.mark_lost()
+ lost_tracks_dict[cls_id].append(track)
+ '''Deal with unconfirmed tracks, usually tracks with only one beginning frame'''
+ detections = [detections[i] for i in u_detection]
+ dists = matching.iou_distance(unconfirmed_dict[cls_id], detections)
+ matches, u_unconfirmed, u_detection = matching.linear_assignment(
+ dists, thresh=self.unconfirmed_thresh)
+ for i_tracked, idet in matches:
+ unconfirmed_dict[cls_id][i_tracked].update(detections[idet],
+ self.frame_id)
+ activated_tracks_dict[cls_id].append(unconfirmed_dict[cls_id][
+ i_tracked])
+ for it in u_unconfirmed:
+ track = unconfirmed_dict[cls_id][it]
+ track.mark_removed()
+ removed_tracks_dict[cls_id].append(track)
+ """ Step 4: Init new stracks"""
+ for inew in u_detection:
+ track = detections[inew]
+ if track.score < self.det_thresh:
+ continue
+ track.activate(self.motion, self.frame_id)
+ activated_tracks_dict[cls_id].append(track)
+ """ Step 5: Update state"""
+ for track in self.lost_tracks_dict[cls_id]:
+ if self.frame_id - track.end_frame > self.max_time_lost:
+ track.mark_removed()
+ removed_tracks_dict[cls_id].append(track)
+
+ self.tracked_tracks_dict[cls_id] = [
+ t for t in self.tracked_tracks_dict[cls_id]
+ if t.state == TrackState.Tracked
+ ]
+ self.tracked_tracks_dict[cls_id] = joint_stracks(
+ self.tracked_tracks_dict[cls_id], activated_tracks_dict[cls_id])
+ self.tracked_tracks_dict[cls_id] = joint_stracks(
+ self.tracked_tracks_dict[cls_id], refined_tracks_dict[cls_id])
+ self.lost_tracks_dict[cls_id] = sub_stracks(
+ self.lost_tracks_dict[cls_id], self.tracked_tracks_dict[cls_id])
+ self.lost_tracks_dict[cls_id].extend(lost_tracks_dict[cls_id])
+ self.lost_tracks_dict[cls_id] = sub_stracks(
+ self.lost_tracks_dict[cls_id], self.removed_tracks_dict[cls_id])
+ self.removed_tracks_dict[cls_id].extend(removed_tracks_dict[cls_id])
+ self.tracked_tracks_dict[cls_id], self.lost_tracks_dict[
+ cls_id] = remove_duplicate_stracks(
+ self.tracked_tracks_dict[cls_id],
+ self.lost_tracks_dict[cls_id])
+
+            # collect the activated tracks of this class for output
+ output_tracks_dict[cls_id] = [
+ track for track in self.tracked_tracks_dict[cls_id]
+ if track.is_activated
+ ]
+
+ logger.debug('===========Frame {}=========='.format(self.frame_id))
+ logger.debug('Activated: {}'.format(
+ [track.track_id for track in activated_tracks_dict[cls_id]]))
+ logger.debug('Refind: {}'.format(
+ [track.track_id for track in refined_tracks_dict[cls_id]]))
+ logger.debug('Lost: {}'.format(
+ [track.track_id for track in lost_tracks_dict[cls_id]]))
+ logger.debug('Removed: {}'.format(
+ [track.track_id for track in removed_tracks_dict[cls_id]]))
+
+ return output_tracks_dict
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/utils.py
new file mode 100644
index 000000000..b3657d257
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/utils.py
@@ -0,0 +1,262 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import cv2
+import time
+import numpy as np
+from .visualization import plot_tracking_dict, plot_tracking
+
+__all__ = [
+ 'MOTTimer',
+ 'Detection',
+ 'write_mot_results',
+ 'save_vis_results',
+ 'load_det_results',
+ 'preprocess_reid',
+ 'get_crops',
+ 'clip_box',
+ 'scale_coords',
+]
+
+
+class MOTTimer(object):
+ """
+    This class is used to compute and report the current FPS while evaluating.
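+
+    Example (illustrative):
+        timer = MOTTimer()
+        timer.tic()
+        # ... process one frame ...
+        fps = 1. / timer.toc(average=True)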
+ """
+
+ def __init__(self):
+ self.total_time = 0.
+ self.calls = 0
+ self.start_time = 0.
+ self.diff = 0.
+ self.average_time = 0.
+ self.duration = 0.
+
+ def tic(self):
+        # use time.time instead of time.clock because time.clock does not
+        # normalize for multithreading (and was removed in Python 3.8)
+ self.start_time = time.time()
+
+ def toc(self, average=True):
+ self.diff = time.time() - self.start_time
+ self.total_time += self.diff
+ self.calls += 1
+ self.average_time = self.total_time / self.calls
+ if average:
+ self.duration = self.average_time
+ else:
+ self.duration = self.diff
+ return self.duration
+
+ def clear(self):
+ self.total_time = 0.
+ self.calls = 0
+ self.start_time = 0.
+ self.diff = 0.
+ self.average_time = 0.
+ self.duration = 0.
+
+
+class Detection(object):
+ """
+ This class represents a bounding box detection in a single image.
+
+ Args:
+ tlwh (Tensor): Bounding box in format `(top left x, top left y,
+ width, height)`.
+ score (Tensor): Bounding box confidence score.
+ feature (Tensor): A feature vector that describes the object
+ contained in this image.
+ cls_id (Tensor): Bounding box category id.
+ """
+
+ def __init__(self, tlwh, score, feature, cls_id):
+ self.tlwh = np.asarray(tlwh, dtype=np.float32)
+ self.score = float(score)
+ self.feature = np.asarray(feature, dtype=np.float32)
+ self.cls_id = int(cls_id)
+
+ def to_tlbr(self):
+ """
+ Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,
+ `(top left, bottom right)`.
+ """
+ ret = self.tlwh.copy()
+ ret[2:] += ret[:2]
+ return ret
+
+ def to_xyah(self):
+ """
+ Convert bounding box to format `(center x, center y, aspect ratio,
+ height)`, where the aspect ratio is `width / height`.
+ """
+ ret = self.tlwh.copy()
+ ret[:2] += ret[2:] / 2
+ ret[2] /= ret[3]
+ return ret
+
+
+def write_mot_results(filename, results, data_type='mot', num_classes=1):
+ # support single and multi classes
+ if data_type in ['mot', 'mcmot']:
+ save_format = '{frame},{id},{x1},{y1},{w},{h},{score},{cls_id},-1,-1\n'
+ elif data_type == 'kitti':
+ save_format = '{frame} {id} car 0 0 -10 {x1} {y1} {x2} {y2} -10 -10 -10 -1000 -1000 -1000 -10\n'
+ else:
+ raise ValueError(data_type)
+
+ f = open(filename, 'w')
+ for cls_id in range(num_classes):
+ for frame_id, tlwhs, tscores, track_ids in results[cls_id]:
+ if data_type == 'kitti':
+ frame_id -= 1
+ for tlwh, score, track_id in zip(tlwhs, tscores, track_ids):
+ if track_id < 0: continue
+ if data_type == 'mot':
+ cls_id = -1
+
+ x1, y1, w, h = tlwh
+ x2, y2 = x1 + w, y1 + h
+ line = save_format.format(
+ frame=frame_id,
+ id=track_id,
+ x1=x1,
+ y1=y1,
+ x2=x2,
+ y2=y2,
+ w=w,
+ h=h,
+ score=score,
+ cls_id=cls_id)
+ f.write(line)
+    f.close()
+    print('MOT results saved in {}'.format(filename))
+
+
+def save_vis_results(data,
+ frame_id,
+ online_ids,
+ online_tlwhs,
+ online_scores,
+ average_time,
+ show_image,
+ save_dir,
+ num_classes=1):
+ if show_image or save_dir is not None:
+ assert 'ori_image' in data
+ img0 = data['ori_image'].numpy()[0]
+ if online_ids is None:
+ online_im = img0
+ else:
+ if isinstance(online_tlwhs, dict):
+ online_im = plot_tracking_dict(
+ img0,
+ num_classes,
+ online_tlwhs,
+ online_ids,
+ online_scores,
+ frame_id=frame_id,
+ fps=1. / average_time)
+ else:
+ online_im = plot_tracking(
+ img0,
+ online_tlwhs,
+ online_ids,
+ online_scores,
+ frame_id=frame_id,
+ fps=1. / average_time)
+ if show_image:
+ cv2.imshow('online_im', online_im)
+ if save_dir is not None:
+ cv2.imwrite(
+ os.path.join(save_dir, '{:05d}.jpg'.format(frame_id)), online_im)
+
+
+def load_det_results(det_file, num_frames):
+ assert os.path.exists(det_file) and os.path.isfile(det_file), \
+        '{} does not exist or is not a file.'.format(det_file)
+ labels = np.loadtxt(det_file, dtype='float32', delimiter=',')
+ assert labels.shape[1] == 7, \
+ "Each line of {} should have 7 items: '[frame_id],[x0],[y0],[w],[h],[score],[class_id]'.".format(det_file)
+ results_list = []
+ for frame_i in range(num_frames):
+ results = {'bbox': [], 'score': [], 'cls_id': []}
+        labels_with_frame = labels[labels[:, 0] == frame_i + 1]
+        # each line of labels_with_frame:
+        # [frame_id],[x0],[y0],[w],[h],[score],[class_id]
+        for l in labels_with_frame:
+ results['bbox'].append(l[1:5])
+ results['score'].append(l[5:6])
+ results['cls_id'].append(l[6:7])
+ results_list.append(results)
+ return results_list
+
+
+def scale_coords(coords, input_shape, im_shape, scale_factor):
+    # Note: ratio is a single value because scale_factor[0] == scale_factor[1].
+    #
+    # This function is only used for JDE YOLOv3 or other detectors with
+    # LetterBoxResize and JDEBBoxPostProcess, whose output coords have not
+    # been scaled back to the original image.
+
+ ratio = scale_factor[0]
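+    # LetterBoxResize pads the image symmetrically up to input_shape, so undo
+    # the padding first, then divide by the resize ratio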
+ pad_w = (input_shape[1] - int(im_shape[1])) / 2
+ pad_h = (input_shape[0] - int(im_shape[0])) / 2
+ coords[:, 0::2] -= pad_w
+ coords[:, 1::2] -= pad_h
+ coords[:, 0:4] /= ratio
+ coords[:, :4] = np.clip(coords[:, :4], a_min=0, a_max=coords[:, :4].max())
+ return coords.round()
+
+
+def clip_box(xyxy, ori_image_shape):
+ H, W = ori_image_shape
+ xyxy[:, 0::2] = np.clip(xyxy[:, 0::2], a_min=0, a_max=W)
+ xyxy[:, 1::2] = np.clip(xyxy[:, 1::2], a_min=0, a_max=H)
+ w = xyxy[:, 2:3] - xyxy[:, 0:1]
+ h = xyxy[:, 3:4] - xyxy[:, 1:2]
+ mask = np.logical_and(h > 0, w > 0)
+ keep_idx = np.nonzero(mask)
+ return xyxy[keep_idx[0]], keep_idx
+
+
+def get_crops(xyxy, ori_img, w, h):
+ crops = []
+ xyxy = xyxy.astype(np.int64)
+ ori_img = ori_img.numpy()
+ ori_img = np.squeeze(ori_img, axis=0).transpose(1, 0, 2) # [h,w,3]->[w,h,3]
+ for i, bbox in enumerate(xyxy):
+ crop = ori_img[bbox[0]:bbox[2], bbox[1]:bbox[3], :]
+ crops.append(crop)
+ crops = preprocess_reid(crops, w, h)
+ return crops
+
+
+def preprocess_reid(imgs,
+ w=64,
+ h=192,
+ mean=[0.485, 0.456, 0.406],
+ std=[0.229, 0.224, 0.225]):
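+    # resize to (w, h), flip BGR->RGB, scale to [0, 1], normalize with
+    # ImageNet mean/std, and stack into an [N, 3, h, w] float batch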
+ im_batch = []
+ for img in imgs:
+ img = cv2.resize(img, (w, h))
+ img = img[:, :, ::-1].astype('float32').transpose((2, 0, 1)) / 255
+ img_mean = np.array(mean).reshape((3, 1, 1))
+ img_std = np.array(std).reshape((3, 1, 1))
+ img -= img_mean
+ img /= img_std
+ img = np.expand_dims(img, axis=0)
+ im_batch.append(img)
+ im_batch = np.concatenate(im_batch, 0)
+ return im_batch
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/visualization.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/visualization.py
new file mode 100644
index 000000000..6d13a2877
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/mot/visualization.py
@@ -0,0 +1,146 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import numpy as np
+
+
+def get_color(idx):
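+    # deterministically hash the track id into a BGR color so each track
+    # keeps a stable color across frames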
+ idx = idx * 3
+ color = ((37 * idx) % 255, (17 * idx) % 255, (29 * idx) % 255)
+ return color
+
+
+def plot_tracking(image,
+ tlwhs,
+ obj_ids,
+ scores=None,
+ frame_id=0,
+ fps=0.,
+ ids2names=[]):
+ im = np.ascontiguousarray(np.copy(image))
+ im_h, im_w = im.shape[:2]
+
+ top_view = np.zeros([im_w, im_w, 3], dtype=np.uint8) + 255
+
+ text_scale = max(1, image.shape[1] / 1600.)
+ text_thickness = 2
+ line_thickness = max(1, int(image.shape[1] / 500.))
+
+ radius = max(5, int(im_w / 140.))
+ cv2.putText(
+ im,
+ 'frame: %d fps: %.2f num: %d' % (frame_id, fps, len(tlwhs)),
+ (0, int(15 * text_scale)),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 0, 255),
+ thickness=2)
+
+ for i, tlwh in enumerate(tlwhs):
+ x1, y1, w, h = tlwh
+ intbox = tuple(map(int, (x1, y1, x1 + w, y1 + h)))
+ obj_id = int(obj_ids[i])
+ id_text = '{}'.format(int(obj_id))
+ if ids2names != []:
+ assert len(
+                ids2names) == 1, "plot_tracking only supports a single class."
+ id_text = '{}_'.format(ids2names[0]) + id_text
+ _line_thickness = 1 if obj_id <= 0 else line_thickness
+ color = get_color(abs(obj_id))
+ cv2.rectangle(
+ im, intbox[0:2], intbox[2:4], color=color, thickness=line_thickness)
+ cv2.putText(
+ im,
+ id_text, (intbox[0], intbox[1] - 10),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 0, 255),
+ thickness=text_thickness)
+
+ if scores is not None:
+ text = '{:.2f}'.format(float(scores[i]))
+ cv2.putText(
+ im,
+ text, (intbox[0], intbox[1] + 10),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 255, 255),
+ thickness=text_thickness)
+ return im
+
+
+def plot_tracking_dict(image,
+ num_classes,
+ tlwhs_dict,
+ obj_ids_dict,
+ scores_dict,
+ frame_id=0,
+ fps=0.,
+ ids2names=[]):
+ im = np.ascontiguousarray(np.copy(image))
+ im_h, im_w = im.shape[:2]
+
+ top_view = np.zeros([im_w, im_w, 3], dtype=np.uint8) + 255
+
+ text_scale = max(1, image.shape[1] / 1600.)
+ text_thickness = 2
+ line_thickness = max(1, int(image.shape[1] / 500.))
+
+ radius = max(5, int(im_w / 140.))
+
+ for cls_id in range(num_classes):
+ tlwhs = tlwhs_dict[cls_id]
+ obj_ids = obj_ids_dict[cls_id]
+ scores = scores_dict[cls_id]
+ cv2.putText(
+ im,
+ 'frame: %d fps: %.2f num: %d' % (frame_id, fps, len(tlwhs)),
+ (0, int(15 * text_scale)),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 0, 255),
+ thickness=2)
+
+ for i, tlwh in enumerate(tlwhs):
+ x1, y1, w, h = tlwh
+ intbox = tuple(map(int, (x1, y1, x1 + w, y1 + h)))
+ obj_id = int(obj_ids[i])
+
+ id_text = '{}'.format(int(obj_id))
+ if ids2names != []:
+ id_text = '{}_{}'.format(ids2names[cls_id], id_text)
+ else:
+ id_text = 'class{}_{}'.format(cls_id, id_text)
+
+ _line_thickness = 1 if obj_id <= 0 else line_thickness
+ color = get_color(abs(obj_id))
+ cv2.rectangle(
+ im,
+ intbox[0:2],
+ intbox[2:4],
+ color=color,
+ thickness=line_thickness)
+ cv2.putText(
+ im,
+ id_text, (intbox[0], intbox[1] - 10),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 0, 255),
+ thickness=text_thickness)
+
+ if scores is not None:
+ text = '{:.2f}'.format(float(scores[i]))
+ cv2.putText(
+ im,
+ text, (intbox[0], intbox[1] + 10),
+ cv2.FONT_HERSHEY_PLAIN,
+ text_scale, (0, 255, 255),
+ thickness=text_thickness)
+ return im
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__init__.py
new file mode 100644
index 000000000..d66697caf
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__init__.py
@@ -0,0 +1,30 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import fpn
+from . import yolo_fpn
+from . import hrfpn
+from . import ttf_fpn
+from . import centernet_fpn
+from . import blazeface_fpn
+from . import bifpn
+from . import csp_pan
+
+from .fpn import *
+from .yolo_fpn import *
+from .hrfpn import *
+from .ttf_fpn import *
+from .centernet_fpn import *
+from .blazeface_fpn import *
+from .bifpn import *
+from .csp_pan import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..922d2c1d6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/bifpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/bifpn.cpython-37.pyc
new file mode 100644
index 000000000..57af360d2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/bifpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/blazeface_fpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/blazeface_fpn.cpython-37.pyc
new file mode 100644
index 000000000..256668459
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/blazeface_fpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/centernet_fpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/centernet_fpn.cpython-37.pyc
new file mode 100644
index 000000000..a1c91e7a2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/centernet_fpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/csp_pan.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/csp_pan.cpython-37.pyc
new file mode 100644
index 000000000..0b14b978b
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/csp_pan.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/fpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/fpn.cpython-37.pyc
new file mode 100644
index 000000000..d8b5164d3
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/fpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/hrfpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/hrfpn.cpython-37.pyc
new file mode 100644
index 000000000..a9738dac7
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/hrfpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/ttf_fpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/ttf_fpn.cpython-37.pyc
new file mode 100644
index 000000000..f36ec7708
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/ttf_fpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/yolo_fpn.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/yolo_fpn.cpython-37.pyc
new file mode 100644
index 000000000..59830319f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/__pycache__/yolo_fpn.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/bifpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/bifpn.py
new file mode 100644
index 000000000..c60760893
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/bifpn.py
@@ -0,0 +1,302 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Constant
+
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.layers import ConvNormLayer
+from ..shape_spec import ShapeSpec
+
+__all__ = ['BiFPN']
+
+
+class SeparableConvLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels=None,
+ kernel_size=3,
+ norm_type='bn',
+ norm_groups=32,
+ act='swish'):
+ super(SeparableConvLayer, self).__init__()
+ assert norm_type in ['bn', 'sync_bn', 'gn', None]
+ assert act in ['swish', 'relu', None]
+
+        self.in_channels = in_channels
+        # fall back to in_channels so self.out_channels is always defined
+        self.out_channels = out_channels if out_channels is not None else in_channels
+ self.norm_type = norm_type
+ self.norm_groups = norm_groups
+ self.depthwise_conv = nn.Conv2D(
+ in_channels,
+ in_channels,
+ kernel_size,
+ padding=kernel_size // 2,
+ groups=in_channels,
+ bias_attr=False)
+ self.pointwise_conv = nn.Conv2D(in_channels, self.out_channels, 1)
+
+ # norm type
+ if self.norm_type == 'bn':
+ self.norm = nn.BatchNorm2D(self.out_channels)
+ elif self.norm_type == 'sync_bn':
+ self.norm = nn.SyncBatchNorm(self.out_channels)
+ elif self.norm_type == 'gn':
+ self.norm = nn.GroupNorm(
+ num_groups=self.norm_groups, num_channels=self.out_channels)
+
+        # activation (None is allowed by the assert above, so default the
+        # attribute to avoid an AttributeError in forward)
+        self.act = None
+        if act == 'swish':
+            self.act = nn.Swish()
+        elif act == 'relu':
+            self.act = nn.ReLU()
+
+ def forward(self, x):
+ if self.act is not None:
+ x = self.act(x)
+ out = self.depthwise_conv(x)
+ out = self.pointwise_conv(out)
+ if self.norm_type is not None:
+ out = self.norm(out)
+ return out
+
+
+class BiFPNCell(nn.Layer):
+ def __init__(self,
+ channels=256,
+ num_levels=5,
+ eps=1e-5,
+ use_weighted_fusion=True,
+ kernel_size=3,
+ norm_type='bn',
+ norm_groups=32,
+ act='swish'):
+ super(BiFPNCell, self).__init__()
+ self.channels = channels
+ self.num_levels = num_levels
+ self.eps = eps
+ self.use_weighted_fusion = use_weighted_fusion
+
+ # up
+ self.conv_up = nn.LayerList([
+ SeparableConvLayer(
+ self.channels,
+ kernel_size=kernel_size,
+ norm_type=norm_type,
+ norm_groups=norm_groups,
+ act=act) for _ in range(self.num_levels - 1)
+ ])
+ # down
+ self.conv_down = nn.LayerList([
+ SeparableConvLayer(
+ self.channels,
+ kernel_size=kernel_size,
+ norm_type=norm_type,
+ norm_groups=norm_groups,
+ act=act) for _ in range(self.num_levels - 1)
+ ])
+
+ if self.use_weighted_fusion:
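+            # learnable fusion weights, as in EfficientDet's fast normalized
+            # fusion: two inputs per top-down node and up to three per
+            # bottom-up node (lateral + downsampled + route)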
+ self.up_weights = self.create_parameter(
+ shape=[self.num_levels - 1, 2],
+ attr=ParamAttr(initializer=Constant(1.)))
+ self.down_weights = self.create_parameter(
+ shape=[self.num_levels - 1, 3],
+ attr=ParamAttr(initializer=Constant(1.)))
+
+ def _feature_fusion_cell(self,
+ conv_layer,
+ lateral_feat,
+ sampling_feat,
+ route_feat=None,
+ weights=None):
+ if self.use_weighted_fusion:
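+            # fast normalized fusion: w_i = relu(w_i) / (sum_j relu(w_j) + eps),
+            # keeping each weight non-negative with the sum close to 1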
+ weights = F.relu(weights)
+ weights = weights / (weights.sum() + self.eps)
+ if route_feat is not None:
+ out_feat = weights[0] * lateral_feat + \
+ weights[1] * sampling_feat + \
+ weights[2] * route_feat
+ else:
+ out_feat = weights[0] * lateral_feat + \
+ weights[1] * sampling_feat
+ else:
+ if route_feat is not None:
+ out_feat = lateral_feat + sampling_feat + route_feat
+ else:
+ out_feat = lateral_feat + sampling_feat
+
+ out_feat = conv_layer(out_feat)
+ return out_feat
+
+ def forward(self, feats):
+ # feats: [P3 - P7]
+ lateral_feats = []
+
+ # up
+ up_feature = feats[-1]
+ for i, feature in enumerate(feats[::-1]):
+ if i == 0:
+ lateral_feats.append(feature)
+ else:
+ shape = paddle.shape(feature)
+ up_feature = F.interpolate(
+ up_feature, size=[shape[2], shape[3]])
+ lateral_feature = self._feature_fusion_cell(
+ self.conv_up[i - 1],
+ feature,
+ up_feature,
+ weights=self.up_weights[i - 1]
+ if self.use_weighted_fusion else None)
+ lateral_feats.append(lateral_feature)
+ up_feature = lateral_feature
+
+ out_feats = []
+ # down
+ down_feature = lateral_feats[-1]
+ for i, (lateral_feature,
+ route_feature) in enumerate(zip(lateral_feats[::-1], feats)):
+ if i == 0:
+ out_feats.append(lateral_feature)
+ else:
+ down_feature = F.max_pool2d(down_feature, 3, 2, 1)
+ if i == len(feats) - 1:
+ route_feature = None
+ weights = self.down_weights[
+ i - 1][:2] if self.use_weighted_fusion else None
+ else:
+ weights = self.down_weights[
+ i - 1] if self.use_weighted_fusion else None
+ out_feature = self._feature_fusion_cell(
+ self.conv_down[i - 1],
+ lateral_feature,
+ down_feature,
+ route_feature,
+ weights=weights)
+ out_feats.append(out_feature)
+ down_feature = out_feature
+
+ return out_feats
+
+
+@register
+@serializable
+class BiFPN(nn.Layer):
+ """
+ Bidirectional Feature Pyramid Network, see https://arxiv.org/abs/1911.09070
+
+ Args:
+ in_channels (list[int]): input channels of each level which can be
+ derived from the output shape of backbone by from_config.
+ out_channel (int): output channel of each level.
+ num_extra_levels (int): the number of extra stages added to the last level.
+ default: 2
+ fpn_strides (List): The stride of each level.
+ num_stacks (int): the number of stacks for BiFPN, default: 1.
+ use_weighted_fusion (bool): use weighted feature fusion in BiFPN, default: True.
+        norm_type (string|None): the normalization type in the BiFPN module.
+            If None, no norm is applied after conv; otherwise one of 'bn',
+            'sync_bn' or 'gn'. default: bn.
+        norm_groups (int): the number of groups when norm_type is 'gn'.
+ act (string|None): the activation function of BiFPN.
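+
+        A YAML config sketch (illustrative; not taken from this repo's
+        config files):
+            BiFPN:
+                out_channel: 256
+                num_stacks: 1
+                use_weighted_fusion: true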
+ """
+
+ def __init__(self,
+ in_channels=(512, 1024, 2048),
+ out_channel=256,
+ num_extra_levels=2,
+ fpn_strides=[8, 16, 32, 64, 128],
+ num_stacks=1,
+ use_weighted_fusion=True,
+ norm_type='bn',
+ norm_groups=32,
+ act='swish'):
+ super(BiFPN, self).__init__()
+ assert num_stacks > 0, "The number of stacks of BiFPN is at least 1."
+ assert norm_type in ['bn', 'sync_bn', 'gn', None]
+ assert act in ['swish', 'relu', None]
+ assert num_extra_levels >= 0, \
+ "The `num_extra_levels` must be non negative(>=0)."
+
+ self.in_channels = in_channels
+ self.out_channel = out_channel
+ self.num_extra_levels = num_extra_levels
+ self.num_stacks = num_stacks
+ self.use_weighted_fusion = use_weighted_fusion
+ self.norm_type = norm_type
+ self.norm_groups = norm_groups
+ self.act = act
+ self.num_levels = len(self.in_channels) + self.num_extra_levels
+ if len(fpn_strides) != self.num_levels:
+ for i in range(self.num_extra_levels):
+ fpn_strides += [fpn_strides[-1] * 2]
+ self.fpn_strides = fpn_strides
+
+ self.lateral_convs = nn.LayerList()
+ for in_c in in_channels:
+ self.lateral_convs.append(
+ ConvNormLayer(in_c, self.out_channel, 1, 1))
+ if self.num_extra_levels > 0:
+ self.extra_convs = nn.LayerList()
+ for i in range(self.num_extra_levels):
+ if i == 0:
+ self.extra_convs.append(
+ ConvNormLayer(self.in_channels[-1], self.out_channel, 3,
+ 2))
+ else:
+ self.extra_convs.append(nn.MaxPool2D(3, 2, 1))
+
+ self.bifpn_cells = nn.LayerList()
+ for i in range(self.num_stacks):
+ self.bifpn_cells.append(
+ BiFPNCell(
+ self.out_channel,
+ self.num_levels,
+ use_weighted_fusion=self.use_weighted_fusion,
+ norm_type=self.norm_type,
+ norm_groups=self.norm_groups,
+ act=self.act))
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {
+ 'in_channels': [i.channels for i in input_shape],
+ 'fpn_strides': [i.stride for i in input_shape]
+ }
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self.out_channel, stride=s) for s in self.fpn_strides
+ ]
+
+ def forward(self, feats):
+ assert len(feats) == len(self.in_channels)
+ fpn_feats = []
+ for conv_layer, feature in zip(self.lateral_convs, feats):
+ fpn_feats.append(conv_layer(feature))
+ if self.num_extra_levels > 0:
+ feat = feats[-1]
+ for conv_layer in self.extra_convs:
+ feat = conv_layer(feat)
+ fpn_feats.append(feat)
+
+ for bifpn_cell in self.bifpn_cells:
+ fpn_feats = bifpn_cell(fpn_feats)
+ return fpn_feats
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/blazeface_fpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/blazeface_fpn.py
new file mode 100644
index 000000000..18d7f3cf1
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/blazeface_fpn.py
@@ -0,0 +1,216 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn.functional as F
+from paddle import ParamAttr
+import paddle.nn as nn
+from paddle.nn.initializer import KaimingNormal
+from ppdet.core.workspace import register, serializable
+from ..shape_spec import ShapeSpec
+
+__all__ = ['BlazeNeck']
+
+
+def hard_swish(x):
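+    # hard-swish: x * relu6(x + 3) / 6, a cheap piecewise approximation of swish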
+ return x * F.relu6(x + 3) / 6.
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size,
+ stride,
+ padding,
+ num_groups=1,
+ act='relu',
+ conv_lr=0.1,
+ conv_decay=0.,
+ norm_decay=0.,
+ norm_type='bn',
+ name=None):
+ super(ConvBNLayer, self).__init__()
+ self.act = act
+ self._conv = nn.Conv2D(
+ in_channels,
+ out_channels,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ groups=num_groups,
+ weight_attr=ParamAttr(
+ learning_rate=conv_lr, initializer=KaimingNormal()),
+ bias_attr=False)
+
+ if norm_type == 'sync_bn':
+ self._batch_norm = nn.SyncBatchNorm(out_channels)
+ else:
+ self._batch_norm = nn.BatchNorm(
+ out_channels, act=None, use_global_stats=False)
+
+ def forward(self, x):
+ x = self._conv(x)
+ x = self._batch_norm(x)
+ if self.act == "relu":
+ x = F.relu(x)
+ elif self.act == "relu6":
+ x = F.relu6(x)
+ elif self.act == 'leaky':
+ x = F.leaky_relu(x)
+ elif self.act == 'hard_swish':
+ x = hard_swish(x)
+ return x
+
+
+class FPN(nn.Layer):
+ def __init__(self, in_channels, out_channels, name=None):
+ super(FPN, self).__init__()
+ self.conv1_fpn = ConvBNLayer(
+ in_channels,
+ out_channels // 2,
+ kernel_size=1,
+ padding=0,
+ stride=1,
+ act='leaky',
+ name=name + '_output1')
+ self.conv2_fpn = ConvBNLayer(
+ in_channels,
+ out_channels // 2,
+ kernel_size=1,
+ padding=0,
+ stride=1,
+ act='leaky',
+ name=name + '_output2')
+ self.conv3_fpn = ConvBNLayer(
+ out_channels // 2,
+ out_channels // 2,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act='leaky',
+ name=name + '_merge')
+
+ def forward(self, input):
+ output1 = self.conv1_fpn(input[0])
+ output2 = self.conv2_fpn(input[1])
+ up2 = F.upsample(
+ output2, size=paddle.shape(output1)[-2:], mode='nearest')
+ output1 = paddle.add(output1, up2)
+ output1 = self.conv3_fpn(output1)
+ return output1, output2
+
+
+class SSH(nn.Layer):
+ def __init__(self, in_channels, out_channels, name=None):
+ super(SSH, self).__init__()
+ assert out_channels % 4 == 0
+ self.conv0_ssh = ConvBNLayer(
+ in_channels,
+ out_channels // 2,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act=None,
+ name=name + 'ssh_conv3')
+ self.conv1_ssh = ConvBNLayer(
+ out_channels // 2,
+ out_channels // 4,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act='leaky',
+ name=name + 'ssh_conv5_1')
+ self.conv2_ssh = ConvBNLayer(
+ out_channels // 4,
+ out_channels // 4,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act=None,
+ name=name + 'ssh_conv5_2')
+ self.conv3_ssh = ConvBNLayer(
+ out_channels // 4,
+ out_channels // 4,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act='leaky',
+ name=name + 'ssh_conv7_1')
+ self.conv4_ssh = ConvBNLayer(
+ out_channels // 4,
+ out_channels // 4,
+ kernel_size=3,
+ padding=1,
+ stride=1,
+ act=None,
+ name=name + 'ssh_conv7_2')
+
+ def forward(self, x):
+ conv0 = self.conv0_ssh(x)
+ conv1 = self.conv1_ssh(conv0)
+ conv2 = self.conv2_ssh(conv1)
+ conv3 = self.conv3_ssh(conv2)
+ conv4 = self.conv4_ssh(conv3)
+ concat = paddle.concat([conv0, conv2, conv4], axis=1)
+ return F.relu(concat)
+
+
+@register
+@serializable
+class BlazeNeck(nn.Layer):
+ def __init__(self, in_channel, neck_type="None", data_format='NCHW'):
+ super(BlazeNeck, self).__init__()
+ self.neck_type = neck_type
+        self.return_input = False
+        self._out_channels = in_channel
+        if self.neck_type == 'None':
+            self.return_input = True
+ if "fpn" in self.neck_type:
+ self.fpn = FPN(self._out_channels[0],
+ self._out_channels[1],
+ name='fpn')
+ self._out_channels = [
+ self._out_channels[0] // 2, self._out_channels[1] // 2
+ ]
+ if "ssh" in self.neck_type:
+ self.ssh1 = SSH(self._out_channels[0],
+ self._out_channels[0],
+ name='ssh1')
+ self.ssh2 = SSH(self._out_channels[1],
+ self._out_channels[1],
+ name='ssh2')
+ self._out_channels = [self._out_channels[0], self._out_channels[1]]
+
+ def forward(self, inputs):
+        if self.return_input:
+ return inputs
+ output1, output2 = None, None
+ if "fpn" in self.neck_type:
+ backout_4, backout_1 = inputs
+ output1, output2 = self.fpn([backout_4, backout_1])
+ if self.neck_type == "only_fpn":
+ return [output1, output2]
+ if self.neck_type == "only_ssh":
+ output1, output2 = inputs
+ feature1 = self.ssh1(output1)
+ feature2 = self.ssh2(output2)
+ return [feature1, feature2]
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(channels=c)
+ for c in [self._out_channels[0], self._out_channels[1]]
+ ]
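+
+
+# A minimal usage sketch (illustrative, with assumed feature-map sizes): feed
+# two 96-channel BlazeFace-style backbone outputs through the 'fpn_ssh' neck.
+if __name__ == '__main__':
+    feats = [paddle.rand([1, 96, 16, 16]), paddle.rand([1, 96, 8, 8])]
+    neck = BlazeNeck(in_channel=[96, 96], neck_type='fpn_ssh')
+    f1, f2 = neck(feats)
+    print(f1.shape, f2.shape)  # 48 channels at both scales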
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/centernet_fpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/centernet_fpn.py
new file mode 100644
index 000000000..df5ced2e7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/centernet_fpn.py
@@ -0,0 +1,420 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+import math
+import paddle
+import paddle.nn as nn
+from paddle import ParamAttr
+from paddle.nn.initializer import Uniform
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.layers import ConvNormLayer
+from ppdet.modeling.backbones.hardnet import ConvLayer, HarDBlock
+from ..shape_spec import ShapeSpec
+
+__all__ = ['CenterNetDLAFPN', 'CenterNetHarDNetFPN']
+
+
+# SGE attention
+class BasicConv(nn.Layer):
+ def __init__(self,
+ in_planes,
+ out_planes,
+ kernel_size,
+ stride=1,
+ padding=0,
+ dilation=1,
+ groups=1,
+ relu=True,
+ bn=True,
+ bias_attr=False):
+ super(BasicConv, self).__init__()
+ self.out_channels = out_planes
+ self.conv = nn.Conv2D(
+ in_planes,
+ out_planes,
+ kernel_size=kernel_size,
+ stride=stride,
+ padding=padding,
+ dilation=dilation,
+ groups=groups,
+ bias_attr=bias_attr)
+ self.bn = nn.BatchNorm2D(
+ out_planes,
+ epsilon=1e-5,
+ momentum=0.01,
+ weight_attr=False,
+ bias_attr=False) if bn else None
+ self.relu = nn.ReLU() if relu else None
+
+ def forward(self, x):
+ x = self.conv(x)
+ if self.bn is not None:
+ x = self.bn(x)
+ if self.relu is not None:
+ x = self.relu(x)
+ return x
+
+
+class ChannelPool(nn.Layer):
+ def forward(self, x):
+ return paddle.concat(
+ (paddle.max(x, 1).unsqueeze(1), paddle.mean(x, 1).unsqueeze(1)),
+ axis=1)
+
+
+class SpatialGate(nn.Layer):
+ def __init__(self):
+ super(SpatialGate, self).__init__()
+ kernel_size = 7
+ self.compress = ChannelPool()
+ self.spatial = BasicConv(
+ 2,
+ 1,
+ kernel_size,
+ stride=1,
+ padding=(kernel_size - 1) // 2,
+ relu=False)
+
+ def forward(self, x):
+ x_compress = self.compress(x)
+ x_out = self.spatial(x_compress)
+ scale = F.sigmoid(x_out) # broadcasting
+ return x * scale
+
+
+def fill_up_weights(up):
+ weight = up.weight.numpy()
+ f = math.ceil(weight.shape[2] / 2)
+ c = (2 * f - 1 - f % 2) / (2. * f)
+ for i in range(weight.shape[2]):
+ for j in range(weight.shape[3]):
+ weight[0, 0, i, j] = \
+ (1 - math.fabs(i / f - c)) * (1 - math.fabs(j / f - c))
+ for c in range(1, weight.shape[0]):
+ weight[c, 0, :, :] = weight[0, 0, :, :]
+ up.weight.set_value(weight)
+
+
+class IDAUp(nn.Layer):
+ def __init__(self, ch_ins, ch_out, up_strides, dcn_v2=True):
+ super(IDAUp, self).__init__()
+ for i in range(1, len(ch_ins)):
+ ch_in = ch_ins[i]
+ up_s = int(up_strides[i])
+ fan_in = ch_in * 3 * 3
+ stdv = 1. / math.sqrt(fan_in)
+ proj = nn.Sequential(
+ ConvNormLayer(
+ ch_in,
+ ch_out,
+ filter_size=3,
+ stride=1,
+ use_dcn=dcn_v2,
+ bias_on=dcn_v2,
+ norm_decay=None,
+ dcn_lr_scale=1.,
+ dcn_regularizer=None,
+ initializer=Uniform(-stdv, stdv)),
+ nn.ReLU())
+ node = nn.Sequential(
+ ConvNormLayer(
+ ch_out,
+ ch_out,
+ filter_size=3,
+ stride=1,
+ use_dcn=dcn_v2,
+ bias_on=dcn_v2,
+ norm_decay=None,
+ dcn_lr_scale=1.,
+ dcn_regularizer=None,
+ initializer=Uniform(-stdv, stdv)),
+ nn.ReLU())
+
+ kernel_size = up_s * 2
+ fan_in = ch_out * kernel_size * kernel_size
+ stdv = 1. / math.sqrt(fan_in)
+ up = nn.Conv2DTranspose(
+ ch_out,
+ ch_out,
+ kernel_size=up_s * 2,
+ stride=up_s,
+ padding=up_s // 2,
+ groups=ch_out,
+ weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
+ bias_attr=False)
+ fill_up_weights(up)
+ setattr(self, 'proj_' + str(i), proj)
+ setattr(self, 'up_' + str(i), up)
+ setattr(self, 'node_' + str(i), node)
+
+ def forward(self, inputs, start_level, end_level):
+ for i in range(start_level + 1, end_level):
+ upsample = getattr(self, 'up_' + str(i - start_level))
+ project = getattr(self, 'proj_' + str(i - start_level))
+
+ inputs[i] = project(inputs[i])
+ inputs[i] = upsample(inputs[i])
+ node = getattr(self, 'node_' + str(i - start_level))
+ inputs[i] = node(paddle.add(inputs[i], inputs[i - 1]))
+
+
+class DLAUp(nn.Layer):
+ def __init__(self, start_level, channels, scales, ch_in=None, dcn_v2=True):
+ super(DLAUp, self).__init__()
+ self.start_level = start_level
+ if ch_in is None:
+ ch_in = channels
+ self.channels = channels
+ channels = list(channels)
+ scales = np.array(scales, dtype=int)
+ for i in range(len(channels) - 1):
+ j = -i - 2
+ setattr(
+ self,
+ 'ida_{}'.format(i),
+ IDAUp(
+ ch_in[j:],
+ channels[j],
+ scales[j:] // scales[j],
+ dcn_v2=dcn_v2))
+ scales[j + 1:] = scales[j]
+ ch_in[j + 1:] = [channels[j] for _ in channels[j + 1:]]
+
+ def forward(self, inputs):
+ out = [inputs[-1]] # start with 32
+ for i in range(len(inputs) - self.start_level - 1):
+ ida = getattr(self, 'ida_{}'.format(i))
+ ida(inputs, len(inputs) - i - 2, len(inputs))
+ out.insert(0, inputs[-1])
+ return out
+
+
+@register
+@serializable
+class CenterNetDLAFPN(nn.Layer):
+ """
+ Args:
+ in_channels (list): number of input feature channels from backbone.
+ [16, 32, 64, 128, 256, 512] by default, means the channels of DLA-34
+ down_ratio (int): the down ratio from images to heatmap, 4 by default
+        last_level (int): the last level of input feature fed into the upsampling block
+        out_channel (int): the channel of the output feature, 0 by default means
+            the channel of the input feature whose down ratio is `down_ratio`
+        first_level (int|None): the first level of input feature fed into the upsampling block.
+            if None, the first level stands for log2(down_ratio)
+ dcn_v2 (bool): whether use the DCNv2, True by default
+ with_sge (bool): whether use SGE attention, False by default
+ """
+
+ def __init__(self,
+ in_channels,
+ down_ratio=4,
+ last_level=5,
+ out_channel=0,
+ first_level=None,
+ dcn_v2=True,
+ with_sge=False):
+ super(CenterNetDLAFPN, self).__init__()
+ self.first_level = int(np.log2(
+ down_ratio)) if first_level is None else first_level
+ assert self.first_level >= 0, "first level in CenterNetDLAFPN should be greater or equal to 0, but received {}".format(
+ self.first_level)
+ self.down_ratio = down_ratio
+ self.last_level = last_level
+ scales = [2**i for i in range(len(in_channels[self.first_level:]))]
+ self.dla_up = DLAUp(
+ self.first_level,
+ in_channels[self.first_level:],
+ scales,
+ dcn_v2=dcn_v2)
+ self.out_channel = out_channel
+ if out_channel == 0:
+ self.out_channel = in_channels[self.first_level]
+ self.ida_up = IDAUp(
+ in_channels[self.first_level:self.last_level],
+ self.out_channel,
+ [2**i for i in range(self.last_level - self.first_level)],
+ dcn_v2=dcn_v2)
+
+ self.with_sge = with_sge
+ if self.with_sge:
+ self.sge_attention = SpatialGate()
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape]}
+
+ def forward(self, body_feats):
+
+ dla_up_feats = self.dla_up(body_feats)
+
+ ida_up_feats = []
+ for i in range(self.last_level - self.first_level):
+ ida_up_feats.append(dla_up_feats[i].clone())
+
+ self.ida_up(ida_up_feats, 0, len(ida_up_feats))
+
+ feat = ida_up_feats[-1]
+ if self.with_sge:
+ feat = self.sge_attention(feat)
+ if self.down_ratio != 4:
+ feat = F.interpolate(feat, scale_factor=self.down_ratio // 4, mode="bilinear", align_corners=True)
+ return feat
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, stride=self.down_ratio)]
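+
+# A minimal usage sketch (illustrative; assumes DLA-34 features, i.e. channels
+# [16, 32, 64, 128, 256, 512] at strides 1..32; dcn_v2=False keeps the sketch
+# free of deformable-conv kernels):
+#     neck = CenterNetDLAFPN(in_channels=[16, 32, 64, 128, 256, 512],
+#                            dcn_v2=False)
+#     feat = neck(body_feats)  # single stride-4 map with 64 channels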
+
+
+class TransitionUp(nn.Layer):
+ def __init__(self, in_channels, out_channels):
+ super().__init__()
+
+ def forward(self, x, skip):
+        h, w = skip.shape[2], skip.shape[3]
+        out = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=True)
+ out = paddle.concat([out, skip], 1)
+ return out
+
+
+@register
+@serializable
+class CenterNetHarDNetFPN(nn.Layer):
+ """
+ Args:
+ in_channels (list): number of input feature channels from backbone.
+ [96, 214, 458, 784] by default, means the channels of HarDNet85
+        num_layers (int): number of HarDNet layers, 68 or 85, 85 by default
+        down_ratio (int): the down ratio from images to heatmap, 4 by default
+        first_level (int|None): the first level of input feature fed into the upsampling block.
+            if None, the first level stands for log2(down_ratio) - 1
+        last_level (int): the last level of input feature fed into the upsampling block
+ out_channel (int): the channel of the output feature, 0 by default means
+ the channel of the input feature whose down ratio is `down_ratio`
+ """
+
+ def __init__(self,
+ in_channels,
+ num_layers=85,
+ down_ratio=4,
+ first_level=None,
+ last_level=4,
+ out_channel=0):
+ super(CenterNetHarDNetFPN, self).__init__()
+ self.first_level = int(np.log2(
+ down_ratio)) - 1 if first_level is None else first_level
+        assert self.first_level >= 0, "first level in CenterNetHarDNetFPN should be greater or equal to 0, but received {}".format(
+            self.first_level)
+ self.down_ratio = down_ratio
+ self.last_level = last_level
+ self.last_pool = nn.AvgPool2D(kernel_size=2, stride=2)
+
+        assert num_layers in [68, 85], "HarDNet-{} is not supported.".format(
+ num_layers)
+ if num_layers == 85:
+ self.last_proj = ConvLayer(784, 256, kernel_size=1)
+ self.last_blk = HarDBlock(768, 80, 1.7, 8)
+ self.skip_nodes = [1, 3, 8, 13]
+ self.SC = [32, 32, 0]
+ gr = [64, 48, 28]
+ layers = [8, 8, 4]
+ ch_list2 = [224 + self.SC[0], 160 + self.SC[1], 96 + self.SC[2]]
+ channels = [96, 214, 458, 784]
+ self.skip_lv = 3
+
+ elif num_layers == 68:
+ self.last_proj = ConvLayer(654, 192, kernel_size=1)
+ self.last_blk = HarDBlock(576, 72, 1.7, 8)
+ self.skip_nodes = [1, 3, 8, 11]
+ self.SC = [32, 32, 0]
+ gr = [48, 32, 20]
+ layers = [8, 8, 4]
+ ch_list2 = [224 + self.SC[0], 96 + self.SC[1], 64 + self.SC[2]]
+ channels = [64, 124, 328, 654]
+ self.skip_lv = 2
+
+ self.transUpBlocks = nn.LayerList([])
+ self.denseBlocksUp = nn.LayerList([])
+ self.conv1x1_up = nn.LayerList([])
+ self.avg9x9 = nn.AvgPool2D(kernel_size=(9, 9), stride=1, padding=(4, 4))
+ prev_ch = self.last_blk.get_out_ch()
+
+ for i in range(3):
+ skip_ch = channels[3 - i]
+ self.transUpBlocks.append(TransitionUp(prev_ch, prev_ch))
+ if i < self.skip_lv:
+ cur_ch = prev_ch + skip_ch
+ else:
+ cur_ch = prev_ch
+ self.conv1x1_up.append(
+ ConvLayer(
+ cur_ch, ch_list2[i], kernel_size=1))
+ cur_ch = ch_list2[i]
+ cur_ch -= self.SC[i]
+ cur_ch *= 3
+
+ blk = HarDBlock(cur_ch, gr[i], 1.7, layers[i])
+ self.denseBlocksUp.append(blk)
+ prev_ch = blk.get_out_ch()
+
+ prev_ch += self.SC[0] + self.SC[1] + self.SC[2]
+ self.out_channel = prev_ch
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape]}
+
+ def forward(self, body_feats):
+ x = body_feats[-1]
+ x_sc = []
+ x = self.last_proj(x)
+ x = self.last_pool(x)
+ x2 = self.avg9x9(x)
+ x3 = x / (x.sum((2, 3), keepdim=True) + 0.1)
+ x = paddle.concat([x, x2, x3], 1)
+ x = self.last_blk(x)
+
+ for i in range(3):
+ skip_x = body_feats[3 - i]
+ x_up = self.transUpBlocks[i](x, skip_x)
+ x_ch = self.conv1x1_up[i](x_up)
+ if self.SC[i] > 0:
+ end = x_ch.shape[1]
+ new_st = end - self.SC[i]
+ x_sc.append(x_ch[:, new_st:, :, :])
+ x_ch = x_ch[:, :new_st, :, :]
+ x2 = self.avg9x9(x_ch)
+ x3 = x_ch / (x_ch.sum((2, 3), keepdim=True) + 0.1)
+ x_new = paddle.concat([x_ch, x2, x3], 1)
+ x = self.denseBlocksUp[i](x_new)
+
+ scs = [x]
+ for i in range(3):
+ if self.SC[i] > 0:
+ scs.insert(
+ 0,
+ F.interpolate(
+ x_sc[i],
+ size=(x.shape[2], x.shape[3]),
+ mode="bilinear",
+ align_corners=True))
+ neck_feat = paddle.concat(scs, 1)
+ return neck_feat
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.out_channel, stride=self.down_ratio)]
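+
+
+# A minimal usage sketch (illustrative; assumes HarDNet-85 features with the
+# default channels [96, 214, 458, 784] at strides 4/8/16/32):
+#     neck = CenterNetHarDNetFPN(in_channels=[96, 214, 458, 784])
+#     feat = neck(body_feats)  # single stride-4 map of neck.out_channel channels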
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/csp_pan.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/csp_pan.py
new file mode 100644
index 000000000..7417c46ab
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/csp_pan.py
@@ -0,0 +1,364 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/necks/yolox_pafpn.py
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from ppdet.core.workspace import register, serializable
+from ..shape_spec import ShapeSpec
+
+__all__ = ['CSPPAN']
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ in_channel=96,
+ out_channel=96,
+ kernel_size=3,
+ stride=1,
+ groups=1,
+ act='leaky_relu'):
+ super(ConvBNLayer, self).__init__()
+ initializer = nn.initializer.KaimingUniform()
+ self.act = act
+ assert self.act in ['leaky_relu', "hard_swish"]
+ self.conv = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=out_channel,
+ kernel_size=kernel_size,
+ groups=groups,
+ padding=(kernel_size - 1) // 2,
+ stride=stride,
+ weight_attr=ParamAttr(initializer=initializer),
+ bias_attr=False)
+ self.bn = nn.BatchNorm2D(out_channel)
+
+ def forward(self, x):
+ x = self.bn(self.conv(x))
+ if self.act == "leaky_relu":
+ x = F.leaky_relu(x)
+ elif self.act == "hard_swish":
+ x = F.hardswish(x)
+ return x
+
+
+class DPModule(nn.Layer):
+ """
+ Depth-wise and point-wise module.
+ Args:
+ in_channel (int): The input channels of this Module.
+ out_channel (int): The output channels of this Module.
+ kernel_size (int): The conv2d kernel size of this Module.
+ stride (int): The conv2d's stride of this Module.
+ act (str): The activation function of this Module,
+ Now support `leaky_relu` and `hard_swish`.
+ """
+
+ def __init__(self,
+ in_channel=96,
+ out_channel=96,
+ kernel_size=3,
+ stride=1,
+ act='leaky_relu'):
+ super(DPModule, self).__init__()
+ initializer = nn.initializer.KaimingUniform()
+ self.act = act
+ self.dwconv = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=out_channel,
+ kernel_size=kernel_size,
+ groups=out_channel,
+ padding=(kernel_size - 1) // 2,
+ stride=stride,
+ weight_attr=ParamAttr(initializer=initializer),
+ bias_attr=False)
+ self.bn1 = nn.BatchNorm2D(out_channel)
+ self.pwconv = nn.Conv2D(
+ in_channels=out_channel,
+ out_channels=out_channel,
+ kernel_size=1,
+ groups=1,
+ padding=0,
+ weight_attr=ParamAttr(initializer=initializer),
+ bias_attr=False)
+ self.bn2 = nn.BatchNorm2D(out_channel)
+
+ def act_func(self, x):
+ if self.act == "leaky_relu":
+ x = F.leaky_relu(x)
+ elif self.act == "hard_swish":
+ x = F.hardswish(x)
+ return x
+
+ def forward(self, x):
+ x = self.act_func(self.bn1(self.dwconv(x)))
+ x = self.act_func(self.bn2(self.pwconv(x)))
+ return x
+
+
+class DarknetBottleneck(nn.Layer):
+ """The basic bottleneck block used in Darknet.
+
+ Each Block consists of two ConvModules and the input is added to the
+ final output. Each ConvModule is composed of Conv, BN, and act.
+ The first convLayer has filter size of 1x1 and the second one has the
+ filter size of 3x3.
+
+ Args:
+ in_channels (int): The input channels of this Module.
+ out_channels (int): The output channels of this Module.
+        kernel_size (int): The kernel size of the second convolution. Default: 3
+        expansion (float): Ratio to adjust the number of channels of the
+            hidden layer. Default: 0.5
+ add_identity (bool): Whether to add identity to the out.
+ Default: True
+ use_depthwise (bool): Whether to use depthwise separable convolution.
+ Default: False
+ """
+
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size=3,
+ expansion=0.5,
+ add_identity=True,
+ use_depthwise=False,
+ act="leaky_relu"):
+ super(DarknetBottleneck, self).__init__()
+ hidden_channels = int(out_channels * expansion)
+ conv_func = DPModule if use_depthwise else ConvBNLayer
+ self.conv1 = ConvBNLayer(
+ in_channel=in_channels,
+ out_channel=hidden_channels,
+ kernel_size=1,
+ act=act)
+ self.conv2 = conv_func(
+ in_channel=hidden_channels,
+ out_channel=out_channels,
+ kernel_size=kernel_size,
+ stride=1,
+ act=act)
+ self.add_identity = \
+ add_identity and in_channels == out_channels
+
+ def forward(self, x):
+ identity = x
+ out = self.conv1(x)
+ out = self.conv2(out)
+
+ if self.add_identity:
+ return out + identity
+ else:
+ return out
+
+
+class CSPLayer(nn.Layer):
+ """Cross Stage Partial Layer.
+
+ Args:
+ in_channels (int): The input channels of the CSP layer.
+ out_channels (int): The output channels of the CSP layer.
+ expand_ratio (float): Ratio to adjust the number of channels of the
+ hidden layer. Default: 0.5
+ num_blocks (int): Number of blocks. Default: 1
+ add_identity (bool): Whether to add identity in blocks.
+ Default: True
+        use_depthwise (bool): Whether to use depthwise separable convolutions
+            in blocks. Default: False
+ """
+
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size=3,
+ expand_ratio=0.5,
+ num_blocks=1,
+ add_identity=True,
+ use_depthwise=False,
+ act="leaky_relu"):
+ super().__init__()
+ mid_channels = int(out_channels * expand_ratio)
+ self.main_conv = ConvBNLayer(in_channels, mid_channels, 1, act=act)
+ self.short_conv = ConvBNLayer(in_channels, mid_channels, 1, act=act)
+ self.final_conv = ConvBNLayer(
+ 2 * mid_channels, out_channels, 1, act=act)
+
+ self.blocks = nn.Sequential(* [
+ DarknetBottleneck(
+ mid_channels,
+ mid_channels,
+ kernel_size,
+ 1.0,
+ add_identity,
+ use_depthwise,
+ act=act) for _ in range(num_blocks)
+ ])
+
+ def forward(self, x):
+ x_short = self.short_conv(x)
+
+ x_main = self.main_conv(x)
+ x_main = self.blocks(x_main)
+
+ x_final = paddle.concat((x_main, x_short), axis=1)
+ return self.final_conv(x_final)
+
+
+class Channel_T(nn.Layer):
+ def __init__(self,
+ in_channels=[116, 232, 464],
+ out_channels=96,
+ act="leaky_relu"):
+ super(Channel_T, self).__init__()
+ self.convs = nn.LayerList()
+ for i in range(len(in_channels)):
+ self.convs.append(
+ ConvBNLayer(
+ in_channels[i], out_channels, 1, act=act))
+
+ def forward(self, x):
+ outs = [self.convs[i](x[i]) for i in range(len(x))]
+ return outs
+
+
+@register
+@serializable
+class CSPPAN(nn.Layer):
+ """Path Aggregation Network with CSP module.
+
+ Args:
+ in_channels (List[int]): Number of input channels per scale.
+ out_channels (int): Number of output channels (used at each scale)
+ kernel_size (int): The conv2d kernel size of this Module.
+ num_features (int): Number of output features of CSPPAN module.
+ num_csp_blocks (int): Number of bottlenecks in CSPLayer. Default: 1
+        use_depthwise (bool): Whether to use depthwise separable convolutions
+            in blocks. Default: True
+ """
+
+ def __init__(self,
+ in_channels,
+ out_channels,
+ kernel_size=5,
+ num_features=3,
+ num_csp_blocks=1,
+ use_depthwise=True,
+ act='hard_swish',
+ spatial_scales=[0.125, 0.0625, 0.03125]):
+ super(CSPPAN, self).__init__()
+ self.conv_t = Channel_T(in_channels, out_channels, act=act)
+ in_channels = [out_channels] * len(spatial_scales)
+ self.in_channels = in_channels
+ self.out_channels = out_channels
+ self.spatial_scales = spatial_scales
+ self.num_features = num_features
+ conv_func = DPModule if use_depthwise else ConvBNLayer
+
+ if self.num_features == 4:
+ self.first_top_conv = conv_func(
+ in_channels[0], in_channels[0], kernel_size, stride=2, act=act)
+ self.second_top_conv = conv_func(
+ in_channels[0], in_channels[0], kernel_size, stride=2, act=act)
+            # avoid mutating the shared default list argument in place
+            self.spatial_scales = self.spatial_scales + [
+                self.spatial_scales[-1] / 2
+            ]
+
+ # build top-down blocks
+ self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
+ self.top_down_blocks = nn.LayerList()
+ for idx in range(len(in_channels) - 1, 0, -1):
+ self.top_down_blocks.append(
+ CSPLayer(
+ in_channels[idx - 1] * 2,
+ in_channels[idx - 1],
+ kernel_size=kernel_size,
+ num_blocks=num_csp_blocks,
+ add_identity=False,
+ use_depthwise=use_depthwise,
+ act=act))
+
+ # build bottom-up blocks
+ self.downsamples = nn.LayerList()
+ self.bottom_up_blocks = nn.LayerList()
+ for idx in range(len(in_channels) - 1):
+ self.downsamples.append(
+ conv_func(
+ in_channels[idx],
+ in_channels[idx],
+ kernel_size=kernel_size,
+ stride=2,
+ act=act))
+ self.bottom_up_blocks.append(
+ CSPLayer(
+ in_channels[idx] * 2,
+ in_channels[idx + 1],
+ kernel_size=kernel_size,
+ num_blocks=num_csp_blocks,
+ add_identity=False,
+ use_depthwise=use_depthwise,
+ act=act))
+
+ def forward(self, inputs):
+ """
+ Args:
+ inputs (tuple[Tensor]): input features.
+
+ Returns:
+ tuple[Tensor]: CSPPAN features.
+ """
+ assert len(inputs) == len(self.in_channels)
+ inputs = self.conv_t(inputs)
+
+ # top-down path
+ inner_outs = [inputs[-1]]
+ for idx in range(len(self.in_channels) - 1, 0, -1):
+            feat_high = inner_outs[0]
+            feat_low = inputs[idx - 1]
+
+            upsample_feat = self.upsample(feat_high)
+
+ inner_out = self.top_down_blocks[len(self.in_channels) - 1 - idx](
+ paddle.concat([upsample_feat, feat_low], 1))
+ inner_outs.insert(0, inner_out)
+
+ # bottom-up path
+ outs = [inner_outs[0]]
+ for idx in range(len(self.in_channels) - 1):
+ feat_low = outs[-1]
+ feat_height = inner_outs[idx + 1]
+ downsample_feat = self.downsamples[idx](feat_low)
+ out = self.bottom_up_blocks[idx](paddle.concat(
+ [downsample_feat, feat_height], 1))
+ outs.append(out)
+
+ top_features = None
+ if self.num_features == 4:
+ top_features = self.first_top_conv(inputs[-1])
+ top_features = top_features + self.second_top_conv(outs[-1])
+ outs.append(top_features)
+
+ return tuple(outs)
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self.out_channels, stride=1. / s)
+ for s in self.spatial_scales
+ ]
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
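+
+
+# A minimal usage sketch (illustrative; channel/stride values are the module
+# defaults, matching strides 8/16/32 of a 320x320 input). PP-PicoDet uses this
+# CSP-PAN as its neck.
+if __name__ == '__main__':
+    feats = [
+        paddle.rand([1, 116, 40, 40]),
+        paddle.rand([1, 232, 20, 20]),
+        paddle.rand([1, 464, 10, 10]),
+    ]
+    neck = CSPPAN(in_channels=[116, 232, 464], out_channels=96)
+    outs = neck(feats)
+    print([o.shape for o in outs])  # 96 channels at each of the three scales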
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/fpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/fpn.py
new file mode 100644
index 000000000..0633fb5b2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/fpn.py
@@ -0,0 +1,231 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import XavierUniform
+
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.layers import ConvNormLayer
+from ..shape_spec import ShapeSpec
+
+__all__ = ['FPN']
+
+
+@register
+@serializable
+class FPN(nn.Layer):
+ """
+ Feature Pyramid Network, see https://arxiv.org/abs/1612.03144
+
+ Args:
+ in_channels (list[int]): input channels of each level which can be
+ derived from the output shape of backbone by from_config
+        out_channel (int): output channel of each level
+ spatial_scales (list[float]): the spatial scales between input feature
+ maps and original input image which can be derived from the output
+ shape of backbone by from_config
+ has_extra_convs (bool): whether to add extra conv to the last level.
+ default False
+ extra_stage (int): the number of extra stages added to the last level.
+ default 1
+ use_c5 (bool): Whether to use c5 as the input of extra stage,
+ otherwise p5 is used. default True
+ norm_type (string|None): The normalization type in FPN module. If
+ norm_type is None, norm will not be used after conv and if
+ norm_type is string, bn, gn, sync_bn are available. default None
+ norm_decay (float): weight decay for normalization layer weights.
+ default 0.
+ freeze_norm (bool): whether to freeze normalization layer.
+ default False
+ relu_before_extra_convs (bool): whether to add relu before extra convs.
+ default False
+
+ """
+
+ def __init__(self,
+ in_channels,
+ out_channel,
+ spatial_scales=[0.25, 0.125, 0.0625, 0.03125],
+ has_extra_convs=False,
+ extra_stage=1,
+ use_c5=True,
+ norm_type=None,
+ norm_decay=0.,
+ freeze_norm=False,
+ relu_before_extra_convs=True):
+ super(FPN, self).__init__()
+ self.out_channel = out_channel
+ for s in range(extra_stage):
+ spatial_scales = spatial_scales + [spatial_scales[-1] / 2.]
+ self.spatial_scales = spatial_scales
+ self.has_extra_convs = has_extra_convs
+ self.extra_stage = extra_stage
+ self.use_c5 = use_c5
+ self.relu_before_extra_convs = relu_before_extra_convs
+ self.norm_type = norm_type
+ self.norm_decay = norm_decay
+ self.freeze_norm = freeze_norm
+
+ self.lateral_convs = []
+ self.fpn_convs = []
+ fan = out_channel * 3 * 3
+
+ # stage index 0,1,2,3 stands for res2,res3,res4,res5 on ResNet Backbone
+ # 0 <= st_stage < ed_stage <= 3
+ st_stage = 4 - len(in_channels)
+ ed_stage = st_stage + len(in_channels) - 1
+ for i in range(st_stage, ed_stage + 1):
+ if i == 3:
+ lateral_name = 'fpn_inner_res5_sum'
+ else:
+ lateral_name = 'fpn_inner_res{}_sum_lateral'.format(i + 2)
+ in_c = in_channels[i - st_stage]
+ if self.norm_type is not None:
+ lateral = self.add_sublayer(
+ lateral_name,
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=out_channel,
+ filter_size=1,
+ stride=1,
+ norm_type=self.norm_type,
+ norm_decay=self.norm_decay,
+ freeze_norm=self.freeze_norm,
+ initializer=XavierUniform(fan_out=in_c)))
+ else:
+ lateral = self.add_sublayer(
+ lateral_name,
+ nn.Conv2D(
+ in_channels=in_c,
+ out_channels=out_channel,
+ kernel_size=1,
+ weight_attr=ParamAttr(
+ initializer=XavierUniform(fan_out=in_c))))
+ self.lateral_convs.append(lateral)
+
+ fpn_name = 'fpn_res{}_sum'.format(i + 2)
+ if self.norm_type is not None:
+ fpn_conv = self.add_sublayer(
+ fpn_name,
+ ConvNormLayer(
+ ch_in=out_channel,
+ ch_out=out_channel,
+ filter_size=3,
+ stride=1,
+ norm_type=self.norm_type,
+ norm_decay=self.norm_decay,
+ freeze_norm=self.freeze_norm,
+ initializer=XavierUniform(fan_out=fan)))
+ else:
+ fpn_conv = self.add_sublayer(
+ fpn_name,
+ nn.Conv2D(
+ in_channels=out_channel,
+ out_channels=out_channel,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(
+ initializer=XavierUniform(fan_out=fan))))
+ self.fpn_convs.append(fpn_conv)
+
+ # add extra conv levels for RetinaNet(use_c5)/FCOS(use_p5)
+ if self.has_extra_convs:
+ for i in range(self.extra_stage):
+ lvl = ed_stage + 1 + i
+ if i == 0 and self.use_c5:
+ in_c = in_channels[-1]
+ else:
+ in_c = out_channel
+ extra_fpn_name = 'fpn_{}'.format(lvl + 2)
+ if self.norm_type is not None:
+ extra_fpn_conv = self.add_sublayer(
+ extra_fpn_name,
+ ConvNormLayer(
+ ch_in=in_c,
+ ch_out=out_channel,
+ filter_size=3,
+ stride=2,
+ norm_type=self.norm_type,
+ norm_decay=self.norm_decay,
+ freeze_norm=self.freeze_norm,
+ initializer=XavierUniform(fan_out=fan)))
+ else:
+ extra_fpn_conv = self.add_sublayer(
+ extra_fpn_name,
+ nn.Conv2D(
+ in_channels=in_c,
+ out_channels=out_channel,
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ weight_attr=ParamAttr(
+ initializer=XavierUniform(fan_out=fan))))
+ self.fpn_convs.append(extra_fpn_conv)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {
+ 'in_channels': [i.channels for i in input_shape],
+ 'spatial_scales': [1.0 / i.stride for i in input_shape],
+ }
+
+ def forward(self, body_feats):
+ laterals = []
+ num_levels = len(body_feats)
+ for i in range(num_levels):
+ laterals.append(self.lateral_convs[i](body_feats[i]))
+
+ for i in range(1, num_levels):
+ lvl = num_levels - i
+ upsample = F.interpolate(
+ laterals[lvl],
+ scale_factor=2.,
+ mode='nearest', )
+ laterals[lvl - 1] += upsample
+
+ fpn_output = []
+ for lvl in range(num_levels):
+ fpn_output.append(self.fpn_convs[lvl](laterals[lvl]))
+
+ if self.extra_stage > 0:
+ # use max pool to get more levels on top of outputs (Faster R-CNN, Mask R-CNN)
+ if not self.has_extra_convs:
+                assert self.extra_stage == 1, 'extra_stage should be 1 if FPN has no extra convs'
+ fpn_output.append(F.max_pool2d(fpn_output[-1], 1, stride=2))
+ # add extra conv levels for RetinaNet(use_c5)/FCOS(use_p5)
+ else:
+ if self.use_c5:
+ extra_source = body_feats[-1]
+ else:
+ extra_source = fpn_output[-1]
+ fpn_output.append(self.fpn_convs[num_levels](extra_source))
+
+ for i in range(1, self.extra_stage):
+ if self.relu_before_extra_convs:
+ fpn_output.append(self.fpn_convs[num_levels + i](F.relu(
+ fpn_output[-1])))
+ else:
+ fpn_output.append(self.fpn_convs[num_levels + i](
+ fpn_output[-1]))
+ return fpn_output
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self.out_channel, stride=1. / s)
+ for s in self.spatial_scales
+ ]
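+
+
+# A minimal usage sketch (illustrative; the shapes assume ResNet-50 C2-C5
+# features of a 224x224 image). `import paddle` is local because this module
+# only imports paddle submodules at the top.
+if __name__ == '__main__':
+    import paddle
+    body_feats = [
+        paddle.rand([1, 256, 56, 56]),
+        paddle.rand([1, 512, 28, 28]),
+        paddle.rand([1, 1024, 14, 14]),
+        paddle.rand([1, 2048, 7, 7]),
+    ]
+    fpn = FPN(in_channels=[256, 512, 1024, 2048], out_channel=256)
+    outs = fpn(body_feats)
+    print([o.shape for o in outs])  # P2-P6, each with 256 channels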
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/hrfpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/hrfpn.py
new file mode 100644
index 000000000..eb4768b8e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/hrfpn.py
@@ -0,0 +1,126 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn.functional as F
+import paddle.nn as nn
+from ppdet.core.workspace import register
+from ..shape_spec import ShapeSpec
+
+__all__ = ['HRFPN']
+
+
+@register
+class HRFPN(nn.Layer):
+ """
+ Args:
+ in_channels (list): number of input feature channels from backbone
+ out_channel (int): number of output feature channels
+ share_conv (bool): whether to share conv for different layers' reduction
+        extra_stage (int): number of extra (downsampled) output stages to append
+ spatial_scales (list): feature map scaling factor
+ """
+
+ def __init__(self,
+ in_channels=[18, 36, 72, 144],
+ out_channel=256,
+ share_conv=False,
+ extra_stage=1,
+ spatial_scales=[1. / 4, 1. / 8, 1. / 16, 1. / 32]):
+ super(HRFPN, self).__init__()
+ in_channel = sum(in_channels)
+ self.in_channel = in_channel
+ self.out_channel = out_channel
+ self.share_conv = share_conv
+ for i in range(extra_stage):
+ spatial_scales = spatial_scales + [spatial_scales[-1] / 2.]
+ self.spatial_scales = spatial_scales
+ self.num_out = len(self.spatial_scales)
+
+ self.reduction = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=out_channel,
+ kernel_size=1,
+ bias_attr=False)
+
+ if share_conv:
+ self.fpn_conv = nn.Conv2D(
+ in_channels=out_channel,
+ out_channels=out_channel,
+ kernel_size=3,
+ padding=1,
+ bias_attr=False)
+ else:
+ self.fpn_conv = []
+ for i in range(self.num_out):
+ conv_name = "fpn_conv_" + str(i)
+ conv = self.add_sublayer(
+ conv_name,
+ nn.Conv2D(
+ in_channels=out_channel,
+ out_channels=out_channel,
+ kernel_size=3,
+ padding=1,
+ bias_attr=False))
+ self.fpn_conv.append(conv)
+
+ def forward(self, body_feats):
+ num_backbone_stages = len(body_feats)
+
+ outs = []
+ outs.append(body_feats[0])
+
+ # resize
+ for i in range(1, num_backbone_stages):
+ resized = F.interpolate(
+ body_feats[i], scale_factor=2**i, mode='bilinear')
+ outs.append(resized)
+
+ # concat
+ out = paddle.concat(outs, axis=1)
+        assert out.shape[
+            1] == self.in_channel, 'in_channel should be {}, but received {}'.format(
+                self.in_channel, out.shape[1])
+
+ # reduction
+ out = self.reduction(out)
+
+ # conv
+ outs = [out]
+ for i in range(1, self.num_out):
+ outs.append(F.avg_pool2d(out, kernel_size=2**i, stride=2**i))
+ outputs = []
+
+ for i in range(self.num_out):
+ conv_func = self.fpn_conv if self.share_conv else self.fpn_conv[i]
+ conv = conv_func(outs[i])
+ outputs.append(conv)
+
+ fpn_feats = [outputs[k] for k in range(self.num_out)]
+ return fpn_feats
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {
+ 'in_channels': [i.channels for i in input_shape],
+ 'spatial_scales': [1.0 / i.stride for i in input_shape],
+ }
+
+ @property
+ def out_shape(self):
+ return [
+ ShapeSpec(
+ channels=self.out_channel, stride=1. / s)
+ for s in self.spatial_scales
+ ]
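+
+
+# A minimal usage sketch (illustrative; assumes HRNet-W18 style features): the
+# four resolutions are upsampled to the finest one, concatenated (18+36+72+144
+# = 270 channels), reduced to out_channel, then redistributed over 5 levels.
+if __name__ == '__main__':
+    feats = [
+        paddle.rand([1, 18, 64, 64]),
+        paddle.rand([1, 36, 32, 32]),
+        paddle.rand([1, 72, 16, 16]),
+        paddle.rand([1, 144, 8, 8]),
+    ]
+    neck = HRFPN(in_channels=[18, 36, 72, 144], out_channel=256)
+    outs = neck(feats)
+    print([o.shape for o in outs])  # five levels, 64x64 down to 4x4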
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/ttf_fpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/ttf_fpn.py
new file mode 100644
index 000000000..60cc69f80
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/ttf_fpn.py
@@ -0,0 +1,242 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.nn.initializer import Constant, Uniform, Normal, XavierUniform
+from ppdet.core.workspace import register, serializable
+from paddle.regularizer import L2Decay
+from ppdet.modeling.layers import DeformableConvV2, ConvNormLayer, LiteConv
+import math
+from ppdet.modeling.ops import batch_norm
+from ..shape_spec import ShapeSpec
+
+__all__ = ['TTFFPN']
+
+
+class Upsample(nn.Layer):
+ def __init__(self, ch_in, ch_out, norm_type='bn'):
+ super(Upsample, self).__init__()
+ fan_in = ch_in * 3 * 3
+ stdv = 1. / math.sqrt(fan_in)
+ self.dcn = DeformableConvV2(
+ ch_in,
+ ch_out,
+ kernel_size=3,
+ weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
+ bias_attr=ParamAttr(
+ initializer=Constant(0),
+ regularizer=L2Decay(0.),
+ learning_rate=2.),
+ lr_scale=2.,
+ regularizer=L2Decay(0.))
+
+ self.bn = batch_norm(
+ ch_out, norm_type=norm_type, initializer=Constant(1.))
+
+ def forward(self, feat):
+ dcn = self.dcn(feat)
+ bn = self.bn(dcn)
+ relu = F.relu(bn)
+ out = F.interpolate(relu, scale_factor=2., mode='bilinear')
+ return out
+
+
+class DeConv(nn.Layer):
+ def __init__(self, ch_in, ch_out, norm_type='bn'):
+ super(DeConv, self).__init__()
+ self.deconv = nn.Sequential()
+ conv1 = ConvNormLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ stride=1,
+ filter_size=1,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+ conv2 = nn.Conv2DTranspose(
+ in_channels=ch_out,
+ out_channels=ch_out,
+ kernel_size=4,
+ padding=1,
+ stride=2,
+ groups=ch_out,
+ weight_attr=ParamAttr(initializer=XavierUniform()),
+ bias_attr=False)
+ bn = batch_norm(ch_out, norm_type=norm_type, norm_decay=0.)
+ conv3 = ConvNormLayer(
+ ch_in=ch_out,
+ ch_out=ch_out,
+ stride=1,
+ filter_size=1,
+ norm_type=norm_type,
+ initializer=XavierUniform())
+
+ self.deconv.add_sublayer('conv1', conv1)
+ self.deconv.add_sublayer('relu6_1', nn.ReLU6())
+ self.deconv.add_sublayer('conv2', conv2)
+ self.deconv.add_sublayer('bn', bn)
+ self.deconv.add_sublayer('relu6_2', nn.ReLU6())
+ self.deconv.add_sublayer('conv3', conv3)
+ self.deconv.add_sublayer('relu6_3', nn.ReLU6())
+
+ def forward(self, inputs):
+ return self.deconv(inputs)
+
+
+class LiteUpsample(nn.Layer):
+ def __init__(self, ch_in, ch_out, norm_type='bn'):
+ super(LiteUpsample, self).__init__()
+ self.deconv = DeConv(ch_in, ch_out, norm_type=norm_type)
+ self.conv = LiteConv(ch_in, ch_out, norm_type=norm_type)
+
+ def forward(self, inputs):
+ deconv_up = self.deconv(inputs)
+ conv = self.conv(inputs)
+ interp_up = F.interpolate(conv, scale_factor=2., mode='bilinear')
+ return deconv_up + interp_up
+
+
+class ShortCut(nn.Layer):
+ def __init__(self,
+ layer_num,
+ ch_in,
+ ch_out,
+ norm_type='bn',
+ lite_neck=False,
+ name=None):
+ super(ShortCut, self).__init__()
+ shortcut_conv = nn.Sequential()
+ for i in range(layer_num):
+ fan_out = 3 * 3 * ch_out
+ std = math.sqrt(2. / fan_out)
+ in_channels = ch_in if i == 0 else ch_out
+ shortcut_name = name + '.conv.{}'.format(i)
+ if lite_neck:
+ shortcut_conv.add_sublayer(
+ shortcut_name,
+ LiteConv(
+ in_channels=in_channels,
+ out_channels=ch_out,
+ with_act=i < layer_num - 1,
+ norm_type=norm_type))
+ else:
+ shortcut_conv.add_sublayer(
+ shortcut_name,
+ nn.Conv2D(
+ in_channels=in_channels,
+ out_channels=ch_out,
+ kernel_size=3,
+ padding=1,
+ weight_attr=ParamAttr(initializer=Normal(0, std)),
+ bias_attr=ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.))))
+ if i < layer_num - 1:
+ shortcut_conv.add_sublayer(shortcut_name + '.act',
+ nn.ReLU())
+ self.shortcut = self.add_sublayer('shortcut', shortcut_conv)
+
+ def forward(self, feat):
+ out = self.shortcut(feat)
+ return out
+
+
+@register
+@serializable
+class TTFFPN(nn.Layer):
+ """
+ Args:
+ in_channels (list): number of input feature channels from backbone.
+ [128,256,512,1024] by default, means the channels of DarkNet53
+ backbone return_idx [1,2,3,4].
+ planes (list): the number of output feature channels of FPN.
+ [256, 128, 64] by default
+ shortcut_num (list): the number of convolution layers in each shortcut.
+ [3,2,1] by default, means DarkNet53 backbone return_idx_1 has 3 convs
+ in its shortcut, return_idx_2 has 2 convs and return_idx_3 has 1 conv.
+ norm_type (string): norm type, 'sync_bn', 'bn', 'gn' are optional.
+ bn by default
+ lite_neck (bool): whether to use lite conv in TTFNet FPN,
+ False by default
+ fusion_method (string): the method to fusion upsample and lateral layer.
+ 'add' and 'concat' are optional, add by default
+ """
+
+ __shared__ = ['norm_type']
+
+ def __init__(self,
+ in_channels,
+ planes=[256, 128, 64],
+ shortcut_num=[3, 2, 1],
+ norm_type='bn',
+ lite_neck=False,
+ fusion_method='add'):
+ super(TTFFPN, self).__init__()
+ self.planes = planes
+ self.shortcut_num = shortcut_num[::-1]
+ self.shortcut_len = len(shortcut_num)
+ self.ch_in = in_channels[::-1]
+ self.fusion_method = fusion_method
+
+ self.upsample_list = []
+ self.shortcut_list = []
+ self.upper_list = []
+ for i, out_c in enumerate(self.planes):
+ in_c = self.ch_in[i] if i == 0 else self.upper_list[-1]
+ upsample_module = LiteUpsample if lite_neck else Upsample
+ upsample = self.add_sublayer(
+ 'upsample.' + str(i),
+ upsample_module(
+ in_c, out_c, norm_type=norm_type))
+ self.upsample_list.append(upsample)
+ if i < self.shortcut_len:
+ shortcut = self.add_sublayer(
+ 'shortcut.' + str(i),
+ ShortCut(
+ self.shortcut_num[i],
+ self.ch_in[i + 1],
+ out_c,
+ norm_type=norm_type,
+ lite_neck=lite_neck,
+ name='shortcut.' + str(i)))
+ self.shortcut_list.append(shortcut)
+ if self.fusion_method == 'add':
+ upper_c = out_c
+ elif self.fusion_method == 'concat':
+ upper_c = out_c * 2
+ else:
+                raise ValueError(
+                    'Illegal fusion method. Expected add or concat, '
+                    'but received {}'.format(self.fusion_method))
+ self.upper_list.append(upper_c)
+
+ def forward(self, inputs):
+ feat = inputs[-1]
+ for i, out_c in enumerate(self.planes):
+ feat = self.upsample_list[i](feat)
+ if i < self.shortcut_len:
+ shortcut = self.shortcut_list[i](inputs[-i - 2])
+ if self.fusion_method == 'add':
+ feat = feat + shortcut
+ else:
+ feat = paddle.concat([feat, shortcut], axis=1)
+ return feat
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=self.upper_list[-1], )]
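+
+
+# A minimal usage sketch (illustrative; assumes DarkNet-53 return_idx [1,2,3,4]
+# features). lite_neck=True is used here only so the sketch avoids the
+# deformable convolutions of the default Upsample path.
+if __name__ == '__main__':
+    feats = [
+        paddle.rand([1, 128, 64, 64]),
+        paddle.rand([1, 256, 32, 32]),
+        paddle.rand([1, 512, 16, 16]),
+        paddle.rand([1, 1024, 8, 8]),
+    ]
+    neck = TTFFPN(in_channels=[128, 256, 512, 1024], lite_neck=True)
+    feat = neck(feats)
+    print(feat.shape)  # single fused stride-4 map: [1, 64, 64, 64]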
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/yolo_fpn.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/yolo_fpn.py
new file mode 100644
index 000000000..4af0348d2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/necks/yolo_fpn.py
@@ -0,0 +1,988 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register, serializable
+from ppdet.modeling.layers import DropBlock
+from ..backbones.darknet import ConvBNLayer
+from ..shape_spec import ShapeSpec
+
+__all__ = ['YOLOv3FPN', 'PPYOLOFPN', 'PPYOLOTinyFPN', 'PPYOLOPAN']
+
+
+def add_coord(x, data_format):
+ b = paddle.shape(x)[0]
+ if data_format == 'NCHW':
+ h, w = x.shape[2], x.shape[3]
+ else:
+ h, w = x.shape[1], x.shape[2]
+
+ gx = paddle.cast(paddle.arange(w) / ((w - 1.) * 2.0) - 1., x.dtype)
+ gy = paddle.cast(paddle.arange(h) / ((h - 1.) * 2.0) - 1., x.dtype)
+
+ if data_format == 'NCHW':
+ gx = gx.reshape([1, 1, 1, w]).expand([b, 1, h, w])
+ gy = gy.reshape([1, 1, h, 1]).expand([b, 1, h, w])
+ else:
+ gx = gx.reshape([1, 1, w, 1]).expand([b, h, w, 1])
+ gy = gy.reshape([1, h, 1, 1]).expand([b, h, w, 1])
+
+ gx.stop_gradient = True
+ gy.stop_gradient = True
+ return gx, gy
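+
+
+# Note: for an NCHW input of shape [b, c, h, w], add_coord returns two
+# [b, 1, h, w] grids whose values vary linearly along the width and height
+# respectively; CoordConv below concatenates them onto the input as two extra
+# channels so the convolution can condition on absolute position
+# (https://arxiv.org/abs/1807.03247).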
+
+
+class YoloDetBlock(nn.Layer):
+ def __init__(self,
+ ch_in,
+ channel,
+ norm_type,
+ freeze_norm=False,
+ name='',
+ data_format='NCHW'):
+ """
+ YOLODetBlock layer for yolov3, see https://arxiv.org/abs/1804.02767
+
+ Args:
+ ch_in (int): input channel
+ channel (int): base channel
+ norm_type (str): batch norm type
+ freeze_norm (bool): whether to freeze norm, default False
+ name (str): layer name
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(YoloDetBlock, self).__init__()
+ self.ch_in = ch_in
+ self.channel = channel
+        assert channel % 2 == 0, \
+            "channel {} should be divisible by 2".format(channel)
+ conv_def = [
+ ['conv0', ch_in, channel, 1, '.0.0'],
+ ['conv1', channel, channel * 2, 3, '.0.1'],
+ ['conv2', channel * 2, channel, 1, '.1.0'],
+ ['conv3', channel, channel * 2, 3, '.1.1'],
+ ['route', channel * 2, channel, 1, '.2'],
+ ]
+
+ self.conv_module = nn.Sequential()
+ for idx, (conv_name, ch_in, ch_out, filter_size,
+ post_name) in enumerate(conv_def):
+ self.conv_module.add_sublayer(
+ conv_name,
+ ConvBNLayer(
+ ch_in=ch_in,
+ ch_out=ch_out,
+ filter_size=filter_size,
+ padding=(filter_size - 1) // 2,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name + post_name))
+
+ self.tip = ConvBNLayer(
+ ch_in=channel,
+ ch_out=channel * 2,
+ filter_size=3,
+ padding=1,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name + '.tip')
+
+ def forward(self, inputs):
+ route = self.conv_module(inputs)
+ tip = self.tip(route)
+ return route, tip
+
+
+class SPP(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ k,
+ pool_size,
+ norm_type,
+ freeze_norm=False,
+ name='',
+ act='leaky',
+ data_format='NCHW'):
+ """
+        SPP layer, which consists of multiple pooling layers followed by a conv layer
+
+ Args:
+ ch_in (int): input channel of conv layer
+ ch_out (int): output channel of conv layer
+            k (int): kernel size of conv layer
+            pool_size (list): kernel sizes of the parallel max pooling layers
+            norm_type (str): batch norm type
+ freeze_norm (bool): whether to freeze norm, default False
+ name (str): layer name
+ act (str): activation function
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(SPP, self).__init__()
+ self.pool = []
+ self.data_format = data_format
+ for size in pool_size:
+ pool = self.add_sublayer(
+ '{}.pool1'.format(name),
+ nn.MaxPool2D(
+ kernel_size=size,
+ stride=1,
+ padding=size // 2,
+ data_format=data_format,
+ ceil_mode=False))
+ self.pool.append(pool)
+ self.conv = ConvBNLayer(
+ ch_in,
+ ch_out,
+ k,
+ padding=k // 2,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ name=name,
+ act=act,
+ data_format=data_format)
+
+ def forward(self, x):
+ outs = [x]
+ for pool in self.pool:
+ outs.append(pool(x))
+ if self.data_format == "NCHW":
+ y = paddle.concat(outs, axis=1)
+ else:
+ y = paddle.concat(outs, axis=-1)
+
+ y = self.conv(y)
+ return y
+
+
+class CoordConv(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ filter_size,
+ padding,
+ norm_type,
+ freeze_norm=False,
+ name='',
+ data_format='NCHW'):
+ """
+ CoordConv layer, see https://arxiv.org/abs/1807.03247
+
+ Args:
+ ch_in (int): input channel
+ ch_out (int): output channel
+            filter_size (int): filter size
+            padding (int): padding size
+            norm_type (str): batch norm type
+ name (str): layer name
+ data_format (str): data format, NCHW or NHWC
+
+ """
+ super(CoordConv, self).__init__()
+ self.conv = ConvBNLayer(
+ ch_in + 2,
+ ch_out,
+ filter_size=filter_size,
+ padding=padding,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name)
+ self.data_format = data_format
+
+ def forward(self, x):
+ gx, gy = add_coord(x, self.data_format)
+ if self.data_format == 'NCHW':
+ y = paddle.concat([x, gx, gy], axis=1)
+ else:
+ y = paddle.concat([x, gx, gy], axis=-1)
+ y = self.conv(y)
+ return y
+
+
+class PPYOLODetBlock(nn.Layer):
+ def __init__(self, cfg, name, data_format='NCHW'):
+ """
+ PPYOLODetBlock layer
+
+ Args:
+ cfg (list): layer configs for this block
+ name (str): block name
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(PPYOLODetBlock, self).__init__()
+ self.conv_module = nn.Sequential()
+ for idx, (conv_name, layer, args, kwargs) in enumerate(cfg[:-1]):
+ kwargs.update(
+ name='{}.{}'.format(name, conv_name), data_format=data_format)
+ self.conv_module.add_sublayer(conv_name, layer(*args, **kwargs))
+
+ conv_name, layer, args, kwargs = cfg[-1]
+ kwargs.update(
+ name='{}.{}'.format(name, conv_name), data_format=data_format)
+ self.tip = layer(*args, **kwargs)
+
+ def forward(self, inputs):
+ route = self.conv_module(inputs)
+ tip = self.tip(route)
+ return route, tip
+
+
+class PPYOLOTinyDetBlock(nn.Layer):
+ def __init__(self,
+ ch_in,
+ ch_out,
+ name,
+ drop_block=False,
+ block_size=3,
+ keep_prob=0.9,
+ data_format='NCHW'):
+ """
+ PPYOLO Tiny DetBlock layer
+ Args:
+            ch_in (int): input channel number
+            ch_out (int): output channel number
+            name (str): block name
+            drop_block (bool): whether to use DropBlock
+ block_size: drop block size
+ keep_prob: probability to keep block in DropBlock
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(PPYOLOTinyDetBlock, self).__init__()
+ self.drop_block_ = drop_block
+ self.conv_module = nn.Sequential()
+
+ cfgs = [
+ # name, in channels, out channels, filter_size,
+ # stride, padding, groups
+ ['.0', ch_in, ch_out, 1, 1, 0, 1],
+ ['.1', ch_out, ch_out, 5, 1, 2, ch_out],
+ ['.2', ch_out, ch_out, 1, 1, 0, 1],
+ ['.route', ch_out, ch_out, 5, 1, 2, ch_out],
+ ]
+ for cfg in cfgs:
+ conv_name, conv_ch_in, conv_ch_out, filter_size, stride, padding, \
+ groups = cfg
+ self.conv_module.add_sublayer(
+ name + conv_name,
+ ConvBNLayer(
+ ch_in=conv_ch_in,
+ ch_out=conv_ch_out,
+ filter_size=filter_size,
+ stride=stride,
+ padding=padding,
+ groups=groups,
+ name=name + conv_name))
+
+ self.tip = ConvBNLayer(
+ ch_in=ch_out,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ groups=1,
+ name=name + conv_name)
+
+ if self.drop_block_:
+ self.drop_block = DropBlock(
+ block_size=block_size,
+ keep_prob=keep_prob,
+ data_format=data_format,
+ name=name + '.dropblock')
+
+ def forward(self, inputs):
+ if self.drop_block_:
+ inputs = self.drop_block(inputs)
+ route = self.conv_module(inputs)
+ tip = self.tip(route)
+ return route, tip
+
+
+class PPYOLODetBlockCSP(nn.Layer):
+ def __init__(self,
+ cfg,
+ ch_in,
+ ch_out,
+ act,
+ norm_type,
+ name,
+ data_format='NCHW'):
+ """
+ PPYOLODetBlockCSP layer
+
+ Args:
+ cfg (list): layer configs for this block
+ ch_in (int): input channel
+ ch_out (int): output channel
+ act (str): default mish
+ name (str): block name
+ data_format (str): data format, NCHW or NHWC
+ """
+ super(PPYOLODetBlockCSP, self).__init__()
+ self.data_format = data_format
+ self.conv1 = ConvBNLayer(
+ ch_in,
+ ch_out,
+ 1,
+ padding=0,
+ act=act,
+ norm_type=norm_type,
+ name=name + '.left',
+ data_format=data_format)
+ self.conv2 = ConvBNLayer(
+ ch_in,
+ ch_out,
+ 1,
+ padding=0,
+ act=act,
+ norm_type=norm_type,
+ name=name + '.right',
+ data_format=data_format)
+ self.conv3 = ConvBNLayer(
+ ch_out * 2,
+ ch_out * 2,
+ 1,
+ padding=0,
+ act=act,
+ norm_type=norm_type,
+ name=name,
+ data_format=data_format)
+ self.conv_module = nn.Sequential()
+ for idx, (layer_name, layer, args, kwargs) in enumerate(cfg):
+ kwargs.update(name=name + layer_name, data_format=data_format)
+ self.conv_module.add_sublayer(layer_name, layer(*args, **kwargs))
+
+ def forward(self, inputs):
+ conv_left = self.conv1(inputs)
+ conv_right = self.conv2(inputs)
+ conv_left = self.conv_module(conv_left)
+ if self.data_format == 'NCHW':
+ conv = paddle.concat([conv_left, conv_right], axis=1)
+ else:
+ conv = paddle.concat([conv_left, conv_right], axis=-1)
+
+ conv = self.conv3(conv)
+ return conv, conv
+
+
+@register
+@serializable
+class YOLOv3FPN(nn.Layer):
+ __shared__ = ['norm_type', 'data_format']
+
+ def __init__(self,
+ in_channels=[256, 512, 1024],
+ norm_type='bn',
+ freeze_norm=False,
+ data_format='NCHW'):
+ """
+ YOLOv3FPN layer
+
+ Args:
+ in_channels (list): input channels for fpn
+ norm_type (str): batch norm type, default bn
+ data_format (str): data format, NCHW or NHWC
+
+ """
+ super(YOLOv3FPN, self).__init__()
+        assert len(in_channels) > 0, "in_channels length should be > 0"
+ self.in_channels = in_channels
+ self.num_blocks = len(in_channels)
+
+ self._out_channels = []
+ self.yolo_blocks = []
+ self.routes = []
+ self.data_format = data_format
+ for i in range(self.num_blocks):
+ name = 'yolo_block.{}'.format(i)
+ in_channel = in_channels[-i - 1]
+ if i > 0:
+ in_channel += 512 // (2**i)
+ yolo_block = self.add_sublayer(
+ name,
+ YoloDetBlock(
+ in_channel,
+ channel=512 // (2**i),
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name))
+ self.yolo_blocks.append(yolo_block)
+ # tip layer output channel doubled
+ self._out_channels.append(1024 // (2**i))
+
+ if i < self.num_blocks - 1:
+ name = 'yolo_transition.{}'.format(i)
+ route = self.add_sublayer(
+ name,
+ ConvBNLayer(
+ ch_in=512 // (2**i),
+ ch_out=256 // (2**i),
+ filter_size=1,
+ stride=1,
+ padding=0,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name))
+ self.routes.append(route)
+
+ def forward(self, blocks, for_mot=False):
+ assert len(blocks) == self.num_blocks
+ blocks = blocks[::-1]
+ yolo_feats = []
+
+ # add embedding features output for multi-object tracking model
+ if for_mot:
+ emb_feats = []
+
+ for i, block in enumerate(blocks):
+ if i > 0:
+ if self.data_format == 'NCHW':
+ block = paddle.concat([route, block], axis=1)
+ else:
+ block = paddle.concat([route, block], axis=-1)
+ route, tip = self.yolo_blocks[i](block)
+ yolo_feats.append(tip)
+
+ if for_mot:
+ # add embedding features output
+ emb_feats.append(route)
+
+ if i < self.num_blocks - 1:
+ route = self.routes[i](route)
+ route = F.interpolate(
+ route, scale_factor=2., data_format=self.data_format)
+
+ if for_mot:
+ return {'yolo_feats': yolo_feats, 'emb_feats': emb_feats}
+ else:
+ return yolo_feats
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
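+
+    # Illustrative usage sketch (not part of the upstream file; shapes are
+    # assumed): feed the backbone feature maps ordered shallow-to-deep, and
+    # the FPN returns fused features ordered deep-to-shallow.
+    #
+    #   fpn = YOLOv3FPN(in_channels=[256, 512, 1024])
+    #   c3 = paddle.rand([1, 256, 52, 52])
+    #   c4 = paddle.rand([1, 512, 26, 26])
+    #   c5 = paddle.rand([1, 1024, 13, 13])
+    #   feats = fpn([c3, c4, c5])  # channels: 1024, 512, 256 (see out_shape)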
+
+
+@register
+@serializable
+class PPYOLOFPN(nn.Layer):
+ __shared__ = ['norm_type', 'data_format']
+
+ def __init__(self,
+ in_channels=[512, 1024, 2048],
+ norm_type='bn',
+ freeze_norm=False,
+ data_format='NCHW',
+ coord_conv=False,
+ conv_block_num=2,
+ drop_block=False,
+ block_size=3,
+ keep_prob=0.9,
+ spp=False):
+ """
+ PPYOLOFPN layer
+
+ Args:
+ in_channels (list): input channels for fpn
+ norm_type (str): batch norm type, default bn
+ data_format (str): data format, NCHW or NHWC
+ coord_conv (bool): whether use CoordConv or not
+ conv_block_num (int): conv block num of each pan block
+ drop_block (bool): whether use DropBlock or not
+ block_size (int): block size of DropBlock
+ keep_prob (float): keep probability of DropBlock
+ spp (bool): whether use spp or not
+
+ """
+ super(PPYOLOFPN, self).__init__()
+        assert len(in_channels) > 0, "in_channels length should be > 0"
+ self.in_channels = in_channels
+ self.num_blocks = len(in_channels)
+ # parse kwargs
+ self.coord_conv = coord_conv
+ self.drop_block = drop_block
+ self.block_size = block_size
+ self.keep_prob = keep_prob
+ self.spp = spp
+ self.conv_block_num = conv_block_num
+ self.data_format = data_format
+ if self.coord_conv:
+ ConvLayer = CoordConv
+ else:
+ ConvLayer = ConvBNLayer
+
+ if self.drop_block:
+ dropblock_cfg = [[
+ 'dropblock', DropBlock, [self.block_size, self.keep_prob],
+ dict()
+ ]]
+ else:
+ dropblock_cfg = []
+
+ self._out_channels = []
+ self.yolo_blocks = []
+ self.routes = []
+ for i, ch_in in enumerate(self.in_channels[::-1]):
+ if i > 0:
+ ch_in += 512 // (2**i)
+ channel = 64 * (2**self.num_blocks) // (2**i)
+ base_cfg = []
+ c_in, c_out = ch_in, channel
+ for j in range(self.conv_block_num):
+ base_cfg += [
+ [
+ 'conv{}'.format(2 * j), ConvLayer, [c_in, c_out, 1],
+ dict(
+ padding=0,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm)
+ ],
+ [
+ 'conv{}'.format(2 * j + 1), ConvBNLayer,
+ [c_out, c_out * 2, 3], dict(
+ padding=1,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm)
+ ],
+ ]
+ c_in, c_out = c_out * 2, c_out
+
+ base_cfg += [[
+ 'route', ConvLayer, [c_in, c_out, 1], dict(
+ padding=0, norm_type=norm_type, freeze_norm=freeze_norm)
+ ], [
+ 'tip', ConvLayer, [c_out, c_out * 2, 3], dict(
+ padding=1, norm_type=norm_type, freeze_norm=freeze_norm)
+ ]]
+
+ if self.conv_block_num == 2:
+ if i == 0:
+ if self.spp:
+ spp_cfg = [[
+ 'spp', SPP, [channel * 4, channel, 1], dict(
+ pool_size=[5, 9, 13],
+ norm_type=norm_type,
+ freeze_norm=freeze_norm)
+ ]]
+ else:
+ spp_cfg = []
+ cfg = base_cfg[0:3] + spp_cfg + base_cfg[
+ 3:4] + dropblock_cfg + base_cfg[4:6]
+ else:
+ cfg = base_cfg[0:2] + dropblock_cfg + base_cfg[2:6]
+ elif self.conv_block_num == 0:
+ if self.spp and i == 0:
+ spp_cfg = [[
+ 'spp', SPP, [c_in * 4, c_in, 1], dict(
+ pool_size=[5, 9, 13],
+ norm_type=norm_type,
+ freeze_norm=freeze_norm)
+ ]]
+ else:
+ spp_cfg = []
+ cfg = spp_cfg + dropblock_cfg + base_cfg
+ name = 'yolo_block.{}'.format(i)
+ yolo_block = self.add_sublayer(name, PPYOLODetBlock(cfg, name))
+ self.yolo_blocks.append(yolo_block)
+ self._out_channels.append(channel * 2)
+ if i < self.num_blocks - 1:
+ name = 'yolo_transition.{}'.format(i)
+ route = self.add_sublayer(
+ name,
+ ConvBNLayer(
+ ch_in=channel,
+ ch_out=256 // (2**i),
+ filter_size=1,
+ stride=1,
+ padding=0,
+ norm_type=norm_type,
+ freeze_norm=freeze_norm,
+ data_format=data_format,
+ name=name))
+ self.routes.append(route)
+
+ def forward(self, blocks, for_mot=False):
+ assert len(blocks) == self.num_blocks
+ blocks = blocks[::-1]
+ yolo_feats = []
+
+ # add embedding features output for multi-object tracking model
+ if for_mot:
+ emb_feats = []
+
+ for i, block in enumerate(blocks):
+ if i > 0:
+ if self.data_format == 'NCHW':
+ block = paddle.concat([route, block], axis=1)
+ else:
+ block = paddle.concat([route, block], axis=-1)
+ route, tip = self.yolo_blocks[i](block)
+ yolo_feats.append(tip)
+
+ if for_mot:
+ # add embedding features output
+ emb_feats.append(route)
+
+ if i < self.num_blocks - 1:
+ route = self.routes[i](route)
+ route = F.interpolate(
+ route, scale_factor=2., data_format=self.data_format)
+
+ if for_mot:
+ return {'yolo_feats': yolo_feats, 'emb_feats': emb_feats}
+ else:
+ return yolo_feats
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
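+
+    # Illustrative note (assumed config, not part of the upstream file): with
+    # the default in_channels=[512, 1024, 2048] (num_blocks = 3), each level i
+    # uses channel = 64 * 2**num_blocks // 2**i = 512, 256, 128 and emits
+    # channel * 2 output channels, so out_shape reports 1024, 512, 256, e.g.
+    #
+    #   fpn = PPYOLOFPN(in_channels=[512, 1024, 2048], spp=True,
+    #                   drop_block=True, conv_block_num=2)
+    #   print([s.channels for s in fpn.out_shape])  # [1024, 512, 256]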
+
+
+@register
+@serializable
+class PPYOLOTinyFPN(nn.Layer):
+ __shared__ = ['norm_type', 'data_format']
+
+ def __init__(self,
+ in_channels=[80, 56, 34],
+ detection_block_channels=[160, 128, 96],
+ norm_type='bn',
+ data_format='NCHW',
+ **kwargs):
+ """
+ PPYOLO Tiny FPN layer
+ Args:
+ in_channels (list): input channels for fpn
+ detection_block_channels (list): channels in fpn
+ norm_type (str): batch norm type, default bn
+ data_format (str): data format, NCHW or NHWC
+            kwargs: extra key-value pairs, such as parameters of DropBlock and spp
+ """
+ super(PPYOLOTinyFPN, self).__init__()
+        assert len(in_channels) > 0, "in_channels length should be > 0"
+ self.in_channels = in_channels[::-1]
+        assert len(detection_block_channels) > 0, \
+            "detection_block_channels length should be > 0"
+ self.detection_block_channels = detection_block_channels
+ self.data_format = data_format
+ self.num_blocks = len(in_channels)
+ # parse kwargs
+ self.drop_block = kwargs.get('drop_block', False)
+ self.block_size = kwargs.get('block_size', 3)
+ self.keep_prob = kwargs.get('keep_prob', 0.9)
+
+ self.spp_ = kwargs.get('spp', False)
+ if self.spp_:
+ self.spp = SPP(self.in_channels[0] * 4,
+ self.in_channels[0],
+ k=1,
+ pool_size=[5, 9, 13],
+ norm_type=norm_type,
+ name='spp')
+
+ self._out_channels = []
+ self.yolo_blocks = []
+ self.routes = []
+ for i, (
+ ch_in, ch_out
+ ) in enumerate(zip(self.in_channels, self.detection_block_channels)):
+ name = 'yolo_block.{}'.format(i)
+ if i > 0:
+ ch_in += self.detection_block_channels[i - 1]
+ yolo_block = self.add_sublayer(
+ name,
+ PPYOLOTinyDetBlock(
+ ch_in,
+ ch_out,
+ name,
+ drop_block=self.drop_block,
+ block_size=self.block_size,
+ keep_prob=self.keep_prob))
+ self.yolo_blocks.append(yolo_block)
+ self._out_channels.append(ch_out)
+
+ if i < self.num_blocks - 1:
+ name = 'yolo_transition.{}'.format(i)
+ route = self.add_sublayer(
+ name,
+ ConvBNLayer(
+ ch_in=ch_out,
+ ch_out=ch_out,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ norm_type=norm_type,
+ data_format=data_format,
+ name=name))
+ self.routes.append(route)
+
+ def forward(self, blocks, for_mot=False):
+ assert len(blocks) == self.num_blocks
+ blocks = blocks[::-1]
+ yolo_feats = []
+
+ # add embedding features output for multi-object tracking model
+ if for_mot:
+ emb_feats = []
+
+ for i, block in enumerate(blocks):
+ if i == 0 and self.spp_:
+ block = self.spp(block)
+
+ if i > 0:
+ if self.data_format == 'NCHW':
+ block = paddle.concat([route, block], axis=1)
+ else:
+ block = paddle.concat([route, block], axis=-1)
+ route, tip = self.yolo_blocks[i](block)
+ yolo_feats.append(tip)
+
+ if for_mot:
+ # add embedding features output
+ emb_feats.append(route)
+
+ if i < self.num_blocks - 1:
+ route = self.routes[i](route)
+ route = F.interpolate(
+ route, scale_factor=2., data_format=self.data_format)
+
+ if for_mot:
+ return {'yolo_feats': yolo_feats, 'emb_feats': emb_feats}
+ else:
+ return yolo_feats
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
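+
+    # Illustrative note (assumed config, not part of the upstream file):
+    # optional pieces such as SPP and DropBlock are passed through **kwargs
+    # rather than named parameters, e.g.
+    #
+    #   fpn = PPYOLOTinyFPN(in_channels=[80, 56, 34],
+    #                       detection_block_channels=[160, 128, 96],
+    #                       spp=True, drop_block=True)
+    #   print([s.channels for s in fpn.out_shape])  # [160, 128, 96]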
+
+
+@register
+@serializable
+class PPYOLOPAN(nn.Layer):
+ __shared__ = ['norm_type', 'data_format']
+
+ def __init__(self,
+ in_channels=[512, 1024, 2048],
+ norm_type='bn',
+ data_format='NCHW',
+ act='mish',
+ conv_block_num=3,
+ drop_block=False,
+ block_size=3,
+ keep_prob=0.9,
+ spp=False):
+ """
+ PPYOLOPAN layer with SPP, DropBlock and CSP connection.
+
+ Args:
+ in_channels (list): input channels for fpn
+ norm_type (str): batch norm type, default bn
+ data_format (str): data format, NCHW or NHWC
+ act (str): activation function, default mish
+ conv_block_num (int): conv block num of each pan block
+ drop_block (bool): whether use DropBlock or not
+ block_size (int): block size of DropBlock
+ keep_prob (float): keep probability of DropBlock
+ spp (bool): whether use spp or not
+
+ """
+ super(PPYOLOPAN, self).__init__()
+        assert len(in_channels) > 0, "in_channels length should be > 0"
+ self.in_channels = in_channels
+ self.num_blocks = len(in_channels)
+ # parse kwargs
+ self.drop_block = drop_block
+ self.block_size = block_size
+ self.keep_prob = keep_prob
+ self.spp = spp
+ self.conv_block_num = conv_block_num
+ self.data_format = data_format
+ if self.drop_block:
+ dropblock_cfg = [[
+ 'dropblock', DropBlock, [self.block_size, self.keep_prob],
+ dict()
+ ]]
+ else:
+ dropblock_cfg = []
+
+ # fpn
+ self.fpn_blocks = []
+ self.fpn_routes = []
+ fpn_channels = []
+ for i, ch_in in enumerate(self.in_channels[::-1]):
+ if i > 0:
+ ch_in += 512 // (2**(i - 1))
+ channel = 512 // (2**i)
+ base_cfg = []
+ for j in range(self.conv_block_num):
+ base_cfg += [
+ # name, layer, args
+ [
+ '{}.0'.format(j), ConvBNLayer, [channel, channel, 1],
+ dict(
+ padding=0, act=act, norm_type=norm_type)
+ ],
+ [
+ '{}.1'.format(j), ConvBNLayer, [channel, channel, 3],
+ dict(
+ padding=1, act=act, norm_type=norm_type)
+ ]
+ ]
+
+ if i == 0 and self.spp:
+ base_cfg[3] = [
+ 'spp', SPP, [channel * 4, channel, 1], dict(
+ pool_size=[5, 9, 13], act=act, norm_type=norm_type)
+ ]
+
+ cfg = base_cfg[:4] + dropblock_cfg + base_cfg[4:]
+ name = 'fpn.{}'.format(i)
+ fpn_block = self.add_sublayer(
+ name,
+ PPYOLODetBlockCSP(cfg, ch_in, channel, act, norm_type, name,
+ data_format))
+ self.fpn_blocks.append(fpn_block)
+ fpn_channels.append(channel * 2)
+ if i < self.num_blocks - 1:
+ name = 'fpn_transition.{}'.format(i)
+ route = self.add_sublayer(
+ name,
+ ConvBNLayer(
+ ch_in=channel * 2,
+ ch_out=channel,
+ filter_size=1,
+ stride=1,
+ padding=0,
+ act=act,
+ norm_type=norm_type,
+ data_format=data_format,
+ name=name))
+ self.fpn_routes.append(route)
+ # pan
+ self.pan_blocks = []
+ self.pan_routes = []
+ self._out_channels = [512 // (2**(self.num_blocks - 2)), ]
+ for i in reversed(range(self.num_blocks - 1)):
+ name = 'pan_transition.{}'.format(i)
+ route = self.add_sublayer(
+ name,
+ ConvBNLayer(
+ ch_in=fpn_channels[i + 1],
+ ch_out=fpn_channels[i + 1],
+ filter_size=3,
+ stride=2,
+ padding=1,
+ act=act,
+ norm_type=norm_type,
+ data_format=data_format,
+ name=name))
+ self.pan_routes = [route, ] + self.pan_routes
+ base_cfg = []
+ ch_in = fpn_channels[i] + fpn_channels[i + 1]
+ channel = 512 // (2**i)
+ for j in range(self.conv_block_num):
+ base_cfg += [
+ # name, layer, args
+ [
+ '{}.0'.format(j), ConvBNLayer, [channel, channel, 1],
+ dict(
+ padding=0, act=act, norm_type=norm_type)
+ ],
+ [
+ '{}.1'.format(j), ConvBNLayer, [channel, channel, 3],
+ dict(
+ padding=1, act=act, norm_type=norm_type)
+ ]
+ ]
+
+ cfg = base_cfg[:4] + dropblock_cfg + base_cfg[4:]
+ name = 'pan.{}'.format(i)
+ pan_block = self.add_sublayer(
+ name,
+ PPYOLODetBlockCSP(cfg, ch_in, channel, act, norm_type, name,
+ data_format))
+
+ self.pan_blocks = [pan_block, ] + self.pan_blocks
+ self._out_channels.append(channel * 2)
+
+ self._out_channels = self._out_channels[::-1]
+
+ def forward(self, blocks, for_mot=False):
+ assert len(blocks) == self.num_blocks
+ blocks = blocks[::-1]
+ fpn_feats = []
+
+ # add embedding features output for multi-object tracking model
+ if for_mot:
+ emb_feats = []
+
+ for i, block in enumerate(blocks):
+ if i > 0:
+ if self.data_format == 'NCHW':
+ block = paddle.concat([route, block], axis=1)
+ else:
+ block = paddle.concat([route, block], axis=-1)
+ route, tip = self.fpn_blocks[i](block)
+ fpn_feats.append(tip)
+
+ if for_mot:
+ # add embedding features output
+ emb_feats.append(route)
+
+ if i < self.num_blocks - 1:
+ route = self.fpn_routes[i](route)
+ route = F.interpolate(
+ route, scale_factor=2., data_format=self.data_format)
+
+ pan_feats = [fpn_feats[-1], ]
+ route = fpn_feats[self.num_blocks - 1]
+ for i in reversed(range(self.num_blocks - 1)):
+ block = fpn_feats[i]
+ route = self.pan_routes[i](route)
+ if self.data_format == 'NCHW':
+ block = paddle.concat([route, block], axis=1)
+ else:
+ block = paddle.concat([route, block], axis=-1)
+
+ route, tip = self.pan_blocks[i](block)
+ pan_feats.append(tip)
+
+ if for_mot:
+ return {'yolo_feats': pan_feats[::-1], 'emb_feats': emb_feats}
+ else:
+ return pan_feats[::-1]
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'in_channels': [i.channels for i in input_shape], }
+
+ @property
+ def out_shape(self):
+ return [ShapeSpec(channels=c) for c in self._out_channels]
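+
+# Illustrative note (assumed config, not part of the upstream file): PPYOLOPAN
+# chains a top-down FPN pass with a bottom-up PAN pass, so its outputs mirror
+# the other necks above, deepest level first, e.g.
+#
+#   pan = PPYOLOPAN(in_channels=[512, 1024, 2048], spp=True, drop_block=True)
+#   print([s.channels for s in pan.out_shape])  # [1024, 512, 256]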
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/ops.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/ops.py
new file mode 100644
index 000000000..593d8dd37
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/ops.py
@@ -0,0 +1,1601 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn.functional as F
+import paddle.nn as nn
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+
+from paddle.fluid.framework import Variable, in_dygraph_mode
+from paddle.fluid import core
+from paddle.fluid.layer_helper import LayerHelper
+from paddle.fluid.data_feeder import check_variable_and_dtype, check_type, check_dtype
+
+__all__ = [
+ 'roi_pool',
+ 'roi_align',
+ 'prior_box',
+ 'generate_proposals',
+ 'iou_similarity',
+ 'box_coder',
+ 'yolo_box',
+ 'multiclass_nms',
+ 'distribute_fpn_proposals',
+ 'collect_fpn_proposals',
+ 'matrix_nms',
+ 'batch_norm',
+ 'mish',
+]
+
+
+def mish(x):
+ return x * paddle.tanh(F.softplus(x))
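+
+# Illustrative note (not part of the upstream file): mish is the smooth
+# activation mish(x) = x * tanh(softplus(x)); mish(0) == 0 and mish(x) -> x
+# for large positive x, e.g. mish(paddle.to_tensor([0.])) evaluates to [0.].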
+
+
+def batch_norm(ch,
+ norm_type='bn',
+ norm_decay=0.,
+ freeze_norm=False,
+ initializer=None,
+ data_format='NCHW'):
+ if norm_type == 'sync_bn':
+ batch_norm = nn.SyncBatchNorm
+ else:
+ batch_norm = nn.BatchNorm2D
+
+ norm_lr = 0. if freeze_norm else 1.
+ weight_attr = ParamAttr(
+ initializer=initializer,
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+ bias_attr = ParamAttr(
+ learning_rate=norm_lr,
+ regularizer=L2Decay(norm_decay),
+ trainable=False if freeze_norm else True)
+
+ norm_layer = batch_norm(
+ ch,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr,
+ data_format=data_format)
+
+ norm_params = norm_layer.parameters()
+ if freeze_norm:
+ for param in norm_params:
+ param.stop_gradient = True
+
+ return norm_layer
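+
+# Illustrative usage sketch (assumed values, not part of the upstream file):
+# build a BatchNorm2D whose scale/bias take no gradient and no weight decay,
+# as done elsewhere in this module when freeze_norm=True, e.g.
+#
+#   bn = batch_norm(64, norm_type='bn', norm_decay=0., freeze_norm=True)
+#   y = bn(paddle.rand([1, 64, 32, 32]))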
+
+
+@paddle.jit.not_to_static
+def roi_pool(input,
+ rois,
+ output_size,
+ spatial_scale=1.0,
+ rois_num=None,
+ name=None):
+ """
+
+ This operator implements the roi_pooling layer.
+ Region of interest pooling (also known as RoI pooling) is to perform max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).
+
+ The operator has three steps:
+
+ 1. Dividing each region proposal into equal-sized sections with output_size(h, w);
+ 2. Finding the largest value in each section;
+ 3. Copying these max values to the output buffer.
+
+ For more information, please refer to https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn
+
+ Args:
+        input (Tensor): Input feature, 4D-Tensor with the shape of [N,C,H,W],
+            where N is the batch size, C is the input channel, H is the height,
+            and W is the width. The data type is float32 or float64.
+ rois (Tensor): ROIs (Regions of Interest) to pool over.
+ 2D-Tensor or 2D-LoDTensor with the shape of [num_rois,4], the lod level is 1.
+ Given as [[x1, y1, x2, y2], ...], (x1, y1) is the top left coordinates,
+ and (x2, y2) is the bottom right coordinates.
+ output_size (int or tuple[int, int]): The pooled output size(h, w), data type is int32. If int, h and w are both equal to output_size.
+ spatial_scale (float, optional): Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling. Default: 1.0
+ rois_num (Tensor): The number of RoIs in each image. Default: None
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually this parameter does not need to
+            be set; it is None by default.
+
+
+ Returns:
+ Tensor: The pooled feature, 4D-Tensor with the shape of [num_rois, C, output_size[0], output_size[1]].
+
+
+ Examples:
+
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+
+ x = paddle.static.data(
+ name='data', shape=[None, 256, 32, 32], dtype='float32')
+ rois = paddle.static.data(
+ name='rois', shape=[None, 4], dtype='float32')
+ rois_num = paddle.static.data(name='rois_num', shape=[None], dtype='int32')
+
+ pool_out = ops.roi_pool(
+ input=x,
+ rois=rois,
+ output_size=(1, 1),
+ spatial_scale=1.0,
+ rois_num=rois_num)
+ """
+ check_type(output_size, 'output_size', (int, tuple), 'roi_pool')
+ if isinstance(output_size, int):
+ output_size = (output_size, output_size)
+
+ pooled_height, pooled_width = output_size
+ if in_dygraph_mode():
+ assert rois_num is not None, "rois_num should not be None in dygraph mode."
+ pool_out, argmaxes = core.ops.roi_pool(
+ input, rois, rois_num, "pooled_height", pooled_height,
+ "pooled_width", pooled_width, "spatial_scale", spatial_scale)
+ return pool_out, argmaxes
+
+ else:
+ check_variable_and_dtype(input, 'input', ['float32'], 'roi_pool')
+ check_variable_and_dtype(rois, 'rois', ['float32'], 'roi_pool')
+ helper = LayerHelper('roi_pool', **locals())
+ dtype = helper.input_dtype()
+ pool_out = helper.create_variable_for_type_inference(dtype)
+ argmaxes = helper.create_variable_for_type_inference(dtype='int32')
+
+ inputs = {
+ "X": input,
+ "ROIs": rois,
+ }
+ if rois_num is not None:
+ inputs['RoisNum'] = rois_num
+ helper.append_op(
+ type="roi_pool",
+ inputs=inputs,
+ outputs={"Out": pool_out,
+ "Argmax": argmaxes},
+ attrs={
+ "pooled_height": pooled_height,
+ "pooled_width": pooled_width,
+ "spatial_scale": spatial_scale
+ })
+ return pool_out, argmaxes
+
+
+@paddle.jit.not_to_static
+def roi_align(input,
+ rois,
+ output_size,
+ spatial_scale=1.0,
+ sampling_ratio=-1,
+ rois_num=None,
+ aligned=True,
+ name=None):
+ """
+
+ Region of interest align (also known as RoI align) is to perform
+ bilinear interpolation on inputs of nonuniform sizes to obtain
+ fixed-size feature maps (e.g. 7*7)
+
+    Each region proposal is divided into equal-sized sections with
+    the pooled_width and pooled_height; locations keep their original
+    positions without quantization.
+
+    In each ROI bin, the values of the four regularly sampled locations
+    are computed directly through bilinear interpolation, and the output is
+    the mean of the four locations.
+    This avoids the misalignment problem.
+
+ Args:
+        input (Tensor): Input feature, 4D-Tensor with the shape of [N,C,H,W],
+            where N is the batch size, C is the input channel, H is the height,
+            and W is the width. The data type is float32 or float64.
+        rois (Tensor): ROIs (Regions of Interest) to pool over. It should be
+ a 2-D Tensor or 2-D LoDTensor of shape (num_rois, 4), the lod level is 1.
+ The data type is float32 or float64. Given as [[x1, y1, x2, y2], ...],
+ (x1, y1) is the top left coordinates, and (x2, y2) is the bottom right coordinates.
+ output_size (int or tuple[int, int]): The pooled output size(h, w), data type is int32. If int, h and w are both equal to output_size.
+ spatial_scale (float32, optional): Multiplicative spatial scale factor to translate ROI coords
+ from their input scale to the scale used when pooling. Default: 1.0
+ sampling_ratio(int32, optional): number of sampling points in the interpolation grid.
+ If <=0, then grid points are adaptive to roi_width and pooled_w, likewise for height. Default: -1
+ rois_num (Tensor): The number of RoIs in each image. Default: None
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually this parameter does not need to
+            be set; it is None by default.
+
+ Returns:
+ Tensor:
+
+ Output: The output of ROIAlignOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w). The data type is float32 or float64.
+
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+
+ x = paddle.static.data(
+ name='data', shape=[None, 256, 32, 32], dtype='float32')
+ rois = paddle.static.data(
+ name='rois', shape=[None, 4], dtype='float32')
+ rois_num = paddle.static.data(name='rois_num', shape=[None], dtype='int32')
+ align_out = ops.roi_align(input=x,
+ rois=rois,
+                                  output_size=(7, 7),
+ spatial_scale=0.5,
+ sampling_ratio=-1,
+ rois_num=rois_num)
+ """
+ check_type(output_size, 'output_size', (int, tuple), 'roi_align')
+ if isinstance(output_size, int):
+ output_size = (output_size, output_size)
+
+ pooled_height, pooled_width = output_size
+
+ if in_dygraph_mode():
+ assert rois_num is not None, "rois_num should not be None in dygraph mode."
+ align_out = core.ops.roi_align(
+ input, rois, rois_num, "pooled_height", pooled_height,
+ "pooled_width", pooled_width, "spatial_scale", spatial_scale,
+ "sampling_ratio", sampling_ratio, "aligned", aligned)
+ return align_out
+
+ else:
+ check_variable_and_dtype(input, 'input', ['float32', 'float64'],
+ 'roi_align')
+ check_variable_and_dtype(rois, 'rois', ['float32', 'float64'],
+ 'roi_align')
+ helper = LayerHelper('roi_align', **locals())
+ dtype = helper.input_dtype()
+ align_out = helper.create_variable_for_type_inference(dtype)
+ inputs = {
+ "X": input,
+ "ROIs": rois,
+ }
+ if rois_num is not None:
+ inputs['RoisNum'] = rois_num
+ helper.append_op(
+ type="roi_align",
+ inputs=inputs,
+ outputs={"Out": align_out},
+ attrs={
+ "pooled_height": pooled_height,
+ "pooled_width": pooled_width,
+ "spatial_scale": spatial_scale,
+ "sampling_ratio": sampling_ratio,
+ "aligned": aligned,
+ })
+ return align_out
+
+
+@paddle.jit.not_to_static
+def iou_similarity(x, y, box_normalized=True, name=None):
+ """
+ Computes intersection-over-union (IOU) between two box lists.
+    Box list 'X' should be a LoDTensor and 'Y' is a common Tensor;
+    boxes in 'Y' are shared by all instances of the batched inputs of X.
+ Given two boxes A and B, the calculation of IOU is as follows:
+
+ $$
+ IOU(A, B) =
+ \\frac{area(A\\cap B)}{area(A)+area(B)-area(A\\cap B)}
+ $$
+
+ Args:
+ x (Tensor): Box list X is a 2-D Tensor with shape [N, 4] holds N
+ boxes, each box is represented as [xmin, ymin, xmax, ymax],
+ the shape of X is [N, 4]. [xmin, ymin] is the left top
+ coordinate of the box if the input is image feature map, they
+ are close to the origin of the coordinate system.
+ [xmax, ymax] is the right bottom coordinate of the box.
+ The data type is float32 or float64.
+        y (Tensor): Box list Y holds M boxes, each box is represented as
+            [xmin, ymin, xmax, ymax], the shape of Y is [M, 4].
+ [xmin, ymin] is the left top coordinate of the box if the
+ input is image feature map, and [xmax, ymax] is the right
+ bottom coordinate of the box. The data type is float32 or float64.
+ box_normalized(bool): Whether treat the priorbox as a normalized box.
+ Set true by default.
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually this parameter does not need to
+            be set; it is None by default.
+
+ Returns:
+ Tensor: The output of iou_similarity op, a tensor with shape [N, M]
+            representing pairwise iou scores. The data type is the same as x.
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+
+ x = paddle.static.data(name='x', shape=[None, 4], dtype='float32')
+ y = paddle.static.data(name='y', shape=[None, 4], dtype='float32')
+ iou = ops.iou_similarity(x=x, y=y)
+ """
+
+ if in_dygraph_mode():
+ out = core.ops.iou_similarity(x, y, 'box_normalized', box_normalized)
+ return out
+ else:
+ helper = LayerHelper("iou_similarity", **locals())
+ out = helper.create_variable_for_type_inference(dtype=x.dtype)
+
+ helper.append_op(
+ type="iou_similarity",
+ inputs={"X": x,
+ "Y": y},
+ attrs={"box_normalized": box_normalized},
+ outputs={"Out": out})
+ return out
+
+
+@paddle.jit.not_to_static
+def collect_fpn_proposals(multi_rois,
+ multi_scores,
+ min_level,
+ max_level,
+ post_nms_top_n,
+ rois_num_per_level=None,
+ name=None):
+ """
+
+ **This OP only supports LoDTensor as input**. Concat multi-level RoIs
+ (Region of Interest) and select N RoIs with respect to multi_scores.
+ This operation performs the following steps:
+
+    1. Choose num_level RoIs and scores as input: num_level = max_level - min_level + 1
+ 2. Concat multi-level RoIs and scores
+ 3. Sort scores and select post_nms_top_n scores
+ 4. Gather RoIs by selected indices from scores
+ 5. Re-sort RoIs by corresponding batch_id
+
+ Args:
+ multi_rois(list): List of RoIs to collect. Element in list is 2-D
+ LoDTensor with shape [N, 4] and data type is float32 or float64,
+ N is the number of RoIs.
+ multi_scores(list): List of scores of RoIs to collect. Element in list
+ is 2-D LoDTensor with shape [N, 1] and data type is float32 or
+ float64, N is the number of RoIs.
+ min_level(int): The lowest level of FPN layer to collect
+ max_level(int): The highest level of FPN layer to collect
+ post_nms_top_n(int): The number of selected RoIs
+ rois_num_per_level(list, optional): The List of RoIs' numbers.
+ Each element is 1-D Tensor which contains the RoIs' number of each
+ image on each level and the shape is [B] and data type is
+ int32, B is the number of images. If it is not None then return
+ a 1-D Tensor contains the output RoIs' number of each image and
+ the shape is [B]. Default: None
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually this parameter does not need to
+            be set; it is None by default.
+
+ Returns:
+ Variable:
+
+ fpn_rois(Variable): 2-D LoDTensor with shape [N, 4] and data type is
+ float32 or float64. Selected RoIs.
+
+        rois_num(Tensor): 1-D Tensor that contains the RoIs' number of each
+ image. The shape is [B] and data type is int32. B is the number of
+ images.
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+ multi_rois = []
+ multi_scores = []
+ for i in range(4):
+ multi_rois.append(paddle.static.data(
+ name='roi_'+str(i), shape=[None, 4], dtype='float32', lod_level=1))
+ for i in range(4):
+ multi_scores.append(paddle.static.data(
+ name='score_'+str(i), shape=[None, 1], dtype='float32', lod_level=1))
+
+            fpn_rois, rois_num = ops.collect_fpn_proposals(
+ multi_rois=multi_rois,
+ multi_scores=multi_scores,
+ min_level=2,
+ max_level=5,
+ post_nms_top_n=2000)
+ """
+ check_type(multi_rois, 'multi_rois', list, 'collect_fpn_proposals')
+ check_type(multi_scores, 'multi_scores', list, 'collect_fpn_proposals')
+ num_lvl = max_level - min_level + 1
+ input_rois = multi_rois[:num_lvl]
+ input_scores = multi_scores[:num_lvl]
+
+ if in_dygraph_mode():
+ assert rois_num_per_level is not None, "rois_num_per_level should not be None in dygraph mode."
+ attrs = ('post_nms_topN', post_nms_top_n)
+ output_rois, rois_num = core.ops.collect_fpn_proposals(
+ input_rois, input_scores, rois_num_per_level, *attrs)
+ return output_rois, rois_num
+
+ else:
+ helper = LayerHelper('collect_fpn_proposals', **locals())
+ dtype = helper.input_dtype('multi_rois')
+ check_dtype(dtype, 'multi_rois', ['float32', 'float64'],
+ 'collect_fpn_proposals')
+ output_rois = helper.create_variable_for_type_inference(dtype)
+ output_rois.stop_gradient = True
+
+ inputs = {
+ 'MultiLevelRois': input_rois,
+ 'MultiLevelScores': input_scores,
+ }
+        outputs = {'FpnRois': output_rois}
+        # rois_num is only created when per-level RoI numbers are provided
+        rois_num = None
+        if rois_num_per_level is not None:
+ inputs['MultiLevelRoIsNum'] = rois_num_per_level
+ rois_num = helper.create_variable_for_type_inference(dtype='int32')
+ rois_num.stop_gradient = True
+ outputs['RoisNum'] = rois_num
+ helper.append_op(
+ type='collect_fpn_proposals',
+ inputs=inputs,
+ outputs=outputs,
+ attrs={'post_nms_topN': post_nms_top_n})
+ return output_rois, rois_num
+
+
+@paddle.jit.not_to_static
+def distribute_fpn_proposals(fpn_rois,
+ min_level,
+ max_level,
+ refer_level,
+ refer_scale,
+ pixel_offset=False,
+ rois_num=None,
+ name=None):
+ """
+
+ **This op only takes LoDTensor as input.** In Feature Pyramid Networks
+    (FPN) models, all proposals need to be distributed to different FPN
+    levels according to the scale of each proposal, the referring scale and
+    the referring level. Besides, to restore the order of proposals, we return
+    an array which indicates the original index of the rois in the current
+    proposals. To compute the FPN level for each roi, the formula is given as
+    follows:
+
+    .. math::
+
+        roi\_scale &= \sqrt{BBoxArea(fpn\_roi)}
+
+        level &= floor(\log(\frac{roi\_scale}{refer\_scale}) + refer\_level)
+
+ where BBoxArea is a function to compute the area of each roi.
+
+ Args:
+
+ fpn_rois(Variable): 2-D Tensor with shape [N, 4] and data type is
+ float32 or float64. The input fpn_rois.
+ min_level(int32): The lowest level of FPN layer where the proposals come
+ from.
+ max_level(int32): The highest level of FPN layer where the proposals
+ come from.
+ refer_level(int32): The referring level of FPN layer with specified scale.
+ refer_scale(int32): The referring scale of FPN layer with specified level.
+ rois_num(Tensor): 1-D Tensor contains the number of RoIs in each image.
+ The shape is [B] and data type is int32. B is the number of images.
+ If it is not None then return a list of 1-D Tensor. Each element
+ is the output RoIs' number of each image on the corresponding level
+ and the shape is [B]. None by default.
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually this parameter does not need to
+            be set; it is None by default.
+
+ Returns:
+ Tuple:
+
+ multi_rois(List) : A list of 2-D LoDTensor with shape [M, 4]
+            and data type of float32 or float64. The length is
+ max_level-min_level+1. The proposals in each FPN level.
+
+ restore_ind(Variable): A 2-D Tensor with shape [N, 1], N is
+ the number of total rois. The data type is int32. It is
+ used to restore the order of fpn_rois.
+
+ rois_num_per_level(List): A list of 1-D Tensor and each Tensor is
+ the RoIs' number in each image on the corresponding level. The shape
+            is [B] and data type of int32. B is the number of images.
+
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+ fpn_rois = paddle.static.data(
+ name='data', shape=[None, 4], dtype='float32', lod_level=1)
+            multi_rois, restore_ind, rois_num_per_level = ops.distribute_fpn_proposals(
+ fpn_rois=fpn_rois,
+ min_level=2,
+ max_level=5,
+ refer_level=4,
+ refer_scale=224)
+ """
+ num_lvl = max_level - min_level + 1
+
+ if in_dygraph_mode():
+ assert rois_num is not None, "rois_num should not be None in dygraph mode."
+ attrs = ('min_level', min_level, 'max_level', max_level, 'refer_level',
+ refer_level, 'refer_scale', refer_scale, 'pixel_offset',
+ pixel_offset)
+ multi_rois, restore_ind, rois_num_per_level = core.ops.distribute_fpn_proposals(
+ fpn_rois, rois_num, num_lvl, num_lvl, *attrs)
+ return multi_rois, restore_ind, rois_num_per_level
+
+ else:
+ check_variable_and_dtype(fpn_rois, 'fpn_rois', ['float32', 'float64'],
+ 'distribute_fpn_proposals')
+ helper = LayerHelper('distribute_fpn_proposals', **locals())
+ dtype = helper.input_dtype('fpn_rois')
+ multi_rois = [
+ helper.create_variable_for_type_inference(dtype)
+ for i in range(num_lvl)
+ ]
+
+ restore_ind = helper.create_variable_for_type_inference(dtype='int32')
+
+ inputs = {'FpnRois': fpn_rois}
+ outputs = {
+ 'MultiFpnRois': multi_rois,
+ 'RestoreIndex': restore_ind,
+ }
+
+        # rois_num_per_level is only created when rois_num is provided
+        rois_num_per_level = None
+        if rois_num is not None:
+ inputs['RoisNum'] = rois_num
+ rois_num_per_level = [
+ helper.create_variable_for_type_inference(dtype='int32')
+ for i in range(num_lvl)
+ ]
+ outputs['MultiLevelRoIsNum'] = rois_num_per_level
+
+ helper.append_op(
+ type='distribute_fpn_proposals',
+ inputs=inputs,
+ outputs=outputs,
+ attrs={
+ 'min_level': min_level,
+ 'max_level': max_level,
+ 'refer_level': refer_level,
+ 'refer_scale': refer_scale,
+ 'pixel_offset': pixel_offset
+ })
+ return multi_rois, restore_ind, rois_num_per_level
+
+
+@paddle.jit.not_to_static
+def yolo_box(
+ x,
+ origin_shape,
+ anchors,
+ class_num,
+ conf_thresh,
+ downsample_ratio,
+ clip_bbox=True,
+ scale_x_y=1.,
+ name=None, ):
+ """
+
+ This operator generates YOLO detection boxes from output of YOLOv3 network.
+
+    The output of the previous network is in shape [N, C, H, W], where H and W
+    should be the same and specify the grid size. Each grid point predicts a
+    given number of boxes; this number, represented below as S, is specified by
+    the number of anchors. In the second dimension (the channel dimension),
+    C should be equal to S * (5 + class_num), where class_num is the object
+    category number of the source dataset (such as 80 in the COCO dataset); so
+    the second (channel) dimension, apart from the 4 box location coordinates
+    x, y, w, h, also includes the confidence score of the box and the class
+    one-hot key of each anchor box.
+ Assume the 4 location coordinates are :math:`t_x, t_y, t_w, t_h`, the box
+ predictions should be as follows:
+ $$
+ b_x = \\sigma(t_x) + c_x
+ $$
+ $$
+ b_y = \\sigma(t_y) + c_y
+ $$
+ $$
+ b_w = p_w e^{t_w}
+ $$
+ $$
+ b_h = p_h e^{t_h}
+ $$
+ in the equation above, :math:`c_x, c_y` is the left top corner of current grid
+ and :math:`p_w, p_h` is specified by anchors.
+    The logistic regression value of the 5th channel of each anchor prediction box
+    represents the confidence score of each prediction box, and the logistic
+    regression value of the last :attr:`class_num` channels of each anchor prediction
+    box represents the classification scores. Boxes with confidence scores less than
+    :attr:`conf_thresh` should be ignored, and the final box score is the product of
+    the confidence score and the classification score.
+ $$
+ score_{pred} = score_{conf} * score_{class}
+ $$
+
+ Args:
+ x (Tensor): The input tensor of YoloBox operator is a 4-D tensor with shape of [N, C, H, W].
+ The second dimension(C) stores box locations, confidence score and
+ classification one-hot keys of each anchor box. Generally, X should be the output of YOLOv3 network.
+ The data type is float32 or float64.
+        origin_shape (Tensor): The image size tensor of the YoloBox operator. This is a 2-D tensor with shape of [N, 2].
+            This tensor holds the height and width of each input image, used for resizing the output boxes to the
+            input image scale. The data type is int32.
+ anchors (list|tuple): The anchor width and height, it will be parsed pair by pair.
+ class_num (int): The number of classes to predict.
+ conf_thresh (float): The confidence scores threshold of detection boxes. Boxes with confidence scores
+ under threshold should be ignored.
+        downsample_ratio (int): The downsample ratio from network input to YoloBox operator input,
+            so 32, 16, 8 should be set for the first, second, and third YoloBox operators.
+        clip_bbox (bool): Whether to clip the output bounding box within the Input(ImgSize) boundary. Default: true.
+        scale_x_y (float): Scale the center point of the decoded bounding box. Default: 1.0.
+        name (string): The default value is None. Normally there is no need
+            for the user to set this property. For more information,
+            please refer to :ref:`api_guide_Name`.
+
+ Returns:
+        boxes Tensor: A 3-D tensor with shape [N, M, 4], the coordinates of boxes, where N is the batch num,
+            M is the output box number, and the 3rd dimension stores the [xmin, ymin, xmax, ymax] coordinates of boxes.
+        scores Tensor: A 3-D tensor with shape [N, M, :attr:`class_num`], the classification scores of boxes, where N
+            is the batch num and M is the output box number.
+
+ Raises:
+ TypeError: Attr anchors of yolo box must be list or tuple
+ TypeError: Attr class_num of yolo box must be an integer
+ TypeError: Attr conf_thresh of yolo box must be a float number
+
+ Examples:
+
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+
+ paddle.enable_static()
+ x = paddle.static.data(name='x', shape=[None, 255, 13, 13], dtype='float32')
+ img_size = paddle.static.data(name='img_size',shape=[None, 2],dtype='int64')
+ anchors = [10, 13, 16, 30, 33, 23]
+            boxes, scores = ops.yolo_box(x=x, origin_shape=img_size, class_num=80, anchors=anchors,
+                                         conf_thresh=0.01, downsample_ratio=32)
+ """
+ helper = LayerHelper('yolo_box', **locals())
+
+ if not isinstance(anchors, list) and not isinstance(anchors, tuple):
+ raise TypeError("Attr anchors of yolo_box must be list or tuple")
+ if not isinstance(class_num, int):
+ raise TypeError("Attr class_num of yolo_box must be an integer")
+ if not isinstance(conf_thresh, float):
+ raise TypeError("Attr ignore_thresh of yolo_box must be a float number")
+
+ if in_dygraph_mode():
+ attrs = ('anchors', anchors, 'class_num', class_num, 'conf_thresh',
+ conf_thresh, 'downsample_ratio', downsample_ratio, 'clip_bbox',
+ clip_bbox, 'scale_x_y', scale_x_y)
+ boxes, scores = core.ops.yolo_box(x, origin_shape, *attrs)
+ return boxes, scores
+ else:
+ boxes = helper.create_variable_for_type_inference(dtype=x.dtype)
+ scores = helper.create_variable_for_type_inference(dtype=x.dtype)
+
+ attrs = {
+ "anchors": anchors,
+ "class_num": class_num,
+ "conf_thresh": conf_thresh,
+ "downsample_ratio": downsample_ratio,
+ "clip_bbox": clip_bbox,
+ "scale_x_y": scale_x_y,
+ }
+
+ helper.append_op(
+ type='yolo_box',
+ inputs={
+ "X": x,
+ "ImgSize": origin_shape,
+ },
+ outputs={
+ 'Boxes': boxes,
+ 'Scores': scores,
+ },
+ attrs=attrs)
+ return boxes, scores
+
+
+@paddle.jit.not_to_static
+def prior_box(input,
+ image,
+ min_sizes,
+ max_sizes=None,
+ aspect_ratios=[1.],
+ variance=[0.1, 0.1, 0.2, 0.2],
+ flip=False,
+ clip=False,
+ steps=[0.0, 0.0],
+ offset=0.5,
+ min_max_aspect_ratios_order=False,
+ name=None):
+ """
+
+    This op generates prior boxes for the SSD (Single Shot MultiBox Detector) algorithm.
+    Each position of the input produces N prior boxes, where N is determined by
+    the count of min_sizes, max_sizes and aspect_ratios. The size of each
+    box is within the [min_size, max_size] interval, and the boxes are generated
+    in sequence according to the aspect_ratios.
+
+ Parameters:
+ input(Tensor): 4-D tensor(NCHW), the data type should be float32 or float64.
+ image(Tensor): 4-D tensor(NCHW), the input image data of PriorBoxOp,
+ the data type should be float32 or float64.
+ min_sizes(list|tuple|float): the min sizes of generated prior boxes.
+ max_sizes(list|tuple|None): the max sizes of generated prior boxes.
+ Default: None.
+ aspect_ratios(list|tuple|float): the aspect ratios of generated
+ prior boxes. Default: [1.].
+ variance(list|tuple): the variances to be encoded in prior boxes.
+ Default:[0.1, 0.1, 0.2, 0.2].
+ flip(bool): Whether to flip aspect ratios. Default:False.
+ clip(bool): Whether to clip out-of-boundary boxes. Default: False.
+        steps(list|tuple): Prior box steps across width and height. If
+            steps[0] equals 0.0 or steps[1] equals 0.0, the prior box steps across
+            the height or width of the input will be automatically calculated.
+            Default: [0., 0.]
+ offset(float): Prior boxes center offset. Default: 0.5
+        min_max_aspect_ratios_order(bool): If set to True, the output prior boxes are
+            in the order of [min, max, aspect_ratios], which is consistent with
+            Caffe. Please note, this order affects the weight order of the
+            convolution layer that follows, but it does not affect the final
+            detection results. Default: False.
+        name(str, optional): The default value is None. Normally there is no need for
+            the user to set this property. For more information, please refer to :ref:`api_guide_Name`.
+
+ Returns:
+ Tuple: A tuple with two Variable (boxes, variances)
+
+ boxes(Tensor): the output prior boxes of PriorBox.
+ 4-D tensor, the layout is [H, W, num_priors, 4].
+ H is the height of input, W is the width of input,
+ num_priors is the total box count of each position of input.
+
+ variances(Tensor): the expanded variances of PriorBox.
+            4-D tensor, the layout is [H, W, num_priors, 4].
+            H is the height of input, W is the width of input,
+            num_priors is the total box count of each position of input.
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+
+ paddle.enable_static()
+ input = paddle.static.data(name="input", shape=[None,3,6,9])
+ image = paddle.static.data(name="image", shape=[None,3,9,12])
+ box, var = ops.prior_box(
+ input=input,
+ image=image,
+ min_sizes=[100.],
+ clip=True,
+ flip=True)
+ """
+ helper = LayerHelper("prior_box", **locals())
+ dtype = helper.input_dtype()
+ check_variable_and_dtype(
+ input, 'input', ['uint8', 'int8', 'float32', 'float64'], 'prior_box')
+
+ def _is_list_or_tuple_(data):
+ return (isinstance(data, list) or isinstance(data, tuple))
+
+ if not _is_list_or_tuple_(min_sizes):
+ min_sizes = [min_sizes]
+ if not _is_list_or_tuple_(aspect_ratios):
+ aspect_ratios = [aspect_ratios]
+ if not (_is_list_or_tuple_(steps) and len(steps) == 2):
+        raise ValueError('steps should be a list or tuple '
+                         'with length 2, (step_width, step_height).')
+
+ min_sizes = list(map(float, min_sizes))
+ aspect_ratios = list(map(float, aspect_ratios))
+ steps = list(map(float, steps))
+
+ cur_max_sizes = None
+ if max_sizes is not None and len(max_sizes) > 0 and max_sizes[0] > 0:
+ if not _is_list_or_tuple_(max_sizes):
+ max_sizes = [max_sizes]
+ cur_max_sizes = max_sizes
+
+ if in_dygraph_mode():
+ attrs = ('min_sizes', min_sizes, 'aspect_ratios', aspect_ratios,
+ 'variances', variance, 'flip', flip, 'clip', clip, 'step_w',
+ steps[0], 'step_h', steps[1], 'offset', offset,
+ 'min_max_aspect_ratios_order', min_max_aspect_ratios_order)
+ if cur_max_sizes is not None:
+ attrs += ('max_sizes', cur_max_sizes)
+ box, var = core.ops.prior_box(input, image, *attrs)
+ return box, var
+ else:
+ attrs = {
+ 'min_sizes': min_sizes,
+ 'aspect_ratios': aspect_ratios,
+ 'variances': variance,
+ 'flip': flip,
+ 'clip': clip,
+ 'step_w': steps[0],
+ 'step_h': steps[1],
+ 'offset': offset,
+ 'min_max_aspect_ratios_order': min_max_aspect_ratios_order
+ }
+
+ if cur_max_sizes is not None:
+ attrs['max_sizes'] = cur_max_sizes
+
+ box = helper.create_variable_for_type_inference(dtype)
+ var = helper.create_variable_for_type_inference(dtype)
+ helper.append_op(
+ type="prior_box",
+ inputs={"Input": input,
+ "Image": image},
+ outputs={"Boxes": box,
+ "Variances": var},
+ attrs=attrs, )
+ box.stop_gradient = True
+ var.stop_gradient = True
+ return box, var
+
+
+@paddle.jit.not_to_static
+def multiclass_nms(bboxes,
+ scores,
+ score_threshold,
+ nms_top_k,
+ keep_top_k,
+ nms_threshold=0.3,
+ normalized=True,
+ nms_eta=1.,
+ background_label=-1,
+ return_index=False,
+ return_rois_num=True,
+ rois_num=None,
+ name=None):
+ """
+ This operator is to do multi-class non maximum suppression (NMS) on
+ boxes and scores.
+    In the NMS step, this operator greedily selects a subset of detection bounding
+    boxes with scores larger than score_threshold, if this threshold is provided,
+    then keeps the nms_top_k boxes with the largest confidence scores if nms_top_k
+    is larger than -1. Then this operator prunes away boxes that have a high IOU
+    (intersection over union) overlap with already selected boxes, using adaptive
+    threshold NMS based on the parameters nms_threshold and nms_eta.
+    After the NMS step, at most keep_top_k of the total bboxes are kept
+    per image if keep_top_k is larger than -1.
+ Args:
+        bboxes (Tensor): Two types of bboxes are supported:
+                         1. (Tensor) A 3-D Tensor with shape
+                         [N, M, 4 or 8, 16, 24, 32] represents the
+                         predicted locations of M bounding boxes,
+                         N is the batch size. Each bounding box has four
+                         coordinate values and the layout is
+                         [xmin, ymin, xmax, ymax] when the box size equals 4.
+                         2. (LoDTensor) A 3-D Tensor with shape [M, C, 4],
+                         where M is the number of bounding boxes and C is the
+                         class number.
+ scores (Tensor): Two types of scores are supported:
+ 1. (Tensor) A 3-D Tensor with shape [N, C, M]
+ represents the predicted confidence predictions.
+ N is the batch size, C is the class number, M is
+                         number of bounding boxes. For each category there
+                         are a total of M scores corresponding to the M bounding
+                         boxes. Please note, M is equal to the 2nd dimension
+ of BBoxes.
+ 2. (LoDTensor) A 2-D LoDTensor with shape [M, C].
+ M is the number of bbox, C is the class number.
+ In this case, input BBoxes should be the second
+ case with shape [M, C, 4].
+        background_label (int): The index of background label, the background
+                                label will be ignored. If set to -1, then all
+                                categories will be considered. Default: -1
+ score_threshold (float): Threshold to filter out bounding boxes with
+ low confidence score. If not provided,
+ consider all boxes.
+ nms_top_k (int): Maximum number of detections to be kept according to
+ the confidences after the filtering detections based
+ on score_threshold.
+ nms_threshold (float): The threshold to be used in NMS. Default: 0.3
+        nms_eta (float): The parameter for adaptive NMS, used to adaptively
+                         shrink nms_threshold during suppression. Default: 1.0
+ keep_top_k (int): Number of total bboxes to be kept per image after NMS
+ step. -1 means keeping all bboxes after NMS step.
+ normalized (bool): Whether detections are normalized. Default: True
+        return_index(bool): Whether to return the selected index. Default: False
+        return_rois_num(bool): Whether to return rois_num. Default: True
+ rois_num(Tensor): 1-D Tensor contains the number of RoIs in each image.
+ The shape is [B] and data type is int32. B is the number of images.
+ If it is not None then return a list of 1-D Tensor. Each element
+ is the output RoIs' number of each image on the corresponding level
+ and the shape is [B]. None by default.
+ name(str): Name of the multiclass nms op. Default: None.
+    Returns:
+        A tuple (Out, NmsRoisNum, Index). Index is None when return_index is
+        False, and NmsRoisNum is None when return_rois_num is False.
+ Out: A 2-D LoDTensor with shape [No, 6] represents the detections.
+ Each row has 6 values: [label, confidence, xmin, ymin, xmax, ymax]
+ or A 2-D LoDTensor with shape [No, 10] represents the detections.
+ Each row has 10 values: [label, confidence, x1, y1, x2, y2, x3, y3,
+ x4, y4]. No is the total number of detections.
+        If no results are detected in any image, all elements in the LoD will be
+        0, and the output tensor is empty (None).
+ Index: Only return when return_index is True. A 2-D LoDTensor with
+            shape [No, 1] represents the selected indices, whose type is Integer.
+            The indices are absolute values across batches. No is the same number
+            as in Out. If the indices are used to gather other attributes such as age,
+            one needs to reshape the input (N, M, 1) to (N * M, 1) first, where
+            N is the batch size and M is the number of boxes.
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ boxes = paddle.static.data(name='bboxes', shape=[81, 4],
+ dtype='float32', lod_level=1)
+ scores = paddle.static.data(name='scores', shape=[81],
+ dtype='float32', lod_level=1)
+            out, nms_rois_num, index = ops.multiclass_nms(bboxes=boxes,
+ scores=scores,
+ background_label=0,
+ score_threshold=0.5,
+ nms_top_k=400,
+ nms_threshold=0.3,
+ keep_top_k=200,
+ normalized=False,
+ return_index=True)
+ """
+ helper = LayerHelper('multiclass_nms3', **locals())
+
+ if in_dygraph_mode():
+ attrs = ('background_label', background_label, 'score_threshold',
+ score_threshold, 'nms_top_k', nms_top_k, 'nms_threshold',
+ nms_threshold, 'keep_top_k', keep_top_k, 'nms_eta', nms_eta,
+ 'normalized', normalized)
+ output, index, nms_rois_num = core.ops.multiclass_nms3(bboxes, scores,
+ rois_num, *attrs)
+ if not return_index:
+ index = None
+ return output, nms_rois_num, index
+
+ else:
+ output = helper.create_variable_for_type_inference(dtype=bboxes.dtype)
+ index = helper.create_variable_for_type_inference(dtype='int32')
+
+ inputs = {'BBoxes': bboxes, 'Scores': scores}
+ outputs = {'Out': output, 'Index': index}
+
+ if rois_num is not None:
+ inputs['RoisNum'] = rois_num
+
+ if return_rois_num:
+ nms_rois_num = helper.create_variable_for_type_inference(
+ dtype='int32')
+ outputs['NmsRoisNum'] = nms_rois_num
+
+ helper.append_op(
+ type="multiclass_nms3",
+ inputs=inputs,
+ attrs={
+ 'background_label': background_label,
+ 'score_threshold': score_threshold,
+ 'nms_top_k': nms_top_k,
+ 'nms_threshold': nms_threshold,
+ 'keep_top_k': keep_top_k,
+ 'nms_eta': nms_eta,
+ 'normalized': normalized
+ },
+ outputs=outputs)
+ output.stop_gradient = True
+ index.stop_gradient = True
+ if not return_index:
+ index = None
+ if not return_rois_num:
+ nms_rois_num = None
+
+ return output, nms_rois_num, index
+
+
+@paddle.jit.not_to_static
+def matrix_nms(bboxes,
+ scores,
+ score_threshold,
+ post_threshold,
+ nms_top_k,
+ keep_top_k,
+ use_gaussian=False,
+ gaussian_sigma=2.,
+ background_label=0,
+ normalized=True,
+ return_index=False,
+ return_rois_num=True,
+ name=None):
+ """
+ **Matrix NMS**
+ This operator does matrix non maximum suppression (NMS).
+    First selects a subset of candidate bounding boxes that have higher scores
+    than score_threshold (if provided), then the top k candidates are kept if
+    nms_top_k is larger than -1. Scores of the remaining candidates are then
+    decayed according to the Matrix NMS scheme.
+    After the NMS step, at most keep_top_k of the total bboxes are kept
+    per image if keep_top_k is larger than -1.
+ Args:
+ bboxes (Tensor): A 3-D Tensor with shape [N, M, 4] represents the
+ predicted locations of M bounding bboxes,
+ N is the batch size. Each bounding box has four
+ coordinate values and the layout is
+ [xmin, ymin, xmax, ymax], when box size equals to 4.
+ The data type is float32 or float64.
+ scores (Tensor): A 3-D Tensor with shape [N, C, M]
+ represents the predicted confidence predictions.
+ N is the batch size, C is the class number, M is
+                         number of bounding boxes. For each category there
+                         are a total of M scores corresponding to the M bounding
+                         boxes. Please note, M is equal to the 2nd dimension
+ of BBoxes. The data type is float32 or float64.
+ score_threshold (float): Threshold to filter out bounding boxes with
+ low confidence score.
+ post_threshold (float): Threshold to filter out bounding boxes with
+ low confidence score AFTER decaying.
+ nms_top_k (int): Maximum number of detections to be kept according to
+ the confidences after the filtering detections based
+ on score_threshold.
+ keep_top_k (int): Number of total bboxes to be kept per image after NMS
+ step. -1 means keeping all bboxes after NMS step.
+ use_gaussian (bool): Use Gaussian as the decay function. Default: False
+ gaussian_sigma (float): Sigma for Gaussian decay function. Default: 2.0
+ background_label (int): The index of background label, the background
+ label will be ignored. If set to -1, then all
+ categories will be considered. Default: 0
+ normalized (bool): Whether detections are normalized. Default: True
+        return_index(bool): Whether to return the selected index. Default: False
+        return_rois_num(bool): Whether to return rois_num. Default: True
+ name(str): Name of the matrix nms op. Default: None.
+    Returns:
+        A tuple (Out, RoisNum, Index). Index is None when return_index is
+        False, and RoisNum is None when return_rois_num is False.
+ Out (Tensor): A 2-D Tensor with shape [No, 6] containing the
+ detection results.
+ Each row has 6 values: [label, confidence, xmin, ymin, xmax, ymax]
+ (After version 1.3, when no boxes detected, the lod is changed
+ from {0} to {1})
+ Index (Tensor): A 2-D Tensor with shape [No, 1] containing the
+ selected indices, which are absolute values cross batches.
+ rois_num (Tensor): A 1-D Tensor with shape [N] containing
+ the number of detected boxes in each image.
+ Examples:
+ .. code-block:: python
+ import paddle
+ from ppdet.modeling import ops
+ boxes = paddle.static.data(name='bboxes', shape=[None,81, 4],
+ dtype='float32', lod_level=1)
+ scores = paddle.static.data(name='scores', shape=[None,81],
+ dtype='float32', lod_level=1)
+            out, rois_num, index = ops.matrix_nms(bboxes=boxes, scores=scores, background_label=0,
+ score_threshold=0.5, post_threshold=0.1,
+ nms_top_k=400, keep_top_k=200, normalized=False)
+ """
+ check_variable_and_dtype(bboxes, 'BBoxes', ['float32', 'float64'],
+ 'matrix_nms')
+ check_variable_and_dtype(scores, 'Scores', ['float32', 'float64'],
+ 'matrix_nms')
+ check_type(score_threshold, 'score_threshold', float, 'matrix_nms')
+ check_type(post_threshold, 'post_threshold', float, 'matrix_nms')
+    check_type(nms_top_k, 'nms_top_k', int, 'matrix_nms')
+ check_type(keep_top_k, 'keep_top_k', int, 'matrix_nms')
+ check_type(normalized, 'normalized', bool, 'matrix_nms')
+ check_type(use_gaussian, 'use_gaussian', bool, 'matrix_nms')
+ check_type(gaussian_sigma, 'gaussian_sigma', float, 'matrix_nms')
+ check_type(background_label, 'background_label', int, 'matrix_nms')
+
+ if in_dygraph_mode():
+ attrs = ('background_label', background_label, 'score_threshold',
+ score_threshold, 'post_threshold', post_threshold, 'nms_top_k',
+ nms_top_k, 'gaussian_sigma', gaussian_sigma, 'use_gaussian',
+ use_gaussian, 'keep_top_k', keep_top_k, 'normalized',
+ normalized)
+ out, index, rois_num = core.ops.matrix_nms(bboxes, scores, *attrs)
+ if not return_index:
+ index = None
+ if not return_rois_num:
+ rois_num = None
+ return out, rois_num, index
+ else:
+ helper = LayerHelper('matrix_nms', **locals())
+ output = helper.create_variable_for_type_inference(dtype=bboxes.dtype)
+ index = helper.create_variable_for_type_inference(dtype='int32')
+ outputs = {'Out': output, 'Index': index}
+ if return_rois_num:
+ rois_num = helper.create_variable_for_type_inference(dtype='int32')
+ outputs['RoisNum'] = rois_num
+
+ helper.append_op(
+ type="matrix_nms",
+ inputs={'BBoxes': bboxes,
+ 'Scores': scores},
+ attrs={
+ 'background_label': background_label,
+ 'score_threshold': score_threshold,
+ 'post_threshold': post_threshold,
+ 'nms_top_k': nms_top_k,
+ 'gaussian_sigma': gaussian_sigma,
+ 'use_gaussian': use_gaussian,
+ 'keep_top_k': keep_top_k,
+ 'normalized': normalized
+ },
+ outputs=outputs)
+ output.stop_gradient = True
+
+ if not return_index:
+ index = None
+ if not return_rois_num:
+ rois_num = None
+ return output, rois_num, index
+
+
+def bipartite_match(dist_matrix,
+ match_type=None,
+ dist_threshold=None,
+ name=None):
+ """
+
+    This operator implements a greedy bipartite matching algorithm, which is
+    used to obtain the matching with the maximum distance based on the input
+    distance matrix. For an input 2-D matrix, the bipartite matching algorithm
+    can find the matched column for each row (matched means the largest
+    distance), and can also find the matched row for each column. This
+    operator only calculates matched indices from column to row. For each
+    instance, the number of matched indices is the column number of the input
+    distance matrix. **The OP only supports CPU.**
+
+    There are two outputs, matched indices and distance.
+    In short, this algorithm matches the best (maximum distance)
+    row entity to each column entity, and the matched indices are not duplicated
+    in each row of ColToRowMatchIndices. If a column entity is not matched to
+    any row entity, -1 is set in ColToRowMatchIndices.
+
+    NOTE: the input DistMat can be a LoDTensor (with LoD) or a Tensor.
+    If it is a LoDTensor with LoD, the height of ColToRowMatchIndices is the batch size.
+    If it is a Tensor, the height of ColToRowMatchIndices is 1.
+
+    NOTE: This API is a very low level API. It is used by the :code:`ssd_loss`
+    layer. Please consider using :code:`ssd_loss` instead.
+
+ Args:
+ dist_matrix(Tensor): This input is a 2-D LoDTensor with shape
+ [K, M]. The data type is float32 or float64. It is pair-wise
+ distance matrix between the entities represented by each row and
+ each column. For example, assumed one entity is A with shape [K],
+ another entity is B with shape [M]. The dist_matrix[i][j] is the
+ distance between A[i] and B[j]. The bigger the distance is, the
+ better matching the pairs are. NOTE: This tensor can contain LoD
+ information to represent a batch of inputs. One instance of this
+ batch can contain different numbers of entities.
+ match_type(str, optional): The type of matching method, should be
+ 'bipartite' or 'per_prediction'. None ('bipartite') by default.
+ dist_threshold(float32, optional): If `match_type` is 'per_prediction',
+ this threshold is to determine the extra matching bboxes based
+ on the maximum distance, 0.5 by default.
+        name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually there is no need to set it;
+            None by default.
+
+ Returns:
+ Tuple:
+
+ matched_indices(Tensor): A 2-D Tensor with shape [N, M]. The data
+ type is int32. N is the batch size. If match_indices[i][j] is -1, it
+ means B[j] does not match any entity in i-th instance.
+ Otherwise, it means B[j] is matched to row
+ match_indices[i][j] in i-th instance. The row number of
+ i-th instance is saved in match_indices[i][j].
+
+ matched_distance(Tensor): A 2-D Tensor with shape [N, M]. The data
+ type is float32. N is batch size. If match_indices[i][j] is -1,
+ match_distance[i][j] is also -1.0. Otherwise, assumed
+ match_distance[i][j] = d, and the row offsets of each instance
+ are called LoD. Then match_distance[i][j] =
+ dist_matrix[d+LoD[i]][j].
+
+ Examples:
+
+        .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ from ppdet.modeling.utils import iou_similarity
+
+ paddle.enable_static()
+
+ x = paddle.static.data(name='x', shape=[None, 4], dtype='float32')
+ y = paddle.static.data(name='y', shape=[None, 4], dtype='float32')
+ iou = iou_similarity(x=x, y=y)
+ matched_indices, matched_dist = ops.bipartite_match(iou)
+ """
+ check_variable_and_dtype(dist_matrix, 'dist_matrix',
+ ['float32', 'float64'], 'bipartite_match')
+
+ if in_dygraph_mode():
+ match_indices, match_distance = core.ops.bipartite_match(
+ dist_matrix, "match_type", match_type, "dist_threshold",
+ dist_threshold)
+ return match_indices, match_distance
+
+ helper = LayerHelper('bipartite_match', **locals())
+ match_indices = helper.create_variable_for_type_inference(dtype='int32')
+ match_distance = helper.create_variable_for_type_inference(
+ dtype=dist_matrix.dtype)
+ helper.append_op(
+ type='bipartite_match',
+ inputs={'DistMat': dist_matrix},
+ attrs={
+ 'match_type': match_type,
+ 'dist_threshold': dist_threshold,
+ },
+ outputs={
+ 'ColToRowMatchIndices': match_indices,
+ 'ColToRowMatchDist': match_distance
+ })
+ return match_indices, match_distance
+
+
+@paddle.jit.not_to_static
+def box_coder(prior_box,
+ prior_box_var,
+ target_box,
+ code_type="encode_center_size",
+ box_normalized=True,
+ axis=0,
+ name=None):
+ """
+ **Box Coder Layer**
+ Encode/Decode the target bounding box with the priorbox information.
+
+    The Encoding schema is described below:
+
+    .. math::
+
+        ox = (tx - px) / pw / pxv
+
+        oy = (ty - py) / ph / pyv
+
+        ow = \log(|tw / pw|) / pwv
+
+        oh = \log(|th / ph|) / phv
+
+    The Decoding schema is described below:
+
+    .. math::
+
+        ox = pw \cdot pxv \cdot tx + px
+
+        oy = ph \cdot pyv \cdot ty + py
+
+        ow = \exp(pwv \cdot tw) \cdot pw
+
+        oh = \exp(phv \cdot th) \cdot ph
+
+ where `tx`, `ty`, `tw`, `th` denote the target box's center coordinates,
+ width and height respectively. Similarly, `px`, `py`, `pw`, `ph` denote
+ the priorbox's (anchor) center coordinates, width and height. `pxv`,
+ `pyv`, `pwv`, `phv` denote the variance of the priorbox and `ox`, `oy`,
+ `ow`, `oh` denote the encoded/decoded coordinates, width and height.
+ During Box Decoding, two modes for broadcast are supported. Say target
+ box has shape [N, M, 4], and the shape of prior box can be [N, 4] or
+ [M, 4]. Then prior box will broadcast to target box along the
+ assigned axis.
+
+ Args:
+ prior_box(Tensor): Box list prior_box is a 2-D Tensor with shape
+ [M, 4] holds M boxes and data type is float32 or float64. Each box
+ is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the
+ left top coordinate of the anchor box, if the input is image feature
+ map, they are close to the origin of the coordinate system.
+ [xmax, ymax] is the right bottom coordinate of the anchor box.
+        prior_box_var(List|Tensor|None): prior_box_var supports three types
+            of input. One is a Tensor with shape [M, 4] which holds M groups
+            of variances with data type float32 or float64. The second is a
+            list of 4 elements shared by all boxes, with data type float32 or
+            float64. The third is None, in which case variances are not
+            involved in the calculation.
+ target_box(Tensor): This input can be a 2-D LoDTensor with shape
+ [N, 4] when code_type is 'encode_center_size'. This input also can
+ be a 3-D Tensor with shape [N, M, 4] when code_type is
+ 'decode_center_size'. Each box is represented as
+ [xmin, ymin, xmax, ymax]. The data type is float32 or float64.
+ code_type(str): The code type used with the target box. It can be
+ `encode_center_size` or `decode_center_size`. `encode_center_size`
+ by default.
+ box_normalized(bool): Whether treat the priorbox as a normalized box.
+ Set true by default.
+ axis(int): Which axis in PriorBox to broadcast for box decode,
+ for example, if axis is 0 and TargetBox has shape [N, M, 4] and
+ PriorBox has shape [M, 4], then PriorBox will broadcast to [N, M, 4]
+ for decoding. It is only valid when code type is
+ `decode_center_size`. Set 0 by default.
+ name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually there is no need to set it;
+ None by default.
+
+ Returns:
+ Tensor:
+ output_box(Tensor): When code_type is 'encode_center_size', the
+ output tensor of box_coder_op with shape [N, M, 4] representing the
+ result of N target boxes encoded with M Prior boxes and variances.
+ When code_type is 'decode_center_size', N represents the batch size
+ and M represents the number of decoded boxes.
+
+ Examples:
+
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+ # For encode
+ prior_box_encode = paddle.static.data(name='prior_box_encode',
+ shape=[512, 4],
+ dtype='float32')
+ target_box_encode = paddle.static.data(name='target_box_encode',
+ shape=[81, 4],
+ dtype='float32')
+ output_encode = ops.box_coder(prior_box=prior_box_encode,
+ prior_box_var=[0.1,0.1,0.2,0.2],
+ target_box=target_box_encode,
+ code_type="encode_center_size")
+ # For decode
+ prior_box_decode = paddle.static.data(name='prior_box_decode',
+ shape=[512, 4],
+ dtype='float32')
+ target_box_decode = paddle.static.data(name='target_box_decode',
+ shape=[512, 81, 4],
+ dtype='float32')
+ output_decode = ops.box_coder(prior_box=prior_box_decode,
+ prior_box_var=[0.1,0.1,0.2,0.2],
+ target_box=target_box_decode,
+ code_type="decode_center_size",
+ box_normalized=False,
+ axis=1)
+ """
+ check_variable_and_dtype(prior_box, 'prior_box', ['float32', 'float64'],
+ 'box_coder')
+ check_variable_and_dtype(target_box, 'target_box', ['float32', 'float64'],
+ 'box_coder')
+
+ if in_dygraph_mode():
+ if isinstance(prior_box_var, Variable):
+ output_box = core.ops.box_coder(
+ prior_box, prior_box_var, target_box, "code_type", code_type,
+ "box_normalized", box_normalized, "axis", axis)
+
+ elif isinstance(prior_box_var, list):
+ output_box = core.ops.box_coder(
+ prior_box, None, target_box, "code_type", code_type,
+ "box_normalized", box_normalized, "axis", axis, "variance",
+ prior_box_var)
+ else:
+ raise TypeError(
+ "Input variance of box_coder must be Variable or list")
+ return output_box
+ else:
+ helper = LayerHelper("box_coder", **locals())
+
+ output_box = helper.create_variable_for_type_inference(
+ dtype=prior_box.dtype)
+
+ inputs = {"PriorBox": prior_box, "TargetBox": target_box}
+ attrs = {
+ "code_type": code_type,
+ "box_normalized": box_normalized,
+ "axis": axis
+ }
+ if isinstance(prior_box_var, Variable):
+ inputs['PriorBoxVar'] = prior_box_var
+ elif isinstance(prior_box_var, list):
+ attrs['variance'] = prior_box_var
+ else:
+ raise TypeError(
+ "Input variance of box_coder must be Variable or list")
+ helper.append_op(
+ type="box_coder",
+ inputs=inputs,
+ attrs=attrs,
+ outputs={"OutputBox": output_box})
+ return output_box
+
+
+@paddle.jit.not_to_static
+def generate_proposals(scores,
+ bbox_deltas,
+ im_shape,
+ anchors,
+ variances,
+ pre_nms_top_n=6000,
+ post_nms_top_n=1000,
+ nms_thresh=0.5,
+ min_size=0.1,
+ eta=1.0,
+ pixel_offset=False,
+ return_rois_num=False,
+ name=None):
+ """
+    **Generate proposals for Faster-RCNN**
+    This operation proposes RoIs according to each box's probability of being
+    a foreground object, where the boxes are computed from anchors. The bbox
+    deltas and objectness scores are the output of the RPN. The final
+    proposals can be used to train the detection network.
+    To generate proposals, this operation performs the following steps:
+    1. Transpose and reshape scores and bbox_deltas to sizes of
+       (H*W*A, 1) and (H*W*A, 4).
+    2. Calculate box locations as proposal candidates.
+    3. Clip boxes to the image.
+    4. Remove predicted boxes with small area.
+    5. Apply NMS to get the final proposals as output.
+ Args:
+ scores(Tensor): A 4-D Tensor with shape [N, A, H, W] represents
+ the probability for each box to be an object.
+ N is batch size, A is number of anchors, H and W are height and
+ width of the feature map. The data type must be float32.
+ bbox_deltas(Tensor): A 4-D Tensor with shape [N, 4*A, H, W]
+ represents the difference between predicted box location and
+ anchor location. The data type must be float32.
+ im_shape(Tensor): A 2-D Tensor with shape [N, 2] represents H, W, the
+ origin image size or input size. The data type can be float32 or
+ float64.
+ anchors(Tensor): A 4-D Tensor represents the anchors with a layout
+ of [H, W, A, 4]. H and W are height and width of the feature map,
+ num_anchors is the box count of each position. Each anchor is
+            in (xmin, ymin, xmax, ymax) format and unnormalized. The data type must be float32.
+ variances(Tensor): A 4-D Tensor. The expanded variances of anchors with a layout of
+ [H, W, num_priors, 4]. Each variance is in
+ (xcenter, ycenter, w, h) format. The data type must be float32.
+        pre_nms_top_n(int): Number of total bboxes to be kept per
+            image before NMS. `6000` by default.
+        post_nms_top_n(int): Number of total bboxes to be kept per
+            image after NMS. `1000` by default.
+ nms_thresh(float): Threshold in NMS. The data type must be float32. `0.5` by default.
+ min_size(float): Remove predicted boxes with either height or
+ width < min_size. The data type must be float32. `0.1` by default.
+ eta(float): Apply in adaptive NMS, if adaptive `threshold > 0.5`,
+ `adaptive_threshold = adaptive_threshold * eta` in each iteration.
+        return_rois_num(bool): Whether to return a 1D Tensor with shape [N, ]
+            that holds the RoI number of each image in the batch, where N is
+            the number of images. For example, the values [4, 5] mean the
+            first image has 4 RoIs and the second image has 5 RoIs. It is
+            only used in RCNN models. `False` by default.
+ name(str, optional): For detailed information, please refer
+            to :ref:`api_guide_Name`. Usually there is no need to set it;
+ None by default.
+
+ Returns:
+ tuple:
+        A tuple with format ``(rpn_rois, rpn_roi_probs, rpn_rois_num)``, where ``rpn_rois_num`` is None unless ``return_rois_num`` is True.
+ - **rpn_rois**: The generated RoIs. 2-D Tensor with shape ``[N, 4]`` while ``N`` is the number of RoIs. The data type is the same as ``scores``.
+ - **rpn_roi_probs**: The scores of generated RoIs. 2-D Tensor with shape ``[N, 1]`` while ``N`` is the number of RoIs. The data type is the same as ``scores``.
+
+ Examples:
+ .. code-block:: python
+
+ import paddle
+ from ppdet.modeling import ops
+ paddle.enable_static()
+ scores = paddle.static.data(name='scores', shape=[None, 4, 5, 5], dtype='float32')
+ bbox_deltas = paddle.static.data(name='bbox_deltas', shape=[None, 16, 5, 5], dtype='float32')
+ im_shape = paddle.static.data(name='im_shape', shape=[None, 2], dtype='float32')
+ anchors = paddle.static.data(name='anchors', shape=[None, 5, 4, 4], dtype='float32')
+ variances = paddle.static.data(name='variances', shape=[None, 5, 10, 4], dtype='float32')
+ rois, roi_probs = ops.generate_proposals(scores, bbox_deltas,
+ im_shape, anchors, variances)
+ """
+ if in_dygraph_mode():
+ assert return_rois_num, "return_rois_num should be True in dygraph mode."
+ attrs = ('pre_nms_topN', pre_nms_top_n, 'post_nms_topN', post_nms_top_n,
+ 'nms_thresh', nms_thresh, 'min_size', min_size, 'eta', eta,
+ 'pixel_offset', pixel_offset)
+ rpn_rois, rpn_roi_probs, rpn_rois_num = core.ops.generate_proposals_v2(
+ scores, bbox_deltas, im_shape, anchors, variances, *attrs)
+ return rpn_rois, rpn_roi_probs, rpn_rois_num
+
+ else:
+ helper = LayerHelper('generate_proposals_v2', **locals())
+
+ check_variable_and_dtype(scores, 'scores', ['float32'],
+ 'generate_proposals_v2')
+ check_variable_and_dtype(bbox_deltas, 'bbox_deltas', ['float32'],
+ 'generate_proposals_v2')
+ check_variable_and_dtype(im_shape, 'im_shape', ['float32', 'float64'],
+ 'generate_proposals_v2')
+ check_variable_and_dtype(anchors, 'anchors', ['float32'],
+ 'generate_proposals_v2')
+ check_variable_and_dtype(variances, 'variances', ['float32'],
+ 'generate_proposals_v2')
+
+ rpn_rois = helper.create_variable_for_type_inference(
+ dtype=bbox_deltas.dtype)
+ rpn_roi_probs = helper.create_variable_for_type_inference(
+ dtype=scores.dtype)
+ outputs = {
+ 'RpnRois': rpn_rois,
+ 'RpnRoiProbs': rpn_roi_probs,
+ }
+        if return_rois_num:
+            rpn_rois_num = helper.create_variable_for_type_inference(
+                dtype='int32')
+            rpn_rois_num.stop_gradient = True
+            outputs['RpnRoisNum'] = rpn_rois_num
+        else:
+            # Keep the return statement below well-defined when the caller
+            # does not ask for per-image RoI counts.
+            rpn_rois_num = None
+
+ helper.append_op(
+ type="generate_proposals_v2",
+ inputs={
+ 'Scores': scores,
+ 'BboxDeltas': bbox_deltas,
+ 'ImShape': im_shape,
+ 'Anchors': anchors,
+ 'Variances': variances
+ },
+ attrs={
+ 'pre_nms_topN': pre_nms_top_n,
+ 'post_nms_topN': post_nms_top_n,
+ 'nms_thresh': nms_thresh,
+ 'min_size': min_size,
+ 'eta': eta,
+ 'pixel_offset': pixel_offset
+ },
+ outputs=outputs)
+ rpn_rois.stop_gradient = True
+ rpn_roi_probs.stop_gradient = True
+
+ return rpn_rois, rpn_roi_probs, rpn_rois_num
+
+
+def sigmoid_cross_entropy_with_logits(input,
+ label,
+ ignore_index=-100,
+ normalize=False):
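+    """
+    Sigmoid cross-entropy with logits; positions whose label equals
+    `ignore_index` are masked out of the loss, and `normalize=True` divides
+    the loss by the number of non-ignored positions.
+
+    A minimal usage sketch (illustrative shapes and values, not from the
+    original code):
+
+        logits = paddle.to_tensor([[0.5, -1.0]])
+        labels = paddle.to_tensor([[1.0, -100.0]])  # second entry is ignored
+        loss = sigmoid_cross_entropy_with_logits(logits, labels)
+        # loss[0, 1] == 0 because its label equals ignore_index (-100)
+    """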
+ output = F.binary_cross_entropy_with_logits(input, label, reduction='none')
+ mask_tensor = paddle.cast(label != ignore_index, 'float32')
+ output = paddle.multiply(output, mask_tensor)
+ if normalize:
+ sum_valid_mask = paddle.sum(mask_tensor)
+ output = output / sum_valid_mask
+ return output
+
+
+def smooth_l1(input, label, inside_weight=None, outside_weight=None,
+ sigma=None):
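+    """
+    Weighted smooth-L1 loss summed over all elements of each sample.
+    `inside_weight` masks/scales the inputs and targets before the loss,
+    `outside_weight` scales the loss itself, and `delta = 1 / sigma^2`.
+
+    A minimal usage sketch (illustrative shapes, not from the original code):
+
+        pred = paddle.rand([8, 4])
+        target = paddle.rand([8, 4])
+        ones = paddle.ones_like(pred)
+        loss = smooth_l1(pred, target, ones, ones, sigma=1.0)  # shape [8]
+    """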
+ input_new = paddle.multiply(input, inside_weight)
+ label_new = paddle.multiply(label, inside_weight)
+ delta = 1 / (sigma * sigma)
+ out = F.smooth_l1_loss(input_new, label_new, reduction='none', delta=delta)
+ out = paddle.multiply(out, outside_weight)
+ out = out / delta
+ out = paddle.reshape(out, shape=[out.shape[0], -1])
+ out = paddle.sum(out, axis=1)
+ return out
+
+
+def channel_shuffle(x, groups):
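+    """
+    Channel shuffle operation from ShuffleNet: reshape the channel dimension
+    into (groups, channels_per_group), transpose, and flatten back so that
+    information is mixed across channel groups.
+
+    A minimal usage sketch (illustrative shapes):
+
+        x = paddle.rand([2, 8, 16, 16])
+        y = channel_shuffle(x, groups=4)  # same shape, channels permuted
+    """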
+ batch_size, num_channels, height, width = x.shape[0:4]
+ assert num_channels % groups == 0, 'num_channels should be divisible by groups'
+ channels_per_group = num_channels // groups
+ x = paddle.reshape(
+ x=x, shape=[batch_size, groups, channels_per_group, height, width])
+ x = paddle.transpose(x=x, perm=[0, 2, 1, 3, 4])
+ x = paddle.reshape(x=x, shape=[batch_size, num_channels, height, width])
+ return x
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/post_process.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/post_process.py
new file mode 100644
index 000000000..679e09134
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/post_process.py
@@ -0,0 +1,656 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from ppdet.core.workspace import register
+from ppdet.modeling.bbox_utils import nonempty_bbox, rbox2poly
+from ppdet.modeling.layers import TTFBox
+from .transformers import bbox_cxcywh_to_xyxy
+try:
+ from collections.abc import Sequence
+except Exception:
+ from collections import Sequence
+
+__all__ = [
+ 'BBoxPostProcess', 'MaskPostProcess', 'FCOSPostProcess',
+ 'S2ANetBBoxPostProcess', 'JDEBBoxPostProcess', 'CenterNetPostProcess',
+ 'DETRBBoxPostProcess', 'SparsePostProcess'
+]
+
+
+@register
+class BBoxPostProcess(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['decode', 'nms']
+
+ def __init__(self, num_classes=80, decode=None, nms=None):
+ super(BBoxPostProcess, self).__init__()
+ self.num_classes = num_classes
+ self.decode = decode
+ self.nms = nms
+ self.fake_bboxes = paddle.to_tensor(
+ np.array(
+ [[-1, 0.0, 0.0, 0.0, 0.0, 0.0]], dtype='float32'))
+ self.fake_bbox_num = paddle.to_tensor(np.array([1], dtype='int32'))
+
+ def forward(self, head_out, rois, im_shape, scale_factor):
+ """
+ Decode the bbox and do NMS if needed.
+
+ Args:
+ head_out (tuple): bbox_pred and cls_prob of bbox_head output.
+ rois (tuple): roi and rois_num of rpn_head output.
+ im_shape (Tensor): The shape of the input image.
+ scale_factor (Tensor): The scale factor of the input image.
+ Returns:
+ bbox_pred (Tensor): The output prediction with shape [N, 6], including
+                labels, scores and bboxes. The sizes of the bboxes correspond
+                to the input image, and the bboxes may be used in another branch.
+ bbox_num (Tensor): The number of prediction boxes of each batch with
+ shape [1], and is N.
+ """
+ if self.nms is not None:
+ bboxes, score = self.decode(head_out, rois, im_shape, scale_factor)
+ bbox_pred, bbox_num, _ = self.nms(bboxes, score, self.num_classes)
+ else:
+ bbox_pred, bbox_num = self.decode(head_out, rois, im_shape,
+ scale_factor)
+ return bbox_pred, bbox_num
+
+ def get_pred(self, bboxes, bbox_num, im_shape, scale_factor):
+ """
+ Rescale, clip and filter the bbox from the output of NMS to
+ get final prediction.
+
+ Notes:
+            Currently only supports bs = 1.
+
+ Args:
+ bboxes (Tensor): The output bboxes with shape [N, 6] after decode
+ and NMS, including labels, scores and bboxes.
+ bbox_num (Tensor): The number of prediction boxes of each batch with
+ shape [1], and is N.
+ im_shape (Tensor): The shape of the input image.
+ scale_factor (Tensor): The scale factor of the input image.
+ Returns:
+ pred_result (Tensor): The final prediction results with shape [N, 6]
+ including labels, scores and bboxes.
+ """
+
+ if bboxes.shape[0] == 0:
+ bboxes = self.fake_bboxes
+ bbox_num = self.fake_bbox_num
+
+ origin_shape = paddle.floor(im_shape / scale_factor + 0.5)
+
+ origin_shape_list = []
+ scale_factor_list = []
+ # scale_factor: scale_y, scale_x
+ for i in range(bbox_num.shape[0]):
+ expand_shape = paddle.expand(origin_shape[i:i + 1, :],
+ [bbox_num[i], 2])
+ scale_y, scale_x = scale_factor[i][0], scale_factor[i][1]
+ scale = paddle.concat([scale_x, scale_y, scale_x, scale_y])
+ expand_scale = paddle.expand(scale, [bbox_num[i], 4])
+ origin_shape_list.append(expand_shape)
+ scale_factor_list.append(expand_scale)
+
+ self.origin_shape_list = paddle.concat(origin_shape_list)
+ scale_factor_list = paddle.concat(scale_factor_list)
+
+ # bboxes: [N, 6], label, score, bbox
+ pred_label = bboxes[:, 0:1]
+ pred_score = bboxes[:, 1:2]
+ pred_bbox = bboxes[:, 2:]
+ # rescale bbox to original image
+ scaled_bbox = pred_bbox / scale_factor_list
+ origin_h = self.origin_shape_list[:, 0]
+ origin_w = self.origin_shape_list[:, 1]
+ zeros = paddle.zeros_like(origin_h)
+ # clip bbox to [0, original_size]
+ x1 = paddle.maximum(paddle.minimum(scaled_bbox[:, 0], origin_w), zeros)
+ y1 = paddle.maximum(paddle.minimum(scaled_bbox[:, 1], origin_h), zeros)
+ x2 = paddle.maximum(paddle.minimum(scaled_bbox[:, 2], origin_w), zeros)
+ y2 = paddle.maximum(paddle.minimum(scaled_bbox[:, 3], origin_h), zeros)
+ pred_bbox = paddle.stack([x1, y1, x2, y2], axis=-1)
+ # filter empty bbox
+ keep_mask = nonempty_bbox(pred_bbox, return_mask=True)
+ keep_mask = paddle.unsqueeze(keep_mask, [1])
+ pred_label = paddle.where(keep_mask, pred_label,
+ paddle.ones_like(pred_label) * -1)
+ pred_result = paddle.concat([pred_label, pred_score, pred_bbox], axis=1)
+ return pred_result
+
+ def get_origin_shape(self, ):
+ return self.origin_shape_list
+
+
+@register
+class MaskPostProcess(object):
+ """
+    Refer to:
+    https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/mask_ops.py
+
+    Get the final mask output according to the model prediction.
+ """
+
+ def __init__(self, binary_thresh=0.5):
+ super(MaskPostProcess, self).__init__()
+ self.binary_thresh = binary_thresh
+
+ def paste_mask(self, masks, boxes, im_h, im_w):
+ """
+ Paste the mask prediction to the original image.
+ """
+ x0, y0, x1, y1 = paddle.split(boxes, 4, axis=1)
+ masks = paddle.unsqueeze(masks, [0, 1])
+ img_y = paddle.arange(0, im_h, dtype='float32') + 0.5
+ img_x = paddle.arange(0, im_w, dtype='float32') + 0.5
+ img_y = (img_y - y0) / (y1 - y0) * 2 - 1
+ img_x = (img_x - x0) / (x1 - x0) * 2 - 1
+ img_x = paddle.unsqueeze(img_x, [1])
+ img_y = paddle.unsqueeze(img_y, [2])
+ N = boxes.shape[0]
+
+ gx = paddle.expand(img_x, [N, img_y.shape[1], img_x.shape[2]])
+ gy = paddle.expand(img_y, [N, img_y.shape[1], img_x.shape[2]])
+ grid = paddle.stack([gx, gy], axis=3)
+ img_masks = F.grid_sample(masks, grid, align_corners=False)
+ return img_masks[:, 0]
+
+ def __call__(self, mask_out, bboxes, bbox_num, origin_shape):
+ """
+ Decode the mask_out and paste the mask to the origin image.
+
+ Args:
+ mask_out (Tensor): mask_head output with shape [N, 28, 28].
+ bbox_pred (Tensor): The output bboxes with shape [N, 6] after decode
+ and NMS, including labels, scores and bboxes.
+ bbox_num (Tensor): The number of prediction boxes of each batch with
+ shape [1], and is N.
+ origin_shape (Tensor): The origin shape of the input image, the tensor
+ shape is [N, 2], and each row is [h, w].
+ Returns:
+ pred_result (Tensor): The final prediction mask results with shape
+ [N, h, w] in binary mask style.
+ """
+ num_mask = mask_out.shape[0]
+ origin_shape = paddle.cast(origin_shape, 'int32')
+ # TODO: support bs > 1 and mask output dtype is bool
+ pred_result = paddle.zeros(
+ [num_mask, origin_shape[0][0], origin_shape[0][1]], dtype='int32')
+ if bbox_num == 1 and bboxes[0][0] == -1:
+ return pred_result
+
+ # TODO: optimize chunk paste
+ pred_result = []
+ for i in range(bboxes.shape[0]):
+ im_h, im_w = origin_shape[i][0], origin_shape[i][1]
+ pred_mask = self.paste_mask(mask_out[i], bboxes[i:i + 1, 2:], im_h,
+ im_w)
+ pred_mask = pred_mask >= self.binary_thresh
+ pred_mask = paddle.cast(pred_mask, 'int32')
+ pred_result.append(pred_mask)
+ pred_result = paddle.concat(pred_result)
+ return pred_result
+
+
+@register
+class FCOSPostProcess(object):
+ __inject__ = ['decode', 'nms']
+
+ def __init__(self, decode=None, nms=None):
+ super(FCOSPostProcess, self).__init__()
+ self.decode = decode
+ self.nms = nms
+
+ def __call__(self, fcos_head_outs, scale_factor):
+ """
+ Decode the bbox and do NMS in FCOS.
+ """
+ locations, cls_logits, bboxes_reg, centerness = fcos_head_outs
+ bboxes, score = self.decode(locations, cls_logits, bboxes_reg,
+ centerness, scale_factor)
+ bbox_pred, bbox_num, _ = self.nms(bboxes, score)
+ return bbox_pred, bbox_num
+
+
+@register
+class S2ANetBBoxPostProcess(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['nms']
+
+ def __init__(self, num_classes=15, nms_pre=2000, min_bbox_size=0, nms=None):
+ super(S2ANetBBoxPostProcess, self).__init__()
+ self.num_classes = num_classes
+ self.nms_pre = paddle.to_tensor(nms_pre)
+ self.min_bbox_size = min_bbox_size
+ self.nms = nms
+ self.origin_shape_list = []
+ self.fake_pred_cls_score_bbox = paddle.to_tensor(
+ np.array(
+ [[-1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
+ dtype='float32'))
+ self.fake_bbox_num = paddle.to_tensor(np.array([1], dtype='int32'))
+
+ def forward(self, pred_scores, pred_bboxes):
+ """
+ pred_scores : [N, M] score
+ pred_bboxes : [N, 5] xc, yc, w, h, a
+ im_shape : [N, 2] im_shape
+ scale_factor : [N, 2] scale_factor
+ """
+        pred_polys0 = rbox2poly(pred_bboxes)
+        pred_polys = paddle.unsqueeze(pred_polys0, axis=0)
+
+        # pred_scores [NA, 16] --> [16, NA]
+        pred_scores0 = paddle.transpose(pred_scores, [1, 0])
+        pred_scores = paddle.unsqueeze(pred_scores0, axis=0)
+
+        pred_cls_score_bbox, bbox_num, _ = self.nms(pred_polys, pred_scores,
+                                                     self.num_classes)
+ # Prevent empty bbox_pred from decode or NMS.
+ # Bboxes and score before NMS may be empty due to the score threshold.
+ if pred_cls_score_bbox.shape[0] <= 0 or pred_cls_score_bbox.shape[
+ 1] <= 1:
+ pred_cls_score_bbox = self.fake_pred_cls_score_bbox
+ bbox_num = self.fake_bbox_num
+
+ pred_cls_score_bbox = paddle.reshape(pred_cls_score_bbox, [-1, 10])
+ return pred_cls_score_bbox, bbox_num
+
+ def get_pred(self, bboxes, bbox_num, im_shape, scale_factor):
+ """
+ Rescale, clip and filter the bbox from the output of NMS to
+ get final prediction.
+ Args:
+ bboxes(Tensor): bboxes [N, 10]
+ bbox_num(Tensor): bbox_num
+ im_shape(Tensor): [1 2]
+ scale_factor(Tensor): [1 2]
+ Returns:
+ bbox_pred(Tensor): The output is the prediction with shape [N, 8]
+ including labels, scores and bboxes. The size of
+ bboxes are corresponding to the original image.
+ """
+ origin_shape = paddle.floor(im_shape / scale_factor + 0.5)
+
+ origin_shape_list = []
+ scale_factor_list = []
+ # scale_factor: scale_y, scale_x
+ for i in range(bbox_num.shape[0]):
+ expand_shape = paddle.expand(origin_shape[i:i + 1, :],
+ [bbox_num[i], 2])
+ scale_y, scale_x = scale_factor[i][0], scale_factor[i][1]
+ scale = paddle.concat([
+ scale_x, scale_y, scale_x, scale_y, scale_x, scale_y, scale_x,
+ scale_y
+ ])
+ expand_scale = paddle.expand(scale, [bbox_num[i], 8])
+ origin_shape_list.append(expand_shape)
+ scale_factor_list.append(expand_scale)
+
+ origin_shape_list = paddle.concat(origin_shape_list)
+ scale_factor_list = paddle.concat(scale_factor_list)
+
+ # bboxes: [N, 10], label, score, bbox
+ pred_label_score = bboxes[:, 0:2]
+ pred_bbox = bboxes[:, 2:]
+
+ # rescale bbox to original image
+ pred_bbox = pred_bbox.reshape([-1, 8])
+ scaled_bbox = pred_bbox / scale_factor_list
+ origin_h = origin_shape_list[:, 0]
+ origin_w = origin_shape_list[:, 1]
+
+ bboxes = scaled_bbox
+ zeros = paddle.zeros_like(origin_h)
+ x1 = paddle.maximum(paddle.minimum(bboxes[:, 0], origin_w - 1), zeros)
+ y1 = paddle.maximum(paddle.minimum(bboxes[:, 1], origin_h - 1), zeros)
+ x2 = paddle.maximum(paddle.minimum(bboxes[:, 2], origin_w - 1), zeros)
+ y2 = paddle.maximum(paddle.minimum(bboxes[:, 3], origin_h - 1), zeros)
+ x3 = paddle.maximum(paddle.minimum(bboxes[:, 4], origin_w - 1), zeros)
+ y3 = paddle.maximum(paddle.minimum(bboxes[:, 5], origin_h - 1), zeros)
+ x4 = paddle.maximum(paddle.minimum(bboxes[:, 6], origin_w - 1), zeros)
+ y4 = paddle.maximum(paddle.minimum(bboxes[:, 7], origin_h - 1), zeros)
+ pred_bbox = paddle.stack([x1, y1, x2, y2, x3, y3, x4, y4], axis=-1)
+ pred_result = paddle.concat([pred_label_score, pred_bbox], axis=1)
+ return pred_result
+
+
+@register
+class JDEBBoxPostProcess(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['decode', 'nms']
+
+ def __init__(self, num_classes=1, decode=None, nms=None, return_idx=True):
+ super(JDEBBoxPostProcess, self).__init__()
+ self.num_classes = num_classes
+ self.decode = decode
+ self.nms = nms
+ self.return_idx = return_idx
+
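+        # Fallback tensors: returned when decode/NMS produces no boxes so
+        # that downstream consumers always receive well-shaped outputs.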
+ self.fake_bbox_pred = paddle.to_tensor(
+ np.array(
+ [[-1, 0.0, 0.0, 0.0, 0.0, 0.0]], dtype='float32'))
+ self.fake_bbox_num = paddle.to_tensor(np.array([1], dtype='int32'))
+ self.fake_nms_keep_idx = paddle.to_tensor(
+ np.array(
+ [[0]], dtype='int32'))
+
+ self.fake_yolo_boxes_out = paddle.to_tensor(
+ np.array(
+ [[[0.0, 0.0, 0.0, 0.0]]], dtype='float32'))
+ self.fake_yolo_scores_out = paddle.to_tensor(
+ np.array(
+ [[[0.0]]], dtype='float32'))
+ self.fake_boxes_idx = paddle.to_tensor(np.array([[0]], dtype='int64'))
+
+ def forward(self, head_out, anchors):
+ """
+ Decode the bbox and do NMS for JDE model.
+
+ Args:
+ head_out (list): Bbox_pred and cls_prob of bbox_head output.
+ anchors (list): Anchors of JDE model.
+
+ Returns:
+            boxes_idx (Tensor): The index of kept bboxes after the 'JDEBox' decode.
+            bbox_pred (Tensor): The output prediction with shape [N, 6],
+                including labels, scores and bboxes.
+            bbox_num (Tensor): The number of predictions of each batch, with shape [N].
+ nms_keep_idx (Tensor): The index of kept bboxes after NMS.
+ """
+ boxes_idx, yolo_boxes_scores = self.decode(head_out, anchors)
+
+ if len(boxes_idx) == 0:
+ boxes_idx = self.fake_boxes_idx
+ yolo_boxes_out = self.fake_yolo_boxes_out
+ yolo_scores_out = self.fake_yolo_scores_out
+ else:
+ yolo_boxes = paddle.gather_nd(yolo_boxes_scores, boxes_idx)
+ # TODO: only support bs=1 now
+ yolo_boxes_out = paddle.reshape(
+ yolo_boxes[:, :4], shape=[1, len(boxes_idx), 4])
+ yolo_scores_out = paddle.reshape(
+ yolo_boxes[:, 4:5], shape=[1, 1, len(boxes_idx)])
+ boxes_idx = boxes_idx[:, 1:]
+
+ if self.return_idx:
+ bbox_pred, bbox_num, nms_keep_idx = self.nms(
+ yolo_boxes_out, yolo_scores_out, self.num_classes)
+ if bbox_pred.shape[0] == 0:
+ bbox_pred = self.fake_bbox_pred
+ bbox_num = self.fake_bbox_num
+ nms_keep_idx = self.fake_nms_keep_idx
+ return boxes_idx, bbox_pred, bbox_num, nms_keep_idx
+ else:
+ bbox_pred, bbox_num, _ = self.nms(yolo_boxes_out, yolo_scores_out,
+ self.num_classes)
+ if bbox_pred.shape[0] == 0:
+ bbox_pred = self.fake_bbox_pred
+ bbox_num = self.fake_bbox_num
+ return _, bbox_pred, bbox_num, _
+
+
+@register
+class CenterNetPostProcess(TTFBox):
+ """
+ Postprocess the model outputs to get final prediction:
+ 1. Do NMS for heatmap to get top `max_per_img` bboxes.
+ 2. Decode bboxes using center offset and box size.
+        3. Rescale decoded bboxes with reference to the original image shape.
+
+ Args:
+        max_per_img(int): the maximum number of predicted objects in an image,
+ 500 by default.
+ down_ratio(int): the down ratio from images to heatmap, 4 by default.
+ regress_ltrb (bool): whether to regress left/top/right/bottom or
+ width/height for a box, true by default.
+ for_mot (bool): whether return other features used in tracking model.
+ """
+
+ __shared__ = ['down_ratio', 'for_mot']
+
+ def __init__(self,
+ max_per_img=500,
+ down_ratio=4,
+ regress_ltrb=True,
+ for_mot=False):
+ super(TTFBox, self).__init__()
+ self.max_per_img = max_per_img
+ self.down_ratio = down_ratio
+ self.regress_ltrb = regress_ltrb
+ self.for_mot = for_mot
+
+ def __call__(self, hm, wh, reg, im_shape, scale_factor):
+ heat = self._simple_nms(hm)
+ scores, inds, topk_clses, ys, xs = self._topk(heat)
+ scores = scores.unsqueeze(1)
+ clses = topk_clses.unsqueeze(1)
+
+ reg_t = paddle.transpose(reg, [0, 2, 3, 1])
+ # Like TTFBox, batch size is 1.
+ # TODO: support batch size > 1
+ reg = paddle.reshape(reg_t, [-1, reg_t.shape[-1]])
+ reg = paddle.gather(reg, inds)
+ xs = paddle.cast(xs, 'float32')
+ ys = paddle.cast(ys, 'float32')
+ xs = xs + reg[:, 0:1]
+ ys = ys + reg[:, 1:2]
+
+ wh_t = paddle.transpose(wh, [0, 2, 3, 1])
+ wh = paddle.reshape(wh_t, [-1, wh_t.shape[-1]])
+ wh = paddle.gather(wh, inds)
+
+ if self.regress_ltrb:
+ x1 = xs - wh[:, 0:1]
+ y1 = ys - wh[:, 1:2]
+ x2 = xs + wh[:, 2:3]
+ y2 = ys + wh[:, 3:4]
+ else:
+ x1 = xs - wh[:, 0:1] / 2
+ y1 = ys - wh[:, 1:2] / 2
+ x2 = xs + wh[:, 0:1] / 2
+ y2 = ys + wh[:, 1:2] / 2
+
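+        # Boxes are currently in heatmap coordinates: scale them back by
+        # down_ratio, then remove the symmetric padding added when the input
+        # was resized to the network resolution.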
+ n, c, feat_h, feat_w = hm.shape[:]
+ padw = (feat_w * self.down_ratio - im_shape[0, 1]) / 2
+ padh = (feat_h * self.down_ratio - im_shape[0, 0]) / 2
+ x1 = x1 * self.down_ratio
+ y1 = y1 * self.down_ratio
+ x2 = x2 * self.down_ratio
+ y2 = y2 * self.down_ratio
+
+ x1 = x1 - padw
+ y1 = y1 - padh
+ x2 = x2 - padw
+ y2 = y2 - padh
+
+ bboxes = paddle.concat([x1, y1, x2, y2], axis=1)
+ scale_y = scale_factor[:, 0:1]
+ scale_x = scale_factor[:, 1:2]
+ scale_expand = paddle.concat(
+ [scale_x, scale_y, scale_x, scale_y], axis=1)
+ boxes_shape = bboxes.shape[:]
+ scale_expand = paddle.expand(scale_expand, shape=boxes_shape)
+ bboxes = paddle.divide(bboxes, scale_expand)
+ if self.for_mot:
+ results = paddle.concat([bboxes, scores, clses], axis=1)
+ return results, inds, topk_clses
+ else:
+ results = paddle.concat([clses, scores, bboxes], axis=1)
+ return results, paddle.shape(results)[0:1], topk_clses
+
+
+@register
+class DETRBBoxPostProcess(object):
+ __shared__ = ['num_classes', 'use_focal_loss']
+ __inject__ = []
+
+ def __init__(self,
+ num_classes=80,
+ num_top_queries=100,
+ use_focal_loss=False):
+ super(DETRBBoxPostProcess, self).__init__()
+ self.num_classes = num_classes
+ self.num_top_queries = num_top_queries
+ self.use_focal_loss = use_focal_loss
+
+ def __call__(self, head_out, im_shape, scale_factor):
+ """
+ Decode the bbox.
+
+ Args:
+ head_out (tuple): bbox_pred, cls_logit and masks of bbox_head output.
+ im_shape (Tensor): The shape of the input image.
+ scale_factor (Tensor): The scale factor of the input image.
+ Returns:
+ bbox_pred (Tensor): The output prediction with shape [N, 6], including
+                labels, scores and bboxes. The sizes of the bboxes correspond
+                to the input image, and the bboxes may be used in another branch.
+            bbox_num (Tensor): The number of prediction boxes for each batch
+                element, with shape [bs].
+ """
+ bboxes, logits, masks = head_out
+
+ bbox_pred = bbox_cxcywh_to_xyxy(bboxes)
+ origin_shape = paddle.floor(im_shape / scale_factor + 0.5)
+ img_h, img_w = origin_shape.unbind(1)
+ origin_shape = paddle.stack(
+ [img_w, img_h, img_w, img_h], axis=-1).unsqueeze(0)
+ bbox_pred *= origin_shape
+
+ scores = F.sigmoid(logits) if self.use_focal_loss else F.softmax(
+ logits)[:, :, :-1]
+
+ if not self.use_focal_loss:
+ scores, labels = scores.max(-1), scores.argmax(-1)
+ if scores.shape[1] > self.num_top_queries:
+ scores, index = paddle.topk(
+ scores, self.num_top_queries, axis=-1)
+ labels = paddle.stack(
+ [paddle.gather(l, i) for l, i in zip(labels, index)])
+ bbox_pred = paddle.stack(
+ [paddle.gather(b, i) for b, i in zip(bbox_pred, index)])
+ else:
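+            # With focal loss there is no background column: take a joint
+            # top-k over the flattened (query, class) scores, then recover
+            # the class and query indices from the flat index.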
+ scores, index = paddle.topk(
+ scores.reshape([logits.shape[0], -1]),
+ self.num_top_queries,
+ axis=-1)
+ labels = index % logits.shape[2]
+ index = index // logits.shape[2]
+ bbox_pred = paddle.stack(
+ [paddle.gather(b, i) for b, i in zip(bbox_pred, index)])
+
+ bbox_pred = paddle.concat(
+ [
+ labels.unsqueeze(-1).astype('float32'), scores.unsqueeze(-1),
+ bbox_pred
+ ],
+ axis=-1)
+ bbox_num = paddle.to_tensor(
+ bbox_pred.shape[1], dtype='int32').tile([bbox_pred.shape[0]])
+ bbox_pred = bbox_pred.reshape([-1, 6])
+ return bbox_pred, bbox_num
+
+
+@register
+class SparsePostProcess(object):
+ __shared__ = ['num_classes']
+
+ def __init__(self, num_proposals, num_classes=80):
+ super(SparsePostProcess, self).__init__()
+ self.num_classes = num_classes
+ self.num_proposals = num_proposals
+
+ def __call__(self, box_cls, box_pred, scale_factor_wh, img_whwh):
+ """
+ Arguments:
+ box_cls (Tensor): tensor of shape (batch_size, num_proposals, K).
+ The tensor predicts the classification probability for each proposal.
+            box_pred (Tensor): tensor of shape (batch_size, num_proposals, 4).
+                The tensor predicts 4-vector (x, y, w, h) box
+                regression values for every proposal.
+            scale_factor_wh (Tensor): tensor of shape [batch_size, 2], the
+                scale factor (w, h) of each image
+            img_whwh (Tensor): tensor of shape [batch_size, 4]
+        Returns:
+            bbox_pred (Tensor): tensor of shape [num_boxes, 6]. Each row has 6 values:
+                [label, confidence, xmin, ymin, xmax, ymax]
+            bbox_num (Tensor): tensor of shape [batch_size], the number of RoIs in each image.
+ """
+ assert len(box_cls) == len(scale_factor_wh) == len(img_whwh)
+
+ img_wh = img_whwh[:, :2]
+
+ scores = F.sigmoid(box_cls)
+ labels = paddle.arange(0, self.num_classes). \
+ unsqueeze(0).tile([self.num_proposals, 1]).flatten(start_axis=0, stop_axis=1)
+
+ classes_all = []
+ scores_all = []
+ boxes_all = []
+ for i, (scores_per_image,
+ box_pred_per_image) in enumerate(zip(scores, box_pred)):
+
+ scores_per_image, topk_indices = scores_per_image.flatten(
+ 0, 1).topk(
+ self.num_proposals, sorted=False)
+ labels_per_image = paddle.gather(labels, topk_indices, axis=0)
+
+ box_pred_per_image = box_pred_per_image.reshape([-1, 1, 4]).tile(
+ [1, self.num_classes, 1]).reshape([-1, 4])
+ box_pred_per_image = paddle.gather(
+ box_pred_per_image, topk_indices, axis=0)
+
+ classes_all.append(labels_per_image)
+ scores_all.append(scores_per_image)
+ boxes_all.append(box_pred_per_image)
+
+ bbox_num = paddle.zeros([len(scale_factor_wh)], dtype="int32")
+ boxes_final = []
+
+ for i in range(len(scale_factor_wh)):
+ classes = classes_all[i]
+ boxes = boxes_all[i]
+ scores = scores_all[i]
+
+ boxes[:, 0::2] = paddle.clip(
+ boxes[:, 0::2], min=0, max=img_wh[i][0]) / scale_factor_wh[i][0]
+ boxes[:, 1::2] = paddle.clip(
+ boxes[:, 1::2], min=0, max=img_wh[i][1]) / scale_factor_wh[i][1]
+ boxes_w, boxes_h = (boxes[:, 2] - boxes[:, 0]).numpy(), (
+ boxes[:, 3] - boxes[:, 1]).numpy()
+
+ keep = (boxes_w > 1.) & (boxes_h > 1.)
+
+ if (keep.sum() == 0):
+ bboxes = paddle.zeros([1, 6]).astype("float32")
+ else:
+ boxes = paddle.to_tensor(boxes.numpy()[keep]).astype("float32")
+ classes = paddle.to_tensor(classes.numpy()[keep]).astype(
+ "float32").unsqueeze(-1)
+ scores = paddle.to_tensor(scores.numpy()[keep]).astype(
+ "float32").unsqueeze(-1)
+
+ bboxes = paddle.concat([classes, scores, boxes], axis=-1)
+
+ boxes_final.append(bboxes)
+ bbox_num[i] = bboxes.shape[0]
+
+ bbox_pred = paddle.concat(boxes_final)
+ return bbox_pred, bbox_num
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__init__.py
new file mode 100644
index 000000000..9fb518f2a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__init__.py
@@ -0,0 +1,2 @@
+from . import rpn_head
+from .rpn_head import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..3e5c82d31
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/anchor_generator.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/anchor_generator.cpython-37.pyc
new file mode 100644
index 000000000..db9b88dd8
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/anchor_generator.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/proposal_generator.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/proposal_generator.cpython-37.pyc
new file mode 100644
index 000000000..950055728
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/proposal_generator.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/rpn_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/rpn_head.cpython-37.pyc
new file mode 100644
index 000000000..797bb0a85
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/rpn_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target.cpython-37.pyc
new file mode 100644
index 000000000..8d5c58349
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target_layer.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target_layer.cpython-37.pyc
new file mode 100644
index 000000000..79ee00500
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/__pycache__/target_layer.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/anchor_generator.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/anchor_generator.py
new file mode 100644
index 000000000..34f03c0ef
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/anchor_generator.py
@@ -0,0 +1,131 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on
+# https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/anchor_generator.py
+
+import math
+
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register
+
+
+@register
+class AnchorGenerator(nn.Layer):
+ """
+ Generate anchors according to the feature maps
+
+ Args:
+ anchor_sizes (list[float] | list[list[float]]): The anchor sizes at
+ each feature point. list[float] means all feature levels share the
+ same sizes. list[list[float]] means the anchor sizes for
+ each level. The sizes stand for the scale of input size.
+ aspect_ratios (list[float] | list[list[float]]): The aspect ratios at
+ each feature point. list[float] means all feature levels share the
+ same ratios. list[list[float]] means the aspect ratios for
+ each level.
+ strides (list[float]): The strides of feature maps which generate
+ anchors
+ offset (float): The offset of the coordinate of anchors, default 0.
+
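+    Example (a minimal sketch; the level count and feature shapes below are
+    illustrative assumptions, not part of the original code):
+
+    .. code-block:: python
+
+        anchor_gen = AnchorGenerator(anchor_sizes=[[32.], [64.]],
+                                     aspect_ratios=[0.5, 1.0, 2.0],
+                                     strides=[8., 16.])
+        feats = [paddle.rand([1, 256, 32, 32]),
+                 paddle.rand([1, 256, 16, 16])]
+        anchors = anchor_gen(feats)  # one [H*W*3, 4] tensor per level
+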
+ """
+
+ def __init__(self,
+ anchor_sizes=[32, 64, 128, 256, 512],
+ aspect_ratios=[0.5, 1.0, 2.0],
+ strides=[16.0],
+ variance=[1.0, 1.0, 1.0, 1.0],
+ offset=0.):
+ super(AnchorGenerator, self).__init__()
+ self.anchor_sizes = anchor_sizes
+ self.aspect_ratios = aspect_ratios
+ self.strides = strides
+ self.variance = variance
+ self.cell_anchors = self._calculate_anchors(len(strides))
+ self.offset = offset
+
+ def _broadcast_params(self, params, num_features):
+ if not isinstance(params[0], (list, tuple)): # list[float]
+ return [params] * num_features
+ if len(params) == 1:
+ return list(params) * num_features
+ return params
+
+ def generate_cell_anchors(self, sizes, aspect_ratios):
+ anchors = []
+ for size in sizes:
+ area = size**2.0
+ for aspect_ratio in aspect_ratios:
+ w = math.sqrt(area / aspect_ratio)
+ h = aspect_ratio * w
+ x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0
+ anchors.append([x0, y0, x1, y1])
+ return paddle.to_tensor(anchors, dtype='float32')
+
+ def _calculate_anchors(self, num_features):
+ sizes = self._broadcast_params(self.anchor_sizes, num_features)
+ aspect_ratios = self._broadcast_params(self.aspect_ratios, num_features)
+ cell_anchors = [
+ self.generate_cell_anchors(s, a)
+ for s, a in zip(sizes, aspect_ratios)
+ ]
+ [
+ self.register_buffer(
+ t.name, t, persistable=False) for t in cell_anchors
+ ]
+ return cell_anchors
+
+ def _create_grid_offsets(self, size, stride, offset):
+ grid_height, grid_width = size[0], size[1]
+ shifts_x = paddle.arange(
+ offset * stride, grid_width * stride, step=stride, dtype='float32')
+ shifts_y = paddle.arange(
+ offset * stride, grid_height * stride, step=stride, dtype='float32')
+ shift_y, shift_x = paddle.meshgrid(shifts_y, shifts_x)
+ shift_x = paddle.reshape(shift_x, [-1])
+ shift_y = paddle.reshape(shift_y, [-1])
+ return shift_x, shift_y
+
+ def _grid_anchors(self, grid_sizes):
+ anchors = []
+ for size, stride, base_anchors in zip(grid_sizes, self.strides,
+ self.cell_anchors):
+ shift_x, shift_y = self._create_grid_offsets(size, stride,
+ self.offset)
+ shifts = paddle.stack((shift_x, shift_y, shift_x, shift_y), axis=1)
+ shifts = paddle.reshape(shifts, [-1, 1, 4])
+ base_anchors = paddle.reshape(base_anchors, [1, -1, 4])
+
+ anchors.append(paddle.reshape(shifts + base_anchors, [-1, 4]))
+
+ return anchors
+
+ def forward(self, input):
+ grid_sizes = [paddle.shape(feature_map)[-2:] for feature_map in input]
+ anchors_over_all_feature_maps = self._grid_anchors(grid_sizes)
+ return anchors_over_all_feature_maps
+
+ @property
+ def num_anchors(self):
+ """
+ Returns:
+ int: number of anchors at every pixel
+ location, on that feature map.
+ For example, if at every pixel we use anchors of 3 aspect
+ ratios and 5 sizes, the number of anchors is 15.
+ For FPN models, `num_anchors` on every feature map is the same.
+ """
+ return len(self.cell_anchors[0])
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/proposal_generator.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/proposal_generator.py
new file mode 100644
index 000000000..1fcb8b1e2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/proposal_generator.py
@@ -0,0 +1,77 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+
+from ppdet.core.workspace import register, serializable
+from .. import ops
+
+
+@register
+@serializable
+class ProposalGenerator(object):
+ """
+ Proposal generation module
+
+ For more details, please refer to the document of generate_proposals
+    in ppdet/modeling/ops.py
+
+ Args:
+ pre_nms_top_n (int): Number of total bboxes to be kept per
+ image before NMS. default 6000
+ post_nms_top_n (int): Number of total bboxes to be kept per
+ image after NMS. default 1000
+ nms_thresh (float): Threshold in NMS. default 0.5
+        min_size (float): Remove predicted boxes with either height or
+ width < min_size. default 0.1
+ eta (float): Apply in adaptive NMS, if adaptive `threshold > 0.5`,
+ `adaptive_threshold = adaptive_threshold * eta` in each iteration.
+ default 1.
+ topk_after_collect (bool): whether to adopt topk after batch
+ collection. If topk_after_collect is true, box filter will not be
+ used after NMS at each image in proposal generation. default false
+ """
+
+ def __init__(self,
+ pre_nms_top_n=12000,
+ post_nms_top_n=2000,
+ nms_thresh=.5,
+ min_size=.1,
+ eta=1.,
+ topk_after_collect=False):
+ super(ProposalGenerator, self).__init__()
+ self.pre_nms_top_n = pre_nms_top_n
+ self.post_nms_top_n = post_nms_top_n
+ self.nms_thresh = nms_thresh
+ self.min_size = min_size
+ self.eta = eta
+ self.topk_after_collect = topk_after_collect
+
+ def __call__(self, scores, bbox_deltas, anchors, im_shape):
+
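+        # With topk_after_collect=True, keep the larger pre_nms_top_n here and
+        # defer the final post_nms_top_n cut until proposals from all levels
+        # are collected (post_nms_top_n is returned for that purpose).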
+ top_n = self.pre_nms_top_n if self.topk_after_collect else self.post_nms_top_n
+ variances = paddle.ones_like(anchors)
+ rpn_rois, rpn_rois_prob, rpn_rois_num = ops.generate_proposals(
+ scores,
+ bbox_deltas,
+ im_shape,
+ anchors,
+ variances,
+ pre_nms_top_n=self.pre_nms_top_n,
+ post_nms_top_n=top_n,
+ nms_thresh=self.nms_thresh,
+ min_size=self.min_size,
+ eta=self.eta,
+ return_rois_num=True)
+ return rpn_rois, rpn_rois_prob, rpn_rois_num, self.post_nms_top_n
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/rpn_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/rpn_head.py
new file mode 100644
index 000000000..1664d7839
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/rpn_head.py
@@ -0,0 +1,259 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal
+
+from ppdet.core.workspace import register
+from .anchor_generator import AnchorGenerator
+from .target_layer import RPNTargetAssign
+from .proposal_generator import ProposalGenerator
+
+
+class RPNFeat(nn.Layer):
+ """
+ Feature extraction in RPN head
+
+ Args:
+ in_channel (int): Input channel
+ out_channel (int): Output channel
+ """
+
+ def __init__(self, in_channel=1024, out_channel=1024):
+ super(RPNFeat, self).__init__()
+ # rpn feat is shared with each level
+ self.rpn_conv = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=out_channel,
+ kernel_size=3,
+ padding=1,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0., std=0.01)))
+ self.rpn_conv.skip_quant = True
+
+ def forward(self, feats):
+ rpn_feats = []
+ for feat in feats:
+ rpn_feats.append(F.relu(self.rpn_conv(feat)))
+ return rpn_feats
+
+
+@register
+class RPNHead(nn.Layer):
+ """
+ Region Proposal Network
+
+ Args:
+ anchor_generator (dict): configure of anchor generation
+ rpn_target_assign (dict): configure of rpn targets assignment
+ train_proposal (dict): configure of proposals generation
+ at the stage of training
+ test_proposal (dict): configure of proposals generation
+ at the stage of prediction
+ in_channel (int): channel of input feature maps which can be
+ derived by from_config
+ """
+
+ def __init__(self,
+ anchor_generator=AnchorGenerator().__dict__,
+ rpn_target_assign=RPNTargetAssign().__dict__,
+ train_proposal=ProposalGenerator(12000, 2000).__dict__,
+ test_proposal=ProposalGenerator().__dict__,
+ in_channel=1024):
+ super(RPNHead, self).__init__()
+ self.anchor_generator = anchor_generator
+ self.rpn_target_assign = rpn_target_assign
+ self.train_proposal = train_proposal
+ self.test_proposal = test_proposal
+ if isinstance(anchor_generator, dict):
+ self.anchor_generator = AnchorGenerator(**anchor_generator)
+ if isinstance(rpn_target_assign, dict):
+ self.rpn_target_assign = RPNTargetAssign(**rpn_target_assign)
+ if isinstance(train_proposal, dict):
+ self.train_proposal = ProposalGenerator(**train_proposal)
+ if isinstance(test_proposal, dict):
+ self.test_proposal = ProposalGenerator(**test_proposal)
+
+ num_anchors = self.anchor_generator.num_anchors
+ self.rpn_feat = RPNFeat(in_channel, in_channel)
+ # rpn head is shared with each level
+ # rpn roi classification scores
+ self.rpn_rois_score = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=num_anchors,
+ kernel_size=1,
+ padding=0,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0., std=0.01)))
+ self.rpn_rois_score.skip_quant = True
+
+ # rpn roi bbox regression deltas
+ self.rpn_rois_delta = nn.Conv2D(
+ in_channels=in_channel,
+ out_channels=4 * num_anchors,
+ kernel_size=1,
+ padding=0,
+ weight_attr=paddle.ParamAttr(initializer=Normal(
+ mean=0., std=0.01)))
+ self.rpn_rois_delta.skip_quant = True
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ # FPN share same rpn head
+ if isinstance(input_shape, (list, tuple)):
+ input_shape = input_shape[0]
+ return {'in_channel': input_shape.channels}
+
+ def forward(self, feats, inputs):
+ rpn_feats = self.rpn_feat(feats)
+ scores = []
+ deltas = []
+
+ for rpn_feat in rpn_feats:
+ rrs = self.rpn_rois_score(rpn_feat)
+ rrd = self.rpn_rois_delta(rpn_feat)
+ scores.append(rrs)
+ deltas.append(rrd)
+
+ anchors = self.anchor_generator(rpn_feats)
+
+ rois, rois_num = self._gen_proposal(scores, deltas, anchors, inputs)
+ if self.training:
+ loss = self.get_loss(scores, deltas, anchors, inputs)
+ return rois, rois_num, loss
+ else:
+ return rois, rois_num, None
+
+ def _gen_proposal(self, scores, bbox_deltas, anchors, inputs):
+ """
+ scores (list[Tensor]): Multi-level scores prediction
+ bbox_deltas (list[Tensor]): Multi-level deltas prediction
+ anchors (list[Tensor]): Multi-level anchors
+ inputs (dict): ground truth info
+ """
+ prop_gen = self.train_proposal if self.training else self.test_proposal
+ im_shape = inputs['im_shape']
+
+ # Collect multi-level proposals for each batch
+ # Get 'topk' of them as final output
+ bs_rois_collect = []
+ bs_rois_num_collect = []
+ batch_size = paddle.slice(paddle.shape(im_shape), [0], [0], [1])
+
+ # Generate proposals for each level and each batch.
+ # Discard batch-computing to avoid sorting bbox cross different batches.
+ for i in range(batch_size):
+ rpn_rois_list = []
+ rpn_prob_list = []
+ rpn_rois_num_list = []
+
+ for rpn_score, rpn_delta, anchor in zip(scores, bbox_deltas,
+ anchors):
+ rpn_rois, rpn_rois_prob, rpn_rois_num, post_nms_top_n = prop_gen(
+ scores=rpn_score[i:i + 1],
+ bbox_deltas=rpn_delta[i:i + 1],
+ anchors=anchor,
+ im_shape=im_shape[i:i + 1])
+ if rpn_rois.shape[0] > 0:
+ rpn_rois_list.append(rpn_rois)
+ rpn_prob_list.append(rpn_rois_prob)
+ rpn_rois_num_list.append(rpn_rois_num)
+
+ if len(scores) > 1:
+ rpn_rois = paddle.concat(rpn_rois_list)
+ rpn_prob = paddle.concat(rpn_prob_list).flatten()
+
+ if rpn_prob.shape[0] > post_nms_top_n:
+ topk_prob, topk_inds = paddle.topk(rpn_prob, post_nms_top_n)
+ topk_rois = paddle.gather(rpn_rois, topk_inds)
+ else:
+ topk_rois = rpn_rois
+ topk_prob = rpn_prob
+ else:
+ topk_rois = rpn_rois_list[0]
+ topk_prob = rpn_prob_list[0].flatten()
+
+ bs_rois_collect.append(topk_rois)
+ bs_rois_num_collect.append(paddle.shape(topk_rois)[0])
+
+ bs_rois_num_collect = paddle.concat(bs_rois_num_collect)
+
+ return bs_rois_collect, bs_rois_num_collect
+
+ def get_loss(self, pred_scores, pred_deltas, anchors, inputs):
+ """
+ pred_scores (list[Tensor]): Multi-level scores prediction
+ pred_deltas (list[Tensor]): Multi-level deltas prediction
+ anchors (list[Tensor]): Multi-level anchors
+ inputs (dict): ground truth info, including im, gt_bbox, gt_score
+ """
+ anchors = [paddle.reshape(a, shape=(-1, 4)) for a in anchors]
+ anchors = paddle.concat(anchors)
+
+ scores = [
+ paddle.reshape(
+ paddle.transpose(
+ v, perm=[0, 2, 3, 1]),
+ shape=(v.shape[0], -1, 1)) for v in pred_scores
+ ]
+ scores = paddle.concat(scores, axis=1)
+
+ deltas = [
+ paddle.reshape(
+ paddle.transpose(
+ v, perm=[0, 2, 3, 1]),
+ shape=(v.shape[0], -1, 4)) for v in pred_deltas
+ ]
+ deltas = paddle.concat(deltas, axis=1)
+
+ score_tgt, bbox_tgt, loc_tgt, norm = self.rpn_target_assign(inputs,
+ anchors)
+
+ scores = paddle.reshape(x=scores, shape=(-1, ))
+ deltas = paddle.reshape(x=deltas, shape=(-1, 4))
+
+ score_tgt = paddle.concat(score_tgt)
+ score_tgt.stop_gradient = True
+
+ pos_mask = score_tgt == 1
+ pos_ind = paddle.nonzero(pos_mask)
+
+ valid_mask = score_tgt >= 0
+ valid_ind = paddle.nonzero(valid_mask)
+
+ # cls loss
+ if valid_ind.shape[0] == 0:
+ loss_rpn_cls = paddle.zeros([1], dtype='float32')
+ else:
+ score_pred = paddle.gather(scores, valid_ind)
+ score_label = paddle.gather(score_tgt, valid_ind).cast('float32')
+ score_label.stop_gradient = True
+ loss_rpn_cls = F.binary_cross_entropy_with_logits(
+ logit=score_pred, label=score_label, reduction="sum")
+
+ # reg loss
+ if pos_ind.shape[0] == 0:
+ loss_rpn_reg = paddle.zeros([1], dtype='float32')
+ else:
+ loc_pred = paddle.gather(deltas, pos_ind)
+ loc_tgt = paddle.concat(loc_tgt)
+ loc_tgt = paddle.gather(loc_tgt, pos_ind)
+ loc_tgt.stop_gradient = True
+ loss_rpn_reg = paddle.abs(loc_pred - loc_tgt).sum()
+ return {
+ 'loss_rpn_cls': loss_rpn_cls / norm,
+ 'loss_rpn_reg': loss_rpn_reg / norm
+ }
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target.py
new file mode 100644
index 000000000..af83cfdb8
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target.py
@@ -0,0 +1,675 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+import paddle
+from ..bbox_utils import bbox2delta, bbox_overlaps
+
+
+def rpn_anchor_target(anchors,
+ gt_boxes,
+ rpn_batch_size_per_im,
+ rpn_positive_overlap,
+ rpn_negative_overlap,
+ rpn_fg_fraction,
+ use_random=True,
+ batch_size=1,
+ ignore_thresh=-1,
+ is_crowd=None,
+ weights=[1., 1., 1., 1.],
+ assign_on_cpu=False):
+ tgt_labels = []
+ tgt_bboxes = []
+ tgt_deltas = []
+ for i in range(batch_size):
+ gt_bbox = gt_boxes[i]
+ is_crowd_i = is_crowd[i] if is_crowd else None
+ # Step1: match anchor and gt_bbox
+ matches, match_labels = label_box(
+ anchors, gt_bbox, rpn_positive_overlap, rpn_negative_overlap, True,
+ ignore_thresh, is_crowd_i, assign_on_cpu)
+ # Step2: sample anchor
+ fg_inds, bg_inds = subsample_labels(match_labels, rpn_batch_size_per_im,
+ rpn_fg_fraction, 0, use_random)
+ # Fill with the ignore label (-1), then set positive and negative labels
+ labels = paddle.full(match_labels.shape, -1, dtype='int32')
+ if bg_inds.shape[0] > 0:
+ labels = paddle.scatter(labels, bg_inds, paddle.zeros_like(bg_inds))
+ if fg_inds.shape[0] > 0:
+ labels = paddle.scatter(labels, fg_inds, paddle.ones_like(fg_inds))
+ # Step3: make output
+ if gt_bbox.shape[0] == 0:
+ matched_gt_boxes = paddle.zeros([0, 4])
+ tgt_delta = paddle.zeros([0, 4])
+ else:
+ matched_gt_boxes = paddle.gather(gt_bbox, matches)
+ tgt_delta = bbox2delta(anchors, matched_gt_boxes, weights)
+ matched_gt_boxes.stop_gradient = True
+ tgt_delta.stop_gradient = True
+ labels.stop_gradient = True
+ tgt_labels.append(labels)
+ tgt_bboxes.append(matched_gt_boxes)
+ tgt_deltas.append(tgt_delta)
+
+ return tgt_labels, tgt_bboxes, tgt_deltas
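+
+# Note on the target layout (a summary of the code above): for one image with
+# A anchors, `label_box` returns `matches` [A] (the index of the best-matching
+# gt box per anchor) and `match_labels` [A] with values in {-1: ignore,
+# 0: background, 1: foreground}; `subsample_labels` then caps the number of
+# sampled anchors at `rpn_batch_size_per_im`, with at most `rpn_fg_fraction`
+# of them labeled foreground.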
+
+
+def label_box(anchors,
+ gt_boxes,
+ positive_overlap,
+ negative_overlap,
+ allow_low_quality,
+ ignore_thresh,
+ is_crowd=None,
+ assign_on_cpu=False):
+ if assign_on_cpu:
+ paddle.set_device("cpu")
+ iou = bbox_overlaps(gt_boxes, anchors)
+ paddle.set_device("gpu")
+ else:
+ iou = bbox_overlaps(gt_boxes, anchors)
+ n_gt = gt_boxes.shape[0]
+ if n_gt == 0 or is_crowd is None:
+ n_gt_crowd = 0
+ else:
+ n_gt_crowd = paddle.nonzero(is_crowd).shape[0]
+ if iou.shape[0] == 0 or n_gt_crowd == n_gt:
+ # No truth, assign everything to background
+ default_matches = paddle.full((iou.shape[1], ), 0, dtype='int64')
+ default_match_labels = paddle.full((iou.shape[1], ), 0, dtype='int32')
+ return default_matches, default_match_labels
+    # if ignore_thresh > 0, ignore the anchor if it is close to
+    # one of the crowded ground-truths
+ if n_gt_crowd > 0:
+ N_a = anchors.shape[0]
+ ones = paddle.ones([N_a])
+ mask = is_crowd * ones
+
+ if ignore_thresh > 0:
+ crowd_iou = iou * mask
+ valid = (paddle.sum((crowd_iou > ignore_thresh).cast('int32'),
+ axis=0) > 0).cast('float32')
+ iou = iou * (1 - valid) - valid
+
+ # ignore the iou between anchor and crowded ground-truth
+ iou = iou * (1 - mask) - mask
+
+ matched_vals, matches = paddle.topk(iou, k=1, axis=0)
+ match_labels = paddle.full(matches.shape, -1, dtype='int32')
+ # set ignored anchor with iou = -1
+ neg_cond = paddle.logical_and(matched_vals > -1,
+ matched_vals < negative_overlap)
+ match_labels = paddle.where(neg_cond,
+ paddle.zeros_like(match_labels), match_labels)
+ match_labels = paddle.where(matched_vals >= positive_overlap,
+ paddle.ones_like(match_labels), match_labels)
+ if allow_low_quality:
+ highest_quality_foreach_gt = iou.max(axis=1, keepdim=True)
+ pred_inds_with_highest_quality = paddle.logical_and(
+ iou > 0, iou == highest_quality_foreach_gt).cast('int32').sum(
+ 0, keepdim=True)
+ match_labels = paddle.where(pred_inds_with_highest_quality > 0,
+ paddle.ones_like(match_labels),
+ match_labels)
+
+ matches = matches.flatten()
+ match_labels = match_labels.flatten()
+
+ return matches, match_labels
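+
+# Note on the crowd handling above: `iou * (1 - valid) - valid` sets the IoU
+# column of every anchor that overlaps a crowd gt by more than `ignore_thresh`
+# to -1, and `iou * (1 - mask) - mask` removes the crowd rows from matching;
+# because the background condition requires `matched_vals > -1`, such anchors
+# keep the ignore label (-1) instead of being labeled background.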
+
+
+def subsample_labels(labels,
+ num_samples,
+ fg_fraction,
+ bg_label=0,
+ use_random=True):
+ positive = paddle.nonzero(
+ paddle.logical_and(labels != -1, labels != bg_label))
+ negative = paddle.nonzero(labels == bg_label)
+
+ fg_num = int(num_samples * fg_fraction)
+ fg_num = min(positive.numel(), fg_num)
+ bg_num = num_samples - fg_num
+ bg_num = min(negative.numel(), bg_num)
+ if fg_num == 0 and bg_num == 0:
+ fg_inds = paddle.zeros([0], dtype='int32')
+ bg_inds = paddle.zeros([0], dtype='int32')
+ return fg_inds, bg_inds
+
+ # randomly select positive and negative examples
+
+ negative = negative.cast('int32').flatten()
+ bg_perm = paddle.randperm(negative.numel(), dtype='int32')
+ bg_perm = paddle.slice(bg_perm, axes=[0], starts=[0], ends=[bg_num])
+ if use_random:
+ bg_inds = paddle.gather(negative, bg_perm)
+ else:
+ bg_inds = paddle.slice(negative, axes=[0], starts=[0], ends=[bg_num])
+ if fg_num == 0:
+ fg_inds = paddle.zeros([0], dtype='int32')
+ return fg_inds, bg_inds
+
+ positive = positive.cast('int32').flatten()
+ fg_perm = paddle.randperm(positive.numel(), dtype='int32')
+ fg_perm = paddle.slice(fg_perm, axes=[0], starts=[0], ends=[fg_num])
+ if use_random:
+ fg_inds = paddle.gather(positive, fg_perm)
+ else:
+ fg_inds = paddle.slice(positive, axes=[0], starts=[0], ends=[fg_num])
+
+ return fg_inds, bg_inds
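+
+# Minimal usage sketch (illustrative values, not taken from the pipeline):
+#   labels = paddle.to_tensor([1, 0, -1, 0, 1], dtype='int32')
+#   fg_inds, bg_inds = subsample_labels(labels, num_samples=4, fg_fraction=0.5)
+#   # yields at most 2 foreground indices (labels != -1 and != bg_label) and
+#   # fills the remainder of the 4 samples with background indices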
+
+
+def generate_proposal_target(rpn_rois,
+ gt_classes,
+ gt_boxes,
+ batch_size_per_im,
+ fg_fraction,
+ fg_thresh,
+ bg_thresh,
+ num_classes,
+ ignore_thresh=-1.,
+ is_crowd=None,
+ use_random=True,
+ is_cascade=False,
+ cascade_iou=0.5,
+ assign_on_cpu=False):
+
+ rois_with_gt = []
+ tgt_labels = []
+ tgt_bboxes = []
+ tgt_gt_inds = []
+ new_rois_num = []
+
+ # In cascade rcnn, the threshold for foreground and background
+ # is used from cascade_iou
+ fg_thresh = cascade_iou if is_cascade else fg_thresh
+ bg_thresh = cascade_iou if is_cascade else bg_thresh
+ for i, rpn_roi in enumerate(rpn_rois):
+ gt_bbox = gt_boxes[i]
+ is_crowd_i = is_crowd[i] if is_crowd else None
+ gt_class = paddle.squeeze(gt_classes[i], axis=-1)
+
+        # Concat RoIs and gt boxes, except in cascade rcnn or when there is no gt
+ if not is_cascade and gt_bbox.shape[0] > 0:
+ bbox = paddle.concat([rpn_roi, gt_bbox])
+ else:
+ bbox = rpn_roi
+
+ # Step1: label bbox
+ matches, match_labels = label_box(bbox, gt_bbox, fg_thresh, bg_thresh,
+ False, ignore_thresh, is_crowd_i,
+ assign_on_cpu)
+ # Step2: sample bbox
+ sampled_inds, sampled_gt_classes = sample_bbox(
+ matches, match_labels, gt_class, batch_size_per_im, fg_fraction,
+ num_classes, use_random, is_cascade)
+
+ # Step3: make output
+ rois_per_image = bbox if is_cascade else paddle.gather(bbox,
+ sampled_inds)
+ sampled_gt_ind = matches if is_cascade else paddle.gather(matches,
+ sampled_inds)
+ if gt_bbox.shape[0] > 0:
+ sampled_bbox = paddle.gather(gt_bbox, sampled_gt_ind)
+ else:
+ num = rois_per_image.shape[0]
+ sampled_bbox = paddle.zeros([num, 4], dtype='float32')
+
+ rois_per_image.stop_gradient = True
+ sampled_gt_ind.stop_gradient = True
+ sampled_bbox.stop_gradient = True
+ tgt_labels.append(sampled_gt_classes)
+ tgt_bboxes.append(sampled_bbox)
+ rois_with_gt.append(rois_per_image)
+ tgt_gt_inds.append(sampled_gt_ind)
+ new_rois_num.append(paddle.shape(sampled_inds)[0])
+ new_rois_num = paddle.concat(new_rois_num)
+ return rois_with_gt, tgt_labels, tgt_bboxes, tgt_gt_inds, new_rois_num
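+
+# Note: in the cascade case (`is_cascade=True`) the fg/bg thresholds of the
+# current stage come from `cascade_iou`, the gt boxes are not concatenated to
+# the RoIs, and no subsampling happens (`sample_bbox` returns an index over
+# all boxes), so every cascade stage refines the full RoI set.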
+
+
+def sample_bbox(matches,
+ match_labels,
+ gt_classes,
+ batch_size_per_im,
+ fg_fraction,
+ num_classes,
+ use_random=True,
+ is_cascade=False):
+
+ n_gt = gt_classes.shape[0]
+ if n_gt == 0:
+ # No truth, assign everything to background
+ gt_classes = paddle.ones(matches.shape, dtype='int32') * num_classes
+ #return matches, match_labels + num_classes
+ else:
+ gt_classes = paddle.gather(gt_classes, matches)
+ gt_classes = paddle.where(match_labels == 0,
+ paddle.ones_like(gt_classes) * num_classes,
+ gt_classes)
+ gt_classes = paddle.where(match_labels == -1,
+ paddle.ones_like(gt_classes) * -1, gt_classes)
+ if is_cascade:
+ index = paddle.arange(matches.shape[0])
+ return index, gt_classes
+ rois_per_image = int(batch_size_per_im)
+
+ fg_inds, bg_inds = subsample_labels(gt_classes, rois_per_image, fg_fraction,
+ num_classes, use_random)
+ if fg_inds.shape[0] == 0 and bg_inds.shape[0] == 0:
+ # fake output labeled with -1 when all boxes are neither
+ # foreground nor background
+ sampled_inds = paddle.zeros([1], dtype='int32')
+ else:
+ sampled_inds = paddle.concat([fg_inds, bg_inds])
+ sampled_gt_classes = paddle.gather(gt_classes, sampled_inds)
+ return sampled_inds, sampled_gt_classes
+
+
+def polygons_to_mask(polygons, height, width):
+ """
+ Convert the polygons to mask format
+
+ Args:
+ polygons (list[ndarray]): each array has shape (Nx2,)
+ height (int): mask height
+ width (int): mask width
+ Returns:
+ ndarray: a bool mask of shape (height, width)
+ """
+ import pycocotools.mask as mask_util
+ assert len(polygons) > 0, "COCOAPI does not support empty polygons"
+ rles = mask_util.frPyObjects(polygons, height, width)
+ rle = mask_util.merge(rles)
+    return mask_util.decode(rle).astype(bool)
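+
+# Usage sketch (a hypothetical 4x4 square polygon; assumes pycocotools is
+# installed):
+#   poly = [np.array([0., 0., 4., 0., 4., 4., 0., 4.])]
+#   mask = polygons_to_mask(poly, height=8, width=8)  # bool ndarray, (8, 8)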
+
+
+def rasterize_polygons_within_box(poly, box, resolution):
+ w, h = box[2] - box[0], box[3] - box[1]
+ polygons = [np.asarray(p, dtype=np.float64) for p in poly]
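+    # 1. Shift the polygons so the box's top-left corner becomes the origin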
+ for p in polygons:
+ p[0::2] = p[0::2] - box[0]
+ p[1::2] = p[1::2] - box[1]
+
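+    # 2. Rescale the polygons from box size to the target resolution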
+ ratio_h = resolution / max(h, 0.1)
+ ratio_w = resolution / max(w, 0.1)
+
+ if ratio_h == ratio_w:
+ for p in polygons:
+ p *= ratio_h
+ else:
+ for p in polygons:
+ p[0::2] *= ratio_w
+ p[1::2] *= ratio_h
+
+ # 3. Rasterize the polygons with coco api
+ mask = polygons_to_mask(polygons, resolution, resolution)
+ mask = paddle.to_tensor(mask, dtype='int32')
+ return mask
+
+
+def generate_mask_target(gt_segms, rois, labels_int32, sampled_gt_inds,
+ num_classes, resolution):
+ mask_rois = []
+ mask_rois_num = []
+ tgt_masks = []
+ tgt_classes = []
+ mask_index = []
+ tgt_weights = []
+ for k in range(len(rois)):
+ labels_per_im = labels_int32[k]
+ # select rois labeled with foreground
+ fg_inds = paddle.nonzero(
+ paddle.logical_and(labels_per_im != -1, labels_per_im !=
+ num_classes))
+ has_fg = True
+ # generate fake roi if foreground is empty
+ if fg_inds.numel() == 0:
+ has_fg = False
+ fg_inds = paddle.ones([1], dtype='int32')
+ inds_per_im = sampled_gt_inds[k]
+ inds_per_im = paddle.gather(inds_per_im, fg_inds)
+
+ rois_per_im = rois[k]
+ fg_rois = paddle.gather(rois_per_im, fg_inds)
+ # Copy the foreground roi to cpu
+ # to generate mask target with ground-truth
+ boxes = fg_rois.numpy()
+ gt_segms_per_im = gt_segms[k]
+
+ new_segm = []
+ inds_per_im = inds_per_im.numpy()
+ if len(gt_segms_per_im) > 0:
+ for i in inds_per_im:
+ new_segm.append(gt_segms_per_im[i])
+ fg_inds_new = fg_inds.reshape([-1]).numpy()
+ results = []
+ if len(gt_segms_per_im) > 0:
+ for j in fg_inds_new:
+ results.append(
+ rasterize_polygons_within_box(new_segm[j], boxes[j],
+ resolution))
+ else:
+ results.append(paddle.ones([resolution, resolution], dtype='int32'))
+
+ fg_classes = paddle.gather(labels_per_im, fg_inds)
+ weight = paddle.ones([fg_rois.shape[0]], dtype='float32')
+ if not has_fg:
+ # now all sampled classes are background
+ # which will cause error in loss calculation,
+ # make fake classes with weight of 0.
+ fg_classes = paddle.zeros([1], dtype='int32')
+ weight = weight - 1
+ tgt_mask = paddle.stack(results)
+ tgt_mask.stop_gradient = True
+ fg_rois.stop_gradient = True
+
+ mask_index.append(fg_inds)
+ mask_rois.append(fg_rois)
+ mask_rois_num.append(paddle.shape(fg_rois)[0])
+ tgt_classes.append(fg_classes)
+ tgt_masks.append(tgt_mask)
+ tgt_weights.append(weight)
+
+ mask_index = paddle.concat(mask_index)
+ mask_rois_num = paddle.concat(mask_rois_num)
+ tgt_classes = paddle.concat(tgt_classes, axis=0)
+ tgt_masks = paddle.concat(tgt_masks, axis=0)
+ tgt_weights = paddle.concat(tgt_weights, axis=0)
+
+ return mask_rois, mask_rois_num, tgt_classes, tgt_masks, mask_index, tgt_weights
+
+
+def libra_sample_pos(max_overlaps, max_classes, pos_inds, num_expected):
+ if len(pos_inds) <= num_expected:
+ return pos_inds
+ else:
+ unique_gt_inds = np.unique(max_classes[pos_inds])
+ num_gts = len(unique_gt_inds)
+ num_per_gt = int(round(num_expected / float(num_gts)) + 1)
+
+ sampled_inds = []
+ for i in unique_gt_inds:
+ inds = np.nonzero(max_classes == i)[0]
+ before_len = len(inds)
+ inds = list(set(inds) & set(pos_inds))
+ after_len = len(inds)
+ if len(inds) > num_per_gt:
+ inds = np.random.choice(inds, size=num_per_gt, replace=False)
+ sampled_inds.extend(list(inds)) # combine as a new sampler
+ if len(sampled_inds) < num_expected:
+ num_extra = num_expected - len(sampled_inds)
+ extra_inds = np.array(list(set(pos_inds) - set(sampled_inds)))
+ assert len(sampled_inds) + len(extra_inds) == len(pos_inds), \
+ "sum of sampled_inds({}) and extra_inds({}) length must be equal with pos_inds({})!".format(
+ len(sampled_inds), len(extra_inds), len(pos_inds))
+ if len(extra_inds) > num_extra:
+ extra_inds = np.random.choice(
+ extra_inds, size=num_extra, replace=False)
+ sampled_inds.extend(extra_inds.tolist())
+ elif len(sampled_inds) > num_expected:
+ sampled_inds = np.random.choice(
+ sampled_inds, size=num_expected, replace=False)
+ return paddle.to_tensor(sampled_inds)
+
+
+def libra_sample_via_interval(max_overlaps, full_set, num_expected, floor_thr,
+ num_bins, bg_thresh):
+ max_iou = max_overlaps.max()
+ iou_interval = (max_iou - floor_thr) / num_bins
+ per_num_expected = int(num_expected / num_bins)
+
+ sampled_inds = []
+ for i in range(num_bins):
+ start_iou = floor_thr + i * iou_interval
+ end_iou = floor_thr + (i + 1) * iou_interval
+
+ tmp_set = set(
+ np.where(
+ np.logical_and(max_overlaps >= start_iou, max_overlaps <
+ end_iou))[0])
+ tmp_inds = list(tmp_set & full_set)
+
+ if len(tmp_inds) > per_num_expected:
+ tmp_sampled_set = np.random.choice(
+ tmp_inds, size=per_num_expected, replace=False)
+ else:
+            tmp_sampled_set = np.array(tmp_inds, dtype=int)
+ sampled_inds.append(tmp_sampled_set)
+
+ sampled_inds = np.concatenate(sampled_inds)
+ if len(sampled_inds) < num_expected:
+ num_extra = num_expected - len(sampled_inds)
+ extra_inds = np.array(list(full_set - set(sampled_inds)))
+ assert len(sampled_inds) + len(extra_inds) == len(full_set), \
+ "sum of sampled_inds({}) and extra_inds({}) length must be equal with full_set({})!".format(
+ len(sampled_inds), len(extra_inds), len(full_set))
+
+ if len(extra_inds) > num_extra:
+ extra_inds = np.random.choice(extra_inds, num_extra, replace=False)
+ sampled_inds = np.concatenate([sampled_inds, extra_inds])
+
+ return sampled_inds
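+
+# Note: this implements the IoU-balanced negative sampling of Libra R-CNN:
+# the IoU range [floor_thr, max_iou) is split into `num_bins` equal intervals
+# and roughly num_expected / num_bins negatives are drawn from each, so hard
+# negatives (higher IoU) are not drowned out by the many easy ones.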
+
+
+def libra_sample_neg(max_overlaps,
+ max_classes,
+ neg_inds,
+ num_expected,
+ floor_thr=-1,
+ floor_fraction=0,
+ num_bins=3,
+ bg_thresh=0.5):
+ if len(neg_inds) <= num_expected:
+ return neg_inds
+ else:
+ # balance sampling for negative samples
+ neg_set = set(neg_inds.tolist())
+ if floor_thr > 0:
+ floor_set = set(
+ np.where(
+ np.logical_and(max_overlaps >= 0, max_overlaps < floor_thr))
+ [0])
+ iou_sampling_set = set(np.where(max_overlaps >= floor_thr)[0])
+ elif floor_thr == 0:
+ floor_set = set(np.where(max_overlaps == 0)[0])
+ iou_sampling_set = set(np.where(max_overlaps > floor_thr)[0])
+ else:
+ floor_set = set()
+ iou_sampling_set = set(np.where(max_overlaps > floor_thr)[0])
+ floor_thr = 0
+
+ floor_neg_inds = list(floor_set & neg_set)
+ iou_sampling_neg_inds = list(iou_sampling_set & neg_set)
+
+ num_expected_iou_sampling = int(num_expected * (1 - floor_fraction))
+ if len(iou_sampling_neg_inds) > num_expected_iou_sampling:
+ if num_bins >= 2:
+ iou_sampled_inds = libra_sample_via_interval(
+ max_overlaps,
+ set(iou_sampling_neg_inds), num_expected_iou_sampling,
+ floor_thr, num_bins, bg_thresh)
+ else:
+ iou_sampled_inds = np.random.choice(
+ iou_sampling_neg_inds,
+ size=num_expected_iou_sampling,
+ replace=False)
+ else:
+            iou_sampled_inds = np.array(iou_sampling_neg_inds, dtype=int)
+ num_expected_floor = num_expected - len(iou_sampled_inds)
+ if len(floor_neg_inds) > num_expected_floor:
+ sampled_floor_inds = np.random.choice(
+ floor_neg_inds, size=num_expected_floor, replace=False)
+ else:
+            sampled_floor_inds = np.array(floor_neg_inds, dtype=int)
+ sampled_inds = np.concatenate((sampled_floor_inds, iou_sampled_inds))
+ if len(sampled_inds) < num_expected:
+ num_extra = num_expected - len(sampled_inds)
+ extra_inds = np.array(list(neg_set - set(sampled_inds)))
+ if len(extra_inds) > num_extra:
+ extra_inds = np.random.choice(
+ extra_inds, size=num_extra, replace=False)
+ sampled_inds = np.concatenate((sampled_inds, extra_inds))
+ return paddle.to_tensor(sampled_inds)
+
+
+def libra_label_box(anchors, gt_boxes, gt_classes, positive_overlap,
+ negative_overlap, num_classes):
+ # TODO: use paddle API to speed up
+ gt_classes = gt_classes.numpy()
+ gt_overlaps = np.zeros((anchors.shape[0], num_classes))
+ matches = np.zeros((anchors.shape[0]), dtype=np.int32)
+ if len(gt_boxes) > 0:
+ proposal_to_gt_overlaps = bbox_overlaps(anchors, gt_boxes).numpy()
+ overlaps_argmax = proposal_to_gt_overlaps.argmax(axis=1)
+ overlaps_max = proposal_to_gt_overlaps.max(axis=1)
+        # Boxes with non-zero overlap with gt boxes
+ overlapped_boxes_ind = np.where(overlaps_max > 0)[0]
+ overlapped_boxes_gt_classes = gt_classes[overlaps_argmax[
+ overlapped_boxes_ind]]
+
+ for idx in range(len(overlapped_boxes_ind)):
+ gt_overlaps[overlapped_boxes_ind[idx], overlapped_boxes_gt_classes[
+ idx]] = overlaps_max[overlapped_boxes_ind[idx]]
+ matches[overlapped_boxes_ind[idx]] = overlaps_argmax[
+ overlapped_boxes_ind[idx]]
+
+ gt_overlaps = paddle.to_tensor(gt_overlaps)
+ matches = paddle.to_tensor(matches)
+
+ matched_vals = paddle.max(gt_overlaps, axis=1)
+ match_labels = paddle.full(matches.shape, -1, dtype='int32')
+ match_labels = paddle.where(matched_vals < negative_overlap,
+ paddle.zeros_like(match_labels), match_labels)
+ match_labels = paddle.where(matched_vals >= positive_overlap,
+ paddle.ones_like(match_labels), match_labels)
+
+ return matches, match_labels, matched_vals
+
+
+def libra_sample_bbox(matches,
+ match_labels,
+ matched_vals,
+ gt_classes,
+ batch_size_per_im,
+ num_classes,
+ fg_fraction,
+ fg_thresh,
+ bg_thresh,
+ num_bins,
+ use_random=True,
+ is_cascade_rcnn=False):
+ rois_per_image = int(batch_size_per_im)
+ fg_rois_per_im = int(np.round(fg_fraction * rois_per_image))
+ bg_rois_per_im = rois_per_image - fg_rois_per_im
+
+ if is_cascade_rcnn:
+ fg_inds = paddle.nonzero(matched_vals >= fg_thresh)
+ bg_inds = paddle.nonzero(matched_vals < bg_thresh)
+ else:
+ matched_vals_np = matched_vals.numpy()
+ match_labels_np = match_labels.numpy()
+
+ # sample fg
+ fg_inds = paddle.nonzero(matched_vals >= fg_thresh).flatten()
+ fg_nums = int(np.minimum(fg_rois_per_im, fg_inds.shape[0]))
+ if (fg_inds.shape[0] > fg_nums) and use_random:
+ fg_inds = libra_sample_pos(matched_vals_np, match_labels_np,
+ fg_inds.numpy(), fg_rois_per_im)
+ fg_inds = fg_inds[:fg_nums]
+
+ # sample bg
+ bg_inds = paddle.nonzero(matched_vals < bg_thresh).flatten()
+ bg_nums = int(np.minimum(rois_per_image - fg_nums, bg_inds.shape[0]))
+ if (bg_inds.shape[0] > bg_nums) and use_random:
+ bg_inds = libra_sample_neg(
+ matched_vals_np,
+ match_labels_np,
+ bg_inds.numpy(),
+ bg_rois_per_im,
+ num_bins=num_bins,
+ bg_thresh=bg_thresh)
+ bg_inds = bg_inds[:bg_nums]
+
+ sampled_inds = paddle.concat([fg_inds, bg_inds])
+
+ gt_classes = paddle.gather(gt_classes, matches)
+ gt_classes = paddle.where(match_labels == 0,
+ paddle.ones_like(gt_classes) * num_classes,
+ gt_classes)
+ gt_classes = paddle.where(match_labels == -1,
+ paddle.ones_like(gt_classes) * -1, gt_classes)
+ sampled_gt_classes = paddle.gather(gt_classes, sampled_inds)
+
+ return sampled_inds, sampled_gt_classes
+
+
+def libra_generate_proposal_target(rpn_rois,
+ gt_classes,
+ gt_boxes,
+ batch_size_per_im,
+ fg_fraction,
+ fg_thresh,
+ bg_thresh,
+ num_classes,
+ use_random=True,
+ is_cascade_rcnn=False,
+ max_overlaps=None,
+ num_bins=3):
+
+ rois_with_gt = []
+ tgt_labels = []
+ tgt_bboxes = []
+ sampled_max_overlaps = []
+ tgt_gt_inds = []
+ new_rois_num = []
+
+ for i, rpn_roi in enumerate(rpn_rois):
+ max_overlap = max_overlaps[i] if is_cascade_rcnn else None
+ gt_bbox = gt_boxes[i]
+ gt_class = paddle.squeeze(gt_classes[i], axis=-1)
+ if is_cascade_rcnn:
+ rpn_roi = filter_roi(rpn_roi, max_overlap)
+ bbox = paddle.concat([rpn_roi, gt_bbox])
+
+ # Step1: label bbox
+ matches, match_labels, matched_vals = libra_label_box(
+ bbox, gt_bbox, gt_class, fg_thresh, bg_thresh, num_classes)
+
+ # Step2: sample bbox
+ sampled_inds, sampled_gt_classes = libra_sample_bbox(
+ matches, match_labels, matched_vals, gt_class, batch_size_per_im,
+ num_classes, fg_fraction, fg_thresh, bg_thresh, num_bins,
+ use_random, is_cascade_rcnn)
+
+ # Step3: make output
+ rois_per_image = paddle.gather(bbox, sampled_inds)
+ sampled_gt_ind = paddle.gather(matches, sampled_inds)
+ sampled_bbox = paddle.gather(gt_bbox, sampled_gt_ind)
+ sampled_overlap = paddle.gather(matched_vals, sampled_inds)
+
+ rois_per_image.stop_gradient = True
+ sampled_gt_ind.stop_gradient = True
+ sampled_bbox.stop_gradient = True
+ sampled_overlap.stop_gradient = True
+
+ tgt_labels.append(sampled_gt_classes)
+ tgt_bboxes.append(sampled_bbox)
+ rois_with_gt.append(rois_per_image)
+ sampled_max_overlaps.append(sampled_overlap)
+ tgt_gt_inds.append(sampled_gt_ind)
+ new_rois_num.append(paddle.shape(sampled_inds)[0])
+ new_rois_num = paddle.concat(new_rois_num)
+ # rois_with_gt, tgt_labels, tgt_bboxes, tgt_gt_inds, new_rois_num
+ return rois_with_gt, tgt_labels, tgt_bboxes, tgt_gt_inds, new_rois_num
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target_layer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target_layer.py
new file mode 100644
index 000000000..3b5a09601
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/proposal_generator/target_layer.py
@@ -0,0 +1,490 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import sys
+import paddle
+from ppdet.core.workspace import register, serializable
+
+from .target import rpn_anchor_target, generate_proposal_target, generate_mask_target, libra_generate_proposal_target
+import numpy as np
+
+
+@register
+@serializable
+class RPNTargetAssign(object):
+ __shared__ = ['assign_on_cpu']
+ """
+ RPN targets assignment module
+
+ The assignment consists of three steps:
+ 1. Match anchor and ground-truth box, label the anchor with foreground
+ or background sample
+    2. Sample anchors to keep the proper ratio between foreground and
+ background
+ 3. Generate the targets for classification and regression branch
+
+
+ Args:
+ batch_size_per_im (int): Total number of RPN samples per image.
+ default 256
+ fg_fraction (float): Fraction of anchors that is labeled
+ foreground, default 0.5
+ positive_overlap (float): Minimum overlap required between an anchor
+ and ground-truth box for the (anchor, gt box) pair to be
+ a foreground sample. default 0.7
+ negative_overlap (float): Maximum overlap allowed between an anchor
+ and ground-truth box for the (anchor, gt box) pair to be
+ a background sample. default 0.3
+        ignore_thresh (float): Threshold for ignoring the is_crowd ground-truth
+ if the value is larger than zero.
+ use_random (bool): Use random sampling to choose foreground and
+ background boxes, default true.
+        assign_on_cpu (bool): In case the number of gt boxes is too large,
+ compute IoU on CPU, default false.
+ """
+
+ def __init__(self,
+ batch_size_per_im=256,
+ fg_fraction=0.5,
+ positive_overlap=0.7,
+ negative_overlap=0.3,
+ ignore_thresh=-1.,
+ use_random=True,
+ assign_on_cpu=False):
+ super(RPNTargetAssign, self).__init__()
+ self.batch_size_per_im = batch_size_per_im
+ self.fg_fraction = fg_fraction
+ self.positive_overlap = positive_overlap
+ self.negative_overlap = negative_overlap
+ self.ignore_thresh = ignore_thresh
+ self.use_random = use_random
+ self.assign_on_cpu = assign_on_cpu
+
+ def __call__(self, inputs, anchors):
+ """
+        inputs (dict): ground-truth instances.
+        anchors (Tensor): [num_anchors, 4], where num_anchors is the total
+            number of anchors over all feature maps.
+ """
+ gt_boxes = inputs['gt_bbox']
+ is_crowd = inputs.get('is_crowd', None)
+ batch_size = len(gt_boxes)
+ tgt_labels, tgt_bboxes, tgt_deltas = rpn_anchor_target(
+ anchors,
+ gt_boxes,
+ self.batch_size_per_im,
+ self.positive_overlap,
+ self.negative_overlap,
+ self.fg_fraction,
+ self.use_random,
+ batch_size,
+ self.ignore_thresh,
+ is_crowd,
+ assign_on_cpu=self.assign_on_cpu)
+ norm = self.batch_size_per_im * batch_size
+
+ return tgt_labels, tgt_bboxes, tgt_deltas, norm
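+
+    # Example ppdet YAML configuration (a sketch: in detector configs this
+    # block typically appears nested under `RPNHead` as `rpn_target_assign`,
+    # and the values shown are the constructor defaults):
+    #
+    #   rpn_target_assign:
+    #     batch_size_per_im: 256
+    #     fg_fraction: 0.5
+    #     positive_overlap: 0.7
+    #     negative_overlap: 0.3
+    #     use_random: True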
+
+
+@register
+class BBoxAssigner(object):
+ __shared__ = ['num_classes', 'assign_on_cpu']
+ """
+ RCNN targets assignment module
+
+ The assignment consists of three steps:
+ 1. Match RoIs and ground-truth box, label the RoIs with foreground
+ or background sample
+    2. Sample RoIs to keep the proper ratio between foreground and
+ background
+ 3. Generate the targets for classification and regression branch
+
+ Args:
+ batch_size_per_im (int): Total number of RoIs per image.
+ default 512
+ fg_fraction (float): Fraction of RoIs that is labeled
+ foreground, default 0.25
+ fg_thresh (float): Minimum overlap required between a RoI
+ and ground-truth box for the (roi, gt box) pair to be
+ a foreground sample. default 0.5
+ bg_thresh (float): Maximum overlap allowed between a RoI
+ and ground-truth box for the (roi, gt box) pair to be
+ a background sample. default 0.5
+        ignore_thresh (float): Threshold for ignoring the is_crowd ground-truth
+ if the value is larger than zero.
+ use_random (bool): Use random sampling to choose foreground and
+ background boxes, default true
+        cascade_iou (list[float]): The list of overlap thresholds used to select
+            foreground and background at each stage; only used in Cascade RCNN.
+        num_classes (int): The number of classes.
+        assign_on_cpu (bool): In case the number of gt boxes is too large,
+            compute IoU on CPU, default false.
+ """
+
+ def __init__(self,
+ batch_size_per_im=512,
+ fg_fraction=.25,
+ fg_thresh=.5,
+ bg_thresh=.5,
+ ignore_thresh=-1.,
+ use_random=True,
+ cascade_iou=[0.5, 0.6, 0.7],
+ num_classes=80,
+ assign_on_cpu=False):
+ super(BBoxAssigner, self).__init__()
+ self.batch_size_per_im = batch_size_per_im
+ self.fg_fraction = fg_fraction
+ self.fg_thresh = fg_thresh
+ self.bg_thresh = bg_thresh
+ self.ignore_thresh = ignore_thresh
+ self.use_random = use_random
+ self.cascade_iou = cascade_iou
+ self.num_classes = num_classes
+ self.assign_on_cpu = assign_on_cpu
+
+ def __call__(self,
+ rpn_rois,
+ rpn_rois_num,
+ inputs,
+ stage=0,
+ is_cascade=False):
+ gt_classes = inputs['gt_class']
+ gt_boxes = inputs['gt_bbox']
+ is_crowd = inputs.get('is_crowd', None)
+ # rois, tgt_labels, tgt_bboxes, tgt_gt_inds
+ # new_rois_num
+ outs = generate_proposal_target(
+ rpn_rois, gt_classes, gt_boxes, self.batch_size_per_im,
+ self.fg_fraction, self.fg_thresh, self.bg_thresh, self.num_classes,
+ self.ignore_thresh, is_crowd, self.use_random, is_cascade,
+ self.cascade_iou[stage], self.assign_on_cpu)
+ rois = outs[0]
+ rois_num = outs[-1]
+ # tgt_labels, tgt_bboxes, tgt_gt_inds
+ targets = outs[1:4]
+ return rois, rois_num, targets
+
+
+@register
+class BBoxLibraAssigner(object):
+ __shared__ = ['num_classes']
+ """
+ Libra-RCNN targets assignment module
+
+ The assignment consists of three steps:
+ 1. Match RoIs and ground-truth box, label the RoIs with foreground
+ or background sample
+    2. Sample RoIs to keep the proper ratio between foreground and
+ background
+ 3. Generate the targets for classification and regression branch
+
+ Args:
+ batch_size_per_im (int): Total number of RoIs per image.
+ default 512
+ fg_fraction (float): Fraction of RoIs that is labeled
+ foreground, default 0.25
+ fg_thresh (float): Minimum overlap required between a RoI
+ and ground-truth box for the (roi, gt box) pair to be
+ a foreground sample. default 0.5
+ bg_thresh (float): Maximum overlap allowed between a RoI
+ and ground-truth box for the (roi, gt box) pair to be
+ a background sample. default 0.5
+ use_random (bool): Use random sampling to choose foreground and
+ background boxes, default true
+        cascade_iou (list[float]): The list of overlap thresholds used to select
+            foreground and background at each stage; only used in Cascade RCNN.
+        num_classes (int): The number of classes.
+        num_bins (int): The number of IoU bins used by Libra sampling, default 3.
+ """
+
+ def __init__(self,
+ batch_size_per_im=512,
+ fg_fraction=.25,
+ fg_thresh=.5,
+ bg_thresh=.5,
+ use_random=True,
+ cascade_iou=[0.5, 0.6, 0.7],
+ num_classes=80,
+ num_bins=3):
+ super(BBoxLibraAssigner, self).__init__()
+ self.batch_size_per_im = batch_size_per_im
+ self.fg_fraction = fg_fraction
+ self.fg_thresh = fg_thresh
+ self.bg_thresh = bg_thresh
+ self.use_random = use_random
+ self.cascade_iou = cascade_iou
+ self.num_classes = num_classes
+ self.num_bins = num_bins
+
+ def __call__(self,
+ rpn_rois,
+ rpn_rois_num,
+ inputs,
+ stage=0,
+ is_cascade=False):
+ gt_classes = inputs['gt_class']
+ gt_boxes = inputs['gt_bbox']
+ # rois, tgt_labels, tgt_bboxes, tgt_gt_inds
+ outs = libra_generate_proposal_target(
+ rpn_rois, gt_classes, gt_boxes, self.batch_size_per_im,
+ self.fg_fraction, self.fg_thresh, self.bg_thresh, self.num_classes,
+ self.use_random, is_cascade, self.cascade_iou[stage], self.num_bins)
+ rois = outs[0]
+ rois_num = outs[-1]
+ # tgt_labels, tgt_bboxes, tgt_gt_inds
+ targets = outs[1:4]
+ return rois, rois_num, targets
+
+
+@register
+@serializable
+class MaskAssigner(object):
+ __shared__ = ['num_classes', 'mask_resolution']
+ """
+ Mask targets assignment module
+
+    The assignment consists of two steps:
+    1. Select the RoIs labeled as foreground.
+    2. Encode the RoIs and the corresponding gt polygons to generate
+        the mask targets.
+
+ Args:
+        num_classes (int): The number of classes
+ mask_resolution (int): The resolution of mask target, default 14
+ """
+
+ def __init__(self, num_classes=80, mask_resolution=14):
+ super(MaskAssigner, self).__init__()
+ self.num_classes = num_classes
+ self.mask_resolution = mask_resolution
+
+ def __call__(self, rois, tgt_labels, tgt_gt_inds, inputs):
+ gt_segms = inputs['gt_poly']
+
+ outs = generate_mask_target(gt_segms, rois, tgt_labels, tgt_gt_inds,
+ self.num_classes, self.mask_resolution)
+
+ # mask_rois, mask_rois_num, tgt_classes, tgt_masks, mask_index, tgt_weights
+ return outs
+
+
+@register
+class RBoxAssigner(object):
+ """
+    Target assigner for rotated boxes (rbox)
+    Args:
+        pos_iou_thr (float): IoU threshold for positive samples
+        neg_iou_thr (float): IoU threshold for negative samples
+        min_iou_thr (float): minimum IoU threshold for valid samples
+        ignore_iof_thr (int): label value assigned to ignored anchors, -2 by default
+ """
+
+ def __init__(self,
+ pos_iou_thr=0.5,
+ neg_iou_thr=0.4,
+ min_iou_thr=0.0,
+ ignore_iof_thr=-2):
+ super(RBoxAssigner, self).__init__()
+
+ self.pos_iou_thr = pos_iou_thr
+ self.neg_iou_thr = neg_iou_thr
+ self.min_iou_thr = min_iou_thr
+ self.ignore_iof_thr = ignore_iof_thr
+
+ def anchor_valid(self, anchors):
+ """
+
+ Args:
+ anchor: M x 4
+
+ Returns:
+
+ """
+ if anchors.ndim == 3:
+ anchors = anchors.reshape(-1, anchors.shape[-1])
+ assert anchors.ndim == 2
+ anchor_num = anchors.shape[0]
+ anchor_valid = np.ones((anchor_num), np.int32)
+ anchor_inds = np.arange(anchor_num)
+ return anchor_inds
+
+ def rbox2delta(self,
+ proposals,
+ gt,
+ means=[0, 0, 0, 0, 0],
+ stds=[1, 1, 1, 1, 1]):
+ """
+ Args:
+ proposals: tensor [N, 5]
+ gt: gt [N, 5]
+ means: means [5]
+ stds: stds [5]
+ Returns:
+
+ """
+ proposals = proposals.astype(np.float64)
+
+ PI = np.pi
+
+ gt_widths = gt[..., 2]
+ gt_heights = gt[..., 3]
+ gt_angle = gt[..., 4]
+
+ proposals_widths = proposals[..., 2]
+ proposals_heights = proposals[..., 3]
+ proposals_angle = proposals[..., 4]
+
+ coord = gt[..., 0:2] - proposals[..., 0:2]
+ dx = (np.cos(proposals[..., 4]) * coord[..., 0] +
+ np.sin(proposals[..., 4]) * coord[..., 1]) / proposals_widths
+ dy = (-np.sin(proposals[..., 4]) * coord[..., 0] +
+ np.cos(proposals[..., 4]) * coord[..., 1]) / proposals_heights
+ dw = np.log(gt_widths / proposals_widths)
+ dh = np.log(gt_heights / proposals_heights)
+ da = (gt_angle - proposals_angle)
+
+ da = (da + PI / 4) % PI - PI / 4
+ da /= PI
+
+ deltas = np.stack([dx, dy, dw, dh, da], axis=-1)
+ means = np.array(means, dtype=deltas.dtype)
+ stds = np.array(stds, dtype=deltas.dtype)
+ deltas = (deltas - means) / stds
+ deltas = deltas.astype(np.float32)
+ return deltas
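+
+    # Note on the angle term: `da = (da + PI / 4) % PI - PI / 4` wraps the raw
+    # angle difference into [-pi/4, 3pi/4), exploiting the pi-periodicity of
+    # rotated boxes (a difference of pi maps back to 0), and the final
+    # `da /= PI` normalizes the target to [-0.25, 0.75).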
+
+ def assign_anchor(self,
+ anchors,
+ gt_bboxes,
+                      gt_labels,
+ pos_iou_thr,
+ neg_iou_thr,
+ min_iou_thr=0.0,
+ ignore_iof_thr=-2):
+ """
+
+ Args:
+ anchors:
+ gt_bboxes:[M, 5] rc,yc,w,h,angle
+ gt_lables:
+
+ Returns:
+
+ """
+ assert anchors.shape[1] == 4 or anchors.shape[1] == 5
+ assert gt_bboxes.shape[1] == 4 or gt_bboxes.shape[1] == 5
+ anchors_xc_yc = anchors
+ gt_bboxes_xc_yc = gt_bboxes
+
+ # calc rbox iou
+ anchors_xc_yc = anchors_xc_yc.astype(np.float32)
+ gt_bboxes_xc_yc = gt_bboxes_xc_yc.astype(np.float32)
+ anchors_xc_yc = paddle.to_tensor(anchors_xc_yc)
+ gt_bboxes_xc_yc = paddle.to_tensor(gt_bboxes_xc_yc)
+
+ try:
+ from rbox_iou_ops import rbox_iou
+ except Exception as e:
+ print("import custom_ops error, try install rbox_iou_ops " \
+ "following ppdet/ext_op/README.md", e)
+ sys.stdout.flush()
+ sys.exit(-1)
+
+ iou = rbox_iou(gt_bboxes_xc_yc, anchors_xc_yc)
+ iou = iou.numpy()
+ iou = iou.T
+
+ # every gt's anchor's index
+ gt_bbox_anchor_inds = iou.argmax(axis=0)
+ gt_bbox_anchor_iou = iou[gt_bbox_anchor_inds, np.arange(iou.shape[1])]
+ gt_bbox_anchor_iou_inds = np.where(iou == gt_bbox_anchor_iou)[0]
+
+ # every anchor's gt bbox's index
+ anchor_gt_bbox_inds = iou.argmax(axis=1)
+ anchor_gt_bbox_iou = iou[np.arange(iou.shape[0]), anchor_gt_bbox_inds]
+
+ # (1) set labels=-2 as default
+ labels = np.ones((iou.shape[0], ), dtype=np.int32) * ignore_iof_thr
+
+ # (2) assign ignore
+ labels[anchor_gt_bbox_iou < min_iou_thr] = ignore_iof_thr
+
+ # (3) assign neg_ids -1
+ assign_neg_ids1 = anchor_gt_bbox_iou >= min_iou_thr
+ assign_neg_ids2 = anchor_gt_bbox_iou < neg_iou_thr
+ assign_neg_ids = np.logical_and(assign_neg_ids1, assign_neg_ids2)
+ labels[assign_neg_ids] = -1
+
+ # anchor_gt_bbox_iou_inds
+ # (4) assign max_iou as pos_ids >=0
+ anchor_gt_bbox_iou_inds = anchor_gt_bbox_inds[gt_bbox_anchor_iou_inds]
+ # gt_bbox_anchor_iou_inds = np.logical_and(gt_bbox_anchor_iou_inds, anchor_gt_bbox_iou >= min_iou_thr)
+        labels[gt_bbox_anchor_iou_inds] = gt_labels[anchor_gt_bbox_iou_inds]
+
+ # (5) assign >= pos_iou_thr as pos_ids
+ iou_pos_iou_thr_ids = anchor_gt_bbox_iou >= pos_iou_thr
+ iou_pos_iou_thr_ids_box_inds = anchor_gt_bbox_inds[iou_pos_iou_thr_ids]
+        labels[iou_pos_iou_thr_ids] = gt_labels[iou_pos_iou_thr_ids_box_inds]
+ return anchor_gt_bbox_inds, anchor_gt_bbox_iou, labels
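+
+    # Label convention produced above: `ignore_iof_thr` (-2 by default) marks
+    # ignored anchors, -1 marks negatives, and values >= 0 are the class ids
+    # of positive anchors.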
+
+ def __call__(self, anchors, gt_bboxes, gt_labels, is_crowd):
+
+ assert anchors.ndim == 2
+ assert anchors.shape[1] == 5
+ assert gt_bboxes.ndim == 2
+ assert gt_bboxes.shape[1] == 5
+
+ pos_iou_thr = self.pos_iou_thr
+ neg_iou_thr = self.neg_iou_thr
+ min_iou_thr = self.min_iou_thr
+ ignore_iof_thr = self.ignore_iof_thr
+
+ anchor_num = anchors.shape[0]
+
+ gt_bboxes = gt_bboxes
+ is_crowd_slice = is_crowd
+ not_crowd_inds = np.where(is_crowd_slice == 0)
+
+ # Step1: match anchor and gt_bbox
+ anchor_gt_bbox_inds, anchor_gt_bbox_iou, labels = self.assign_anchor(
+ anchors, gt_bboxes,
+ gt_labels.reshape(-1), pos_iou_thr, neg_iou_thr, min_iou_thr,
+ ignore_iof_thr)
+
+ # Step2: sample anchor
+ pos_inds = np.where(labels >= 0)[0]
+ neg_inds = np.where(labels == -1)[0]
+
+ # Step3: make output
+ anchors_num = anchors.shape[0]
+ bbox_targets = np.zeros_like(anchors)
+ bbox_weights = np.zeros_like(anchors)
+ bbox_gt_bboxes = np.zeros_like(anchors)
+ pos_labels = np.zeros(anchors_num, dtype=np.int32)
+ pos_labels_weights = np.zeros(anchors_num, dtype=np.float32)
+
+ pos_sampled_anchors = anchors[pos_inds]
+ pos_sampled_gt_boxes = gt_bboxes[anchor_gt_bbox_inds[pos_inds]]
+ if len(pos_inds) > 0:
+ pos_bbox_targets = self.rbox2delta(pos_sampled_anchors,
+ pos_sampled_gt_boxes)
+ bbox_targets[pos_inds, :] = pos_bbox_targets
+ bbox_gt_bboxes[pos_inds, :] = pos_sampled_gt_boxes
+ bbox_weights[pos_inds, :] = 1.0
+
+ pos_labels[pos_inds] = labels[pos_inds]
+ pos_labels_weights[pos_inds] = 1.0
+
+ if len(neg_inds) > 0:
+ pos_labels_weights[neg_inds] = 1.0
+ return (pos_labels, pos_labels_weights, bbox_targets, bbox_weights,
+ bbox_gt_bboxes, pos_inds, neg_inds)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__init__.py
new file mode 100644
index 000000000..3b461325f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__init__.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import jde_embedding_head
+from . import fairmot_embedding_head
+from . import resnet
+from . import pyramidal_embedding
+from . import pplcnet_embedding
+
+from .fairmot_embedding_head import *
+from .jde_embedding_head import *
+from .resnet import *
+from .pyramidal_embedding import *
+from .pplcnet_embedding import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..c892d8565
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/fairmot_embedding_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/fairmot_embedding_head.cpython-37.pyc
new file mode 100644
index 000000000..67188be62
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/fairmot_embedding_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/jde_embedding_head.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/jde_embedding_head.cpython-37.pyc
new file mode 100644
index 000000000..997d37bc1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/jde_embedding_head.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pplcnet_embedding.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pplcnet_embedding.cpython-37.pyc
new file mode 100644
index 000000000..52c1121b2
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pplcnet_embedding.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pyramidal_embedding.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pyramidal_embedding.cpython-37.pyc
new file mode 100644
index 000000000..24c90d066
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/pyramidal_embedding.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/resnet.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/resnet.cpython-37.pyc
new file mode 100644
index 000000000..43525e482
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/__pycache__/resnet.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/fairmot_embedding_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/fairmot_embedding_head.py
new file mode 100644
index 000000000..98ca257fd
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/fairmot_embedding_head.py
@@ -0,0 +1,224 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import numpy as np
+import math
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import KaimingUniform, Uniform
+from ppdet.core.workspace import register
+from ppdet.modeling.heads.centernet_head import ConvLayer
+
+__all__ = ['FairMOTEmbeddingHead']
+
+
+@register
+class FairMOTEmbeddingHead(nn.Layer):
+ __shared__ = ['num_classes']
+ """
+ Args:
+ in_channels (int): the channel number of input to FairMOTEmbeddingHead.
+ ch_head (int): the channel of features before fed into embedding, 256 by default.
+ ch_emb (int): the channel of the embedding feature, 128 by default.
+ num_identities_dict (dict): the number of identities of each category,
+            supports single-class and multi-class settings, {0: 14455} by default.
+ """
+
+ def __init__(self,
+ in_channels,
+ ch_head=256,
+ ch_emb=128,
+ num_classes=1,
+ num_identities_dict={0: 14455}):
+ super(FairMOTEmbeddingHead, self).__init__()
+ assert num_classes >= 1
+ self.num_classes = num_classes
+ self.ch_emb = ch_emb
+ self.num_identities_dict = num_identities_dict
+ self.reid = nn.Sequential(
+ ConvLayer(
+ in_channels, ch_head, kernel_size=3, padding=1, bias=True),
+ nn.ReLU(),
+ ConvLayer(
+ ch_head, ch_emb, kernel_size=1, stride=1, padding=0, bias=True))
+ param_attr = paddle.ParamAttr(initializer=KaimingUniform())
+ bound = 1 / math.sqrt(ch_emb)
+ bias_attr = paddle.ParamAttr(initializer=Uniform(-bound, bound))
+ self.reid_loss = nn.CrossEntropyLoss(ignore_index=-1, reduction='sum')
+
+ if num_classes == 1:
+ nID = self.num_identities_dict[0] # single class
+ self.classifier = nn.Linear(
+ ch_emb, nID, weight_attr=param_attr, bias_attr=bias_attr)
+            # When num_identities (nID) is 1, emb_scale is set to 1
+ self.emb_scale = math.sqrt(2) * math.log(nID - 1) if nID > 1 else 1
+ else:
+ self.classifiers = dict()
+ self.emb_scale_dict = dict()
+ for cls_id, nID in self.num_identities_dict.items():
+ self.classifiers[str(cls_id)] = nn.Linear(
+ ch_emb, nID, weight_attr=param_attr, bias_attr=bias_attr)
+                # When num_identities (nID) is 1, emb_scale is set to 1
+ self.emb_scale_dict[str(cls_id)] = math.sqrt(2) * math.log(
+ nID - 1) if nID > 1 else 1
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ if isinstance(input_shape, (list, tuple)):
+ input_shape = input_shape[0]
+ return {'in_channels': input_shape.channels}
+
+ def process_by_class(self, bboxes, embedding, bbox_inds, topk_clses):
+ pred_dets, pred_embs = [], []
+ for cls_id in range(self.num_classes):
+ inds_masks = topk_clses == cls_id
+ inds_masks = paddle.cast(inds_masks, 'float32')
+
+ pos_num = inds_masks.sum().numpy()
+ if pos_num == 0:
+ continue
+
+ cls_inds_mask = inds_masks > 0
+
+ bbox_mask = paddle.nonzero(cls_inds_mask)
+ cls_bboxes = paddle.gather_nd(bboxes, bbox_mask)
+ pred_dets.append(cls_bboxes)
+
+ cls_inds = paddle.masked_select(bbox_inds, cls_inds_mask)
+ cls_inds = cls_inds.unsqueeze(-1)
+ cls_embedding = paddle.gather_nd(embedding, cls_inds)
+ pred_embs.append(cls_embedding)
+
+ return paddle.concat(pred_dets), paddle.concat(pred_embs)
+
+ def forward(self,
+ neck_feat,
+ inputs,
+ bboxes=None,
+ bbox_inds=None,
+ topk_clses=None):
+ reid_feat = self.reid(neck_feat)
+ if self.training:
+ if self.num_classes == 1:
+ loss = self.get_loss(reid_feat, inputs)
+ else:
+ loss = self.get_mc_loss(reid_feat, inputs)
+ return loss
+ else:
+ assert bboxes is not None and bbox_inds is not None
+ reid_feat = F.normalize(reid_feat)
+ embedding = paddle.transpose(reid_feat, [0, 2, 3, 1])
+ embedding = paddle.reshape(embedding, [-1, self.ch_emb])
+ # embedding shape: [bs * h * w, ch_emb]
+
+ if self.num_classes == 1:
+ pred_dets = bboxes
+ pred_embs = paddle.gather(embedding, bbox_inds)
+ else:
+ pred_dets, pred_embs = self.process_by_class(
+ bboxes, embedding, bbox_inds, topk_clses)
+ return pred_dets, pred_embs
+
+ def get_loss(self, feat, inputs):
+ index = inputs['index']
+ mask = inputs['index_mask']
+ target = inputs['reid']
+ target = paddle.masked_select(target, mask > 0)
+ target = paddle.unsqueeze(target, 1)
+
+ feat = paddle.transpose(feat, perm=[0, 2, 3, 1])
+ feat_n, feat_h, feat_w, feat_c = feat.shape
+ feat = paddle.reshape(feat, shape=[feat_n, -1, feat_c])
+ index = paddle.unsqueeze(index, 2)
+ batch_inds = list()
+ for i in range(feat_n):
+ batch_ind = paddle.full(
+ shape=[1, index.shape[1], 1], fill_value=i, dtype='int64')
+ batch_inds.append(batch_ind)
+ batch_inds = paddle.concat(batch_inds, axis=0)
+ index = paddle.concat(x=[batch_inds, index], axis=2)
+ feat = paddle.gather_nd(feat, index=index)
+
+ mask = paddle.unsqueeze(mask, axis=2)
+ mask = paddle.expand_as(mask, feat)
+ mask.stop_gradient = True
+ feat = paddle.masked_select(feat, mask > 0)
+ feat = paddle.reshape(feat, shape=[-1, feat_c])
+ feat = F.normalize(feat)
+ feat = self.emb_scale * feat
+ logit = self.classifier(feat)
+ target.stop_gradient = True
+ loss = self.reid_loss(logit, target)
+ valid = (target != self.reid_loss.ignore_index)
+ valid.stop_gradient = True
+ count = paddle.sum((paddle.cast(valid, dtype=np.int32)))
+ count.stop_gradient = True
+ if count > 0:
+ loss = loss / count
+
+ return loss
+
+ def get_mc_loss(self, feat, inputs):
+ # feat.shape = [bs, ch_emb, h, w]
+ assert 'cls_id_map' in inputs and 'cls_tr_ids' in inputs
+ index = inputs['index']
+ mask = inputs['index_mask']
+ cls_id_map = inputs['cls_id_map'] # [bs, h, w]
+ cls_tr_ids = inputs['cls_tr_ids'] # [bs, num_classes, h, w]
+
+ feat = paddle.transpose(feat, perm=[0, 2, 3, 1])
+ feat_n, feat_h, feat_w, feat_c = feat.shape
+ feat = paddle.reshape(feat, shape=[feat_n, -1, feat_c])
+
+ index = paddle.unsqueeze(index, 2)
+ batch_inds = list()
+ for i in range(feat_n):
+ batch_ind = paddle.full(
+ shape=[1, index.shape[1], 1], fill_value=i, dtype='int64')
+ batch_inds.append(batch_ind)
+ batch_inds = paddle.concat(batch_inds, axis=0)
+ index = paddle.concat(x=[batch_inds, index], axis=2)
+ feat = paddle.gather_nd(feat, index=index)
+
+ mask = paddle.unsqueeze(mask, axis=2)
+ mask = paddle.expand_as(mask, feat)
+ mask.stop_gradient = True
+ feat = paddle.masked_select(feat, mask > 0)
+ feat = paddle.reshape(feat, shape=[-1, feat_c])
+
+ reid_losses = 0
+ for cls_id, id_num in self.num_identities_dict.items():
+ # target
+ cur_cls_tr_ids = paddle.reshape(
+ cls_tr_ids[:, cls_id, :, :], shape=[feat_n, -1]) # [bs, h*w]
+ cls_id_target = paddle.gather_nd(cur_cls_tr_ids, index=index)
+ mask = inputs['index_mask']
+ cls_id_target = paddle.masked_select(cls_id_target, mask > 0)
+ cls_id_target.stop_gradient = True
+
+ # feat
+ cls_id_feat = self.emb_scale_dict[str(cls_id)] * F.normalize(feat)
+ cls_id_pred = self.classifiers[str(cls_id)](cls_id_feat)
+
+ loss = self.reid_loss(cls_id_pred, cls_id_target)
+ valid = (cls_id_target != self.reid_loss.ignore_index)
+ valid.stop_gradient = True
+ count = paddle.sum((paddle.cast(valid, dtype=np.int32)))
+ count.stop_gradient = True
+ if count > 0:
+ loss = loss / count
+ reid_losses += loss
+
+ return reid_losses
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/jde_embedding_head.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/jde_embedding_head.py
new file mode 100644
index 000000000..c35f8cfb0
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/jde_embedding_head.py
@@ -0,0 +1,212 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import numpy as np
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+from paddle.regularizer import L2Decay
+from ppdet.core.workspace import register
+from paddle.nn.initializer import Normal, Constant
+
+__all__ = ['JDEEmbeddingHead']
+
+
+class LossParam(nn.Layer):
+    def __init__(self, init_value=0., use_uncertainty=True):
+ super(LossParam, self).__init__()
+ self.loss_param = self.create_parameter(
+ shape=[1],
+ attr=ParamAttr(initializer=Constant(value=init_value)),
+ dtype="float32")
+
+ def forward(self, inputs):
+ out = paddle.exp(-self.loss_param) * inputs + self.loss_param
+ return out * 0.5
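+
+# Note: LossParam appears to implement the learnable task-uncertainty loss
+# weighting used by JDE for automatic loss balancing: with s = loss_param,
+# forward returns 0.5 * (exp(-s) * loss + s), so each branch learns its own
+# weight.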
+
+
+@register
+class JDEEmbeddingHead(nn.Layer):
+ __shared__ = ['num_classes']
+ __inject__ = ['emb_loss', 'jde_loss']
+ """
+ JDEEmbeddingHead
+ Args:
+        num_classes(int): Number of classes. Only single-class tracking is supported.
+ num_identities(int): Number of identities.
+ anchor_levels(int): Number of anchor levels, same as FPN levels.
+ anchor_scales(int): Number of anchor scales on each FPN level.
+ embedding_dim(int): Embedding dimension. Default: 512.
+ emb_loss(object): Instance of 'JDEEmbeddingLoss'
+ jde_loss(object): Instance of 'JDELoss'
+ """
+
+ def __init__(
+ self,
+ num_classes=1,
+ num_identities=14455, # dataset.num_identities_dict[0]
+ anchor_levels=3,
+ anchor_scales=4,
+ embedding_dim=512,
+ emb_loss='JDEEmbeddingLoss',
+ jde_loss='JDELoss'):
+ super(JDEEmbeddingHead, self).__init__()
+ self.num_classes = num_classes
+ self.num_identities = num_identities
+ self.anchor_levels = anchor_levels
+ self.anchor_scales = anchor_scales
+ self.embedding_dim = embedding_dim
+ self.emb_loss = emb_loss
+ self.jde_loss = jde_loss
+
+ self.emb_scale = math.sqrt(2) * math.log(
+ self.num_identities - 1) if self.num_identities > 1 else 1
+
+ self.identify_outputs = []
+ self.loss_params_cls = []
+ self.loss_params_reg = []
+ self.loss_params_ide = []
+ for i in range(self.anchor_levels):
+ name = 'identify_output.{}'.format(i)
+ identify_output = self.add_sublayer(
+ name,
+ nn.Conv2D(
+ in_channels=64 * (2**self.anchor_levels) // (2**i),
+ out_channels=self.embedding_dim,
+ kernel_size=3,
+ stride=1,
+ padding=1,
+ bias_attr=ParamAttr(regularizer=L2Decay(0.))))
+ self.identify_outputs.append(identify_output)
+
+ loss_p_cls = self.add_sublayer('cls.{}'.format(i), LossParam(-4.15))
+ self.loss_params_cls.append(loss_p_cls)
+ loss_p_reg = self.add_sublayer('reg.{}'.format(i), LossParam(-4.85))
+ self.loss_params_reg.append(loss_p_reg)
+ loss_p_ide = self.add_sublayer('ide.{}'.format(i), LossParam(-2.3))
+ self.loss_params_ide.append(loss_p_ide)
+
+ self.classifier = self.add_sublayer(
+ 'classifier',
+ nn.Linear(
+ self.embedding_dim,
+ self.num_identities,
+ weight_attr=ParamAttr(
+ learning_rate=1., initializer=Normal(
+ mean=0.0, std=0.01)),
+ bias_attr=ParamAttr(
+ learning_rate=2., regularizer=L2Decay(0.))))
+
+ def forward(self,
+ identify_feats,
+ targets,
+ loss_confs=None,
+ loss_boxes=None,
+ bboxes=None,
+ boxes_idx=None,
+ nms_keep_idx=None):
+        assert self.num_classes == 1, 'JDE only supports single-class MOT.'
+ assert len(identify_feats) == self.anchor_levels
+ ide_outs = []
+ for feat, ide_head in zip(identify_feats, self.identify_outputs):
+ ide_outs.append(ide_head(feat))
+
+ if self.training:
+ assert len(loss_confs) == len(loss_boxes) == self.anchor_levels
+ loss_ides = self.emb_loss(ide_outs, targets, self.emb_scale,
+ self.classifier)
+ jde_losses = self.jde_loss(
+ loss_confs, loss_boxes, loss_ides, self.loss_params_cls,
+ self.loss_params_reg, self.loss_params_ide, targets)
+ return jde_losses
+ else:
+ assert bboxes is not None
+ assert boxes_idx is not None
+ assert nms_keep_idx is not None
+
+ emb_outs = self.get_emb_outs(ide_outs)
+ emb_valid = paddle.gather_nd(emb_outs, boxes_idx)
+ pred_embs = paddle.gather_nd(emb_valid, nms_keep_idx)
+
+ input_shape = targets['image'].shape[2:]
+ # input_shape: [h, w], before data transforms, set in model config
+ im_shape = targets['im_shape'][0].numpy()
+ # im_shape: [new_h, new_w], after data transforms
+ scale_factor = targets['scale_factor'][0].numpy()
+ bboxes[:, 2:] = self.scale_coords(bboxes[:, 2:], input_shape,
+ im_shape, scale_factor)
+ # tlwhs, scores, cls_ids
+ pred_dets = paddle.concat(
+ (bboxes[:, 2:], bboxes[:, 1:2], bboxes[:, 0:1]), axis=1)
+ return pred_dets, pred_embs
+
+ def scale_coords(self, coords, input_shape, im_shape, scale_factor):
+ ratio = scale_factor[0]
+ pad_w = (input_shape[1] - int(im_shape[1])) / 2
+ pad_h = (input_shape[0] - int(im_shape[0])) / 2
+ coords = paddle.cast(coords, 'float32')
+ coords[:, 0::2] -= pad_w
+ coords[:, 1::2] -= pad_h
+ coords[:, 0:4] /= ratio
+ coords[:, :4] = paddle.clip(
+ coords[:, :4], min=0, max=coords[:, :4].max())
+ return coords.round()
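+
+    # Editorial note: scale_coords above undoes the letterbox preprocessing.
+    # It subtracts the symmetric padding that centered the resized image in
+    # the network input, then divides by the resize ratio. Worked example
+    # (assumed shapes): input_shape=(608, 1088), im_shape=(608, 1080) gives
+    # pad_w=4, pad_h=0, so x coordinates shift left by 4 before rescaling.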
+
+ def get_emb_and_gt_outs(self, ide_outs, targets):
+ emb_and_gts = []
+ for i, p_ide in enumerate(ide_outs):
+ t_conf = targets['tconf{}'.format(i)]
+ t_ide = targets['tide{}'.format(i)]
+
+ p_ide = p_ide.transpose((0, 2, 3, 1))
+ p_ide_flatten = paddle.reshape(p_ide, [-1, self.embedding_dim])
+
+ mask = t_conf > 0
+ mask = paddle.cast(mask, dtype="int64")
+ emb_mask = mask.max(1).flatten()
+ emb_mask_inds = paddle.nonzero(emb_mask > 0).flatten()
+ if len(emb_mask_inds) > 0:
+ t_ide_flatten = paddle.reshape(t_ide.max(1), [-1, 1])
+ tids = paddle.gather(t_ide_flatten, emb_mask_inds)
+
+ embedding = paddle.gather(p_ide_flatten, emb_mask_inds)
+ embedding = self.emb_scale * F.normalize(embedding)
+ emb_and_gt = paddle.concat([embedding, tids], axis=1)
+ emb_and_gts.append(emb_and_gt)
+
+ if len(emb_and_gts) > 0:
+ return paddle.concat(emb_and_gts, axis=0)
+ else:
+ return paddle.zeros((1, self.embedding_dim + 1))
+
+ def get_emb_outs(self, ide_outs):
+ emb_outs = []
+ for i, p_ide in enumerate(ide_outs):
+ p_ide = p_ide.transpose((0, 2, 3, 1))
+
+ p_ide_repeat = paddle.tile(p_ide, [self.anchor_scales, 1, 1, 1])
+ embedding = F.normalize(p_ide_repeat, axis=-1)
+ emb = paddle.reshape(embedding, [-1, self.embedding_dim])
+ emb_outs.append(emb)
+
+ if len(emb_outs) > 0:
+ return paddle.concat(emb_outs, axis=0)
+ else:
+ return paddle.zeros((1, self.embedding_dim))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pplcnet_embedding.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pplcnet_embedding.py
new file mode 100644
index 000000000..cad9f85be
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pplcnet_embedding.py
@@ -0,0 +1,281 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, Constant
+from paddle import ParamAttr
+from paddle.nn import AdaptiveAvgPool2D, BatchNorm, Conv2D, Linear
+from paddle.regularizer import L2Decay
+from paddle.nn.initializer import KaimingNormal, XavierNormal
+from ppdet.core.workspace import register
+
+__all__ = ['PPLCNetEmbedding']
+
+
+# Each element (a list) describes a depthwise block composed of k, in_c, out_c, s, use_se.
+# k: kernel_size
+# in_c: input channel number in depthwise block
+# out_c: output channel number in depthwise block
+# s: stride in depthwise block
+# use_se: whether to use SE block
+
+NET_CONFIG = {
+ "blocks2":
+ #k, in_c, out_c, s, use_se
+ [[3, 16, 32, 1, False]],
+ "blocks3": [[3, 32, 64, 2, False], [3, 64, 64, 1, False]],
+ "blocks4": [[3, 64, 128, 2, False], [3, 128, 128, 1, False]],
+ "blocks5": [[3, 128, 256, 2, False], [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False], [5, 256, 256, 1, False],
+ [5, 256, 256, 1, False], [5, 256, 256, 1, False]],
+ "blocks6": [[5, 256, 512, 2, True], [5, 512, 512, 1, True]]
+}
+
+
+def make_divisible(v, divisor=8, min_value=None):
+ if min_value is None:
+ min_value = divisor
+ new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
+ if new_v < 0.9 * v:
+ new_v += divisor
+ return new_v
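+
+
+# For example, with the default divisor of 8: make_divisible(12) == 16 and
+# make_divisible(35) == 32, while the `new_v < 0.9 * v` guard makes
+# make_divisible(10) return 16 rather than 8, since rounding down to 8
+# would shrink the value by more than 10%.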
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ num_channels,
+ filter_size,
+ num_filters,
+ stride,
+ num_groups=1):
+ super().__init__()
+
+ self.conv = Conv2D(
+ in_channels=num_channels,
+ out_channels=num_filters,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=num_groups,
+ weight_attr=ParamAttr(initializer=KaimingNormal()),
+ bias_attr=False)
+
+ self.bn = BatchNorm(
+ num_filters,
+ param_attr=ParamAttr(regularizer=L2Decay(0.0)),
+ bias_attr=ParamAttr(regularizer=L2Decay(0.0)))
+ self.hardswish = nn.Hardswish()
+
+ def forward(self, x):
+ x = self.conv(x)
+ x = self.bn(x)
+ x = self.hardswish(x)
+ return x
+
+
+class DepthwiseSeparable(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ stride,
+ dw_size=3,
+ use_se=False):
+ super().__init__()
+ self.use_se = use_se
+ self.dw_conv = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_channels,
+ filter_size=dw_size,
+ stride=stride,
+ num_groups=num_channels)
+ if use_se:
+ self.se = SEModule(num_channels)
+ self.pw_conv = ConvBNLayer(
+ num_channels=num_channels,
+ filter_size=1,
+ num_filters=num_filters,
+ stride=1)
+
+ def forward(self, x):
+ x = self.dw_conv(x)
+ if self.use_se:
+ x = self.se(x)
+ x = self.pw_conv(x)
+ return x
+
+
+class SEModule(nn.Layer):
+ def __init__(self, channel, reduction=4):
+ super().__init__()
+ self.avg_pool = AdaptiveAvgPool2D(1)
+ self.conv1 = Conv2D(
+ in_channels=channel,
+ out_channels=channel // reduction,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self.relu = nn.ReLU()
+ self.conv2 = Conv2D(
+ in_channels=channel // reduction,
+ out_channels=channel,
+ kernel_size=1,
+ stride=1,
+ padding=0)
+ self.hardsigmoid = nn.Hardsigmoid()
+
+ def forward(self, x):
+ identity = x
+ x = self.avg_pool(x)
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.conv2(x)
+ x = self.hardsigmoid(x)
+ x = paddle.multiply(x=identity, y=x)
+ return x
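+
+# SEModule implements squeeze-and-excitation (arXiv:1709.01507): global
+# average pooling squeezes each channel to a scalar, a 1x1-conv bottleneck
+# with 4x reduction forms the excitation, and the hardsigmoid gate
+# rescales the input channel-wise.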
+
+
+class PPLCNet(nn.Layer):
+ """
+ PP-LCNet, see https://arxiv.org/abs/2109.15099.
+    This code differs from the PPLCNet in ppdet/modeling/backbones/lcnet.py
+    and in PaddleClas in that the output is the flattened feature of last_conv.
+
+ Args:
+ scale (float): Scale ratio of channels.
+ class_expand (int): Number of channels of conv feature.
+ """
+
+ def __init__(self, scale=1.0, class_expand=1280):
+ super(PPLCNet, self).__init__()
+ self.scale = scale
+ self.class_expand = class_expand
+
+ self.conv1 = ConvBNLayer(
+ num_channels=3,
+ filter_size=3,
+ num_filters=make_divisible(16 * scale),
+ stride=2)
+
+ self.blocks2 = nn.Sequential(*[
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks2"])
+ ])
+
+ self.blocks3 = nn.Sequential(*[
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks3"])
+ ])
+
+ self.blocks4 = nn.Sequential(*[
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks4"])
+ ])
+
+ self.blocks5 = nn.Sequential(*[
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks5"])
+ ])
+
+ self.blocks6 = nn.Sequential(*[
+ DepthwiseSeparable(
+ num_channels=make_divisible(in_c * scale),
+ num_filters=make_divisible(out_c * scale),
+ dw_size=k,
+ stride=s,
+ use_se=se)
+ for i, (k, in_c, out_c, s, se) in enumerate(NET_CONFIG["blocks6"])
+ ])
+
+ self.avg_pool = AdaptiveAvgPool2D(1)
+ self.last_conv = Conv2D(
+ in_channels=make_divisible(NET_CONFIG["blocks6"][-1][2] * scale),
+ out_channels=self.class_expand,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ bias_attr=False)
+ self.hardswish = nn.Hardswish()
+ self.flatten = nn.Flatten(start_axis=1, stop_axis=-1)
+
+ def forward(self, x):
+ x = self.conv1(x)
+
+ x = self.blocks2(x)
+ x = self.blocks3(x)
+ x = self.blocks4(x)
+ x = self.blocks5(x)
+ x = self.blocks6(x)
+
+ x = self.avg_pool(x)
+ x = self.last_conv(x)
+ x = self.hardswish(x)
+ x = self.flatten(x)
+ return x
+
+
+class FC(nn.Layer):
+ def __init__(self, input_ch, output_ch):
+ super(FC, self).__init__()
+ weight_attr = ParamAttr(initializer=XavierNormal())
+ self.fc = paddle.nn.Linear(input_ch, output_ch, weight_attr=weight_attr)
+
+ def forward(self, x):
+ out = self.fc(x)
+ return out
+
+
+@register
+class PPLCNetEmbedding(nn.Layer):
+ """
+ PPLCNet Embedding
+
+ Args:
+        scale (float): Scale ratio of the PPLCNet backbone channels.
+        input_ch (int): Number of channels of the backbone output feature.
+        output_ch (int): Dimension of the embedding produced by the FC neck.
+ """
+ def __init__(self, scale=2.5, input_ch=1280, output_ch=512):
+ super(PPLCNetEmbedding, self).__init__()
+ self.backbone = PPLCNet(scale=scale)
+ self.neck = FC(input_ch, output_ch)
+
+ def forward(self, x):
+ feat = self.backbone(x)
+ feat_out = self.neck(feat)
+ return feat_out
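+
+
+# Editorial usage sketch: the embedding head maps an image batch to
+# output_ch-dimensional ReID features; note the backbone's last_conv always
+# expands to class_expand=1280 channels, independent of `scale`.
+if __name__ == '__main__':
+    model = PPLCNetEmbedding(scale=2.5, input_ch=1280, output_ch=512)
+    feats = model(paddle.rand([4, 3, 224, 224]))
+    print(feats.shape)  # expected: [4, 512]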
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pyramidal_embedding.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pyramidal_embedding.py
new file mode 100644
index 000000000..a90d4e1ef
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/pyramidal_embedding.py
@@ -0,0 +1,144 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal, Constant
+from paddle import ParamAttr
+from .resnet import *
+from ppdet.core.workspace import register
+
+__all__ = ['PCBPyramid']
+
+
+@register
+class PCBPyramid(nn.Layer):
+ """
+ PCB (Part-based Convolutional Baseline), see https://arxiv.org/abs/1711.09349,
+ Pyramidal Person Re-IDentification, see https://arxiv.org/abs/1810.12193
+
+ Args:
+ input_ch (int): Number of channels of the input feature.
+ num_stripes (int): Number of sub-parts.
+ used_levels (tuple): Whether the level is used, 1 means used.
+ num_classes (int): Number of classes for identities, default 751 in
+ Market-1501 dataset.
+ last_conv_stride (int): Stride of the last conv.
+ last_conv_dilation (int): Dilation of the last conv.
+ num_conv_out_channels (int): Number of channels of conv feature.
+ """
+
+ def __init__(self,
+ input_ch=2048,
+ num_stripes=6,
+ used_levels=(1, 1, 1, 1, 1, 1),
+ num_classes=751,
+ last_conv_stride=1,
+ last_conv_dilation=1,
+ num_conv_out_channels=128):
+ super(PCBPyramid, self).__init__()
+ self.num_stripes = num_stripes
+ self.used_levels = used_levels
+ self.num_classes = num_classes
+
+ self.num_in_each_level = [i for i in range(self.num_stripes, 0, -1)]
+ self.num_branches = sum(self.num_in_each_level)
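+        # With the default num_stripes=6, num_in_each_level is
+        # [6, 5, 4, 3, 2, 1] and num_branches == 21: six single-stripe
+        # branches at the finest level down to one branch spanning all six
+        # stripes at the coarsest level.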
+
+ self.base = ResNet101(
+ lr_mult=0.1,
+ last_conv_stride=last_conv_stride,
+ last_conv_dilation=last_conv_dilation)
+ self.dropout_layer = nn.Dropout(p=0.2)
+ self.pyramid_conv_list0, self.pyramid_fc_list0 = self.basic_branch(
+ num_conv_out_channels, input_ch)
+
+ def basic_branch(self, num_conv_out_channels, input_ch):
+        # The level indexes are defined from fine to coarse; each branch at a
+        # level contains one more part than the branches at the previous
+        # level, and the sliding step is set to 1.
+ pyramid_conv_list = nn.LayerList()
+ pyramid_fc_list = nn.LayerList()
+
+ idx_levels = 0
+ for idx_branches in range(self.num_branches):
+ if idx_branches >= sum(self.num_in_each_level[0:idx_levels + 1]):
+ idx_levels += 1
+
+ pyramid_conv_list.append(
+ nn.Sequential(
+ nn.Conv2D(input_ch, num_conv_out_channels, 1),
+ nn.BatchNorm2D(num_conv_out_channels), nn.ReLU()))
+
+ idx_levels = 0
+ for idx_branches in range(self.num_branches):
+ if idx_branches >= sum(self.num_in_each_level[0:idx_levels + 1]):
+ idx_levels += 1
+
+ fc = nn.Linear(
+ in_features=num_conv_out_channels,
+ out_features=self.num_classes,
+ weight_attr=ParamAttr(initializer=Normal(
+ mean=0., std=0.001)),
+ bias_attr=ParamAttr(initializer=Constant(value=0.)))
+ pyramid_fc_list.append(fc)
+ return pyramid_conv_list, pyramid_fc_list
+
+ def pyramid_forward(self, feat):
+ each_stripe_size = int(feat.shape[2] / self.num_stripes)
+
+ feat_list, logits_list = [], []
+ idx_levels = 0
+ used_branches = 0
+ for idx_branches in range(self.num_branches):
+ if idx_branches >= sum(self.num_in_each_level[0:idx_levels + 1]):
+ idx_levels += 1
+ idx_in_each_level = idx_branches - sum(self.num_in_each_level[
+ 0:idx_levels])
+ stripe_size_in_each_level = each_stripe_size * (idx_levels + 1)
+ start = idx_in_each_level * each_stripe_size
+ end = start + stripe_size_in_each_level
+
+ k = feat.shape[-1]
+ local_feat_avgpool = F.avg_pool2d(
+ feat[:, :, start:end, :],
+ kernel_size=(stripe_size_in_each_level, k))
+ local_feat_maxpool = F.max_pool2d(
+ feat[:, :, start:end, :],
+ kernel_size=(stripe_size_in_each_level, k))
+ local_feat = local_feat_avgpool + local_feat_maxpool
+
+ local_feat = self.pyramid_conv_list0[used_branches](local_feat)
+ local_feat = paddle.reshape(
+ local_feat, shape=[local_feat.shape[0], -1])
+ feat_list.append(local_feat)
+
+ local_logits = self.pyramid_fc_list0[used_branches](
+ self.dropout_layer(local_feat))
+ logits_list.append(local_logits)
+
+ used_branches += 1
+
+ return feat_list, logits_list
+
+ def forward(self, x):
+ feat = self.base(x)
+ assert feat.shape[2] % self.num_stripes == 0
+ feat_list, logits_list = self.pyramid_forward(feat)
+ feat_out = paddle.concat(feat_list, axis=-1)
+ return feat_out
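+
+
+# Editorial note: with the defaults (21 branches, num_conv_out_channels=128)
+# forward() returns a 21 * 128 = 2688-dimensional feature per image;
+# feat.shape[2] must be divisible by num_stripes (asserted above).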
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/resnet.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/resnet.py
new file mode 100644
index 000000000..968fe9774
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/reid/resnet.py
@@ -0,0 +1,310 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import math
+import paddle
+from paddle import ParamAttr
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle.nn.initializer import Normal
+
+__all__ = ["ResNet18", "ResNet34", "ResNet50", "ResNet101", "ResNet152"]
+
+
+class ConvBNLayer(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ filter_size,
+ stride=1,
+ dilation=1,
+ groups=1,
+ act=None,
+ lr_mult=1.0,
+ name=None,
+ data_format="NCHW"):
+ super(ConvBNLayer, self).__init__()
+ conv_stdv = filter_size * filter_size * num_filters
+ self._conv = nn.Conv2D(
+ in_channels=num_channels,
+ out_channels=num_filters,
+ kernel_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ dilation=dilation,
+ groups=groups,
+ weight_attr=ParamAttr(
+ learning_rate=lr_mult,
+ initializer=Normal(0, math.sqrt(2. / conv_stdv))),
+ bias_attr=False,
+ data_format=data_format)
+
+ self._batch_norm = nn.BatchNorm(
+ num_filters, act=act, data_layout=data_format)
+
+ def forward(self, inputs):
+ y = self._conv(inputs)
+ y = self._batch_norm(y)
+ return y
+
+
+class BottleneckBlock(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ stride,
+ shortcut=True,
+ name=None,
+ lr_mult=1.0,
+ dilation=1,
+ data_format="NCHW"):
+ super(BottleneckBlock, self).__init__()
+ self.conv0 = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_filters,
+ filter_size=1,
+ dilation=dilation,
+ act="relu",
+ lr_mult=lr_mult,
+ name=name + "_branch2a",
+ data_format=data_format)
+ self.conv1 = ConvBNLayer(
+ num_channels=num_filters,
+ num_filters=num_filters,
+ filter_size=3,
+ dilation=dilation,
+ stride=stride,
+ act="relu",
+ lr_mult=lr_mult,
+ name=name + "_branch2b",
+ data_format=data_format)
+ self.conv2 = ConvBNLayer(
+ num_channels=num_filters,
+ num_filters=num_filters * 4,
+ filter_size=1,
+ dilation=dilation,
+ act=None,
+ lr_mult=lr_mult,
+ name=name + "_branch2c",
+ data_format=data_format)
+ if not shortcut:
+ self.short = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_filters * 4,
+ filter_size=1,
+ dilation=dilation,
+ stride=stride,
+ lr_mult=lr_mult,
+ name=name + "_branch1",
+ data_format=data_format)
+ self.shortcut = shortcut
+ self._num_channels_out = num_filters * 4
+
+ def forward(self, inputs):
+ y = self.conv0(inputs)
+ conv1 = self.conv1(y)
+ conv2 = self.conv2(conv1)
+ if self.shortcut:
+ short = inputs
+ else:
+ short = self.short(inputs)
+ y = paddle.add(x=short, y=conv2)
+ y = F.relu(y)
+ return y
+
+
+class BasicBlock(nn.Layer):
+ def __init__(self,
+ num_channels,
+ num_filters,
+ stride,
+ shortcut=True,
+ name=None,
+ data_format="NCHW"):
+ super(BasicBlock, self).__init__()
+ self.stride = stride
+ self.conv0 = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_filters,
+ filter_size=3,
+ stride=stride,
+ act="relu",
+ name=name + "_branch2a",
+ data_format=data_format)
+ self.conv1 = ConvBNLayer(
+ num_channels=num_filters,
+ num_filters=num_filters,
+ filter_size=3,
+ act=None,
+ name=name + "_branch2b",
+ data_format=data_format)
+ if not shortcut:
+ self.short = ConvBNLayer(
+ num_channels=num_channels,
+ num_filters=num_filters,
+ filter_size=1,
+ stride=stride,
+ name=name + "_branch1",
+ data_format=data_format)
+ self.shortcut = shortcut
+
+ def forward(self, inputs):
+ y = self.conv0(inputs)
+ conv1 = self.conv1(y)
+ if self.shortcut:
+ short = inputs
+ else:
+ short = self.short(inputs)
+ y = paddle.add(x=short, y=conv1)
+ y = F.relu(y)
+ return y
+
+
+class ResNet(nn.Layer):
+ def __init__(self,
+ layers=50,
+ lr_mult=1.0,
+ last_conv_stride=2,
+ last_conv_dilation=1):
+ super(ResNet, self).__init__()
+ self.layers = layers
+ self.data_format = "NCHW"
+ self.input_image_channel = 3
+ supported_layers = [18, 34, 50, 101, 152]
+ assert layers in supported_layers, \
+ "supported layers are {} but input layer is {}".format(
+ supported_layers, layers)
+ if layers == 18:
+ depth = [2, 2, 2, 2]
+ elif layers == 34 or layers == 50:
+ depth = [3, 4, 6, 3]
+ elif layers == 101:
+ depth = [3, 4, 23, 3]
+ elif layers == 152:
+ depth = [3, 8, 36, 3]
+ num_channels = [64, 256, 512,
+ 1024] if layers >= 50 else [64, 64, 128, 256]
+ num_filters = [64, 128, 256, 512]
+ self.conv = ConvBNLayer(
+ num_channels=self.input_image_channel,
+ num_filters=64,
+ filter_size=7,
+ stride=2,
+ act="relu",
+ lr_mult=lr_mult,
+ name="conv1",
+ data_format=self.data_format)
+ self.pool2d_max = nn.MaxPool2D(
+ kernel_size=3, stride=2, padding=1, data_format=self.data_format)
+ self.block_list = []
+ if layers >= 50:
+ for block in range(len(depth)):
+ shortcut = False
+ for i in range(depth[block]):
+ if layers in [101, 152] and block == 2:
+ if i == 0:
+ conv_name = "res" + str(block + 2) + "a"
+ else:
+ conv_name = "res" + str(block + 2) + "b" + str(i)
+ else:
+ conv_name = "res" + str(block + 2) + chr(97 + i)
+ if i != 0 or block == 0:
+ stride = 1
+ elif block == len(depth) - 1:
+ stride = last_conv_stride
+ else:
+ stride = 2
+ bottleneck_block = self.add_sublayer(
+ conv_name,
+ BottleneckBlock(
+ num_channels=num_channels[block]
+ if i == 0 else num_filters[block] * 4,
+ num_filters=num_filters[block],
+ stride=stride,
+ shortcut=shortcut,
+ name=conv_name,
+ lr_mult=lr_mult,
+ dilation=last_conv_dilation
+ if block == len(depth) - 1 else 1,
+ data_format=self.data_format))
+ self.block_list.append(bottleneck_block)
+ shortcut = True
+ else:
+ for block in range(len(depth)):
+ shortcut = False
+ for i in range(depth[block]):
+ conv_name = "res" + str(block + 2) + chr(97 + i)
+ basic_block = self.add_sublayer(
+ conv_name,
+ BasicBlock(
+ num_channels=num_channels[block]
+ if i == 0 else num_filters[block],
+ num_filters=num_filters[block],
+ stride=2 if i == 0 and block != 0 else 1,
+ shortcut=shortcut,
+ name=conv_name,
+ data_format=self.data_format))
+ self.block_list.append(basic_block)
+ shortcut = True
+
+ def forward(self, inputs):
+ y = self.conv(inputs)
+ y = self.pool2d_max(y)
+ for block in self.block_list:
+ y = block(y)
+ return y
+
+
+def ResNet18(**args):
+ model = ResNet(layers=18, **args)
+ return model
+
+
+def ResNet34(**args):
+ model = ResNet(layers=34, **args)
+ return model
+
+
+def ResNet50(pretrained=None, **args):
+ model = ResNet(layers=50, **args)
+ if pretrained is not None:
+ if not (os.path.isdir(pretrained) or
+ os.path.exists(pretrained + '.pdparams')):
+            raise ValueError("Model pretrain path {} does not "
+                             "exist.".format(pretrained))
+ param_state_dict = paddle.load(pretrained + '.pdparams')
+ model.set_dict(param_state_dict)
+ return model
+
+
+def ResNet101(pretrained=None, **args):
+ model = ResNet(layers=101, **args)
+ if pretrained is not None:
+ if not (os.path.isdir(pretrained) or
+ os.path.exists(pretrained + '.pdparams')):
+            raise ValueError("Model pretrain path {} does not "
+                             "exist.".format(pretrained))
+ param_state_dict = paddle.load(pretrained + '.pdparams')
+ model.set_dict(param_state_dict)
+ return model
+
+
+def ResNet152(**args):
+ model = ResNet(layers=152, **args)
+ return model
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/shape_spec.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/shape_spec.py
new file mode 100644
index 000000000..81601fd64
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/shape_spec.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# The code is based on:
+# https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/shape_spec.py
+
+from collections import namedtuple
+
+
+class ShapeSpec(
+ namedtuple("_ShapeSpec", ["channels", "height", "width", "stride"])):
+ def __new__(cls, channels=None, height=None, width=None, stride=None):
+ return super(ShapeSpec, cls).__new__(cls, channels, height, width,
+ stride)
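+
+
+# Usage sketch (editorial): ShapeSpec is a lightweight, immutable
+# description of a feature map, e.g. ShapeSpec(channels=256, stride=16);
+# fields left unset default to None, meaning "unspecified".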
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/__init__.py
new file mode 100644
index 000000000..847ddc47a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_architectures.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_architectures.py
new file mode 100644
index 000000000..25767e74a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_architectures.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import unittest
+import ppdet
+
+
+class TestFasterRCNN(unittest.TestCase):
+ def setUp(self):
+ self.set_config()
+
+ def set_config(self):
+ self.cfg_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.yml'
+
+ def test_trainer(self):
+ # Trainer __init__ will build model and DataLoader
+        # 'train' and 'eval' modes include dataset loading,
+        # so use 'test' mode to keep the tests light
+ cfg = ppdet.core.workspace.load_config(self.cfg_file)
+ trainer = ppdet.engine.Trainer(cfg, mode='test')
+
+
+class TestMaskRCNN(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.yml'
+
+
+class TestCascadeRCNN(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.yml'
+
+
+class TestYolov3(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/yolov3/yolov3_darknet53_270e_coco.yml'
+
+
+class TestSSD(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/ssd/ssd_vgg16_300_240e_voc.yml'
+
+
+class TestGFL(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/gfl/gfl_r50_fpn_1x_coco.yml'
+
+
+class TestPicoDet(TestFasterRCNN):
+ def set_config(self):
+ self.cfg_file = 'configs/picodet/picodet_s_320_coco.yml'
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_base.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_base.py
new file mode 100644
index 000000000..cbb9033b3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import print_function
+import unittest
+
+import contextlib
+
+import paddle
+import paddle.fluid as fluid
+from paddle.fluid.framework import Program
+from paddle.fluid import core
+
+
+class LayerTest(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.seed = 111
+
+ @classmethod
+ def tearDownClass(cls):
+ pass
+
+ def _get_place(self, force_to_use_cpu=False):
+        # this option is for ops that only have a CPU kernel
+ if force_to_use_cpu:
+ return core.CPUPlace()
+ else:
+ if core.is_compiled_with_cuda():
+ return core.CUDAPlace(0)
+ return core.CPUPlace()
+
+ @contextlib.contextmanager
+ def static_graph(self):
+ paddle.enable_static()
+ scope = fluid.core.Scope()
+ program = Program()
+ with fluid.scope_guard(scope):
+ with fluid.program_guard(program):
+ paddle.seed(self.seed)
+ paddle.framework.random._manual_program_seed(self.seed)
+ yield
+
+ def get_static_graph_result(self,
+ feed,
+ fetch_list,
+ with_lod=False,
+ force_to_use_cpu=False):
+ exe = fluid.Executor(self._get_place(force_to_use_cpu))
+ exe.run(fluid.default_startup_program())
+ return exe.run(fluid.default_main_program(),
+ feed=feed,
+ fetch_list=fetch_list,
+ return_numpy=(not with_lod))
+
+ @contextlib.contextmanager
+ def dynamic_graph(self, force_to_use_cpu=False):
+ paddle.disable_static()
+ with fluid.dygraph.guard(
+ self._get_place(force_to_use_cpu=force_to_use_cpu)):
+ paddle.seed(self.seed)
+ paddle.framework.random._manual_program_seed(self.seed)
+ yield
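+
+
+# Editorial sketch of the intended usage (test name hypothetical):
+# subclasses run the same op in both modes and compare the results, e.g.
+#
+#   class TestMyOp(LayerTest):
+#       def test_op(self):
+#           x_np = np.random.rand(2, 3).astype('float32')
+#           with self.static_graph():
+#               x = paddle.static.data(name='x', shape=[2, 3],
+#                                      dtype='float32')
+#               out_st, = self.get_static_graph_result(
+#                   feed={'x': x_np}, fetch_list=[paddle.abs(x)])
+#           with self.dynamic_graph():
+#               out_dy = paddle.abs(paddle.to_tensor(x_np)).numpy()
+#           np.testing.assert_allclose(out_st, out_dy)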
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_ops.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_ops.py
new file mode 100644
index 000000000..d4b574748
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_ops.py
@@ -0,0 +1,835 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import print_function
+import os, sys
+# add python path of PaddleDetection to sys.path
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 4)))
+if parent_path not in sys.path:
+ sys.path.append(parent_path)
+
+import unittest
+import numpy as np
+
+import paddle
+import paddle.fluid as fluid
+from paddle.fluid.dygraph import base
+
+import ppdet.modeling.ops as ops
+from ppdet.modeling.tests.test_base import LayerTest
+
+
+def make_rois(h, w, rois_num, output_size):
+ rois = np.zeros((0, 4)).astype('float32')
+ for roi_num in rois_num:
+ roi = np.zeros((roi_num, 4)).astype('float32')
+ roi[:, 0] = np.random.randint(0, h - output_size[0], size=roi_num)
+ roi[:, 1] = np.random.randint(0, w - output_size[1], size=roi_num)
+ roi[:, 2] = np.random.randint(roi[:, 0] + output_size[0], h)
+ roi[:, 3] = np.random.randint(roi[:, 1] + output_size[1], w)
+ rois = np.vstack((rois, roi))
+ return rois
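+
+# make_rois samples random boxes whose height and width are at least
+# output_size, so every generated RoI can be pooled to the requested
+# resolution without degenerate bins.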
+
+
+def softmax(x):
+    # clip to shiftx; otherwise, when computing the loss with
+    # log(exp(shiftx)), we may get log(0) = INF
+ shiftx = (x - np.max(x)).clip(-64.)
+ exps = np.exp(shiftx)
+ return exps / np.sum(exps)
+
+
+class TestCollectFpnProposals(LayerTest):
+ def test_collect_fpn_proposals(self):
+ multi_bboxes_np = []
+ multi_scores_np = []
+ rois_num_per_level_np = []
+ for i in range(4):
+ bboxes_np = np.random.rand(5, 4).astype('float32')
+ scores_np = np.random.rand(5, 1).astype('float32')
+ rois_num = np.array([2, 3]).astype('int32')
+ multi_bboxes_np.append(bboxes_np)
+ multi_scores_np.append(scores_np)
+ rois_num_per_level_np.append(rois_num)
+
+ with self.static_graph():
+ multi_bboxes = []
+ multi_scores = []
+ rois_num_per_level = []
+ for i in range(4):
+ bboxes = paddle.static.data(
+ name='rois' + str(i),
+ shape=[5, 4],
+ dtype='float32',
+ lod_level=1)
+ scores = paddle.static.data(
+ name='scores' + str(i),
+ shape=[5, 1],
+ dtype='float32',
+ lod_level=1)
+ rois_num = paddle.static.data(
+ name='rois_num' + str(i), shape=[None], dtype='int32')
+
+ multi_bboxes.append(bboxes)
+ multi_scores.append(scores)
+ rois_num_per_level.append(rois_num)
+
+ fpn_rois, rois_num = ops.collect_fpn_proposals(
+ multi_bboxes,
+ multi_scores,
+ 2,
+ 5,
+ 10,
+ rois_num_per_level=rois_num_per_level)
+ feed = {}
+ for i in range(4):
+ feed['rois' + str(i)] = multi_bboxes_np[i]
+ feed['scores' + str(i)] = multi_scores_np[i]
+ feed['rois_num' + str(i)] = rois_num_per_level_np[i]
+ fpn_rois_stat, rois_num_stat = self.get_static_graph_result(
+ feed=feed, fetch_list=[fpn_rois, rois_num], with_lod=True)
+ fpn_rois_stat = np.array(fpn_rois_stat)
+ rois_num_stat = np.array(rois_num_stat)
+
+ with self.dynamic_graph():
+ multi_bboxes_dy = []
+ multi_scores_dy = []
+ rois_num_per_level_dy = []
+ for i in range(4):
+ bboxes_dy = base.to_variable(multi_bboxes_np[i])
+ scores_dy = base.to_variable(multi_scores_np[i])
+ rois_num_dy = base.to_variable(rois_num_per_level_np[i])
+ multi_bboxes_dy.append(bboxes_dy)
+ multi_scores_dy.append(scores_dy)
+ rois_num_per_level_dy.append(rois_num_dy)
+ fpn_rois_dy, rois_num_dy = ops.collect_fpn_proposals(
+ multi_bboxes_dy,
+ multi_scores_dy,
+ 2,
+ 5,
+ 10,
+ rois_num_per_level=rois_num_per_level_dy)
+ fpn_rois_dy = fpn_rois_dy.numpy()
+ rois_num_dy = rois_num_dy.numpy()
+
+ self.assertTrue(np.array_equal(fpn_rois_stat, fpn_rois_dy))
+ self.assertTrue(np.array_equal(rois_num_stat, rois_num_dy))
+
+ def test_collect_fpn_proposals_error(self):
+ def generate_input(bbox_type, score_type, name):
+ multi_bboxes = []
+ multi_scores = []
+ for i in range(4):
+ bboxes = paddle.static.data(
+ name='rois' + name + str(i),
+ shape=[10, 4],
+ dtype=bbox_type,
+ lod_level=1)
+ scores = paddle.static.data(
+ name='scores' + name + str(i),
+ shape=[10, 1],
+ dtype=score_type,
+ lod_level=1)
+ multi_bboxes.append(bboxes)
+ multi_scores.append(scores)
+ return multi_bboxes, multi_scores
+
+ with self.static_graph():
+ bbox1 = paddle.static.data(
+ name='rois', shape=[5, 10, 4], dtype='float32', lod_level=1)
+ score1 = paddle.static.data(
+ name='scores', shape=[5, 10, 1], dtype='float32', lod_level=1)
+ bbox2, score2 = generate_input('int32', 'float32', '2')
+ self.assertRaises(
+ TypeError,
+ ops.collect_fpn_proposals,
+ multi_rois=bbox1,
+ multi_scores=score1,
+ min_level=2,
+ max_level=5,
+ post_nms_top_n=2000)
+ self.assertRaises(
+ TypeError,
+ ops.collect_fpn_proposals,
+ multi_rois=bbox2,
+ multi_scores=score2,
+ min_level=2,
+ max_level=5,
+ post_nms_top_n=2000)
+
+ paddle.disable_static()
+
+
+class TestDistributeFpnProposals(LayerTest):
+ def test_distribute_fpn_proposals(self):
+ rois_np = np.random.rand(10, 4).astype('float32')
+ rois_num_np = np.array([4, 6]).astype('int32')
+ with self.static_graph():
+ rois = paddle.static.data(
+ name='rois', shape=[10, 4], dtype='float32')
+ rois_num = paddle.static.data(
+ name='rois_num', shape=[None], dtype='int32')
+ multi_rois, restore_ind, rois_num_per_level = ops.distribute_fpn_proposals(
+ fpn_rois=rois,
+ min_level=2,
+ max_level=5,
+ refer_level=4,
+ refer_scale=224,
+ rois_num=rois_num)
+ fetch_list = multi_rois + [restore_ind] + rois_num_per_level
+ output_stat = self.get_static_graph_result(
+ feed={'rois': rois_np,
+ 'rois_num': rois_num_np},
+ fetch_list=fetch_list,
+ with_lod=True)
+ output_stat_np = []
+ for output in output_stat:
+ output_np = np.array(output)
+ if len(output_np) > 0:
+ output_stat_np.append(output_np)
+
+ with self.dynamic_graph():
+ rois_dy = base.to_variable(rois_np)
+ rois_num_dy = base.to_variable(rois_num_np)
+ multi_rois_dy, restore_ind_dy, rois_num_per_level_dy = ops.distribute_fpn_proposals(
+ fpn_rois=rois_dy,
+ min_level=2,
+ max_level=5,
+ refer_level=4,
+ refer_scale=224,
+ rois_num=rois_num_dy)
+ output_dy = multi_rois_dy + [restore_ind_dy] + rois_num_per_level_dy
+ output_dy_np = []
+ for output in output_dy:
+ output_np = output.numpy()
+ if len(output_np) > 0:
+ output_dy_np.append(output_np)
+
+ for res_stat, res_dy in zip(output_stat_np, output_dy_np):
+ self.assertTrue(np.array_equal(res_stat, res_dy))
+
+ def test_distribute_fpn_proposals_error(self):
+ with self.static_graph():
+ fpn_rois = paddle.static.data(
+ name='data_error', shape=[10, 4], dtype='int32', lod_level=1)
+ self.assertRaises(
+ TypeError,
+ ops.distribute_fpn_proposals,
+ fpn_rois=fpn_rois,
+ min_level=2,
+ max_level=5,
+ refer_level=4,
+ refer_scale=224)
+
+ paddle.disable_static()
+
+
+class TestROIAlign(LayerTest):
+ def test_roi_align(self):
+ b, c, h, w = 2, 12, 20, 20
+ inputs_np = np.random.rand(b, c, h, w).astype('float32')
+ rois_num = [4, 6]
+ output_size = (7, 7)
+ rois_np = make_rois(h, w, rois_num, output_size)
+ rois_num_np = np.array(rois_num).astype('int32')
+ with self.static_graph():
+ inputs = paddle.static.data(
+ name='inputs', shape=[b, c, h, w], dtype='float32')
+ rois = paddle.static.data(
+ name='rois', shape=[10, 4], dtype='float32')
+ rois_num = paddle.static.data(
+ name='rois_num', shape=[None], dtype='int32')
+
+ output = ops.roi_align(
+ input=inputs,
+ rois=rois,
+ output_size=output_size,
+ rois_num=rois_num)
+ output_np, = self.get_static_graph_result(
+ feed={
+ 'inputs': inputs_np,
+ 'rois': rois_np,
+ 'rois_num': rois_num_np
+ },
+ fetch_list=output,
+ with_lod=False)
+
+ with self.dynamic_graph():
+ inputs_dy = base.to_variable(inputs_np)
+ rois_dy = base.to_variable(rois_np)
+ rois_num_dy = base.to_variable(rois_num_np)
+
+ output_dy = ops.roi_align(
+ input=inputs_dy,
+ rois=rois_dy,
+ output_size=output_size,
+ rois_num=rois_num_dy)
+ output_dy_np = output_dy.numpy()
+
+ self.assertTrue(np.array_equal(output_np, output_dy_np))
+
+ def test_roi_align_error(self):
+ with self.static_graph():
+ inputs = paddle.static.data(
+ name='inputs', shape=[2, 12, 20, 20], dtype='float32')
+ rois = paddle.static.data(
+ name='data_error', shape=[10, 4], dtype='int32', lod_level=1)
+ self.assertRaises(
+ TypeError,
+ ops.roi_align,
+ input=inputs,
+ rois=rois,
+ output_size=(7, 7))
+
+ paddle.disable_static()
+
+
+class TestROIPool(LayerTest):
+ def test_roi_pool(self):
+ b, c, h, w = 2, 12, 20, 20
+ inputs_np = np.random.rand(b, c, h, w).astype('float32')
+ rois_num = [4, 6]
+ output_size = (7, 7)
+ rois_np = make_rois(h, w, rois_num, output_size)
+ rois_num_np = np.array(rois_num).astype('int32')
+ with self.static_graph():
+ inputs = paddle.static.data(
+ name='inputs', shape=[b, c, h, w], dtype='float32')
+ rois = paddle.static.data(
+ name='rois', shape=[10, 4], dtype='float32')
+ rois_num = paddle.static.data(
+ name='rois_num', shape=[None], dtype='int32')
+
+ output, _ = ops.roi_pool(
+ input=inputs,
+ rois=rois,
+ output_size=output_size,
+ rois_num=rois_num)
+ output_np, = self.get_static_graph_result(
+ feed={
+ 'inputs': inputs_np,
+ 'rois': rois_np,
+ 'rois_num': rois_num_np
+ },
+ fetch_list=[output],
+ with_lod=False)
+
+ with self.dynamic_graph():
+ inputs_dy = base.to_variable(inputs_np)
+ rois_dy = base.to_variable(rois_np)
+ rois_num_dy = base.to_variable(rois_num_np)
+
+ output_dy, _ = ops.roi_pool(
+ input=inputs_dy,
+ rois=rois_dy,
+ output_size=output_size,
+ rois_num=rois_num_dy)
+ output_dy_np = output_dy.numpy()
+
+ self.assertTrue(np.array_equal(output_np, output_dy_np))
+
+ def test_roi_pool_error(self):
+ with self.static_graph():
+ inputs = paddle.static.data(
+ name='inputs', shape=[2, 12, 20, 20], dtype='float32')
+ rois = paddle.static.data(
+ name='data_error', shape=[10, 4], dtype='int32', lod_level=1)
+ self.assertRaises(
+ TypeError,
+ ops.roi_pool,
+ input=inputs,
+ rois=rois,
+ output_size=(7, 7))
+
+ paddle.disable_static()
+
+
+class TestIoUSimilarity(LayerTest):
+ def test_iou_similarity(self):
+ b, c, h, w = 2, 12, 20, 20
+ inputs_np = np.random.rand(b, c, h, w).astype('float32')
+ output_size = (7, 7)
+ x_np = make_rois(h, w, [20], output_size)
+ y_np = make_rois(h, w, [10], output_size)
+ with self.static_graph():
+ x = paddle.static.data(name='x', shape=[20, 4], dtype='float32')
+ y = paddle.static.data(name='y', shape=[10, 4], dtype='float32')
+
+ iou = ops.iou_similarity(x=x, y=y)
+ iou_np, = self.get_static_graph_result(
+ feed={
+ 'x': x_np,
+ 'y': y_np,
+ }, fetch_list=[iou], with_lod=False)
+
+ with self.dynamic_graph():
+ x_dy = base.to_variable(x_np)
+ y_dy = base.to_variable(y_np)
+
+ iou_dy = ops.iou_similarity(x=x_dy, y=y_dy)
+ iou_dy_np = iou_dy.numpy()
+
+ self.assertTrue(np.array_equal(iou_np, iou_dy_np))
+
+
+class TestBipartiteMatch(LayerTest):
+ def test_bipartite_match(self):
+ distance = np.random.random((20, 10)).astype('float32')
+ with self.static_graph():
+ x = paddle.static.data(name='x', shape=[20, 10], dtype='float32')
+
+ match_indices, match_dist = ops.bipartite_match(
+ x, match_type='per_prediction', dist_threshold=0.5)
+ match_indices_np, match_dist_np = self.get_static_graph_result(
+ feed={'x': distance, },
+ fetch_list=[match_indices, match_dist],
+ with_lod=False)
+
+ with self.dynamic_graph():
+ x_dy = base.to_variable(distance)
+
+ match_indices_dy, match_dist_dy = ops.bipartite_match(
+ x_dy, match_type='per_prediction', dist_threshold=0.5)
+ match_indices_dy_np = match_indices_dy.numpy()
+ match_dist_dy_np = match_dist_dy.numpy()
+
+ self.assertTrue(np.array_equal(match_indices_np, match_indices_dy_np))
+ self.assertTrue(np.array_equal(match_dist_np, match_dist_dy_np))
+
+
+class TestYoloBox(LayerTest):
+ def test_yolo_box(self):
+
+ # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2
+ np_x = np.random.random([1, 30, 7, 7]).astype('float32')
+ np_origin_shape = np.array([[608, 608]], dtype='int32')
+ class_num = 10
+ conf_thresh = 0.01
+ downsample_ratio = 32
+ scale_x_y = 1.2
+
+ # static
+ with self.static_graph():
+ # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2
+ x = paddle.static.data(
+ name='x', shape=[1, 30, 7, 7], dtype='float32')
+ origin_shape = paddle.static.data(
+ name='origin_shape', shape=[1, 2], dtype='int32')
+
+ boxes, scores = ops.yolo_box(
+ x,
+ origin_shape, [10, 13, 30, 13],
+ class_num,
+ conf_thresh,
+ downsample_ratio,
+ scale_x_y=scale_x_y)
+
+ boxes_np, scores_np = self.get_static_graph_result(
+ feed={
+ 'x': np_x,
+ 'origin_shape': np_origin_shape,
+ },
+ fetch_list=[boxes, scores],
+ with_lod=False)
+
+ # dygraph
+ with self.dynamic_graph():
+ x_dy = fluid.layers.assign(np_x)
+ origin_shape_dy = fluid.layers.assign(np_origin_shape)
+
+ boxes_dy, scores_dy = ops.yolo_box(
+ x_dy,
+ origin_shape_dy, [10, 13, 30, 13],
+ 10,
+ 0.01,
+ 32,
+ scale_x_y=scale_x_y)
+
+ boxes_dy_np = boxes_dy.numpy()
+ scores_dy_np = scores_dy.numpy()
+
+ self.assertTrue(np.array_equal(boxes_np, boxes_dy_np))
+ self.assertTrue(np.array_equal(scores_np, scores_dy_np))
+
+ def test_yolo_box_error(self):
+ with self.static_graph():
+ # x shape [N C H W], C=K * (5 + class_num), class_num=10, K=2
+ x = paddle.static.data(
+ name='x', shape=[1, 30, 7, 7], dtype='float32')
+ origin_shape = paddle.static.data(
+ name='origin_shape', shape=[1, 2], dtype='int32')
+
+ self.assertRaises(
+ TypeError,
+ ops.yolo_box,
+ x,
+ origin_shape, [10, 13, 30, 13],
+ 10.123,
+ 0.01,
+ 32,
+ scale_x_y=1.2)
+
+ paddle.disable_static()
+
+
+class TestPriorBox(LayerTest):
+ def test_prior_box(self):
+ input_np = np.random.rand(2, 10, 32, 32).astype('float32')
+ image_np = np.random.rand(2, 10, 40, 40).astype('float32')
+ min_sizes = [2, 4]
+ with self.static_graph():
+ input = paddle.static.data(
+ name='input', shape=[2, 10, 32, 32], dtype='float32')
+ image = paddle.static.data(
+ name='image', shape=[2, 10, 40, 40], dtype='float32')
+
+ box, var = ops.prior_box(
+ input=input,
+ image=image,
+ min_sizes=min_sizes,
+ clip=True,
+ flip=True)
+ box_np, var_np = self.get_static_graph_result(
+ feed={
+ 'input': input_np,
+ 'image': image_np,
+ },
+ fetch_list=[box, var],
+ with_lod=False)
+
+ with self.dynamic_graph():
+ inputs_dy = base.to_variable(input_np)
+ image_dy = base.to_variable(image_np)
+
+ box_dy, var_dy = ops.prior_box(
+ input=inputs_dy,
+ image=image_dy,
+ min_sizes=min_sizes,
+ clip=True,
+ flip=True)
+ box_dy_np = box_dy.numpy()
+ var_dy_np = var_dy.numpy()
+
+ self.assertTrue(np.array_equal(box_np, box_dy_np))
+ self.assertTrue(np.array_equal(var_np, var_dy_np))
+
+ def test_prior_box_error(self):
+ with self.static_graph():
+ input = paddle.static.data(
+ name='input', shape=[2, 10, 32, 32], dtype='int32')
+ image = paddle.static.data(
+ name='image', shape=[2, 10, 40, 40], dtype='int32')
+ self.assertRaises(
+ TypeError,
+ ops.prior_box,
+ input=input,
+ image=image,
+ min_sizes=[2, 4],
+ clip=True,
+ flip=True)
+
+ paddle.disable_static()
+
+
+class TestMulticlassNms(LayerTest):
+ def test_multiclass_nms(self):
+ boxes_np = np.random.rand(10, 81, 4).astype('float32')
+ scores_np = np.random.rand(10, 81).astype('float32')
+ rois_num_np = np.array([2, 8]).astype('int32')
+ with self.static_graph():
+ boxes = paddle.static.data(
+ name='bboxes',
+ shape=[None, 81, 4],
+ dtype='float32',
+ lod_level=1)
+ scores = paddle.static.data(
+ name='scores', shape=[None, 81], dtype='float32', lod_level=1)
+ rois_num = paddle.static.data(
+ name='rois_num', shape=[None], dtype='int32')
+
+ output = ops.multiclass_nms(
+ bboxes=boxes,
+ scores=scores,
+ background_label=0,
+ score_threshold=0.5,
+ nms_top_k=400,
+ nms_threshold=0.3,
+ keep_top_k=200,
+ normalized=False,
+ return_index=True,
+ rois_num=rois_num)
+ out_np, index_np, nms_rois_num_np = self.get_static_graph_result(
+ feed={
+ 'bboxes': boxes_np,
+ 'scores': scores_np,
+ 'rois_num': rois_num_np
+ },
+ fetch_list=output,
+ with_lod=True)
+ out_np = np.array(out_np)
+ index_np = np.array(index_np)
+ nms_rois_num_np = np.array(nms_rois_num_np)
+
+ with self.dynamic_graph():
+ boxes_dy = base.to_variable(boxes_np)
+ scores_dy = base.to_variable(scores_np)
+ rois_num_dy = base.to_variable(rois_num_np)
+
+ out_dy, index_dy, nms_rois_num_dy = ops.multiclass_nms(
+ bboxes=boxes_dy,
+ scores=scores_dy,
+ background_label=0,
+ score_threshold=0.5,
+ nms_top_k=400,
+ nms_threshold=0.3,
+ keep_top_k=200,
+ normalized=False,
+ return_index=True,
+ rois_num=rois_num_dy)
+ out_dy_np = out_dy.numpy()
+ index_dy_np = index_dy.numpy()
+ nms_rois_num_dy_np = nms_rois_num_dy.numpy()
+
+ self.assertTrue(np.array_equal(out_np, out_dy_np))
+ self.assertTrue(np.array_equal(index_np, index_dy_np))
+ self.assertTrue(np.array_equal(nms_rois_num_np, nms_rois_num_dy_np))
+
+ def test_multiclass_nms_error(self):
+ with self.static_graph():
+ boxes = paddle.static.data(
+ name='bboxes', shape=[81, 4], dtype='float32', lod_level=1)
+ scores = paddle.static.data(
+ name='scores', shape=[81], dtype='float32', lod_level=1)
+ rois_num = paddle.static.data(
+ name='rois_num', shape=[40, 41], dtype='int32')
+ self.assertRaises(
+ TypeError,
+ ops.multiclass_nms,
+ boxes=boxes,
+ scores=scores,
+ background_label=0,
+ score_threshold=0.5,
+ nms_top_k=400,
+ nms_threshold=0.3,
+ keep_top_k=200,
+ normalized=False,
+ return_index=True,
+ rois_num=rois_num)
+
+
+class TestMatrixNMS(LayerTest):
+ def test_matrix_nms(self):
+ N, M, C = 7, 1200, 21
+ BOX_SIZE = 4
+ nms_top_k = 400
+ keep_top_k = 200
+ score_threshold = 0.01
+ post_threshold = 0.
+
+ scores_np = np.random.random((N * M, C)).astype('float32')
+ scores_np = np.apply_along_axis(softmax, 1, scores_np)
+ scores_np = np.reshape(scores_np, (N, M, C))
+ scores_np = np.transpose(scores_np, (0, 2, 1))
+
+ boxes_np = np.random.random((N, M, BOX_SIZE)).astype('float32')
+ boxes_np[:, :, 0:2] = boxes_np[:, :, 0:2] * 0.5
+ boxes_np[:, :, 2:4] = boxes_np[:, :, 2:4] * 0.5 + 0.5
+
+ with self.static_graph():
+ boxes = paddle.static.data(
+ name='boxes', shape=[N, M, BOX_SIZE], dtype='float32')
+ scores = paddle.static.data(
+ name='scores', shape=[N, C, M], dtype='float32')
+ out, index, _ = ops.matrix_nms(
+ bboxes=boxes,
+ scores=scores,
+ score_threshold=score_threshold,
+ post_threshold=post_threshold,
+ nms_top_k=nms_top_k,
+ keep_top_k=keep_top_k,
+ return_index=True)
+ out_np, index_np = self.get_static_graph_result(
+ feed={'boxes': boxes_np,
+ 'scores': scores_np},
+ fetch_list=[out, index],
+ with_lod=True)
+
+ with self.dynamic_graph():
+ boxes_dy = base.to_variable(boxes_np)
+ scores_dy = base.to_variable(scores_np)
+
+ out_dy, index_dy, _ = ops.matrix_nms(
+ bboxes=boxes_dy,
+ scores=scores_dy,
+ score_threshold=score_threshold,
+ post_threshold=post_threshold,
+ nms_top_k=nms_top_k,
+ keep_top_k=keep_top_k,
+ return_index=True)
+ out_dy_np = out_dy.numpy()
+ index_dy_np = index_dy.numpy()
+
+ self.assertTrue(np.array_equal(out_np, out_dy_np))
+ self.assertTrue(np.array_equal(index_np, index_dy_np))
+
+ def test_matrix_nms_error(self):
+ with self.static_graph():
+ bboxes = paddle.static.data(
+ name='bboxes', shape=[7, 1200, 4], dtype='float32')
+ scores = paddle.static.data(
+ name='data_error', shape=[7, 21, 1200], dtype='int32')
+ self.assertRaises(
+ TypeError,
+ ops.matrix_nms,
+ bboxes=bboxes,
+ scores=scores,
+ score_threshold=0.01,
+ post_threshold=0.,
+ nms_top_k=400,
+ keep_top_k=200,
+ return_index=True)
+
+ paddle.disable_static()
+
+
+class TestBoxCoder(LayerTest):
+ def test_box_coder(self):
+
+ prior_box_np = np.random.random((81, 4)).astype('float32')
+ prior_box_var_np = np.random.random((81, 4)).astype('float32')
+ target_box_np = np.random.random((20, 81, 4)).astype('float32')
+
+ # static
+ with self.static_graph():
+ prior_box = paddle.static.data(
+ name='prior_box', shape=[81, 4], dtype='float32')
+ prior_box_var = paddle.static.data(
+ name='prior_box_var', shape=[81, 4], dtype='float32')
+ target_box = paddle.static.data(
+ name='target_box', shape=[20, 81, 4], dtype='float32')
+
+ boxes = ops.box_coder(
+ prior_box=prior_box,
+ prior_box_var=prior_box_var,
+ target_box=target_box,
+ code_type="decode_center_size",
+ box_normalized=False)
+
+ boxes_np, = self.get_static_graph_result(
+ feed={
+ 'prior_box': prior_box_np,
+ 'prior_box_var': prior_box_var_np,
+ 'target_box': target_box_np,
+ },
+ fetch_list=[boxes],
+ with_lod=False)
+
+ # dygraph
+ with self.dynamic_graph():
+ prior_box_dy = base.to_variable(prior_box_np)
+ prior_box_var_dy = base.to_variable(prior_box_var_np)
+ target_box_dy = base.to_variable(target_box_np)
+
+ boxes_dy = ops.box_coder(
+ prior_box=prior_box_dy,
+ prior_box_var=prior_box_var_dy,
+ target_box=target_box_dy,
+ code_type="decode_center_size",
+ box_normalized=False)
+
+ boxes_dy_np = boxes_dy.numpy()
+
+ self.assertTrue(np.array_equal(boxes_np, boxes_dy_np))
+
+ def test_box_coder_error(self):
+ with self.static_graph():
+ prior_box = paddle.static.data(
+ name='prior_box', shape=[81, 4], dtype='int32')
+ prior_box_var = paddle.static.data(
+ name='prior_box_var', shape=[81, 4], dtype='float32')
+ target_box = paddle.static.data(
+ name='target_box', shape=[20, 81, 4], dtype='float32')
+
+ self.assertRaises(TypeError, ops.box_coder, prior_box,
+ prior_box_var, target_box)
+
+ paddle.disable_static()
+
+
+class TestGenerateProposals(LayerTest):
+ def test_generate_proposals(self):
+ scores_np = np.random.rand(2, 3, 4, 4).astype('float32')
+ bbox_deltas_np = np.random.rand(2, 12, 4, 4).astype('float32')
+ im_shape_np = np.array([[8, 8], [6, 6]]).astype('float32')
+ anchors_np = np.reshape(np.arange(4 * 4 * 3 * 4),
+ [4, 4, 3, 4]).astype('float32')
+ variances_np = np.ones((4, 4, 3, 4)).astype('float32')
+
+ with self.static_graph():
+ scores = paddle.static.data(
+ name='scores', shape=[2, 3, 4, 4], dtype='float32')
+ bbox_deltas = paddle.static.data(
+ name='bbox_deltas', shape=[2, 12, 4, 4], dtype='float32')
+ im_shape = paddle.static.data(
+ name='im_shape', shape=[2, 2], dtype='float32')
+ anchors = paddle.static.data(
+ name='anchors', shape=[4, 4, 3, 4], dtype='float32')
+ variances = paddle.static.data(
+ name='var', shape=[4, 4, 3, 4], dtype='float32')
+ rois, roi_probs, rois_num = ops.generate_proposals(
+ scores,
+ bbox_deltas,
+ im_shape,
+ anchors,
+ variances,
+ pre_nms_top_n=10,
+ post_nms_top_n=5,
+ return_rois_num=True)
+ rois_stat, roi_probs_stat, rois_num_stat = self.get_static_graph_result(
+ feed={
+ 'scores': scores_np,
+ 'bbox_deltas': bbox_deltas_np,
+ 'im_shape': im_shape_np,
+ 'anchors': anchors_np,
+ 'var': variances_np
+ },
+ fetch_list=[rois, roi_probs, rois_num],
+ with_lod=True)
+
+ with self.dynamic_graph():
+ scores_dy = base.to_variable(scores_np)
+ bbox_deltas_dy = base.to_variable(bbox_deltas_np)
+ im_shape_dy = base.to_variable(im_shape_np)
+ anchors_dy = base.to_variable(anchors_np)
+ variances_dy = base.to_variable(variances_np)
+ rois, roi_probs, rois_num = ops.generate_proposals(
+ scores_dy,
+ bbox_deltas_dy,
+ im_shape_dy,
+ anchors_dy,
+ variances_dy,
+ pre_nms_top_n=10,
+ post_nms_top_n=5,
+ return_rois_num=True)
+ rois_dy = rois.numpy()
+ roi_probs_dy = roi_probs.numpy()
+ rois_num_dy = rois_num.numpy()
+
+ self.assertTrue(np.array_equal(np.array(rois_stat), rois_dy))
+ self.assertTrue(np.array_equal(np.array(roi_probs_stat), roi_probs_dy))
+ self.assertTrue(np.array_equal(np.array(rois_num_stat), rois_num_dy))
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_yolov3_loss.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_yolov3_loss.py
new file mode 100644
index 000000000..cec8bc940
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/tests/test_yolov3_loss.py
@@ -0,0 +1,414 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import division
+
+import unittest
+
+import paddle
+from paddle import fluid
+# add python path of PaddleDetection to sys.path
+import os
+import sys
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 4)))
+if parent_path not in sys.path:
+ sys.path.append(parent_path)
+
+from ppdet.modeling.losses import YOLOv3Loss
+from ppdet.data.transform.op_helper import jaccard_overlap
+import numpy as np
+
+
+def _split_ioup(output, an_num, num_classes):
+ """
+    Split the output feature map into predicted iou and the remaining
+    output along the channel dimension
+ """
+ ioup = fluid.layers.slice(output, axes=[1], starts=[0], ends=[an_num])
+ ioup = fluid.layers.sigmoid(ioup)
+ oriout = fluid.layers.slice(
+ output, axes=[1], starts=[an_num], ends=[an_num * (num_classes + 6)])
+ return (ioup, oriout)
+
+
+def _split_output(output, an_num, num_classes):
+ """
+    Split the output feature map into x, y, w, h, objectness and
+    classification along the channel dimension
+ """
+ x = fluid.layers.strided_slice(
+ output,
+ axes=[1],
+ starts=[0],
+ ends=[output.shape[1]],
+ strides=[5 + num_classes])
+ y = fluid.layers.strided_slice(
+ output,
+ axes=[1],
+ starts=[1],
+ ends=[output.shape[1]],
+ strides=[5 + num_classes])
+ w = fluid.layers.strided_slice(
+ output,
+ axes=[1],
+ starts=[2],
+ ends=[output.shape[1]],
+ strides=[5 + num_classes])
+ h = fluid.layers.strided_slice(
+ output,
+ axes=[1],
+ starts=[3],
+ ends=[output.shape[1]],
+ strides=[5 + num_classes])
+ obj = fluid.layers.strided_slice(
+ output,
+ axes=[1],
+ starts=[4],
+ ends=[output.shape[1]],
+ strides=[5 + num_classes])
+ clss = []
+ stride = output.shape[1] // an_num
+ for m in range(an_num):
+ clss.append(
+ fluid.layers.slice(
+ output,
+ axes=[1],
+ starts=[stride * m + 5],
+ ends=[stride * m + 5 + num_classes]))
+ cls = fluid.layers.transpose(
+ fluid.layers.stack(
+ clss, axis=1), perm=[0, 1, 3, 4, 2])
+ return (x, y, w, h, obj, cls)
+
+
+def _split_target(target):
+ """
+    Split the target into x, y, w, h, objectness and classification
+    along dimension 2;
+    target is in shape [N, an_num, 6 + class_num, H, W]
+ """
+ tx = target[:, :, 0, :, :]
+ ty = target[:, :, 1, :, :]
+ tw = target[:, :, 2, :, :]
+ th = target[:, :, 3, :, :]
+ tscale = target[:, :, 4, :, :]
+ tobj = target[:, :, 5, :, :]
+ tcls = fluid.layers.transpose(target[:, :, 6:, :, :], perm=[0, 1, 3, 4, 2])
+ tcls.stop_gradient = True
+ return (tx, ty, tw, th, tscale, tobj, tcls)
+
+
+def _calc_obj_loss(output, obj, tobj, gt_box, batch_size, anchors, num_classes,
+ downsample, ignore_thresh, scale_x_y):
+    # If a prediction bbox overlaps any gt bbox with IoU over ignore_thresh,
+    # its objectness loss will be ignored. The process is as follows:
+    # 1. get pred bbox, which is the same as in YOLOv3 infer mode; use yolo_box here
+    # NOTE: img_size is set as 1.0 to get normalized pred bbox
+ bbox, prob = fluid.layers.yolo_box(
+ x=output,
+ img_size=fluid.layers.ones(
+ shape=[batch_size, 2], dtype="int32"),
+ anchors=anchors,
+ class_num=num_classes,
+ conf_thresh=0.,
+ downsample_ratio=downsample,
+ clip_bbox=False,
+ scale_x_y=scale_x_y)
+ # 2. split pred bbox and gt bbox by sample, calculate IoU between pred bbox
+ # and gt bbox in each sample
+ if batch_size > 1:
+ preds = fluid.layers.split(bbox, batch_size, dim=0)
+ gts = fluid.layers.split(gt_box, batch_size, dim=0)
+ else:
+ preds = [bbox]
+ gts = [gt_box]
+ probs = [prob]
+ ious = []
+ for pred, gt in zip(preds, gts):
+
+ def box_xywh2xyxy(box):
+ x = box[:, 0]
+ y = box[:, 1]
+ w = box[:, 2]
+ h = box[:, 3]
+ return fluid.layers.stack(
+ [
+ x - w / 2.,
+ y - h / 2.,
+ x + w / 2.,
+ y + h / 2.,
+ ], axis=1)
+
+ pred = fluid.layers.squeeze(pred, axes=[0])
+ gt = box_xywh2xyxy(fluid.layers.squeeze(gt, axes=[0]))
+ ious.append(fluid.layers.iou_similarity(pred, gt))
+ iou = fluid.layers.stack(ious, axis=0)
+    # 3. Build iou_mask from the IoU between gt bbox and prediction bbox, and
+    #    obj_mask from tobj (which holds gt_score), then calculate objectness loss
+ max_iou = fluid.layers.reduce_max(iou, dim=-1)
+ iou_mask = fluid.layers.cast(max_iou <= ignore_thresh, dtype="float32")
+ output_shape = fluid.layers.shape(output)
+ an_num = len(anchors) // 2
+ iou_mask = fluid.layers.reshape(iou_mask, (-1, an_num, output_shape[2],
+ output_shape[3]))
+ iou_mask.stop_gradient = True
+ # NOTE: tobj holds gt_score, obj_mask holds object existence mask
+ obj_mask = fluid.layers.cast(tobj > 0., dtype="float32")
+ obj_mask.stop_gradient = True
+    # For positive objectness grids, objectness loss should be calculated;
+    # for negative objectness grids, it is calculated only where iou_mask == 1.0
+ loss_obj = fluid.layers.sigmoid_cross_entropy_with_logits(obj, obj_mask)
+ loss_obj_pos = fluid.layers.reduce_sum(loss_obj * tobj, dim=[1, 2, 3])
+ loss_obj_neg = fluid.layers.reduce_sum(
+ loss_obj * (1.0 - obj_mask) * iou_mask, dim=[1, 2, 3])
+ return loss_obj_pos, loss_obj_neg
+
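+# Note (editorial, not from the original file): the masking above means a
+# negative grid cell contributes objectness loss only when its best IoU with
+# any gt box is <= ignore_thresh:
+#   loss_obj_neg = sum(loss_obj * (1 - obj_mask) * iou_mask)
+# so confident predictions that do overlap a real object are not penalized.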
+
+def fine_grained_loss(output,
+ target,
+ gt_box,
+ batch_size,
+ num_classes,
+ anchors,
+ ignore_thresh,
+ downsample,
+ scale_x_y=1.,
+ eps=1e-10):
+ an_num = len(anchors) // 2
+ x, y, w, h, obj, cls = _split_output(output, an_num, num_classes)
+ tx, ty, tw, th, tscale, tobj, tcls = _split_target(target)
+
+ tscale_tobj = tscale * tobj
+
+
+ if (abs(scale_x_y - 1.0) < eps):
+ loss_x = fluid.layers.sigmoid_cross_entropy_with_logits(
+ x, tx) * tscale_tobj
+ loss_x = fluid.layers.reduce_sum(loss_x, dim=[1, 2, 3])
+ loss_y = fluid.layers.sigmoid_cross_entropy_with_logits(
+ y, ty) * tscale_tobj
+ loss_y = fluid.layers.reduce_sum(loss_y, dim=[1, 2, 3])
+ else:
+ dx = scale_x_y * fluid.layers.sigmoid(x) - 0.5 * (scale_x_y - 1.0)
+ dy = scale_x_y * fluid.layers.sigmoid(y) - 0.5 * (scale_x_y - 1.0)
+ loss_x = fluid.layers.abs(dx - tx) * tscale_tobj
+ loss_x = fluid.layers.reduce_sum(loss_x, dim=[1, 2, 3])
+ loss_y = fluid.layers.abs(dy - ty) * tscale_tobj
+ loss_y = fluid.layers.reduce_sum(loss_y, dim=[1, 2, 3])
+
+    # NOTE: we refined the loss function of (w, h) into an L1 loss
+ loss_w = fluid.layers.abs(w - tw) * tscale_tobj
+ loss_w = fluid.layers.reduce_sum(loss_w, dim=[1, 2, 3])
+ loss_h = fluid.layers.abs(h - th) * tscale_tobj
+ loss_h = fluid.layers.reduce_sum(loss_h, dim=[1, 2, 3])
+
+ loss_obj_pos, loss_obj_neg = _calc_obj_loss(
+ output, obj, tobj, gt_box, batch_size, anchors, num_classes, downsample,
+ ignore_thresh, scale_x_y)
+
+ loss_cls = fluid.layers.sigmoid_cross_entropy_with_logits(cls, tcls)
+ loss_cls = fluid.layers.elementwise_mul(loss_cls, tobj, axis=0)
+ loss_cls = fluid.layers.reduce_sum(loss_cls, dim=[1, 2, 3, 4])
+
+ loss_xys = fluid.layers.reduce_mean(loss_x + loss_y)
+ loss_whs = fluid.layers.reduce_mean(loss_w + loss_h)
+ loss_objs = fluid.layers.reduce_mean(loss_obj_pos + loss_obj_neg)
+ loss_clss = fluid.layers.reduce_mean(loss_cls)
+
+ losses_all = {
+ "loss_xy": fluid.layers.sum(loss_xys),
+ "loss_wh": fluid.layers.sum(loss_whs),
+ "loss_loc": fluid.layers.sum(loss_xys) + fluid.layers.sum(loss_whs),
+ "loss_obj": fluid.layers.sum(loss_objs),
+ "loss_cls": fluid.layers.sum(loss_clss),
+ }
+ return losses_all, x, y, tx, ty
+
+
+def gt2yolotarget(gt_bbox, gt_class, gt_score, anchors, mask, num_classes, size,
+ stride):
+ grid_h, grid_w = size
+ h, w = grid_h * stride, grid_w * stride
+ an_hw = np.array(anchors) / np.array([[w, h]])
+ target = np.zeros(
+ (len(mask), 6 + num_classes, grid_h, grid_w), dtype=np.float32)
+ for b in range(gt_bbox.shape[0]):
+ gx, gy, gw, gh = gt_bbox[b, :]
+ cls = gt_class[b]
+ score = gt_score[b]
+ if gw <= 0. or gh <= 0. or score <= 0.:
+ continue
+
+ # find best match anchor index
+ best_iou = 0.
+ best_idx = -1
+ for an_idx in range(an_hw.shape[0]):
+ iou = jaccard_overlap([0., 0., gw, gh],
+ [0., 0., an_hw[an_idx, 0], an_hw[an_idx, 1]])
+ if iou > best_iou:
+ best_iou = iou
+ best_idx = an_idx
+
+ gi = int(gx * grid_w)
+ gj = int(gy * grid_h)
+
+        # the gt box should be regressed in this layer if its best-matching
+        # anchor index is in this layer's anchor mask
+ if best_idx in mask:
+ best_n = mask.index(best_idx)
+
+ # x, y, w, h, scale
+ target[best_n, 0, gj, gi] = gx * grid_w - gi
+ target[best_n, 1, gj, gi] = gy * grid_h - gj
+ target[best_n, 2, gj, gi] = np.log(gw * w / anchors[best_idx][0])
+ target[best_n, 3, gj, gi] = np.log(gh * h / anchors[best_idx][1])
+ target[best_n, 4, gj, gi] = 2.0 - gw * gh
+
+            # objectness records gt_score
+ # if target[best_n, 5, gj, gi] > 0:
+ # print('find 1 duplicate')
+ target[best_n, 5, gj, gi] = score
+
+ # classification
+ target[best_n, 6 + cls, gj, gi] = 1.
+
+ return target
+
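+# A minimal, hypothetical sketch of how gt2yolotarget assigns one normalized
+# gt box to a grid cell (assumed values, not taken from the original tests):
+#   anchors = [[10, 13], [16, 30], [33, 23]]
+#   mask = [0, 1, 2]
+#   gt_bbox = np.array([[0.5, 0.5, 0.2, 0.3]])  # (cx, cy, w, h), normalized
+#   gt_class, gt_score = np.array([0]), np.array([1.0])
+#   target = gt2yolotarget(gt_bbox, gt_class, gt_score, anchors, mask,
+#                          num_classes=80, size=(13, 13), stride=32)
+#   # the box lands in grid cell (gi, gj) = (6, 6); target[n, 0:2, 6, 6]
+#   # holds the sub-cell offsets and target[n, 5, 6, 6] holds gt_score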
+
+class TestYolov3LossOp(unittest.TestCase):
+ def setUp(self):
+ self.initTestCase()
+ x = np.random.uniform(0, 1, self.x_shape).astype('float64')
+ gtbox = np.random.random(size=self.gtbox_shape).astype('float64')
+ gtlabel = np.random.randint(0, self.class_num, self.gtbox_shape[:2])
+ gtmask = np.random.randint(0, 2, self.gtbox_shape[:2])
+ gtbox = gtbox * gtmask[:, :, np.newaxis]
+ gtlabel = gtlabel * gtmask
+
+ gtscore = np.ones(self.gtbox_shape[:2]).astype('float64')
+ if self.gtscore:
+ gtscore = np.random.random(self.gtbox_shape[:2]).astype('float64')
+
+ target = []
+ for box, label, score in zip(gtbox, gtlabel, gtscore):
+ target.append(
+ gt2yolotarget(box, label, score, self.anchors, self.anchor_mask,
+ self.class_num, (self.h, self.w
+ ), self.downsample_ratio))
+
+ self.target = np.array(target).astype('float64')
+
+ self.mask_anchors = []
+ for i in self.anchor_mask:
+ self.mask_anchors.extend(self.anchors[i])
+ self.x = x
+ self.gtbox = gtbox
+ self.gtlabel = gtlabel
+ self.gtscore = gtscore
+
+ def initTestCase(self):
+ self.b = 8
+ self.h = 19
+ self.w = 19
+ self.anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
+ [59, 119], [116, 90], [156, 198], [373, 326]]
+ self.anchor_mask = [6, 7, 8]
+ self.na = len(self.anchor_mask)
+ self.class_num = 80
+ self.ignore_thresh = 0.7
+ self.downsample_ratio = 32
+ self.x_shape = (self.b, len(self.anchor_mask) * (5 + self.class_num),
+ self.h, self.w)
+ self.gtbox_shape = (self.b, 40, 4)
+ self.gtscore = True
+ self.use_label_smooth = False
+ self.scale_x_y = 1.
+
+ def test_loss(self):
+ x, gtbox, gtlabel, gtscore, target = self.x, self.gtbox, self.gtlabel, self.gtscore, self.target
+ yolo_loss = YOLOv3Loss(
+ ignore_thresh=self.ignore_thresh,
+ label_smooth=self.use_label_smooth,
+ num_classes=self.class_num,
+ downsample=self.downsample_ratio,
+ scale_x_y=self.scale_x_y)
+ x = paddle.to_tensor(x.astype(np.float32))
+ gtbox = paddle.to_tensor(gtbox.astype(np.float32))
+ gtlabel = paddle.to_tensor(gtlabel.astype(np.float32))
+ gtscore = paddle.to_tensor(gtscore.astype(np.float32))
+ t = paddle.to_tensor(target.astype(np.float32))
+ anchor = [self.anchors[i] for i in self.anchor_mask]
+ (yolo_loss1, px, py, tx, ty) = fine_grained_loss(
+ output=x,
+ target=t,
+ gt_box=gtbox,
+ batch_size=self.b,
+ num_classes=self.class_num,
+ anchors=self.mask_anchors,
+ ignore_thresh=self.ignore_thresh,
+ downsample=self.downsample_ratio,
+ scale_x_y=self.scale_x_y)
+ yolo_loss2 = yolo_loss.yolov3_loss(
+ x, t, gtbox, anchor, self.downsample_ratio, self.scale_x_y)
+ for k in yolo_loss2:
+ self.assertAlmostEqual(
+ yolo_loss1[k].numpy()[0],
+ yolo_loss2[k].numpy()[0],
+ delta=1e-2,
+ msg=k)
+
+
+class TestYolov3LossNoGTScore(TestYolov3LossOp):
+ def initTestCase(self):
+ self.b = 1
+ self.h = 76
+ self.w = 76
+ self.anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
+ [59, 119], [116, 90], [156, 198], [373, 326]]
+ self.anchor_mask = [0, 1, 2]
+ self.na = len(self.anchor_mask)
+ self.class_num = 80
+ self.ignore_thresh = 0.7
+ self.downsample_ratio = 8
+ self.x_shape = (self.b, len(self.anchor_mask) * (5 + self.class_num),
+ self.h, self.w)
+ self.gtbox_shape = (self.b, 40, 4)
+ self.gtscore = False
+ self.use_label_smooth = False
+ self.scale_x_y = 1.
+
+
+class TestYolov3LossWithScaleXY(TestYolov3LossOp):
+ def initTestCase(self):
+ self.b = 5
+ self.h = 38
+ self.w = 38
+ self.anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
+ [59, 119], [116, 90], [156, 198], [373, 326]]
+ self.anchor_mask = [3, 4, 5]
+ self.na = len(self.anchor_mask)
+ self.class_num = 80
+ self.ignore_thresh = 0.7
+ self.downsample_ratio = 16
+ self.x_shape = (self.b, len(self.anchor_mask) * (5 + self.class_num),
+ self.h, self.w)
+ self.gtbox_shape = (self.b, 40, 4)
+ self.gtscore = True
+ self.use_label_smooth = False
+ self.scale_x_y = 1.2
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/__init__.py
new file mode 100644
index 000000000..4aed815d7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/__init__.py
@@ -0,0 +1,25 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import detr_transformer
+from . import utils
+from . import matchers
+from . import position_encoding
+from . import deformable_transformer
+
+from .detr_transformer import *
+from .utils import *
+from .matchers import *
+from .position_encoding import *
+from .deformable_transformer import *
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/deformable_transformer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/deformable_transformer.py
new file mode 100644
index 000000000..0c2089a8b
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/deformable_transformer.py
@@ -0,0 +1,517 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Modified from Deformable-DETR (https://github.com/fundamentalvision/Deformable-DETR)
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from paddle import ParamAttr
+
+from ppdet.core.workspace import register
+from ..layers import MultiHeadAttention
+from .position_encoding import PositionEmbedding
+from .utils import _get_clones, deformable_attention_core_func
+from ..initializer import linear_init_, constant_, xavier_uniform_, normal_
+
+__all__ = ['DeformableTransformer']
+
+
+class MSDeformableAttention(nn.Layer):
+ def __init__(self,
+ embed_dim=256,
+ num_heads=8,
+ num_levels=4,
+ num_points=4,
+ lr_mult=0.1):
+ """
+ Multi-Scale Deformable Attention Module
+ """
+ super(MSDeformableAttention, self).__init__()
+ self.embed_dim = embed_dim
+ self.num_heads = num_heads
+ self.num_levels = num_levels
+ self.num_points = num_points
+ self.total_points = num_heads * num_levels * num_points
+
+ self.head_dim = embed_dim // num_heads
+ assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
+
+ self.sampling_offsets = nn.Linear(
+ embed_dim,
+ self.total_points * 2,
+ weight_attr=ParamAttr(learning_rate=lr_mult),
+ bias_attr=ParamAttr(learning_rate=lr_mult))
+
+ self.attention_weights = nn.Linear(embed_dim, self.total_points)
+ self.value_proj = nn.Linear(embed_dim, embed_dim)
+ self.output_proj = nn.Linear(embed_dim, embed_dim)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ # sampling_offsets
+ constant_(self.sampling_offsets.weight)
+ thetas = paddle.arange(
+ self.num_heads,
+ dtype=paddle.float32) * (2.0 * math.pi / self.num_heads)
+ grid_init = paddle.stack([thetas.cos(), thetas.sin()], -1)
+ grid_init = grid_init / grid_init.abs().max(-1, keepdim=True)
+ grid_init = grid_init.reshape([self.num_heads, 1, 1, 2]).tile(
+ [1, self.num_levels, self.num_points, 1])
+ scaling = paddle.arange(
+ 1, self.num_points + 1,
+ dtype=paddle.float32).reshape([1, 1, -1, 1])
+ grid_init *= scaling
+ self.sampling_offsets.bias.set_value(grid_init.flatten())
+ # attention_weights
+ constant_(self.attention_weights.weight)
+ constant_(self.attention_weights.bias)
+ # proj
+ xavier_uniform_(self.value_proj.weight)
+ constant_(self.value_proj.bias)
+ xavier_uniform_(self.output_proj.weight)
+ constant_(self.output_proj.bias)
+
+ def forward(self,
+ query,
+ reference_points,
+ value,
+ value_spatial_shapes,
+ value_mask=None):
+ """
+ Args:
+ query (Tensor): [bs, query_length, C]
+ reference_points (Tensor): [bs, query_length, n_levels, 2], range in [0, 1], top-left (0,0),
+ bottom-right (1, 1), including padding area
+ value (Tensor): [bs, value_length, C]
+ value_spatial_shapes (Tensor): [n_levels, 2], [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})]
+ value_mask (Tensor): [bs, value_length], True for non-padding elements, False for padding elements
+
+ Returns:
+ output (Tensor): [bs, Length_{query}, C]
+ """
+ bs, Len_q = query.shape[:2]
+ Len_v = value.shape[1]
+ assert int(value_spatial_shapes.prod(1).sum()) == Len_v
+
+ value = self.value_proj(value)
+ if value_mask is not None:
+ value_mask = value_mask.astype(value.dtype).unsqueeze(-1)
+ value *= value_mask
+ value = value.reshape([bs, Len_v, self.num_heads, self.head_dim])
+
+ sampling_offsets = self.sampling_offsets(query).reshape(
+ [bs, Len_q, self.num_heads, self.num_levels, self.num_points, 2])
+ attention_weights = self.attention_weights(query).reshape(
+ [bs, Len_q, self.num_heads, self.num_levels * self.num_points])
+ attention_weights = F.softmax(attention_weights, -1).reshape(
+ [bs, Len_q, self.num_heads, self.num_levels, self.num_points])
+
+ offset_normalizer = value_spatial_shapes.flip([1]).reshape(
+ [1, 1, 1, self.num_levels, 1, 2])
+ sampling_locations = reference_points.reshape([
+ bs, Len_q, 1, self.num_levels, 1, 2
+ ]) + sampling_offsets / offset_normalizer
+
+ output = deformable_attention_core_func(
+ value, value_spatial_shapes, sampling_locations, attention_weights)
+ output = self.output_proj(output)
+
+ return output
+
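+# Shape sketch for MSDeformableAttention (illustrative, assumed sizes): each
+# query attends to num_points sampled locations per head on every value level:
+#   attn = MSDeformableAttention(embed_dim=256, num_heads=8, num_levels=2)
+#   shapes = paddle.to_tensor([[8, 8], [4, 4]], dtype='int64')
+#   value = paddle.rand([2, 80, 256])      # 80 tokens = 8*8 + 4*4
+#   query = paddle.rand([2, 100, 256])
+#   ref = paddle.rand([2, 100, 2, 2])      # [bs, Len_q, n_levels, 2] in [0, 1]
+#   out = attn(query, ref, value, shapes)  # [2, 100, 256]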
+
+class DeformableTransformerEncoderLayer(nn.Layer):
+ def __init__(self,
+ d_model=256,
+ n_head=8,
+ dim_feedforward=1024,
+ dropout=0.1,
+ activation="relu",
+ n_levels=4,
+ n_points=4,
+ weight_attr=None,
+ bias_attr=None):
+ super(DeformableTransformerEncoderLayer, self).__init__()
+ # self attention
+ self.self_attn = MSDeformableAttention(d_model, n_head, n_levels,
+ n_points)
+ self.dropout1 = nn.Dropout(dropout)
+ self.norm1 = nn.LayerNorm(d_model)
+ # ffn
+ self.linear1 = nn.Linear(d_model, dim_feedforward, weight_attr,
+ bias_attr)
+ self.activation = getattr(F, activation)
+ self.dropout2 = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model, weight_attr,
+ bias_attr)
+ self.dropout3 = nn.Dropout(dropout)
+ self.norm2 = nn.LayerNorm(d_model)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.linear1)
+ linear_init_(self.linear2)
+ xavier_uniform_(self.linear1.weight)
+ xavier_uniform_(self.linear2.weight)
+
+ def with_pos_embed(self, tensor, pos):
+ return tensor if pos is None else tensor + pos
+
+ def forward_ffn(self, src):
+ src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
+ src = src + self.dropout3(src2)
+ src = self.norm2(src)
+ return src
+
+ def forward(self,
+ src,
+ reference_points,
+ spatial_shapes,
+ src_mask=None,
+ pos_embed=None):
+ # self attention
+ src2 = self.self_attn(
+ self.with_pos_embed(src, pos_embed), reference_points, src,
+ spatial_shapes, src_mask)
+ src = src + self.dropout1(src2)
+ src = self.norm1(src)
+ # ffn
+ src = self.forward_ffn(src)
+
+ return src
+
+
+class DeformableTransformerEncoder(nn.Layer):
+ def __init__(self, encoder_layer, num_layers):
+ super(DeformableTransformerEncoder, self).__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+
+ @staticmethod
+ def get_reference_points(spatial_shapes, valid_ratios):
+ valid_ratios = valid_ratios.unsqueeze(1)
+ reference_points = []
+ for i, (H, W) in enumerate(spatial_shapes.tolist()):
+ ref_y, ref_x = paddle.meshgrid(
+ paddle.linspace(0.5, H - 0.5, H),
+ paddle.linspace(0.5, W - 0.5, W))
+ ref_y = ref_y.flatten().unsqueeze(0) / (valid_ratios[:, :, i, 1] *
+ H)
+ ref_x = ref_x.flatten().unsqueeze(0) / (valid_ratios[:, :, i, 0] *
+ W)
+ reference_points.append(paddle.stack((ref_x, ref_y), axis=-1))
+ reference_points = paddle.concat(reference_points, 1).unsqueeze(2)
+ reference_points = reference_points * valid_ratios
+ return reference_points
+
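+    # Reference-point sketch (illustrative): for a single 2x2 level with
+    # all-valid ratios, the normalized centers are 0.25 and 0.75:
+    #   get_reference_points(paddle.to_tensor([[2, 2]]), paddle.ones([1, 1, 2]))
+    #   # -> shape [1, 4, 1, 2] with points (0.25, 0.25), (0.75, 0.25), ...
+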
+ def forward(self,
+ src,
+ spatial_shapes,
+ src_mask=None,
+ pos_embed=None,
+ valid_ratios=None):
+ output = src
+ if valid_ratios is None:
+ valid_ratios = paddle.ones(
+ [src.shape[0], spatial_shapes.shape[0], 2])
+ reference_points = self.get_reference_points(spatial_shapes,
+ valid_ratios)
+ for layer in self.layers:
+ output = layer(output, reference_points, spatial_shapes, src_mask,
+ pos_embed)
+
+ return output
+
+
+class DeformableTransformerDecoderLayer(nn.Layer):
+ def __init__(self,
+ d_model=256,
+ n_head=8,
+ dim_feedforward=1024,
+ dropout=0.1,
+ activation="relu",
+ n_levels=4,
+ n_points=4,
+ weight_attr=None,
+ bias_attr=None):
+ super(DeformableTransformerDecoderLayer, self).__init__()
+
+ # self attention
+ self.self_attn = MultiHeadAttention(d_model, n_head, dropout=dropout)
+ self.dropout1 = nn.Dropout(dropout)
+ self.norm1 = nn.LayerNorm(d_model)
+
+ # cross attention
+ self.cross_attn = MSDeformableAttention(d_model, n_head, n_levels,
+ n_points)
+ self.dropout2 = nn.Dropout(dropout)
+ self.norm2 = nn.LayerNorm(d_model)
+
+ # ffn
+ self.linear1 = nn.Linear(d_model, dim_feedforward, weight_attr,
+ bias_attr)
+ self.activation = getattr(F, activation)
+ self.dropout3 = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model, weight_attr,
+ bias_attr)
+ self.dropout4 = nn.Dropout(dropout)
+ self.norm3 = nn.LayerNorm(d_model)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.linear1)
+ linear_init_(self.linear2)
+ xavier_uniform_(self.linear1.weight)
+ xavier_uniform_(self.linear2.weight)
+
+ def with_pos_embed(self, tensor, pos):
+ return tensor if pos is None else tensor + pos
+
+ def forward_ffn(self, tgt):
+ tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))
+ tgt = tgt + self.dropout4(tgt2)
+ tgt = self.norm3(tgt)
+ return tgt
+
+ def forward(self,
+ tgt,
+ reference_points,
+ memory,
+ memory_spatial_shapes,
+ memory_mask=None,
+ query_pos_embed=None):
+ # self attention
+ q = k = self.with_pos_embed(tgt, query_pos_embed)
+ tgt2 = self.self_attn(q, k, value=tgt)
+ tgt = tgt + self.dropout1(tgt2)
+ tgt = self.norm1(tgt)
+
+ # cross attention
+ tgt2 = self.cross_attn(
+ self.with_pos_embed(tgt, query_pos_embed), reference_points, memory,
+ memory_spatial_shapes, memory_mask)
+ tgt = tgt + self.dropout2(tgt2)
+ tgt = self.norm2(tgt)
+
+ # ffn
+ tgt = self.forward_ffn(tgt)
+
+ return tgt
+
+
+class DeformableTransformerDecoder(nn.Layer):
+ def __init__(self, decoder_layer, num_layers, return_intermediate=False):
+ super(DeformableTransformerDecoder, self).__init__()
+ self.layers = _get_clones(decoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.return_intermediate = return_intermediate
+
+ def forward(self,
+ tgt,
+ reference_points,
+ memory,
+ memory_spatial_shapes,
+ memory_mask=None,
+ query_pos_embed=None):
+ output = tgt
+ intermediate = []
+ for lid, layer in enumerate(self.layers):
+ output = layer(output, reference_points, memory,
+ memory_spatial_shapes, memory_mask, query_pos_embed)
+
+ if self.return_intermediate:
+ intermediate.append(output)
+
+ if self.return_intermediate:
+ return paddle.stack(intermediate)
+
+ return output.unsqueeze(0)
+
+
+@register
+class DeformableTransformer(nn.Layer):
+ __shared__ = ['hidden_dim']
+
+ def __init__(self,
+ num_queries=300,
+ position_embed_type='sine',
+ return_intermediate_dec=True,
+ backbone_num_channels=[512, 1024, 2048],
+ num_feature_levels=4,
+ num_encoder_points=4,
+ num_decoder_points=4,
+ hidden_dim=256,
+ nhead=8,
+ num_encoder_layers=6,
+ num_decoder_layers=6,
+ dim_feedforward=1024,
+ dropout=0.1,
+ activation="relu",
+ lr_mult=0.1,
+ weight_attr=None,
+ bias_attr=None):
+ super(DeformableTransformer, self).__init__()
+ assert position_embed_type in ['sine', 'learned'], \
+ f'ValueError: position_embed_type not supported {position_embed_type}!'
+ assert len(backbone_num_channels) <= num_feature_levels
+
+ self.hidden_dim = hidden_dim
+ self.nhead = nhead
+ self.num_feature_levels = num_feature_levels
+
+ encoder_layer = DeformableTransformerEncoderLayer(
+ hidden_dim, nhead, dim_feedforward, dropout, activation,
+ num_feature_levels, num_encoder_points, weight_attr, bias_attr)
+ self.encoder = DeformableTransformerEncoder(encoder_layer,
+ num_encoder_layers)
+
+ decoder_layer = DeformableTransformerDecoderLayer(
+ hidden_dim, nhead, dim_feedforward, dropout, activation,
+ num_feature_levels, num_decoder_points, weight_attr, bias_attr)
+ self.decoder = DeformableTransformerDecoder(
+ decoder_layer, num_decoder_layers, return_intermediate_dec)
+
+ self.level_embed = nn.Embedding(num_feature_levels, hidden_dim)
+ self.tgt_embed = nn.Embedding(num_queries, hidden_dim)
+ self.query_pos_embed = nn.Embedding(num_queries, hidden_dim)
+
+ self.reference_points = nn.Linear(
+ hidden_dim,
+ 2,
+ weight_attr=ParamAttr(learning_rate=lr_mult),
+ bias_attr=ParamAttr(learning_rate=lr_mult))
+
+ self.input_proj = nn.LayerList()
+ for in_channels in backbone_num_channels:
+ self.input_proj.append(
+ nn.Sequential(
+ nn.Conv2D(
+ in_channels,
+ hidden_dim,
+ kernel_size=1,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr),
+ nn.GroupNorm(32, hidden_dim)))
+ in_channels = backbone_num_channels[-1]
+ for _ in range(num_feature_levels - len(backbone_num_channels)):
+ self.input_proj.append(
+ nn.Sequential(
+ nn.Conv2D(
+ in_channels,
+ hidden_dim,
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ weight_attr=weight_attr,
+ bias_attr=bias_attr),
+ nn.GroupNorm(32, hidden_dim)))
+ in_channels = hidden_dim
+
+ self.position_embedding = PositionEmbedding(
+ hidden_dim // 2,
+ normalize=True if position_embed_type == 'sine' else False,
+ embed_type=position_embed_type,
+ offset=-0.5)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ normal_(self.level_embed.weight)
+ normal_(self.tgt_embed.weight)
+ normal_(self.query_pos_embed.weight)
+ xavier_uniform_(self.reference_points.weight)
+ constant_(self.reference_points.bias)
+ for l in self.input_proj:
+ xavier_uniform_(l[0].weight)
+ constant_(l[0].bias)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {'backbone_num_channels': [i.channels for i in input_shape], }
+
+ def _get_valid_ratio(self, mask):
+ mask = mask.astype(paddle.float32)
+ _, H, W = mask.shape
+ valid_ratio_h = paddle.sum(mask[:, :, 0], 1) / H
+ valid_ratio_w = paddle.sum(mask[:, 0, :], 1) / W
+ valid_ratio = paddle.stack([valid_ratio_w, valid_ratio_h], -1)
+ return valid_ratio
+
+ def forward(self, src_feats, src_mask=None):
+ srcs = []
+ for i in range(len(src_feats)):
+ srcs.append(self.input_proj[i](src_feats[i]))
+ if self.num_feature_levels > len(srcs):
+ len_srcs = len(srcs)
+ for i in range(len_srcs, self.num_feature_levels):
+ if i == len_srcs:
+ srcs.append(self.input_proj[i](src_feats[-1]))
+ else:
+ srcs.append(self.input_proj[i](srcs[-1]))
+ src_flatten = []
+ mask_flatten = []
+ lvl_pos_embed_flatten = []
+ spatial_shapes = []
+ valid_ratios = []
+ for level, src in enumerate(srcs):
+ bs, c, h, w = src.shape
+ spatial_shapes.append([h, w])
+ src = src.flatten(2).transpose([0, 2, 1])
+ src_flatten.append(src)
+ if src_mask is not None:
+ mask = F.interpolate(
+ src_mask.unsqueeze(0).astype(src.dtype),
+ size=(h, w))[0].astype('bool')
+ else:
+ mask = paddle.ones([bs, h, w], dtype='bool')
+ valid_ratios.append(self._get_valid_ratio(mask))
+ pos_embed = self.position_embedding(mask).flatten(2).transpose(
+ [0, 2, 1])
+ lvl_pos_embed = pos_embed + self.level_embed.weight[level].reshape(
+ [1, 1, -1])
+ lvl_pos_embed_flatten.append(lvl_pos_embed)
+ mask = mask.astype(src.dtype).flatten(1)
+ mask_flatten.append(mask)
+ src_flatten = paddle.concat(src_flatten, 1)
+ mask_flatten = paddle.concat(mask_flatten, 1)
+ lvl_pos_embed_flatten = paddle.concat(lvl_pos_embed_flatten, 1)
+ # [l, 2]
+ spatial_shapes = paddle.to_tensor(spatial_shapes, dtype='int64')
+ # [b, l, 2]
+ valid_ratios = paddle.stack(valid_ratios, 1)
+
+ # encoder
+ memory = self.encoder(src_flatten, spatial_shapes, mask_flatten,
+ lvl_pos_embed_flatten, valid_ratios)
+
+ # prepare input for decoder
+ bs, _, c = memory.shape
+ query_embed = self.query_pos_embed.weight.unsqueeze(0).tile([bs, 1, 1])
+ tgt = self.tgt_embed.weight.unsqueeze(0).tile([bs, 1, 1])
+ reference_points = F.sigmoid(self.reference_points(query_embed))
+ reference_points_input = reference_points.unsqueeze(
+ 2) * valid_ratios.unsqueeze(1)
+
+ # decoder
+ hs = self.decoder(tgt, reference_points_input, memory, spatial_shapes,
+ mask_flatten, query_embed)
+
+ return (hs, memory, reference_points)
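+
+# Forward sketch (illustrative, assumed feature-map sizes): given three
+# backbone levels, the transformer flattens them into multi-scale tokens,
+# encodes them, and decodes num_queries object queries:
+#   t = DeformableTransformer(backbone_num_channels=[512, 1024, 2048])
+#   feats = [paddle.rand([2, 512, 32, 32]),
+#            paddle.rand([2, 1024, 16, 16]),
+#            paddle.rand([2, 2048, 8, 8])]
+#   hs, memory, ref_points = t(feats)
+#   # hs: [6, 2, 300, 256] per-layer decoder outputs; ref_points: [2, 300, 2]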
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/detr_transformer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/detr_transformer.py
new file mode 100644
index 000000000..bd513772d
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/detr_transformer.py
@@ -0,0 +1,353 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Modified from DETR (https://github.com/facebookresearch/detr)
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ppdet.core.workspace import register
+from ..layers import MultiHeadAttention, _convert_attention_mask
+from .position_encoding import PositionEmbedding
+from .utils import _get_clones
+from ..initializer import linear_init_, conv_init_, xavier_uniform_, normal_
+
+__all__ = ['DETRTransformer']
+
+
+class TransformerEncoderLayer(nn.Layer):
+ def __init__(self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ attn_dropout=None,
+ act_dropout=None,
+ normalize_before=False):
+ super(TransformerEncoderLayer, self).__init__()
+ attn_dropout = dropout if attn_dropout is None else attn_dropout
+ act_dropout = dropout if act_dropout is None else act_dropout
+ self.normalize_before = normalize_before
+
+ self.self_attn = MultiHeadAttention(d_model, nhead, attn_dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(act_dropout, mode="upscale_in_train")
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout, mode="upscale_in_train")
+ self.dropout2 = nn.Dropout(dropout, mode="upscale_in_train")
+ self.activation = getattr(F, activation)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.linear1)
+ linear_init_(self.linear2)
+
+ @staticmethod
+ def with_pos_embed(tensor, pos_embed):
+ return tensor if pos_embed is None else tensor + pos_embed
+
+ def forward(self, src, src_mask=None, pos_embed=None):
+ src_mask = _convert_attention_mask(src_mask, src.dtype)
+
+ residual = src
+ if self.normalize_before:
+ src = self.norm1(src)
+ q = k = self.with_pos_embed(src, pos_embed)
+ src = self.self_attn(q, k, value=src, attn_mask=src_mask)
+
+ src = residual + self.dropout1(src)
+ if not self.normalize_before:
+ src = self.norm1(src)
+
+ residual = src
+ if self.normalize_before:
+ src = self.norm2(src)
+ src = self.linear2(self.dropout(self.activation(self.linear1(src))))
+ src = residual + self.dropout2(src)
+ if not self.normalize_before:
+ src = self.norm2(src)
+ return src
+
+
+class TransformerEncoder(nn.Layer):
+ def __init__(self, encoder_layer, num_layers, norm=None):
+ super(TransformerEncoder, self).__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+
+ def forward(self, src, src_mask=None, pos_embed=None):
+ src_mask = _convert_attention_mask(src_mask, src.dtype)
+
+ output = src
+ for layer in self.layers:
+ output = layer(output, src_mask=src_mask, pos_embed=pos_embed)
+
+ if self.norm is not None:
+ output = self.norm(output)
+
+ return output
+
+
+class TransformerDecoderLayer(nn.Layer):
+ def __init__(self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ attn_dropout=None,
+ act_dropout=None,
+ normalize_before=False):
+ super(TransformerDecoderLayer, self).__init__()
+ attn_dropout = dropout if attn_dropout is None else attn_dropout
+ act_dropout = dropout if act_dropout is None else act_dropout
+ self.normalize_before = normalize_before
+
+ self.self_attn = MultiHeadAttention(d_model, nhead, attn_dropout)
+ self.cross_attn = MultiHeadAttention(d_model, nhead, attn_dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(act_dropout, mode="upscale_in_train")
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.norm3 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout, mode="upscale_in_train")
+ self.dropout2 = nn.Dropout(dropout, mode="upscale_in_train")
+ self.dropout3 = nn.Dropout(dropout, mode="upscale_in_train")
+ self.activation = getattr(F, activation)
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ linear_init_(self.linear1)
+ linear_init_(self.linear2)
+
+ @staticmethod
+ def with_pos_embed(tensor, pos_embed):
+ return tensor if pos_embed is None else tensor + pos_embed
+
+ def forward(self,
+ tgt,
+ memory,
+ tgt_mask=None,
+ memory_mask=None,
+ pos_embed=None,
+ query_pos_embed=None):
+ tgt_mask = _convert_attention_mask(tgt_mask, tgt.dtype)
+ memory_mask = _convert_attention_mask(memory_mask, memory.dtype)
+
+ residual = tgt
+ if self.normalize_before:
+ tgt = self.norm1(tgt)
+ q = k = self.with_pos_embed(tgt, query_pos_embed)
+ tgt = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask)
+ tgt = residual + self.dropout1(tgt)
+ if not self.normalize_before:
+ tgt = self.norm1(tgt)
+
+ residual = tgt
+ if self.normalize_before:
+ tgt = self.norm2(tgt)
+ q = self.with_pos_embed(tgt, query_pos_embed)
+ k = self.with_pos_embed(memory, pos_embed)
+ tgt = self.cross_attn(q, k, value=memory, attn_mask=memory_mask)
+ tgt = residual + self.dropout2(tgt)
+ if not self.normalize_before:
+ tgt = self.norm2(tgt)
+
+ residual = tgt
+ if self.normalize_before:
+ tgt = self.norm3(tgt)
+ tgt = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
+ tgt = residual + self.dropout3(tgt)
+ if not self.normalize_before:
+ tgt = self.norm3(tgt)
+ return tgt
+
+
+class TransformerDecoder(nn.Layer):
+ def __init__(self,
+ decoder_layer,
+ num_layers,
+ norm=None,
+ return_intermediate=False):
+ super(TransformerDecoder, self).__init__()
+ self.layers = _get_clones(decoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+ self.return_intermediate = return_intermediate
+
+ def forward(self,
+ tgt,
+ memory,
+ tgt_mask=None,
+ memory_mask=None,
+ pos_embed=None,
+ query_pos_embed=None):
+ tgt_mask = _convert_attention_mask(tgt_mask, tgt.dtype)
+ memory_mask = _convert_attention_mask(memory_mask, memory.dtype)
+
+ output = tgt
+ intermediate = []
+ for layer in self.layers:
+ output = layer(
+ output,
+ memory,
+ tgt_mask=tgt_mask,
+ memory_mask=memory_mask,
+ pos_embed=pos_embed,
+ query_pos_embed=query_pos_embed)
+ if self.return_intermediate:
+ intermediate.append(self.norm(output))
+
+ if self.norm is not None:
+ output = self.norm(output)
+
+ if self.return_intermediate:
+ return paddle.stack(intermediate)
+
+ return output.unsqueeze(0)
+
+
+@register
+class DETRTransformer(nn.Layer):
+ __shared__ = ['hidden_dim']
+
+ def __init__(self,
+ num_queries=100,
+ position_embed_type='sine',
+ return_intermediate_dec=True,
+ backbone_num_channels=2048,
+ hidden_dim=256,
+ nhead=8,
+ num_encoder_layers=6,
+ num_decoder_layers=6,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ attn_dropout=None,
+ act_dropout=None,
+ normalize_before=False):
+ super(DETRTransformer, self).__init__()
+ assert position_embed_type in ['sine', 'learned'],\
+ f'ValueError: position_embed_type not supported {position_embed_type}!'
+ self.hidden_dim = hidden_dim
+ self.nhead = nhead
+
+ encoder_layer = TransformerEncoderLayer(
+ hidden_dim, nhead, dim_feedforward, dropout, activation,
+ attn_dropout, act_dropout, normalize_before)
+ encoder_norm = nn.LayerNorm(hidden_dim) if normalize_before else None
+ self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers,
+ encoder_norm)
+
+ decoder_layer = TransformerDecoderLayer(
+ hidden_dim, nhead, dim_feedforward, dropout, activation,
+ attn_dropout, act_dropout, normalize_before)
+ decoder_norm = nn.LayerNorm(hidden_dim)
+ self.decoder = TransformerDecoder(
+ decoder_layer,
+ num_decoder_layers,
+ decoder_norm,
+ return_intermediate=return_intermediate_dec)
+
+ self.input_proj = nn.Conv2D(
+ backbone_num_channels, hidden_dim, kernel_size=1)
+ self.query_pos_embed = nn.Embedding(num_queries, hidden_dim)
+ self.position_embedding = PositionEmbedding(
+ hidden_dim // 2,
+ normalize=True if position_embed_type == 'sine' else False,
+ embed_type=position_embed_type)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ xavier_uniform_(p)
+ conv_init_(self.input_proj)
+ normal_(self.query_pos_embed.weight)
+
+ @classmethod
+ def from_config(cls, cfg, input_shape):
+ return {
+ 'backbone_num_channels': [i.channels for i in input_shape][-1],
+ }
+
+ def forward(self, src, src_mask=None):
+ r"""
+ Applies a Transformer model on the inputs.
+
+ Parameters:
+ src (List(Tensor)): Backbone feature maps with shape [[bs, c, h, w]].
+ src_mask (Tensor, optional): A tensor used in multi-head attention
+                to prevent attention to some unwanted positions, usually the
+                paddings or the subsequent positions. It is a tensor with shape
+                [bs, H, W]. When the data type is bool, the unwanted positions
+                have `False` values and the others have `True` values. When the
+                data type is int, the unwanted positions have 0 values and the
+                others have 1 values. When the data type is float, the unwanted
+                positions have `-INF` values and the others have 0 values. It
+                can be None when no positions need to be masked out.
+                Default None.
+
+ Returns:
+ output (Tensor): [num_levels, batch_size, num_queries, hidden_dim]
+ memory (Tensor): [batch_size, hidden_dim, h, w]
+ """
+ # use last level feature map
+ src_proj = self.input_proj(src[-1])
+ bs, c, h, w = src_proj.shape
+ # flatten [B, C, H, W] to [B, HxW, C]
+ src_flatten = src_proj.flatten(2).transpose([0, 2, 1])
+ if src_mask is not None:
+ src_mask = F.interpolate(
+ src_mask.unsqueeze(0).astype(src_flatten.dtype),
+ size=(h, w))[0].astype('bool')
+ else:
+ src_mask = paddle.ones([bs, h, w], dtype='bool')
+ pos_embed = self.position_embedding(src_mask).flatten(2).transpose(
+ [0, 2, 1])
+
+ src_mask = _convert_attention_mask(src_mask, src_flatten.dtype)
+ src_mask = src_mask.reshape([bs, 1, 1, -1])
+
+ memory = self.encoder(
+ src_flatten, src_mask=src_mask, pos_embed=pos_embed)
+
+ query_pos_embed = self.query_pos_embed.weight.unsqueeze(0).tile(
+ [bs, 1, 1])
+ tgt = paddle.zeros_like(query_pos_embed)
+ output = self.decoder(
+ tgt,
+ memory,
+ memory_mask=src_mask,
+ pos_embed=pos_embed,
+ query_pos_embed=query_pos_embed)
+
+ return (output, memory.transpose([0, 2, 1]).reshape([bs, c, h, w]),
+ src_proj, src_mask.reshape([bs, 1, 1, h, w]))
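+
+# Forward sketch (illustrative, assumed sizes): with the default config and a
+# single backbone level, the decoder returns all intermediate layers:
+#   t = DETRTransformer(num_queries=100, backbone_num_channels=2048)
+#   feats = [paddle.rand([2, 2048, 16, 16])]
+#   out, memory, src_proj, mask = t(feats)
+#   # out: [6, 2, 100, 256]; memory: [2, 256, 16, 16]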
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/matchers.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/matchers.py
new file mode 100644
index 000000000..794d86328
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/matchers.py
@@ -0,0 +1,126 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Modified from DETR (https://github.com/facebookresearch/detr)
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+from scipy.optimize import linear_sum_assignment
+
+from ppdet.core.workspace import register, serializable
+from ..losses.iou_loss import GIoULoss
+from .utils import bbox_cxcywh_to_xyxy
+
+__all__ = ['HungarianMatcher']
+
+
+@register
+@serializable
+class HungarianMatcher(nn.Layer):
+ __shared__ = ['use_focal_loss']
+
+ def __init__(self,
+ matcher_coeff={'class': 1,
+ 'bbox': 5,
+ 'giou': 2},
+ use_focal_loss=False,
+ alpha=0.25,
+ gamma=2.0):
+ r"""
+ Args:
+ matcher_coeff (dict): The coefficient of hungarian matcher cost.
+ """
+ super(HungarianMatcher, self).__init__()
+ self.matcher_coeff = matcher_coeff
+ self.use_focal_loss = use_focal_loss
+ self.alpha = alpha
+ self.gamma = gamma
+
+ self.giou_loss = GIoULoss()
+
+ def forward(self, boxes, logits, gt_bbox, gt_class):
+ r"""
+ Args:
+ boxes (Tensor): [b, query, 4]
+ logits (Tensor): [b, query, num_classes]
+ gt_bbox (List(Tensor)): list[[n, 4]]
+ gt_class (List(Tensor)): list[[n, 1]]
+
+ Returns:
+ A list of size batch_size, containing tuples of (index_i, index_j) where:
+ - index_i is the indices of the selected predictions (in order)
+ - index_j is the indices of the corresponding selected targets (in order)
+ For each batch element, it holds:
+ len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
+ """
+ bs, num_queries = boxes.shape[:2]
+
+ num_gts = sum(len(a) for a in gt_class)
+ if num_gts == 0:
+ return [(paddle.to_tensor(
+ [], dtype=paddle.int64), paddle.to_tensor(
+ [], dtype=paddle.int64)) for _ in range(bs)]
+
+ # We flatten to compute the cost matrices in a batch
+ # [batch_size * num_queries, num_classes]
+ out_prob = F.sigmoid(logits.flatten(
+ 0, 1)) if self.use_focal_loss else F.softmax(logits.flatten(0, 1))
+ # [batch_size * num_queries, 4]
+ out_bbox = boxes.flatten(0, 1)
+
+ # Also concat the target labels and boxes
+ tgt_ids = paddle.concat(gt_class).flatten()
+ tgt_bbox = paddle.concat(gt_bbox)
+
+ # Compute the classification cost
+ if self.use_focal_loss:
+ neg_cost_class = (1 - self.alpha) * (out_prob**self.gamma) * (-(
+ 1 - out_prob + 1e-8).log())
+ pos_cost_class = self.alpha * (
+ (1 - out_prob)**self.gamma) * (-(out_prob + 1e-8).log())
+ cost_class = paddle.gather(
+ pos_cost_class, tgt_ids, axis=1) - paddle.gather(
+ neg_cost_class, tgt_ids, axis=1)
+ else:
+ cost_class = -paddle.gather(out_prob, tgt_ids, axis=1)
+
+ # Compute the L1 cost between boxes
+ cost_bbox = (
+ out_bbox.unsqueeze(1) - tgt_bbox.unsqueeze(0)).abs().sum(-1)
+
+        # Compute the GIoU cost between boxes
+ cost_giou = self.giou_loss(
+ bbox_cxcywh_to_xyxy(out_bbox.unsqueeze(1)),
+ bbox_cxcywh_to_xyxy(tgt_bbox.unsqueeze(0))).squeeze(-1)
+
+ # Final cost matrix
+ C = self.matcher_coeff['class'] * cost_class + self.matcher_coeff['bbox'] * cost_bbox + \
+ self.matcher_coeff['giou'] * cost_giou
+ C = C.reshape([bs, num_queries, -1])
+ C = [a.squeeze(0) for a in C.chunk(bs)]
+
+ sizes = [a.shape[0] for a in gt_bbox]
+ indices = [
+ linear_sum_assignment(c.split(sizes, -1)[i].numpy())
+ for i, c in enumerate(C)
+ ]
+ return [(paddle.to_tensor(
+ i, dtype=paddle.int64), paddle.to_tensor(
+ j, dtype=paddle.int64)) for i, j in indices]
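+
+# Usage sketch (illustrative random tensors; boxes are normalized cxcywh):
+# the matcher returns, per image, (query index, gt index) pairs minimizing
+# the combined class/L1/GIoU cost via scipy's linear_sum_assignment:
+#   matcher = HungarianMatcher()
+#   boxes = paddle.rand([2, 100, 4])     # [bs, num_queries, 4]
+#   logits = paddle.rand([2, 100, 80])   # [bs, num_queries, num_classes]
+#   gt_bbox = [paddle.rand([3, 4]), paddle.rand([5, 4])]
+#   gt_class = [paddle.randint(0, 80, [3, 1]), paddle.randint(0, 80, [5, 1])]
+#   indices = matcher(boxes, logits, gt_bbox, gt_class)
+#   # len(indices) == 2; indices[0][0].shape == [3] == indices[0][1].shape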
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/position_encoding.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/position_encoding.py
new file mode 100644
index 000000000..e54165918
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/position_encoding.py
@@ -0,0 +1,108 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Modified from DETR (https://github.com/facebookresearch/detr)
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import paddle
+import paddle.nn as nn
+
+from ppdet.core.workspace import register, serializable
+
+
+@register
+@serializable
+class PositionEmbedding(nn.Layer):
+ def __init__(self,
+ num_pos_feats=128,
+ temperature=10000,
+ normalize=True,
+ scale=None,
+ embed_type='sine',
+ num_embeddings=50,
+ offset=0.):
+ super(PositionEmbedding, self).__init__()
+ assert embed_type in ['sine', 'learned']
+
+ self.embed_type = embed_type
+ self.offset = offset
+ self.eps = 1e-6
+ if self.embed_type == 'sine':
+ self.num_pos_feats = num_pos_feats
+ self.temperature = temperature
+ self.normalize = normalize
+ if scale is not None and normalize is False:
+ raise ValueError("normalize should be True if scale is passed")
+ if scale is None:
+ scale = 2 * math.pi
+ self.scale = scale
+ elif self.embed_type == 'learned':
+ self.row_embed = nn.Embedding(num_embeddings, num_pos_feats)
+ self.col_embed = nn.Embedding(num_embeddings, num_pos_feats)
+ else:
+ raise ValueError(f"not supported {self.embed_type}")
+
+ def forward(self, mask):
+ """
+ Args:
+ mask (Tensor): [B, H, W]
+ Returns:
+ pos (Tensor): [B, C, H, W]
+ """
+ assert mask.dtype == paddle.bool
+ if self.embed_type == 'sine':
+ mask = mask.astype('float32')
+ y_embed = mask.cumsum(1, dtype='float32')
+ x_embed = mask.cumsum(2, dtype='float32')
+ if self.normalize:
+ y_embed = (y_embed + self.offset) / (
+ y_embed[:, -1:, :] + self.eps) * self.scale
+ x_embed = (x_embed + self.offset) / (
+ x_embed[:, :, -1:] + self.eps) * self.scale
+
+ dim_t = 2 * (paddle.arange(self.num_pos_feats) //
+ 2).astype('float32')
+ dim_t = self.temperature**(dim_t / self.num_pos_feats)
+
+ pos_x = x_embed.unsqueeze(-1) / dim_t
+ pos_y = y_embed.unsqueeze(-1) / dim_t
+ pos_x = paddle.stack(
+ (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()),
+ axis=4).flatten(3)
+ pos_y = paddle.stack(
+ (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()),
+ axis=4).flatten(3)
+ pos = paddle.concat((pos_y, pos_x), axis=3).transpose([0, 3, 1, 2])
+ return pos
+ elif self.embed_type == 'learned':
+ h, w = mask.shape[-2:]
+ i = paddle.arange(w)
+ j = paddle.arange(h)
+ x_emb = self.col_embed(i)
+ y_emb = self.row_embed(j)
+            # use tile (Paddle) rather than torch-style repeat to broadcast
+            pos = paddle.concat(
+                [
+                    x_emb.unsqueeze(0).tile([h, 1, 1]),
+                    y_emb.unsqueeze(1).tile([1, w, 1]),
+                ],
+                axis=-1).transpose([2, 0, 1]).unsqueeze(0).tile(
+                    [mask.shape[0], 1, 1, 1])
+ return pos
+ else:
+ raise ValueError(f"not supported {self.embed_type}")
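+
+# Usage sketch (assumed sizes): a sine embedding over an all-valid mask, as
+# consumed by the transformers above:
+#   pe = PositionEmbedding(num_pos_feats=128, embed_type='sine')
+#   mask = paddle.ones([2, 32, 32], dtype='bool')  # [B, H, W], True = valid
+#   pos = pe(mask)                                 # [2, 256, 32, 32]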
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/utils.py
new file mode 100644
index 000000000..414ada588
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/modeling/transformers/utils.py
@@ -0,0 +1,109 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Modified from DETR (https://github.com/facebookresearch/detr)
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import copy
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ..bbox_utils import bbox_overlaps
+
+__all__ = [
+ '_get_clones', 'bbox_overlaps', 'bbox_cxcywh_to_xyxy',
+ 'bbox_xyxy_to_cxcywh', 'sigmoid_focal_loss', 'inverse_sigmoid',
+ 'deformable_attention_core_func'
+]
+
+
+def _get_clones(module, N):
+ return nn.LayerList([copy.deepcopy(module) for _ in range(N)])
+
+
+def bbox_cxcywh_to_xyxy(x):
+ x_c, y_c, w, h = x.unbind(-1)
+ b = [(x_c - 0.5 * w), (y_c - 0.5 * h), (x_c + 0.5 * w), (y_c + 0.5 * h)]
+ return paddle.stack(b, axis=-1)
+
+
+def bbox_xyxy_to_cxcywh(x):
+ x0, y0, x1, y1 = x.unbind(-1)
+ b = [(x0 + x1) / 2, (y0 + y1) / 2, (x1 - x0), (y1 - y0)]
+ return paddle.stack(b, axis=-1)
+
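+# Round-trip sketch (hypothetical normalized box): the two converters are
+# inverses of each other:
+#   b = paddle.to_tensor([[0.5, 0.5, 0.2, 0.4]])  # (cx, cy, w, h)
+#   bbox_cxcywh_to_xyxy(b)                        # [[0.4, 0.3, 0.6, 0.7]]
+#   bbox_xyxy_to_cxcywh(bbox_cxcywh_to_xyxy(b))   # back to the original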
+
+def sigmoid_focal_loss(logit, label, normalizer=1.0, alpha=0.25, gamma=2.0):
+ prob = F.sigmoid(logit)
+ ce_loss = F.binary_cross_entropy_with_logits(logit, label, reduction="none")
+ p_t = prob * label + (1 - prob) * (1 - label)
+ loss = ce_loss * ((1 - p_t)**gamma)
+
+ if alpha >= 0:
+ alpha_t = alpha * label + (1 - alpha) * (1 - label)
+ loss = alpha_t * loss
+ return loss.mean(1).sum() / normalizer
+
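+# Focal-loss sketch (assumed toy inputs): alpha re-weights the pos/neg terms
+# and (1 - p_t)^gamma down-weights easy examples; normalizer is typically the
+# number of positive samples:
+#   logit = paddle.to_tensor([[2.0, -1.0]])
+#   label = paddle.to_tensor([[1.0, 0.0]])
+#   sigmoid_focal_loss(logit, label, normalizer=1.0)  # small scalar loss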
+
+def inverse_sigmoid(x, eps=1e-6):
+ x = x.clip(min=0., max=1.)
+ return paddle.log(x / (1 - x + eps) + eps)
+
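+# Doctest-style sketch (assumed values): inverse_sigmoid is the clipped logit,
+# so F.sigmoid(inverse_sigmoid(x)) ~= x for x in (0, 1):
+#   x = paddle.to_tensor([0.25, 0.5, 0.75])
+#   y = inverse_sigmoid(x)  # ~[-1.0986, 0., 1.0986]
+#   F.sigmoid(y)            # ~[0.25, 0.5, 0.75]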
+
+def deformable_attention_core_func(value, value_spatial_shapes,
+ sampling_locations, attention_weights):
+ """
+ Args:
+ value (Tensor): [bs, value_length, n_head, c]
+ value_spatial_shapes (Tensor): [n_levels, 2]
+ sampling_locations (Tensor): [bs, query_length, n_head, n_levels, n_points, 2]
+ attention_weights (Tensor): [bs, query_length, n_head, n_levels, n_points]
+
+ Returns:
+        output (Tensor): [bs, query_length, n_head * c]
+ """
+ bs, Len_v, n_head, c = value.shape
+ _, Len_q, n_head, n_levels, n_points, _ = sampling_locations.shape
+
+ value_list = value.split(value_spatial_shapes.prod(1).tolist(), axis=1)
+ sampling_grids = 2 * sampling_locations - 1
+ sampling_value_list = []
+ for level, (h, w) in enumerate(value_spatial_shapes.tolist()):
+ # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_
+ value_l_ = value_list[level].flatten(2).transpose(
+ [0, 2, 1]).reshape([bs * n_head, c, h, w])
+ # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2
+ sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(
+ [0, 2, 1, 3, 4]).flatten(0, 1)
+ # N_*M_, D_, Lq_, P_
+ sampling_value_l_ = F.grid_sample(
+ value_l_,
+ sampling_grid_l_,
+ mode='bilinear',
+ padding_mode='zeros',
+ align_corners=False)
+ sampling_value_list.append(sampling_value_l_)
+ # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_*M_, 1, Lq_, L_*P_)
+ attention_weights = attention_weights.transpose([0, 2, 1, 3, 4]).reshape(
+ [bs * n_head, 1, Len_q, n_levels * n_points])
+ output = (paddle.stack(
+ sampling_value_list, axis=-2).flatten(-2) *
+ attention_weights).sum(-1).reshape([bs, n_head * c, Len_q])
+
+ return output.transpose([0, 2, 1])
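+
+
+# Shape sketch with illustrative numbers: for bs=2, n_head=8, c=32 and two
+# feature levels of sizes (100, 150) and (50, 75), Len_v = 100*150 + 50*75
+# = 18750, and the returned tensor has shape [2, Len_q, 8 * 32] = [2, Len_q, 256].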
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/optimizer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/optimizer.py
new file mode 100644
index 000000000..fcdcbd8d6
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/optimizer.py
@@ -0,0 +1,333 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+import paddle
+import paddle.nn as nn
+
+import paddle.optimizer as optimizer
+import paddle.regularizer as regularizer
+
+from ppdet.core.workspace import register, serializable
+
+__all__ = ['LearningRate', 'OptimizerBuilder']
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@serializable
+class CosineDecay(object):
+ """
+ Cosine learning rate decay
+
+ Args:
+ max_epochs (int): max epochs for the training process.
+ if you commbine cosine decay with warmup, it is recommended that
+ the max_iters is much larger than the warmup iter
+ """
+
+ def __init__(self, max_epochs=1000, use_warmup=True):
+ self.max_epochs = max_epochs
+ self.use_warmup = use_warmup
+
+ def __call__(self,
+ base_lr=None,
+ boundary=None,
+ value=None,
+ step_per_epoch=None):
+ assert base_lr is not None, "either base LR or values should be provided"
+
+ max_iters = self.max_epochs * int(step_per_epoch)
+
+ if boundary is not None and value is not None and self.use_warmup:
+ for i in range(int(boundary[-1]), max_iters):
+ boundary.append(i)
+
+ decayed_lr = base_lr * 0.5 * (
+ math.cos(i * math.pi / max_iters) + 1)
+ value.append(decayed_lr)
+ return optimizer.lr.PiecewiseDecay(boundary, value)
+
+ return optimizer.lr.CosineAnnealingDecay(base_lr, T_max=max_iters)
+
+
+@serializable
+class PiecewiseDecay(object):
+ """
+ Multi step learning rate decay
+
+ Args:
+ gamma (float | list): decay factor
+ milestones (list): steps at which to decay learning rate
+ """
+
+ def __init__(self,
+ gamma=[0.1, 0.01],
+ milestones=[8, 11],
+ values=None,
+ use_warmup=True):
+ super(PiecewiseDecay, self).__init__()
+ if type(gamma) is not list:
+ self.gamma = []
+ for i in range(len(milestones)):
+ self.gamma.append(gamma / 10**i)
+ else:
+ self.gamma = gamma
+ self.milestones = milestones
+ self.values = values
+ self.use_warmup = use_warmup
+
+ def __call__(self,
+ base_lr=None,
+ boundary=None,
+ value=None,
+ step_per_epoch=None):
+ if boundary is not None and self.use_warmup:
+ boundary.extend([int(step_per_epoch) * i for i in self.milestones])
+ else:
+ # do not use LinearWarmup
+ boundary = [int(step_per_epoch) * i for i in self.milestones]
+ value = [base_lr] # during step[0, boundary[0]] is base_lr
+
+        # self.values is set directly in the config
+ if self.values is not None:
+ assert len(self.milestones) + 1 == len(self.values)
+ return optimizer.lr.PiecewiseDecay(boundary, self.values)
+
+ # value is computed by self.gamma
+ value = value if value is not None else [base_lr]
+ for i in self.gamma:
+ value.append(base_lr * i)
+
+ return optimizer.lr.PiecewiseDecay(boundary, value)
+
+
+@serializable
+class LinearWarmup(object):
+ """
+ Warm up learning rate linearly
+
+ Args:
+ steps (int): warm up steps
+ start_factor (float): initial learning rate factor
+ """
+
+ def __init__(self, steps=500, start_factor=1. / 3):
+ super(LinearWarmup, self).__init__()
+ self.steps = steps
+ self.start_factor = start_factor
+
+ def __call__(self, base_lr, step_per_epoch):
+ boundary = []
+ value = []
+ for i in range(self.steps + 1):
+ if self.steps > 0:
+ alpha = i / self.steps
+ factor = self.start_factor * (1 - alpha) + alpha
+ lr = base_lr * factor
+ value.append(lr)
+ if i > 0:
+ boundary.append(i)
+ return boundary, value
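+
+# Behavior sketch: with steps=500 and start_factor=1/3, the returned values
+# ramp linearly from base_lr / 3 at step 0 to base_lr at step 500; the decay
+# scheduler then extends these boundaries/values.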
+
+
+@serializable
+class BurninWarmup(object):
+ """
+ Warm up learning rate in burnin mode
+ Args:
+ steps (int): warm up steps
+ """
+
+ def __init__(self, steps=1000):
+ super(BurninWarmup, self).__init__()
+ self.steps = steps
+
+ def __call__(self, base_lr, step_per_epoch):
+ boundary = []
+ value = []
+ burnin = min(self.steps, step_per_epoch)
+ for i in range(burnin + 1):
+ factor = (i * 1.0 / burnin)**4
+ lr = base_lr * factor
+ value.append(lr)
+ if i > 0:
+ boundary.append(i)
+ return boundary, value
+
+
+@register
+class LearningRate(object):
+ """
+ Learning Rate configuration
+
+ Args:
+ base_lr (float): base learning rate
+ schedulers (list): learning rate schedulers
+ """
+ __category__ = 'optim'
+
+ def __init__(self,
+ base_lr=0.01,
+ schedulers=[PiecewiseDecay(), LinearWarmup()]):
+ super(LearningRate, self).__init__()
+ self.base_lr = base_lr
+ self.schedulers = schedulers
+
+ def __call__(self, step_per_epoch):
+ assert len(self.schedulers) >= 1
+ if not self.schedulers[0].use_warmup:
+ return self.schedulers[0](base_lr=self.base_lr,
+ step_per_epoch=step_per_epoch)
+
+ # TODO: split warmup & decay
+ # warmup
+ boundary, value = self.schedulers[1](self.base_lr, step_per_epoch)
+ # decay
+ decay_lr = self.schedulers[0](self.base_lr, boundary, value,
+ step_per_epoch)
+ return decay_lr
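+
+# A config-level sketch (values assumed, following the usual PaddleDetection
+# YAML convention) of how warmup and decay compose here: the warmup scheduler
+# produces the initial boundaries/values, which the decay scheduler extends.
+#
+#   LearningRate:
+#     base_lr: 0.01
+#     schedulers:
+#       - !PiecewiseDecay
+#         gamma: 0.1
+#         milestones: [8, 11]
+#       - !LinearWarmup
+#         start_factor: 0.001
+#         steps: 1000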
+
+
+@register
+class OptimizerBuilder():
+ """
+ Build optimizer handles
+ Args:
+        regularizer (object): a `Regularizer` instance
+ optimizer (object): an `Optimizer` instance
+ """
+ __category__ = 'optim'
+
+ def __init__(self,
+ clip_grad_by_norm=None,
+ regularizer={'type': 'L2',
+ 'factor': .0001},
+ optimizer={'type': 'Momentum',
+ 'momentum': .9}):
+ self.clip_grad_by_norm = clip_grad_by_norm
+ self.regularizer = regularizer
+ self.optimizer = optimizer
+
+ def __call__(self, learning_rate, model=None):
+ if self.clip_grad_by_norm is not None:
+ grad_clip = nn.ClipGradByGlobalNorm(
+ clip_norm=self.clip_grad_by_norm)
+ else:
+ grad_clip = None
+ if self.regularizer and self.regularizer != 'None':
+ reg_type = self.regularizer['type'] + 'Decay'
+ reg_factor = self.regularizer['factor']
+ regularization = getattr(regularizer, reg_type)(reg_factor)
+ else:
+ regularization = None
+
+ optim_args = self.optimizer.copy()
+ optim_type = optim_args['type']
+ del optim_args['type']
+ if optim_type != 'AdamW':
+ optim_args['weight_decay'] = regularization
+ op = getattr(optimizer, optim_type)
+
+ if 'without_weight_decay_params' in optim_args:
+ keys = optim_args['without_weight_decay_params']
+ params = [{
+ 'params': [
+ p for n, p in model.named_parameters()
+ if any([k in n for k in keys])
+ ],
+ 'weight_decay': 0.
+ }, {
+ 'params': [
+ p for n, p in model.named_parameters()
+ if all([k not in n for k in keys])
+ ]
+ }]
+ del optim_args['without_weight_decay_params']
+ else:
+ params = model.parameters()
+
+ return op(learning_rate=learning_rate,
+ parameters=params,
+ grad_clip=grad_clip,
+ **optim_args)
+
+
+class ModelEMA(object):
+ """
+ Exponential Weighted Average for Deep Neutal Networks
+ Args:
+ model (nn.Layer): Detector of model.
+ decay (int): The decay used for updating ema parameter.
+ Ema's parameter are updated with the formula:
+ `ema_param = decay * ema_param + (1 - decay) * cur_param`.
+ Defaults is 0.9998.
+ use_thres_step (bool): Whether set decay by thres_step or not
+ cycle_epoch (int): The epoch of interval to reset ema_param and
+ step. Defaults is -1, which means not reset. Its function is to
+ add a regular effect to ema, which is set according to experience
+ and is effective when the total training epoch is large.
+ """
+
+ def __init__(self,
+ model,
+ decay=0.9998,
+ use_thres_step=False,
+ cycle_epoch=-1):
+ self.step = 0
+ self.epoch = 0
+ self.decay = decay
+ self.state_dict = dict()
+ for k, v in model.state_dict().items():
+ self.state_dict[k] = paddle.zeros_like(v)
+ self.use_thres_step = use_thres_step
+ self.cycle_epoch = cycle_epoch
+
+ def reset(self):
+ self.step = 0
+ self.epoch = 0
+ for k, v in self.state_dict.items():
+ self.state_dict[k] = paddle.zeros_like(v)
+
+ def update(self, model):
+ if self.use_thres_step:
+ decay = min(self.decay, (1 + self.step) / (10 + self.step))
+ else:
+ decay = self.decay
+ self._decay = decay
+ model_dict = model.state_dict()
+ for k, v in self.state_dict.items():
+ v = decay * v + (1 - decay) * model_dict[k]
+ v.stop_gradient = True
+ self.state_dict[k] = v
+ self.step += 1
+
+ def apply(self):
+ if self.step == 0:
+ return self.state_dict
+ state_dict = dict()
+ for k, v in self.state_dict.items():
+ v = v / (1 - self._decay**self.step)
+ v.stop_gradient = True
+ state_dict[k] = v
+ self.epoch += 1
+ if self.cycle_epoch > 0 and self.epoch == self.cycle_epoch:
+ self.reset()
+
+ return state_dict
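+
+
+# Illustrative training-loop usage (a sketch under assumed names, not part of
+# the original file):
+#
+#   ema = ModelEMA(model, decay=0.9998, use_thres_step=True)
+#   for epoch_id in range(epochs):
+#       for data in train_loader:
+#           ...            # forward / backward / optimizer.step()
+#           ema.update(model)
+#       weight = ema.apply()   # bias-corrected EMA weights for eval/saving
+#       paddle.save(weight, 'model_ema.pdparams')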
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__init__.py
new file mode 100644
index 000000000..dc22d0717
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__init__.py
@@ -0,0 +1,82 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import prune
+from . import quant
+from . import distill
+from . import unstructured_prune
+
+from .prune import *
+from .quant import *
+from .distill import *
+from .unstructured_prune import *
+
+import yaml
+from ppdet.core.workspace import load_config
+from ppdet.utils.checkpoint import load_pretrain_weight
+
+
+def build_slim_model(cfg, slim_cfg, mode='train'):
+ with open(slim_cfg) as f:
+ slim_load_cfg = yaml.load(f, Loader=yaml.Loader)
+ if mode != 'train' and slim_load_cfg['slim'] == 'Distill':
+ return cfg
+
+ if slim_load_cfg['slim'] == 'Distill':
+ model = DistillModel(cfg, slim_cfg)
+ cfg['model'] = model
+ elif slim_load_cfg['slim'] == 'DistillPrune':
+ if mode == 'train':
+ model = DistillModel(cfg, slim_cfg)
+ pruner = create(cfg.pruner)
+ pruner(model.student_model)
+ else:
+ model = create(cfg.architecture)
+ weights = cfg.weights
+ load_config(slim_cfg)
+ pruner = create(cfg.pruner)
+ model = pruner(model)
+ load_pretrain_weight(model, weights)
+ cfg['model'] = model
+ cfg['slim_type'] = cfg.slim
+ elif slim_load_cfg['slim'] == 'PTQ':
+ model = create(cfg.architecture)
+ load_config(slim_cfg)
+ load_pretrain_weight(model, cfg.weights)
+ slim = create(cfg.slim)
+ cfg['slim_type'] = cfg.slim
+ cfg['model'] = slim(model)
+ cfg['slim'] = slim
+ elif slim_load_cfg['slim'] == 'UnstructuredPruner':
+ load_config(slim_cfg)
+ slim = create(cfg.slim)
+ cfg['slim_type'] = cfg.slim
+ cfg['slim'] = slim
+ cfg['unstructured_prune'] = True
+ else:
+ load_config(slim_cfg)
+ model = create(cfg.architecture)
+ if mode == 'train':
+ load_pretrain_weight(model, cfg.pretrain_weights)
+ slim = create(cfg.slim)
+ cfg['slim_type'] = cfg.slim
+ # TODO: fix quant export model in framework.
+ if mode == 'test' and slim_load_cfg['slim'] == 'QAT':
+ slim.quant_config['activation_preprocess_type'] = None
+ cfg['model'] = slim(model)
+ cfg['slim'] = slim
+ if mode != 'train':
+ load_pretrain_weight(cfg['model'], cfg.weights)
+
+ return cfg
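+
+# Dispatch note: the `slim` key of the slim YAML selects the branch above
+# ('Distill', 'DistillPrune', 'PTQ', 'UnstructuredPruner', or the default
+# QAT-style path); e.g. a distillation config starts with `slim: Distill`.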
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..6efc3d48f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/distill.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/distill.cpython-37.pyc
new file mode 100644
index 000000000..2bcf7f5ed
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/distill.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/prune.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/prune.cpython-37.pyc
new file mode 100644
index 000000000..db85654e8
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/prune.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/quant.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/quant.cpython-37.pyc
new file mode 100644
index 000000000..532a6a089
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/quant.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/unstructured_prune.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/unstructured_prune.cpython-37.pyc
new file mode 100644
index 000000000..acdb77ed0
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/__pycache__/unstructured_prune.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/distill.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/distill.py
new file mode 100644
index 000000000..b808553dd
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/distill.py
@@ -0,0 +1,109 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+import paddle.nn as nn
+import paddle.nn.functional as F
+
+from ppdet.core.workspace import register, create, load_config
+from ppdet.modeling import ops
+from ppdet.utils.checkpoint import load_pretrain_weight
+from ppdet.utils.logger import setup_logger
+
+logger = setup_logger(__name__)
+
+
+class DistillModel(nn.Layer):
+ def __init__(self, cfg, slim_cfg):
+ super(DistillModel, self).__init__()
+
+ self.student_model = create(cfg.architecture)
+ logger.debug('Load student model pretrain_weights:{}'.format(
+ cfg.pretrain_weights))
+ load_pretrain_weight(self.student_model, cfg.pretrain_weights)
+
+ slim_cfg = load_config(slim_cfg)
+ self.teacher_model = create(slim_cfg.architecture)
+ self.distill_loss = create(slim_cfg.distill_loss)
+ logger.debug('Load teacher model pretrain_weights:{}'.format(
+ slim_cfg.pretrain_weights))
+ load_pretrain_weight(self.teacher_model, slim_cfg.pretrain_weights)
+
+ for param in self.teacher_model.parameters():
+ param.trainable = False
+
+ def parameters(self):
+ return self.student_model.parameters()
+
+ def forward(self, inputs):
+ if self.training:
+ teacher_loss = self.teacher_model(inputs)
+ student_loss = self.student_model(inputs)
+ loss = self.distill_loss(self.teacher_model, self.student_model)
+ student_loss['distill_loss'] = loss
+ student_loss['teacher_loss'] = teacher_loss['loss']
+ student_loss['loss'] += student_loss['distill_loss']
+ return student_loss
+ else:
+ return self.student_model(inputs)
+
+
+@register
+class DistillYOLOv3Loss(nn.Layer):
+ def __init__(self, weight=1000):
+ super(DistillYOLOv3Loss, self).__init__()
+ self.weight = weight
+
+ def obj_weighted_reg(self, sx, sy, sw, sh, tx, ty, tw, th, tobj):
+ loss_x = ops.sigmoid_cross_entropy_with_logits(sx, F.sigmoid(tx))
+ loss_y = ops.sigmoid_cross_entropy_with_logits(sy, F.sigmoid(ty))
+ loss_w = paddle.abs(sw - tw)
+ loss_h = paddle.abs(sh - th)
+ loss = paddle.add_n([loss_x, loss_y, loss_w, loss_h])
+ weighted_loss = paddle.mean(loss * F.sigmoid(tobj))
+ return weighted_loss
+
+ def obj_weighted_cls(self, scls, tcls, tobj):
+ loss = ops.sigmoid_cross_entropy_with_logits(scls, F.sigmoid(tcls))
+ weighted_loss = paddle.mean(paddle.multiply(loss, F.sigmoid(tobj)))
+ return weighted_loss
+
+ def obj_loss(self, sobj, tobj):
+ obj_mask = paddle.cast(tobj > 0., dtype="float32")
+ obj_mask.stop_gradient = True
+ loss = paddle.mean(
+ ops.sigmoid_cross_entropy_with_logits(sobj, obj_mask))
+ return loss
+
+ def forward(self, teacher_model, student_model):
+ teacher_distill_pairs = teacher_model.yolo_head.loss.distill_pairs
+ student_distill_pairs = student_model.yolo_head.loss.distill_pairs
+ distill_reg_loss, distill_cls_loss, distill_obj_loss = [], [], []
+ for s_pair, t_pair in zip(student_distill_pairs, teacher_distill_pairs):
+ distill_reg_loss.append(
+ self.obj_weighted_reg(s_pair[0], s_pair[1], s_pair[2], s_pair[
+ 3], t_pair[0], t_pair[1], t_pair[2], t_pair[3], t_pair[4]))
+ distill_cls_loss.append(
+ self.obj_weighted_cls(s_pair[5], t_pair[5], t_pair[4]))
+ distill_obj_loss.append(self.obj_loss(s_pair[4], t_pair[4]))
+ distill_reg_loss = paddle.add_n(distill_reg_loss)
+ distill_cls_loss = paddle.add_n(distill_cls_loss)
+ distill_obj_loss = paddle.add_n(distill_obj_loss)
+ loss = (distill_reg_loss + distill_cls_loss + distill_obj_loss
+ ) * self.weight
+ return loss
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/prune.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/prune.py
new file mode 100644
index 000000000..70d3de369
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/prune.py
@@ -0,0 +1,85 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle
+from paddle.utils import try_import
+
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+def print_prune_params(model):
+ model_dict = model.state_dict()
+ for key in model_dict.keys():
+ weight_name = model_dict[key].name
+ logger.info('Parameter name: {}, shape: {}'.format(
+ weight_name, model_dict[key].shape))
+
+
+@register
+@serializable
+class Pruner(object):
+ def __init__(self,
+ criterion,
+ pruned_params,
+ pruned_ratios,
+ print_params=False):
+ super(Pruner, self).__init__()
+ assert criterion in ['l1_norm', 'fpgm'], \
+ "unsupported prune criterion: {}".format(criterion)
+ self.criterion = criterion
+ self.pruned_params = pruned_params
+ self.pruned_ratios = pruned_ratios
+ self.print_params = print_params
+
+ def __call__(self, model):
+        # FIXME: adapt to the network graph when training and inference are
+        # inconsistent; currently only the inference network graph can be pruned.
+ model.eval()
+ paddleslim = try_import('paddleslim')
+ from paddleslim.analysis import dygraph_flops as flops
+ input_spec = [{
+ "image": paddle.ones(
+ shape=[1, 3, 640, 640], dtype='float32'),
+ "im_shape": paddle.full(
+ [1, 2], 640, dtype='float32'),
+ "scale_factor": paddle.ones(
+ shape=[1, 2], dtype='float32')
+ }]
+ if self.print_params:
+ print_prune_params(model)
+
+ ori_flops = flops(model, input_spec) / (1000**3)
+ logger.info("FLOPs before pruning: {}GFLOPs".format(ori_flops))
+ if self.criterion == 'fpgm':
+ pruner = paddleslim.dygraph.FPGMFilterPruner(model, input_spec)
+ elif self.criterion == 'l1_norm':
+ pruner = paddleslim.dygraph.L1NormFilterPruner(model, input_spec)
+
+ logger.info("pruned params: {}".format(self.pruned_params))
+ pruned_ratios = [float(n) for n in self.pruned_ratios]
+ ratios = {}
+ for i, param in enumerate(self.pruned_params):
+ ratios[param] = pruned_ratios[i]
+ pruner.prune_vars(ratios, [0])
+ pruned_flops = flops(model, input_spec) / (1000**3)
+ logger.info("FLOPs after pruning: {}GFLOPs; pruned ratio: {}".format(
+ pruned_flops, (ori_flops - pruned_flops) / ori_flops))
+
+ return model
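+
+
+# A slim-config sketch (parameter names assumed for illustration) of how this
+# pruner is typically declared in a YAML file:
+#
+#   pruner: Pruner
+#   Pruner:
+#     criterion: fpgm
+#     pruned_params: ['conv2d_27.w_0', 'conv2d_28.w_0']
+#     pruned_ratios: [0.3, 0.3]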
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/quant.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/quant.py
new file mode 100644
index 000000000..ab81127ae
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/quant.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from paddle.utils import try_import
+
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class QAT(object):
+ def __init__(self, quant_config, print_model):
+ super(QAT, self).__init__()
+ self.quant_config = quant_config
+ self.print_model = print_model
+
+ def __call__(self, model):
+ paddleslim = try_import('paddleslim')
+ self.quanter = paddleslim.dygraph.quant.QAT(config=self.quant_config)
+ if self.print_model:
+ logger.info("Model before quant:")
+ logger.info(model)
+
+ self.quanter.quantize(model)
+
+ if self.print_model:
+ logger.info("Quantized model:")
+ logger.info(model)
+
+ return model
+
+ def save_quantized_model(self, layer, path, input_spec=None, **config):
+ self.quanter.save_quantized_model(
+ model=layer, path=path, input_spec=input_spec, **config)
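+
+# A quant_config sketch (keys assumed, mirroring common PaddleSlim QAT
+# settings) for the class above:
+#
+#   QAT:
+#     quant_config: {
+#       'activation_preprocess_type': 'PACT',
+#       'weight_quantize_type': 'channel_wise_abs_max',
+#       'activation_quantize_type': 'moving_average_abs_max',
+#       'weight_bits': 8, 'activation_bits': 8}
+#     print_model: True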
+
+
+@register
+@serializable
+class PTQ(object):
+ def __init__(self,
+ ptq_config,
+ quant_batch_num=10,
+ output_dir='output_inference',
+ fuse=True,
+ fuse_list=None):
+ super(PTQ, self).__init__()
+ self.ptq_config = ptq_config
+ self.quant_batch_num = quant_batch_num
+ self.output_dir = output_dir
+ self.fuse = fuse
+ self.fuse_list = fuse_list
+
+ def __call__(self, model):
+ paddleslim = try_import('paddleslim')
+ self.ptq = paddleslim.PTQ(**self.ptq_config)
+ model.eval()
+ quant_model = self.ptq.quantize(
+ model, fuse=self.fuse, fuse_list=self.fuse_list)
+
+ return quant_model
+
+ def save_quantized_model(self,
+ quant_model,
+ quantize_model_path,
+ input_spec=None):
+ self.ptq.save_quantized_model(quant_model, quantize_model_path,
+ input_spec)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/unstructured_prune.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/unstructured_prune.py
new file mode 100644
index 000000000..1dc876a8c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/slim/unstructured_prune.py
@@ -0,0 +1,66 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+from paddle.utils import try_import
+
+from ppdet.core.workspace import register, serializable
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+
+@register
+@serializable
+class UnstructuredPruner(object):
+ def __init__(self,
+ stable_epochs,
+ pruning_epochs,
+ tunning_epochs,
+ pruning_steps,
+ ratio,
+ initial_ratio,
+ prune_params_type=None):
+ self.stable_epochs = stable_epochs
+ self.pruning_epochs = pruning_epochs
+ self.tunning_epochs = tunning_epochs
+ self.ratio = ratio
+ self.prune_params_type = prune_params_type
+ self.initial_ratio = initial_ratio
+ self.pruning_steps = pruning_steps
+
+ def __call__(self, model, steps_per_epoch, skip_params_func=None):
+ paddleslim = try_import('paddleslim')
+ from paddleslim import GMPUnstructuredPruner
+ configs = {
+ 'pruning_strategy': 'gmp',
+ 'stable_iterations': self.stable_epochs * steps_per_epoch,
+ 'pruning_iterations': self.pruning_epochs * steps_per_epoch,
+ 'tunning_iterations': self.tunning_epochs * steps_per_epoch,
+ 'resume_iteration': 0,
+ 'pruning_steps': self.pruning_steps,
+ 'initial_ratio': self.initial_ratio,
+ }
+
+ pruner = GMPUnstructuredPruner(
+ model,
+ ratio=self.ratio,
+ skip_params_func=skip_params_func,
+ prune_params_type=self.prune_params_type,
+ local_sparsity=True,
+ configs=configs)
+
+ return pruner
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__init__.py
new file mode 100644
index 000000000..d0c32e260
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..da3643808
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/check.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/check.cpython-37.pyc
new file mode 100644
index 000000000..31e998962
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/check.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/checkpoint.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/checkpoint.cpython-37.pyc
new file mode 100644
index 000000000..353a47169
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/checkpoint.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/cli.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/cli.cpython-37.pyc
new file mode 100644
index 000000000..4c184bf00
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/cli.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/colormap.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/colormap.cpython-37.pyc
new file mode 100644
index 000000000..b6e9a2c48
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/colormap.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/download.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/download.cpython-37.pyc
new file mode 100644
index 000000000..6e2c53fa4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/download.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/logger.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/logger.cpython-37.pyc
new file mode 100644
index 000000000..6b63f47ed
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/logger.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/profiler.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/profiler.cpython-37.pyc
new file mode 100644
index 000000000..7a5b0b855
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/profiler.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/stats.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/stats.cpython-37.pyc
new file mode 100644
index 000000000..087a97fcf
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/stats.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/visualizer.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/visualizer.cpython-37.pyc
new file mode 100644
index 000000000..44fe18ba3
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/visualizer.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/voc_utils.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/voc_utils.cpython-37.pyc
new file mode 100644
index 000000000..5082f3f63
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/__pycache__/voc_utils.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/check.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/check.py
new file mode 100644
index 000000000..6c795b532
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/check.py
@@ -0,0 +1,112 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import sys
+
+import paddle
+import six
+import paddle.version as fluid_version
+
+from .logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['check_gpu', 'check_npu', 'check_version', 'check_config']
+
+
+def check_npu(use_npu):
+ """
+ Log error and exit when set use_npu=true in paddlepaddle
+ cpu/gpu/xpu version.
+ """
+ err = "Config use_npu cannot be set as true while you are " \
+ "using paddlepaddle cpu/gpu/xpu version ! \nPlease try: \n" \
+ "\t1. Install paddlepaddle-npu to run model on NPU \n" \
+ "\t2. Set use_npu as false in config file to run " \
+ "model on CPU/GPU/XPU"
+
+ try:
+ if use_npu and not paddle.is_compiled_with_npu():
+ logger.error(err)
+ sys.exit(1)
+ except Exception as e:
+ pass
+
+
+def check_gpu(use_gpu):
+ """
+ Log error and exit when set use_gpu=true in paddlepaddle
+ cpu version.
+ """
+ err = "Config use_gpu cannot be set as true while you are " \
+ "using paddlepaddle cpu version ! \nPlease try: \n" \
+ "\t1. Install paddlepaddle-gpu to run model on GPU \n" \
+ "\t2. Set use_gpu as false in config file to run " \
+ "model on CPU"
+
+ try:
+ if use_gpu and not paddle.is_compiled_with_cuda():
+ logger.error(err)
+ sys.exit(1)
+ except Exception as e:
+ pass
+
+
+def check_version(version='2.0'):
+ """
+ Log error and exit when the installed version of paddlepaddle is
+ not satisfied.
+ """
+ err = "PaddlePaddle version {} or higher is required, " \
+ "or a suitable develop version is satisfied as well. \n" \
+ "Please make sure the version is good with your code.".format(version)
+
+ version_installed = [
+ fluid_version.major, fluid_version.minor, fluid_version.patch,
+ fluid_version.rc
+ ]
+ if version_installed == ['0', '0', '0', '0']:
+ return
+ version_split = version.split('.')
+
+ length = min(len(version_installed), len(version_split))
+ for i in six.moves.range(length):
+ if version_installed[i] > version_split[i]:
+ return
+ if version_installed[i] < version_split[i]:
+ raise Exception(err)
+
+
+def check_config(cfg):
+ """
+ Check the correctness of the configuration file. Log error and exit
+ when Config is not compliant.
+ """
+ err = "'{}' not specified in config file. Please set it in config file."
+ check_list = ['architecture', 'num_classes']
+ try:
+ for var in check_list:
+            if var not in cfg:
+ logger.error(err.format(var))
+ sys.exit(1)
+ except Exception as e:
+ pass
+
+ if 'log_iter' not in cfg:
+ cfg.log_iter = 20
+
+ return cfg
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/checkpoint.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/checkpoint.py
new file mode 100644
index 000000000..b5aa84697
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/checkpoint.py
@@ -0,0 +1,226 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+import errno
+import os
+import time
+import numpy as np
+import paddle
+import paddle.nn as nn
+from .download import get_weights_path
+
+from .logger import setup_logger
+logger = setup_logger(__name__)
+
+
+def is_url(path):
+ """
+ Whether path is URL.
+ Args:
+ path (string): URL string or not.
+ """
+ return path.startswith('http://') \
+ or path.startswith('https://') \
+ or path.startswith('ppdet://')
+
+
+def _get_unique_endpoints(trainer_endpoints):
+    # Sort first so that every card derives the same unique endpoints,
+    # regardless of the order of its environment variables.
+ trainer_endpoints.sort()
+ ips = set()
+ unique_endpoints = set()
+ for endpoint in trainer_endpoints:
+ ip = endpoint.split(":")[0]
+ if ip in ips:
+ continue
+ ips.add(ip)
+ unique_endpoints.add(endpoint)
+ logger.info("unique_endpoints {}".format(unique_endpoints))
+ return unique_endpoints
+
+
+def _strip_postfix(path):
+ path, ext = os.path.splitext(path)
+ assert ext in ['', '.pdparams', '.pdopt', '.pdmodel'], \
+ "Unknown postfix {} from weights".format(ext)
+ return path
+
+
+def load_weight(model, weight, optimizer=None):
+ if is_url(weight):
+ weight = get_weights_path(weight)
+
+ path = _strip_postfix(weight)
+ pdparam_path = path + '.pdparams'
+ if not os.path.exists(pdparam_path):
+ raise ValueError("Model pretrain path {} does not "
+ "exists.".format(pdparam_path))
+
+ param_state_dict = paddle.load(pdparam_path)
+ model_dict = model.state_dict()
+ model_weight = {}
+ incorrect_keys = 0
+
+ for key in model_dict.keys():
+ if key in param_state_dict.keys():
+ model_weight[key] = param_state_dict[key]
+ else:
+ logger.info('Unmatched key: {}'.format(key))
+ incorrect_keys += 1
+
+ assert incorrect_keys == 0, "Load weight {} incorrectly, \
+ {} keys unmatched, please check again.".format(weight,
+ incorrect_keys)
+ logger.info('Finish resuming model weights: {}'.format(pdparam_path))
+
+ model.set_dict(model_weight)
+
+ last_epoch = 0
+ if optimizer is not None and os.path.exists(path + '.pdopt'):
+ optim_state_dict = paddle.load(path + '.pdopt')
+ # to solve resume bug, will it be fixed in paddle 2.0
+ for key in optimizer.state_dict().keys():
+ if not key in optim_state_dict.keys():
+ optim_state_dict[key] = optimizer.state_dict()[key]
+ if 'last_epoch' in optim_state_dict:
+ last_epoch = optim_state_dict.pop('last_epoch')
+ optimizer.set_state_dict(optim_state_dict)
+
+ return last_epoch
+
+
+def match_state_dict(model_state_dict, weight_state_dict):
+ """
+ Match between the model state dict and pretrained weight state dict.
+ Return the matched state dict.
+
+    The method assumes that, once the prefix 'backbone.' is stripped from the
+    pretrained weight keys, every name in the pretrained weight state dict is
+    a suffix of some name in the model state dict; this yields the candidates
+    for each model key. We then select the candidate with the longest match
+    as the final result. For example, the model state dict has the name
+    'backbone.res2.res2a.branch2a.conv.weight' and the pretrained weights
+    contain the names 'res2.res2a.branch2a.conv.weight' and
+    'branch2a.conv.weight'. We match 'res2.res2a.branch2a.conv.weight' to
+    the model key.
+    """
+
+ model_keys = sorted(model_state_dict.keys())
+ weight_keys = sorted(weight_state_dict.keys())
+
+ def match(a, b):
+ if a.startswith('backbone.res5'):
+ # In Faster RCNN, res5 pretrained weights have prefix of backbone,
+ # however, the corresponding model weights have difficult prefix,
+ # bbox_head.
+ b = b[9:]
+ return a == b or a.endswith("." + b)
+
+ match_matrix = np.zeros([len(model_keys), len(weight_keys)])
+ for i, m_k in enumerate(model_keys):
+ for j, w_k in enumerate(weight_keys):
+ if match(m_k, w_k):
+ match_matrix[i, j] = len(w_k)
+ max_id = match_matrix.argmax(1)
+ max_len = match_matrix.max(1)
+ max_id[max_len == 0] = -1
+ not_load_weight_name = []
+ for match_idx in range(len(max_id)):
+ if match_idx < len(weight_keys) and max_id[match_idx] == -1:
+ not_load_weight_name.append(weight_keys[match_idx])
+ if len(not_load_weight_name) > 0:
+        logger.info('{} in pretrained weight is not used in the model, '
+                    'and it will not be loaded'.format(not_load_weight_name))
+ matched_keys = {}
+ result_state_dict = {}
+ for model_id, weight_id in enumerate(max_id):
+ if weight_id == -1:
+ continue
+ model_key = model_keys[model_id]
+ weight_key = weight_keys[weight_id]
+ weight_value = weight_state_dict[weight_key]
+ model_value_shape = list(model_state_dict[model_key].shape)
+
+ if list(weight_value.shape) != model_value_shape:
+            logger.info(
+                'The shape {} in pretrained weight {} does not match '
+                'the shape {} in model {}, so the weight {} will not be '
+                'loaded'.format(weight_value.shape, weight_key,
+                                model_value_shape, model_key, weight_key))
+ continue
+
+ assert model_key not in result_state_dict
+ result_state_dict[model_key] = weight_value
+ if weight_key in matched_keys:
+            raise ValueError('Ambiguous weight {} loaded, it matches at least '
+                             '{} and {} in the model'.format(
+                                 weight_key, model_key, matched_keys[
+                                     weight_key]))
+ matched_keys[weight_key] = model_key
+ return result_state_dict
+
+
+def load_pretrain_weight(model, pretrain_weight):
+ if is_url(pretrain_weight):
+ pretrain_weight = get_weights_path(pretrain_weight)
+
+ path = _strip_postfix(pretrain_weight)
+ if not (os.path.isdir(path) or os.path.isfile(path) or
+ os.path.exists(path + '.pdparams')):
+ raise ValueError("Model pretrain path `{}` does not exists. "
+ "If you don't want to load pretrain model, "
+ "please delete `pretrain_weights` field in "
+ "config file.".format(path))
+
+ model_dict = model.state_dict()
+
+ weights_path = path + '.pdparams'
+ param_state_dict = paddle.load(weights_path)
+ param_state_dict = match_state_dict(model_dict, param_state_dict)
+
+ model.set_dict(param_state_dict)
+ logger.info('Finish loading model weights: {}'.format(weights_path))
+
+
+def save_model(model, optimizer, save_dir, save_name, last_epoch):
+ """
+    Save the model to disk.
+
+    Args:
+        model (paddle.nn.Layer): the Layer instance whose parameters are saved.
+        optimizer (paddle.optimizer.Optimizer): the Optimizer instance whose
+            states are saved.
+        save_dir (str): the directory to save into.
+        save_name (str): the checkpoint file name.
+        last_epoch (int): the epoch index.
+ """
+ if paddle.distributed.get_rank() != 0:
+ return
+ if not os.path.exists(save_dir):
+ os.makedirs(save_dir)
+ save_path = os.path.join(save_dir, save_name)
+ if isinstance(model, nn.Layer):
+ paddle.save(model.state_dict(), save_path + ".pdparams")
+ else:
+ assert isinstance(model,
+                          dict), 'model is not an instance of nn.Layer or dict'
+ paddle.save(model, save_path + ".pdparams")
+ state_dict = optimizer.state_dict()
+ state_dict['last_epoch'] = last_epoch
+ paddle.save(state_dict, save_path + ".pdopt")
+ logger.info("Save checkpoint: {}".format(save_dir))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/cli.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/cli.py
new file mode 100644
index 000000000..b8ba59d78
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/cli.py
@@ -0,0 +1,151 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from argparse import ArgumentParser, RawDescriptionHelpFormatter
+
+import yaml
+import re
+from ppdet.core.workspace import get_registered_modules, dump_value
+
+__all__ = ['ColorTTY', 'ArgsParser']
+
+
+class ColorTTY(object):
+ def __init__(self):
+ super(ColorTTY, self).__init__()
+ self.colors = ['red', 'green', 'yellow', 'blue', 'magenta', 'cyan']
+
+ def __getattr__(self, attr):
+ if attr in self.colors:
+ color = self.colors.index(attr) + 31
+
+            def color_message(message):
+                return "\033[{}m{}\033[0m".format(color, message)
+
+ setattr(self, attr, color_message)
+ return color_message
+
+ def bold(self, message):
+ return self.with_code('01', message)
+
+    def with_code(self, code, message):
+        return "\033[{}m{}\033[0m".format(code, message)
+
+
+class ArgsParser(ArgumentParser):
+ def __init__(self):
+ super(ArgsParser, self).__init__(
+ formatter_class=RawDescriptionHelpFormatter)
+ self.add_argument("-c", "--config", help="configuration file to use")
+ self.add_argument(
+ "-o", "--opt", nargs='*', help="set configuration options")
+
+ def parse_args(self, argv=None):
+ args = super(ArgsParser, self).parse_args(argv)
+ assert args.config is not None, \
+ "Please specify --config=configure_file_path."
+ args.opt = self._parse_opt(args.opt)
+ return args
+
+ def _parse_opt(self, opts):
+ config = {}
+ if not opts:
+ return config
+ for s in opts:
+ s = s.strip()
+ k, v = s.split('=', 1)
+ if '.' not in k:
+ config[k] = yaml.load(v, Loader=yaml.Loader)
+ else:
+ keys = k.split('.')
+ if keys[0] not in config:
+ config[keys[0]] = {}
+ cur = config[keys[0]]
+ for idx, key in enumerate(keys[1:]):
+ if idx == len(keys) - 2:
+ cur[key] = yaml.load(v, Loader=yaml.Loader)
+ else:
+ cur[key] = {}
+ cur = cur[key]
+ return config
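+
+# Example (illustrative): `-o use_gpu=true TrainReader.batch_size=8` parses to
+#   {'use_gpu': True, 'TrainReader': {'batch_size': 8}}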
+
+
+def print_total_cfg(config):
+ modules = get_registered_modules()
+ color_tty = ColorTTY()
+ green = '___{}___'.format(color_tty.colors.index('green') + 31)
+
+ styled = {}
+ for key in config.keys():
+ if not config[key]: # empty schema
+ continue
+
+ if key not in modules and not hasattr(config[key], '__dict__'):
+ styled[key] = config[key]
+ continue
+ elif key in modules:
+ module = modules[key]
+ else:
+ type_name = type(config[key]).__name__
+ if type_name in modules:
+ module = modules[type_name].copy()
+ module.update({
+ k: v
+ for k, v in config[key].__dict__.items()
+ if k in module.schema
+ })
+ key += " ({})".format(type_name)
+ default = module.find_default_keys()
+ missing = module.find_missing_keys()
+ mismatch = module.find_mismatch_keys()
+ extra = module.find_extra_keys()
+ dep_missing = []
+ for dep in module.inject:
+ if isinstance(module[dep], str) and module[dep] != '':
+ if module[dep] not in modules: # not a valid module
+ dep_missing.append(dep)
+ else:
+ dep_mod = modules[module[dep]]
+ # empty dict but mandatory
+ if not dep_mod and dep_mod.mandatory():
+ dep_missing.append(dep)
+ override = list(
+ set(module.keys()) - set(default) - set(extra) - set(dep_missing))
+ replacement = {}
+ for name in set(override + default + extra + mismatch + missing):
+ new_name = name
+            if name in missing:
+                value = "<missing>"
+            else:
+                value = module[name]
+
+            if name in extra:
+                value = dump_value(value) + " <extraneous>"
+            elif name in mismatch:
+                value = dump_value(value) + " <mismatch>"
+            elif name in dep_missing:
+                value = dump_value(value) + " <module config missing>"
+            elif name in override and value != '':
+                mark = green
+                new_name = mark + name
+            replacement[new_name] = value
+        styled[key] = replacement
+    buffer = yaml.dump(styled, default_flow_style=False, default_style='')
+    buffer = (re.sub(r"<missing>", r"\033[31m<missing>\033[0m", buffer))
+    buffer = (re.sub(r"<extraneous>", r"\033[33m<extraneous>\033[0m", buffer))
+    buffer = (re.sub(r"<mismatch>", r"\033[31m<mismatch>\033[0m", buffer))
+    buffer = (re.sub(r"<module config missing>",
+                     r"\033[31m<module config missing>\033[0m", buffer))
+    buffer = re.sub(r"___(\d+)___(.*?):", r"\033[\1m\2\033[0m:", buffer)
+    print(buffer)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/colormap.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/colormap.py
new file mode 100644
index 000000000..a9cdbe891
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/colormap.py
@@ -0,0 +1,58 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+import numpy as np
+
+
+def colormap(rgb=False):
+ """
+ Get colormap
+
+ The code of this function is copied from https://github.com/facebookresearch/Detectron/blob/main/detectron/utils/colormap.py
+ """
+ color_list = np.array([
+ 0.000, 0.447, 0.741, 0.850, 0.325, 0.098, 0.929, 0.694, 0.125, 0.494,
+ 0.184, 0.556, 0.466, 0.674, 0.188, 0.301, 0.745, 0.933, 0.635, 0.078,
+ 0.184, 0.300, 0.300, 0.300, 0.600, 0.600, 0.600, 1.000, 0.000, 0.000,
+ 1.000, 0.500, 0.000, 0.749, 0.749, 0.000, 0.000, 1.000, 0.000, 0.000,
+ 0.000, 1.000, 0.667, 0.000, 1.000, 0.333, 0.333, 0.000, 0.333, 0.667,
+ 0.000, 0.333, 1.000, 0.000, 0.667, 0.333, 0.000, 0.667, 0.667, 0.000,
+ 0.667, 1.000, 0.000, 1.000, 0.333, 0.000, 1.000, 0.667, 0.000, 1.000,
+ 1.000, 0.000, 0.000, 0.333, 0.500, 0.000, 0.667, 0.500, 0.000, 1.000,
+ 0.500, 0.333, 0.000, 0.500, 0.333, 0.333, 0.500, 0.333, 0.667, 0.500,
+ 0.333, 1.000, 0.500, 0.667, 0.000, 0.500, 0.667, 0.333, 0.500, 0.667,
+ 0.667, 0.500, 0.667, 1.000, 0.500, 1.000, 0.000, 0.500, 1.000, 0.333,
+ 0.500, 1.000, 0.667, 0.500, 1.000, 1.000, 0.500, 0.000, 0.333, 1.000,
+ 0.000, 0.667, 1.000, 0.000, 1.000, 1.000, 0.333, 0.000, 1.000, 0.333,
+ 0.333, 1.000, 0.333, 0.667, 1.000, 0.333, 1.000, 1.000, 0.667, 0.000,
+ 1.000, 0.667, 0.333, 1.000, 0.667, 0.667, 1.000, 0.667, 1.000, 1.000,
+ 1.000, 0.000, 1.000, 1.000, 0.333, 1.000, 1.000, 0.667, 1.000, 0.167,
+ 0.000, 0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000,
+ 0.000, 0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000,
+ 0.000, 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000,
+ 0.833, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.167, 0.000, 0.000,
+ 0.333, 0.000, 0.000, 0.500, 0.000, 0.000, 0.667, 0.000, 0.000, 0.833,
+ 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.143, 0.143, 0.143, 0.286,
+ 0.286, 0.286, 0.429, 0.429, 0.429, 0.571, 0.571, 0.571, 0.714, 0.714,
+ 0.714, 0.857, 0.857, 0.857, 1.000, 1.000, 1.000
+ ]).astype(np.float32)
+ color_list = color_list.reshape((-1, 3)) * 255
+ if not rgb:
+ color_list = color_list[:, ::-1]
+ return color_list
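+
+
+# Usage note: the function returns an (N, 3) array of 0-255 colors, BGR by
+# default; colormap(rgb=True)[0] is approximately [0., 114., 189.].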
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/download.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/download.py
new file mode 100644
index 000000000..54c19c6cf
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/download.py
@@ -0,0 +1,557 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import os.path as osp
+import sys
+import yaml
+import time
+import shutil
+import requests
+import tqdm
+import hashlib
+import base64
+import binascii
+import tarfile
+import zipfile
+
+from paddle.utils.download import _get_unique_endpoints
+from ppdet.core.workspace import BASE_KEY
+from .logger import setup_logger
+from .voc_utils import create_list
+
+logger = setup_logger(__name__)
+
+__all__ = [
+ 'get_weights_path', 'get_dataset_path', 'get_config_path',
+ 'download_dataset', 'create_voc_list'
+]
+
+WEIGHTS_HOME = osp.expanduser("~/.cache/paddle/weights")
+DATASET_HOME = osp.expanduser("~/.cache/paddle/dataset")
+CONFIGS_HOME = osp.expanduser("~/.cache/paddle/configs")
+
+# dict of {dataset_name: (download_info, sub_dirs)}
+# download info: [(url, md5sum)]
+DATASETS = {
+ 'coco': ([
+ (
+ 'http://images.cocodataset.org/zips/train2017.zip',
+ 'cced6f7f71b7629ddf16f17bbcfab6b2', ),
+ (
+ 'http://images.cocodataset.org/zips/val2017.zip',
+ '442b8da7639aecaf257c1dceb8ba8c80', ),
+ (
+ 'http://images.cocodataset.org/annotations/annotations_trainval2017.zip',
+ 'f4bbac642086de4f52a3fdda2de5fa2c', ),
+ ], ["annotations", "train2017", "val2017"]),
+ 'voc': ([
+ (
+ 'http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar',
+ '6cd6e144f989b92b3379bac3b3de84fd', ),
+ (
+ 'http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar',
+ 'c52e279531787c972589f7e41ab4ae64', ),
+ (
+ 'http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar',
+ 'b6e924de25625d8de591ea690078ad9f', ),
+ (
+ 'https://paddledet.bj.bcebos.com/data/label_list.txt',
+ '5ae5d62183cfb6f6d3ac109359d06a1b', ),
+ ], ["VOCdevkit/VOC2012", "VOCdevkit/VOC2007"]),
+ 'wider_face': ([
+ (
+ 'https://dataset.bj.bcebos.com/wider_face/WIDER_train.zip',
+ '3fedf70df600953d25982bcd13d91ba2', ),
+ (
+ 'https://dataset.bj.bcebos.com/wider_face/WIDER_val.zip',
+ 'dfa7d7e790efa35df3788964cf0bbaea', ),
+ (
+ 'https://dataset.bj.bcebos.com/wider_face/wider_face_split.zip',
+ 'a4a898d6193db4b9ef3260a68bad0dc7', ),
+ ], ["WIDER_train", "WIDER_val", "wider_face_split"]),
+ 'fruit': ([(
+ 'https://dataset.bj.bcebos.com/PaddleDetection_demo/fruit.tar',
+ 'baa8806617a54ccf3685fa7153388ae6', ), ],
+ ['Annotations', 'JPEGImages']),
+ 'roadsign_voc': ([(
+ 'https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar',
+ '8d629c0f880dd8b48de9aeff44bf1f3e', ), ], ['annotations', 'images']),
+ 'roadsign_coco': ([(
+ 'https://paddlemodels.bj.bcebos.com/object_detection/roadsign_coco.tar',
+ '49ce5a9b5ad0d6266163cd01de4b018e', ), ], ['annotations', 'images']),
+ 'spine_coco': ([(
+ 'https://paddledet.bj.bcebos.com/data/spine_coco.tar',
+ '7ed69ae73f842cd2a8cf4f58dc3c5535', ), ], ['annotations', 'images']),
+ 'mot': (),
+ 'objects365': (),
+ 'coco_ce': ([(
+ 'https://paddledet.bj.bcebos.com/data/coco_ce.tar',
+ 'eadd1b79bc2f069f2744b1dd4e0c0329', ), ], [])
+}
+
+DOWNLOAD_RETRY_LIMIT = 3
+
+PPDET_WEIGHTS_DOWNLOAD_URL_PREFIX = 'https://paddledet.bj.bcebos.com/'
+
+
+def parse_url(url):
+ url = url.replace("ppdet://", PPDET_WEIGHTS_DOWNLOAD_URL_PREFIX)
+ return url
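+# Illustrative example (the model path below is hypothetical):
+#   parse_url("ppdet://models/picodet.pdparams")
+#   -> "https://paddledet.bj.bcebos.com/models/picodet.pdparams"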
+
+
+def get_weights_path(url):
+ """Get weights path from WEIGHTS_HOME, if not exists,
+ download it from url.
+ """
+ url = parse_url(url)
+ path, _ = get_path(url, WEIGHTS_HOME)
+ return path
+
+
+def get_config_path(url):
+ """Get weights path from CONFIGS_HOME, if not exists,
+ download it from url.
+ """
+ url = parse_url(url)
+ path = map_path(url, CONFIGS_HOME, path_depth=2)
+ if os.path.isfile(path):
+ return path
+
+ # config file not found, try download
+ # 1. clear configs directory
+ if osp.isdir(CONFIGS_HOME):
+ shutil.rmtree(CONFIGS_HOME)
+
+ # 2. get url
+ try:
+ from ppdet import __version__ as version
+ except ImportError:
+ version = None
+
+ cfg_url = "ppdet://configs/{}/configs.tar".format(version) \
+ if version else "ppdet://configs/configs.tar"
+ cfg_url = parse_url(cfg_url)
+
+ # 3. download and decompress
+ cfg_fullname = _download_dist(cfg_url, osp.dirname(CONFIGS_HOME))
+ _decompress_dist(cfg_fullname)
+
+ # 4. check that the config file exists
+ if os.path.isfile(path):
+ return path
+ else:
+ logger.error("Get config {} failed after download, please contact us on " \
+ "https://github.com/PaddlePaddle/PaddleDetection/issues".format(path))
+ sys.exit(1)
+
+
+def get_dataset_path(path, annotation, image_dir):
+ """
+ If path exists, return path.
+ Otherwise, get the dataset path from DATASET_HOME; if it does not
+ exist, download it.
+ """
+ if _dataset_exists(path, annotation, image_dir):
+ return path
+
+ logger.info("Dataset {} is not valid for reason above, try searching {} or "
+ "downloading dataset...".format(
+ osp.realpath(path), DATASET_HOME))
+
+ data_name = os.path.split(path.strip().lower())[-1]
+ for name, dataset in DATASETS.items():
+ if data_name == name:
+ logger.debug("Parse dataset_dir {} as dataset "
+ "{}".format(path, name))
+ if name == 'objects365':
+ raise NotImplementedError(
+ "Dataset {} cannot be downloaded automatically. "
+ "Please apply and download the dataset from "
+ "https://www.objects365.org/download.html".format(name))
+ data_dir = osp.join(DATASET_HOME, name)
+
+ if name == 'mot':
+ if osp.exists(path) or osp.exists(data_dir):
+ return data_dir
+ else:
+ raise NotImplementedError(
+ "Dataset {} cannot be downloaded automatically. "
+ "Please apply and download the dataset following docs/tutorials/PrepareMOTDataSet.md".
+ format(name))
+
+ if name == "spine_coco":
+ if _dataset_exists(data_dir, annotation, image_dir):
+ return data_dir
+
+ # For voc, only check dir VOCdevkit/VOC2012, VOCdevkit/VOC2007
+ if name in ['voc', 'fruit', 'roadsign_voc']:
+ exists = True
+ for sub_dir in dataset[1]:
+ check_dir = osp.join(data_dir, sub_dir)
+ if osp.exists(check_dir):
+ logger.info("Found {}".format(check_dir))
+ else:
+ exists = False
+ if exists:
+ return data_dir
+
+ # existence of voc-style datasets was checked above; they do not exist here
+ check_exist = name != 'voc' and name != 'fruit' and name != 'roadsign_voc'
+ for url, md5sum in dataset[0]:
+ get_path(url, data_dir, md5sum, check_exist)
+
+ # voc needs its file lists to be created after download
+ if name == 'voc':
+ create_voc_list(data_dir)
+ return data_dir
+
+ # not match any dataset in DATASETS
+ raise ValueError(
+ "Dataset {} is not valid and cannot parse dataset type "
+ "'{}' for automatic downloading, which only supports "
+ "'voc', 'coco', 'wider_face', 'fruit', 'roadsign_voc' and 'mot' currently".
+ format(path, osp.split(path)[-1]))
+
+
+def create_voc_list(data_dir, devkit_subdir='VOCdevkit'):
+ logger.debug("Create voc file list...")
+ devkit_dir = osp.join(data_dir, devkit_subdir)
+ years = ['2007', '2012']
+
+ # NOTE: when the VOC dataset is auto-downloaded, the default
+ # VOC label list should be used, so label_list.txt is not
+ # generated here. For the default labels,
+ # see ../data/source/voc.py
+ create_list(devkit_dir, years, data_dir)
+ logger.debug("Create voc file list finished")
+
+
+def map_path(url, root_dir, path_depth=1):
+ # parse path after download to decompress under root_dir
+ assert path_depth > 0, "path_depth should be a positive integer"
+ dirname = url
+ for _ in range(path_depth):
+ dirname = osp.dirname(dirname)
+ fpath = osp.relpath(url, dirname)
+
+ zip_formats = ['.zip', '.tar', '.gz']
+ for zip_format in zip_formats:
+ fpath = fpath.replace(zip_format, '')
+ return osp.join(root_dir, fpath)
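+# Illustrative examples of the mapping (URLs are hypothetical):
+#   map_path("https://bj.bcebos.com/weights/model.pdparams", WEIGHTS_HOME)
+#   -> osp.join(WEIGHTS_HOME, "model.pdparams")
+#   map_path("https://bj.bcebos.com/configs/v2/configs.tar", CONFIGS_HOME, path_depth=2)
+#   -> osp.join(CONFIGS_HOME, "v2/configs")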
+
+
+def get_path(url, root_dir, md5sum=None, check_exist=True):
+ """ Download from given url to root_dir.
+ If the file or directory specified by url exists under
+ root_dir, return the path directly; otherwise download
+ from url, decompress it, and return the path.
+
+ url (str): download url
+ root_dir (str): root dir for downloading, it should be
+ WEIGHTS_HOME or DATASET_HOME
+ md5sum (str): md5 sum of download package
+ """
+ # parse path after download to decompress under root_dir
+ fullpath = map_path(url, root_dir)
+
+ # For some archives, the decompressed directory name differs
+ # from the archive file name; rename using the following map
+ decompress_name_map = {
+ "VOCtrainval_11-May-2012": "VOCdevkit/VOC2012",
+ "VOCtrainval_06-Nov-2007": "VOCdevkit/VOC2007",
+ "VOCtest_06-Nov-2007": "VOCdevkit/VOC2007",
+ "annotations_trainval": "annotations"
+ }
+ for k, v in decompress_name_map.items():
+ if fullpath.find(k) >= 0:
+ fullpath = osp.join(osp.split(fullpath)[0], v)
+
+ if osp.exists(fullpath) and check_exist:
+ if not osp.isfile(fullpath) or \
+ _check_exist_file_md5(fullpath, md5sum, url):
+ logger.debug("Found {}".format(fullpath))
+ return fullpath, True
+ else:
+ os.remove(fullpath)
+
+ fullname = _download_dist(url, root_dir, md5sum)
+
+ # files in the new weights format ('.pdparams') and config
+ # files ('.yml') do not need to be decompressed
+ if osp.splitext(fullname)[-1] not in ['.pdparams', '.yml']:
+ _decompress_dist(fullname)
+
+ return fullpath, False
+
+
+def download_dataset(path, dataset=None):
+ if dataset not in DATASETS.keys():
+ logger.error("Unknown dataset {}, it should be "
+ "{}".format(dataset, DATASETS.keys()))
+ return
+ dataset_info = DATASETS[dataset][0]
+ for info in dataset_info:
+ get_path(info[0], path, info[1], False)
+ logger.debug("Download dataset {} finished.".format(dataset))
+
+
+def _dataset_exists(path, annotation, image_dir):
+ """
+ Check if the user-defined dataset exists
+ """
+ if not osp.exists(path):
+ logger.warning("Config dataset_dir {} is not exits, "
+ "dataset config is not valid".format(path))
+ return False
+ if annotation:
+ annotation_path = osp.join(path, annotation)
+ if not osp.isfile(annotation_path):
+ logger.warning("Config annotation {} is not a "
+ "file, dataset config is not "
+ "valid".format(annotation_path))
+ return False
+ if image_dir:
+ image_path = osp.join(path, image_dir)
+ if not osp.isdir(image_path):
+ logger.warning("Config image_dir {} is not a "
+ "directory, dataset config is not "
+ "valid".format(image_path))
+ return False
+ return True
+
+
+def _download(url, path, md5sum=None):
+ """
+ Download from url, save to path.
+
+ url (str): download url
+ path (str): download to given path
+ """
+ if not osp.exists(path):
+ os.makedirs(path)
+
+ fname = osp.split(url)[-1]
+ fullname = osp.join(path, fname)
+ retry_cnt = 0
+
+ while not (osp.exists(fullname) and _check_exist_file_md5(fullname, md5sum,
+ url)):
+ if retry_cnt < DOWNLOAD_RETRY_LIMIT:
+ retry_cnt += 1
+ else:
+ raise RuntimeError("Download from {} failed. "
+ "Retry limit reached".format(url))
+
+ logger.info("Downloading {} from {}".format(fname, url))
+
+ # NOTE: windows path join may introduce '\\', which is invalid in a url
+ if sys.platform == "win32":
+ url = url.replace('\\', '/')
+
+ req = requests.get(url, stream=True)
+ if req.status_code != 200:
+ raise RuntimeError("Downloading from {} failed with code "
+ "{}!".format(url, req.status_code))
+
+ # To protect against interrupted downloads, download to
+ # tmp_fullname first, then move tmp_fullname to fullname
+ # after the download finishes
+ tmp_fullname = fullname + "_tmp"
+ total_size = req.headers.get('content-length')
+ with open(tmp_fullname, 'wb') as f:
+ if total_size:
+ for chunk in tqdm.tqdm(
+ req.iter_content(chunk_size=1024),
+ total=(int(total_size) + 1023) // 1024,
+ unit='KB'):
+ f.write(chunk)
+ else:
+ for chunk in req.iter_content(chunk_size=1024):
+ if chunk:
+ f.write(chunk)
+ shutil.move(tmp_fullname, fullname)
+ return fullname
+
+
+def _download_dist(url, path, md5sum=None):
+ env = os.environ
+ if 'PADDLE_TRAINERS_NUM' in env and 'PADDLE_TRAINER_ID' in env:
+ trainer_id = int(env['PADDLE_TRAINER_ID'])
+ num_trainers = int(env['PADDLE_TRAINERS_NUM'])
+ if num_trainers <= 1:
+ return _download(url, path, md5sum)
+ else:
+ fname = osp.split(url)[-1]
+ fullname = osp.join(path, fname)
+ lock_path = fullname + '.download.lock'
+
+ if not osp.isdir(path):
+ os.makedirs(path)
+
+ if not osp.exists(fullname):
+ from paddle.distributed import ParallelEnv
+ unique_endpoints = _get_unique_endpoints(ParallelEnv()
+ .trainer_endpoints[:])
+ with open(lock_path, 'w'): # touch
+ os.utime(lock_path, None)
+ if ParallelEnv().current_endpoint in unique_endpoints:
+ _download(url, path, md5sum)
+ os.remove(lock_path)
+ else:
+ while os.path.exists(lock_path):
+ time.sleep(0.5)
+ return fullname
+ else:
+ return _download(url, path, md5sum)
+
+
+def _check_exist_file_md5(filename, md5sum, url):
+ # if md5sum is None and the file to check is a weights file,
+ # read md5sum from the url and check; otherwise check md5sum directly
+ return _md5check_from_url(filename, url) if md5sum is None \
+ and filename.endswith('pdparams') \
+ else _md5check(filename, md5sum)
+
+
+def _md5check_from_url(filename, url):
+ # For weights in bcebos URLs, the MD5 value is contained
+ # in the request header as 'content-md5'
+ req = requests.get(url, stream=True)
+ content_md5 = req.headers.get('content-md5')
+ req.close()
+ if not content_md5 or _md5check(
+ filename,
+ binascii.hexlify(base64.b64decode(content_md5.strip('"'))).decode(
+ )):
+ return True
+ else:
+ return False
+
+
+def _md5check(fullname, md5sum=None):
+ if md5sum is None:
+ return True
+
+ logger.debug("File {} md5 checking...".format(fullname))
+ md5 = hashlib.md5()
+ with open(fullname, 'rb') as f:
+ for chunk in iter(lambda: f.read(4096), b""):
+ md5.update(chunk)
+ calc_md5sum = md5.hexdigest()
+
+ if calc_md5sum != md5sum:
+ logger.warning("File {} md5 check failed, {}(calc) != "
+ "{}(base)".format(fullname, calc_md5sum, md5sum))
+ return False
+ return True
+
+
+def _decompress(fname):
+ """
+ Decompress for zip and tar file
+ """
+ logger.info("Decompressing {}...".format(fname))
+
+ # To protect against interrupted decompression, decompress
+ # into the fpath_tmp directory first; if decompression
+ # succeeds, move the decompressed files to fpath, delete
+ # fpath_tmp and remove the downloaded archive.
+ fpath = osp.split(fname)[0]
+ fpath_tmp = osp.join(fpath, 'tmp')
+ if osp.isdir(fpath_tmp):
+ shutil.rmtree(fpath_tmp)
+ os.makedirs(fpath_tmp)
+
+ if fname.find('tar') >= 0:
+ with tarfile.open(fname) as tf:
+ tf.extractall(path=fpath_tmp)
+ elif fname.find('zip') >= 0:
+ with zipfile.ZipFile(fname) as zf:
+ zf.extractall(path=fpath_tmp)
+ elif fname.find('.txt') >= 0:
+ return
+ else:
+ raise TypeError("Unsupport compress file type {}".format(fname))
+
+ for f in os.listdir(fpath_tmp):
+ src_dir = osp.join(fpath_tmp, f)
+ dst_dir = osp.join(fpath, f)
+ _move_and_merge_tree(src_dir, dst_dir)
+
+ shutil.rmtree(fpath_tmp)
+ os.remove(fname)
+
+
+def _decompress_dist(fname):
+ env = os.environ
+ if 'PADDLE_TRAINERS_NUM' in env and 'PADDLE_TRAINER_ID' in env:
+ trainer_id = int(env['PADDLE_TRAINER_ID'])
+ num_trainers = int(env['PADDLE_TRAINERS_NUM'])
+ if num_trainers <= 1:
+ _decompress(fname)
+ else:
+ lock_path = fname + '.decompress.lock'
+ from paddle.distributed import ParallelEnv
+ unique_endpoints = _get_unique_endpoints(ParallelEnv()
+ .trainer_endpoints[:])
+ # NOTE(dkp): _decompress_dist is always performed after
+ # _download_dist, where sub-trainers wait for the download
+ # lock file to be released by sleeping. If decompression is
+ # very fast and finishes within the sleeping gap (e.g. for
+ # tiny datasets such as coco_ce or spine_coco), the main
+ # trainer may finish decompressing and release the lock file
+ # before sub-trainers check it. So we only create the lock
+ # file in the main trainer, and all sub-trainers wait 1s for
+ # the main trainer to create it; since 1s is twice the
+ # sleeping gap, this keeps all trainer pipelines in order
+ # **change this if you have a more elegant method**
+ if ParallelEnv().current_endpoint in unique_endpoints:
+ with open(lock_path, 'w'): # touch
+ os.utime(lock_path, None)
+ _decompress(fname)
+ os.remove(lock_path)
+ else:
+ time.sleep(1)
+ while os.path.exists(lock_path):
+ time.sleep(0.5)
+ else:
+ _decompress(fname)
+
+
+def _move_and_merge_tree(src, dst):
+ """
+ Move the src directory to dst; if dst already exists,
+ merge src into dst
+ """
+ if not osp.exists(dst):
+ shutil.move(src, dst)
+ elif osp.isfile(src):
+ shutil.move(src, dst)
+ else:
+ for fp in os.listdir(src):
+ src_fp = osp.join(src, fp)
+ dst_fp = osp.join(dst, fp)
+ if osp.isdir(src_fp):
+ if osp.isdir(dst_fp):
+ _move_and_merge_tree(src_fp, dst_fp)
+ else:
+ shutil.move(src_fp, dst_fp)
+ elif osp.isfile(src_fp) and \
+ not osp.isfile(dst_fp):
+ shutil.move(src_fp, dst_fp)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/logger.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/logger.py
new file mode 100644
index 000000000..51e296205
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/logger.py
@@ -0,0 +1,70 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import os
+import sys
+
+import paddle.distributed as dist
+
+__all__ = ['setup_logger']
+
+logger_initialized = []
+
+
+def setup_logger(name="ppdet", output=None):
+ """
+ Initialize logger and set its verbosity level to INFO.
+ Args:
+ output (str): a file name or a directory to save log. If None, will not save log file.
+ If ends with ".txt" or ".log", assumed to be a file name.
+ Otherwise, logs will be saved to `output/log.txt`.
+ name (str): the root module name of this logger
+
+ Returns:
+ logging.Logger: a logger
+ """
+ logger = logging.getLogger(name)
+ if name in logger_initialized:
+ return logger
+
+ logger.setLevel(logging.INFO)
+ logger.propagate = False
+
+ formatter = logging.Formatter(
+ "[%(asctime)s] %(name)s %(levelname)s: %(message)s",
+ datefmt="%m/%d %H:%M:%S")
+ # stdout logging: master only
+ local_rank = dist.get_rank()
+ if local_rank == 0:
+ ch = logging.StreamHandler(stream=sys.stdout)
+ ch.setLevel(logging.DEBUG)
+ ch.setFormatter(formatter)
+ logger.addHandler(ch)
+
+ # file logging: all workers
+ if output is not None:
+ if output.endswith(".txt") or output.endswith(".log"):
+ filename = output
+ else:
+ filename = os.path.join(output, "log.txt")
+ if local_rank > 0:
+ filename = filename + ".rank{}".format(local_rank)
+ os.makedirs(os.path.dirname(filename), exist_ok=True)
+ fh = logging.FileHandler(filename, mode='a')
+ fh.setLevel(logging.DEBUG)
+ fh.setFormatter(logging.Formatter())
+ logger.addHandler(fh)
+ logger_initialized.append(name)
+ return logger
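+# Illustrative usage (the log file name is an example only):
+#   logger = setup_logger(__name__, output="output/worklog.txt")
+#   logger.info("start processing")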
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/profiler.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/profiler.py
new file mode 100644
index 000000000..cae3773fa
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/profiler.py
@@ -0,0 +1,111 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import paddle
+
+# A global variable to record the number of times the profiler functions
+# have been called. It is used to specify the tracing range of training steps.
+_profiler_step_id = 0
+
+# A global variable to avoid parsing from string every time.
+_profiler_options = None
+
+
+class ProfilerOptions(object):
+ '''
+ Use a string to initialize a ProfilerOptions.
+ The string should be in the format: "key1=value1;key2=value2;key3=value3".
+ For example:
+ "profile_path=model.profile"
+ "batch_range=[50, 60]; profile_path=model.profile"
+ "batch_range=[50, 60]; tracer_option=OpDetail; profile_path=model.profile"
+
+ ProfilerOptions supports the following key-value pairs:
+ batch_range - an integer list, e.g. [100, 110].
+ state - a string, the optional values are 'CPU', 'GPU' or 'All'.
+ sorted_key - a string, the optional values are 'calls', 'total',
+ 'max', 'min' or 'ave'.
+ tracer_option - a string, the optional values are 'Default', 'OpDetail',
+ 'AllOpDetail'.
+ profile_path - a string, the path to save the serialized profile data,
+ which can be used to generate a timeline.
+ exit_on_finished - a boolean.
+ '''
+
+ def __init__(self, options_str):
+ assert isinstance(options_str, str)
+
+ self._options = {
+ 'batch_range': [10, 20],
+ 'state': 'All',
+ 'sorted_key': 'total',
+ 'tracer_option': 'Default',
+ 'profile_path': '/tmp/profile',
+ 'exit_on_finished': True
+ }
+ self._parse_from_string(options_str)
+
+ def _parse_from_string(self, options_str):
+ for kv in options_str.replace(' ', '').split(';'):
+ key, value = kv.split('=')
+ if key == 'batch_range':
+ value_list = value.replace('[', '').replace(']', '').split(',')
+ value_list = list(map(int, value_list))
+ if len(value_list) >= 2 and value_list[0] >= 0 and value_list[
+ 1] > value_list[0]:
+ self._options[key] = value_list
+ elif key == 'exit_on_finished':
+ self._options[key] = value.lower() in ("yes", "true", "t", "1")
+ elif key in [
+ 'state', 'sorted_key', 'tracer_option', 'profile_path'
+ ]:
+ self._options[key] = value
+
+ def __getitem__(self, name):
+ if self._options.get(name, None) is None:
+ raise ValueError(
+ "ProfilerOptions does not have an option named %s." % name)
+ return self._options[name]
+
+
+def add_profiler_step(options_str=None):
+ '''
+ Enable the operator-level timing using PaddlePaddle's profiler.
+ The profiler uses an independent variable to count the profiler steps.
+ One call of this function is treated as a profiler step.
+
+ Args:
+ options_str - a string used to initialize the ProfilerOptions.
+ Default is None, in which case the profiler is disabled.
+ '''
+ if options_str is None:
+ return
+
+ global _profiler_step_id
+ global _profiler_options
+
+ if _profiler_options is None:
+ _profiler_options = ProfilerOptions(options_str)
+
+ if _profiler_step_id == _profiler_options['batch_range'][0]:
+ paddle.utils.profiler.start_profiler(_profiler_options['state'],
+ _profiler_options['tracer_option'])
+ elif _profiler_step_id == _profiler_options['batch_range'][1]:
+ paddle.utils.profiler.stop_profiler(_profiler_options['sorted_key'],
+ _profiler_options['profile_path'])
+ if _profiler_options['exit_on_finished']:
+ sys.exit(0)
+
+ _profiler_step_id += 1
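+# Illustrative usage in a training loop (the options string and loader
+# name are examples only):
+#   for step_id, data in enumerate(train_loader):
+#       add_profiler_step("batch_range=[10, 20]; profile_path=model.profile")
+#       ...  # forward / backward / optimizer step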
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/stats.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/stats.py
new file mode 100644
index 000000000..4cd36d91c
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/stats.py
@@ -0,0 +1,94 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collections
+import numpy as np
+
+__all__ = ['SmoothedValue', 'TrainingStats']
+
+
+class SmoothedValue(object):
+ """Track a series of values and provide access to smoothed values over a
+ window or the global series average.
+ """
+
+ def __init__(self, window_size=20, fmt=None):
+ if fmt is None:
+ fmt = "{median:.4f} ({avg:.4f})"
+ self.deque = collections.deque(maxlen=window_size)
+ self.fmt = fmt
+ self.total = 0.
+ self.count = 0
+
+ def update(self, value, n=1):
+ self.deque.append(value)
+ self.count += n
+ self.total += value * n
+
+ @property
+ def median(self):
+ return np.median(self.deque)
+
+ @property
+ def avg(self):
+ return np.mean(self.deque)
+
+ @property
+ def max(self):
+ return np.max(self.deque)
+
+ @property
+ def value(self):
+ return self.deque[-1]
+
+ @property
+ def global_avg(self):
+ return self.total / self.count
+
+ def __str__(self):
+ return self.fmt.format(
+ median=self.median, avg=self.avg, max=self.max, value=self.value)
+
+
+class TrainingStats(object):
+ def __init__(self, window_size, delimiter=' '):
+ self.meters = None
+ self.window_size = window_size
+ self.delimiter = delimiter
+
+ def update(self, stats):
+ if self.meters is None:
+ self.meters = {
+ k: SmoothedValue(self.window_size)
+ for k in stats.keys()
+ }
+ for k, v in self.meters.items():
+ v.update(stats[k].numpy())
+
+ def get(self, extras=None):
+ stats = collections.OrderedDict()
+ if extras:
+ for k, v in extras.items():
+ stats[k] = v
+ for k, v in self.meters.items():
+ stats[k] = format(v.median, '.6f')
+
+ return stats
+
+ def log(self, extras=None):
+ d = self.get(extras)
+ strs = []
+ for k, v in d.items():
+ strs.append("{}: {}".format(k, str(v)))
+ return self.delimiter.join(strs)
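+# Illustrative usage (keys are examples; values must be paddle tensors so
+# that SmoothedValue.update can call .numpy() on them):
+#   stats = TrainingStats(window_size=20)
+#   stats.update({'loss': loss})            # once per iteration
+#   logger.info(stats.log({'lr': 0.001}))   # -> "lr: 0.001 loss: 0.123456"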
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/visualizer.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/visualizer.py
new file mode 100644
index 000000000..fdfd966e2
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/visualizer.py
@@ -0,0 +1,321 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+import numpy as np
+from PIL import Image, ImageDraw
+import cv2
+import math
+
+from .colormap import colormap
+from ppdet.utils.logger import setup_logger
+logger = setup_logger(__name__)
+
+__all__ = ['visualize_results']
+
+
+def visualize_results(image,
+ bbox_res,
+ mask_res,
+ segm_res,
+ keypoint_res,
+ im_id,
+ catid2name,
+ threshold=0.5):
+ """
+ Visualize bbox and mask results
+ """
+ if bbox_res is not None:
+ image = draw_bbox(image, im_id, catid2name, bbox_res, threshold)
+ if mask_res is not None:
+ image = draw_mask(image, im_id, mask_res, threshold)
+ if segm_res is not None:
+ image = draw_segm(image, im_id, catid2name, segm_res, threshold)
+ if keypoint_res is not None:
+ image = draw_pose(image, keypoint_res, threshold)
+ return image
+
+
+def draw_mask(image, im_id, segms, threshold, alpha=0.7):
+ """
+ Draw mask on image
+ """
+ mask_color_id = 0
+ w_ratio = .4
+ color_list = colormap(rgb=True)
+ img_array = np.array(image).astype('float32')
+ for dt in np.array(segms):
+ if im_id != dt['image_id']:
+ continue
+ segm, score = dt['segmentation'], dt['score']
+ if score < threshold:
+ continue
+ import pycocotools.mask as mask_util
+ mask = mask_util.decode(segm) * 255
+ color_mask = color_list[mask_color_id % len(color_list), 0:3]
+ mask_color_id += 1
+ for c in range(3):
+ color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
+ idx = np.nonzero(mask)
+ img_array[idx[0], idx[1], :] *= 1.0 - alpha
+ img_array[idx[0], idx[1], :] += alpha * color_mask
+ return Image.fromarray(img_array.astype('uint8'))
+
+
+def draw_bbox(image, im_id, catid2name, bboxes, threshold):
+ """
+ Draw bbox on image
+ """
+ draw = ImageDraw.Draw(image)
+
+ catid2color = {}
+ color_list = colormap(rgb=True)[:40]
+ for dt in np.array(bboxes):
+ if im_id != dt['image_id']:
+ continue
+ catid, bbox, score = dt['category_id'], dt['bbox'], dt['score']
+ if score < threshold:
+ continue
+
+ if catid not in catid2color:
+ idx = np.random.randint(len(color_list))
+ catid2color[catid] = color_list[idx]
+ color = tuple(catid2color[catid])
+
+ # draw bbox
+ if len(bbox) == 4:
+ # draw bbox
+ xmin, ymin, w, h = bbox
+ xmax = xmin + w
+ ymax = ymin + h
+ draw.line(
+ [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin),
+ (xmin, ymin)],
+ width=2,
+ fill=color)
+ elif len(bbox) == 8:
+ x1, y1, x2, y2, x3, y3, x4, y4 = bbox
+ draw.line(
+ [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x1, y1)],
+ width=2,
+ fill=color)
+ xmin = min(x1, x2, x3, x4)
+ ymin = min(y1, y2, y3, y4)
+ else:
+ logger.error('the shape of bbox must be [M, 4] or [M, 8]!')
+ continue
+
+ # draw label
+ text = "{} {:.2f}".format(catid2name[catid], score)
+ tw, th = draw.textsize(text)
+ draw.rectangle(
+ [(xmin + 1, ymin - th), (xmin + tw + 1, ymin)], fill=color)
+ draw.text((xmin + 1, ymin - th), text, fill=(255, 255, 255))
+
+ return image
+
+
+def save_result(save_path, results, catid2name, threshold):
+ """
+ save result as txt
+ """
+ img_id = int(results["im_id"])
+ with open(save_path, 'w') as f:
+ if "bbox_res" in results:
+ for dt in results["bbox_res"]:
+ catid, bbox, score = dt['category_id'], dt['bbox'], dt['score']
+ if score < threshold:
+ continue
+ # each bbox result as a line
+ # for rbox: classname score x1 y1 x2 y2 x3 y3 x4 y4
+ # for bbox: classname score x1 y1 w h
+ bbox_pred = '{} {} '.format(catid2name[catid],
+ score) + ' '.join(
+ [str(e) for e in bbox])
+ f.write(bbox_pred + '\n')
+ elif "keypoint_res" in results:
+ for dt in results["keypoint_res"]:
+ kpts = dt['keypoints']
+ scores = dt['score']
+ keypoint_pred = [img_id, scores, kpts]
+ print(keypoint_pred, file=f)
+ else:
+ print("No valid results found, skip txt save")
+
+
+def draw_segm(image,
+ im_id,
+ catid2name,
+ segms,
+ threshold,
+ alpha=0.7,
+ draw_box=True):
+ """
+ Draw segmentation on image
+ """
+ mask_color_id = 0
+ w_ratio = .4
+ color_list = colormap(rgb=True)
+ img_array = np.array(image).astype('float32')
+ for dt in np.array(segms):
+ if im_id != dt['image_id']:
+ continue
+ segm, score, catid = dt['segmentation'], dt['score'], dt['category_id']
+ if score < threshold:
+ continue
+ import pycocotools.mask as mask_util
+ mask = mask_util.decode(segm) * 255
+ color_mask = color_list[mask_color_id % len(color_list), 0:3]
+ mask_color_id += 1
+ for c in range(3):
+ color_mask[c] = color_mask[c] * (1 - w_ratio) + w_ratio * 255
+ idx = np.nonzero(mask)
+ img_array[idx[0], idx[1], :] *= 1.0 - alpha
+ img_array[idx[0], idx[1], :] += alpha * color_mask
+
+ if not draw_box:
+ # scipy is needed only in this branch, so import it lazily
+ from scipy import ndimage
+ center_y, center_x = ndimage.measurements.center_of_mass(mask)
+ label_text = "{}".format(catid2name[catid])
+ vis_pos = (max(int(center_x) - 10, 0), int(center_y))
+ cv2.putText(img_array, label_text, vis_pos,
+ cv2.FONT_HERSHEY_COMPLEX, 0.3, (255, 255, 255))
+ else:
+ mask = mask_util.decode(segm) * 255
+ sum_x = np.sum(mask, axis=0)
+ x = np.where(sum_x > 0.5)[0]
+ sum_y = np.sum(mask, axis=1)
+ y = np.where(sum_y > 0.5)[0]
+ x0, x1, y0, y1 = x[0], x[-1], y[0], y[-1]
+ cv2.rectangle(img_array, (x0, y0), (x1, y1),
+ tuple(color_mask.astype('int32').tolist()), 1)
+ bbox_text = '%s %.2f' % (catid2name[catid], score)
+ t_size = cv2.getTextSize(bbox_text, 0, 0.3, thickness=1)[0]
+ cv2.rectangle(img_array, (x0, y0), (x0 + t_size[0],
+ y0 - t_size[1] - 3),
+ tuple(color_mask.astype('int32').tolist()), -1)
+ cv2.putText(
+ img_array,
+ bbox_text, (x0, y0 - 2),
+ cv2.FONT_HERSHEY_SIMPLEX,
+ 0.3, (0, 0, 0),
+ 1,
+ lineType=cv2.LINE_AA)
+
+ return Image.fromarray(img_array.astype('uint8'))
+
+
+def draw_pose(image,
+ results,
+ visual_thread=0.6,
+ save_name='pose.jpg',
+ save_dir='output',
+ returnimg=False,
+ ids=None):
+ try:
+ import matplotlib.pyplot as plt
+ import matplotlib
+ plt.switch_backend('agg')
+ except Exception as e:
+ logger.error('Matplotlib not found, please install matplotlib, '
+ 'for example: `pip install matplotlib`.')
+ raise e
+
+ skeletons = np.array([item['keypoints'] for item in results])
+ kpt_nums = 17
+ if len(skeletons) > 0:
+ kpt_nums = int(skeletons.shape[1] / 3)
+ skeletons = skeletons.reshape(-1, kpt_nums, 3)
+ if kpt_nums == 17: #plot coco keypoint
+ EDGES = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8),
+ (7, 9), (8, 10), (5, 11), (6, 12), (11, 13), (12, 14),
+ (13, 15), (14, 16), (11, 12)]
+ else: #plot mpii keypoint
+ EDGES = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 6), (3, 6), (6, 7), (7, 8),
+ (8, 9), (10, 11), (11, 12), (13, 14), (14, 15), (8, 12),
+ (8, 13)]
+ NUM_EDGES = len(EDGES)
+
+ colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \
+ [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \
+ [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]
+ cmap = matplotlib.cm.get_cmap('hsv')
+ plt.figure()
+
+ img = np.array(image).astype('float32')
+
+ color_set = results['colors'] if 'colors' in results else None
+
+ if 'bbox' in results and ids is None:
+ bboxs = results['bbox']
+ for j, rect in enumerate(bboxs):
+ xmin, ymin, xmax, ymax = rect
+ color = colors[0] if color_set is None else colors[color_set[j] %
+ len(colors)]
+ cv2.rectangle(img, (xmin, ymin), (xmax, ymax), color, 1)
+
+ canvas = img.copy()
+ for i in range(kpt_nums):
+ for j in range(len(skeletons)):
+ if skeletons[j][i, 2] < visual_thread:
+ continue
+ if ids is None:
+ color = colors[i] if color_set is None else colors[color_set[j]
+ %
+ len(colors)]
+ else:
+ color = get_color(ids[j])
+
+ cv2.circle(
+ canvas,
+ tuple(skeletons[j][i, 0:2].astype('int32')),
+ 2,
+ color,
+ thickness=-1)
+
+ to_plot = cv2.addWeighted(img, 0.3, canvas, 0.7, 0)
+ fig = matplotlib.pyplot.gcf()
+
+ stickwidth = 2
+
+ for i in range(NUM_EDGES):
+ for j in range(len(skeletons)):
+ edge = EDGES[i]
+ if skeletons[j][edge[0], 2] < visual_thread or skeletons[j][edge[
+ 1], 2] < visual_thread:
+ continue
+
+ cur_canvas = canvas.copy()
+ X = [skeletons[j][edge[0], 1], skeletons[j][edge[1], 1]]
+ Y = [skeletons[j][edge[0], 0], skeletons[j][edge[1], 0]]
+ mX = np.mean(X)
+ mY = np.mean(Y)
+ length = ((X[0] - X[1])**2 + (Y[0] - Y[1])**2)**0.5
+ angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
+ polygon = cv2.ellipse2Poly((int(mY), int(mX)),
+ (int(length / 2), stickwidth),
+ int(angle), 0, 360, 1)
+ if ids is None:
+ color = colors[i] if color_set is None else colors[color_set[j]
+ %
+ len(colors)]
+ else:
+ color = get_color(ids[j])
+ cv2.fillConvexPoly(cur_canvas, polygon, color)
+ canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0)
+ image = Image.fromarray(canvas.astype('uint8'))
+ plt.close()
+ return image
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/voc_utils.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/voc_utils.py
new file mode 100644
index 000000000..cd6d9f90e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/ppdet/utils/voc_utils.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import os.path as osp
+import re
+import random
+
+__all__ = ['create_list']
+
+
+def create_list(devkit_dir, years, output_dir):
+ """
+ create following list:
+ 1. trainval.txt
+ 2. test.txt
+ """
+ trainval_list = []
+ test_list = []
+ for year in years:
+ trainval, test = _walk_voc_dir(devkit_dir, year, output_dir)
+ trainval_list.extend(trainval)
+ test_list.extend(test)
+
+ random.shuffle(trainval_list)
+ with open(osp.join(output_dir, 'trainval.txt'), 'w') as ftrainval:
+ for item in trainval_list:
+ ftrainval.write(item[0] + ' ' + item[1] + '\n')
+
+ with open(osp.join(output_dir, 'test.txt'), 'w') as fval:
+ ct = 0
+ for item in test_list:
+ ct += 1
+ fval.write(item[0] + ' ' + item[1] + '\n')
+
+
+def _get_voc_dir(devkit_dir, year, type):
+ return osp.join(devkit_dir, 'VOC' + year, type)
+
+
+def _walk_voc_dir(devkit_dir, year, output_dir):
+ filelist_dir = _get_voc_dir(devkit_dir, year, 'ImageSets/Main')
+ annotation_dir = _get_voc_dir(devkit_dir, year, 'Annotations')
+ img_dir = _get_voc_dir(devkit_dir, year, 'JPEGImages')
+ trainval_list = []
+ test_list = []
+ added = set()
+
+ for _, _, files in os.walk(filelist_dir):
+ for fname in files:
+ img_ann_list = []
+ if re.match(r'[a-z]+_trainval\.txt', fname):
+ img_ann_list = trainval_list
+ elif re.match(r'[a-z]+_test\.txt', fname):
+ img_ann_list = test_list
+ else:
+ continue
+ fpath = osp.join(filelist_dir, fname)
+ for line in open(fpath):
+ name_prefix = line.strip().split()[0]
+ if name_prefix in added:
+ continue
+ added.add(name_prefix)
+ ann_path = osp.join(
+ osp.relpath(annotation_dir, output_dir),
+ name_prefix + '.xml')
+ img_path = osp.join(
+ osp.relpath(img_dir, output_dir), name_prefix + '.jpg')
+ img_ann_list.append((img_path, ann_path))
+
+ return trainval_list, test_list
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__init__.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/__init__.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 000000000..0f5e2594f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/__init__.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/det_preprocess.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/det_preprocess.cpython-37.pyc
new file mode 100644
index 000000000..ee6910208
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/det_preprocess.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/postprocess.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/postprocess.cpython-37.pyc
new file mode 100644
index 000000000..fd7a26257
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/postprocess.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_det.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_det.cpython-37.pyc
new file mode 100644
index 000000000..882ceb3dc
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_det.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_rec.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_rec.cpython-37.pyc
new file mode 100644
index 000000000..1e9a82a46
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/predict_rec.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/preprocess.cpython-37.pyc b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/preprocess.cpython-37.pyc
new file mode 100644
index 000000000..bd2f03afd
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/__pycache__/preprocess.cpython-37.pyc differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/build_gallery.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/build_gallery.py
new file mode 100644
index 000000000..7b69a04d7
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/build_gallery.py
@@ -0,0 +1,214 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import faiss
+import numpy as np
+from tqdm import tqdm
+import pickle
+
+from python.predict_rec import RecPredictor
+
+from utils import logger
+from utils import config
+
+
+def split_datafile(data_file, image_root, delimiter="\t"):
+ '''
+ data_file: a file of image paths and labels, each line split by the delimiter
+ image_root: root directory of the images
+ delimiter: field delimiter used in data_file
+ '''
+ gallery_images = []
+ gallery_docs = []
+ with open(data_file, 'r', encoding='utf-8') as f:
+ lines = f.readlines()
+ for _, ori_line in enumerate(lines):
+ line = ori_line.strip().split(delimiter)
+ text_num = len(line)
+ assert text_num >= 2, f"line({ori_line}) must be splitted into at least 2 parts, but got {text_num}"
+ image_file = os.path.join(image_root, line[0])
+
+ gallery_images.append(image_file)
+ gallery_docs.append(ori_line.strip())
+
+ return gallery_images, gallery_docs
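+# Illustrative data_file content (tab-delimited; paths and labels are
+# examples only):
+#   gallery/motorcycle/0001.jpg\tmotorcycle
+#   gallery/person/0001.jpg\tperson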
+
+
+class GalleryBuilder(object):
+ def __init__(self, config):
+
+ self.config = config
+ self.rec_predictor = RecPredictor(config)
+ assert 'IndexProcess' in config.keys(), "Index config not found ... "
+ self.build(config['IndexProcess'])
+
+ def build(self, config):
+ '''
+ build the index; supports the 'new', 'append' and 'remove' operations
+ '''
+ operation_method = config.get("index_operation", "new").lower()
+
+ gallery_images, gallery_docs = split_datafile(
+ config['data_file'], config['image_root'], config['delimiter'])
+
+ # when removing data from the index, there is no need to extract features
+ if operation_method != "remove":
+ gallery_features = self._extract_features(gallery_images, config)
+ assert operation_method in [
+ "new", "remove", "append"
+ ], "Only append, remove and new operation are supported"
+
+ # vector.index: faiss index file
+ # id_map.pkl: use this file to map id to image_doc
+ if operation_method in ["remove", "append"]:
+ # if remove or append, vector.index and id_map.pkl must exist
+ assert os.path.exists(
+ os.path.join(config["index_dir"], "vector.index")
+ ), "The vector.index does not exist in {} when 'index_operation' is not None".format(
+ config["index_dir"])
+ assert os.path.exists(
+ os.path.join(config["index_dir"], "id_map.pkl")
+ ), "The id_map.pkl does not exist in {} when 'index_operation' is not None".format(
+ config["index_dir"])
+ index = faiss.read_index(
+ os.path.join(config["index_dir"], "vector.index"))
+ with open(os.path.join(config["index_dir"], "id_map.pkl"),
+ 'rb') as fd:
+ ids = pickle.load(fd)
+ assert index.ntotal == len(ids.keys(
+ )), "number of vectors in the index is not equal to that in id_map"
+ else:
+ if not os.path.exists(config["index_dir"]):
+ os.makedirs(config["index_dir"], exist_ok=True)
+ index_method = config.get("index_method", "HNSW32")
+
+ # for the IVF method, calculate the IVF list number automatically
+ if index_method == "IVF":
+ index_method = index_method + str(
+ min(int(len(gallery_images) // 8), 65536)) + ",Flat"
+
+ # for binary index, add B at head of index_method
+ if config["dist_type"] == "hamming":
+ index_method = "B" + index_method
+
+ # dist_type
+ dist_type = faiss.METRIC_INNER_PRODUCT if config[
+ "dist_type"] == "IP" else faiss.METRIC_L2
+
+ # build index
+ if config["dist_type"] == "hamming":
+ index = faiss.index_binary_factory(config["embedding_size"],
+ index_method)
+ else:
+ index = faiss.index_factory(config["embedding_size"],
+ index_method, dist_type)
+ index = faiss.IndexIDMap2(index)
+ ids = {}
+
+ if config["index_method"] == "HNSW32":
+ logger.warning(
+ "The HNSW32 method dose not support 'remove' operation")
+
+ if operation_method != "remove":
+ # calculate id for new data
+ start_id = max(ids.keys()) + 1 if ids else 0
+ ids_now = (
+ np.arange(0, len(gallery_images)) + start_id).astype(np.int64)
+
+ # only train when new index file
+ if operation_method == "new":
+ if config["dist_type"] == "hamming":
+ index.add(gallery_features)
+ else:
+ index.train(gallery_features)
+
+ if not config["dist_type"] == "hamming":
+ index.add_with_ids(gallery_features, ids_now)
+
+ for i, d in zip(list(ids_now), gallery_docs):
+ ids[i] = d
+ else:
+ if config["index_method"] == "HNSW32":
+ raise RuntimeError(
+ "The index_method: HNSW32 dose not support 'remove' operation"
+ )
+ # remove ids in id_map, remove index data in faiss index
+ remove_ids = list(
+ filter(lambda k: ids.get(k) in gallery_docs, ids.keys()))
+ remove_ids = np.asarray(remove_ids)
+ index.remove_ids(remove_ids)
+ for k in remove_ids:
+ del ids[k]
+
+ # store faiss index file and id_map file
+ if config["dist_type"] == "hamming":
+ faiss.write_index_binary(
+ index, os.path.join(config["index_dir"], "vector.index"))
+ else:
+ faiss.write_index(
+ index, os.path.join(config["index_dir"], "vector.index"))
+
+ with open(os.path.join(config["index_dir"], "id_map.pkl"), 'wb') as fd:
+ pickle.dump(ids, fd)
+
+ def _extract_features(self, gallery_images, config):
+ # extract gallery features
+ if config["dist_type"] == "hamming":
+ gallery_features = np.zeros(
+ [len(gallery_images), config['embedding_size'] // 8],
+ dtype=np.uint8)
+ else:
+ gallery_features = np.zeros(
+ [len(gallery_images), config['embedding_size']],
+ dtype=np.float32)
+
+ # construct batched images and do inference
+ batch_size = config.get("batch_size", 32)
+ batch_img = []
+ for i, image_file in enumerate(tqdm(gallery_images)):
+ img = cv2.imread(image_file)
+ if img is None:
+ logger.error("img empty, please check {}".format(image_file))
+ exit()
+ img = img[:, :, ::-1]
+ batch_img.append(img)
+
+ if (i + 1) % batch_size == 0:
+ rec_feat = self.rec_predictor.predict(batch_img)
+ gallery_features[i - batch_size + 1:i + 1, :] = rec_feat
+ batch_img = []
+
+ if len(batch_img) > 0:
+ rec_feat = self.rec_predictor.predict(batch_img)
+ gallery_features[-len(batch_img):, :] = rec_feat
+ batch_img = []
+
+ return gallery_features
+
+
+def main(config):
+ GalleryBuilder(config)
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
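+# Illustrative invocation, assuming utils.config exposes the usual
+# -c/--config and -o/--override flags; the yaml path is an example:
+#   python python/build_gallery.py -c configs/build_gallery.yaml \
+#       -o IndexProcess.index_method=HNSW32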
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/det_preprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/det_preprocess.py
new file mode 100644
index 000000000..65db32dc3
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/det_preprocess.py
@@ -0,0 +1,216 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import cv2
+import numpy as np
+
+
+def decode_image(im_file, im_info):
+ """read rgb image
+ Args:
+ im_file (str|np.ndarray): input can be image path or np.ndarray
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ if isinstance(im_file, str):
+ with open(im_file, 'rb') as f:
+ im_read = f.read()
+ data = np.frombuffer(im_read, dtype='uint8')
+ im = cv2.imdecode(data, 1) # BGR mode, but need RGB mode
+ im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
+ else:
+ im = im_file
+ im_info['im_shape'] = np.array(im.shape[:2], dtype=np.float32)
+ im_info['scale_factor'] = np.array([1., 1.], dtype=np.float32)
+ return im, im_info
+
+
+class DetResize(object):
+ """resize image by target_size and max_size
+ Args:
+ target_size (int): the target size of image
+ keep_ratio (bool): whether keep_ratio or not, default true
+ interp (int): method of resize
+ """
+
+ def __init__(
+ self,
+ target_size,
+ keep_ratio=True,
+ interp=cv2.INTER_LINEAR, ):
+ if isinstance(target_size, int):
+ target_size = [target_size, target_size]
+ self.target_size = target_size
+ self.keep_ratio = keep_ratio
+ self.interp = interp
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ assert len(self.target_size) == 2
+ assert self.target_size[0] > 0 and self.target_size[1] > 0
+ im_channel = im.shape[2]
+ im_scale_y, im_scale_x = self.generate_scale(im)
+ # set image_shape
+ im_info['input_shape'][1] = int(im_scale_y * im.shape[0])
+ im_info['input_shape'][2] = int(im_scale_x * im.shape[1])
+ im = cv2.resize(
+ im,
+ None,
+ None,
+ fx=im_scale_x,
+ fy=im_scale_y,
+ interpolation=self.interp)
+ im_info['im_shape'] = np.array(im.shape[:2]).astype('float32')
+ im_info['scale_factor'] = np.array(
+ [im_scale_y, im_scale_x]).astype('float32')
+ return im, im_info
+
+ def generate_scale(self, im):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ Returns:
+ im_scale_x: the resize ratio of X
+ im_scale_y: the resize ratio of Y
+ """
+ origin_shape = im.shape[:2]
+ im_c = im.shape[2]
+ if self.keep_ratio:
+ im_size_min = np.min(origin_shape)
+ im_size_max = np.max(origin_shape)
+ target_size_min = np.min(self.target_size)
+ target_size_max = np.max(self.target_size)
+ im_scale = float(target_size_min) / float(im_size_min)
+ if np.round(im_scale * im_size_max) > target_size_max:
+ im_scale = float(target_size_max) / float(im_size_max)
+ im_scale_x = im_scale
+ im_scale_y = im_scale
+ else:
+ resize_h, resize_w = self.target_size
+ im_scale_y = resize_h / float(origin_shape[0])
+ im_scale_x = resize_w / float(origin_shape[1])
+ return im_scale_y, im_scale_x
+
+
+class DetNormalizeImage(object):
+ """normalize image
+ Args:
+ mean (list): im - mean
+ std (list): im / std
+ is_scale (bool): whether to scale the image by 1/255
+ """
+
+ def __init__(self, mean, std, is_scale=True):
+ self.mean = mean
+ self.std = std
+ self.is_scale = is_scale
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ im = im.astype(np.float32, copy=False)
+ mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
+ std = np.array(self.std)[np.newaxis, np.newaxis, :]
+ if self.is_scale:
+ im = im / 255.0
+ im -= mean
+ im /= std
+ return im, im_info
+
+
+class DetPermute(object):
+ """permute image
+ Args:
+ to_bgr (bool): whether convert RGB to BGR
+ channel_first (bool): whether convert HWC to CHW
+ """
+
+ def __init__(self, ):
+ super().__init__()
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ im = im.transpose((2, 0, 1)).copy()
+ return im, im_info
+
+
+class DetPadStride(object):
+ """ padding image for model with FPN , instead PadBatch(pad_to_stride, pad_gt) in original config
+ Args:
+ stride (bool): model with FPN need image shape % stride == 0
+ """
+
+ def __init__(self, stride=0):
+ self.coarsest_stride = stride
+
+ def __call__(self, im, im_info):
+ """
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ Returns:
+ im (np.ndarray): processed image (np.ndarray)
+ im_info (dict): info of processed image
+ """
+ coarsest_stride = self.coarsest_stride
+ if coarsest_stride <= 0:
+ return im, im_info
+ im_c, im_h, im_w = im.shape
+ pad_h = int(np.ceil(float(im_h) / coarsest_stride) * coarsest_stride)
+ pad_w = int(np.ceil(float(im_w) / coarsest_stride) * coarsest_stride)
+ padding_im = np.zeros((im_c, pad_h, pad_w), dtype=np.float32)
+ padding_im[:, :im_h, :im_w] = im
+ return padding_im, im_info
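+# Illustrative example: with stride=32, a 3x300x500 CHW image is zero-padded
+# to 3x320x512, since ceil(300/32)*32 == 320 and ceil(500/32)*32 == 512.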
+
+
+def det_preprocess(im, im_info, preprocess_ops):
+ for operator in preprocess_ops:
+ im, im_info = operator(im, im_info)
+ return im, im_info
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/postprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/postprocess.py
new file mode 100644
index 000000000..d26cbaa9a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/postprocess.py
@@ -0,0 +1,161 @@
+# copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import copy
+import shutil
+from functools import partial
+import importlib
+import numpy as np
+import paddle
+import paddle.nn.functional as F
+
+
+def build_postprocess(config):
+ if config is None:
+ return None
+
+ mod = importlib.import_module(__name__)
+ config = copy.deepcopy(config)
+
+ main_indicator = config.pop(
+ "main_indicator") if "main_indicator" in config else None
+ main_indicator = main_indicator if main_indicator else ""
+
+ func_list = []
+ for func in config:
+ func_list.append(getattr(mod, func)(**config[func]))
+ return PostProcesser(func_list, main_indicator)
+
+
+class PostProcesser(object):
+ def __init__(self, func_list, main_indicator="Topk"):
+ self.func_list = func_list
+ self.main_indicator = main_indicator
+
+ def __call__(self, x, image_file=None):
+ rtn = None
+ for func in self.func_list:
+ tmp = func(x, image_file)
+ if type(func).__name__ in self.main_indicator:
+ rtn = tmp
+ return rtn
+
+
+class Topk(object):
+ def __init__(self, topk=1, class_id_map_file=None):
+ assert isinstance(topk, (int, ))
+ self.class_id_map = self.parse_class_id_map(class_id_map_file)
+ self.topk = topk
+
+ def parse_class_id_map(self, class_id_map_file):
+ if class_id_map_file is None:
+ return None
+
+ if not os.path.exists(class_id_map_file):
+            print(
+                "Warning: If you want to use your own label dict, please provide a valid path!\nOtherwise label_names will be empty!"
+            )
+ return None
+
+ try:
+ class_id_map = {}
+ with open(class_id_map_file, "r") as fin:
+ lines = fin.readlines()
+ for line in lines:
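+                    # each line is expected to look like "<class_id> <label_name>"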
+ partition = line.split("\n")[0].partition(" ")
+ class_id_map[int(partition[0])] = str(partition[-1])
+ except Exception as ex:
+ print(ex)
+ class_id_map = None
+ return class_id_map
+
+ def __call__(self, x, file_names=None, multilabel=False):
+ if file_names is not None:
+ assert x.shape[0] == len(file_names)
+ y = []
+ for idx, probs in enumerate(x):
+ index = probs.argsort(axis=0)[-self.topk:][::-1].astype(
+ "int32") if not multilabel else np.where(
+ probs >= 0.5)[0].astype("int32")
+ clas_id_list = []
+ score_list = []
+ label_name_list = []
+ for i in index:
+ clas_id_list.append(i.item())
+ score_list.append(probs[i].item())
+ if self.class_id_map is not None:
+ label_name_list.append(self.class_id_map[i.item()])
+ result = {
+ "class_ids": clas_id_list,
+ "scores": np.around(
+ score_list, decimals=5).tolist(),
+ }
+ if file_names is not None:
+ result["file_name"] = file_names[idx]
+            if label_name_list:
+ result["label_names"] = label_name_list
+ y.append(result)
+ return y
+
+
+class MultiLabelTopk(Topk):
+ def __init__(self, topk=1, class_id_map_file=None):
+        super().__init__(topk, class_id_map_file)
+
+ def __call__(self, x, file_names=None):
+ return super().__call__(x, file_names, multilabel=True)
+
+
+class SavePreLabel(object):
+ def __init__(self, save_dir):
+ if save_dir is None:
+            raise Exception(
+                "Please specify save_dir when SavePreLabel is specified.")
+ self.save_dir = partial(os.path.join, save_dir)
+
+ def __call__(self, x, file_names=None):
+ if file_names is None:
+ return
+ assert x.shape[0] == len(file_names)
+ for idx, probs in enumerate(x):
+ index = probs.argsort(axis=0)[-1].astype("int32")
+ self.save(index, file_names[idx])
+
+ def save(self, id, image_file):
+ output_dir = self.save_dir(str(id))
+ os.makedirs(output_dir, exist_ok=True)
+ shutil.copy(image_file, output_dir)
+
+
+class Binarize(object):
+ def __init__(self, method="round"):
+ self.method = method
+ self.unit = np.array([[128, 64, 32, 16, 8, 4, 2, 1]]).T
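+        # bit weights (MSB first) used to pack 8 binary features into a single uint8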
+
+ def __call__(self, x, file_names=None):
+ if self.method == "round":
+ x = np.round(x + 1).astype("uint8") - 1
+
+ if self.method == "sign":
+ x = ((np.sign(x) + 1) / 2).astype("uint8")
+
+ embedding_size = x.shape[1]
+        assert embedding_size % 8 == 0, "The binary index only supports vectors whose size is a multiple of 8"
+
+ byte = np.zeros([x.shape[0], embedding_size // 8], dtype=np.uint8)
+ for i in range(embedding_size // 8):
+ byte[:, i:i + 1] = np.dot(x[:, i * 8:(i + 1) * 8], self.unit)
+
+ return byte
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_cls.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_cls.py
new file mode 100644
index 000000000..cdeb32e48
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_cls.py
@@ -0,0 +1,140 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import numpy as np
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from python.preprocess import create_operators
+from python.postprocess import build_postprocess
+
+
+class ClsPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"])
+
+ self.preprocess_ops = []
+ self.postprocess = None
+ if "PreProcess" in config:
+ if "transform_ops" in config["PreProcess"]:
+ self.preprocess_ops = create_operators(config["PreProcess"][
+ "transform_ops"])
+ if "PostProcess" in config:
+ self.postprocess = build_postprocess(config["PostProcess"])
+
+        # used by the whole_chain project to benchmark each Paddle repo
+ self.benchmark = config["Global"].get("benchmark", False)
+ if self.benchmark:
+            import auto_log
+            pid = os.getpid()
+ self.auto_logger = auto_log.AutoLogger(
+ model_name=config["Global"].get("model_name", "cls"),
+ model_precision='fp16'
+ if config["Global"]["use_fp16"] else 'fp32',
+ batch_size=config["Global"].get("batch_size", 1),
+ data_shape=[3, 224, 224],
+ save_path=config["Global"].get("save_log_path",
+ "./auto_log.log"),
+ inference_config=self.config,
+ pids=pid,
+ process_name=None,
+ gpu_ids=None,
+ time_keys=[
+ 'preprocess_time', 'inference_time', 'postprocess_time'
+ ],
+ warmup=2)
+
+ def predict(self, images):
+ input_names = self.paddle_predictor.get_input_names()
+ input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
+
+ output_names = self.paddle_predictor.get_output_names()
+ output_tensor = self.paddle_predictor.get_output_handle(output_names[
+ 0])
+ if self.benchmark:
+ self.auto_logger.times.start()
+ if not isinstance(images, (list, )):
+ images = [images]
+ for idx in range(len(images)):
+ for ops in self.preprocess_ops:
+ images[idx] = ops(images[idx])
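+        # stack the preprocessed images into a single NCHW batch for one predictor run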
+ image = np.array(images)
+ if self.benchmark:
+ self.auto_logger.times.stamp()
+
+ input_tensor.copy_from_cpu(image)
+ self.paddle_predictor.run()
+ batch_output = output_tensor.copy_to_cpu()
+ if self.benchmark:
+ self.auto_logger.times.stamp()
+ if self.postprocess is not None:
+ batch_output = self.postprocess(batch_output)
+ if self.benchmark:
+ self.auto_logger.times.end(stamp=True)
+ return batch_output
+
+
+def main(config):
+ cls_predictor = ClsPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ batch_imgs = []
+ batch_names = []
+ cnt = 0
+ for idx, img_path in enumerate(image_list):
+ img = cv2.imread(img_path)
+ if img is None:
+            logger.warning(
+                "Image file could not be read and has been skipped. Path: {}".
+                format(img_path))
+ else:
+ img = img[:, :, ::-1]
+ batch_imgs.append(img)
+ img_name = os.path.basename(img_path)
+ batch_names.append(img_name)
+ cnt += 1
+
+ if cnt % config["Global"]["batch_size"] == 0 or (idx + 1
+ ) == len(image_list):
+ if len(batch_imgs) == 0:
+ continue
+ batch_results = cls_predictor.predict(batch_imgs)
+ for number, result_dict in enumerate(batch_results):
+ filename = batch_names[number]
+ clas_ids = result_dict["class_ids"]
+ scores_str = "[{}]".format(", ".join("{:.2f}".format(
+ r) for r in result_dict["scores"]))
+ label_names = result_dict["label_names"]
+ print("{}:\tclass id(s): {}, score(s): {}, label_name(s): {}".
+ format(filename, clas_ids, scores_str, label_names))
+ batch_imgs = []
+ batch_names = []
+ if cls_predictor.benchmark:
+ cls_predictor.auto_logger.report()
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det.py
new file mode 100644
index 000000000..0b9c25a5a
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from det_preprocess import det_preprocess
+from preprocess import create_operators
+from utils.draw_bbox import draw_bbox_results
+
+import argparse
+import time
+import yaml
+import ast
+from functools import reduce
+import cv2
+import numpy as np
+import paddle
+import requests
+import base64
+import json
+
+
+class DetPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"],
+ config["Global"]["det_inference_model_dir"])
+
+ self.preprocess_ops = create_operators(config["DetPreProcess"][
+ "transform_ops"])
+ self.config = config
+
+ def preprocess(self, img):
+        im_info = {
+            'scale_factor': np.array(
+                [1., 1.], dtype=np.float32),
+            'im_shape': np.array(
+                img.shape[:2], dtype=np.float32),
+            'input_shape': self.config["Global"]["image_shape"],
+        }
+
+ im, im_info = det_preprocess(img, im_info, self.preprocess_ops)
+ inputs = self.create_inputs(im, im_info)
+ return inputs
+
+ def create_inputs(self, im, im_info):
+ """generate input for different model type
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ model_arch (str): model type
+ Returns:
+ inputs (dict): input of model
+ """
+ inputs = {}
+ inputs['image'] = np.array((im, )).astype('float32')
+ inputs['im_shape'] = np.array(
+ (im_info['im_shape'], )).astype('float32')
+ inputs['scale_factor'] = np.array(
+ (im_info['scale_factor'], )).astype('float32')
+ return inputs
+
+ def parse_det_results(self, pred, threshold, label_list):
+ max_det_results = self.config["Global"]["max_det_results"]
+ keep_indexes = pred[:, 1].argsort()[::-1][:max_det_results]
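+        # boxes are sorted by confidence in descending order; keep at most max_det_results of them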
+ results = []
+ for idx in keep_indexes:
+ single_res = pred[idx]
+ class_id = int(single_res[0])
+ score = single_res[1]
+ bbox = single_res[2:]
+ if score < threshold:
+ continue
+ label_name = label_list[class_id]
+ results.append({
+ "bbox": bbox,
+ "rec_docs": "background",
+ "rec_scores": score,
+ })
+ return results
+
+ def predict(self, image, threshold=0.5, run_benchmark=False):
+        '''
+        Args:
+            image (str/np.ndarray): path of image / np.ndarray read by cv2
+            threshold (float): threshold of the predicted box score
+        Returns:
+            results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of boxes,
+                            matrix element:[class, score, x_min, y_min, x_max, y_max]
+                            MaskRCNN's results include 'masks': np.ndarray:
+                            shape: [N, im_h, im_w]
+        '''
+        inputs = self.preprocess(image)
+        np_boxes = None
+        input_names = self.paddle_predictor.get_input_names()
+        for i in range(len(input_names)):
+            input_tensor = self.paddle_predictor.get_input_handle(input_names[
+                i])
+            input_tensor.copy_from_cpu(inputs[input_names[i]])
+        t1 = time.time()
+        self.paddle_predictor.run()
+ output_names = self.paddle_predictor.get_output_names()
+ boxes_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+
+ np_boxes = boxes_tensor.copy_to_cpu()
+ t2 = time.time()
+
+ print("Inference: {} ms per batch image".format((t2 - t1) * 1000.0))
+
+ # do not perform postprocess in benchmark mode
+ results = []
+ if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
+            print('[WARNING] No object detected.')
+ results = np.array([])
+ else:
+ results = np_boxes
+
+ results = self.parse_det_results(results,
+ self.config["Global"]["threshold"],
+ self.config["Global"]["labe_list"])
+ return results
+
+
+def main(config):
+ det_predictor = DetPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ assert config["Global"]["batch_size"] == 1
+ for idx, image_file in enumerate(image_list):
+ img = cv2.imread(image_file)[:, :, ::-1]
+ output = det_predictor.predict(img)
+ print(output)
+ draw_bbox_results(img, output, image_file)
+
+    return image_file, output
+
+def cv2_to_base64_img(img):
+ data = cv2.imencode('.jpg', img)[1]
+    return base64.b64encode(data.tobytes()).decode('utf8')
+
+def solve_output(output, image_file):
+    img = cv2.imread(image_file)
+
+    for bbox in output:
+        left, top, right, bottom = (int(bbox["bbox"][0]), int(bbox["bbox"][1]),
+                                    int(bbox["bbox"][2]), int(bbox["bbox"][3]))
+        img_crop = img[top:bottom, left:right]
+        # send the cropped box to a remote Paddle Serving endpoint for classification
+        url = "http://123.157.241.94:36807/ppyolo_mbv3/prediction"
+        img2 = {"key": ["image"], "value": [cv2_to_base64_img(img_crop)]}
+        r = requests.post(url=url, data=json.dumps(img2), timeout=5)
+        r = r.json()
+        # the serving response value is a Python-repr string; eval recovers the result dict
+        result = eval(r['value'][0])[0]
+        cv2.putText(img, str(round(float(result["scores"][0]), 2)),
+                    (left, top + 30), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
+        cv2.putText(img, str(result["label_names"][0]), (left, top + 60),
+                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
+        cv2.rectangle(img, (left, top), (right, bottom), (0, 0, 255), 2)
+    cv2.imwrite("./output/ppyolo_result" + image_file[image_file.rfind("/"):], img)
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ image_file,output = main(config)
+ #solve_output(output,image_file)
+
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det_bak.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det_bak.py
new file mode 100644
index 000000000..323d65ab1
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_det_bak.py
@@ -0,0 +1,167 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from det_preprocess import det_preprocess
+from preprocess import create_operators
+from utils.draw_bbox import draw_bbox_results
+
+import os
+import argparse
+import time
+import yaml
+import ast
+from functools import reduce
+import cv2
+import numpy as np
+import paddle
+
+
+class DetPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"],
+ config["Global"]["det_inference_model_dir"])
+
+ self.preprocess_ops = create_operators(config["DetPreProcess"][
+ "transform_ops"])
+ self.config = config
+
+ def preprocess(self, img):
+        im_info = {
+            'scale_factor': np.array(
+                [1., 1.], dtype=np.float32),
+            'im_shape': np.array(
+                img.shape[:2], dtype=np.float32),
+            'input_shape': self.config["Global"]["image_shape"],
+        }
+ im, im_info = det_preprocess(img, im_info, self.preprocess_ops)
+ inputs = self.create_inputs(im, im_info)
+ return inputs
+
+ def create_inputs(self, im, im_info):
+ """generate input for different model type
+ Args:
+ im (np.ndarray): image (np.ndarray)
+ im_info (dict): info of image
+ model_arch (str): model type
+ Returns:
+ inputs (dict): input of model
+ """
+ inputs = {}
+ inputs['image'] = np.array((im, )).astype('float32')
+ inputs['im_shape'] = np.array(
+ (im_info['im_shape'], )).astype('float32')
+ inputs['scale_factor'] = np.array(
+ (im_info['scale_factor'], )).astype('float32')
+ return inputs
+
+ def parse_det_results(self, pred, threshold, label_list):
+ max_det_results = self.config["Global"]["max_det_results"]
+ keep_indexes = pred[:, 1].argsort()[::-1][:max_det_results]
+ results = []
+ for idx in keep_indexes:
+ single_res = pred[idx]
+ class_id = int(single_res[0])
+ score = single_res[1]
+ bbox = single_res[2:]
+ if score < threshold:
+ continue
+ label_name = label_list[class_id]
+ results.append({
+ "bbox": bbox,
+ "rec_docs": "background",
+ "rec_scores": score,
+ })
+ return results
+
+ def predict(self, image, threshold=0.5, run_benchmark=False):
+ '''
+ Args:
+            image (str/np.ndarray): path of image / np.ndarray read by cv2
+            threshold (float): threshold of the predicted box score
+        Returns:
+            results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of boxes,
+                            matrix element:[class, score, x_min, y_min, x_max, y_max]
+ MaskRCNN's results include 'masks': np.ndarray:
+ shape: [N, im_h, im_w]
+ '''
+ inputs = self.preprocess(image)
+ np_boxes = None
+ input_names = self.paddle_predictor.get_input_names()
+
+ for i in range(len(input_names)):
+ input_tensor = self.paddle_predictor.get_input_handle(input_names[
+ i])
+ input_tensor.copy_from_cpu(inputs[input_names[i]])
+
+ t1 = time.time()
+ self.paddle_predictor.run()
+ output_names = self.paddle_predictor.get_output_names()
+ boxes_tensor = self.paddle_predictor.get_output_handle(output_names[0])
+ np_boxes = boxes_tensor.copy_to_cpu()
+ t2 = time.time()
+
+ print("Inference: {} ms per batch image".format((t2 - t1) * 1000.0))
+
+ # do not perform postprocess in benchmark mode
+ results = []
+ if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
+            print('[WARNING] No object detected.')
+ results = np.array([])
+ else:
+ results = np_boxes
+
+ results = self.parse_det_results(results,
+ self.config["Global"]["threshold"],
+ self.config["Global"]["labe_list"])
+ return results
+
+
+def main(config):
+ det_predictor = DetPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ assert config["Global"]["batch_size"] == 1
+ for idx, image_file in enumerate(image_list):
+ img = cv2.imread(image_file)[:, :, ::-1]
+ output = det_predictor.predict(img)
+ print(output)
+ draw_bbox_results(img, output, image_file)
+
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_rec.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_rec.py
new file mode 100644
index 000000000..d41c513f8
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_rec.py
@@ -0,0 +1,105 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import cv2
+import numpy as np
+
+from utils import logger
+from utils import config
+from utils.predictor import Predictor
+from utils.get_image_list import get_image_list
+from preprocess import create_operators
+from postprocess import build_postprocess
+
+
+class RecPredictor(Predictor):
+ def __init__(self, config):
+ super().__init__(config["Global"],
+ config["Global"]["rec_inference_model_dir"])
+ self.preprocess_ops = create_operators(config["RecPreProcess"][
+ "transform_ops"])
+ self.postprocess = build_postprocess(config["RecPostProcess"])
+
+ def predict(self, images, feature_normalize=True):
+ input_names = self.paddle_predictor.get_input_names()
+ input_tensor = self.paddle_predictor.get_input_handle(input_names[0])
+
+ output_names = self.paddle_predictor.get_output_names()
+ output_tensor = self.paddle_predictor.get_output_handle(output_names[
+ 0])
+
+ if not isinstance(images, (list, )):
+ images = [images]
+ for idx in range(len(images)):
+ for ops in self.preprocess_ops:
+ images[idx] = ops(images[idx])
+ image = np.array(images)
+
+ input_tensor.copy_from_cpu(image)
+ self.paddle_predictor.run()
+ batch_output = output_tensor.copy_to_cpu()
+
+ if feature_normalize:
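+            # L2-normalize each embedding so inner-product retrieval equals cosine similarity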
+ feas_norm = np.sqrt(
+ np.sum(np.square(batch_output), axis=1, keepdims=True))
+ batch_output = np.divide(batch_output, feas_norm)
+
+ if self.postprocess is not None:
+ batch_output = self.postprocess(batch_output)
+ return batch_output
+
+
+def main(config):
+ rec_predictor = RecPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ batch_imgs = []
+ batch_names = []
+ cnt = 0
+ for idx, img_path in enumerate(image_list):
+ img = cv2.imread(img_path)
+ if img is None:
+ logger.warning(
+ "Image file failed to read and has been skipped. The path: {}".
+ format(img_path))
+ else:
+ img = img[:, :, ::-1]
+ batch_imgs.append(img)
+ img_name = os.path.basename(img_path)
+ batch_names.append(img_name)
+ cnt += 1
+
+ if cnt % config["Global"]["batch_size"] == 0 or (idx + 1) == len(image_list):
+ if len(batch_imgs) == 0:
+ continue
+
+ batch_results = rec_predictor.predict(batch_imgs)
+ for number, result_dict in enumerate(batch_results):
+ filename = batch_names[number]
+ print("{}:\t{}".format(filename, result_dict))
+ batch_imgs = []
+ batch_names = []
+
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_system.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_system.py
new file mode 100644
index 000000000..fb2d66a53
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/predict_system.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import sys
+
+__dir__ = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../')))
+
+import copy
+import cv2
+import numpy as np
+import faiss
+import pickle
+
+from python.predict_rec import RecPredictor
+from python.predict_det import DetPredictor
+
+from utils import logger
+from utils import config
+from utils.get_image_list import get_image_list
+from utils.draw_bbox import draw_bbox_results
+
+
+class SystemPredictor(object):
+ def __init__(self, config):
+
+ self.config = config
+ self.rec_predictor = RecPredictor(config)
+ self.det_predictor = DetPredictor(config)
+
+ assert 'IndexProcess' in config.keys(), "Index config not found ... "
+ self.return_k = self.config['IndexProcess']['return_k']
+
+ index_dir = self.config["IndexProcess"]["index_dir"]
+ assert os.path.exists(os.path.join(
+ index_dir, "vector.index")), "vector.index not found ..."
+ assert os.path.exists(os.path.join(
+ index_dir, "id_map.pkl")), "id_map.pkl not found ... "
+
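+        # a binary index must be loaded with the matching faiss reader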
+ if config['IndexProcess'].get("binary_index", False):
+ self.Searcher = faiss.read_index_binary(
+ os.path.join(index_dir, "vector.index"))
+ else:
+ self.Searcher = faiss.read_index(
+ os.path.join(index_dir, "vector.index"))
+
+ with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
+ self.id_map = pickle.load(fd)
+
+ def append_self(self, results, shape):
+ results.append({
+ "class_id": 0,
+ "score": 1.0,
+ "bbox":
+ np.array([0, 0, shape[1], shape[0]]), # xmin, ymin, xmax, ymax
+ "label_name": "foreground",
+ })
+ return results
+
+ def nms_to_rec_results(self, results, thresh=0.1):
+ filtered_results = []
+ x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
+ y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
+ x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
+ y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
+ scores = np.array([r["rec_scores"] for r in results])
+
+ areas = (x2 - x1 + 1) * (y2 - y1 + 1)
+ order = scores.argsort()[::-1]
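+        # greedy NMS: keep the highest-scoring box, drop boxes whose IoU with it exceeds thresh, repeat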
+ while order.size > 0:
+ i = order[0]
+ xx1 = np.maximum(x1[i], x1[order[1:]])
+ yy1 = np.maximum(y1[i], y1[order[1:]])
+ xx2 = np.minimum(x2[i], x2[order[1:]])
+ yy2 = np.minimum(y2[i], y2[order[1:]])
+
+ w = np.maximum(0.0, xx2 - xx1 + 1)
+ h = np.maximum(0.0, yy2 - yy1 + 1)
+ inter = w * h
+ ovr = inter / (areas[i] + areas[order[1:]] - inter)
+ inds = np.where(ovr <= thresh)[0]
+ order = order[inds + 1]
+ filtered_results.append(results[i])
+
+ return filtered_results
+
+ def predict(self, img):
+ output = []
+ # st1: get all detection results
+ results = self.det_predictor.predict(img)
+
+ # st2: add the whole image for recognition to improve recall
+ results = self.append_self(results, img.shape)
+
+ # st3: recognition process, use score_thres to ensure accuracy
+ for result in results:
+ preds = {}
+ xmin, ymin, xmax, ymax = result["bbox"].astype("int")
+ crop_img = img[ymin:ymax, xmin:xmax, :].copy()
+ rec_results = self.rec_predictor.predict(crop_img)
+ preds["bbox"] = [xmin, ymin, xmax, ymax]
+ scores, docs = self.Searcher.search(rec_results, self.return_k)
+
+            # only the top-1 retrieval result is kept for the final output
+ if scores[0][0] >= self.config["IndexProcess"]["score_thres"]:
+ preds["rec_docs"] = self.id_map[docs[0][0]].split()[1]
+ preds["rec_scores"] = scores[0][0]
+ output.append(preds)
+
+        # st4: apply nms to the final results to avoid duplicate boxes
+ output = self.nms_to_rec_results(
+ output, self.config["Global"]["rec_nms_thresold"])
+
+ return output
+
+
+def main(config):
+ system_predictor = SystemPredictor(config)
+ image_list = get_image_list(config["Global"]["infer_imgs"])
+
+ assert config["Global"]["batch_size"] == 1
+ for idx, image_file in enumerate(image_list):
+ img = cv2.imread(image_file)[:, :, ::-1]
+ output = system_predictor.predict(img)
+ print(image_file)
+ draw_bbox_results(img, output, image_file)
+ print(output)
+ return
+
+
+if __name__ == "__main__":
+ args = config.parse_args()
+ config = config.get_config(args.config, overrides=args.override, show=True)
+ main(config)
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/preprocess.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/preprocess.py
new file mode 100644
index 000000000..1da32ad6e
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/python/preprocess.py
@@ -0,0 +1,337 @@
+"""
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+from functools import partial
+import six
+import math
+import random
+import cv2
+import numpy as np
+import importlib
+from PIL import Image
+
+from python.det_preprocess import DetNormalizeImage, DetPadStride, DetPermute, DetResize
+from utils import logger
+
+
+def create_operators(params):
+ """
+ create operators based on the config
+
+ Args:
+ params(list): a dict list, used to create some operators
+ """
+ assert isinstance(params, list), ('operator config should be a list')
+ mod = importlib.import_module(__name__)
+ ops = []
+ for operator in params:
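+        # each entry looks like {"OpName": {kwargs}}; the key names an operator class in this module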
+ assert isinstance(operator,
+ dict) and len(operator) == 1, "yaml format error"
+ op_name = list(operator)[0]
+ param = {} if operator[op_name] is None else operator[op_name]
+ op = getattr(mod, op_name)(**param)
+ ops.append(op)
+
+ return ops
+
+
+class UnifiedResize(object):
+ def __init__(self, interpolation=None, backend="cv2"):
+ _cv2_interp_from_str = {
+ 'nearest': cv2.INTER_NEAREST,
+ 'bilinear': cv2.INTER_LINEAR,
+ 'area': cv2.INTER_AREA,
+ 'bicubic': cv2.INTER_CUBIC,
+ 'lanczos': cv2.INTER_LANCZOS4
+ }
+ _pil_interp_from_str = {
+ 'nearest': Image.NEAREST,
+ 'bilinear': Image.BILINEAR,
+ 'bicubic': Image.BICUBIC,
+ 'box': Image.BOX,
+ 'lanczos': Image.LANCZOS,
+ 'hamming': Image.HAMMING
+ }
+
+ def _pil_resize(src, size, resample):
+ pil_img = Image.fromarray(src)
+ pil_img = pil_img.resize(size, resample)
+ return np.asarray(pil_img)
+
+ if backend.lower() == "cv2":
+ if isinstance(interpolation, str):
+ interpolation = _cv2_interp_from_str[interpolation.lower()]
+ # compatible with opencv < version 4.4.0
+ elif interpolation is None:
+ interpolation = cv2.INTER_LINEAR
+ self.resize_func = partial(cv2.resize, interpolation=interpolation)
+ elif backend.lower() == "pil":
+ if isinstance(interpolation, str):
+ interpolation = _pil_interp_from_str[interpolation.lower()]
+ self.resize_func = partial(_pil_resize, resample=interpolation)
+ else:
+            logger.warning(
+                f"The backend of Resize only supports \"cv2\" or \"PIL\". \"{backend}\" is unavailable. Using \"cv2\" instead."
+            )
+ self.resize_func = cv2.resize
+
+ def __call__(self, src, size):
+ return self.resize_func(src, size)
+
+
+class OperatorParamError(ValueError):
+ """ OperatorParamError
+ """
+ pass
+
+
+class DecodeImage(object):
+ """ decode image """
+
+ def __init__(self, to_rgb=True, to_np=False, channel_first=False):
+ self.to_rgb = to_rgb
+ self.to_np = to_np # to numpy
+ self.channel_first = channel_first # only enabled when to_np is True
+
+ def __call__(self, img):
+ if six.PY2:
+ assert type(img) is str and len(
+ img) > 0, "invalid input 'img' in DecodeImage"
+ else:
+ assert type(img) is bytes and len(
+ img) > 0, "invalid input 'img' in DecodeImage"
+ data = np.frombuffer(img, dtype='uint8')
+ img = cv2.imdecode(data, 1)
+ if self.to_rgb:
+ assert img.shape[2] == 3, 'invalid shape of image[%s]' % (
+ img.shape)
+ img = img[:, :, ::-1]
+
+ if self.channel_first:
+ img = img.transpose((2, 0, 1))
+
+ return img
+
+
+class ResizeImage(object):
+ """ resize image """
+
+ def __init__(self,
+ size=None,
+ resize_short=None,
+ interpolation=None,
+ backend="cv2"):
+ if resize_short is not None and resize_short > 0:
+ self.resize_short = resize_short
+ self.w = None
+ self.h = None
+ elif size is not None:
+ self.resize_short = None
+ self.w = size if type(size) is int else size[0]
+ self.h = size if type(size) is int else size[1]
+ else:
+            raise OperatorParamError("invalid params for ResizeImage: "
+                                     "both 'size' and 'resize_short' are None")
+
+ self._resize_func = UnifiedResize(
+ interpolation=interpolation, backend=backend)
+
+ def __call__(self, img):
+ img_h, img_w = img.shape[:2]
+ if self.resize_short is not None:
+ percent = float(self.resize_short) / min(img_w, img_h)
+ w = int(round(img_w * percent))
+ h = int(round(img_h * percent))
+ else:
+ w = self.w
+ h = self.h
+ return self._resize_func(img, (w, h))
+
+
+class CropImage(object):
+ """ crop image """
+
+ def __init__(self, size):
+ if type(size) is int:
+ self.size = (size, size)
+ else:
+ self.size = size # (h, w)
+
+ def __call__(self, img):
+ w, h = self.size
+ img_h, img_w = img.shape[:2]
+
+ if img_h < h or img_w < w:
+            raise Exception(
+                f"The size({h}, {w}) of CropImage must not be greater than the size({img_h}, {img_w}) of the image. Please check the original image size and the size of ResizeImage if used."
+            )
+
+ w_start = (img_w - w) // 2
+ h_start = (img_h - h) // 2
+
+ w_end = w_start + w
+ h_end = h_start + h
+ return img[h_start:h_end, w_start:w_end, :]
+
+
+class RandCropImage(object):
+ """ random crop image """
+
+ def __init__(self,
+ size,
+ scale=None,
+ ratio=None,
+ interpolation=None,
+ backend="cv2"):
+ if type(size) is int:
+ self.size = (size, size) # (h, w)
+ else:
+ self.size = size
+
+ self.scale = [0.08, 1.0] if scale is None else scale
+ self.ratio = [3. / 4., 4. / 3.] if ratio is None else ratio
+
+ self._resize_func = UnifiedResize(
+ interpolation=interpolation, backend=backend)
+
+ def __call__(self, img):
+ size = self.size
+ scale = self.scale
+ ratio = self.ratio
+
+ aspect_ratio = math.sqrt(random.uniform(*ratio))
+ w = 1. * aspect_ratio
+ h = 1. / aspect_ratio
+
+ img_h, img_w = img.shape[:2]
+
+ bound = min((float(img_w) / img_h) / (w**2),
+ (float(img_h) / img_w) / (h**2))
+ scale_max = min(scale[1], bound)
+ scale_min = min(scale[0], bound)
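+        # bound clamps the sampled area so the aspect-ratio-scaled crop still fits inside the image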
+
+ target_area = img_w * img_h * random.uniform(scale_min, scale_max)
+ target_size = math.sqrt(target_area)
+ w = int(target_size * w)
+ h = int(target_size * h)
+
+ i = random.randint(0, img_w - w)
+ j = random.randint(0, img_h - h)
+
+ img = img[j:j + h, i:i + w, :]
+
+ return self._resize_func(img, size)
+
+
+class RandFlipImage(object):
+ """ random flip image
+ flip_code:
+ 1: Flipped Horizontally
+ 0: Flipped Vertically
+ -1: Flipped Horizontally & Vertically
+ """
+
+ def __init__(self, flip_code=1):
+ assert flip_code in [-1, 0, 1
+ ], "flip_code should be a value in [-1, 0, 1]"
+ self.flip_code = flip_code
+
+ def __call__(self, img):
+ if random.randint(0, 1) == 1:
+ return cv2.flip(img, self.flip_code)
+ else:
+ return img
+
+
+class AutoAugment(object):
+    def __init__(self):
+        # NOTE: assumes ImageNetPolicy is available in this module's scope
+        # (e.g. imported from an autoaugment implementation); it is not defined here.
+        self.policy = ImageNetPolicy()
+
+    def __call__(self, img):
+        img = np.ascontiguousarray(img)
+        img = Image.fromarray(img)
+        img = self.policy(img)
+        return np.asarray(img)
+
+
+class NormalizeImage(object):
+ """ normalize image such as substract mean, divide std
+ """
+
+ def __init__(self,
+ scale=None,
+ mean=None,
+ std=None,
+ order='chw',
+ output_fp16=False,
+ channel_num=3):
+ if isinstance(scale, str):
+ scale = eval(scale)
+ assert channel_num in [
+ 3, 4
+ ], "channel number of input image should be set to 3 or 4."
+ self.channel_num = channel_num
+ self.output_dtype = 'float16' if output_fp16 else 'float32'
+ self.scale = np.float32(scale if scale is not None else 1.0 / 255.0)
+ self.order = order
+ mean = mean if mean is not None else [0.485, 0.456, 0.406]
+ std = std if std is not None else [0.229, 0.224, 0.225]
+
+ shape = (3, 1, 1) if self.order == 'chw' else (1, 1, 3)
+ self.mean = np.array(mean).reshape(shape).astype('float32')
+ self.std = np.array(std).reshape(shape).astype('float32')
+
+    def __call__(self, img):
+        if isinstance(img, Image.Image):
+ img = np.array(img)
+
+ assert isinstance(img,
+ np.ndarray), "invalid input 'img' in NormalizeImage"
+
+ img = (img.astype('float32') * self.scale - self.mean) / self.std
+
+ if self.channel_num == 4:
+ img_h = img.shape[1] if self.order == 'chw' else img.shape[0]
+ img_w = img.shape[2] if self.order == 'chw' else img.shape[1]
+ pad_zeros = np.zeros(
+ (1, img_h, img_w)) if self.order == 'chw' else np.zeros(
+ (img_h, img_w, 1))
+ img = (np.concatenate(
+ (img, pad_zeros), axis=0)
+ if self.order == 'chw' else np.concatenate(
+ (img, pad_zeros), axis=2))
+ return img.astype(self.output_dtype)
+
+
+class ToCHWImage(object):
+ """ convert hwc image to chw image
+ """
+
+ def __init__(self):
+ pass
+
+    def __call__(self, img):
+        if isinstance(img, Image.Image):
+ img = np.array(img)
+
+ return img.transpose((2, 0, 1))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/bbox.json b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/bbox.json
new file mode 100644
index 000000000..07f30b1d5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/bbox.json
@@ -0,0 +1 @@
+[{"image_id": 0, "category_id": 1, "bbox": [695.3732299804688, 251.02064514160156, 267.14727783203125, 667.5905609130859], "score": 0.873725414276123}, {"image_id": 0, "category_id": 1, "bbox": [313.9666748046875, 371.2845458984375, 484.68304443359375, 648.7493286132812], "score": 0.7265506982803345}, {"image_id": 0, "category_id": 1, "bbox": [677.9193115234375, 361.2143859863281, 318.40789794921875, 701.8808288574219], "score": 0.6433115005493164}, {"image_id": 0, "category_id": 1, "bbox": [578.6394653320312, 274.9395751953125, 366.99652099609375, 684.5674438476562], "score": 0.3449091911315918}, {"image_id": 0, "category_id": 1, "bbox": [360.52294921875, 197.74574279785156, 515.4684448242188, 735.8594207763672], "score": 0.265125036239624}, {"image_id": 0, "category_id": 1, "bbox": [70.241455078125, 31.89822006225586, 796.9187622070312, 981.4816627502441], "score": 0.21352604031562805}, {"image_id": 0, "category_id": 1, "bbox": [496.6165466308594, 357.5012512207031, 427.0873718261719, 716.1687927246094], "score": 0.21161292493343353}, {"image_id": 0, "category_id": 1, "bbox": [641.3889770507812, 138.3158721923828, 381.14117431640625, 685.3566131591797], "score": 0.19399303197860718}, {"image_id": 0, "category_id": 1, "bbox": [204.57801818847656, 45.5566291809082, 355.13804626464844, 607.0048942565918], "score": 0.18370762467384338}, {"image_id": 0, "category_id": 1, "bbox": [395.5518493652344, 472.3749694824219, 451.5275573730469, 607.6250305175781], "score": 0.18051570653915405}, {"image_id": 0, "category_id": 1, "bbox": [1625.5928955078125, 0.0, 294.4071044921875, 623.5017700195312], "score": 0.15865056216716766}, {"image_id": 0, "category_id": 1, "bbox": [259.0799865722656, 157.78518676757812, 347.6471252441406, 600.0338439941406], "score": 0.12434157729148865}, {"image_id": 0, "category_id": 1, "bbox": [321.93572998046875, 157.0794219970703, 410.17913818359375, 611.7620086669922], "score": 0.11471202969551086}, {"image_id": 0, "category_id": 1, "bbox": [785.7514038085938, 248.96939086914062, 397.41314697265625, 666.6106872558594], "score": 0.11342783272266388}, {"image_id": 0, "category_id": 1, "bbox": [303.8069152832031, 11.689882278442383, 404.1588439941406, 592.6624126434326], "score": 0.09945282340049744}, {"image_id": 0, "category_id": 1, "bbox": [217.24996948242188, 146.66268920898438, 833.2827453613281, 933.3373107910156], "score": 0.09419939666986465}, {"image_id": 0, "category_id": 1, "bbox": [414.4038391113281, 13.798330307006836, 827.4375915527344, 995.2766208648682], "score": 0.09250863641500473}, {"image_id": 0, "category_id": 1, "bbox": [719.5376586914062, 514.0806884765625, 393.53778076171875, 565.9193115234375], "score": 0.09161505103111267}, {"image_id": 0, "category_id": 1, "bbox": [192.4891815185547, 406.003173828125, 520.6764068603516, 673.996826171875], "score": 0.09044816344976425}, {"image_id": 0, "category_id": 1, "bbox": [669.56103515625, 42.59006118774414, 309.156982421875, 644.9569358825684], "score": 0.08217358589172363}, {"image_id": 0, "category_id": 1, "bbox": [364.2251892089844, 342.280029296875, 907.6244201660156, 737.719970703125], "score": 0.0715600848197937}, {"image_id": 0, "category_id": 1, "bbox": [620.1963500976562, 20.660905838012695, 860.1359252929688, 970.1545238494873], "score": 0.07122403383255005}, {"image_id": 0, "category_id": 1, "bbox": [113.67265319824219, 312.0962829589844, 485.3422393798828, 697.7134094238281], "score": 0.06512273102998734}, {"image_id": 0, "category_id": 1, "bbox": [1213.353515625, 19.728063583374023, 
706.646484375, 966.699182510376], "score": 0.06291121244430542}, {"image_id": 0, "category_id": 1, "bbox": [0.0, 271.63079833984375, 748.4136352539062, 808.3692016601562], "score": 0.05891818925738335}, {"image_id": 0, "category_id": 1, "bbox": [803.0306396484375, 150.83236694335938, 859.263427734375, 929.1676330566406], "score": 0.05692801997065544}, {"image_id": 0, "category_id": 1, "bbox": [205.8365020751953, 0.0, 435.48240661621094, 491.6878967285156], "score": 0.056504569947719574}, {"image_id": 0, "category_id": 1, "bbox": [423.4618225097656, 8.994043350219727, 364.2948913574219, 588.2376461029053], "score": 0.056325770914554596}, {"image_id": 0, "category_id": 1, "bbox": [181.5787811279297, 189.3105926513672, 355.9820098876953, 625.7866363525391], "score": 0.05588925629854202}, {"image_id": 0, "category_id": 1, "bbox": [533.9801025390625, 1.8528099060058594, 351.68658447265625, 602.8824195861816], "score": 0.05168253183364868}, {"image_id": 0, "category_id": 1, "bbox": [0.0, 45.22894287109375, 598.8795166015625, 931.0927124023438], "score": 0.05018442124128342}, {"image_id": 0, "category_id": 1, "bbox": [1343.640380859375, 248.69747924804688, 576.359619140625, 831.3025207519531], "score": 0.0436907596886158}, {"image_id": 0, "category_id": 1, "bbox": [556.1782836914062, 88.61674499511719, 387.64990234375, 659.6730499267578], "score": 0.04353850334882736}, {"image_id": 0, "category_id": 1, "bbox": [1513.3153076171875, 0.0, 384.036376953125, 486.5316162109375], "score": 0.04245037958025932}, {"image_id": 0, "category_id": 1, "bbox": [82.62589263916016, 99.5450439453125, 404.5263900756836, 588.563232421875], "score": 0.04022052511572838}, {"image_id": 0, "category_id": 2, "bbox": [450.3915710449219, 676.380126953125, 578.2577209472656, 403.619873046875], "score": 0.22677572071552277}, {"image_id": 0, "category_id": 2, "bbox": [351.4678649902344, 732.3275756835938, 543.7572937011719, 347.67242431640625], "score": 0.1537037044763565}, {"image_id": 0, "category_id": 2, "bbox": [707.0989379882812, 579.64892578125, 436.22406005859375, 500.35107421875], "score": 0.12268988788127899}, {"image_id": 0, "category_id": 2, "bbox": [564.0088500976562, 772.2525634765625, 551.3733520507812, 307.7474365234375], "score": 0.11803540587425232}, {"image_id": 0, "category_id": 2, "bbox": [787.9828491210938, 719.3270874023438, 377.94049072265625, 360.67291259765625], "score": 0.08660945296287537}, {"image_id": 0, "category_id": 2, "bbox": [359.7429504394531, 555.33984375, 416.2214050292969, 524.66015625], "score": 0.08527899533510208}, {"image_id": 0, "category_id": 2, "bbox": [303.93731689453125, 330.9964294433594, 495.6619873046875, 621.1058044433594], "score": 0.08165193349123001}, {"image_id": 0, "category_id": 2, "bbox": [262.8908386230469, 647.728271484375, 441.9308166503906, 432.271728515625], "score": 0.07838243246078491}, {"image_id": 0, "category_id": 2, "bbox": [884.8604736328125, 661.5791015625, 431.5517578125, 418.4208984375], "score": 0.06516268104314804}, {"image_id": 0, "category_id": 2, "bbox": [813.291748046875, 463.5531311035156, 364.7635498046875, 615.6419372558594], "score": 0.06085878983139992}, {"image_id": 0, "category_id": 2, "bbox": [661.9994506835938, 425.6100158691406, 345.1868896484375, 654.3899841308594], "score": 0.0573558434844017}, {"image_id": 0, "category_id": 2, "bbox": [204.57801818847656, 45.5566291809082, 355.13804626464844, 607.0048942565918], "score": 0.05367649346590042}, {"image_id": 0, "category_id": 2, "bbox": [998.26416015625, 600.0878295898438, 
400.3265380859375, 479.91217041015625], "score": 0.05328734591603279}, {"image_id": 0, "category_id": 2, "bbox": [695.3732299804688, 251.02064514160156, 267.14727783203125, 667.5905609130859], "score": 0.05168259143829346}, {"image_id": 0, "category_id": 2, "bbox": [273.9808044433594, 195.56089782714844, 389.7720642089844, 651.4178009033203], "score": 0.050418075174093246}, {"image_id": 0, "category_id": 2, "bbox": [115.82988739013672, 647.270751953125, 480.43604278564453, 432.729248046875], "score": 0.04801110923290253}, {"image_id": 0, "category_id": 2, "bbox": [1087.3267822265625, 669.7846069335938, 422.6895751953125, 410.21539306640625], "score": 0.044634222984313965}, {"image_id": 0, "category_id": 2, "bbox": [1304.338623046875, 82.7868423461914, 328.2535400390625, 659.7090072631836], "score": 0.0437178835272789}, {"image_id": 0, "category_id": 2, "bbox": [916.3257446289062, 417.77728271484375, 377.30450439453125, 611.5194702148438], "score": 0.04322515428066254}, {"image_id": 0, "category_id": 2, "bbox": [307.6468505859375, 48.68049621582031, 405.5543212890625, 608.9121551513672], "score": 0.040602296590805054}, {"image_id": 0, "category_id": 2, "bbox": [1426.37451171875, 0.0, 402.5771484375, 477.365966796875], "score": 0.03979233279824257}, {"image_id": 0, "category_id": 2, "bbox": [1396.8446044921875, 95.05376434326172, 346.6268310546875, 641.4352493286133], "score": 0.03934914618730545}, {"image_id": 0, "category_id": 2, "bbox": [1215.671630859375, 529.39892578125, 391.1053466796875, 550.60107421875], "score": 0.03912525624036789}, {"image_id": 0, "category_id": 2, "bbox": [419.9827880859375, 56.105613708496094, 375.37811279296875, 597.7714004516602], "score": 0.038809966295957565}, {"image_id": 0, "category_id": 2, "bbox": [1116.685791015625, 424.75189208984375, 387.8294677734375, 592.2509155273438], "score": 0.03874225914478302}, {"image_id": 0, "category_id": 2, "bbox": [513.9990234375, 422.6842346191406, 400.6072998046875, 657.3157653808594], "score": 0.03870948404073715}, {"image_id": 0, "category_id": 2, "bbox": [1216.1732177734375, 23.961257934570312, 367.170654296875, 669.3480072021484], "score": 0.037472717463970184}, {"image_id": 0, "category_id": 2, "bbox": [1317.4234619140625, 361.7864990234375, 340.16748046875, 620.159912109375], "score": 0.03717796504497528}, {"image_id": 0, "category_id": 2, "bbox": [72.07478332519531, 155.77610778808594, 407.47047424316406, 592.4249420166016], "score": 0.03708164766430855}, {"image_id": 0, "category_id": 3, "bbox": [364.1284484863281, 406.3480224609375, 510.1944885253906, 673.6519775390625], "score": 0.26016080379486084}, {"image_id": 0, "category_id": 3, "bbox": [677.9193115234375, 361.2143859863281, 318.40789794921875, 701.8808288574219], "score": 0.25869840383529663}, {"image_id": 0, "category_id": 3, "bbox": [583.8218383789062, 564.3338623046875, 503.92864990234375, 515.6661376953125], "score": 0.25320538878440857}, {"image_id": 0, "category_id": 3, "bbox": [695.3732299804688, 251.02064514160156, 267.14727783203125, 667.5905609130859], "score": 0.23515181243419647}, {"image_id": 0, "category_id": 3, "bbox": [360.52294921875, 197.74574279785156, 515.4684448242188, 735.8594207763672], "score": 0.2219495177268982}, {"image_id": 0, "category_id": 3, "bbox": [684.4028930664062, 702.7177124023438, 479.45318603515625, 377.28228759765625], "score": 0.17938347160816193}, {"image_id": 0, "category_id": 3, "bbox": [205.68795776367188, 81.5010757446289, 336.9908752441406, 613.2951278686523], "score": 0.15753011405467987}, {"image_id": 0, 
"category_id": 3, "bbox": [818.2278442382812, 523.0381469726562, 356.09674072265625, 556.9618530273438], "score": 0.14738178253173828}, {"image_id": 0, "category_id": 3, "bbox": [353.4762268066406, 673.176513671875, 542.4141540527344, 406.823486328125], "score": 0.14062419533729553}, {"image_id": 0, "category_id": 3, "bbox": [449.11468505859375, 733.4881591796875, 576.4956665039062, 346.5118408203125], "score": 0.11796186864376068}, {"image_id": 0, "category_id": 3, "bbox": [308.56683349609375, 75.7350082397461, 414.03912353515625, 639.7704238891602], "score": 0.11490284651517868}, {"image_id": 0, "category_id": 3, "bbox": [205.1191864013672, 0.0, 428.0331573486328, 540.8389892578125], "score": 0.10311776399612427}, {"image_id": 0, "category_id": 3, "bbox": [277.3680725097656, 357.999755859375, 437.1716003417969, 613.5983276367188], "score": 0.09784413874149323}, {"image_id": 0, "category_id": 3, "bbox": [259.0799865722656, 157.78518676757812, 347.6471252441406, 600.0338439941406], "score": 0.09211573749780655}, {"image_id": 0, "category_id": 3, "bbox": [519.8496704101562, 307.5931396484375, 414.776123046875, 714.5018310546875], "score": 0.09025998413562775}, {"image_id": 0, "category_id": 3, "bbox": [360.65911865234375, 460.4512634277344, 909.9977416992188, 619.5487365722656], "score": 0.08487977832555771}, {"image_id": 0, "category_id": 3, "bbox": [79.99319458007812, 139.39523315429688, 786.0184631347656, 940.6047668457031], "score": 0.07553756237030029}, {"image_id": 0, "category_id": 3, "bbox": [419.9827880859375, 56.105613708496094, 375.37811279296875, 597.7714004516602], "score": 0.07363048195838928}, {"image_id": 0, "category_id": 3, "bbox": [904.127197265625, 529.7086181640625, 395.068359375, 550.2913818359375], "score": 0.06731098145246506}, {"image_id": 0, "category_id": 3, "bbox": [236.01333618164062, 33.08015823364258, 785.3453063964844, 987.9305839538574], "score": 0.06729743629693985}, {"image_id": 0, "category_id": 3, "bbox": [268.31854248046875, 556.6338500976562, 426.06280517578125, 523.3661499023438], "score": 0.059684500098228455}, {"image_id": 0, "category_id": 3, "bbox": [796.9890747070312, 324.95416259765625, 390.37554931640625, 675.0439453125], "score": 0.05858004465699196}, {"image_id": 0, "category_id": 3, "bbox": [1513.0430908203125, 0.0, 406.9569091796875, 228.5352325439453], "score": 0.05848925933241844}, {"image_id": 0, "category_id": 3, "bbox": [82.62589263916016, 99.5450439453125, 404.5263900756836, 588.563232421875], "score": 0.056956395506858826}, {"image_id": 0, "category_id": 3, "bbox": [304.0814514160156, 0.0, 421.3404846191406, 501.8050231933594], "score": 0.0562148354947567}, {"image_id": 0, "category_id": 3, "bbox": [533.9801025390625, 1.8528099060058594, 351.68658447265625, 602.8824195861816], "score": 0.051280952990055084}, {"image_id": 0, "category_id": 3, "bbox": [1697.5096435546875, 0.0, 222.4903564453125, 193.1752471923828], "score": 0.049951519817113876}, {"image_id": 0, "category_id": 3, "bbox": [669.56103515625, 42.59006118774414, 309.156982421875, 644.9569358825684], "score": 0.04696698486804962}, {"image_id": 0, "category_id": 3, "bbox": [1668.965087890625, 41.29924774169922, 251.034912109375, 234.64386749267578], "score": 0.04610244184732437}, {"image_id": 0, "category_id": 3, "bbox": [571.380615234375, 246.01126098632812, 896.9088134765625, 833.9887390136719], "score": 0.046007189899683}, {"image_id": 0, "category_id": 3, "bbox": [641.3889770507812, 138.3158721923828, 381.14117431640625, 685.3566131591797], "score": 0.044205036014318466}, 
{"image_id": 0, "category_id": 3, "bbox": [1625.5928955078125, 0.0, 294.4071044921875, 623.5017700195312], "score": 0.04209528863430023}, {"image_id": 0, "category_id": 3, "bbox": [104.86660766601562, 260.8401794433594, 496.3695983886719, 713.7294616699219], "score": 0.04085254669189453}, {"image_id": 0, "category_id": 3, "bbox": [414.4038391113281, 13.798330307006836, 827.4375915527344, 995.2766208648682], "score": 0.040834248065948486}, {"image_id": 0, "category_id": 3, "bbox": [1304.338623046875, 82.7868423461914, 328.2535400390625, 659.7090072631836], "score": 0.03985943645238876}, {"image_id": 0, "category_id": 3, "bbox": [1513.3153076171875, 0.0, 384.036376953125, 486.5316162109375], "score": 0.03837663680315018}, {"image_id": 1, "category_id": 1, "bbox": [997.5174560546875, 213.14662170410156, 364.3343505859375, 641.0175628662109], "score": 0.24449113011360168}, {"image_id": 1, "category_id": 1, "bbox": [227.72842407226562, 121.88896942138672, 342.5745544433594, 564.0383987426758], "score": 0.1826373040676117}, {"image_id": 1, "category_id": 1, "bbox": [982.564453125, 139.61932373046875, 373.0555419921875, 516.285888671875], "score": 0.16992300748825073}, {"image_id": 1, "category_id": 1, "bbox": [1631.2984619140625, 0.0, 274.699462890625, 555.2535400390625], "score": 0.10783612728118896}, {"image_id": 1, "category_id": 1, "bbox": [357.509033203125, 115.10198974609375, 960.323486328125, 964.8980102539062], "score": 0.09964483976364136}, {"image_id": 1, "category_id": 1, "bbox": [35.93037414550781, 35.0949821472168, 843.0650482177734, 961.9981575012207], "score": 0.0948261097073555}, {"image_id": 1, "category_id": 1, "bbox": [229.6660919189453, 0.0, 382.66831970214844, 524.5968017578125], "score": 0.08264536410570145}, {"image_id": 1, "category_id": 1, "bbox": [1199.9945068359375, 7.507335662841797, 720.0054931640625, 983.9323616027832], "score": 0.07683487236499786}, {"image_id": 1, "category_id": 1, "bbox": [568.7382202148438, 238.6720733642578, 919.2339477539062, 841.3279266357422], "score": 0.07614229619503021}, {"image_id": 1, "category_id": 1, "bbox": [147.3486785888672, 232.01626586914062, 986.4595489501953, 847.9837341308594], "score": 0.06581543385982513}, {"image_id": 1, "category_id": 1, "bbox": [307.1380615234375, 0.0, 395.32794189453125, 577.035400390625], "score": 0.06514111906290054}, {"image_id": 1, "category_id": 1, "bbox": [808.4256591796875, 155.9962921142578, 474.565185546875, 654.7679901123047], "score": 0.061964504420757294}, {"image_id": 1, "category_id": 1, "bbox": [0.0, 40.31918716430664, 568.2688598632812, 945.4715843200684], "score": 0.05491221696138382}, {"image_id": 1, "category_id": 1, "bbox": [218.0932159423828, 149.05030822753906, 432.78709411621094, 697.3251800537109], "score": 0.052795279771089554}, {"image_id": 1, "category_id": 1, "bbox": [1171.8597412109375, 339.8847961425781, 748.1402587890625, 740.1152038574219], "score": 0.05180462822318077}, {"image_id": 1, "category_id": 1, "bbox": [947.7489624023438, 0.0, 373.21234130859375, 559.6314697265625], "score": 0.050812363624572754}, {"image_id": 1, "category_id": 1, "bbox": [47.953216552734375, 0.0, 850.6510314941406, 645.6210327148438], "score": 0.050588712096214294}, {"image_id": 1, "category_id": 1, "bbox": [808.0147705078125, 123.57368469238281, 812.243408203125, 956.4263153076172], "score": 0.04943518340587616}, {"image_id": 1, "category_id": 1, "bbox": [0.0, 252.17372131347656, 738.1876831054688, 827.8262786865234], "score": 0.04770356044173241}, {"image_id": 1, "category_id": 1, "bbox": 
[1071.857177734375, 276.0357971191406, 408.15771484375, 651.8440856933594], "score": 0.046735167503356934}, {"image_id": 1, "category_id": 1, "bbox": [944.5446166992188, 336.05474853515625, 380.69586181640625, 642.3316650390625], "score": 0.04438221827149391}, {"image_id": 1, "category_id": 1, "bbox": [991.9332885742188, 126.97108459472656, 862.9631958007812, 953.0289154052734], "score": 0.04360035061836243}, {"image_id": 1, "category_id": 1, "bbox": [1034.800537109375, 0.0, 498.264404296875, 652.00048828125], "score": 0.04268684610724449}, {"image_id": 1, "category_id": 1, "bbox": [1698.17626953125, 0.0, 221.82373046875, 197.60653686523438], "score": 0.04033979773521423}, {"image_id": 1, "category_id": 1, "bbox": [184.4237823486328, 0.0, 954.5077362060547, 546.6664428710938], "score": 0.039156991988420486}, {"image_id": 1, "category_id": 1, "bbox": [359.7218017578125, 0.0, 982.869873046875, 644.6243896484375], "score": 0.03688075393438339}, {"image_id": 1, "category_id": 1, "bbox": [0.0, 0.0, 578.293701171875, 544.5452270507812], "score": 0.03655608743429184}, {"image_id": 1, "category_id": 1, "bbox": [980.3106079101562, 0.0, 925.4635620117188, 647.032958984375], "score": 0.034414708614349365}, {"image_id": 1, "category_id": 1, "bbox": [1008.9415893554688, 408.375, 340.67498779296875, 237.55377197265625], "score": 0.03395402431488037}, {"image_id": 1, "category_id": 1, "bbox": [764.9998168945312, 453.47576904296875, 926.6813354492188, 626.5242309570312], "score": 0.033227451145648956}, {"image_id": 1, "category_id": 2, "bbox": [973.8045043945312, 134.927001953125, 392.18572998046875, 515.348388671875], "score": 0.2480056881904602}, {"image_id": 1, "category_id": 2, "bbox": [1033.345458984375, 193.55758666992188, 474.0589599609375, 502.2799377441406], "score": 0.12044714391231537}, {"image_id": 1, "category_id": 2, "bbox": [997.5174560546875, 213.14662170410156, 364.3343505859375, 641.0175628662109], "score": 0.10539223253726959}, {"image_id": 1, "category_id": 2, "bbox": [720.1891479492188, 55.10577392578125, 444.19879150390625, 588.2930908203125], "score": 0.06655608117580414}, {"image_id": 1, "category_id": 2, "bbox": [947.7489624023438, 0.0, 373.21234130859375, 559.6314697265625], "score": 0.06620338559150696}, {"image_id": 1, "category_id": 2, "bbox": [944.5446166992188, 336.05474853515625, 380.69586181640625, 642.3316650390625], "score": 0.05874659866094589}, {"image_id": 1, "category_id": 2, "bbox": [1213.3853759765625, 15.75413703918457, 356.4039306640625, 569.0481090545654], "score": 0.05729030817747116}, {"image_id": 1, "category_id": 2, "bbox": [728.943115234375, 204.8000946044922, 436.90478515625, 587.9622955322266], "score": 0.056557174772024155}, {"image_id": 1, "category_id": 2, "bbox": [837.0780029296875, 285.71533203125, 426.9288330078125, 633.6383666992188], "score": 0.05022319778800011}, {"image_id": 1, "category_id": 2, "bbox": [1006.3452758789062, 482.0842590332031, 350.26129150390625, 594.3320007324219], "score": 0.04891819506883621}, {"image_id": 1, "category_id": 2, "bbox": [1102.243408203125, 342.4518127441406, 371.1197509765625, 634.0279235839844], "score": 0.048709992319345474}, {"image_id": 1, "category_id": 2, "bbox": [822.7955932617188, 485.13818359375, 428.04119873046875, 590.968505859375], "score": 0.04719912260770798}, {"image_id": 1, "category_id": 2, "bbox": [1185.6767578125, 247.70567321777344, 413.442626953125, 614.7293853759766], "score": 0.04611574485898018}, {"image_id": 1, "category_id": 2, "bbox": [1312.8875732421875, 0.0, 401.6728515625, 
478.9422302246094], "score": 0.0458284355700016}, {"image_id": 1, "category_id": 2, "bbox": [1078.21728515625, 0.0, 401.2777099609375, 556.3494262695312], "score": 0.04388556629419327}, {"image_id": 1, "category_id": 2, "bbox": [1112.9974365234375, 543.3936157226562, 376.166748046875, 536.6063842773438], "score": 0.04236176237463951}, {"image_id": 1, "category_id": 2, "bbox": [1393.505126953125, 0.0, 426.3736572265625, 538.716552734375], "score": 0.041083235293626785}, {"image_id": 1, "category_id": 2, "bbox": [209.38162231445312, 13.68343734741211, 375.6072692871094, 580.5582618713379], "score": 0.040439072996377945}, {"image_id": 1, "category_id": 2, "bbox": [638.6920166015625, 106.63209533691406, 390.194091796875, 589.7396697998047], "score": 0.040002308785915375}, {"image_id": 1, "category_id": 2, "bbox": [1293.2567138671875, 100.03126525878906, 381.57763671875, 611.4940032958984], "score": 0.03915474936366081}, {"image_id": 1, "category_id": 2, "bbox": [808.0147705078125, 123.57368469238281, 812.243408203125, 956.4263153076172], "score": 0.038055650889873505}, {"image_id": 1, "category_id": 2, "bbox": [1281.4400634765625, 595.01318359375, 434.9503173828125, 484.98681640625], "score": 0.037920162081718445}, {"image_id": 1, "category_id": 2, "bbox": [806.5025024414062, 0.0, 473.18572998046875, 627.6235961914062], "score": 0.03787766024470329}, {"image_id": 1, "category_id": 2, "bbox": [158.92922973632812, 118.40994262695312, 950.0995788574219, 961.5900573730469], "score": 0.03619258478283882}, {"image_id": 1, "category_id": 2, "bbox": [231.0110321044922, 149.81060791015625, 344.5010528564453, 588.2882080078125], "score": 0.03588823601603508}, {"image_id": 1, "category_id": 2, "bbox": [619.6505126953125, 24.263837814331055, 830.1717529296875, 972.438310623169], "score": 0.03542332351207733}, {"image_id": 1, "category_id": 2, "bbox": [360.0400085449219, 226.43418884277344, 963.6678771972656, 853.5658111572266], "score": 0.03445490449666977}, {"image_id": 1, "category_id": 2, "bbox": [15.535308837890625, 282.0261535644531, 452.07696533203125, 572.8584899902344], "score": 0.034069325774908066}, {"image_id": 1, "category_id": 2, "bbox": [1520.5927734375, 0.0, 377.05078125, 488.8376159667969], "score": 0.03389637544751167}, {"image_id": 1, "category_id": 2, "bbox": [783.0108642578125, 0.0, 922.7584228515625, 736.8048095703125], "score": 0.0338481180369854}, {"image_id": 1, "category_id": 2, "bbox": [982.9711303710938, 240.32254028320312, 883.5609741210938, 839.6774597167969], "score": 0.033812154084444046}, {"image_id": 1, "category_id": 2, "bbox": [1214.23583984375, 393.50433349609375, 397.9107666015625, 656.9534301757812], "score": 0.03371719643473625}, {"image_id": 1, "category_id": 2, "bbox": [704.4246826171875, 370.22894287109375, 470.9154052734375, 606.06640625], "score": 0.033109813928604126}, {"image_id": 1, "category_id": 2, "bbox": [518.7343139648438, 103.01958465576172, 408.92376708984375, 601.5965042114258], "score": 0.03308727219700813}, {"image_id": 1, "category_id": 2, "bbox": [996.6253662109375, 0.0, 416.7921142578125, 429.11865234375], "score": 0.03264719247817993}, {"image_id": 1, "category_id": 3, "bbox": [982.564453125, 139.61932373046875, 373.0555419921875, 516.285888671875], "score": 0.7264499664306641}, {"image_id": 1, "category_id": 3, "bbox": [997.5174560546875, 213.14662170410156, 364.3343505859375, 641.0175628662109], "score": 0.4768505096435547}, {"image_id": 1, "category_id": 3, "bbox": [1033.345458984375, 193.55758666992188, 474.0589599609375, 502.2799377441406], 
"score": 0.26717087626457214}, {"image_id": 1, "category_id": 3, "bbox": [808.4256591796875, 155.9962921142578, 474.565185546875, 654.7679901123047], "score": 0.1510302722454071}, {"image_id": 1, "category_id": 3, "bbox": [223.36300659179688, 74.33831787109375, 344.9846496582031, 563.6932373046875], "score": 0.13036808371543884}, {"image_id": 1, "category_id": 3, "bbox": [947.7489624023438, 0.0, 373.21234130859375, 559.6314697265625], "score": 0.12277168035507202}, {"image_id": 1, "category_id": 3, "bbox": [1071.857177734375, 276.0357971191406, 408.15771484375, 651.8440856933594], "score": 0.1194649338722229}, {"image_id": 1, "category_id": 3, "bbox": [943.7728881835938, 0.0, 537.1013793945312, 660.8310546875], "score": 0.1142682284116745}, {"image_id": 1, "category_id": 3, "bbox": [944.5446166992188, 336.05474853515625, 380.69586181640625, 642.3316650390625], "score": 0.09109840542078018}, {"image_id": 1, "category_id": 3, "bbox": [806.5025024414062, 0.0, 473.18572998046875, 627.6235961914062], "score": 0.08420402556657791}, {"image_id": 1, "category_id": 3, "bbox": [597.389404296875, 121.8159408569336, 852.0494384765625, 958.1840591430664], "score": 0.08250249177217484}, {"image_id": 1, "category_id": 3, "bbox": [1199.9945068359375, 7.507335662841797, 720.0054931640625, 983.9323616027832], "score": 0.07666439563035965}, {"image_id": 1, "category_id": 3, "bbox": [1136.9561767578125, 55.35620880126953, 506.6279296875, 679.4648971557617], "score": 0.07268732786178589}, {"image_id": 1, "category_id": 3, "bbox": [307.1380615234375, 0.0, 395.32794189453125, 577.035400390625], "score": 0.0674259215593338}, {"image_id": 1, "category_id": 3, "bbox": [1649.48876953125, 40.35232925415039, 270.51123046875, 242.96892929077148], "score": 0.06582880765199661}, {"image_id": 1, "category_id": 3, "bbox": [1698.17626953125, 0.0, 221.82373046875, 197.60653686523438], "score": 0.06534602493047714}, {"image_id": 1, "category_id": 3, "bbox": [1516.6143798828125, 0.0, 403.3856201171875, 230.1896209716797], "score": 0.06522159278392792}, {"image_id": 1, "category_id": 3, "bbox": [158.65740966796875, 150.12693786621094, 437.06036376953125, 690.1106719970703], "score": 0.060804471373558044}, {"image_id": 1, "category_id": 3, "bbox": [807.333251953125, 234.8459930419922, 827.429443359375, 845.1540069580078], "score": 0.06033357232809067}, {"image_id": 1, "category_id": 3, "bbox": [4.9527740478515625, 243.22787475585938, 928.4971160888672, 836.7721252441406], "score": 0.05078185349702835}, {"image_id": 1, "category_id": 3, "bbox": [329.77490234375, 326.3771667480469, 1026.0186767578125, 753.6228332519531], "score": 0.049885157495737076}, {"image_id": 1, "category_id": 3, "bbox": [991.9332885742188, 126.97108459472656, 862.9631958007812, 953.0289154052734], "score": 0.04851303622126579}, {"image_id": 1, "category_id": 3, "bbox": [980.3106079101562, 0.0, 925.4635620117188, 647.032958984375], "score": 0.04800134897232056}, {"image_id": 1, "category_id": 3, "bbox": [1338.8642578125, 233.53536987304688, 581.1357421875, 846.4646301269531], "score": 0.04772469028830528}, {"image_id": 1, "category_id": 3, "bbox": [1213.3853759765625, 15.75413703918457, 356.4039306640625, 569.0481090545654], "score": 0.04643239453434944}, {"image_id": 1, "category_id": 3, "bbox": [800.8641967773438, 0.0, 852.9357299804688, 849.7445678710938], "score": 0.044755831360816956}, {"image_id": 1, "category_id": 3, "bbox": [312.36627197265625, 167.0087890625, 401.69873046875, 661.0322265625], "score": 0.041519712656736374}, {"image_id": 1, 
"category_id": 3, "bbox": [400.1049499511719, 48.6071891784668, 419.9554748535156, 610.5923957824707], "score": 0.0405220091342926}, {"image_id": 1, "category_id": 3, "bbox": [718.2431030273438, 98.81644439697266, 450.33905029296875, 598.4150009155273], "score": 0.038510944694280624}, {"image_id": 1, "category_id": 3, "bbox": [116.00665283203125, 449.1982116699219, 1051.5519409179688, 630.8017883300781], "score": 0.037095338106155396}, {"image_id": 1, "category_id": 3, "bbox": [181.21389770507812, 25.111913681030273, 926.2496032714844, 974.7529544830322], "score": 0.035761523991823196}, {"image_id": 1, "category_id": 3, "bbox": [1394.4649658203125, 0.0, 428.072021484375, 483.1687316894531], "score": 0.034289196133613586}, {"image_id": 1, "category_id": 3, "bbox": [1638.4244384765625, 0.0, 267.3065185546875, 489.2379455566406], "score": 0.034216392785310745}, {"image_id": 1, "category_id": 3, "bbox": [343.0436096191406, 0.0, 1001.8591003417969, 851.460205078125], "score": 0.03389797732234001}, {"image_id": 1, "category_id": 3, "bbox": [36.36460876464844, 119.03919982910156, 466.03038024902344, 653.4481048583984], "score": 0.03331182524561882}, {"image_id": 2, "category_id": 1, "bbox": [744.51220703125, 19.65835189819336, 205.94293212890625, 430.02743911743164], "score": 0.6258186101913452}, {"image_id": 2, "category_id": 1, "bbox": [341.05206298828125, 255.701904296875, 218.2022705078125, 387.58868408203125], "score": 0.6193803548812866}, {"image_id": 2, "category_id": 1, "bbox": [783.372314453125, 424.6482849121094, 224.4444580078125, 295.3517150878906], "score": 0.5141648054122925}, {"image_id": 2, "category_id": 1, "bbox": [726.4847412109375, 93.18611907958984, 169.7232666015625, 394.22974395751953], "score": 0.41604265570640564}, {"image_id": 2, "category_id": 1, "bbox": [335.10595703125, 370.3768310546875, 223.8814697265625, 349.6231689453125], "score": 0.3340129554271698}, {"image_id": 2, "category_id": 1, "bbox": [717.7142333984375, 0.0, 181.5628662109375, 393.0101013183594], "score": 0.33154189586639404}, {"image_id": 2, "category_id": 1, "bbox": [779.9652099609375, 274.44085693359375, 284.675048828125, 438.3267822265625], "score": 0.31457409262657166}, {"image_id": 2, "category_id": 1, "bbox": [846.1083374023438, 313.611328125, 277.40692138671875, 406.388671875], "score": 0.29272767901420593}, {"image_id": 2, "category_id": 1, "bbox": [776.2566528320312, 0.0, 186.19512939453125, 338.6823425292969], "score": 0.2866514027118683}, {"image_id": 2, "category_id": 1, "bbox": [817.074462890625, 26.227901458740234, 210.306640625, 416.2046546936035], "score": 0.27876150608062744}, {"image_id": 2, "category_id": 1, "bbox": [873.1625366210938, 182.67300415039062, 263.01251220703125, 454.3901672363281], "score": 0.26855337619781494}, {"image_id": 2, "category_id": 1, "bbox": [734.4992065429688, 119.64443969726562, 224.48321533203125, 421.4815368652344], "score": 0.24493303894996643}, {"image_id": 2, "category_id": 1, "bbox": [606.072509765625, 438.74627685546875, 218.1585693359375, 281.25372314453125], "score": 0.23254962265491486}, {"image_id": 2, "category_id": 1, "bbox": [920.0265502929688, 267.436279296875, 231.66461181640625, 436.240234375], "score": 0.20196624100208282}, {"image_id": 2, "category_id": 1, "bbox": [652.8048706054688, 61.643943786621094, 231.64385986328125, 397.25260162353516], "score": 0.18838302791118622}, {"image_id": 2, "category_id": 1, "bbox": [705.2552490234375, 395.81103515625, 270.5015869140625, 324.18896484375], "score": 0.16972988843917847}, {"image_id": 2, 
"category_id": 1, "bbox": [253.3760986328125, 301.7941589355469, 290.6611328125, 355.4490661621094], "score": 0.159967303276062}, {"image_id": 2, "category_id": 1, "bbox": [321.4034729003906, 172.58836364746094, 295.1708068847656, 481.1648712158203], "score": 0.13419145345687866}, {"image_id": 2, "category_id": 1, "bbox": [813.7197875976562, 161.0009002685547, 239.02044677734375, 432.31727600097656], "score": 0.11243642121553421}, {"image_id": 2, "category_id": 1, "bbox": [807.4182739257812, 0.0, 215.60589599609375, 300.4808044433594], "score": 0.11205413937568665}, {"image_id": 2, "category_id": 1, "bbox": [859.1271362304688, 21.510337829589844, 237.39678955078125, 423.57401275634766], "score": 0.1014789417386055}, {"image_id": 2, "category_id": 1, "bbox": [756.2540283203125, 203.2908935546875, 220.2879638671875, 415.2799072265625], "score": 0.09805745631456375}, {"image_id": 2, "category_id": 1, "bbox": [1073.6773681640625, 0.0, 205.17919921875, 371.9908447265625], "score": 0.0948367491364479}, {"image_id": 2, "category_id": 1, "bbox": [485.6324462890625, 504.1072082519531, 289.45184326171875, 215.89279174804688], "score": 0.08554235845804214}, {"image_id": 2, "category_id": 1, "bbox": [26.012725830078125, 91.69560241699219, 570.5726623535156, 628.3043975830078], "score": 0.080867238342762}, {"image_id": 2, "category_id": 1, "bbox": [642.5072021484375, 0.0, 219.867431640625, 347.63055419921875], "score": 0.08052774518728256}, {"image_id": 2, "category_id": 1, "bbox": [689.7314453125, 195.74658203125, 231.28759765625, 428.3529052734375], "score": 0.080379419028759}, {"image_id": 2, "category_id": 1, "bbox": [620.2318115234375, 149.4236297607422, 261.87451171875, 442.38990783691406], "score": 0.07949261367321014}, {"image_id": 2, "category_id": 1, "bbox": [133.427490234375, 19.396190643310547, 602.2239379882812, 636.1353645324707], "score": 0.0761333554983139}, {"image_id": 2, "category_id": 1, "bbox": [292.21710205078125, 93.14957427978516, 550.6807250976562, 626.8504257202148], "score": 0.07333367317914963}, {"image_id": 2, "category_id": 1, "bbox": [777.72705078125, 169.46307373046875, 502.27294921875, 550.5369262695312], "score": 0.06907118856906891}, {"image_id": 2, "category_id": 1, "bbox": [917.220458984375, 0.0, 256.2156982421875, 412.54730224609375], "score": 0.0612383596599102}, {"image_id": 2, "category_id": 1, "bbox": [932.5823974609375, 149.875, 219.9869384765625, 403.67120361328125], "score": 0.06098754703998566}, {"image_id": 2, "category_id": 1, "bbox": [352.563232421875, 139.30145263671875, 226.39105224609375, 420.54901123046875], "score": 0.060007575899362564}, {"image_id": 2, "category_id": 1, "bbox": [678.4630126953125, 278.8869323730469, 276.070556640625, 420.6309509277344], "score": 0.05950196459889412}, {"image_id": 2, "category_id": 1, "bbox": [460.831787109375, 337.7149658203125, 298.4686279296875, 343.38970947265625], "score": 0.05949684605002403}, {"image_id": 2, "category_id": 1, "bbox": [998.7368774414062, 0.0, 245.53350830078125, 412.25396728515625], "score": 0.05697806552052498}, {"image_id": 2, "category_id": 2, "bbox": [460.831787109375, 337.7149658203125, 298.4686279296875, 343.38970947265625], "score": 0.2763864994049072}, {"image_id": 2, "category_id": 2, "bbox": [528.7192993164062, 380.21295166015625, 252.83746337890625, 339.78704833984375], "score": 0.19620339572429657}, {"image_id": 2, "category_id": 2, "bbox": [399.87469482421875, 411.403564453125, 328.1947021484375, 308.596435546875], "score": 0.16476958990097046}, {"image_id": 2, "category_id": 2, 
"bbox": [332.7411804199219, 402.03375244140625, 260.7944641113281, 317.96624755859375], "score": 0.13649119436740875}, {"image_id": 2, "category_id": 2, "bbox": [583.6277465820312, 337.2882995605469, 288.97125244140625, 357.6206970214844], "score": 0.12643887102603912}, {"image_id": 2, "category_id": 2, "bbox": [517.05029296875, 255.56263732910156, 268.694091796875, 385.2583465576172], "score": 0.11980830878019333}, {"image_id": 2, "category_id": 2, "bbox": [337.965087890625, 301.36932373046875, 227.52069091796875, 358.75146484375], "score": 0.10761857777833939}, {"image_id": 2, "category_id": 2, "bbox": [695.6056518554688, 323.1462097167969, 274.38934326171875, 390.1117248535156], "score": 0.102951280772686}, {"image_id": 2, "category_id": 2, "bbox": [136.2960662841797, 406.1953125, 331.1933135986328, 313.8046875], "score": 0.09216462075710297}, {"image_id": 2, "category_id": 2, "bbox": [216.53497314453125, 378.3658447265625, 321.48394775390625, 341.6341552734375], "score": 0.09048698842525482}, {"image_id": 2, "category_id": 2, "bbox": [612.4758911132812, 403.2018127441406, 222.2435302734375, 316.7981872558594], "score": 0.08795756101608276}, {"image_id": 2, "category_id": 2, "bbox": [765.1884765625, 273.8833312988281, 259.88916015625, 436.8657531738281], "score": 0.08420597016811371}, {"image_id": 2, "category_id": 2, "bbox": [783.372314453125, 424.6482849121094, 224.4444580078125, 295.3517150878906], "score": 0.07613956928253174}, {"image_id": 2, "category_id": 2, "bbox": [458.3592834472656, 219.12957763671875, 286.9651794433594, 390.527587890625], "score": 0.0755750983953476}, {"image_id": 2, "category_id": 2, "bbox": [722.8893432617188, 127.14606475830078, 178.783935546875, 403.04442596435547], "score": 0.07539104670286179}, {"image_id": 2, "category_id": 2, "bbox": [396.8189697265625, 244.10755920410156, 283.2181396484375, 410.03013610839844], "score": 0.06955207884311676}, {"image_id": 2, "category_id": 2, "bbox": [609.1813354492188, 197.85885620117188, 257.57708740234375, 427.9960021972656], "score": 0.06862004101276398}, {"image_id": 2, "category_id": 2, "bbox": [583.3659057617188, 479.3427734375, 219.020751953125, 240.6572265625], "score": 0.06362133473157883}, {"image_id": 2, "category_id": 2, "bbox": [739.1348876953125, 75.46237182617188, 216.69390869140625, 425.19586181640625], "score": 0.06258812546730042}, {"image_id": 2, "category_id": 2, "bbox": [689.7314453125, 195.74658203125, 231.28759765625, 428.3529052734375], "score": 0.062394216656684875}, {"image_id": 2, "category_id": 2, "bbox": [548.1318359375, 168.33888244628906, 257.3387451171875, 412.57603454589844], "score": 0.05868319049477577}, {"image_id": 2, "category_id": 2, "bbox": [811.650634765625, 189.78045654296875, 256.252685546875, 440.336669921875], "score": 0.05751308426260948}, {"image_id": 2, "category_id": 2, "bbox": [812.8318481445312, 48.27541732788086, 218.65264892578125, 413.82223892211914], "score": 0.057065609842538834}, {"image_id": 2, "category_id": 3, "bbox": [458.23187255859375, 301.9195251464844, 303.54986572265625, 349.7483215332031], "score": 0.4590396583080292}, {"image_id": 2, "category_id": 3, "bbox": [337.965087890625, 301.36932373046875, 227.52069091796875, 358.75146484375], "score": 0.3770824670791626}, {"image_id": 2, "category_id": 3, "bbox": [421.9063720703125, 379.9833068847656, 300.52996826171875, 338.5633850097656], "score": 0.27733495831489563}, {"image_id": 2, "category_id": 3, "bbox": [739.1348876953125, 75.46237182617188, 216.69390869140625, 425.19586181640625], "score": 
0.2621413767337799}, {"image_id": 2, "category_id": 3, "bbox": [528.7192993164062, 380.21295166015625, 252.83746337890625, 339.78704833984375], "score": 0.2607411742210388}, {"image_id": 2, "category_id": 3, "bbox": [784.1431274414062, 394.77001953125, 226.4840087890625, 325.22998046875], "score": 0.18814820051193237}, {"image_id": 2, "category_id": 3, "bbox": [612.4758911132812, 403.2018127441406, 222.2435302734375, 316.7981872558594], "score": 0.18639683723449707}, {"image_id": 2, "category_id": 3, "bbox": [302.52392578125, 381.92193603515625, 278.96337890625, 338.07806396484375], "score": 0.18557214736938477}, {"image_id": 2, "category_id": 3, "bbox": [583.6277465820312, 337.2882995605469, 288.97125244140625, 357.6206970214844], "score": 0.18001800775527954}, {"image_id": 2, "category_id": 3, "bbox": [765.1612548828125, 0.0, 187.5819091796875, 393.3279724121094], "score": 0.17456629872322083}, {"image_id": 2, "category_id": 3, "bbox": [703.5507202148438, 361.93634033203125, 273.1512451171875, 358.06365966796875], "score": 0.1600247323513031}, {"image_id": 2, "category_id": 3, "bbox": [722.8893432617188, 127.14606475830078, 178.783935546875, 403.04442596435547], "score": 0.1579897403717041}, {"image_id": 2, "category_id": 3, "bbox": [793.8466186523438, 223.62258911132812, 278.86956787109375, 451.4606018066406], "score": 0.15615512430667877}, {"image_id": 2, "category_id": 3, "bbox": [651.6652221679688, 99.53595733642578, 220.79681396484375, 385.6624984741211], "score": 0.137779101729393}, {"image_id": 2, "category_id": 3, "bbox": [812.8318481445312, 48.27541732788086, 218.65264892578125, 413.82223892211914], "score": 0.13507284224033356}, {"image_id": 2, "category_id": 3, "bbox": [717.7142333984375, 0.0, 181.5628662109375, 393.0101013183594], "score": 0.12757351994514465}, {"image_id": 2, "category_id": 3, "bbox": [252.07528686523438, 338.7651062011719, 283.1451110839844, 347.1487731933594], "score": 0.1192394495010376}, {"image_id": 2, "category_id": 3, "bbox": [583.3659057617188, 479.3427734375, 219.020751953125, 240.6572265625], "score": 0.10770971328020096}, {"image_id": 2, "category_id": 3, "bbox": [321.4034729003906, 172.58836364746094, 295.1708068847656, 481.1648712158203], "score": 0.10290965437889099}, {"image_id": 2, "category_id": 3, "bbox": [846.1083374023438, 313.611328125, 277.40692138671875, 406.388671875], "score": 0.10273727774620056}, {"image_id": 2, "category_id": 3, "bbox": [873.1625366210938, 182.67300415039062, 263.01251220703125, 454.3901672363281], "score": 0.09928280115127563}, {"image_id": 2, "category_id": 3, "bbox": [819.218994140625, 0.0, 197.50787353515625, 328.0443115234375], "score": 0.09798740595579147}, {"image_id": 2, "category_id": 3, "bbox": [756.2540283203125, 203.2908935546875, 220.2879638671875, 415.2799072265625], "score": 0.09763888269662857}, {"image_id": 2, "category_id": 3, "bbox": [689.7314453125, 195.74658203125, 231.28759765625, 428.3529052734375], "score": 0.08941485732793808}, {"image_id": 2, "category_id": 3, "bbox": [529.9564819335938, 216.22195434570312, 271.16717529296875, 394.2934265136719], "score": 0.08671518415212631}, {"image_id": 2, "category_id": 3, "bbox": [672.0599975585938, 167.43064880371094, 539.4906616210938, 552.5693511962891], "score": 0.08390206098556519}, {"image_id": 2, "category_id": 3, "bbox": [873.9005126953125, 103.83332824707031, 227.7401123046875, 394.8602752685547], "score": 0.08202643692493439}, {"image_id": 2, "category_id": 3, "bbox": [920.0265502929688, 267.436279296875, 231.66461181640625, 436.240234375], 
"score": 0.08083553612232208}, {"image_id": 2, "category_id": 3, "bbox": [384.2223205566406, 166.7992401123047, 591.0961608886719, 553.2007598876953], "score": 0.0731722041964531}, {"image_id": 2, "category_id": 3, "bbox": [620.2318115234375, 149.4236297607422, 261.87451171875, 442.38990783691406], "score": 0.07241438329219818}, {"image_id": 2, "category_id": 3, "bbox": [396.8189697265625, 244.10755920410156, 283.2181396484375, 410.03013610839844], "score": 0.07126547396183014}, {"image_id": 2, "category_id": 3, "bbox": [277.08489990234375, 246.34165954589844, 579.9214477539062, 473.65834045410156], "score": 0.06578364968299866}, {"image_id": 2, "category_id": 3, "bbox": [642.5072021484375, 0.0, 219.867431640625, 347.63055419921875], "score": 0.06564173847436905}, {"image_id": 2, "category_id": 3, "bbox": [858.1854248046875, 0.0, 236.5518798828125, 417.0265197753906], "score": 0.06508669257164001}, {"image_id": 2, "category_id": 3, "bbox": [922.9540405273438, 74.82980346679688, 236.89178466796875, 396.58624267578125], "score": 0.06488502770662308}, {"image_id": 2, "category_id": 3, "bbox": [685.9042358398438, 0.0, 532.8472290039062, 499.79718017578125], "score": 0.0637088492512703}, {"image_id": 2, "category_id": 3, "bbox": [521.4393310546875, 234.0424041748047, 593.0108642578125, 485.9575958251953], "score": 0.06309568136930466}, {"image_id": 2, "category_id": 3, "bbox": [257.2774353027344, 21.257274627685547, 605.9231262207031, 635.1350593566895], "score": 0.06047413498163223}, {"image_id": 2, "category_id": 3, "bbox": [144.00750732421875, 91.28055572509766, 588.2901000976562, 628.7194442749023], "score": 0.06018701568245888}, {"image_id": 2, "category_id": 3, "bbox": [0.0, 253.0356903076172, 624.9921875, 466.9643096923828], "score": 0.058970093727111816}, {"image_id": 3, "category_id": 1, "bbox": [1062.6109619140625, 172.09158325195312, 267.5137939453125, 690.2594909667969], "score": 0.8554974794387817}, {"image_id": 3, "category_id": 1, "bbox": [1044.677490234375, 304.13751220703125, 328.4405517578125, 673.0938720703125], "score": 0.22548407316207886}, {"image_id": 3, "category_id": 1, "bbox": [1067.3922119140625, 105.67932891845703, 410.9188232421875, 663.9231491088867], "score": 0.18820108473300934}, {"image_id": 3, "category_id": 1, "bbox": [942.4746704101562, 95.7232666015625, 406.83685302734375, 671.319091796875], "score": 0.17742173373699188}, {"image_id": 3, "category_id": 1, "bbox": [211.43357849121094, 63.336402893066406, 371.26515197753906, 566.8272933959961], "score": 0.13464315235614777}, {"image_id": 3, "category_id": 1, "bbox": [1071.1988525390625, 248.0361328125, 435.7906494140625, 693.4053955078125], "score": 0.1281583160161972}, {"image_id": 3, "category_id": 1, "bbox": [15.644989013671875, 26.96539306640625, 861.0198669433594, 971.7730102539062], "score": 0.12508565187454224}, {"image_id": 3, "category_id": 1, "bbox": [754.4990844726562, 338.10528564453125, 416.88629150390625, 548.2113647460938], "score": 0.10627594590187073}, {"image_id": 3, "category_id": 1, "bbox": [855.8370361328125, 139.90440368652344, 376.65869140625, 656.0648345947266], "score": 0.09000253677368164}, {"image_id": 3, "category_id": 1, "bbox": [752.0570068359375, 492.2317199707031, 427.544677734375, 524.1719665527344], "score": 0.08174709230661392}, {"image_id": 3, "category_id": 1, "bbox": [379.1195068359375, 12.600322723388672, 904.7841796875, 976.2365913391113], "score": 0.06506496667861938}, {"image_id": 3, "category_id": 1, "bbox": [859.6172485351562, 306.58062744140625, 490.55670166015625, 
691.5777587890625], "score": 0.05971480533480644}, {"image_id": 3, "category_id": 1, "bbox": [40.517669677734375, 0.0, 841.0033264160156, 644.2609252929688], "score": 0.059572476893663406}, {"image_id": 3, "category_id": 1, "bbox": [1612.1806640625, 0.0, 307.8193359375, 632.8601684570312], "score": 0.056635987013578415}, {"image_id": 3, "category_id": 1, "bbox": [1199.2998046875, 2.7812232971191406, 720.7001953125, 969.9078025817871], "score": 0.05443067476153374}, {"image_id": 3, "category_id": 1, "bbox": [1044.2452392578125, 0.0, 338.06689453125, 630.6046752929688], "score": 0.0515710785984993}, {"image_id": 3, "category_id": 1, "bbox": [164.13111877441406, 124.5389175415039, 958.2756195068359, 955.4610824584961], "score": 0.05086848884820938}, {"image_id": 3, "category_id": 1, "bbox": [871.9336547851562, 16.370933532714844, 712.7164916992188, 969.6228408813477], "score": 0.05083303898572922}, {"image_id": 3, "category_id": 1, "bbox": [225.689208984375, 167.04391479492188, 386.56988525390625, 668.6869201660156], "score": 0.04855210706591606}, {"image_id": 3, "category_id": 1, "bbox": [312.9552917480469, 449.56884765625, 1014.3861389160156, 630.43115234375], "score": 0.046587973833084106}, {"image_id": 3, "category_id": 1, "bbox": [0.0, 366.6127624511719, 970.5386352539062, 713.3872375488281], "score": 0.043125469237565994}, {"image_id": 3, "category_id": 1, "bbox": [584.5629272460938, 134.6425018310547, 870.8963012695312, 945.3574981689453], "score": 0.04302931949496269}, {"image_id": 3, "category_id": 1, "bbox": [206.25791931152344, 0.0, 922.0660552978516, 542.7752685546875], "score": 0.04221544787287712}, {"image_id": 3, "category_id": 1, "bbox": [284.2922058105469, 30.980749130249023, 409.1413879394531, 540.564172744751], "score": 0.04201867803931236}, {"image_id": 3, "category_id": 1, "bbox": [0.0, 133.40328979492188, 543.7913818359375, 946.5967102050781], "score": 0.040996551513671875}, {"image_id": 3, "category_id": 1, "bbox": [774.9108276367188, 163.31649780273438, 356.80828857421875, 605.0807800292969], "score": 0.04084865003824234}, {"image_id": 3, "category_id": 1, "bbox": [1169.1866455078125, 175.12033081054688, 405.1151123046875, 683.1728210449219], "score": 0.03840429335832596}, {"image_id": 3, "category_id": 1, "bbox": [0.0, 0.0, 579.2350463867188, 541.5942993164062], "score": 0.03822336345911026}, {"image_id": 3, "category_id": 1, "bbox": [942.8778076171875, 240.502685546875, 964.590576171875, 839.497314453125], "score": 0.03770189359784126}, {"image_id": 3, "category_id": 2, "bbox": [752.0570068359375, 492.2317199707031, 427.544677734375, 524.1719665527344], "score": 0.3003515899181366}, {"image_id": 3, "category_id": 2, "bbox": [754.4990844726562, 338.10528564453125, 416.88629150390625, 548.2113647460938], "score": 0.233663409948349}, {"image_id": 3, "category_id": 2, "bbox": [653.6954956054688, 408.7784729003906, 446.77386474609375, 501.6832580566406], "score": 0.22367450594902039}, {"image_id": 3, "category_id": 2, "bbox": [774.9108276367188, 163.31649780273438, 356.80828857421875, 605.0807800292969], "score": 0.0981830283999443}, {"image_id": 3, "category_id": 2, "bbox": [652.7385864257812, 207.93833923339844, 391.58514404296875, 607.3859405517578], "score": 0.0954592376947403}, {"image_id": 3, "category_id": 2, "bbox": [856.4178466796875, 0.0, 382.7547607421875, 505.3421936035156], "score": 0.07242642343044281}, {"image_id": 3, "category_id": 2, "bbox": [855.8370361328125, 139.90440368652344, 376.65869140625, 656.0648345947266], "score": 0.07181432098150253}, 
{"image_id": 3, "category_id": 2, "bbox": [525.6502685546875, 257.5973815917969, 434.15716552734375, 606.4940490722656], "score": 0.07046037167310715}, {"image_id": 3, "category_id": 2, "bbox": [755.4590454101562, 9.1924409866333, 340.84844970703125, 572.8677396774292], "score": 0.06798525899648666}, {"image_id": 3, "category_id": 2, "bbox": [975.8546142578125, 203.83355712890625, 368.1129150390625, 680.2666625976562], "score": 0.06325004249811172}, {"image_id": 3, "category_id": 2, "bbox": [645.7936401367188, 54.5694694519043, 361.14892578125, 590.0721321105957], "score": 0.05978944152593613}, {"image_id": 3, "category_id": 2, "bbox": [997.7792358398438, 0.0, 397.32757568359375, 454.37225341796875], "score": 0.054236311465501785}, {"image_id": 3, "category_id": 2, "bbox": [939.0239868164062, 0.0, 357.60491943359375, 613.39453125], "score": 0.053379639983177185}, {"image_id": 3, "category_id": 2, "bbox": [1073.0240478515625, 157.5142822265625, 251.18603515625, 657.6571044921875], "score": 0.05317959934473038}, {"image_id": 3, "category_id": 2, "bbox": [506.24615478515625, 450.9496765136719, 481.31195068359375, 616.5881652832031], "score": 0.05315014347434044}, {"image_id": 3, "category_id": 2, "bbox": [1327.133056640625, 0.0, 403.162109375, 473.2433166503906], "score": 0.044893115758895874}, {"image_id": 3, "category_id": 2, "bbox": [868.4386596679688, 513.032470703125, 472.65374755859375, 566.967529296875], "score": 0.044380977749824524}, {"image_id": 3, "category_id": 2, "bbox": [1278.5374755859375, 606.5696411132812, 425.0609130859375, 473.43035888671875], "score": 0.043142929673194885}, {"image_id": 3, "category_id": 2, "bbox": [195.42141723632812, 9.756786346435547, 910.6348571777344, 986.5281867980957], "score": 0.04299960285425186}, {"image_id": 3, "category_id": 2, "bbox": [985.7695922851562, 448.445556640625, 428.54547119140625, 623.4085693359375], "score": 0.042913708835840225}, {"image_id": 3, "category_id": 2, "bbox": [425.92767333984375, 375.4584045410156, 431.82537841796875, 574.7204895019531], "score": 0.040645796805620193}, {"image_id": 3, "category_id": 2, "bbox": [584.5629272460938, 134.6425018310547, 870.8963012695312, 945.3574981689453], "score": 0.0405254028737545}, {"image_id": 3, "category_id": 2, "bbox": [889.6776733398438, 0.0, 465.28961181640625, 358.43231201171875], "score": 0.04014848545193672}, {"image_id": 3, "category_id": 2, "bbox": [1097.4705810546875, 0.0, 350.6376953125, 546.5389404296875], "score": 0.03943585231900215}, {"image_id": 3, "category_id": 2, "bbox": [1195.40625, 12.213595390319824, 372.856201171875, 579.0114412307739], "score": 0.03930164873600006}, {"image_id": 3, "category_id": 2, "bbox": [517.7697143554688, 97.83709716796875, 413.25732421875, 607.6661376953125], "score": 0.03894736245274544}, {"image_id": 3, "category_id": 2, "bbox": [871.9336547851562, 16.370933532714844, 712.7164916992188, 969.6228408813477], "score": 0.038895394653081894}, {"image_id": 3, "category_id": 2, "bbox": [1100.6619873046875, 515.8238525390625, 396.0205078125, 564.1761474609375], "score": 0.03860515356063843}, {"image_id": 3, "category_id": 3, "bbox": [758.8390502929688, 401.0691833496094, 410.55584716796875, 512.3815612792969], "score": 0.6097580790519714}, {"image_id": 3, "category_id": 3, "bbox": [759.6553955078125, 203.74600219726562, 385.493896484375, 620.5393981933594], "score": 0.32060912251472473}, {"image_id": 3, "category_id": 3, "bbox": [1053.79736328125, 249.32789611816406, 309.4051513671875, 684.8350067138672], "score": 0.2741156220436096}, 
{"image_id": 3, "category_id": 3, "bbox": [651.7236938476562, 458.0031433105469, 453.90423583984375, 494.6220397949219], "score": 0.26093780994415283}, {"image_id": 3, "category_id": 3, "bbox": [719.1403198242188, 483.4475402832031, 462.17132568359375, 596.5524597167969], "score": 0.24497590959072113}, {"image_id": 3, "category_id": 3, "bbox": [646.70361328125, 251.95274353027344, 415.8240966796875, 624.9683990478516], "score": 0.23766449093818665}, {"image_id": 3, "category_id": 3, "bbox": [859.6172485351562, 306.58062744140625, 490.55670166015625, 691.5777587890625], "score": 0.18282315135002136}, {"image_id": 3, "category_id": 3, "bbox": [973.3248291015625, 148.60418701171875, 370.6070556640625, 659.4913940429688], "score": 0.1543910801410675}, {"image_id": 3, "category_id": 3, "bbox": [871.9336547851562, 16.370933532714844, 712.7164916992188, 969.6228408813477], "score": 0.11219005286693573}, {"image_id": 3, "category_id": 3, "bbox": [476.3395690917969, 319.9029235839844, 508.9530334472656, 666.6101379394531], "score": 0.09965826570987701}, {"image_id": 3, "category_id": 3, "bbox": [332.31024169921875, 242.50025939941406, 978.8953247070312, 837.4997406005859], "score": 0.09948483109474182}, {"image_id": 3, "category_id": 3, "bbox": [204.70193481445312, 18.419668197631836, 384.5010070800781, 565.8228244781494], "score": 0.09832261502742767}, {"image_id": 3, "category_id": 3, "bbox": [608.7565307617188, 27.79786491394043, 829.2695922851562, 949.5771350860596], "score": 0.095442995429039}, {"image_id": 3, "category_id": 3, "bbox": [1055.3665771484375, 160.5343780517578, 400.16845703125, 651.6037445068359], "score": 0.09204820543527603}, {"image_id": 3, "category_id": 3, "bbox": [855.8370361328125, 139.90440368652344, 376.65869140625, 656.0648345947266], "score": 0.0878898948431015}, {"image_id": 3, "category_id": 3, "bbox": [859.98486328125, 16.704797744750977, 351.4844970703125, 561.8995723724365], "score": 0.07988101989030838}, {"image_id": 3, "category_id": 3, "bbox": [1199.2998046875, 2.7812232971191406, 720.7001953125, 969.9078025817871], "score": 0.0781468003988266}, {"image_id": 3, "category_id": 3, "bbox": [564.3340454101562, 339.4085388183594, 915.3651733398438, 740.5914611816406], "score": 0.07565648853778839}, {"image_id": 3, "category_id": 3, "bbox": [764.0009765625, 57.73131561279297, 340.64453125, 590.4665603637695], "score": 0.07268774509429932}, {"image_id": 3, "category_id": 3, "bbox": [924.7733154296875, 0.0, 388.41015625, 509.7742919921875], "score": 0.07242055982351303}, {"image_id": 3, "category_id": 3, "bbox": [0.0, 257.3575439453125, 948.650634765625, 822.6424560546875], "score": 0.06593562662601471}, {"image_id": 3, "category_id": 3, "bbox": [195.42141723632812, 9.756786346435547, 910.6348571777344, 986.5281867980957], "score": 0.061988331377506256}, {"image_id": 3, "category_id": 3, "bbox": [215.5074005126953, 122.37738037109375, 376.42942810058594, 644.8145751953125], "score": 0.06160365417599678}, {"image_id": 3, "category_id": 3, "bbox": [1090.6600341796875, 311.5863342285156, 391.56884765625, 666.8712463378906], "score": 0.06122758239507675}, {"image_id": 3, "category_id": 3, "bbox": [649.1865844726562, 100.15503692626953, 355.96728515625, 612.296989440918], "score": 0.05818498134613037}, {"image_id": 3, "category_id": 3, "bbox": [985.7695922851562, 448.445556640625, 428.54547119140625, 623.4085693359375], "score": 0.057218585163354874}, {"image_id": 3, "category_id": 3, "bbox": [958.5966796875, 117.79381561279297, 931.6561279296875, 943.087043762207], 
"score": 0.05542192608118057}, {"image_id": 3, "category_id": 3, "bbox": [882.8435668945312, 563.7335815429688, 464.60565185546875, 516.2664184570312], "score": 0.0543479286134243}, {"image_id": 3, "category_id": 3, "bbox": [1097.4705810546875, 0.0, 350.6376953125, 546.5389404296875], "score": 0.054037921130657196}, {"image_id": 3, "category_id": 3, "bbox": [1047.592529296875, 37.14024353027344, 315.6435546875, 666.1573028564453], "score": 0.05339280143380165}, {"image_id": 3, "category_id": 3, "bbox": [1593.1241455078125, 0.0, 326.8758544921875, 171.67945861816406], "score": 0.05163172632455826}, {"image_id": 3, "category_id": 3, "bbox": [360.89776611328125, 0.0, 936.7034301757812, 866.6369018554688], "score": 0.05028960108757019}, {"image_id": 3, "category_id": 3, "bbox": [384.5453186035156, 0.0, 470.0379333496094, 621.4329833984375], "score": 0.050009556114673615}, {"image_id": 3, "category_id": 3, "bbox": [1747.4293212890625, 0.0, 172.5706787109375, 190.30010986328125], "score": 0.048559874296188354}, {"image_id": 3, "category_id": 3, "bbox": [1337.5218505859375, 240.56752014160156, 582.4781494140625, 839.4324798583984], "score": 0.04853980615735054}, {"image_id": 3, "category_id": 3, "bbox": [1664.9505615234375, 40.913291931152344, 255.0494384765625, 235.99219512939453], "score": 0.04805126413702965}, {"image_id": 3, "category_id": 3, "bbox": [978.0254516601562, 0.0, 935.7778930664062, 636.2633666992188], "score": 0.04743540287017822}, {"image_id": 3, "category_id": 3, "bbox": [295.2467041015625, 0.0, 453.291015625, 541.2123413085938], "score": 0.04569484293460846}, {"image_id": 3, "category_id": 3, "bbox": [525.8992309570312, 200.7103271484375, 428.49810791015625, 610.9923095703125], "score": 0.04564596712589264}, {"image_id": 3, "category_id": 3, "bbox": [993.9578857421875, 0.0, 415.4808349609375, 408.9975280761719], "score": 0.04514865204691887}, {"image_id": 3, "category_id": 3, "bbox": [119.15414428710938, 458.3590393066406, 1029.3442687988281, 621.6409606933594], "score": 0.04494065046310425}, {"image_id": 3, "category_id": 3, "bbox": [513.8644409179688, 51.16743087768555, 417.33184814453125, 591.2509651184082], "score": 0.04119905084371567}, {"image_id": 3, "category_id": 3, "bbox": [761.5701904296875, 451.7342834472656, 912.9019775390625, 628.2657165527344], "score": 0.039075352251529694}, {"image_id": 4, "category_id": 1, "bbox": [397.45013427734375, 331.7752380371094, 320.796630859375, 372.2411193847656], "score": 0.28497105836868286}, {"image_id": 4, "category_id": 1, "bbox": [376.37060546875, 416.31146240234375, 284.68511962890625, 303.68853759765625], "score": 0.1885378509759903}, {"image_id": 4, "category_id": 1, "bbox": [24.439422607421875, 6.773895263671875, 572.1087951660156, 670.6602478027344], "score": 0.17092974483966827}, {"image_id": 4, "category_id": 1, "bbox": [445.58050537109375, 414.1954650878906, 334.97515869140625, 305.8045349121094], "score": 0.13983874022960663}, {"image_id": 4, "category_id": 1, "bbox": [232.28192138671875, 0.0, 666.2379760742188, 668.6141967773438], "score": 0.10380109399557114}, {"image_id": 4, "category_id": 1, "bbox": [176.77444458007812, 43.659324645996094, 260.2023620605469, 375.76309967041016], "score": 0.1037050411105156}, {"image_id": 4, "category_id": 1, "bbox": [531.2096557617188, 14.432842254638672, 575.9177856445312, 650.7794380187988], "score": 0.08547429740428925}, {"image_id": 4, "category_id": 1, "bbox": [910.6246948242188, 171.77479553222656, 192.69866943359375, 381.67869567871094], "score": 0.07813899219036102}, 
{"image_id": 4, "category_id": 1, "bbox": [115.0670166015625, 70.14557647705078, 609.45947265625, 649.8544235229492], "score": 0.07679840177297592}, {"image_id": 4, "category_id": 1, "bbox": [0.0, 10.541553497314453, 382.60296630859375, 651.5828971862793], "score": 0.06780929118394852}, {"image_id": 4, "category_id": 1, "bbox": [520.8143310546875, 272.1015930175781, 276.12982177734375, 430.3283386230469], "score": 0.06716139614582062}, {"image_id": 4, "category_id": 1, "bbox": [390.17010498046875, 73.32344055175781, 626.562744140625, 646.6765594482422], "score": 0.06582247465848923}, {"image_id": 4, "category_id": 1, "bbox": [652.3070678710938, 0.0, 570.4338989257812, 593.7958984375], "score": 0.05452652648091316}, {"image_id": 4, "category_id": 1, "bbox": [447.46588134765625, 241.58740234375, 296.29461669921875, 425.5943603515625], "score": 0.04758226126432419}, {"image_id": 4, "category_id": 1, "bbox": [894.305908203125, 61.47008514404297, 223.75146484375, 431.90503692626953], "score": 0.04694928228855133}, {"image_id": 4, "category_id": 1, "bbox": [799.7202758789062, 16.609336853027344, 480.27972412109375, 655.8966445922852], "score": 0.046636469662189484}, {"image_id": 4, "category_id": 1, "bbox": [0.0, 164.54600524902344, 495.96441650390625, 555.4539947509766], "score": 0.04421522095799446}, {"image_id": 4, "category_id": 1, "bbox": [129.14520263671875, 0.0, 268.1044006347656, 364.01470947265625], "score": 0.043574798852205276}, {"image_id": 4, "category_id": 1, "bbox": [581.5690307617188, 340.9226379394531, 304.24249267578125, 379.0773620605469], "score": 0.04293534904718399}, {"image_id": 4, "category_id": 1, "bbox": [614.4529418945312, 156.3250732421875, 665.5470581054688, 563.6749267578125], "score": 0.04131922125816345}, {"image_id": 4, "category_id": 1, "bbox": [891.5611572265625, 165.38406372070312, 388.4388427734375, 554.6159362792969], "score": 0.040074389427900314}, {"image_id": 4, "category_id": 1, "bbox": [0.0, 313.2903747558594, 649.7593994140625, 406.7096252441406], "score": 0.03984108939766884}, {"image_id": 4, "category_id": 1, "bbox": [362.428955078125, 243.91925048828125, 273.8861083984375, 419.33087158203125], "score": 0.03846558555960655}, {"image_id": 4, "category_id": 1, "bbox": [836.0636596679688, 168.64820861816406, 236.83526611328125, 388.6175994873047], "score": 0.038367822766304016}, {"image_id": 4, "category_id": 1, "bbox": [359.7510986328125, 310.008544921875, 636.627197265625, 409.991455078125], "score": 0.03755444660782814}, {"image_id": 4, "category_id": 1, "bbox": [0.0, 0.0, 625.0577392578125, 418.72918701171875], "score": 0.035537876188755035}, {"image_id": 4, "category_id": 1, "bbox": [886.0164184570312, 227.14430236816406, 247.46343994140625, 432.3501434326172], "score": 0.03456771746277809}, {"image_id": 4, "category_id": 1, "bbox": [197.64126586914062, 0.0, 252.26589965820312, 289.5471496582031], "score": 0.03391050547361374}, {"image_id": 4, "category_id": 1, "bbox": [4.626350402832031, 0.0, 196.50164031982422, 316.6710510253906], "score": 0.03320614993572235}, {"image_id": 4, "category_id": 1, "bbox": [926.2615356445312, 0.0, 249.37567138671875, 413.1147766113281], "score": 0.032749176025390625}, {"image_id": 4, "category_id": 2, "bbox": [377.0150146484375, 441.8638000488281, 356.01873779296875, 278.1361999511719], "score": 0.1847490817308426}, {"image_id": 4, "category_id": 2, "bbox": [518.9544677734375, 501.8125305175781, 276.9503173828125, 218.18746948242188], "score": 0.08543698489665985}, {"image_id": 4, "category_id": 2, "bbox": 
[576.572509765625, 471.4356994628906, 318.5218505859375, 248.56430053710938], "score": 0.0722944587469101}, {"image_id": 4, "category_id": 2, "bbox": [653.3884887695312, 433.4109802246094, 312.9586181640625, 286.5890197753906], "score": 0.06941265612840652}, {"image_id": 4, "category_id": 2, "bbox": [376.3510437011719, 274.1206970214844, 269.1332092285156, 424.9900207519531], "score": 0.05690854415297508}, {"image_id": 4, "category_id": 2, "bbox": [589.2485961914062, 323.2144775390625, 284.6270751953125, 393.0078125], "score": 0.056753985583782196}, {"image_id": 4, "category_id": 2, "bbox": [733.3068237304688, 357.3998718261719, 263.5528564453125, 362.6001281738281], "score": 0.05355953797698021}, {"image_id": 4, "category_id": 2, "bbox": [207.04470825195312, 441.85577392578125, 307.4269104003906, 278.14422607421875], "score": 0.049918774515390396}, {"image_id": 4, "category_id": 2, "bbox": [170.00930786132812, 0.0, 265.4787292480469, 365.0776062011719], "score": 0.04930106922984123}, {"image_id": 4, "category_id": 2, "bbox": [712.827880859375, 483.6827087402344, 304.56011962890625, 236.31729125976562], "score": 0.049154140055179596}, {"image_id": 4, "category_id": 2, "bbox": [282.4036865234375, 456.0691223144531, 305.145263671875, 263.9308776855469], "score": 0.04835902899503708}, {"image_id": 4, "category_id": 2, "bbox": [802.904296875, 441.2373046875, 270.303466796875, 278.7626953125], "score": 0.04729650914669037}, {"image_id": 4, "category_id": 2, "bbox": [281.1540222167969, 285.8037414550781, 299.9507751464844, 398.7033386230469], "score": 0.0444076806306839}, {"image_id": 4, "category_id": 2, "bbox": [24.439422607421875, 6.773895263671875, 572.1087951660156, 670.6602478027344], "score": 0.04322801157832146}, {"image_id": 4, "category_id": 2, "bbox": [520.8143310546875, 272.1015930175781, 276.12982177734375, 430.3283386230469], "score": 0.04288966581225395}, {"image_id": 4, "category_id": 2, "bbox": [858.801025390625, 395.4917297363281, 295.848876953125, 324.5082702636719], "score": 0.042575303465127945}, {"image_id": 4, "category_id": 2, "bbox": [115.0670166015625, 70.14557647705078, 609.45947265625, 649.8544235229492], "score": 0.04233325272798538}, {"image_id": 4, "category_id": 2, "bbox": [447.46588134765625, 241.58740234375, 296.29461669921875, 425.5943603515625], "score": 0.04075048863887787}, {"image_id": 4, "category_id": 2, "bbox": [666.6970825195312, 283.4668884277344, 275.7899169921875, 388.2985534667969], "score": 0.0395934134721756}, {"image_id": 4, "category_id": 2, "bbox": [148.91253662109375, 325.29266357421875, 291.58331298828125, 390.6903076171875], "score": 0.03882993012666702}, {"image_id": 4, "category_id": 2, "bbox": [908.2291870117188, 118.75697326660156, 193.99029541015625, 380.1521453857422], "score": 0.038248952478170395}, {"image_id": 4, "category_id": 2, "bbox": [487.989013671875, 0.0, 283.19122314453125, 272.9339599609375], "score": 0.03801370784640312}, {"image_id": 4, "category_id": 2, "bbox": [143.1012420654297, 176.53311157226562, 306.8593292236328, 407.0924987792969], "score": 0.037674326449632645}, {"image_id": 4, "category_id": 2, "bbox": [738.9717407226562, 249.3201904296875, 260.5611572265625, 391.15972900390625], "score": 0.03731111064553261}, {"image_id": 4, "category_id": 2, "bbox": [203.1391143798828, 244.4378662109375, 314.8472137451172, 416.3216552734375], "score": 0.037274062633514404}, {"image_id": 4, "category_id": 2, "bbox": [190.6995086669922, 69.39990997314453, 276.6146697998047, 384.0929183959961], "score": 0.036336854100227356}, 
{"image_id": 4, "category_id": 2, "bbox": [387.3960876464844, 148.62301635742188, 624.8207092285156, 571.3769836425781], "score": 0.034882862120866776}, {"image_id": 4, "category_id": 2, "bbox": [91.60383605957031, 211.22630310058594, 283.0103302001953, 412.54335021972656], "score": 0.034308817237615585}, {"image_id": 4, "category_id": 2, "bbox": [808.9228515625, 278.013427734375, 252.779541015625, 402.91943359375], "score": 0.03400701656937599}, {"image_id": 4, "category_id": 2, "bbox": [914.099609375, 172.1251220703125, 239.283935546875, 383.98223876953125], "score": 0.03395265340805054}, {"image_id": 4, "category_id": 2, "bbox": [831.378173828125, 106.09955596923828, 241.36767578125, 396.61702728271484], "score": 0.03381285071372986}, {"image_id": 4, "category_id": 2, "bbox": [337.2003173828125, 0.0, 290.8519287109375, 277.9066467285156], "score": 0.0337570495903492}, {"image_id": 4, "category_id": 2, "bbox": [117.33758544921875, 0.0, 643.1915283203125, 368.38494873046875], "score": 0.03323587775230408}, {"image_id": 4, "category_id": 2, "bbox": [277.93719482421875, 178.6481170654297, 296.04034423828125, 403.57460021972656], "score": 0.03302469104528427}, {"image_id": 4, "category_id": 2, "bbox": [201.95736694335938, 0.0, 260.64837646484375, 247.5636444091797], "score": 0.032673660665750504}, {"image_id": 4, "category_id": 3, "bbox": [393.51251220703125, 375.6250305175781, 326.43798828125, 344.3749694824219], "score": 0.36567801237106323}, {"image_id": 4, "category_id": 3, "bbox": [452.724609375, 482.6803283691406, 329.5482177734375, 237.31967163085938], "score": 0.2727847695350647}, {"image_id": 4, "category_id": 3, "bbox": [518.72509765625, 413.3856506347656, 307.924560546875, 306.6143493652344], "score": 0.1634632647037506}, {"image_id": 4, "category_id": 3, "bbox": [581.5690307617188, 340.9226379394531, 304.24249267578125, 379.0773620605469], "score": 0.12450331449508667}, {"image_id": 4, "category_id": 3, "bbox": [430.58770751953125, 242.2676544189453, 351.80267333984375, 477.7323455810547], "score": 0.12397114932537079}, {"image_id": 4, "category_id": 3, "bbox": [357.0220947265625, 483.7095642089844, 301.26727294921875, 236.29043579101562], "score": 0.12064994126558304}, {"image_id": 4, "category_id": 3, "bbox": [910.6246948242188, 171.77479553222656, 192.69866943359375, 381.67869567871094], "score": 0.12026355415582657}, {"image_id": 4, "category_id": 3, "bbox": [361.70684814453125, 233.12232971191406, 656.2803344726562, 486.87767028808594], "score": 0.08650375157594681}, {"image_id": 4, "category_id": 3, "bbox": [176.21185302734375, 13.021408081054688, 255.90704345703125, 375.21839904785156], "score": 0.08559748530387878}, {"image_id": 4, "category_id": 3, "bbox": [894.305908203125, 61.47008514404297, 223.75146484375, 431.90503692626953], "score": 0.08531910926103592}, {"image_id": 4, "category_id": 3, "bbox": [243.307861328125, 148.9058380126953, 647.037109375, 571.0941619873047], "score": 0.07947652786970139}, {"image_id": 4, "category_id": 3, "bbox": [658.0391845703125, 360.10845947265625, 291.13458251953125, 359.89154052734375], "score": 0.07744626700878143}, {"image_id": 4, "category_id": 3, "bbox": [20.744720458984375, 77.56388854980469, 574.2724914550781, 642.4361114501953], "score": 0.07439250499010086}, {"image_id": 4, "category_id": 3, "bbox": [84.84759521484375, 237.20913696289062, 695.0452270507812, 482.7908630371094], "score": 0.06842564791440964}, {"image_id": 4, "category_id": 3, "bbox": [836.0636596679688, 168.64820861816406, 236.83526611328125, 
388.6175994873047], "score": 0.06791317462921143}, {"image_id": 4, "category_id": 3, "bbox": [376.3510437011719, 274.1206970214844, 269.1332092285156, 424.9900207519531], "score": 0.06650400906801224}, {"image_id": 4, "category_id": 3, "bbox": [578.517578125, 516.388916015625, 324.10345458984375, 203.611083984375], "score": 0.06137565150856972}, {"image_id": 4, "category_id": 3, "bbox": [390.4422607421875, 2.4640274047851562, 634.010009765625, 661.6906967163086], "score": 0.056865230202674866}, {"image_id": 4, "category_id": 3, "bbox": [113.1173095703125, 0.0, 617.25439453125, 679.6934814453125], "score": 0.056435950100421906}, {"image_id": 4, "category_id": 3, "bbox": [729.5507202148438, 397.22991943359375, 269.9395751953125, 322.77008056640625], "score": 0.055190324783325195}, {"image_id": 4, "category_id": 3, "bbox": [886.0164184570312, 227.14430236816406, 247.46343994140625, 432.3501434326172], "score": 0.05418068915605545}, {"image_id": 4, "category_id": 3, "bbox": [504.15887451171875, 152.8347625732422, 638.1753540039062, 567.1652374267578], "score": 0.051819853484630585}, {"image_id": 4, "category_id": 3, "bbox": [630.5555419921875, 10.142784118652344, 610.983154296875, 652.5460586547852], "score": 0.04585684463381767}, {"image_id": 4, "category_id": 3, "bbox": [261.17547607421875, 323.62603759765625, 333.8204345703125, 396.37396240234375], "score": 0.04497351124882698}, {"image_id": 4, "category_id": 3, "bbox": [810.75634765625, 317.15380859375, 243.022216796875, 399.53863525390625], "score": 0.04169793799519539}, {"image_id": 4, "category_id": 3, "bbox": [263.6995849609375, 387.8171081542969, 624.1729736328125, 332.1828918457031], "score": 0.04066556319594383}, {"image_id": 4, "category_id": 3, "bbox": [743.3009033203125, 286.3894958496094, 252.87646484375, 383.6214294433594], "score": 0.04034033417701721}, {"image_id": 4, "category_id": 3, "bbox": [0.0, 83.92012786865234, 369.17962646484375, 636.0798721313477], "score": 0.03898531571030617}, {"image_id": 4, "category_id": 3, "bbox": [802.7232666015625, 166.0010528564453, 477.2767333984375, 553.9989471435547], "score": 0.03709874674677849}, {"image_id": 4, "category_id": 3, "bbox": [233.95718383789062, 0.0, 683.8572692871094, 570.8981323242188], "score": 0.03680828958749771}, {"image_id": 4, "category_id": 3, "bbox": [585.8990478515625, 305.35723876953125, 694.1009521484375, 414.64276123046875], "score": 0.03561432659626007}, {"image_id": 4, "category_id": 3, "bbox": [322.98309326171875, 0.0, 313.616943359375, 405.09906005859375], "score": 0.03513503819704056}, {"image_id": 4, "category_id": 3, "bbox": [113.47465515136719, 45.97132873535156, 278.7163543701172, 427.8950958251953], "score": 0.03398764878511429}, {"image_id": 4, "category_id": 3, "bbox": [261.68212890625, 23.628725051879883, 281.277587890625, 354.17440605163574], "score": 0.03351886570453644}, {"image_id": 4, "category_id": 3, "bbox": [863.9315795898438, 344.4158020019531, 274.74139404296875, 375.5841979980469], "score": 0.03298121690750122}, {"image_id": 5, "category_id": 1, "bbox": [956.1685180664062, 260.16558837890625, 412.75115966796875, 536.9993286132812], "score": 0.30046936869621277}, {"image_id": 5, "category_id": 1, "bbox": [1621.6953125, 0.0, 296.770751953125, 626.8226318359375], "score": 0.14502152800559998}, {"image_id": 5, "category_id": 1, "bbox": [353.158447265625, 0.0, 987.421875, 1010.9869995117188], "score": 0.13432577252388}, {"image_id": 5, "category_id": 1, "bbox": [0.0, 3.7047958374023438, 966.9107666015625, 997.7532119750977], "score": 
0.12530557811260223}]

Note: the block above is an excerpt of the raw COCO-format prediction output (e.g. the `bbox.json` written during COCO-style evaluation of the picodet detector); the remaining thousands of records are omitted here. Each record describes one candidate detection: `image_id` indexes the evaluated image, `category_id` indexes a class (the id-to-name mapping is fixed by the order of `label_list.txt` used in the VOC-to-COCO conversion), `bbox` is `[x, y, width, height]` in pixels with `(x, y)` the top-left corner, and `score` is the detection confidence. Every image carries a long tail of low-confidence candidate boxes (scores around 0.03 to 0.1), which is why a score threshold is applied before the boxes are used downstream.
494.7301025390625, 551.6879425048828], "score": 0.10609588027000427}, {"image_id": 10, "category_id": 1, "bbox": [393.3828430175781, 85.31024169921875, 547.6279602050781, 634.6897583007812], "score": 0.0894695594906807}, {"image_id": 10, "category_id": 1, "bbox": [169.45785522460938, 0.0, 267.36767578125, 365.1966552734375], "score": 0.08634939044713974}, {"image_id": 10, "category_id": 1, "bbox": [289.78155517578125, 312.5377502441406, 535.6991577148438, 407.4622497558594], "score": 0.08382374793291092}, {"image_id": 10, "category_id": 1, "bbox": [201.1585693359375, 288.68701171875, 316.8004150390625, 431.31298828125], "score": 0.07856669276952744}, {"image_id": 10, "category_id": 1, "bbox": [575.1112060546875, 215.41513061523438, 221.79736328125, 409.7818908691406], "score": 0.071946881711483}, {"image_id": 10, "category_id": 1, "bbox": [1056.5418701171875, 0.0, 223.4581298828125, 417.36822509765625], "score": 0.06853409856557846}, {"image_id": 10, "category_id": 1, "bbox": [639.8323974609375, 0.0, 617.7271728515625, 577.2957763671875], "score": 0.06736540049314499}, {"image_id": 10, "category_id": 1, "bbox": [568.4288330078125, 209.0952606201172, 338.16375732421875, 482.7400665283203], "score": 0.061245813965797424}, {"image_id": 10, "category_id": 1, "bbox": [898.6610717773438, 153.36146545410156, 200.79644775390625, 370.7881317138672], "score": 0.06032082810997963}, {"image_id": 10, "category_id": 1, "bbox": [492.207275390625, 297.7644958496094, 639.2918701171875, 422.2355041503906], "score": 0.0595410093665123}, {"image_id": 10, "category_id": 1, "bbox": [160.17434692382812, 88.34381103515625, 545.2200622558594, 631.6561889648438], "score": 0.05829990282654762}, {"image_id": 10, "category_id": 1, "bbox": [868.7470703125, 224.55393981933594, 266.2698974609375, 436.4813995361328], "score": 0.05756758153438568}, {"image_id": 10, "category_id": 1, "bbox": [6.046333312988281, 0.0, 195.6231460571289, 358.6008605957031], "score": 0.049494966864585876}, {"image_id": 10, "category_id": 1, "bbox": [811.5802001953125, 189.00022888183594, 272.9951171875, 430.0568389892578], "score": 0.048183657228946686}, {"image_id": 10, "category_id": 1, "bbox": [0.0, 319.09356689453125, 649.17822265625, 400.90643310546875], "score": 0.04791088029742241}, {"image_id": 10, "category_id": 1, "bbox": [917.3824462890625, 169.743896484375, 249.3902587890625, 403.61883544921875], "score": 0.0450928658246994}, {"image_id": 10, "category_id": 1, "bbox": [605.0147094726562, 0.0, 293.39044189453125, 411.45452880859375], "score": 0.04176070913672447}, {"image_id": 10, "category_id": 1, "bbox": [0.0, 8.131771087646484, 362.3641357421875, 651.5839881896973], "score": 0.04174632579088211}, {"image_id": 10, "category_id": 1, "bbox": [810.03515625, 303.3697204589844, 257.748779296875, 416.6302795410156], "score": 0.04055804759263992}, {"image_id": 10, "category_id": 1, "bbox": [461.21405029296875, 450.72076416015625, 291.03973388671875, 269.27923583984375], "score": 0.038975659757852554}, {"image_id": 10, "category_id": 1, "bbox": [4.3099365234375, 0.0, 630.94384765625, 355.42138671875], "score": 0.03738445043563843}, {"image_id": 10, "category_id": 1, "bbox": [1047.0552978515625, 153.89389038085938, 231.66552734375, 392.3066101074219], "score": 0.03562911972403526}, {"image_id": 10, "category_id": 1, "bbox": [883.27587890625, 21.453380584716797, 396.72412109375, 638.0078010559082], "score": 0.035418808460235596}, {"image_id": 10, "category_id": 1, "bbox": [391.38861083984375, 368.7728271484375, 600.3511962890625, 
351.2271728515625], "score": 0.03505528345704079}, {"image_id": 10, "category_id": 1, "bbox": [921.7478637695312, 24.219051361083984, 260.03045654296875, 438.5713233947754], "score": 0.03447166830301285}, {"image_id": 10, "category_id": 1, "bbox": [829.838623046875, 113.33003234863281, 249.63818359375, 384.6968231201172], "score": 0.034223318099975586}, {"image_id": 10, "category_id": 1, "bbox": [539.3633422851562, 0.0, 621.9898071289062, 350.66632080078125], "score": 0.0341445729136467}, {"image_id": 10, "category_id": 1, "bbox": [721.650634765625, 0.0, 298.64801025390625, 409.31451416015625], "score": 0.03344011679291725}, {"image_id": 10, "category_id": 2, "bbox": [579.8006591796875, 337.4202575683594, 238.719970703125, 362.9407653808594], "score": 0.1350891888141632}, {"image_id": 10, "category_id": 2, "bbox": [373.5992431640625, 324.5570983886719, 304.6561279296875, 390.6744079589844], "score": 0.06817129254341125}, {"image_id": 10, "category_id": 2, "bbox": [575.1112060546875, 215.41513061523438, 221.79736328125, 409.7818908691406], "score": 0.06624499708414078}, {"image_id": 10, "category_id": 2, "bbox": [173.09429931640625, 28.674659729003906, 271.95831298828125, 384.85286712646484], "score": 0.049130119383335114}, {"image_id": 10, "category_id": 2, "bbox": [662.710693359375, 185.15591430664062, 273.07354736328125, 387.5263366699219], "score": 0.042818404734134674}, {"image_id": 10, "category_id": 2, "bbox": [467.5552673339844, 297.2355651855469, 325.1661071777344, 422.7644348144531], "score": 0.042173612862825394}, {"image_id": 10, "category_id": 2, "bbox": [826.733642578125, 171.38670349121094, 260.137939453125, 386.18067932128906], "score": 0.04186572879552841}, {"image_id": 10, "category_id": 2, "bbox": [852.854736328125, 428.51239013671875, 317.9669189453125, 291.48760986328125], "score": 0.041632235050201416}, {"image_id": 10, "category_id": 2, "bbox": [746.851806640625, 208.27972412109375, 274.92144775390625, 401.13555908203125], "score": 0.04153254255652428}, {"image_id": 10, "category_id": 2, "bbox": [198.08523559570312, 85.95380401611328, 287.1762390136719, 417.7829818725586], "score": 0.03966912627220154}, {"image_id": 10, "category_id": 2, "bbox": [471.72674560546875, 178.18275451660156, 296.00372314453125, 402.6244354248047], "score": 0.03835233300924301}, {"image_id": 10, "category_id": 2, "bbox": [400.72711181640625, 443.1658020019531, 298.59002685546875, 276.8341979980469], "score": 0.0380789078772068}, {"image_id": 10, "category_id": 2, "bbox": [543.4129638671875, 155.33338928222656, 265.2623291015625, 381.1384735107422], "score": 0.037433043122291565}, {"image_id": 10, "category_id": 2, "bbox": [732.1581420898438, 415.4356384277344, 274.5880126953125, 304.5643615722656], "score": 0.03662361949682236}, {"image_id": 10, "category_id": 2, "bbox": [269.05145263671875, 100.9578628540039, 306.67987060546875, 398.42818450927734], "score": 0.036257557570934296}, {"image_id": 10, "category_id": 2, "bbox": [165.4124755859375, 4.162925720214844, 534.1226196289062, 678.2037734985352], "score": 0.03536653518676758}, {"image_id": 10, "category_id": 2, "bbox": [393.3828430175781, 85.31024169921875, 547.6279602050781, 634.6897583007812], "score": 0.0348159484565258}, {"image_id": 10, "category_id": 2, "bbox": [293.46014404296875, 308.5218200683594, 309.1217041015625, 392.1636657714844], "score": 0.034752726554870605}, {"image_id": 10, "category_id": 2, "bbox": [896.1513061523438, 117.64118957519531, 198.44207763671875, 382.98268127441406], "score": 0.0346565879881382}, 
{"image_id": 10, "category_id": 2, "bbox": [502.25567626953125, 153.09417724609375, 604.8479614257812, 566.9058227539062], "score": 0.03422350808978081}, {"image_id": 10, "category_id": 2, "bbox": [115.18372344970703, 100.02859497070312, 268.5941696166992, 395.8730163574219], "score": 0.03333759307861328}, {"image_id": 10, "category_id": 2, "bbox": [736.26513671875, 92.42265319824219, 285.78564453125, 415.95558166503906], "score": 0.03307328745722771}, {"image_id": 10, "category_id": 2, "bbox": [818.6984252929688, 63.394630432128906, 251.44183349609375, 421.63008880615234], "score": 0.032618552446365356}, {"image_id": 10, "category_id": 2, "bbox": [226.78399658203125, 428.9842224121094, 270.03009033203125, 291.0157775878906], "score": 0.03260747715830803}, {"image_id": 10, "category_id": 2, "bbox": [812.3998413085938, 265.87847900390625, 265.65093994140625, 425.08294677734375], "score": 0.032001152634620667}, {"image_id": 10, "category_id": 2, "bbox": [880.6431274414062, 196.0501251220703, 244.78240966796875, 415.7500457763672], "score": 0.0319473035633564}, {"image_id": 10, "category_id": 3, "bbox": [578.3311767578125, 349.2962646484375, 237.2122802734375, 370.7037353515625], "score": 0.494576096534729}, {"image_id": 10, "category_id": 3, "bbox": [548.9738159179688, 218.44863891601562, 286.26715087890625, 470.0356750488281], "score": 0.28212037682533264}, {"image_id": 10, "category_id": 3, "bbox": [373.5992431640625, 324.5570983886719, 304.6561279296875, 390.6744079589844], "score": 0.27067214250564575}, {"image_id": 10, "category_id": 3, "bbox": [467.5552673339844, 297.2355651855469, 325.1661071777344, 422.7644348144531], "score": 0.13249295949935913}, {"image_id": 10, "category_id": 3, "bbox": [896.1513061523438, 117.64118957519531, 198.44207763671875, 382.98268127441406], "score": 0.10880740731954575}, {"image_id": 10, "category_id": 3, "bbox": [393.3828430175781, 85.31024169921875, 547.6279602050781, 634.6897583007812], "score": 0.10310947895050049}, {"image_id": 10, "category_id": 3, "bbox": [637.6883544921875, 168.18508911132812, 612.2178955078125, 551.8149108886719], "score": 0.10308360308408737}, {"image_id": 10, "category_id": 3, "bbox": [174.1712188720703, 6.7405242919921875, 266.13523864746094, 378.88536071777344], "score": 0.09947553277015686}, {"image_id": 10, "category_id": 3, "bbox": [296.7677001953125, 170.230712890625, 519.7216796875, 549.769287109375], "score": 0.09129945188760757}, {"image_id": 10, "category_id": 3, "bbox": [826.733642578125, 171.38670349121094, 260.137939453125, 386.18067932128906], "score": 0.08395978063344955}, {"image_id": 10, "category_id": 3, "bbox": [485.71160888671875, 224.36862182617188, 645.4370727539062, 495.6313781738281], "score": 0.07705561816692352}, {"image_id": 10, "category_id": 3, "bbox": [916.2285766601562, 64.34891510009766, 252.43377685546875, 430.0176315307617], "score": 0.07579183578491211}, {"image_id": 10, "category_id": 3, "bbox": [643.5166625976562, 229.2483367919922, 292.51153564453125, 445.35829162597656], "score": 0.07248637080192566}, {"image_id": 10, "category_id": 3, "bbox": [594.7740478515625, 179.73355102539062, 256.73358154296875, 402.4815368652344], "score": 0.07112925499677658}, {"image_id": 10, "category_id": 3, "bbox": [868.7470703125, 224.55393981933594, 266.2698974609375, 436.4813995361328], "score": 0.06753429025411606}, {"image_id": 10, "category_id": 3, "bbox": [781.571533203125, 21.847721099853516, 498.428466796875, 641.6221885681152], "score": 0.06662929803133011}, {"image_id": 10, "category_id": 3, 
"bbox": [165.4124755859375, 4.162925720214844, 534.1226196289062, 678.2037734985352], "score": 0.06633540987968445}, {"image_id": 10, "category_id": 3, "bbox": [145.40203857421875, 246.52053833007812, 599.8041381835938, 473.4794616699219], "score": 0.0638943538069725}, {"image_id": 10, "category_id": 3, "bbox": [531.772705078125, 3.0875701904296875, 594.0716552734375, 671.8444976806641], "score": 0.06067802011966705}, {"image_id": 10, "category_id": 3, "bbox": [812.3998413085938, 265.87847900390625, 265.65093994140625, 425.08294677734375], "score": 0.05827464535832405}, {"image_id": 10, "category_id": 3, "bbox": [757.3848266601562, 308.89361572265625, 522.6151733398438, 411.10638427734375], "score": 0.05640203505754471}, {"image_id": 10, "category_id": 3, "bbox": [746.851806640625, 208.27972412109375, 274.92144775390625, 401.13555908203125], "score": 0.05361001566052437}, {"image_id": 10, "category_id": 3, "bbox": [278.788330078125, 0.0, 576.9332275390625, 585.3538818359375], "score": 0.053244464099407196}, {"image_id": 10, "category_id": 3, "bbox": [7.01348876953125, 101.964111328125, 616.5953979492188, 618.035888671875], "score": 0.04968293383717537}, {"image_id": 10, "category_id": 3, "bbox": [818.6984252929688, 63.394630432128906, 251.44183349609375, 421.63008880615234], "score": 0.04850213974714279}, {"image_id": 10, "category_id": 3, "bbox": [400.72711181640625, 443.1658020019531, 298.59002685546875, 276.8341979980469], "score": 0.04821787402033806}, {"image_id": 10, "category_id": 3, "bbox": [391.38861083984375, 368.7728271484375, 600.3511962890625, 351.2271728515625], "score": 0.04708481580018997}, {"image_id": 10, "category_id": 3, "bbox": [725.1651611328125, 306.8961486816406, 280.68939208984375, 413.1038513183594], "score": 0.04690851271152496}, {"image_id": 10, "category_id": 3, "bbox": [479.06951904296875, 213.5166015625, 292.7242431640625, 411.220947265625], "score": 0.04583293944597244}, {"image_id": 10, "category_id": 3, "bbox": [1052.1214599609375, 76.08323669433594, 223.7564697265625, 415.9194793701172], "score": 0.04404227435588837}, {"image_id": 10, "category_id": 3, "bbox": [161.62945556640625, 87.08604431152344, 280.5509033203125, 412.05906677246094], "score": 0.04380013793706894}, {"image_id": 10, "category_id": 3, "bbox": [620.9952392578125, 0.0, 659.0047607421875, 415.83154296875], "score": 0.04279925674200058}, {"image_id": 10, "category_id": 3, "bbox": [201.1585693359375, 288.68701171875, 316.8004150390625, 431.31298828125], "score": 0.042384203523397446}, {"image_id": 10, "category_id": 3, "bbox": [0.0, 0.0, 638.43798828125, 413.6981201171875], "score": 0.03762006014585495}, {"image_id": 10, "category_id": 3, "bbox": [736.26513671875, 92.42265319824219, 285.78564453125, 415.95558166503906], "score": 0.03704670071601868}, {"image_id": 10, "category_id": 3, "bbox": [0.0, 89.72232055664062, 354.8092956542969, 630.2776794433594], "score": 0.033437248319387436}, {"image_id": 10, "category_id": 3, "bbox": [298.434326171875, 342.61175537109375, 307.6517333984375, 377.38824462890625], "score": 0.03261891379952431}, {"image_id": 11, "category_id": 1, "bbox": [793.6387329101562, 44.77378463745117, 143.69805908203125, 387.8983039855957], "score": 0.5578745007514954}, {"image_id": 11, "category_id": 1, "bbox": [711.1932983398438, 44.032447814941406, 224.10675048828125, 442.65377044677734], "score": 0.4551839232444763}, {"image_id": 11, "category_id": 1, "bbox": [786.4583129882812, 47.87508773803711, 241.32440185546875, 380.9295082092285], "score": 0.28654342889785767}, 
{"image_id": 11, "category_id": 1, "bbox": [769.1420288085938, 133.29940795898438, 192.7838134765625, 390.6347961425781], "score": 0.1889616698026657}, {"image_id": 11, "category_id": 1, "bbox": [712.239501953125, 174.77545166015625, 186.2564697265625, 370.8365478515625], "score": 0.13099342584609985}, {"image_id": 11, "category_id": 1, "bbox": [0.0, 1.7886085510253906, 620.8497314453125, 664.2385520935059], "score": 0.12405853718519211}, {"image_id": 11, "category_id": 1, "bbox": [250.65359497070312, 5.982845306396484, 636.0046691894531, 666.0437660217285], "score": 0.11166057735681534}, {"image_id": 11, "category_id": 1, "bbox": [763.9708251953125, 0.0, 217.5845947265625, 357.166015625], "score": 0.09830010682344437}, {"image_id": 11, "category_id": 1, "bbox": [381.2557678222656, 91.54817962646484, 599.4175720214844, 628.4518203735352], "score": 0.08873201906681061}, {"image_id": 11, "category_id": 1, "bbox": [619.0498046875, 0.0, 286.61004638671875, 409.74615478515625], "score": 0.07125087827444077}, {"image_id": 11, "category_id": 1, "bbox": [210.31129455566406, 0.0, 236.59617614746094, 353.86468505859375], "score": 0.06919298321008682}, {"image_id": 11, "category_id": 1, "bbox": [812.4999389648438, 159.4417266845703, 467.50006103515625, 560.5582733154297], "score": 0.068832628428936}, {"image_id": 11, "category_id": 1, "bbox": [91.84622192382812, 78.440185546875, 673.4509582519531, 641.559814453125], "score": 0.06656650453805923}, {"image_id": 11, "category_id": 1, "bbox": [786.0416259765625, 196.91603088378906, 269.0340576171875, 415.3472137451172], "score": 0.05511593818664551}, {"image_id": 11, "category_id": 1, "bbox": [749.3510131835938, 236.79835510253906, 228.67022705078125, 402.2859344482422], "score": 0.048651888966560364}, {"image_id": 11, "category_id": 1, "bbox": [0.0, 10.799766540527344, 359.23394775390625, 647.4769058227539], "score": 0.047336239367723465}, {"image_id": 11, "category_id": 1, "bbox": [689.7314453125, 233.7025604248047, 213.201904296875, 407.6909942626953], "score": 0.047131944447755814}, {"image_id": 11, "category_id": 1, "bbox": [841.9393310546875, 25.88986587524414, 271.529296875, 417.8709373474121], "score": 0.04279177263379097}, {"image_id": 11, "category_id": 1, "bbox": [529.36669921875, 162.9161834716797, 547.17919921875, 557.0838165283203], "score": 0.04241880029439926}, {"image_id": 11, "category_id": 1, "bbox": [376.3558044433594, 0.0, 605.0807189941406, 499.2565002441406], "score": 0.04227181151509285}, {"image_id": 11, "category_id": 1, "bbox": [919.2169189453125, 191.57882690429688, 247.582763671875, 435.2640686035156], "score": 0.04164009913802147}, {"image_id": 11, "category_id": 1, "bbox": [186.5535125732422, 63.54481887817383, 256.63575744628906, 386.8127861022949], "score": 0.04006248340010643}, {"image_id": 11, "category_id": 2, "bbox": [706.548095703125, 202.36170959472656, 192.71453857421875, 386.77684020996094], "score": 0.13562677800655365}, {"image_id": 11, "category_id": 2, "bbox": [757.0086669921875, 204.5770721435547, 220.2513427734375, 389.55201721191406], "score": 0.12697230279445648}, {"image_id": 11, "category_id": 2, "bbox": [787.6527099609375, 144.8037567138672, 247.432861328125, 379.32423400878906], "score": 0.10269901156425476}, {"image_id": 11, "category_id": 2, "bbox": [614.524169921875, 239.48757934570312, 276.04376220703125, 400.5533142089844], "score": 0.09959699958562851}, {"image_id": 11, "category_id": 2, "bbox": [548.5499267578125, 207.22146606445312, 288.7354736328125, 391.9248962402344], "score": 
0.08847960084676743}, {"image_id": 11, "category_id": 2, "bbox": [639.822265625, 128.90811157226562, 258.61077880859375, 397.7748107910156], "score": 0.08713990449905396}, {"image_id": 11, "category_id": 2, "bbox": [711.1932983398438, 44.032447814941406, 224.10675048828125, 442.65377044677734], "score": 0.08265963941812515}, {"image_id": 11, "category_id": 2, "bbox": [673.3705444335938, 328.29998779296875, 242.902099609375, 379.56036376953125], "score": 0.07812044769525528}, {"image_id": 11, "category_id": 2, "bbox": [745.1964721679688, 331.04437255859375, 239.24407958984375, 376.7371826171875], "score": 0.07513707876205444}, {"image_id": 11, "category_id": 2, "bbox": [789.6646728515625, 234.23495483398438, 275.8387451171875, 413.3244934082031], "score": 0.07496144622564316}, {"image_id": 11, "category_id": 2, "bbox": [793.6387329101562, 44.77378463745117, 143.69805908203125, 387.8983039855957], "score": 0.07436255365610123}, {"image_id": 11, "category_id": 2, "bbox": [191.84173583984375, 42.33481979370117, 246.46170043945312, 364.1914863586426], "score": 0.0716821476817131}, {"image_id": 11, "category_id": 2, "bbox": [786.4583129882812, 47.87508773803711, 241.32440185546875, 380.9295082092285], "score": 0.06926408410072327}, {"image_id": 11, "category_id": 2, "bbox": [875.1549682617188, 226.61598205566406, 274.02301025390625, 428.79856872558594], "score": 0.06744860857725143}, {"image_id": 11, "category_id": 2, "bbox": [816.7042236328125, 355.6716003417969, 265.404541015625, 364.3283996582031], "score": 0.0643807202577591}, {"image_id": 11, "category_id": 2, "bbox": [862.2566528320312, 112.5934829711914, 255.68756103515625, 386.2497787475586], "score": 0.059149909764528275}, {"image_id": 11, "category_id": 2, "bbox": [859.0827026367188, 435.7423400878906, 292.17779541015625, 284.2576599121094], "score": 0.05637622997164726}, {"image_id": 11, "category_id": 2, "bbox": [620.6749267578125, 16.327417373657227, 290.6788330078125, 425.2966060638428], "score": 0.05152783915400505}, {"image_id": 11, "category_id": 2, "bbox": [934.269775390625, 171.0397491455078, 232.5654296875, 394.7007293701172], "score": 0.05053185299038887}, {"image_id": 11, "category_id": 2, "bbox": [938.1800537109375, 75.1349105834961, 231.962646484375, 392.09989166259766], "score": 0.05013241991400719}, {"image_id": 11, "category_id": 2, "bbox": [250.65359497070312, 5.982845306396484, 636.0046691894531, 666.0437660217285], "score": 0.04979142174124718}, {"image_id": 11, "category_id": 2, "bbox": [549.3214111328125, 107.98202514648438, 287.062255859375, 388.8477478027344], "score": 0.04845895990729332}, {"image_id": 11, "category_id": 2, "bbox": [109.46361541748047, 111.83271789550781, 275.68958282470703, 377.4551544189453], "score": 0.04803529381752014}, {"image_id": 11, "category_id": 2, "bbox": [266.8457336425781, 54.528079986572266, 265.4692687988281, 347.3446617126465], "score": 0.04687260091304779}, {"image_id": 11, "category_id": 2, "bbox": [381.2557678222656, 91.54817962646484, 599.4175720214844, 628.4518203735352], "score": 0.04559338092803955}, {"image_id": 11, "category_id": 2, "bbox": [259.9463806152344, 0.0, 326.3260803222656, 263.7514953613281], "score": 0.0448700487613678}, {"image_id": 11, "category_id": 2, "bbox": [531.012939453125, 0.0, 291.1553955078125, 269.7636413574219], "score": 0.04416465014219284}, {"image_id": 11, "category_id": 2, "bbox": [212.7058868408203, 90.98492431640625, 258.49937438964844, 407.0175476074219], "score": 0.0441020205616951}, {"image_id": 11, "category_id": 2, "bbox": 
[521.9603271484375, 322.5997009277344, 326.62841796875, 387.4262390136719], "score": 0.043646764010190964}, {"image_id": 11, "category_id": 2, "bbox": [1021.3606567382812, 86.2380599975586, 205.48004150390625, 373.6719436645508], "score": 0.04340348765254021}, {"image_id": 11, "category_id": 2, "bbox": [994.951904296875, 179.69537353515625, 238.6934814453125, 376.9566650390625], "score": 0.043371375650167465}, {"image_id": 11, "category_id": 2, "bbox": [843.751708984375, 0.0, 266.088134765625, 407.647216796875], "score": 0.04326639696955681}, {"image_id": 11, "category_id": 2, "bbox": [203.57489013671875, 0.0, 284.6365966796875, 275.9992980957031], "score": 0.0428997166454792}, {"image_id": 11, "category_id": 2, "bbox": [24.003799438476562, 79.28978729248047, 283.2124481201172, 378.4701919555664], "score": 0.040737055242061615}, {"image_id": 11, "category_id": 2, "bbox": [535.6162109375, 16.47736358642578, 540.6273193359375, 633.7865524291992], "score": 0.04055088758468628}, {"image_id": 11, "category_id": 2, "bbox": [1059.6229248046875, 119.06136322021484, 220.3770751953125, 380.90991973876953], "score": 0.039530105888843536}, {"image_id": 11, "category_id": 3, "bbox": [769.1420288085938, 133.29940795898438, 192.7838134765625, 390.6347961425781], "score": 0.4628440737724304}, {"image_id": 11, "category_id": 3, "bbox": [709.7943725585938, 137.6330108642578, 197.05853271484375, 382.3435516357422], "score": 0.4555150270462036}, {"image_id": 11, "category_id": 3, "bbox": [787.3982543945312, 53.27720260620117, 155.37750244140625, 416.9135932922363], "score": 0.3086288571357727}, {"image_id": 11, "category_id": 3, "bbox": [689.7314453125, 233.7025604248047, 213.201904296875, 407.6909942626953], "score": 0.21731334924697876}, {"image_id": 11, "category_id": 3, "bbox": [214.0520782470703, 11.216835021972656, 227.8765411376953, 363.00984954833984], "score": 0.2143668681383133}, {"image_id": 11, "category_id": 3, "bbox": [640.3848266601562, 160.651123046875, 251.390869140625, 397.68792724609375], "score": 0.2022867053747177}, {"image_id": 11, "category_id": 3, "bbox": [749.3510131835938, 236.79835510253906, 228.67022705078125, 402.2859344482422], "score": 0.17962859570980072}, {"image_id": 11, "category_id": 3, "bbox": [720.796630859375, 12.614604949951172, 210.92010498046875, 396.1550178527832], "score": 0.17814677953720093}, {"image_id": 11, "category_id": 3, "bbox": [786.4583129882812, 47.87508773803711, 241.32440185546875, 380.9295082092285], "score": 0.17768555879592896}, {"image_id": 11, "category_id": 3, "bbox": [789.5359497070312, 176.71551513671875, 249.43682861328125, 381.107421875], "score": 0.16221560537815094}, {"image_id": 11, "category_id": 3, "bbox": [380.6868591308594, 10.405426025390625, 602.2237243652344, 656.6643371582031], "score": 0.12607181072235107}, {"image_id": 11, "category_id": 3, "bbox": [624.8347778320312, 46.60826110839844, 287.212646484375, 430.30625915527344], "score": 0.11593650281429291}, {"image_id": 11, "category_id": 3, "bbox": [232.94973754882812, 80.417724609375, 660.5360412597656, 639.582275390625], "score": 0.10764239728450775}, {"image_id": 11, "category_id": 3, "bbox": [265.3182678222656, 19.485694885253906, 270.7479553222656, 351.0743637084961], "score": 0.08211584389209747}, {"image_id": 11, "category_id": 3, "bbox": [540.24609375, 99.18075561523438, 529.450927734375, 620.8192443847656], "score": 0.08209103345870972}, {"image_id": 11, "category_id": 3, "bbox": [812.4999389648438, 159.4417266845703, 467.50006103515625, 560.5582733154297], "score": 
0.08085548877716064}, {"image_id": 11, "category_id": 3, "bbox": [0.0, 1.7886085510253906, 620.8497314453125, 664.2385520935059], "score": 0.07544174790382385}, {"image_id": 11, "category_id": 3, "bbox": [875.1549682617188, 226.61598205566406, 274.02301025390625, 428.79856872558594], "score": 0.07184703648090363}, {"image_id": 11, "category_id": 3, "bbox": [607.3319091796875, 284.2518310546875, 282.6614990234375, 388.13885498046875], "score": 0.07103989273309708}, {"image_id": 11, "category_id": 3, "bbox": [934.269775390625, 171.0397491455078, 232.5654296875, 394.7007293701172], "score": 0.07020433992147446}, {"image_id": 11, "category_id": 3, "bbox": [212.7058868408203, 90.98492431640625, 258.49937438964844, 407.0175476074219], "score": 0.07000125199556351}, {"image_id": 11, "category_id": 3, "bbox": [555.46484375, 169.51124572753906, 280.4832763671875, 392.46983337402344], "score": 0.0695919618010521}, {"image_id": 11, "category_id": 3, "bbox": [673.3705444335938, 328.29998779296875, 242.902099609375, 379.56036376953125], "score": 0.06693938374519348}, {"image_id": 11, "category_id": 3, "bbox": [873.6351928710938, 145.146728515625, 259.80023193359375, 385.9412841796875], "score": 0.06670709699392319}, {"image_id": 11, "category_id": 3, "bbox": [798.6546020507812, 284.19696044921875, 272.60223388671875, 397.07501220703125], "score": 0.06562507152557373}, {"image_id": 11, "category_id": 3, "bbox": [863.506103515625, 351.28948974609375, 277.0791015625, 368.71051025390625], "score": 0.061407022178173065}, {"image_id": 11, "category_id": 3, "bbox": [938.1800537109375, 75.1349105834961, 231.962646484375, 392.09989166259766], "score": 0.061358742415905}, {"image_id": 11, "category_id": 3, "bbox": [841.9393310546875, 25.88986587524414, 271.529296875, 417.8709373474121], "score": 0.061117153614759445}, {"image_id": 11, "category_id": 3, "bbox": [245.55477905273438, 0.0, 645.3086242675781, 496.8207092285156], "score": 0.0560259148478508}, {"image_id": 11, "category_id": 3, "bbox": [114.1761474609375, 0.0, 649.2318115234375, 572.5965576171875], "score": 0.0545351542532444}, {"image_id": 11, "category_id": 3, "bbox": [356.19488525390625, 235.07875061035156, 669.8319702148438, 484.92124938964844], "score": 0.05093385651707649}, {"image_id": 11, "category_id": 3, "bbox": [665.0260009765625, 18.582687377929688, 567.8739013671875, 618.4192657470703], "score": 0.05024787783622742}, {"image_id": 11, "category_id": 3, "bbox": [104.71992492675781, 16.680953979492188, 296.1352081298828, 414.3992462158203], "score": 0.048947885632514954}, {"image_id": 11, "category_id": 3, "bbox": [333.08575439453125, 0.0, 286.94757080078125, 401.19097900390625], "score": 0.04768714681267738}, {"image_id": 11, "category_id": 3, "bbox": [763.9708251953125, 0.0, 217.5845947265625, 357.166015625], "score": 0.04744655638933182}, {"image_id": 11, "category_id": 3, "bbox": [1021.3606567382812, 86.2380599975586, 205.48004150390625, 373.6719436645508], "score": 0.043542586266994476}, {"image_id": 11, "category_id": 3, "bbox": [788.1318359375, 0.0, 491.8681640625, 489.9587097167969], "score": 0.042598579078912735}, {"image_id": 11, "category_id": 3, "bbox": [535.5707397460938, 0.0, 550.4198608398438, 493.98736572265625], "score": 0.04168575629591942}, {"image_id": 11, "category_id": 3, "bbox": [70.59246826171875, 233.10952758789062, 733.9910278320312, 486.8904724121094], "score": 0.040473390370607376}, {"image_id": 11, "category_id": 3, "bbox": [365.4703369140625, 0.0, 651.92919921875, 353.3249206542969], "score": 
0.040036506950855255}, {"image_id": 11, "category_id": 3, "bbox": [203.57489013671875, 0.0, 284.6365966796875, 275.9992980957031], "score": 0.03872827813029289}, {"image_id": 11, "category_id": 3, "bbox": [641.4378662109375, 0.0, 627.26513671875, 355.7915954589844], "score": 0.03799927234649658}, {"image_id": 12, "category_id": 1, "bbox": [584.5387573242188, 448.1968078613281, 497.77740478515625, 587.0398864746094], "score": 0.756415843963623}, {"image_id": 12, "category_id": 1, "bbox": [575.6952514648438, 319.00439453125, 400.10455322265625, 653.0479125976562], "score": 0.24910491704940796}, {"image_id": 12, "category_id": 1, "bbox": [749.107177734375, 369.7765197753906, 507.744873046875, 699.7013854980469], "score": 0.14256791770458221}, {"image_id": 12, "category_id": 1, "bbox": [423.25244140625, 12.339380264282227, 850.8375244140625, 998.6625118255615], "score": 0.12180855125188828}, {"image_id": 12, "category_id": 1, "bbox": [938.3679809570312, 445.4434509277344, 317.28155517578125, 562.7604675292969], "score": 0.10487187653779984}, {"image_id": 12, "category_id": 1, "bbox": [1312.2138671875, 71.54427337646484, 317.2955322265625, 665.6665420532227], "score": 0.10107286274433136}, {"image_id": 12, "category_id": 1, "bbox": [210.58319091796875, 69.52527618408203, 382.6287841796875, 546.4225387573242], "score": 0.08729560673236847}, {"image_id": 12, "category_id": 1, "bbox": [1.3367156982421875, 25.294269561767578, 941.5722808837891, 965.3807182312012], "score": 0.08666644245386124}, {"image_id": 12, "category_id": 1, "bbox": [615.8834838867188, 133.17747497558594, 812.3528442382812, 946.8225250244141], "score": 0.07625043392181396}, {"image_id": 12, "category_id": 1, "bbox": [1629.343994140625, 0.0, 290.656005859375, 622.9232788085938], "score": 0.07619617879390717}, {"image_id": 12, "category_id": 1, "bbox": [212.35372924804688, 143.08282470703125, 911.4363098144531, 936.9171752929688], "score": 0.06915077567100525}, {"image_id": 12, "category_id": 1, "bbox": [511.025390625, 529.1082153320312, 468.38751220703125, 550.8917846679688], "score": 0.06796782463788986}, {"image_id": 12, "category_id": 1, "bbox": [879.9591064453125, 625.5614013671875, 427.17333984375, 454.4385986328125], "score": 0.06433165073394775}, {"image_id": 12, "category_id": 1, "bbox": [400.519775390625, 470.6707763671875, 879.107421875, 609.3292236328125], "score": 0.05502168461680412}, {"image_id": 12, "category_id": 1, "bbox": [656.6890869140625, 215.79364013671875, 498.1884765625, 707.1752319335938], "score": 0.054581817239522934}, {"image_id": 12, "category_id": 1, "bbox": [0.0, 17.50044822692871, 553.5313720703125, 970.9961338043213], "score": 0.05268816649913788}, {"image_id": 12, "category_id": 1, "bbox": [826.6155395507812, 0.0, 842.4539184570312, 865.6671752929688], "score": 0.046238094568252563}, {"image_id": 12, "category_id": 1, "bbox": [1191.22216796875, 0.0, 728.77783203125, 759.3123168945312], "score": 0.045333489775657654}, {"image_id": 12, "category_id": 1, "bbox": [782.0577392578125, 219.59231567382812, 855.30224609375, 860.4076843261719], "score": 0.04510819539427757}, {"image_id": 12, "category_id": 1, "bbox": [987.031494140625, 232.63682556152344, 869.517333984375, 847.3631744384766], "score": 0.041379477828741074}, {"image_id": 12, "category_id": 1, "bbox": [1696.134521484375, 0.0, 223.865478515625, 197.80824279785156], "score": 0.04018330201506615}, {"image_id": 12, "category_id": 1, "bbox": [0.0, 374.3476867675781, 991.8929443359375, 705.6523132324219], "score": 0.04013173282146454}, 
{"image_id": 12, "category_id": 1, "bbox": [1330.46826171875, 244.0127716064453, 589.53173828125, 835.9872283935547], "score": 0.03922770917415619}, {"image_id": 12, "category_id": 1, "bbox": [613.7601928710938, 0.0, 899.5416870117188, 755.4652099609375], "score": 0.03913819417357445}, {"image_id": 12, "category_id": 1, "bbox": [365.0588684082031, 0.0, 983.5356140136719, 644.54443359375], "score": 0.03741612285375595}, {"image_id": 12, "category_id": 1, "bbox": [1314.6383056640625, 282.92706298828125, 333.597412109375, 632.1914672851562], "score": 0.0366983488202095}, {"image_id": 12, "category_id": 1, "bbox": [1499.5968017578125, 0.0, 389.786865234375, 561.987060546875], "score": 0.036690179258584976}, {"image_id": 12, "category_id": 1, "bbox": [128.3730926513672, 88.61483764648438, 472.3463897705078, 703.7524719238281], "score": 0.03647608309984207}, {"image_id": 12, "category_id": 1, "bbox": [20.4525146484375, 595.5023193359375, 323.2842102050781, 484.4976806640625], "score": 0.03625425323843956}, {"image_id": 12, "category_id": 1, "bbox": [979.1226196289062, 0.0, 902.2715454101562, 540.760009765625], "score": 0.03460397198796272}, {"image_id": 12, "category_id": 1, "bbox": [389.6744384765625, 418.8507385253906, 517.5331420898438, 661.1492614746094], "score": 0.03444761782884598}, {"image_id": 12, "category_id": 1, "bbox": [0.0, 0.0, 948.7913818359375, 542.5052490234375], "score": 0.033912550657987595}, {"image_id": 12, "category_id": 1, "bbox": [617.5381469726562, 816.321044921875, 351.2027587890625, 237.97998046875], "score": 0.03331560268998146}, {"image_id": 12, "category_id": 2, "bbox": [541.2962036132812, 661.6316528320312, 573.4873657226562, 418.36834716796875], "score": 0.2944899797439575}, {"image_id": 12, "category_id": 2, "bbox": [758.9971923828125, 650.1434326171875, 549.228759765625, 429.8565673828125], "score": 0.20387965440750122}, {"image_id": 12, "category_id": 2, "bbox": [949.1124267578125, 507.5289001464844, 313.06591796875, 530.2667541503906], "score": 0.17492473125457764}, {"image_id": 12, "category_id": 2, "bbox": [963.5787963867188, 568.38525390625, 420.40521240234375, 511.61474609375], "score": 0.12331625819206238}, {"image_id": 12, "category_id": 2, "bbox": [1076.1163330078125, 558.4424438476562, 412.7891845703125, 521.5575561523438], "score": 0.09299363940954208}, {"image_id": 12, "category_id": 2, "bbox": [970.0588989257812, 377.2192687988281, 407.92681884765625, 593.8536682128906], "score": 0.07303420454263687}, {"image_id": 12, "category_id": 2, "bbox": [445.9786682128906, 656.8173217773438, 438.2475891113281, 423.18267822265625], "score": 0.06459607183933258}, {"image_id": 12, "category_id": 2, "bbox": [657.1699829101562, 456.80523681640625, 499.50494384765625, 608.5025024414062], "score": 0.0626990795135498}, {"image_id": 12, "category_id": 2, "bbox": [1321.7589111328125, 61.96654510498047, 297.1748046875, 621.1035842895508], "score": 0.060950856655836105}, {"image_id": 12, "category_id": 2, "bbox": [1211.5887451171875, 546.5008544921875, 367.0682373046875, 533.4991455078125], "score": 0.05962763726711273}, {"image_id": 12, "category_id": 2, "bbox": [1361.6451416015625, 55.82767868041992, 392.0062255859375, 623.3733711242676], "score": 0.048995036631822586}, {"image_id": 12, "category_id": 2, "bbox": [915.3291015625, 306.80596923828125, 361.397216796875, 638.5580444335938], "score": 0.04844856634736061}, {"image_id": 12, "category_id": 2, "bbox": [576.2388916015625, 409.4638671875, 412.06591796875, 583.6846923828125], "score": 0.04771701619029045}, 
{"image_id": 12, "category_id": 2, "bbox": [1089.3013916015625, 365.08074951171875, 408.4337158203125, 613.6117553710938], "score": 0.04565882682800293}, {"image_id": 12, "category_id": 2, "bbox": [297.6064453125, 653.34375, 499.4632568359375, 426.65625], "score": 0.04482521489262581}, {"image_id": 12, "category_id": 2, "bbox": [100.96220397949219, 663.0093383789062, 487.3482208251953, 416.99066162109375], "score": 0.04296933859586716}, {"image_id": 12, "category_id": 2, "bbox": [1263.8453369140625, 192.77976989746094, 344.5308837890625, 610.1031036376953], "score": 0.04245179519057274}, {"image_id": 12, "category_id": 2, "bbox": [203.8378143310547, 0.0, 369.76808166503906, 550.9328002929688], "score": 0.04202678054571152}, {"image_id": 12, "category_id": 2, "bbox": [1118.8660888671875, 138.14096069335938, 409.3714599609375, 628.0019836425781], "score": 0.04164997860789299}, {"image_id": 12, "category_id": 2, "bbox": [1231.9417724609375, 3.377652168273926, 348.696044921875, 625.5098600387573], "score": 0.04057168588042259}, {"image_id": 12, "category_id": 2, "bbox": [1429.2681884765625, 0.0, 391.950439453125, 492.71368408203125], "score": 0.040231019258499146}, {"image_id": 12, "category_id": 2, "bbox": [709.5217895507812, 437.5854187011719, 960.2532348632812, 642.4145812988281], "score": 0.038646381348371506}, {"image_id": 12, "category_id": 2, "bbox": [1314.6383056640625, 282.92706298828125, 333.597412109375, 632.1914672851562], "score": 0.038616426289081573}, {"image_id": 12, "category_id": 2, "bbox": [1297.1588134765625, 537.7833251953125, 407.5615234375, 542.2166748046875], "score": 0.03687765449285507}, {"image_id": 12, "category_id": 2, "bbox": [21.873390197753906, 605.2442626953125, 437.2166976928711, 474.7557373046875], "score": 0.036657724529504776}, {"image_id": 12, "category_id": 2, "bbox": [208.83056640625, 599.34228515625, 470.3197021484375, 480.65771484375], "score": 0.03537286818027496}, {"image_id": 12, "category_id": 2, "bbox": [579.3394165039062, 239.1968994140625, 847.3419799804688, 840.8031005859375], "score": 0.03512510657310486}, {"image_id": 12, "category_id": 2, "bbox": [825.8699340820312, 43.52079391479492, 425.59295654296875, 611.4515571594238], "score": 0.03500821441411972}, {"image_id": 12, "category_id": 2, "bbox": [907.3282470703125, 85.83909606933594, 427.7047119140625, 631.6411895751953], "score": 0.03373294696211815}, {"image_id": 12, "category_id": 2, "bbox": [310.00872802734375, 224.31784057617188, 438.25579833984375, 574.5038757324219], "score": 0.03315527364611626}, {"image_id": 12, "category_id": 3, "bbox": [949.1124267578125, 507.5289001464844, 313.06591796875, 530.2667541503906], "score": 0.3932914733886719}, {"image_id": 12, "category_id": 3, "bbox": [767.5947875976562, 619.55712890625, 529.7793579101562, 460.44287109375], "score": 0.25304874777793884}, {"image_id": 12, "category_id": 3, "bbox": [552.9085083007812, 585.7274169921875, 549.4083862304688, 494.2725830078125], "score": 0.24541833996772766}, {"image_id": 12, "category_id": 3, "bbox": [657.1699829101562, 456.80523681640625, 499.50494384765625, 608.5025024414062], "score": 0.19563481211662292}, {"image_id": 12, "category_id": 3, "bbox": [1312.2138671875, 71.54427337646484, 317.2955322265625, 665.6665420532227], "score": 0.14432771503925323}, {"image_id": 12, "category_id": 3, "bbox": [192.53404235839844, 0.0, 407.4032745361328, 558.9723510742188], "score": 0.13048040866851807}, {"image_id": 12, "category_id": 3, "bbox": [659.2424926757812, 692.0776977539062, 537.0935668945312, 
387.92230224609375], "score": 0.11470801383256912}, {"image_id": 12, "category_id": 3, "bbox": [963.5787963867188, 568.38525390625, 420.40521240234375, 511.61474609375], "score": 0.1111811175942421}, {"image_id": 12, "category_id": 3, "bbox": [485.5148620605469, 460.069091796875, 508.6385192871094, 619.930908203125], "score": 0.10485245287418365}, {"image_id": 12, "category_id": 3, "bbox": [753.72021484375, 317.97552490234375, 488.5916748046875, 710.6469116210938], "score": 0.10411663353443146}, {"image_id": 12, "category_id": 3, "bbox": [970.0588989257812, 377.2192687988281, 407.92681884765625, 593.8536682128906], "score": 0.09336720407009125}, {"image_id": 12, "category_id": 3, "bbox": [517.0243530273438, 699.9879150390625, 466.81011962890625, 380.0120849609375], "score": 0.09277234226465225}, {"image_id": 12, "category_id": 3, "bbox": [404.27471923828125, 362.3662109375, 893.3655395507812, 717.6337890625], "score": 0.09228695183992386}, {"image_id": 12, "category_id": 3, "bbox": [572.546875, 257.7568359375, 527.5782470703125, 729.3822631835938], "score": 0.0918555036187172}, {"image_id": 12, "category_id": 3, "bbox": [915.3291015625, 306.80596923828125, 361.397216796875, 638.5580444335938], "score": 0.07815804332494736}, {"image_id": 12, "category_id": 3, "bbox": [1647.8848876953125, 39.52253341674805, 272.1151123046875, 245.59517288208008], "score": 0.07187265902757645}, {"image_id": 12, "category_id": 3, "bbox": [1515.110595703125, 0.0, 404.889404296875, 231.63916015625], "score": 0.06878863275051117}, {"image_id": 12, "category_id": 3, "bbox": [1696.134521484375, 0.0, 223.865478515625, 197.80824279785156], "score": 0.06449871510267258}, {"image_id": 12, "category_id": 3, "bbox": [128.3730926513672, 88.61483764648438, 472.3463897705078, 703.7524719238281], "score": 0.062326688319444656}, {"image_id": 12, "category_id": 3, "bbox": [1074.5633544921875, 504.58721923828125, 415.5159912109375, 545.8851928710938], "score": 0.057570166885852814}, {"image_id": 12, "category_id": 3, "bbox": [371.5150451660156, 2.842609405517578, 508.8008117675781, 674.5514945983887], "score": 0.05682412162423134}, {"image_id": 12, "category_id": 3, "bbox": [197.62326049804688, 260.92144775390625, 939.3100891113281, 819.0785522460938], "score": 0.055999334901571274}, {"image_id": 12, "category_id": 3, "bbox": [269.88787841796875, 85.14139556884766, 461.7232666015625, 522.3687973022461], "score": 0.055146798491477966}, {"image_id": 12, "category_id": 3, "bbox": [876.803466796875, 725.79638671875, 477.71533203125, 354.20361328125], "score": 0.05181723088026047}, {"image_id": 12, "category_id": 3, "bbox": [1260.519775390625, 239.19972229003906, 335.839599609375, 610.2167205810547], "score": 0.04980340227484703}, {"image_id": 12, "category_id": 3, "bbox": [1231.9417724609375, 3.377652168273926, 348.696044921875, 625.5098600387573], "score": 0.04958599805831909}, {"image_id": 12, "category_id": 3, "bbox": [615.8834838867188, 133.17747497558594, 812.3528442382812, 946.8225250244141], "score": 0.047887831926345825}, {"image_id": 12, "category_id": 3, "bbox": [423.25244140625, 12.339380264282227, 850.8375244140625, 998.6625118255615], "score": 0.046084120869636536}, {"image_id": 12, "category_id": 3, "bbox": [1345.7681884765625, 158.47833251953125, 408.243896484375, 600.09521484375], "score": 0.0427207350730896}, {"image_id": 12, "category_id": 3, "bbox": [1040.0928955078125, 0.0, 775.478515625, 860.419921875], "score": 0.04202926903963089}, {"image_id": 12, "category_id": 3, "bbox": [906.7560424804688, 
440.9178161621094, 1013.2439575195312, 639.0821838378906], "score": 0.04044117033481598}, {"image_id": 12, "category_id": 3, "bbox": [738.7144775390625, 323.0152587890625, 925.2144775390625, 756.9847412109375], "score": 0.039636388421058655}, {"image_id": 12, "category_id": 3, "bbox": [0.0, 485.9674987792969, 977.0608520507812, 594.0325012207031], "score": 0.03837479650974274}, {"image_id": 12, "category_id": 3, "bbox": [523.482421875, 582.2359008789062, 958.474853515625, 497.76409912109375], "score": 0.03790397197008133}, {"image_id": 12, "category_id": 3, "bbox": [620.9669189453125, 842.1637573242188, 351.44891357421875, 237.83624267578125], "score": 0.0359388031065464}, {"image_id": 12, "category_id": 3, "bbox": [1495.7440185546875, 0.0, 393.548828125, 497.8324890136719], "score": 0.03562435135245323}, {"image_id": 12, "category_id": 3, "bbox": [0.0, 253.5995635986328, 760.8349609375, 826.4004364013672], "score": 0.03541457653045654}, {"image_id": 13, "category_id": 1, "bbox": [1074.7713623046875, 136.86016845703125, 259.803466796875, 667.9913330078125], "score": 0.6553751230239868}, {"image_id": 13, "category_id": 1, "bbox": [1130.195068359375, 141.2550506591797, 314.416748046875, 666.0061798095703], "score": 0.5566073060035706}, {"image_id": 13, "category_id": 1, "bbox": [1033.12158203125, 253.00706481933594, 359.0496826171875, 678.7765045166016], "score": 0.2951386868953705}, {"image_id": 13, "category_id": 1, "bbox": [831.1115112304688, 391.1956481933594, 360.27960205078125, 517.3589172363281], "score": 0.17238332331180573}, {"image_id": 13, "category_id": 1, "bbox": [889.08251953125, 143.29176330566406, 446.940185546875, 724.8990936279297], "score": 0.17166927456855774}, {"image_id": 13, "category_id": 1, "bbox": [963.5154418945312, 104.7214584350586, 337.03155517578125, 637.3277969360352], "score": 0.09610675275325775}, {"image_id": 13, "category_id": 1, "bbox": [1627.4656982421875, 0.0, 290.734375, 628.128173828125], "score": 0.09153296053409576}, {"image_id": 13, "category_id": 1, "bbox": [1062.542724609375, 0.0, 297.9461669921875, 624.887939453125], "score": 0.07524855434894562}, {"image_id": 13, "category_id": 1, "bbox": [1124.8050537109375, 332.6353454589844, 340.0445556640625, 621.3121643066406], "score": 0.0751338079571724}, {"image_id": 13, "category_id": 1, "bbox": [0.0, 39.15789794921875, 918.4506225585938, 941.2945556640625], "score": 0.06607746332883835}, {"image_id": 13, "category_id": 1, "bbox": [347.1631164550781, 343.3568115234375, 992.1274108886719, 736.6431884765625], "score": 0.06480108201503754}, {"image_id": 13, "category_id": 1, "bbox": [1217.11865234375, 250.1517333984375, 702.88134765625, 829.8482666015625], "score": 0.061579711735248566}, {"image_id": 13, "category_id": 1, "bbox": [1212.784423828125, 0.0, 707.215576171875, 868.1076049804688], "score": 0.06056167557835579}, {"image_id": 13, "category_id": 1, "bbox": [840.6110229492188, 139.87380981445312, 357.87628173828125, 638.3418884277344], "score": 0.05867668241262436}, {"image_id": 13, "category_id": 1, "bbox": [397.9236145019531, 20.726308822631836, 904.2044372558594, 968.5628147125244], "score": 0.056537602096796036}, {"image_id": 13, "category_id": 1, "bbox": [1124.483154296875, 0.0, 311.80810546875, 565.9744262695312], "score": 0.054349448531866074}, {"image_id": 13, "category_id": 1, "bbox": [763.2921752929688, 289.1500549316406, 426.43572998046875, 534.4155578613281], "score": 0.050726328045129776}, {"image_id": 13, "category_id": 1, "bbox": [178.27944946289062, 139.77359008789062, 
...(truncated machine-generated output: COCO-format detection predictions, one JSON record per predicted box with an "image_id", a "category_id", a "bbox" given as [x, y, width, height] in pixels, and a confidence "score"; the raw dump runs to thousands of records, most with very low scores, so only the record schema is kept here)...
{"image_id": 18, "category_id": 1, "bbox": [1058.138671875, 0.0, 221.861328125, 371.155517578125], "score": 0.06072165444493294}, {"image_id": 18, "category_id": 1, "bbox": [806.2705078125, 171.50828552246094, 473.7294921875, 548.4917144775391], "score": 0.05551741272211075}, {"image_id": 18, "category_id": 1, "bbox": [293.9404296875, 371.70257568359375, 315.2745361328125, 346.0548095703125], "score": 0.05381467193365097}, {"image_id": 18, "category_id": 1, "bbox": [0.0, 10.865409851074219, 374.0946350097656, 641.0012283325195], "score": 0.05233713611960411}, {"image_id": 18, "category_id": 1, "bbox": [0.0, 235.07273864746094, 645.2622680664062, 484.92726135253906], "score": 0.05222594738006592}, {"image_id": 18, "category_id": 1, "bbox": [535.991455078125, 0.0, 590.5478515625, 582.2379150390625], "score": 0.0489974208176136}, {"image_id": 18, "category_id": 1, "bbox": [546.1051025390625, 327.04864501953125, 256.87158203125, 377.16180419921875], "score": 0.048086073249578476}, {"image_id": 18, "category_id": 1, "bbox": [367.6881103515625, 95.8426284790039, 360.782958984375, 489.6484603881836], "score": 0.04687792807817459}, {"image_id": 18, "category_id": 1, "bbox": [892.5545654296875, 256.63653564453125, 253.4176025390625, 424.99078369140625], "score": 0.046792056411504745}, {"image_id": 18, "category_id": 1, "bbox": [495.47747802734375, 161.63047790527344, 642.3250122070312, 558.3695220947266], "score": 0.045307327061891556}, {"image_id": 18, "category_id": 1, "bbox": [202.5981903076172, 34.071006774902344, 240.05918884277344, 388.4756546020508], "score": 0.04360231012105942}, {"image_id": 18, "category_id": 1, "bbox": [14.1793212890625, 0.0, 610.4825439453125, 365.2908020019531], "score": 0.042837247252464294}, {"image_id": 18, "category_id": 1, "bbox": [682.4840087890625, 186.42723083496094, 265.7386474609375, 451.67677307128906], "score": 0.04089988023042679}, {"image_id": 18, "category_id": 1, "bbox": [719.1195678710938, 270.8828430175781, 271.4766845703125, 416.6634216308594], "score": 0.040136415511369705}, {"image_id": 18, "category_id": 1, "bbox": [303.7578430175781, 476.8208923339844, 366.8321838378906, 243.17910766601562], "score": 0.03972431644797325}, {"image_id": 18, "category_id": 1, "bbox": [261.5706787109375, 232.38951110839844, 597.423828125, 487.61048889160156], "score": 0.03971507400274277}, {"image_id": 18, "category_id": 1, "bbox": [647.2861328125, 0.0, 598.2288818359375, 362.5942687988281], "score": 0.038032177835702896}, {"image_id": 18, "category_id": 2, "bbox": [302.3157958984375, 414.4477844238281, 370.93206787109375, 305.5522155761719], "score": 0.20930245518684387}, {"image_id": 18, "category_id": 2, "bbox": [546.1051025390625, 327.04864501953125, 256.87158203125, 377.16180419921875], "score": 0.08744240552186966}, {"image_id": 18, "category_id": 2, "bbox": [381.5999755859375, 476.9804382324219, 347.48406982421875, 243.01956176757812], "score": 0.07954926043748856}, {"image_id": 18, "category_id": 2, "bbox": [469.2803955078125, 312.87628173828125, 254.69354248046875, 405.8546142578125], "score": 0.07507217675447464}, {"image_id": 18, "category_id": 2, "bbox": [722.4418334960938, 343.31280517578125, 178.3448486328125, 344.2802734375], "score": 0.07198373228311539}, {"image_id": 18, "category_id": 2, "bbox": [646.476806640625, 340.6021728515625, 171.64959716796875, 345.20819091796875], "score": 0.07030263543128967}, {"image_id": 18, "category_id": 2, "bbox": [516.07177734375, 476.368408203125, 338.5997314453125, 243.631591796875], "score": 0.06532739847898483}, 
{"image_id": 18, "category_id": 2, "bbox": [283.5845947265625, 282.197998046875, 306.40130615234375, 437.802001953125], "score": 0.06179151311516762}, {"image_id": 18, "category_id": 2, "bbox": [399.3203125, 208.83363342285156, 182.7967529296875, 416.86949157714844], "score": 0.06143924593925476}, {"image_id": 18, "category_id": 2, "bbox": [176.12423706054688, 62.01991653442383, 260.8713684082031, 396.5207633972168], "score": 0.054708462208509445}, {"image_id": 18, "category_id": 2, "bbox": [724.3902587890625, 430.767578125, 275.361572265625, 289.232421875], "score": 0.054218851029872894}, {"image_id": 18, "category_id": 2, "bbox": [811.0269775390625, 440.5391845703125, 262.1390380859375, 279.4608154296875], "score": 0.0493231900036335}, {"image_id": 18, "category_id": 2, "bbox": [136.8541717529297, 445.76678466796875, 322.35292053222656, 274.23321533203125], "score": 0.04711971431970596}, {"image_id": 18, "category_id": 2, "bbox": [656.072265625, 474.6877136230469, 293.760009765625, 245.31228637695312], "score": 0.045958537608385086}, {"image_id": 18, "category_id": 2, "bbox": [900.1009521484375, 194.52024841308594, 242.98193359375, 431.92152404785156], "score": 0.04459964856505394}, {"image_id": 18, "category_id": 2, "bbox": [127.78926849365234, 32.41390609741211, 264.49292755126953, 385.48080825805664], "score": 0.04278760403394699}, {"image_id": 18, "category_id": 2, "bbox": [721.1678466796875, 234.2992706298828, 263.803466796875, 421.04002380371094], "score": 0.04277132824063301}, {"image_id": 18, "category_id": 2, "bbox": [865.000732421875, 398.81805419921875, 267.694091796875, 321.18194580078125], "score": 0.04218815267086029}, {"image_id": 18, "category_id": 2, "bbox": [803.3971557617188, 229.92938232421875, 288.75714111328125, 423.7611083984375], "score": 0.03997331112623215}, {"image_id": 18, "category_id": 2, "bbox": [221.5021514892578, 122.79947662353516, 288.9256134033203, 441.89608001708984], "score": 0.03965191915631294}, {"image_id": 18, "category_id": 2, "bbox": [17.797882080078125, 69.74903869628906, 294.515625, 394.51893615722656], "score": 0.039341673254966736}, {"image_id": 18, "category_id": 2, "bbox": [927.3861083984375, 80.94548034667969, 232.7408447265625, 383.9580841064453], "score": 0.03900161758065224}, {"image_id": 18, "category_id": 2, "bbox": [286.16064453125, 158.5311737060547, 534.67529296875, 561.4688262939453], "score": 0.03815891221165657}, {"image_id": 18, "category_id": 2, "bbox": [209.35533142089844, 467.666748046875, 323.96925354003906, 252.333251953125], "score": 0.03812236338853836}, {"image_id": 18, "category_id": 2, "bbox": [533.1817016601562, 201.4303741455078, 288.11578369140625, 424.6631317138672], "score": 0.03806813061237335}, {"image_id": 18, "category_id": 2, "bbox": [203.09947204589844, 0.0, 244.0326690673828, 363.9478759765625], "score": 0.03801187500357628}, {"image_id": 18, "category_id": 2, "bbox": [1056.2508544921875, 80.65677642822266, 223.7491455078125, 383.92501068115234], "score": 0.03753840550780296}, {"image_id": 18, "category_id": 2, "bbox": [316.36541748046875, 135.3047637939453, 271.07562255859375, 435.6820526123047], "score": 0.03720267489552498}, {"image_id": 18, "category_id": 3, "bbox": [321.35113525390625, 275.3378601074219, 320.51458740234375, 444.6621398925781], "score": 0.3923160135746002}, {"image_id": 18, "category_id": 3, "bbox": [378.07537841796875, 373.43023681640625, 356.179443359375, 342.35174560546875], "score": 0.31821033358573914}, {"image_id": 18, "category_id": 3, "bbox": [301.011474609375, 
454.5102233886719, 373.0806884765625, 265.4897766113281], "score": 0.25587278604507446}, {"image_id": 18, "category_id": 3, "bbox": [399.3203125, 208.83363342285156, 182.7967529296875, 416.86949157714844], "score": 0.25195708870887756}, {"image_id": 18, "category_id": 3, "bbox": [647.7015991210938, 299.2430725097656, 178.05328369140625, 357.9086608886719], "score": 0.16788813471794128}, {"image_id": 18, "category_id": 3, "bbox": [467.2480773925781, 273.23577880859375, 262.4407653808594, 415.36444091796875], "score": 0.13269028067588806}, {"image_id": 18, "category_id": 3, "bbox": [723.7334594726562, 301.4986267089844, 183.9490966796875, 357.7221374511719], "score": 0.12863761186599731}, {"image_id": 18, "category_id": 3, "bbox": [286.16064453125, 158.5311737060547, 534.67529296875, 561.4688262939453], "score": 0.11346122622489929}, {"image_id": 18, "category_id": 3, "bbox": [520.19580078125, 394.83563232421875, 296.5428466796875, 325.16436767578125], "score": 0.10685143619775772}, {"image_id": 18, "category_id": 3, "bbox": [150.67730712890625, 87.24994659423828, 543.9344482421875, 632.7500534057617], "score": 0.09723170846700668}, {"image_id": 18, "category_id": 3, "bbox": [202.5981903076172, 34.071006774902344, 240.05918884277344, 388.4756546020508], "score": 0.09371877461671829}, {"image_id": 18, "category_id": 3, "bbox": [211.33445739746094, 176.27047729492188, 313.41627502441406, 461.6811218261719], "score": 0.08003179728984833}, {"image_id": 18, "category_id": 3, "bbox": [441.40289306640625, 179.20196533203125, 326.45159912109375, 462.87567138671875], "score": 0.07789149135351181}, {"image_id": 18, "category_id": 3, "bbox": [0.0, 235.07273864746094, 645.2622680664062, 484.92726135253906], "score": 0.07733529061079025}, {"image_id": 18, "category_id": 3, "bbox": [450.97222900390625, 474.2694091796875, 301.54693603515625, 245.7305908203125], "score": 0.0763150006532669}, {"image_id": 18, "category_id": 3, "bbox": [7.19024658203125, 15.80826187133789, 591.340576171875, 647.6103172302246], "score": 0.07627503573894501}, {"image_id": 18, "category_id": 3, "bbox": [354.5708923339844, 226.02536010742188, 656.2649230957031, 493.9746398925781], "score": 0.0760548859834671}, {"image_id": 18, "category_id": 3, "bbox": [316.36541748046875, 135.3047637939453, 271.07562255859375, 435.6820526123047], "score": 0.07517086714506149}, {"image_id": 18, "category_id": 3, "bbox": [551.7643432617188, 280.83978271484375, 263.37603759765625, 391.72967529296875], "score": 0.06793973594903946}, {"image_id": 18, "category_id": 3, "bbox": [926.2571411132812, 165.7624053955078, 225.25531005859375, 430.77915954589844], "score": 0.06740720570087433}, {"image_id": 18, "category_id": 3, "bbox": [495.47747802734375, 161.63047790527344, 642.3250122070312, 558.3695220947266], "score": 0.0636921301484108}, {"image_id": 18, "category_id": 3, "bbox": [613.4796142578125, 386.6827087402344, 259.681640625, 333.3172912597656], "score": 0.0630265399813652}, {"image_id": 18, "category_id": 3, "bbox": [720.7098388671875, 390.5624084472656, 275.11053466796875, 329.4375915527344], "score": 0.05813354253768921}, {"image_id": 18, "category_id": 3, "bbox": [892.5545654296875, 256.63653564453125, 253.4176025390625, 424.99078369140625], "score": 0.05630258843302727}, {"image_id": 18, "category_id": 3, "bbox": [806.2705078125, 171.50828552246094, 473.7294921875, 548.4917144775391], "score": 0.05583582818508148}, {"image_id": 18, "category_id": 3, "bbox": [378.72991943359375, 4.6676788330078125, 626.1039428710938, 653.0517425537109], 
"score": 0.04974007233977318}, {"image_id": 18, "category_id": 3, "bbox": [0.0, 87.19134521484375, 363.3289794921875, 632.8086547851562], "score": 0.04743045195937157}, {"image_id": 18, "category_id": 3, "bbox": [209.1873321533203, 331.3917236328125, 320.51939392089844, 388.6082763671875], "score": 0.04735464230179787}, {"image_id": 18, "category_id": 3, "bbox": [261.3817443847656, 0.0, 601.8935241699219, 580.4398193359375], "score": 0.046188417822122574}, {"image_id": 18, "category_id": 3, "bbox": [535.991455078125, 0.0, 590.5478515625, 582.2379150390625], "score": 0.04467049241065979}, {"image_id": 18, "category_id": 3, "bbox": [127.78926849365234, 32.41390609741211, 264.49292755126953, 385.48080825805664], "score": 0.04389302805066109}, {"image_id": 18, "category_id": 3, "bbox": [927.3861083984375, 80.94548034667969, 232.7408447265625, 383.9580841064453], "score": 0.04347451031208038}, {"image_id": 18, "category_id": 3, "bbox": [374.04400634765625, 63.0802001953125, 224.32049560546875, 432.2757568359375], "score": 0.04309418052434921}, {"image_id": 18, "category_id": 3, "bbox": [721.1678466796875, 234.2992706298828, 263.803466796875, 421.04002380371094], "score": 0.042947448790073395}, {"image_id": 18, "category_id": 3, "bbox": [208.94036865234375, 89.63260650634766, 279.2989501953125, 432.12459564208984], "score": 0.04247027263045311}, {"image_id": 18, "category_id": 3, "bbox": [367.6881103515625, 95.8426284790039, 360.782958984375, 489.6484603881836], "score": 0.041150644421577454}, {"image_id": 18, "category_id": 3, "bbox": [642.7740478515625, 25.01525115966797, 612.30712890625, 621.1850051879883], "score": 0.040001191198825836}, {"image_id": 18, "category_id": 3, "bbox": [629.7042846679688, 0.0, 612.2349243164062, 424.857421875], "score": 0.03994756191968918}, {"image_id": 19, "category_id": 1, "bbox": [1083.5552978515625, 121.27059936523438, 228.735595703125, 624.4814147949219], "score": 0.5378648638725281}, {"image_id": 19, "category_id": 1, "bbox": [1095.45751953125, 124.42922973632812, 348.4569091796875, 621.9758605957031], "score": 0.3355593979358673}, {"image_id": 19, "category_id": 1, "bbox": [1058.4752197265625, 276.05023193359375, 290.092041015625, 604.0613403320312], "score": 0.21437472105026245}, {"image_id": 19, "category_id": 1, "bbox": [972.6149291992188, 214.06674194335938, 357.96490478515625, 582.3255310058594], "score": 0.196515753865242}, {"image_id": 19, "category_id": 1, "bbox": [384.75030517578125, 8.526128768920898, 919.6034545898438, 998.1067447662354], "score": 0.17202228307724}, {"image_id": 19, "category_id": 1, "bbox": [10.40277099609375, 16.570232391357422, 919.6292724609375, 977.7473335266113], "score": 0.1427760273218155}, {"image_id": 19, "category_id": 1, "bbox": [172.7029266357422, 101.75118255615234, 956.0100860595703, 978.2488174438477], "score": 0.12031728774309158}, {"image_id": 19, "category_id": 1, "bbox": [951.8260498046875, 309.17303466796875, 369.1380615234375, 664.3521118164062], "score": 0.08533389866352081}, {"image_id": 19, "category_id": 1, "bbox": [1209.674072265625, 235.85670471191406, 710.325927734375, 844.1432952880859], "score": 0.0849035233259201}, {"image_id": 19, "category_id": 1, "bbox": [232.0223846435547, 154.03482055664062, 354.2984161376953, 579.1134948730469], "score": 0.08398036658763885}, {"image_id": 19, "category_id": 1, "bbox": [564.4915161132812, 234.95079040527344, 907.3417358398438, 845.0492095947266], "score": 0.08222981542348862}, {"image_id": 19, "category_id": 1, "bbox": [209.3423309326172, 13.704036712646484, 
374.99415588378906, 550.5466957092285], "score": 0.07037705928087234}, {"image_id": 19, "category_id": 1, "bbox": [1054.5245361328125, 0.0, 309.7841796875, 588.5333862304688], "score": 0.06612008064985275}, {"image_id": 19, "category_id": 1, "bbox": [971.2454223632812, 43.616939544677734, 308.84051513671875, 650.9736976623535], "score": 0.06537369638681412}, {"image_id": 19, "category_id": 1, "bbox": [606.6021728515625, 0.0, 836.741455078125, 877.9691162109375], "score": 0.06344739347696304}, {"image_id": 19, "category_id": 1, "bbox": [1609.6180419921875, 0.0, 310.3819580078125, 637.5390625], "score": 0.06304096430540085}, {"image_id": 19, "category_id": 1, "bbox": [1360.6820068359375, 11.725622177124023, 559.3179931640625, 973.8843631744385], "score": 0.0547214038670063}, {"image_id": 19, "category_id": 1, "bbox": [852.2474365234375, 201.65789794921875, 413.0628662109375, 609.2301635742188], "score": 0.05065874755382538}, {"image_id": 19, "category_id": 1, "bbox": [0.0, 231.03955078125, 747.4063720703125, 848.96044921875], "score": 0.04972207546234131}, {"image_id": 19, "category_id": 1, "bbox": [984.03369140625, 5.130117416381836, 889.6597900390625, 965.2625217437744], "score": 0.049543242901563644}, {"image_id": 19, "category_id": 1, "bbox": [0.0, 24.19112205505371, 557.1464233398438, 958.66526222229], "score": 0.04752582684159279}, {"image_id": 19, "category_id": 1, "bbox": [1169.9854736328125, 135.29786682128906, 385.6878662109375, 641.7587738037109], "score": 0.043798353523015976}, {"image_id": 19, "category_id": 1, "bbox": [332.64019775390625, 446.6051940917969, 999.8425903320312, 633.3948059082031], "score": 0.04351605102419853}, {"image_id": 19, "category_id": 1, "bbox": [0.0, 465.058837890625, 977.2416381835938, 614.941162109375], "score": 0.04312947392463684}, {"image_id": 19, "category_id": 1, "bbox": [338.8112487792969, 0.0, 1006.6994934082031, 632.661376953125], "score": 0.041987013071775436}, {"image_id": 19, "category_id": 1, "bbox": [1175.132568359375, 0.0, 744.867431640625, 748.4298706054688], "score": 0.039251185953617096}, {"image_id": 19, "category_id": 1, "bbox": [800.4139404296875, 324.8336486816406, 864.024658203125, 755.1663513183594], "score": 0.03872949257493019}, {"image_id": 19, "category_id": 1, "bbox": [828.9464721679688, 0.0, 371.56866455078125, 566.5751342773438], "score": 0.038453709334135056}, {"image_id": 19, "category_id": 1, "bbox": [5.962005615234375, 0.0, 973.1551208496094, 618.959716796875], "score": 0.0383325032889843}, {"image_id": 19, "category_id": 1, "bbox": [1583.55908203125, 170.4002685546875, 336.44091796875, 683.7091674804688], "score": 0.03760361298918724}, {"image_id": 19, "category_id": 2, "bbox": [972.6149291992188, 214.06674194335938, 357.96490478515625, 582.3255310058594], "score": 0.13344503939151764}, {"image_id": 19, "category_id": 2, "bbox": [1061.555419921875, 177.15061950683594, 402.3941650390625, 626.0648956298828], "score": 0.07584280520677567}, {"image_id": 19, "category_id": 2, "bbox": [906.649658203125, 490.4180603027344, 414.161376953125, 562.4248352050781], "score": 0.07568567991256714}, {"image_id": 19, "category_id": 2, "bbox": [1058.4752197265625, 276.05023193359375, 290.092041015625, 604.0613403320312], "score": 0.0735924169421196}, {"image_id": 19, "category_id": 2, "bbox": [1083.5552978515625, 121.27059936523438, 228.735595703125, 624.4814147949219], "score": 0.07207722961902618}, {"image_id": 19, "category_id": 2, "bbox": [861.1846313476562, 153.22219848632812, 391.92437744140625, 617.7627868652344], "score": 
0.06676746159791946}, {"image_id": 19, "category_id": 2, "bbox": [823.4144897460938, 429.1307373046875, 431.06524658203125, 568.1822509765625], "score": 0.06295453011989594}, {"image_id": 19, "category_id": 2, "bbox": [182.9867706298828, 192.42616271972656, 406.2402801513672, 603.0377044677734], "score": 0.062301456928253174}, {"image_id": 19, "category_id": 2, "bbox": [992.4437255859375, 554.4766235351562, 392.9923095703125, 525.5233764648438], "score": 0.05489707738161087}, {"image_id": 19, "category_id": 2, "bbox": [49.389793395996094, 204.04562377929688, 444.58521270751953, 596.2515563964844], "score": 0.05435497313737869}, {"image_id": 19, "category_id": 2, "bbox": [204.5184783935547, 61.10201644897461, 374.19044494628906, 572.5913429260254], "score": 0.0534067302942276}, {"image_id": 19, "category_id": 2, "bbox": [744.5196533203125, 99.6021499633789, 396.0238037109375, 613.7987899780273], "score": 0.050507158041000366}, {"image_id": 19, "category_id": 2, "bbox": [1091.6768798828125, 477.3342590332031, 396.825927734375, 584.3298034667969], "score": 0.05017963424324989}, {"image_id": 19, "category_id": 2, "bbox": [22.055831909179688, 329.1246032714844, 426.83363342285156, 585.8310241699219], "score": 0.048516646027565}, {"image_id": 19, "category_id": 2, "bbox": [951.8260498046875, 309.17303466796875, 369.1380615234375, 664.3521118164062], "score": 0.04581855237483978}, {"image_id": 19, "category_id": 2, "bbox": [1169.9854736328125, 135.29786682128906, 385.6878662109375, 641.7587738037109], "score": 0.04551633819937706}, {"image_id": 19, "category_id": 2, "bbox": [130.42144775390625, 321.74322509765625, 425.14727783203125, 594.6079711914062], "score": 0.04543738439679146}, {"image_id": 19, "category_id": 2, "bbox": [311.60504150390625, 172.343017578125, 382.371337890625, 556.8453369140625], "score": 0.04534950107336044}, {"image_id": 19, "category_id": 2, "bbox": [734.2424926757812, 235.48228454589844, 433.84942626953125, 632.9399566650391], "score": 0.044453464448451996}, {"image_id": 19, "category_id": 2, "bbox": [689.9584350585938, 495.2972106933594, 489.53497314453125, 563.9757385253906], "score": 0.04359890893101692}, {"image_id": 19, "category_id": 2, "bbox": [0.0, 274.5169372558594, 339.6925354003906, 582.7585754394531], "score": 0.043590739369392395}, {"image_id": 19, "category_id": 2, "bbox": [384.75030517578125, 8.526128768920898, 919.6034545898438, 998.1067447662354], "score": 0.04220017045736313}, {"image_id": 19, "category_id": 2, "bbox": [971.2454223632812, 43.616939544677734, 308.84051513671875, 650.9736976623535], "score": 0.04172388091683388}, {"image_id": 19, "category_id": 2, "bbox": [714.184326171875, 0.0, 412.9075927734375, 472.53460693359375], "score": 0.04151882231235504}, {"image_id": 19, "category_id": 2, "bbox": [172.7029266357422, 101.75118255615234, 956.0100860595703, 978.2488174438477], "score": 0.041330285370349884}, {"image_id": 19, "category_id": 2, "bbox": [630.5449829101562, 115.80489349365234, 392.96240234375, 581.8255996704102], "score": 0.040975429117679596}, {"image_id": 19, "category_id": 2, "bbox": [844.8473510742188, 19.57660675048828, 768.7935180664062, 952.7991256713867], "score": 0.0408250130712986}, {"image_id": 19, "category_id": 2, "bbox": [416.5718994140625, 203.90652465820312, 410.5155029296875, 607.7856750488281], "score": 0.0408090241253376}, {"image_id": 19, "category_id": 2, "bbox": [593.6343383789062, 119.41899871826172, 853.6694946289062, 960.5810012817383], "score": 0.039982955902814865}, {"image_id": 19, "category_id": 2, "bbox": 
[62.652889251708984, 71.5713882446289, 444.43994522094727, 558.7232894897461], "score": 0.03956343233585358}, {"image_id": 19, "category_id": 2, "bbox": [803.3406372070312, 0.0, 409.33721923828125, 426.28814697265625], "score": 0.039108842611312866}, {"image_id": 19, "category_id": 2, "bbox": [298.03607177734375, 20.91777992248535, 406.2025146484375, 542.6248226165771], "score": 0.03902973234653473}, {"image_id": 19, "category_id": 2, "bbox": [7.951789855957031, 440.16748046875, 317.23033905029297, 597.554931640625], "score": 0.038969069719314575}, {"image_id": 19, "category_id": 2, "bbox": [1190.696533203125, 657.8096313476562, 429.1717529296875, 422.19036865234375], "score": 0.038737472146749496}, {"image_id": 19, "category_id": 2, "bbox": [996.1561889648438, 0.0, 399.83258056640625, 361.66510009765625], "score": 0.038505084812641144}, {"image_id": 19, "category_id": 2, "bbox": [1084.5267333984375, 0.0, 428.9188232421875, 363.90313720703125], "score": 0.03837126865983009}, {"image_id": 19, "category_id": 2, "bbox": [1017.2208862304688, 0.0, 844.3344116210938, 871.92919921875], "score": 0.03763663023710251}, {"image_id": 19, "category_id": 2, "bbox": [523.068115234375, 112.6292953491211, 385.1661376953125, 590.5076675415039], "score": 0.03694787621498108}, {"image_id": 19, "category_id": 3, "bbox": [1061.439453125, 240.22496032714844, 274.5799560546875, 597.1305084228516], "score": 0.606549859046936}, {"image_id": 19, "category_id": 3, "bbox": [974.9205322265625, 162.74070739746094, 345.029052734375, 604.3134918212891], "score": 0.34585994482040405}, {"image_id": 19, "category_id": 3, "bbox": [1061.555419921875, 177.15061950683594, 402.3941650390625, 626.0648956298828], "score": 0.31565138697624207}, {"image_id": 19, "category_id": 3, "bbox": [951.8260498046875, 309.17303466796875, 369.1380615234375, 664.3521118164062], "score": 0.25405964255332947}, {"image_id": 19, "category_id": 3, "bbox": [1026.9862060546875, 384.30938720703125, 343.874755859375, 636.9789428710938], "score": 0.1600458323955536}, {"image_id": 19, "category_id": 3, "bbox": [852.2474365234375, 201.65789794921875, 413.0628662109375, 609.2301635742188], "score": 0.1350499838590622}, {"image_id": 19, "category_id": 3, "bbox": [222.45452880859375, 107.30513000488281, 358.33551025390625, 578.2366790771484], "score": 0.12839359045028687}, {"image_id": 19, "category_id": 3, "bbox": [609.53466796875, 14.435468673706055, 824.3419189453125, 975.857988357544], "score": 0.12174081057310104}, {"image_id": 19, "category_id": 3, "bbox": [1074.3028564453125, 60.24495315551758, 248.936279296875, 643.4259452819824], "score": 0.12147335708141327}, {"image_id": 19, "category_id": 3, "bbox": [369.1385498046875, 108.63579559326172, 945.6484375, 971.3642044067383], "score": 0.1072656586766243}, {"image_id": 19, "category_id": 3, "bbox": [1080.687255859375, 319.0575866699219, 405.7197265625, 654.0152282714844], "score": 0.10543368756771088}, {"image_id": 19, "category_id": 3, "bbox": [14.940399169921875, 102.16429901123047, 908.7486877441406, 977.8357009887695], "score": 0.08965212106704712}, {"image_id": 19, "category_id": 3, "bbox": [204.9681854248047, 0.0, 388.2613067626953, 510.0557861328125], "score": 0.08299317210912704}, {"image_id": 19, "category_id": 3, "bbox": [831.1487426757812, 130.16346740722656, 795.6423950195312, 949.8365325927734], "score": 0.08285055309534073}, {"image_id": 19, "category_id": 3, "bbox": [964.263427734375, 234.05503845214844, 914.1812744140625, 845.9449615478516], "score": 0.07750314474105835}, {"image_id": 19, 
"category_id": 3, "bbox": [1221.2388916015625, 6.906349182128906, 698.7611083984375, 974.9222030639648], "score": 0.07689044624567032}, {"image_id": 19, "category_id": 3, "bbox": [178.90072631835938, 11.81363296508789, 958.0703430175781, 984.2643699645996], "score": 0.06495741754770279}, {"image_id": 19, "category_id": 3, "bbox": [1105.939697265625, 0.0, 351.2701416015625, 639.305419921875], "score": 0.06210622936487198}, {"image_id": 19, "category_id": 3, "bbox": [1363.134765625, 230.79869079589844, 556.865234375, 849.2013092041016], "score": 0.05937302112579346}, {"image_id": 19, "category_id": 3, "bbox": [534.384521484375, 331.02789306640625, 977.738525390625, 748.9721069335938], "score": 0.055273693054914474}, {"image_id": 19, "category_id": 3, "bbox": [965.5433349609375, 0.0, 308.699951171875, 633.299072265625], "score": 0.05506341531872749}, {"image_id": 19, "category_id": 3, "bbox": [906.649658203125, 490.4180603027344, 414.161376953125, 562.4248352050781], "score": 0.05341381952166557}, {"image_id": 19, "category_id": 3, "bbox": [1169.9854736328125, 135.29786682128906, 385.6878662109375, 641.7587738037109], "score": 0.05341159924864769}, {"image_id": 19, "category_id": 3, "bbox": [871.9500732421875, 54.54948806762695, 332.63232421875, 594.874828338623], "score": 0.051953934133052826}, {"image_id": 19, "category_id": 3, "bbox": [309.0577392578125, 125.52936553955078, 381.44183349609375, 558.5551681518555], "score": 0.05152403935790062}, {"image_id": 19, "category_id": 3, "bbox": [248.37368774414062, 245.9715118408203, 362.0437316894531, 619.9862518310547], "score": 0.051376838237047195}, {"image_id": 19, "category_id": 3, "bbox": [141.2560272216797, 332.6877746582031, 1039.2582550048828, 747.3122253417969], "score": 0.05011116713285446}, {"image_id": 19, "category_id": 3, "bbox": [356.83734130859375, 0.0, 967.5060424804688, 748.9248046875], "score": 0.04369395226240158}, {"image_id": 19, "category_id": 3, "bbox": [823.4144897460938, 429.1307373046875, 431.06524658203125, 568.1822509765625], "score": 0.04016021266579628}, {"image_id": 19, "category_id": 3, "bbox": [10.906402587890625, 0.0, 936.0325012207031, 725.3652954101562], "score": 0.039944734424352646}, {"image_id": 19, "category_id": 3, "bbox": [300.4727783203125, 0.0, 412.12359619140625, 508.9482116699219], "score": 0.03989148512482643}, {"image_id": 19, "category_id": 3, "bbox": [811.5675659179688, 0.0, 383.35906982421875, 503.3565673828125], "score": 0.03696984797716141}]
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/coco_detection_main.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/coco_detection_main.yml
new file mode 100644
index 000000000..69af912b5
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/coco_detection_main.yml
@@ -0,0 +1,19 @@
+metric: COCO
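+# three detection classes: person, bicycle and motorcycle (from the converted VOC labels)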
+num_classes: 3
+
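+# Both splits use the COCO json files produced by the VOC conversion step;
+# dataset_dir points at the AI Studio data mount, so adjust it for local runs.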
+TrainDataset:
+ !COCODataSet
+ image_dir: picodet_motorcycle/JPEGImages/
+ anno_path: voc_train.json
+ dataset_dir: /home/aistudio/data/data128282/
+ data_fields: ['image', 'gt_bbox', 'gt_class', 'is_crowd']
+
+EvalDataset:
+ !COCODataSet
+ image_dir: picodet_motorcycle/JPEGImages/
+ anno_path: voc_test.json
+ dataset_dir: /home/aistudio/data/data128282/
+
+TestDataset:
+ !ImageFolder
+ anno_path: voc_test.json
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/eval.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/eval.py
new file mode 100644
index 000000000..67f5383ff
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/eval.py
@@ -0,0 +1,144 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+
+# add python path of PaddleDetection to sys.path
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
+sys.path.insert(0, parent_path)
+
+# ignore warning log
+import warnings
+warnings.filterwarnings('ignore')
+
+import paddle
+
+from ppdet.core.workspace import load_config, merge_config
+from ppdet.utils.check import check_gpu, check_npu, check_version, check_config
+from ppdet.utils.cli import ArgsParser
+from ppdet.engine import Trainer, init_parallel_env
+from ppdet.metrics.coco_utils import json_eval_results
+from ppdet.slim import build_slim_model
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('eval')
+
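+# Typical usage (illustrative; the weights path defaults to the value set in the config):
+#   python eval.py -c picodet_lcnet_1_5x_416_coco.yml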
+
+def parse_args():
+ parser = ArgsParser()
+ parser.add_argument(
+ "--output_eval",
+ default=None,
+ type=str,
+ help="Evaluation directory, default is current directory.")
+
+ parser.add_argument(
+ '--json_eval',
+ action='store_true',
+ default=False,
+ help='Whether to re-evaluate using an existing bbox.json or mask.json')
+
+ parser.add_argument(
+ "--slim_config",
+ default=None,
+ type=str,
+ help="Configuration file of slim method.")
+
+ # TODO: bias should be unified
+ parser.add_argument(
+ "--bias",
+ action="store_true",
+ help="whether add bias or not while getting w and h")
+
+ parser.add_argument(
+ "--classwise",
+ action="store_true",
+ help="whether per-category AP and draw P-R Curve or not.")
+
+ parser.add_argument(
+ '--save_prediction_only',
+ action='store_true',
+ default=False,
+ help='Whether to save the evaluation results only')
+
+ args = parser.parse_args()
+ return args
+
+
+def run(FLAGS, cfg):
+ if FLAGS.json_eval:
+ logger.info(
+ "In json_eval mode, PaddleDetection will evaluate json files in "
+ "output_eval directly. And proposal.json, bbox.json and mask.json "
+ "will be detected by default.")
+ json_eval_results(
+ cfg.metric,
+ json_directory=FLAGS.output_eval,
+ dataset=cfg['EvalDataset'])
+ return
+
+ # init parallel environment if nranks > 1
+ init_parallel_env()
+
+ # build trainer
+ trainer = Trainer(cfg, mode='eval')
+
+ # load weights
+ trainer.load_weights(cfg.weights)
+
+ # evaluation
+ trainer.evaluate()
+
+
+def main():
+ FLAGS = parse_args()
+ cfg = load_config(FLAGS.config)
+ # TODO: bias should be unified
+ cfg['bias'] = 1 if FLAGS.bias else 0
+ cfg['classwise'] = FLAGS.classwise
+ cfg['output_eval'] = FLAGS.output_eval
+ cfg['save_prediction_only'] = FLAGS.save_prediction_only
+ merge_config(FLAGS.opt)
+
+ # disable npu in config by default
+ if 'use_npu' not in cfg:
+ cfg.use_npu = False
+
+ if cfg.use_gpu:
+ place = paddle.set_device('gpu')
+ elif cfg.use_npu:
+ place = paddle.set_device('npu')
+ else:
+ place = paddle.set_device('cpu')
+
+ if 'norm_type' in cfg and cfg['norm_type'] == 'sync_bn' and not cfg.use_gpu:
+ cfg['norm_type'] = 'bn'
+
+ if FLAGS.slim_config:
+ cfg = build_slim_model(cfg, FLAGS.slim_config, mode='eval')
+ check_config(cfg)
+ check_gpu(cfg.use_gpu)
+ check_npu(cfg.use_npu)
+ check_version()
+
+ run(FLAGS, cfg)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/export_model.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/export_model.py
new file mode 100644
index 000000000..deac2ea12
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/export_model.py
@@ -0,0 +1,115 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+
+# add python path of PaddleDetection to sys.path
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
+sys.path.insert(0, parent_path)
+
+# ignore warning log
+import warnings
+warnings.filterwarnings('ignore')
+
+import paddle
+
+from ppdet.core.workspace import load_config, merge_config
+from ppdet.utils.check import check_gpu, check_version, check_config
+from ppdet.utils.cli import ArgsParser
+from ppdet.engine import Trainer
+from ppdet.slim import build_slim_model
+
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('export_model')
+
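+# Typical usage (illustrative): export an inference model to output_inference/,
+# optionally writing Paddle Serving server/client configs alongside it:
+#   python export_model.py -c picodet_lcnet_1_5x_416_coco.yml --export_serving_model True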
+
+def parse_args():
+ parser = ArgsParser()
+ parser.add_argument(
+ "--output_dir",
+ type=str,
+ default="output_inference",
+ help="Directory for storing the output model files.")
+ parser.add_argument(
+ "--export_serving_model",
+ type=bool,
+ default=False,
+ help="Whether to export serving model or not.")
+ parser.add_argument(
+ "--slim_config",
+ default=None,
+ type=str,
+ help="Configuration file of slim method.")
+ args = parser.parse_args()
+ return args
+
+
+def run(FLAGS, cfg):
+ # build detector
+ trainer = Trainer(cfg, mode='test')
+
+ # load weights
+ if cfg.architecture in ['DeepSORT']:
+ if cfg.det_weights != 'None':
+ trainer.load_weights_sde(cfg.det_weights, cfg.reid_weights)
+ else:
+ trainer.load_weights_sde(None, cfg.reid_weights)
+ else:
+ trainer.load_weights(cfg.weights)
+
+ # export model
+ trainer.export(FLAGS.output_dir)
+
+ if FLAGS.export_serving_model:
+ from paddle_serving_client.io import inference_model_to_serving
+ model_name = os.path.splitext(os.path.split(cfg.filename)[-1])[0]
+
+ inference_model_to_serving(
+ dirname="{}/{}".format(FLAGS.output_dir, model_name),
+ serving_server="{}/{}/serving_server".format(FLAGS.output_dir,
+ model_name),
+ serving_client="{}/{}/serving_client".format(FLAGS.output_dir,
+ model_name),
+ model_filename="model.pdmodel",
+ params_filename="model.pdiparams")
+
+
+def main():
+ paddle.set_device("cpu")
+ FLAGS = parse_args()
+ cfg = load_config(FLAGS.config)
+ # TODO: to be refined in the future
+ if 'norm_type' in cfg and cfg['norm_type'] == 'sync_bn':
+ FLAGS.opt['norm_type'] = 'bn'
+ merge_config(FLAGS.opt)
+
+ if FLAGS.slim_config:
+ cfg = build_slim_model(cfg, FLAGS.slim_config, mode='test')
+
+ # FIXME: Temporarily solve the priority problem of FLAGS.opt
+ merge_config(FLAGS.opt)
+ check_config(cfg)
+ check_gpu(cfg.use_gpu)
+ check_version()
+
+ run(FLAGS, cfg)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/optimizer_300e.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/optimizer_300e.yml
new file mode 100644
index 000000000..5a89bbbce
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/optimizer_300e.yml
@@ -0,0 +1,18 @@
+epoch: 300
+
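+# 300-epoch cosine decay on the base LR, preceded by a 300-step linear warmup
+# starting at 0.1 * base_lr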
+LearningRate:
+ base_lr: 0.4
+ schedulers:
+ - !CosineDecay
+ max_epochs: 300
+ - !LinearWarmup
+ start_factor: 0.1
+ steps: 300
+
+OptimizerBuilder:
+ optimizer:
+ momentum: 0.9
+ type: Momentum
+ regularizer:
+ factor: 0.00004
+ type: L2
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdopt b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdopt
new file mode 100644
index 000000000..0869488a1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdopt differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdparams b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdparams
new file mode 100644
index 000000000..dddcbaa91
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/best_model.pdparams differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdopt b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdopt
new file mode 100644
index 000000000..0869488a1
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdopt differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdparams b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdparams
new file mode 100644
index 000000000..dddcbaa91
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output/picodet_lcnet_1_5x_416_coco/model_final.pdparams differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/infer_cfg.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/infer_cfg.yml
new file mode 100644
index 000000000..e29f9298f
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/infer_cfg.yml
@@ -0,0 +1,118 @@
+mode: fluid
+draw_threshold: 0.5
+metric: COCO
+use_dynamic_shape: false
+arch: PicoDet
+min_subgraph_size: 3
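+# Preprocess pipeline: resize to 640x640, ImageNet mean/std normalization,
+# HWC->CHW permute, then pad to a multiple of stride 32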
+Preprocess:
+- interp: 2
+ keep_ratio: false
+ target_size:
+ - 640
+ - 640
+ type: Resize
+- is_scale: true
+ mean:
+ - 0.485
+ - 0.456
+ - 0.406
+ std:
+ - 0.229
+ - 0.224
+ - 0.225
+ type: NormalizeImage
+- type: Permute
+- stride: 32
+ type: PadStride
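+# the default 80-category COCO label list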
+label_list:
+- person
+- bicycle
+- car
+- motorcycle
+- airplane
+- bus
+- train
+- truck
+- boat
+- traffic light
+- fire hydrant
+- stop sign
+- parking meter
+- bench
+- bird
+- cat
+- dog
+- horse
+- sheep
+- cow
+- elephant
+- bear
+- zebra
+- giraffe
+- backpack
+- umbrella
+- handbag
+- tie
+- suitcase
+- frisbee
+- skis
+- snowboard
+- sports ball
+- kite
+- baseball bat
+- baseball glove
+- skateboard
+- surfboard
+- tennis racket
+- bottle
+- wine glass
+- cup
+- fork
+- knife
+- spoon
+- bowl
+- banana
+- apple
+- sandwich
+- orange
+- broccoli
+- carrot
+- hot dog
+- pizza
+- donut
+- cake
+- chair
+- couch
+- potted plant
+- bed
+- dining table
+- toilet
+- tv
+- laptop
+- mouse
+- remote
+- keyboard
+- cell phone
+- microwave
+- oven
+- toaster
+- sink
+- refrigerator
+- book
+- clock
+- vase
+- scissors
+- teddy bear
+- hair drier
+- toothbrush
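+# MultiClassNMS keeps at most 100 boxes per image with score above 0.3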
+NMS:
+ keep_top_k: 100
+ name: MultiClassNMS
+ nms_threshold: 0.5
+ nms_top_k: 1000
+ score_threshold: 0.3
+fpn_stride:
+- 8
+- 16
+- 32
+- 64
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams
new file mode 100644
index 000000000..6dcfada5f
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams.info b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams.info
new file mode 100644
index 000000000..c505a92f6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdiparams.info differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdmodel b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdmodel
new file mode 100644
index 000000000..dfcfe5115
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/model.pdmodel differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/serving_server/model.pdmodel b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/output_inference/picodet_lcnet_1_5x_416_coco/serving_server/model.pdmodel
new file mode 100644
index 000000000..e69de29bb
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_640_reader.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_640_reader.yml
new file mode 100644
index 000000000..a931f2a76
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_640_reader.yml
@@ -0,0 +1,41 @@
+worker_num: 6
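+# TrainReader: multi-scale training (576-704) with random crop, flip and color distortion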
+TrainReader:
+ sample_transforms:
+ - Decode: {}
+ - RandomCrop: {}
+ - RandomFlip: {prob: 0.5}
+ - RandomDistort: {}
+ batch_transforms:
+ - BatchRandomResize: {target_size: [576, 608, 640, 672, 704], random_size: True, random_interp: True, keep_ratio: False}
+ - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
+ - Permute: {}
+ batch_size: 56
+ shuffle: true
+ drop_last: true
+ collate_batch: false
+
+
+EvalReader:
+ sample_transforms:
+ - Decode: {}
+ - Resize: {interp: 2, target_size: [640, 640], keep_ratio: False}
+ - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
+ - Permute: {}
+ batch_transforms:
+ - PadBatch: {pad_to_stride: 32}
+ batch_size: 8
+ shuffle: false
+
+
+TestReader:
+ inputs_def:
+ image_shape: [1, 3, 640, 640]
+ sample_transforms:
+ - Decode: {}
+ - Resize: {interp: 2, target_size: [640, 640], keep_ratio: False}
+ - NormalizeImage: {is_scale: true, mean: [0.485,0.456,0.406], std: [0.229, 0.224,0.225]}
+ - Permute: {}
+ batch_transforms:
+ - PadBatch: {pad_to_stride: 32}
+ batch_size: 1
+ shuffle: false
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_esnet.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_esnet.yml
new file mode 100644
index 000000000..aa099fca1
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_esnet.yml
@@ -0,0 +1,55 @@
+architecture: PicoDet
+pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/ESNet_x1_0_pretrained.pdparams
+
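+# Base PicoDet definition: ESNet backbone + CSP-PAN neck + PicoHead
+# (SimOTA label assignment; Varifocal/DFL/GIoU losses)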
+PicoDet:
+ backbone: ESNet
+ neck: CSPPAN
+ head: PicoHead
+
+ESNet:
+ scale: 1.0
+ feature_maps: [4, 11, 14]
+ act: hard_swish
+ channel_ratio: [0.875, 0.5, 1.0, 0.625, 0.5, 0.75, 0.625, 0.625, 0.5, 0.625, 1.0, 0.625, 0.75]
+
+CSPPAN:
+ out_channels: 128
+ use_depthwise: True
+ num_csp_blocks: 1
+ num_features: 4
+
+PicoHead:
+ conv_feat:
+ name: PicoFeat
+ feat_in: 128
+ feat_out: 128
+ num_convs: 4
+ num_fpn_stride: 4
+ norm_type: bn
+ share_cls_reg: True
+ fpn_stride: [8, 16, 32, 64]
+ feat_in_chan: 128
+ prior_prob: 0.01
+ reg_max: 7
+ cell_offset: 0.5
+ loss_class:
+ name: VarifocalLoss
+ use_sigmoid: True
+ iou_weighted: True
+ loss_weight: 1.0
+ loss_dfl:
+ name: DistributionFocalLoss
+ loss_weight: 0.25
+ loss_bbox:
+ name: GIoULoss
+ loss_weight: 2.0
+ assigner:
+ name: SimOTAAssigner
+ candidate_topk: 10
+ iou_weight: 6
+ nms:
+ name: MultiClassNMS
+ nms_top_k: 1000
+ keep_top_k: 100
+ score_threshold: 0.025
+ nms_threshold: 0.6
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_lcnet_1_5x_416_coco.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_lcnet_1_5x_416_coco.yml
new file mode 100644
index 000000000..302137468
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/picodet_lcnet_1_5x_416_coco.yml
@@ -0,0 +1,36 @@
+_BASE_: [
+ './coco_detection_main.yml',
+ './runtime.yml',
+ './picodet_esnet.yml',
+ './optimizer_300e.yml',
+ './picodet_640_reader.yml',
+]
+
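+# Swaps the ESNet backbone from the base config for LCNet x1.5 and fine-tunes
+# for 10 epochs from the LCNet pretrained weights.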
+pretrain_weights: https://paddledet.bj.bcebos.com/models/pretrained/LCNet_x1_5_pretrained.pdparams
+weights: output/picodet_lcnet_1_5x_416_coco/best_model
+find_unused_parameters: True
+use_ema: true
+cycle_epoch: 40
+snapshot_epoch: 10
+epoch: 10
+
+PicoDet:
+ backbone: LCNet
+ neck: CSPPAN
+ head: PicoHead
+
+LCNet:
+ scale: 1.5
+ feature_maps: [3, 4, 5]
+
+TrainReader:
+ batch_size: 20
+
+LearningRate:
+ base_lr: 0.1
+ schedulers:
+ - !CosineDecay
+ max_epochs: 300
+ - !LinearWarmup
+ start_factor: 0.1
+ steps: 300
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/requirements.txt b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/requirements.txt
new file mode 100644
index 000000000..8b184e905
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/requirements.txt
@@ -0,0 +1,17 @@
+tqdm
+typeguard ; python_version >= '3.4'
+visualdl>=2.1.0 ; python_version <= '3.7'
+opencv-python
+PyYAML
+shapely
+scipy
+terminaltables
+Cython
+pycocotools
+#xtcocotools==1.6 #only for crowdpose
+setuptools>=42.0.0
+lap
+sklearn
+motmetrics
+openpyxl
+cython_bbox
\ No newline at end of file
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/runtime.yml b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/runtime.yml
new file mode 100644
index 000000000..c502ddabe
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/runtime.yml
@@ -0,0 +1,5 @@
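+# global runtime settings: GPU training, log every 20 iterations, checkpoint every epoch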
+use_gpu: true
+log_iter: 20
+save_dir: output
+snapshot_epoch: 1
+print_flops: false
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/train.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/train.py
new file mode 100644
index 000000000..878aa60fa
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/train/train.py
@@ -0,0 +1,171 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+
+# add python path of PaddleDetection to sys.path
+parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 2)))
+sys.path.insert(0, parent_path)
+
+# ignore warning log
+import warnings
+warnings.filterwarnings('ignore')
+
+import paddle
+
+from ppdet.core.workspace import load_config, merge_config
+from ppdet.engine import Trainer, init_parallel_env, set_random_seed, init_fleet_env
+from ppdet.slim import build_slim_model
+
+import ppdet.utils.cli as cli
+import ppdet.utils.check as check
+from ppdet.utils.logger import setup_logger
+logger = setup_logger('train')
+
+
+def parse_args():
+ parser = cli.ArgsParser()
+ parser.add_argument(
+ "--eval",
+ action='store_true',
+ default=False,
+ help="Whether to perform evaluation in train")
+ parser.add_argument(
+ "-r", "--resume", default=None, help="weights path for resume")
+ parser.add_argument(
+ "--slim_config",
+ default=None,
+ type=str,
+ help="Configuration file of slim method.")
+ parser.add_argument(
+ "--enable_ce",
+ type=bool,
+ default=False,
+ help="If set True, enable continuous evaluation job."
+ "This flag is only used for internal test.")
+ parser.add_argument(
+ "--fp16",
+ action='store_true',
+ default=False,
+ help="Enable mixed precision training.")
+ parser.add_argument(
+ "--fleet", action='store_true', default=False, help="Use fleet or not")
+ parser.add_argument(
+ "--use_vdl",
+ type=bool,
+ default=False,
+ help="whether to record the data to VisualDL.")
+ parser.add_argument(
+ '--vdl_log_dir',
+ type=str,
+ default="vdl_log_dir/scalar",
+ help='VisualDL logging directory for scalar.')
+ parser.add_argument(
+ '--save_prediction_only',
+ action='store_true',
+ default=False,
+ help='Whether to save the evaluation results only')
+ parser.add_argument(
+ '--profiler_options',
+ type=str,
+ default=None,
+ help="The option of profiler, which should be in "
+ "format \"key1=value1;key2=value2;key3=value3\"."
+ "please see ppdet/utils/profiler.py for detail.")
+ parser.add_argument(
+ '--save_proposals',
+ action='store_true',
+ default=False,
+ help='Whether to save the train proposals')
+ parser.add_argument(
+ '--proposals_path',
+ type=str,
+ default="sniper/proposals.json",
+ help='Train proposals directory')
+
+ args = parser.parse_args()
+ return args
+
+
+def run(FLAGS, cfg):
+ # init fleet environment
+ if cfg.fleet:
+ init_fleet_env(cfg.get('find_unused_parameters', False))
+ else:
+ # init parallel environment if nranks > 1
+ init_parallel_env()
+
+ if FLAGS.enable_ce:
+ set_random_seed(0)
+
+ # build trainer
+ trainer = Trainer(cfg, mode='train')
+
+ # load weights
+ if FLAGS.resume is not None:
+ trainer.resume_weights(FLAGS.resume)
+ elif 'pretrain_weights' in cfg and cfg.pretrain_weights:
+ trainer.load_weights(cfg.pretrain_weights)
+
+ # training
+ trainer.train(FLAGS.eval)
+
+
+def main():
+ FLAGS = parse_args()
+ cfg = load_config(FLAGS.config)
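+    # propagate CLI flags into the config so the Trainer and callbacks can read them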
+ cfg['fp16'] = FLAGS.fp16
+ cfg['fleet'] = FLAGS.fleet
+ cfg['use_vdl'] = FLAGS.use_vdl
+ cfg['vdl_log_dir'] = FLAGS.vdl_log_dir
+ cfg['save_prediction_only'] = FLAGS.save_prediction_only
+ cfg['profiler_options'] = FLAGS.profiler_options
+ cfg['save_proposals'] = FLAGS.save_proposals
+ cfg['proposals_path'] = FLAGS.proposals_path
+ merge_config(FLAGS.opt)
+
+ # disable npu in config by default
+ if 'use_npu' not in cfg:
+ cfg.use_npu = False
+
+ if cfg.use_gpu:
+ place = paddle.set_device('gpu')
+ elif cfg.use_npu:
+ place = paddle.set_device('npu')
+ else:
+ place = paddle.set_device('cpu')
+
+ if 'norm_type' in cfg and cfg['norm_type'] == 'sync_bn' and not cfg.use_gpu:
+ cfg['norm_type'] = 'bn'
+
+ if FLAGS.slim_config:
+ cfg = build_slim_model(cfg, FLAGS.slim_config)
+
+ # FIXME: Temporarily solve the priority problem of FLAGS.opt
+ merge_config(FLAGS.opt)
+ check.check_config(cfg)
+ check.check_gpu(cfg.use_gpu)
+ check.check_npu(cfg.use_npu)
+ check.check_version()
+
+ run(FLAGS, cfg)
+
+
+if __name__ == "__main__":
+ main()
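+
+# Illustrative launch command (config path and flags are assumptions):
+#   python train.py -c picodet_lcnet_1_5x_416_coco.yml --eval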
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/config.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/config.py
new file mode 100644
index 000000000..eb7914806
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/config.py
@@ -0,0 +1,197 @@
+# copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import copy
+import argparse
+import yaml
+
+from utils import logger
+
+__all__ = ['get_config']
+
+
+class AttrDict(dict):
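+    """Dict subclass that also exposes its keys as attributes (e.g. config.use_gpu)."""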
+ def __getattr__(self, key):
+ return self[key]
+
+ def __setattr__(self, key, value):
+ if key in self.__dict__:
+ self.__dict__[key] = value
+ else:
+ self[key] = value
+
+ def __deepcopy__(self, content):
+ return copy.deepcopy(dict(self))
+
+
+def create_attr_dict(yaml_config):
+ from ast import literal_eval
+ for key, value in yaml_config.items():
+ if type(value) is dict:
+ yaml_config[key] = value = AttrDict(value)
+ if isinstance(value, str):
+ try:
+ value = literal_eval(value)
+ except BaseException:
+ pass
+ if isinstance(value, AttrDict):
+ create_attr_dict(yaml_config[key])
+ else:
+ yaml_config[key] = value
+
+
+def parse_config(cfg_file):
+ """Load a config file into AttrDict"""
+ with open(cfg_file, 'r') as fopen:
+ yaml_config = AttrDict(yaml.load(fopen, Loader=yaml.SafeLoader))
+ create_attr_dict(yaml_config)
+ return yaml_config
+
+
+def print_dict(d, delimiter=0):
+ """
+    Recursively visualize a dict,
+    indenting according to the nesting of its keys.
+ """
+ placeholder = "-" * 60
+ for k, v in sorted(d.items()):
+ if isinstance(v, dict):
+ logger.info("{}{} : ".format(delimiter * " ",
+ logger.coloring(k, "HEADER")))
+ print_dict(v, delimiter + 4)
+ elif isinstance(v, list) and len(v) >= 1 and isinstance(v[0], dict):
+ logger.info("{}{} : ".format(delimiter * " ",
+ logger.coloring(str(k), "HEADER")))
+ for value in v:
+ print_dict(value, delimiter + 4)
+ else:
+ logger.info("{}{} : {}".format(delimiter * " ",
+ logger.coloring(k, "HEADER"),
+ logger.coloring(v, "OKGREEN")))
+ if k.isupper():
+ logger.info(placeholder)
+
+
+def print_config(config):
+ """
+ visualize configs
+ Arguments:
+ config: configs
+ """
+ logger.advertise()
+ print_dict(config)
+
+
+def override(dl, ks, v):
+ """
+    Recursively replace a value in a nested dict or list
+ Args:
+ dl(dict or list): dict or list to be replaced
+ ks(list): list of keys
+ v(str): value to be replaced
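+    Example (illustrative):
+        override(config, ['TrainReader', 'batch_size'], '32')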
+ """
+
+ def str2num(v):
+ try:
+ return eval(v)
+ except Exception:
+ return v
+
+    assert isinstance(dl, (list, dict)), (
+        "{} should be a list or a dict".format(dl))
+    assert len(ks) > 0, ('length of keys should be larger than 0')
+ if isinstance(dl, list):
+ k = str2num(ks[0])
+ if len(ks) == 1:
+ assert k < len(dl), ('index({}) out of range({})'.format(k, dl))
+ dl[k] = str2num(v)
+ else:
+ override(dl[k], ks[1:], v)
+ else:
+ if len(ks) == 1:
+ # assert ks[0] in dl, ('{} is not exist in {}'.format(ks[0], dl))
+ if not ks[0] in dl:
+            logger.warning('A new field ({}) detected!'.format(ks[0]))
+ dl[ks[0]] = str2num(v)
+ else:
+ override(dl[ks[0]], ks[1:], v)
+
+
+def override_config(config, options=None):
+ """
+ Recursively override the config
+ Args:
+ config(dict): dict to be replaced
+ options(list): list of pairs(key0.key1.idx.key2=value)
+ such as: [
+ 'topk=2',
+ 'VALID.transforms.1.ResizeImage.resize_short=300'
+ ]
+ Returns:
+ config(dict): replaced config
+ """
+ if options is not None:
+ for opt in options:
+ assert isinstance(opt, str), (
+ "option({}) should be a str".format(opt))
+ assert "=" in opt, (
+ "option({}) should contain a ="
+ "to distinguish between key and value".format(opt))
+ pair = opt.split('=')
+            assert len(pair) == 2, ("there can be only one = in the option")
+ key, value = pair
+ keys = key.split('.')
+ override(config, keys, value)
+ return config
+
+
+def get_config(fname, overrides=None, show=True):
+ """
+ Read config from file
+ """
+ assert os.path.exists(fname), (
+        'config file ({}) does not exist'.format(fname))
+ config = parse_config(fname)
+ override_config(config, overrides)
+ if show:
+ print_config(config)
+ # check_config(config)
+ return config
+
+
+def parser():
+ parser = argparse.ArgumentParser("generic-image-rec train script")
+ parser.add_argument(
+ '-c',
+ '--config',
+ type=str,
+ default='configs/config.yaml',
+ help='config file path')
+ parser.add_argument(
+ '-o',
+ '--override',
+ action='append',
+ default=[],
+ help='config options to be overridden')
+ parser.add_argument(
+ '-v',
+ '--verbose',
+ action='store_true',
+        help='whether to print the config info')
+ return parser
+
+
+def parse_args():
+ args = parser().parse_args()
+ return args
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/get_image_list.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/get_image_list.py
new file mode 100644
index 000000000..6f10935ad
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/get_image_list.py
@@ -0,0 +1,49 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+
+def get_image_list(img_file):
+ imgs_lists = []
+ if img_file is None or not os.path.exists(img_file):
+ raise Exception("not found any img file in {}".format(img_file))
+
+ img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
+ if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
+ imgs_lists.append(img_file)
+ elif os.path.isdir(img_file):
+ for single_file in os.listdir(img_file):
+ if single_file.split('.')[-1] in img_end:
+ imgs_lists.append(os.path.join(img_file, single_file))
+ if len(imgs_lists) == 0:
+ raise Exception("not found any img file in {}".format(img_file))
+ imgs_lists = sorted(imgs_lists)
+ return imgs_lists
+
+
+def get_image_list_from_label_file(image_path, label_file_path):
+ imgs_lists = []
+ gt_labels = []
+ with open(label_file_path, "r") as fin:
+ lines = fin.readlines()
+ for line in lines:
+ image_name, label = line.strip("\n").split()
+            label = int(label)
+            imgs_lists.append(os.path.join(image_path, image_name))
+            gt_labels.append(label)
+ return imgs_lists, gt_labels
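+
+# Illustrative label-file format for get_image_list_from_label_file
+# (one "<image_name> <integer_label>" pair per line):
+#   img_0001.jpg 0
+#   img_0002.jpg 1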
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/logger.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/logger.py
new file mode 100644
index 000000000..ece852624
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/logger.py
@@ -0,0 +1,120 @@
+# copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import os
+import datetime
+
+logging.basicConfig(
+ level=logging.INFO,
+ format="%(asctime)s %(levelname)s: %(message)s",
+ datefmt="%Y-%m-%d %H:%M:%S")
+
+
+def time_zone(sec, fmt):
+ real_time = datetime.datetime.now()
+ return real_time.timetuple()
+
+
+logging.Formatter.converter = time_zone
+_logger = logging.getLogger(__name__)
+
+Color = {
+ 'RED': '\033[31m',
+ 'HEADER': '\033[35m', # deep purple
+ 'PURPLE': '\033[95m', # purple
+ 'OKBLUE': '\033[94m',
+ 'OKGREEN': '\033[92m',
+ 'WARNING': '\033[93m',
+ 'FAIL': '\033[91m',
+ 'ENDC': '\033[0m'
+}
+
+
+def coloring(message, color="OKGREEN"):
+ assert color in Color.keys()
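+    # coloring is opt-in: set the PADDLECLAS_COLORING environment variable
+    # (to any non-empty value) to enable ANSI colors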
+ if os.environ.get('PADDLECLAS_COLORING', False):
+ return Color[color] + str(message) + Color["ENDC"]
+ else:
+ return message
+
+
+def anti_fleet(log):
+ """
+    Logs are printed multiple times when calling the Fleet API.
+    Only display a single log and ignore the others.
+ """
+
+ def wrapper(fmt, *args):
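+        # only the rank-0 trainer (PADDLE_TRAINER_ID == 0) emits the log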
+ if int(os.getenv("PADDLE_TRAINER_ID", 0)) == 0:
+ log(fmt, *args)
+
+ return wrapper
+
+
+@anti_fleet
+def info(fmt, *args):
+ _logger.info(fmt, *args)
+
+
+@anti_fleet
+def warning(fmt, *args):
+ _logger.warning(coloring(fmt, "RED"), *args)
+
+
+@anti_fleet
+def error(fmt, *args):
+ _logger.error(coloring(fmt, "FAIL"), *args)
+
+
+def scaler(name, value, step, writer):
+ """
+    Draw a scalar curve with VisualDL.
+    Usage: install visualdl (pip3 install visualdl==2.0.0b4),
+    then run:
+    visualdl --logdir ./scalar --host 0.0.0.0 --port 8830
+    to preview the loss curve in real time.
+ """
+ writer.add_scalar(tag=name, step=step, value=value)
+
+
+def advertise():
+ """
+ Show the advertising message like the following:
+
+ ===========================================================
+ == PaddleClas is powered by PaddlePaddle ! ==
+ ===========================================================
+ == ==
+ == For more info please go to the following website. ==
+ == ==
+ == https://github.com/PaddlePaddle/PaddleClas ==
+ ===========================================================
+
+ """
+ copyright = "PaddleClas is powered by PaddlePaddle !"
+ ad = "For more info please go to the following website."
+ website = "https://github.com/PaddlePaddle/PaddleClas"
+ AD_LEN = 6 + len(max([copyright, ad, website], key=len))
+
+ info(
+ coloring("\n{0}\n{1}\n{2}\n{3}\n{4}\n{5}\n{6}\n{7}\n".format(
+ "=" * (AD_LEN + 4),
+ "=={}==".format(copyright.center(AD_LEN)),
+ "=" * (AD_LEN + 4),
+ "=={}==".format(' ' * AD_LEN),
+ "=={}==".format(ad.center(AD_LEN)),
+ "=={}==".format(' ' * AD_LEN),
+ "=={}==".format(website.center(AD_LEN)),
+ "=" * (AD_LEN + 4), ), "RED"))
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/predictor.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/predictor.py
new file mode 100644
index 000000000..11f153071
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/utils/predictor.py
@@ -0,0 +1,70 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import os
+import argparse
+import base64
+import shutil
+import cv2
+import numpy as np
+
+from paddle.inference import Config
+from paddle.inference import create_predictor
+
+
+class Predictor(object):
+ def __init__(self, args, inference_model_dir=None):
+        # half-precision (FP16) prediction only works when using TensorRT
+ if args.use_fp16 is True:
+ assert args.use_tensorrt is True
+ self.args = args
+ self.paddle_predictor, self.config = self.create_paddle_predictor(
+ args, inference_model_dir)
+
+ def predict(self, image):
+ raise NotImplementedError
+
+ def create_paddle_predictor(self, args, inference_model_dir=None):
+ if inference_model_dir is None:
+ inference_model_dir = args.inference_model_dir
+ params_file = os.path.join(inference_model_dir, "inference.pdiparams")
+ model_file = os.path.join(inference_model_dir, "inference.pdmodel")
+ config = Config(model_file, params_file)
+
+ if args.use_gpu:
+ config.enable_use_gpu(args.gpu_mem, 0)
+ else:
+ config.disable_gpu()
+ if args.enable_mkldnn:
+ # cache 10 different shapes for mkldnn to avoid memory leak
+ config.set_mkldnn_cache_capacity(10)
+ config.enable_mkldnn()
+ config.set_cpu_math_library_num_threads(args.cpu_num_threads)
+
+ if args.enable_profile:
+ config.enable_profile()
+ config.disable_glog_info()
+ config.switch_ir_optim(args.ir_optim) # default true
+ if args.use_tensorrt:
+ config.enable_tensorrt_engine(
+ precision_mode=Config.Precision.Half
+ if args.use_fp16 else Config.Precision.Float32,
+ max_batch_size=args.batch_size,
+ min_subgraph_size=30)
+
+ config.enable_memory_optim()
+ # use zero copy
+ config.switch_use_feed_fetch_ops(False)
+ predictor = create_predictor(config)
+
+ return predictor, config
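+
+# Minimal usage sketch (assumes `args` exposes the flags read above, e.g.
+# use_gpu, gpu_mem, enable_mkldnn, cpu_num_threads, use_tensorrt, use_fp16):
+#   predictor = Predictor(args, inference_model_dir="./inference")
+#   input_names = predictor.paddle_predictor.get_input_names()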
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/x2coco.py b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/x2coco.py
new file mode 100644
index 000000000..2d0e64e64
--- /dev/null
+++ b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/code/x2coco.py
@@ -0,0 +1,450 @@
+#!/usr/bin/env python
+# coding: utf-8
+# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import glob
+import json
+import os
+import os.path as osp
+import shutil
+import xml.etree.ElementTree as ET
+from tqdm import tqdm
+
+import numpy as np
+import PIL.Image
+import PIL.ImageDraw
+
+label_to_num = {}
+categories_list = []
+labels_list = []
+
+
+class MyEncoder(json.JSONEncoder):
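+    """JSON encoder that converts numpy scalars and arrays to native Python types."""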
+ def default(self, obj):
+ if isinstance(obj, np.integer):
+ return int(obj)
+ elif isinstance(obj, np.floating):
+ return float(obj)
+ elif isinstance(obj, np.ndarray):
+ return obj.tolist()
+ else:
+ return super(MyEncoder, self).default(obj)
+
+
+def images_labelme(data, num):
+ image = {}
+ image['height'] = data['imageHeight']
+ image['width'] = data['imageWidth']
+ image['id'] = num + 1
+ if '\\' in data['imagePath']:
+ image['file_name'] = data['imagePath'].split('\\')[-1]
+ else:
+ image['file_name'] = data['imagePath'].split('/')[-1]
+ image['file_name'] = image['file_name'].rstrip()
+ return image
+
+
+def images_cityscape(data, num, img_file):
+ image = {}
+ image['height'] = data['imgHeight']
+ image['width'] = data['imgWidth']
+ image['id'] = num + 1
+ image['file_name'] = img_file
+ image['file_name'] = image['file_name'].rstrip()
+ return image
+
+
+def categories(label, labels_list):
+ category = {}
+ category['supercategory'] = 'component'
+ category['id'] = len(labels_list) + 1
+ category['name'] = label
+ return category
+
+
+def annotations_rectangle(points, label, image_num, object_num, label_to_num):
+ annotation = {}
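+    # reorder the four corners so the segmentation polygon traces the
+    # rectangle outline (tl -> bl -> br -> tr) instead of crossing it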
+ seg_points = np.asarray(points).copy()
+ seg_points[1, :] = np.asarray(points)[2, :]
+ seg_points[2, :] = np.asarray(points)[1, :]
+ annotation['segmentation'] = [list(seg_points.flatten())]
+ annotation['iscrowd'] = 0
+ annotation['image_id'] = image_num + 1
+ annotation['bbox'] = list(
+ map(float, [
+ points[0][0], points[0][1], points[1][0] - points[0][0], points[1][
+ 1] - points[0][1]
+ ]))
+ annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]
+ annotation['category_id'] = label_to_num[label]
+ annotation['id'] = object_num + 1
+ return annotation
+
+
+def annotations_polygon(height, width, points, label, image_num, object_num,
+ label_to_num):
+ annotation = {}
+ annotation['segmentation'] = [list(np.asarray(points).flatten())]
+ annotation['iscrowd'] = 0
+ annotation['image_id'] = image_num + 1
+ annotation['bbox'] = list(map(float, get_bbox(height, width, points)))
+ annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]
+ annotation['category_id'] = label_to_num[label]
+ annotation['id'] = object_num + 1
+ return annotation
+
+
+def get_bbox(height, width, points):
+ polygons = points
+ mask = np.zeros([height, width], dtype=np.uint8)
+ mask = PIL.Image.fromarray(mask)
+ xy = list(map(tuple, polygons))
+ PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
+ mask = np.array(mask, dtype=bool)
+ index = np.argwhere(mask == 1)
+    rows = index[:, 0]
+    cols = index[:, 1]
+    left_top_r = np.min(rows)
+    left_top_c = np.min(cols)
+    right_bottom_r = np.max(rows)
+    right_bottom_c = np.max(cols)
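+    # COCO bbox format: [x_min, y_min, width, height]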
+ return [
+ left_top_c, left_top_r, right_bottom_c - left_top_c,
+ right_bottom_r - left_top_r
+ ]
+
+
+def deal_json(ds_type, img_path, json_path):
+ data_coco = {}
+ images_list = []
+ annotations_list = []
+ image_num = -1
+ object_num = -1
+ for img_file in os.listdir(img_path):
+ img_label = os.path.splitext(img_file)[0]
+ if img_file.split('.')[
+ -1] not in ['bmp', 'jpg', 'jpeg', 'png', 'JPEG', 'JPG', 'PNG']:
+ continue
+ label_file = osp.join(json_path, img_label + '.json')
+ print('Generating dataset from:', label_file)
+ image_num = image_num + 1
+ with open(label_file) as f:
+ data = json.load(f)
+ if ds_type == 'labelme':
+ images_list.append(images_labelme(data, image_num))
+ elif ds_type == 'cityscape':
+ images_list.append(images_cityscape(data, image_num, img_file))
+ if ds_type == 'labelme':
+ for shapes in data['shapes']:
+ object_num = object_num + 1
+ label = shapes['label']
+ if label not in labels_list:
+ categories_list.append(categories(label, labels_list))
+ labels_list.append(label)
+ label_to_num[label] = len(labels_list)
+ p_type = shapes['shape_type']
+ if p_type == 'polygon':
+ points = shapes['points']
+ annotations_list.append(
+ annotations_polygon(data['imageHeight'], data[
+ 'imageWidth'], points, label, image_num,
+ object_num, label_to_num))
+
+ if p_type == 'rectangle':
+ (x1, y1), (x2, y2) = shapes['points']
+ x1, x2 = sorted([x1, x2])
+ y1, y2 = sorted([y1, y2])
+ points = [[x1, y1], [x2, y2], [x1, y2], [x2, y1]]
+ annotations_list.append(
+ annotations_rectangle(points, label, image_num,
+ object_num, label_to_num))
+ elif ds_type == 'cityscape':
+ for shapes in data['objects']:
+ object_num = object_num + 1
+ label = shapes['label']
+ if label not in labels_list:
+ categories_list.append(categories(label, labels_list))
+ labels_list.append(label)
+ label_to_num[label] = len(labels_list)
+ points = shapes['polygon']
+ annotations_list.append(
+ annotations_polygon(data['imgHeight'], data[
+ 'imgWidth'], points, label, image_num, object_num,
+ label_to_num))
+ data_coco['images'] = images_list
+ data_coco['categories'] = categories_list
+ data_coco['annotations'] = annotations_list
+ return data_coco
+
+
+def voc_get_label_anno(ann_dir_path, ann_ids_path, labels_path):
+ with open(labels_path, 'r') as f:
+ labels_str = f.read().split()
+ labels_ids = list(range(1, len(labels_str) + 1))
+
+ with open(ann_ids_path, 'r') as f:
+ ann_ids = [lin.strip().split(' ')[-1] for lin in f.readlines()]
+
+ ann_paths = []
+ for aid in ann_ids:
+ if aid.endswith('xml'):
+ ann_path = os.path.join(ann_dir_path, aid)
+ else:
+ ann_path = os.path.join(ann_dir_path, aid + '.xml')
+ ann_paths.append(ann_path)
+
+ return dict(zip(labels_str, labels_ids)), ann_paths
+
+
+def voc_get_image_info(annotation_root, im_id):
+ filename = annotation_root.findtext('filename')
+ assert filename is not None
+
+ size = annotation_root.find('size')
+ width = float(size.findtext('width'))
+ height = float(size.findtext('height'))
+
+ image_info = {
+ 'file_name': filename,
+ 'height': height,
+ 'width': width,
+ 'id': im_id
+ }
+ return image_info
+
+
+def voc_get_coco_annotation(obj, label2id):
+ label = obj.findtext('name')
+ assert label in label2id, "label is not in label2id."
+ category_id = label2id[label]
+ bndbox = obj.find('bndbox')
+ xmin = float(bndbox.findtext('xmin'))
+ ymin = float(bndbox.findtext('ymin'))
+ xmax = float(bndbox.findtext('xmax'))
+ ymax = float(bndbox.findtext('ymax'))
+ assert xmax > xmin and ymax > ymin, "Box size error."
+ o_width = xmax - xmin
+ o_height = ymax - ymin
+ anno = {
+ 'area': o_width * o_height,
+ 'iscrowd': 0,
+ 'bbox': [xmin, ymin, o_width, o_height],
+ 'category_id': category_id,
+ 'ignore': 0,
+ }
+ return anno
+
+
+def voc_xmls_to_cocojson(annotation_paths, label2id, output_dir, output_file):
+ output_json_dict = {
+ "images": [],
+ "type": "instances",
+ "annotations": [],
+ "categories": []
+ }
+ bnd_id = 1 # bounding box start id
+ im_id = 0
+ print('Start converting !')
+ for a_path in tqdm(annotation_paths):
+ # Read annotation xml
+ ann_tree = ET.parse(a_path)
+ ann_root = ann_tree.getroot()
+
+ img_info = voc_get_image_info(ann_root, im_id)
+ output_json_dict['images'].append(img_info)
+ for obj in ann_root.findall('object'):
+ ann = voc_get_coco_annotation(obj=obj, label2id=label2id)
+ ann.update({'image_id': im_id, 'id': bnd_id})
+ output_json_dict['annotations'].append(ann)
+ bnd_id = bnd_id + 1
+ im_id += 1
+
+ for label, label_id in label2id.items():
+ category_info = {'supercategory': 'none', 'id': label_id, 'name': label}
+ output_json_dict['categories'].append(category_info)
+ output_file = os.path.join(output_dir, output_file)
+ with open(output_file, 'w') as f:
+ output_json = json.dumps(output_json_dict)
+ f.write(output_json)
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+ parser.add_argument(
+ '--dataset_type',
+ help='the type of dataset, can be `voc`, `labelme` or `cityscape`')
+ parser.add_argument('--json_input_dir', help='input annotated directory')
+ parser.add_argument('--image_input_dir', help='image directory')
+ parser.add_argument(
+ '--output_dir', help='output dataset directory', default='./')
+ parser.add_argument(
+ '--train_proportion',
+ help='the proportion of train dataset',
+ type=float,
+ default=1.0)
+ parser.add_argument(
+ '--val_proportion',
+ help='the proportion of validation dataset',
+ type=float,
+ default=0.0)
+ parser.add_argument(
+ '--test_proportion',
+ help='the proportion of test dataset',
+ type=float,
+ default=0.0)
+ parser.add_argument(
+ '--voc_anno_dir',
+ help='In Voc format dataset, path to annotation files directory.',
+ type=str,
+ default=None)
+ parser.add_argument(
+ '--voc_anno_list',
+ help='In Voc format dataset, path to annotation files ids list.',
+ type=str,
+ default=None)
+ parser.add_argument(
+ '--voc_label_list',
+ help='In Voc format dataset, path to label list. The content of each line is a category.',
+ type=str,
+ default=None)
+ parser.add_argument(
+ '--voc_out_name',
+ type=str,
+ default='voc.json',
+ help='In Voc format dataset, path to output json file')
+ args = parser.parse_args()
+ try:
+ assert args.dataset_type in ['voc', 'labelme', 'cityscape']
+ except AssertionError as e:
+ print(
+            'Only the voc, labelme and cityscape dataset formats are supported!')
+ os._exit(0)
+
+ if args.dataset_type == 'voc':
+ assert args.voc_anno_dir and args.voc_anno_list and args.voc_label_list
+ label2id, ann_paths = voc_get_label_anno(
+ args.voc_anno_dir, args.voc_anno_list, args.voc_label_list)
+ voc_xmls_to_cocojson(
+ annotation_paths=ann_paths,
+ label2id=label2id,
+ output_dir=args.output_dir,
+ output_file=args.voc_out_name)
+ else:
+ try:
+ assert os.path.exists(args.json_input_dir)
+ except AssertionError as e:
+ print('The json folder does not exist!')
+ os._exit(0)
+ try:
+ assert os.path.exists(args.image_input_dir)
+ except AssertionError as e:
+ print('The image folder does not exist!')
+ os._exit(0)
+ try:
+ assert abs(args.train_proportion + args.val_proportion \
+ + args.test_proportion - 1.0) < 1e-5
+ except AssertionError as e:
+ print(
+                'The sum of the proportions of the training, validation and test datasets must be 1!'
+ )
+ os._exit(0)
+
+ # Allocate the dataset.
+ total_num = len(glob.glob(osp.join(args.json_input_dir, '*.json')))
+ if args.train_proportion != 0:
+ train_num = int(total_num * args.train_proportion)
+ out_dir = args.output_dir + '/train'
+ if not os.path.exists(out_dir):
+ os.makedirs(out_dir)
+ else:
+ train_num = 0
+ if args.val_proportion == 0.0:
+ val_num = 0
+ test_num = total_num - train_num
+ out_dir = args.output_dir + '/test'
+ if args.test_proportion != 0.0 and not os.path.exists(out_dir):
+ os.makedirs(out_dir)
+ else:
+ val_num = int(total_num * args.val_proportion)
+ test_num = total_num - train_num - val_num
+ val_out_dir = args.output_dir + '/val'
+ if not os.path.exists(val_out_dir):
+ os.makedirs(val_out_dir)
+ test_out_dir = args.output_dir + '/test'
+ if args.test_proportion != 0.0 and not os.path.exists(test_out_dir):
+ os.makedirs(test_out_dir)
+ count = 1
+ for img_name in os.listdir(args.image_input_dir):
+ if count <= train_num:
+ if osp.exists(args.output_dir + '/train/'):
+ shutil.copyfile(
+ osp.join(args.image_input_dir, img_name),
+ osp.join(args.output_dir + '/train/', img_name))
+ else:
+ if count <= train_num + val_num:
+ if osp.exists(args.output_dir + '/val/'):
+ shutil.copyfile(
+ osp.join(args.image_input_dir, img_name),
+ osp.join(args.output_dir + '/val/', img_name))
+ else:
+ if osp.exists(args.output_dir + '/test/'):
+ shutil.copyfile(
+ osp.join(args.image_input_dir, img_name),
+ osp.join(args.output_dir + '/test/', img_name))
+ count = count + 1
+
+ # Deal with the json files.
+ if not os.path.exists(args.output_dir + '/annotations'):
+ os.makedirs(args.output_dir + '/annotations')
+ if args.train_proportion != 0:
+ train_data_coco = deal_json(args.dataset_type,
+ args.output_dir + '/train',
+ args.json_input_dir)
+ train_json_path = osp.join(args.output_dir + '/annotations',
+ 'instance_train.json')
+ json.dump(
+ train_data_coco,
+ open(train_json_path, 'w'),
+ indent=4,
+ cls=MyEncoder)
+ if args.val_proportion != 0:
+ val_data_coco = deal_json(args.dataset_type,
+ args.output_dir + '/val',
+ args.json_input_dir)
+ val_json_path = osp.join(args.output_dir + '/annotations',
+ 'instance_val.json')
+ json.dump(
+ val_data_coco,
+ open(val_json_path, 'w'),
+ indent=4,
+ cls=MyEncoder)
+ if args.test_proportion != 0:
+ test_data_coco = deal_json(args.dataset_type,
+ args.output_dir + '/test',
+ args.json_input_dir)
+ test_json_path = osp.join(args.output_dir + '/annotations',
+ 'instance_test.json')
+ json.dump(
+ test_data_coco,
+ open(test_json_path, 'w'),
+ indent=4,
+ cls=MyEncoder)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/index_infer_result.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/index_infer_result.png
new file mode 100644
index 000000000..a419074b7
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/index_infer_result.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/infer_result.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/infer_result.png
new file mode 100644
index 000000000..bf299fd11
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/infer_result.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/label_img.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/label_img.png
new file mode 100644
index 000000000..46aef3f11
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/label_img.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/result_5.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/result_5.png
new file mode 100644
index 000000000..511156661
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/result_5.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/xml_content.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/xml_content.png
new file mode 100644
index 000000000..f4db8d49c
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/docs/images/xml_content.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_1.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_1.png
new file mode 100644
index 000000000..ebce35597
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_1.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_2.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_2.png
new file mode 100644
index 000000000..466689918
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_2.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_3.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_3.png
new file mode 100644
index 000000000..6ada31633
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_3.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_4.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_4.png
new file mode 100644
index 000000000..ad7e495c6
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_4.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_5.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_5.png
new file mode 100644
index 000000000..8585dd300
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/pic_5.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_1.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_1.png
new file mode 100644
index 000000000..fb1b850f4
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_1.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_2.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_2.png
new file mode 100644
index 000000000..87c36dd61
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_2.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_3.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_3.png
new file mode 100644
index 000000000..050aebd3a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_3.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_4.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_4.png
new file mode 100644
index 000000000..a0a34bf6a
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_4.png differ
diff --git a/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_5.png b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_5.png
new file mode 100644
index 000000000..511156661
Binary files /dev/null and b/Paddle_Industry_Practice_Sample_Library/Electromobile_In_Elevator_Detection/result/result_5.png differ