Create SegmentationResult doc and evaluation functions (#119)

* Update README.md

* Update README.md

* Update README.md

* Create README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add inference-time calculation to evaluation and fix some bugs

* Update classification __init__

* Move to ppseg

* Add segmentation doc

* Add PaddleClas infer.py

* Update PaddleClas infer.py

* Delete .infer.py.swp

* Add PaddleClas infer.cc

* Update README.md

* Update README.md

* Update README.md

* Update infer.py

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add PaddleSeg doc and infer.cc demo

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Create segmentation_result.md

* Update README.md

* Update segmentation_result.md

* Update segmentation_result.md

* Update segmentation_result.md

* Update classification and detection evaluation function

Co-authored-by: Jason <jiangjiajun@baidu.com>
Author: huangjianhui
Date: 2022-08-18 13:05:28 +08:00
Committed by: GitHub
parent 04c1ffde2c
commit c0e5ce248d
6 changed files with 43 additions and 5 deletions

View File

@@ -24,7 +24,6 @@ void BindPPSeg(pybind11::module& m) {
          pybind11::array& data) {
         auto mat = PyArrayToCvMat(data);
         vision::SegmentationResult* res = new vision::SegmentationResult();
-        // self.Predict(&mat, &res);
         self.Predict(&mat, res);
         return res;
       })

View File

@@ -4,7 +4,8 @@ FastDeploy defines different result structs for each vision task type (`csrcs
 | Struct | Documentation | Description | Related Models |
 | :----- | :--- | :---- | :------- |
-| ClassificationResult | [C++/Python docs](./classification_result.md) | Image classification result | ResNet50, MobileNetV3, etc. |
+| ClassifyResult | [C++/Python docs](./classification_result.md) | Image classification result | ResNet50, MobileNetV3, etc. |
+| SegmentationResult | [C++/Python docs](./segmentation_result.md) | Image segmentation result | PP-HumanSeg, PP-LiteSeg, etc. |
 | DetectionResult | [C++/Python docs](./detection_result.md) | Object detection result | PPYOLOE, YOLOv7 series, etc. |
 | FaceDetectionResult | [C++/Python docs](./face_detection_result.md) | Face detection result | SCRFD, RetinaFace series, etc. |
 | FaceRecognitionResult | [C++/Python docs](./face_recognition_result.md) | Face recognition result | ArcFace, CosFace series, etc. |

View File

@@ -0,0 +1,32 @@
# SegmentationResult (Image Segmentation Result)
SegmentationResult is defined in `csrcs/fastdeploy/vision/common/result.h` and describes the segmentation class predicted for every pixel in the image, together with the probability of that class.
## C++ Definition
`fastdeploy::vision::SegmentationResult`
```
struct SegmentationResult {
  std::vector<uint8_t> label_map;
  std::vector<float> score_map;
  std::vector<int64_t> shape;
  bool contain_score_map = false;
  void Clear();
  std::string Str();
};
```
- **label_map**: Member variable. The segmentation class predicted for every pixel of a single image; `label_map.size()` equals the number of pixels in the image.
- **score_map**: Member variable. The predicted class scores, aligned one-to-one with `label_map`. They are raw scores when the model is exported with `without_argmax`, or softmax-normalized probabilities when the model is exported with both `without_argmax` and `with_softmax`, or when it is exported with `without_argmax` and the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `with_softmax` is set to `true` at initialization.
- **shape**: Member variable. The shape of the output, i.e. H\*W.
- **Clear()**: Member function that clears the results stored in the struct.
- **Str()**: Member function that returns the information in the struct as a string, for debugging.
## Python Definition
`fastdeploy.vision.SegmentationResult`
- **label_map**(list of int): Member variable. The segmentation class predicted for every pixel of a single image.
- **score_map**(list of float): Member variable. The predicted class scores, aligned one-to-one with `label_map`. They are raw scores when the model is exported with `without_argmax`, or softmax-normalized probabilities when the model is exported with both `without_argmax` and `with_softmax`, or when it is exported with `without_argmax` and the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/python/) `with_softmax=True` is set at initialization.
- **shape**(list of int): Member variable. The shape of the output, i.e. H\*W.
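
For illustration, a minimal Python sketch of how these fields can be read; the model loader class, its constructor arguments, and the `predict` method name are assumptions and may differ across FastDeploy versions:

```
import cv2
import fastdeploy as fd

# Hypothetical loader for illustration only; the actual class and arguments
# depend on the FastDeploy version in use.
model = fd.vision.segmentation.PaddleSegModel(
    "model.pdmodel", "model.pdiparams", "deploy.yaml")

im = cv2.imread("test.jpg")
result = model.predict(im)  # a fastdeploy.vision.SegmentationResult

h, w = result.shape                    # output size, H*W as documented above
print("label of first pixel:", result.label_map[0])
if len(result.score_map) > 0:          # filled only when scores are exported
    print("score of first pixel:", result.score_map[0])
```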

View File

@@ -16,7 +16,7 @@ tar xvf fastdeploy-linux-x64-gpu-0.2.0.tgz
 cd fastdeploy-linux-x64-gpu-0.2.0/examples/vision/classification/paddleclas/cpp
 mkdir build
 cd build
-cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../fastdeploy-linux-x64-gpu-0.2.0
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.2.0
 make -j
 # Download the ResNet50_vd model files and test image

View File

@@ -71,7 +71,12 @@ def eval_classify(model, image_file_path, label_file_path, topk=5):
     topk_acc_score = topk_accuracy(np.array(result_list), np.array(label_list))
     if topk == 1:
         scores.update({'topk1': topk_acc_score})
+        scores.update({
+            'topk1_average_inference_time(s)': average_inference_time
+        })
     elif topk == 5:
         scores.update({'topk5': topk_acc_score})
-        scores.update({'average_inference_time': average_inference_time})
+        scores.update({
+            'topk5_average_inference_time(s)': average_inference_time
+        })
     return scores

View File

@@ -28,6 +28,7 @@ def eval_detection(model,
     from .utils import COCOMetric
     import cv2
     from tqdm import trange
+    import time
     if conf_threshold is not None or nms_iou_threshold is not None:
         assert conf_threshold is not None and nms_iou_threshold is not None, "The conf_threshold and nms_iou_threshold should be setted at the same time"
@@ -80,6 +81,6 @@ def eval_detection(model,
     eval_metric.accumulate()
     eval_details = eval_metric.details
     scores.update(eval_metric.get())
-    scores.update({'average_inference_time': average_inference_time})
+    scores.update({'average_inference_time(s)': average_inference_time})
     eval_metric.reset()
     return scores
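
For context on the `average_inference_time(s)` entries above, a minimal sketch of how a per-image prediction time can be averaged with `time.time()`; the helper name and arguments below are illustrative rather than taken from the actual evaluation code:

```
import time
import cv2

# Illustrative helper: averages the wall-clock time (in seconds) that
# model.predict spends on each image in image_paths.
def average_predict_time(model, image_paths):
    total_time = 0.0
    for image_path in image_paths:
        im = cv2.imread(image_path)
        start = time.time()
        model.predict(im)
        total_time += time.time() - start
    return round(total_time / len(image_paths), 4)

# e.g. scores.update({'average_inference_time(s)': average_predict_time(model, paths)})
```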