[Doc]Update keypointdetection result docs (#739)

This commit is contained in:
huangjianhui
2022-11-29 18:02:56 +08:00
committed by GitHub
parent 44627ff696
commit 2b4594a2a6
4 changed files with 11 additions and 13 deletions


@@ -16,16 +16,13 @@ struct KeyPointDetectionResult {
 };
 ```
-- **keypoints**: Member variable, the keypoint coordinates of the detected target behaviors. `keypoints.size()= N * J * 2`
+- **keypoints**: Member variable, the keypoint coordinates of the detected target behaviors. `keypoints.size()= N * J`
   - `N`: number of targets in the image
   - `J`: num_joints, the number of keypoints per target
-  - `3`: coordinate info [x, y]
 - **scores**: Member variable, the confidence of the detected keypoint coordinates. `scores.size()= N * J`
   - `N`: number of targets in the image
   - `J`: num_joints, the number of keypoints per target
 - **num_joints**: Member variable, the number of keypoints per target
 - **Clear()**: Member function, clears the results stored in the struct
 - **Str()**: Member function, outputs the struct's information as a string, for debugging
@@ -34,10 +31,9 @@ struct KeyPointDetectionResult {
 `fastdeploy.vision.KeyPointDetectionResult`
 - **keypoints**(list of list(float)): Member variable, the keypoint coordinates of the detected target behaviors.
-  `keypoints.size()= N * J * 2`
+  `keypoints.size()= N * J`
   `N`: number of targets in the image
   `J`: num_joints, the number of keypoints
-  `3`: coordinate info [x, y, conf]
 - **scores**(list of float): Member variable, the confidence of the detected keypoint coordinates.
   `scores.size()= N * J`
   `N`: number of targets in the image
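The flat `keypoints` layout described above (one `[x, y]` pair per entry, `N * J` entries in total) can be sketched in plain Python. The names and values below are illustrative only, not the FastDeploy API:

```python
# Sketch: interpreting a flat keypoints list of size N * J, where each
# element is an [x, y] coordinate pair -- illustrative data only, not
# the actual FastDeploy result object.
N, J = 2, 3  # 2 targets, 3 keypoints (num_joints) per target
keypoints = [[float(i), float(i) + 0.5] for i in range(N * J)]

# Group the flat list into one list of J keypoints per target.
per_target = [keypoints[t * J:(t + 1) * J] for t in range(N)]

print(len(per_target))     # number of targets -> 2
print(len(per_target[0]))  # keypoints per target -> 3
```

Keypoints for target `t` are the contiguous slice `keypoints[t * J:(t + 1) * J]`, which is why `N` and `J` alone determine the layout.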


@@ -46,10 +46,9 @@ API: `fastdeploy.vision.FaceDetectionResult`, which returns:
 KeyPointDetectionResult is defined in `fastdeploy/vision/common/result.h` and describes the coordinates and confidence of each keypoint of the target behaviors in the image.
 API: `fastdeploy.vision.KeyPointDetectionResult`, which returns:
-- **keypoints**(list of list(float)): Member variable, the keypoint coordinates of the detected target behaviors. `keypoints.size()= N * J * 2`
+- **keypoints**(list of list(float)): Member variable, the keypoint coordinates of the detected target behaviors. `keypoints.size()= N * J`
   - `N`: number of targets in the image
   - `J`: num_joints, the number of keypoints per target
-  - `3`: coordinate info [x, y]
 - **scores**(list of float): Member variable, the confidence of the detected keypoint coordinates. `scores.size()= N * J`
   - `N`: number of targets in the image
   - `J`: num_joints, the number of keypoints per target


@@ -49,10 +49,9 @@ API: `fastdeploy.vision.FaceDetectionResult`, The FaceDetectionResult will retur
 The KeyPointDetectionResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the coordinates and confidence of each keypoint of the target behavior in the image.
 API: `fastdeploy.vision.KeyPointDetectionResult`, The KeyPointDetectionResult will return:
-- **keypoints**(list of list(float)): Member variable, representing the keypoint coordinates of the identified target behavior. `keypoints.size()= N * J * 2`
+- **keypoints**(list of list(float)): Member variable, representing the keypoint coordinates of the identified target behavior. `keypoints.size()= N * J`
   - `N`: number of objects in the picture
   - `J`: num_joints, the number of keypoints for a target
-  - `3`: coordinate info [x, y]
 - **scores**(list of float): Member variable, representing the confidence of the keypoint coordinates of the recognized target behavior. `scores.size()= N * J`
   - `N`: number of objects in the picture
   - `J`: num_joints, the number of keypoints for a target
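Since `keypoints` and `scores` both have `N * J` entries, each keypoint pairs one-to-one with a confidence value. A plain-Python sketch of filtering by confidence, with illustrative data and threshold rather than the actual FastDeploy result object:

```python
# Sketch: pairing each keypoint with its confidence from scores and
# keeping only confident points. Illustrative data, not the actual
# FastDeploy result object.
keypoints = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0], [70.0, 80.0]]  # N * J = 4 entries
scores = [0.9, 0.2, 0.8, 0.1]  # one confidence per keypoint

threshold = 0.5  # assumed cutoff for illustration
confident = [kp for kp, s in zip(keypoints, scores) if s >= threshold]
print(confident)  # [[10.0, 20.0], [50.0, 60.0]]
```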


@@ -60,6 +60,7 @@ def test_detection_ppyoloe():
     assert diff_label_ids[scores > score_threshold].max(
     ) < 1e-04, "There's diff in label_ids."
+
 def test_detection_ppyoloe1():
     model_url = "https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz"
     input_url1 = "https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg"
@@ -77,13 +78,16 @@ def test_detection_ppyoloe1():
     postprocessor = fd.vision.detection.PaddleDetPostprocessor()
     rc.test_option.set_model_path(model_file, params_file)
-    runtime = fd.Runtime(rc.test_option);
+    runtime = fd.Runtime(rc.test_option)
     # compare diff
     im1 = cv2.imread("./resources/000000014439.jpg")
     for i in range(2):
         input_tensors = preprocessor.run([im1])
-        output_tensors = runtime.infer({"image": input_tensors[0], "scale_factor": input_tensors[1]})
+        output_tensors = runtime.infer({
+            "image": input_tensors[0],
+            "scale_factor": input_tensors[1]
+        })
         results = postprocessor.run(output_tensors)
         result = results[0]
         with open("resources/ppyoloe_baseline.pkl", "rb") as f: