diff --git a/docs/api_docs/python/vision_results_cn.md b/docs/api_docs/python/vision_results_cn.md
new file mode 100644
index 000000000..586464a06
--- /dev/null
+++ b/docs/api_docs/python/vision_results_cn.md
@@ -0,0 +1,68 @@
+# Description of Vision Model Prediction Results
+
+## ClassifyResult
+The ClassifyResult code is defined in `fastdeploy/vision/common/result.h` and describes the classification result and confidence of an image.
+
+API: `fastdeploy.vision.ClassifyResult`, which returns:
+- **label_ids**(list of int): Member variable holding the classification results of a single image. Its length is determined by the `topk` argument passed to the classification model, e.g. the top-5 classification results can be returned.
+- **scores**(list of float): Member variable holding the confidence of a single image on the corresponding classification results. Its length is likewise determined by the `topk` argument, e.g. the top-5 confidences can be returned.
+
+
+## SegmentationResult
+The SegmentationResult code is defined in `fastdeploy/vision/common/result.h` and describes the segmentation category predicted for each pixel of an image, together with the probability of that category.
+
+API: `fastdeploy.vision.SegmentationResult`, which returns:
+- **label_map**(list of int): Member variable holding the segmentation category of every pixel in a single image.
+- **score_map**(list of float): Member variable holding, in one-to-one correspondence with `label_map`, the probability of the predicted segmentation category (when the model is exported with `--output_op argmax`), or the softmax-normalized probability (when the model is exported with `--output_op softmax`, or exported with `--output_op none` while the model class member attribute `apply_softmax=true` is set at initialization).
+- **shape**(list of int): Member variable indicating the size of the output image, as `H*W`.
+
+## DetectionResult
+The DetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the bounding boxes, target categories, and target confidences detected in an image.
+
+API: `fastdeploy.vision.DetectionResult`, which returns:
+- **boxes**(list of list(float)): Member variable holding the coordinates of all bounding boxes detected in a single image. boxes is a list whose every element is a list of length 4 representing one box, with the 4 float values denoting xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right coordinates.
+- **scores**(list of float): Member variable holding the confidences of all targets detected in a single image.
+- **label_ids**(list of int): Member variable holding the categories of all targets detected in a single image.
+- **masks**: Member variable holding all instance masks detected in a single image; its number of elements and their shapes match those of boxes.
+- **contain_masks**: Member variable indicating whether the detection result contains instance masks; for instance segmentation models this is generally `True`.
+
+`fastdeploy.vision.Mask`, which returns:
+- **data**: Member variable holding one detected mask.
+- **shape**: Member variable indicating the size of the mask, e.g. `H*W`.
+
+
+## FaceDetectionResult
+The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the bounding boxes, face landmarks, and target confidences produced by face detection, as well as the number of landmarks per face.
+
+API: `fastdeploy.vision.FaceDetectionResult`, which returns:
+- **boxes**(list of list(float)): Member variable holding the coordinates of all bounding boxes detected in a single image. boxes is a list whose every element is a list of length 4 representing one box, with the 4 float values denoting xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right coordinates.
+- **scores**(list of float): Member variable holding the confidences of all targets detected in a single image.
+- **landmarks**(list of list(float)): Member variable holding the key points of all faces detected in a single image.
+- **landmarks_per_face**(int): Member variable indicating the number of key points in each face box.
+
+
+## FaceRecognitionResult
+The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h` and describes the image feature embedding produced by a face recognition model.
+
+API: `fastdeploy.vision.FaceRecognitionResult`, which returns:
+- **embedding**(list of float): Member variable holding the final feature embedding extracted by the face recognition model; it can be used to compute the feature similarity between faces.
+
+
+## MattingResult
+The MattingResult code is defined in `fastdeploy/vision/common/result.h` and describes the alpha transparency values predicted by the model, the predicted foreground, etc.
+
+API: `fastdeploy.vision.MattingResult`, which returns:
+- **alpha**(list of float): A one-dimensional vector of predicted alpha transparency values in the range `[0.,1.]`, with length `H*W`, where H and W are the height and width of the input image.
+- **foreground**(list of float): A one-dimensional vector of the predicted foreground, with values in `[0.,255.]` and length `H*W*C`, where H and W are the height and width of the input image and C is generally 3. `foreground` is not always present; it is only valid when the model itself predicts the foreground.
+- **contain_foreground**(bool): Indicates whether the prediction contains the foreground.
+- **shape**(list of int): Indicates the shape of the output. When `contain_foreground` is `false`, the shape only contains `(H,W)`; when `contain_foreground` is `true`, the shape contains `(H,W,C)`, where C is generally 3.
+
+## OCRResult
+The OCRResult code is defined in `fastdeploy/vision/common/result.h` and describes the text boxes detected and recognized in an image, the orientation classification of the text boxes, and the text content inside them.
+
+API: `fastdeploy.vision.OCRResult`, which returns:
+- **boxes**(list of list(int)): Member variable holding the coordinates of all text boxes detected in a single image. `boxes.size()` is the number of boxes detected in the image; each box is represented by 8 int values denoting its 4 corner points, in the order lower-left, lower-right, upper-right, upper-left.
+- **text**(list of string): Member variable holding the text content recognized in the text boxes; its number of elements matches `boxes.size()`.
+- **rec_scores**(list of float): Member variable holding the confidence of the text recognized in each box; its number of elements matches `boxes.size()`.
+- **cls_scores**(list of float): Member variable holding the confidence of each text box's classification result; its number of elements matches `boxes.size()`.
+- **cls_labels**(list of int): Member variable holding the orientation category of each text box; its number of elements matches `boxes.size()`.
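The parallel `boxes`/`scores`/`label_ids` layout described above can be sketched in plain Python. This is a minimal, hypothetical stand-in for a real `fastdeploy.vision.DetectionResult`: the helper name `filter_detections` and the sample values are assumptions for illustration only, not part of the FastDeploy API.

```python
# Minimal sketch of consuming the DetectionResult layout documented above.
# The sample lists below are hypothetical stand-ins for the member
# variables of a real fastdeploy.vision.DetectionResult.

def filter_detections(boxes, scores, label_ids, score_threshold=0.5):
    """Keep only detections whose confidence reaches score_threshold."""
    kept = [
        (box, score, label)
        for box, score, label in zip(boxes, scores, label_ids)
        if score >= score_threshold
    ]
    if not kept:
        return [], [], []
    # Unzip back into the three parallel lists used by DetectionResult.
    kept_boxes, kept_scores, kept_labels = map(list, zip(*kept))
    return kept_boxes, kept_scores, kept_labels

# Hypothetical result: each box is [xmin, ymin, xmax, ymax].
boxes = [[10.0, 20.0, 110.0, 220.0], [30.0, 40.0, 60.0, 90.0]]
scores = [0.92, 0.31]
label_ids = [0, 2]

kept_boxes, kept_scores, kept_labels = filter_detections(boxes, scores, label_ids)
print(kept_boxes)  # [[10.0, 20.0, 110.0, 220.0]]
```

The same zip-then-filter pattern applies to any of the result types whose member lists are index-aligned, e.g. OCRResult's `text` and `rec_scores`.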
diff --git a/docs/api_docs/python/vision_results_en.md b/docs/api_docs/python/vision_results_en.md
new file mode 100644
index 000000000..1e97b2e9d
--- /dev/null
+++ b/docs/api_docs/python/vision_results_en.md
@@ -0,0 +1,66 @@
+# Description of Vision Results
+
+## ClassifyResult
+The code of ClassifyResult is defined in `fastdeploy/vision/common/result.h` and is used to indicate the classification label result and confidence of the image.
+
+API: `fastdeploy.vision.ClassifyResult`, The ClassifyResult will return:
+- **label_ids**(list of int): Member variable representing the classification label results of a single image, the number of which is determined by the `topk` passed in when using the classification model, e.g. the label results of the top-5 categories can be returned.
+
+- **scores**(list of float): Member variable indicating the confidence of a single image on the corresponding classification results, the number of which is determined by the `topk` passed in when using the classification model, e.g. the confidences of the top-5 categories can be returned.
+
+## SegmentationResult
+The code of SegmentationResult is defined in `fastdeploy/vision/common/result.h` and is used to indicate the segmentation category predicted for each pixel in the image and the probability of the segmentation category.
+
+API: `fastdeploy.vision.SegmentationResult`, The SegmentationResult will return:
+- **label_map**(list of int): Member variable indicating the segmentation category for each pixel of a single image.
+- **score_map**(list of float): Member variable holding, in one-to-one correspondence with `label_map`, the predicted probability of the segmentation category (when the model is exported with `--output_op argmax`), or the softmax-normalized probability (when the model is exported with `--output_op softmax`, or with `--output_op none` while the model class member attribute `apply_softmax=true` is set at initialization).
+- **shape**(list of int): Member variable indicating the shape of the output image, as `H*W`.
+
+
+## DetectionResult
+The code of DetectionResult is defined in `fastdeploy/vision/common/result.h` and is used to indicate the target location (detection box), target class and target confidence detected in the image.
+
+API: `fastdeploy.vision.DetectionResult`, The DetectionResult will return:
+- **boxes**(list of list(float)): Member variable representing the coordinates of all target boxes detected in a single image. boxes is a list, each element of which is a list of length 4 representing a box, with 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top-left and bottom-right corners.
+- **scores**(list of float): Member variable indicating the confidence of all targets detected in a single image.
+- **label_ids**(list of int): Member variable indicating all target categories detected in a single image.
+- **masks**: Member variable representing all instance masks detected in a single image, with the same number of elements and shape sizes as boxes.
+- **contain_masks**: Member variable indicating whether the detection result contains instance masks; for instance segmentation models this is generally `True`.
+
+API: `fastdeploy.vision.Mask`, The Mask will return:
+- **data**: Member variable indicating a detected mask.
+- **shape**: Member variable representing the shape of the mask, e.g. `(H,W)`.
+
+## FaceDetectionResult
+The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the bounding boxes detected by face detection, the face landmarks, the target confidence and the number of landmarks per face.
+
+API: `fastdeploy.vision.FaceDetectionResult`, The FaceDetectionResult will return:
+- **boxes**(list of list(float)): Member variable representing the coordinates of all target boxes detected in a single image.
boxes is a list, each element of which is a list of length 4 representing a box, with 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top-left and bottom-right corners.
+- **scores**(list of float): Member variable indicating the confidence of all targets detected in a single image.
+- **landmarks**(list of list(float)): Member variable representing the key points of all faces detected in a single image.
+- **landmarks_per_face**(int): Member variable indicating the number of key points in each face box.
+
+## FaceRecognitionResult
+The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the embedding of image features produced by the face recognition model.
+
+API: `fastdeploy.vision.FaceRecognitionResult`, The FaceRecognitionResult will return:
+- **embedding**(list of float): Member variable indicating the final feature embedding extracted by the face recognition model; it can be used to calculate the feature similarity between faces.
+
+## MattingResult
+The MattingResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the alpha transparency values predicted by the model, the predicted foreground, etc.
+
+API: `fastdeploy.vision.MattingResult`, The MattingResult will return:
+- **alpha**(list of float): A one-dimensional vector of predicted alpha transparency values in the range `[0.,1.]`, with length `H*W`, where H and W are the height and width of the input image.
+- **foreground**(list of float): A one-dimensional vector of the predicted foreground, with values in `[0.,255.]` and length `H*W*C`, where H and W are the height and width of the input image and C is generally 3. `foreground` is not always present; this property is only valid when the model itself predicts the foreground.
+- **contain_foreground**(bool): Indicates whether the predicted result includes the foreground.
+- **shape**(list of int): Member variable indicating the shape of the output. When `contain_foreground` is `false`, the shape only contains `(H,W)`; when `contain_foreground` is `true`, the shape contains `(H,W,C)`, where C is generally 3.
+
+## OCRResult
+The OCRResult code is defined in `fastdeploy/vision/common/result.h` and is used to indicate the text boxes detected in the image, the orientation classification of the text boxes, and the text content recognized inside the text boxes.
+
+API: `fastdeploy.vision.OCRResult`, The OCRResult will return:
+- **boxes**(list of list(int)): Member variable indicating the coordinates of all text boxes detected in a single image. `boxes.size()` is the number of boxes detected in the image; each box is represented by 8 int values denoting its 4 corner points, in the order lower-left, lower-right, upper-right, upper-left.
+- **text**(list of string): Member variable indicating the text content recognized in the text boxes, with the same number of elements as `boxes.size()`.
+- **rec_scores**(list of float): Member variable indicating the confidence of the text recognized in each box, with the same number of elements as `boxes.size()`.
+- **cls_scores**(list of float): Member variable indicating the confidence of the classification result of each text box, with the same number of elements as `boxes.size()`.
+- **cls_labels**(list of int): Member variable indicating the orientation category of each text box, with the same number of elements as `boxes.size()`.
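The 8-int corner layout of OCRResult boxes can be illustrated with a small sketch. This is a hypothetical stand-in, not the FastDeploy API: the helper names `quad_to_rect` and `readable_text` and the sample values are assumptions, and the only documented fact it relies on is that each box stores 4 (x, y) corner points in the order lower-left, lower-right, upper-right, upper-left, index-aligned with `text` and `rec_scores`.

```python
# Minimal sketch of consuming the OCRResult layout documented above.
# The sample lists are hypothetical stand-ins for the member variables
# of a real fastdeploy.vision.OCRResult.

def quad_to_rect(quad):
    """Convert one 8-int box (4 corner points in the order lower-left,
    lower-right, upper-right, upper-left) to axis-aligned
    (xmin, ymin, xmax, ymax)."""
    xs = quad[0::2]  # x coordinates at even indices
    ys = quad[1::2]  # y coordinates at odd indices
    return min(xs), min(ys), max(xs), max(ys)

def readable_text(boxes, text, rec_scores, min_score=0.8):
    """Pair each recognized string with its rectangle, keeping only
    results whose recognition confidence reaches min_score."""
    return [
        (quad_to_rect(box), t)
        for box, t, s in zip(boxes, text, rec_scores)
        if s >= min_score
    ]

boxes = [[12, 98, 200, 98, 200, 60, 12, 60]]  # one idealized quad
text = ["FastDeploy"]
rec_scores = [0.95]
print(readable_text(boxes, text, rec_scores))  # [((12, 60, 200, 98), 'FastDeploy')]
```

Note that a real detected quad is generally rotated, so the axis-aligned rectangle is only an enclosing approximation of the original 4-point box.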