+
## PaddleDetection Python Interface
-```
+```python
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE)
fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE)
fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE)
@@ -54,7 +54,7 @@ PaddleDetection model loading and initialization, where model_file and params_file are the exported
### predict Function
Each of the PaddleDetection models above, including PPYOLOE/PicoDet/PaddleYOLOX/YOLOv3/PPYOLO/FasterRCNN, provides the same member function below for image detection
-> ```
+> ```python
> PPYOLOE.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
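As a usage sketch of the predict call documented above (assuming FastDeploy and OpenCV are installed; the file names `model.pdmodel`/`model.pdiparams`/`infer_cfg.yml` are assumptions, not from this document), one might write the following. The pure-Python `keep_confident` helper only mirrors what `conf_threshold` does to the returned scores:

```python
def detect(model_dir, image_path, conf_threshold=0.25, nms_iou_threshold=0.5):
    """Hypothetical sketch of loading PPYOLOE and calling predict().

    Assumes fastdeploy and opencv-python are installed; the model file
    names below are assumptions and must match your actual export.
    """
    import cv2
    import fastdeploy.vision as vision
    model = vision.detection.PPYOLOE(
        model_dir + "/model.pdmodel",
        model_dir + "/model.pdiparams",
        model_dir + "/infer_cfg.yml")
    im = cv2.imread(image_path)
    return model.predict(im, conf_threshold, nms_iou_threshold)


def keep_confident(scores, threshold=0.25):
    """Pure-Python mirror of conf_threshold: indices of scores >= threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]
```

The returned `DetectionResult` carries parallel lists of boxes, scores, and label ids, so thresholding by score is a simple index filter as above.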
diff --git a/examples/vision/detection/scaledyolov4/README.md b/examples/vision/detection/scaledyolov4/README.md
index d54e04d4c..36ec1af0c 100644
--- a/examples/vision/detection/scaledyolov4/README.md
+++ b/examples/vision/detection/scaledyolov4/README.md
@@ -11,7 +11,7 @@
Visit the official [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) GitHub repository, follow the instructions to download and install it, download the `scaledyolov4.pt` model, and use `models/export.py` to obtain an `onnx` file. If your exported `onnx` model has problems, refer to the workaround in [ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401)
- ```
+ ```bash
# Download the ScaledYOLOv4 model file
# Download from Google Drive: https://drive.google.com/file/d/1aXZZE999sHMP1gev60XhNChtHPRMH3Fz/view?usp=sharing
diff --git a/examples/vision/detection/scaledyolov4/cpp/README.md b/examples/vision/detection/scaledyolov4/cpp/README.md
index e5740eb05..04325f822 100644
--- a/examples/vision/detection/scaledyolov4/cpp/README.md
+++ b/examples/vision/detection/scaledyolov4/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## ScaledYOLOv4 C++ Interface
### ScaledYOLOv4 Class
-```
+```c++
fastdeploy::vision::detection::ScaledYOLOv4(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ ScaledYOLOv4 model loading and initialization, where model_file is the exported ONNX model
#### Predict Function
-> ```
+> ```c++
> ScaledYOLOv4::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/scaledyolov4/python/README.md b/examples/vision/detection/scaledyolov4/python/README.md
index b2055bae3..df563044a 100644
--- a/examples/vision/detection/scaledyolov4/python/README.md
+++ b/examples/vision/detection/scaledyolov4/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of ScaledYOLOv4 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/scaledyolov4/python/
@@ -30,7 +30,7 @@ python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device
## ScaledYOLOv4 Python Interface
-```
+```python
fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ ScaledYOLOv4 model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> ScaledYOLOv4.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolor/README.md b/examples/vision/detection/yolor/README.md
index 81e57fde5..ffe29f39f 100644
--- a/examples/vision/detection/yolor/README.md
+++ b/examples/vision/detection/yolor/README.md
@@ -11,7 +11,7 @@
Visit the official [YOLOR](https://github.com/WongKinYiu/yolor) GitHub repository, follow the instructions to download and install it, download the `yolor.pt` model, and use `models/export.py` to obtain an `onnx` file. If your exported `onnx` model has accuracy or tensor-dimension problems, refer to the workaround in [yolor#32](https://github.com/WongKinYiu/yolor/issues/32)
- ```
+ ```bash
# Download the YOLOR model file
wget https://github.com/WongKinYiu/yolor/releases/download/weights/yolor-d6-paper-570.pt
diff --git a/examples/vision/detection/yolor/cpp/README.md b/examples/vision/detection/yolor/cpp/README.md
index 7ce7c2d85..2cf9a47fa 100644
--- a/examples/vision/detection/yolor/cpp/README.md
+++ b/examples/vision/detection/yolor/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOR C++ Interface
### YOLOR Class
-```
+```c++
fastdeploy::vision::detection::YOLOR(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOR model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> YOLOR::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolor/python/README.md b/examples/vision/detection/yolor/python/README.md
index 5b88ed787..7fb76f5f1 100644
--- a/examples/vision/detection/yolor/python/README.md
+++ b/examples/vision/detection/yolor/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOR on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolor/python/
@@ -30,7 +30,7 @@ python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg
## YOLOR Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOR(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOR model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOR.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov5/cpp/README.md b/examples/vision/detection/yolov5/cpp/README.md
index c430e0ebe..7a6e55335 100644
--- a/examples/vision/detection/yolov5/cpp/README.md
+++ b/examples/vision/detection/yolov5/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5 C++ Interface
### YOLOv5 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv5(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5 model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> YOLOv5::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov5/python/README.md b/examples/vision/detection/yolov5/python/README.md
index 48680e39a..9a8a44a11 100644
--- a/examples/vision/detection/yolov5/python/README.md
+++ b/examples/vision/detection/yolov5/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOv5 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu --use
## YOLOv5 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5 model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOv5.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
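The `nms_iou_threshold` parameter above is the overlap cutoff used during non-maximum suppression: two boxes whose intersection-over-union exceeds it are considered duplicates. As a reminder of the semantics, here is a minimal IoU computation in plain Python, independent of FastDeploy:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).

    This only illustrates the overlap measure that nms_iou_threshold is
    compared against during NMS; it is not FastDeploy code.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (which may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Raising `nms_iou_threshold` keeps more overlapping boxes; lowering it suppresses more aggressively.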
diff --git a/examples/vision/detection/yolov5lite/README.md b/examples/vision/detection/yolov5lite/README.md
index 9c6d0ece8..e8f72099b 100644
--- a/examples/vision/detection/yolov5lite/README.md
+++ b/examples/vision/detection/yolov5lite/README.md
@@ -12,7 +12,7 @@
- Obtain automatically
  Visit the official [YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite) GitHub repository, follow the instructions to download and install it, and download the `yolov5-lite-xx.onnx` model (Tip: the officially provided ONNX files currently have no decode module)
- ```
+ ```bash
# Download the yolov5-lite model file (.onnx)
# Download from https://drive.google.com/file/d/1bJByk9eoS6pv8Z3N4bcLRCV3i7uk24aU/view
# The official repo also supports downloading from Baidu Cloud
@@ -27,7 +27,7 @@
First, modify the code following the workaround in [YOLOv5-Lite#189](https://github.com/ppogg/YOLOv5-Lite/pull/189).
- ```
+ ```bash
# Download the yolov5-lite model file (.pt)
# Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports downloading from Baidu Cloud
@@ -39,7 +39,7 @@
```
- Export an ONNX file without the decode module (no code changes required)
- ```
+ ```bash
# Download the yolov5-lite model file
# Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports downloading from Baidu Cloud
diff --git a/examples/vision/detection/yolov5lite/cpp/README.md b/examples/vision/detection/yolov5lite/cpp/README.md
index d22550aeb..f548cb1bd 100644
--- a/examples/vision/detection/yolov5lite/cpp/README.md
+++ b/examples/vision/detection/yolov5lite/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5Lite C++ Interface
### YOLOv5Lite Class
-```
+```c++
fastdeploy::vision::detection::YOLOv5Lite(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model
#### Predict Function
-> ```
+> ```c++
> YOLOv5Lite::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov5lite/python/README.md b/examples/vision/detection/yolov5lite/python/README.md
index 096917924..c1df09e97 100644
--- a/examples/vision/detection/yolov5lite/python/README.md
+++ b/examples/vision/detection/yolov5lite/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOv5Lite on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5lite/python/
@@ -30,7 +30,7 @@ python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device
## YOLOv5Lite Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOv5Lite.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov6/cpp/README.md b/examples/vision/detection/yolov6/cpp/README.md
index 753805fe8..ceeb286b8 100644
--- a/examples/vision/detection/yolov6/cpp/README.md
+++ b/examples/vision/detection/yolov6/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv6 C++ Interface
### YOLOv6 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv6(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv6 model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> YOLOv6::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov6/python/README.md b/examples/vision/detection/yolov6/python/README.md
index 691815072..8d769fff2 100644
--- a/examples/vision/detection/yolov6/python/README.md
+++ b/examples/vision/detection/yolov6/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOv6 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov6/python/
@@ -31,7 +31,7 @@ python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use
## YOLOv6 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -46,7 +46,7 @@ YOLOv6 model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOv6.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov7/README.md b/examples/vision/detection/yolov7/README.md
index e3701f558..14ff1ae46 100644
--- a/examples/vision/detection/yolov7/README.md
+++ b/examples/vision/detection/yolov7/README.md
@@ -10,7 +10,7 @@
## Export the ONNX Model
-```
+```bash
# Download the YOLOv7 model file
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
diff --git a/examples/vision/detection/yolov7/cpp/README.md b/examples/vision/detection/yolov7/cpp/README.md
index 6c2500b67..a4f4232e7 100644
--- a/examples/vision/detection/yolov7/cpp/README.md
+++ b/examples/vision/detection/yolov7/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv7 C++ Interface
### YOLOv7 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv7(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv7 model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> YOLOv7::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov7/python/README.md b/examples/vision/detection/yolov7/python/README.md
index 19874bc75..29dbae78c 100644
--- a/examples/vision/detection/yolov7/python/README.md
+++ b/examples/vision/detection/yolov7/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOv7 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov7/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_
## YOLOv7 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv7 model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolox/cpp/README.md b/examples/vision/detection/yolox/cpp/README.md
index aebdca1f1..0f8a0e623 100644
--- a/examples/vision/detection/yolox/cpp/README.md
+++ b/examples/vision/detection/yolox/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOX C++ Interface
### YOLOX Class
-```
+```c++
fastdeploy::vision::detection::YOLOX(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOX model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> YOLOX::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolox/python/README.md b/examples/vision/detection/yolox/python/README.md
index 021feae86..f3471dee7 100644
--- a/examples/vision/detection/yolox/python/README.md
+++ b/examples/vision/detection/yolox/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOX on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolox/python/
@@ -30,7 +30,7 @@ python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use
## YOLOX Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOX model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOX.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/retinaface/cpp/README.md b/examples/vision/facedet/retinaface/cpp/README.md
index c2e0429fb..b501ae4dd 100644
--- a/examples/vision/facedet/retinaface/cpp/README.md
+++ b/examples/vision/facedet/retinaface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,13 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
## RetinaFace C++ Interface
### RetinaFace Class
-```
+```c++
fastdeploy::vision::facedet::RetinaFace(
const string& model_file,
const string& params_file = "",
@@ -57,7 +59,7 @@ RetinaFace model loading and initialization, where model_file is the exported ONNX model
#### Predict Function
-> ```
+> ```c++
> RetinaFace::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/retinaface/python/README.md b/examples/vision/facedet/retinaface/python/README.md
index e03c48ccd..211fd6efc 100644
--- a/examples/vision/facedet/retinaface/python/README.md
+++ b/examples/vision/facedet/retinaface/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of RetinaFace on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/retinaface/python/
@@ -30,7 +30,7 @@ python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image test_
## RetinaFace Python Interface
-```
+```python
fastdeploy.vision.facedet.RetinaFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ RetinaFace model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> RetinaFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
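The `FaceDetectionResult` returned by predict carries face boxes that are usually drawn or cropped afterwards, and raw boxes can extend past the image borders. A small pure-Python clipping helper (independent of FastDeploy, shown only to illustrate typical post-processing) is commonly used:

```python
def clip_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to the bounds of a width x height image.

    Plain Python; not FastDeploy code. Useful before cropping a detected
    face region out of the original image.
    """
    x1, y1, x2, y2 = box
    x1 = min(max(x1, 0), width)
    y1 = min(max(y1, 0), height)
    x2 = min(max(x2, 0), width)
    y2 = min(max(y2, 0), height)
    return (x1, y1, x2, y2)
```

For example, a box partly outside a 640x480 image is clamped to the visible region before cropping.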
diff --git a/examples/vision/facedet/scrfd/README.md b/examples/vision/facedet/scrfd/README.md
index 93ff8b998..a2aaffce8 100644
--- a/examples/vision/facedet/scrfd/README.md
+++ b/examples/vision/facedet/scrfd/README.md
@@ -8,7 +8,7 @@
## Export the ONNX Model
- ```
+ ```bash
# Download the SCRFD model file
# e.g. download from https://onedrive.live.com/?authkey=%21ABbFJx2JMhNjhNA&id=4A83B6B633B029CC%215542&cid=4A83B6B633B029CC
diff --git a/examples/vision/facedet/scrfd/cpp/README.md b/examples/vision/facedet/scrfd/cpp/README.md
index 88fb29426..3d129470b 100644
--- a/examples/vision/facedet/scrfd/cpp/README.md
+++ b/examples/vision/facedet/scrfd/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## SCRFD C++ Interface
### SCRFD Class
-```
+```c++
fastdeploy::vision::facedet::SCRFD(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ SCRFD model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> SCRFD::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/scrfd/python/README.md b/examples/vision/facedet/scrfd/python/README.md
index 0a5f9ded3..7e7fea420 100644
--- a/examples/vision/facedet/scrfd/python/README.md
+++ b/examples/vision/facedet/scrfd/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of SCRFD on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/scrfd/python/
@@ -30,7 +30,7 @@ python infer.py --model scrfd_500m_bnkps_shape640x640.onnx --image test_lite_fac
## SCRFD Python Interface
-```
+```python
fastdeploy.vision.facedet.SCRFD(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ SCRFD model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> SCRFD.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/ultraface/cpp/README.md b/examples/vision/facedet/ultraface/cpp/README.md
index 79cc92334..3189c3f0b 100644
--- a/examples/vision/facedet/ultraface/cpp/README.md
+++ b/examples/vision/facedet/ultraface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## UltraFace C++ Interface
### UltraFace Class
-```
+```c++
fastdeploy::vision::facedet::UltraFace(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ UltraFace model loading and initialization, where model_file is the exported ONNX model
#### Predict Function
-> ```
+> ```c++
> UltraFace::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/ultraface/python/README.md b/examples/vision/facedet/ultraface/python/README.md
index 60c63020f..efa37290b 100644
--- a/examples/vision/facedet/ultraface/python/README.md
+++ b/examples/vision/facedet/ultraface/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of UltraFace on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/ultraface/python/
@@ -30,7 +30,7 @@ python infer.py --model version-RFB-320.onnx --image test_lite_face_detector_3.j
## UltraFace Python Interface
-```
+```python
fastdeploy.vision.facedet.UltraFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ UltraFace model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> UltraFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/yolov5face/cpp/README.md b/examples/vision/facedet/yolov5face/cpp/README.md
index c2afde648..8c0242f98 100644
--- a/examples/vision/facedet/yolov5face/cpp/README.md
+++ b/examples/vision/facedet/yolov5face/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5Face C++ Interface
### YOLOv5Face Class
-```
+```c++
fastdeploy::vision::facedet::YOLOv5Face(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5Face model loading and initialization, where model_file is the exported ONNX model
#### Predict Function
-> ```
+> ```c++
> YOLOv5Face::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/yolov5face/python/README.md b/examples/vision/facedet/yolov5face/python/README.md
index a029cb839..ef0f571eb 100644
--- a/examples/vision/facedet/yolov5face/python/README.md
+++ b/examples/vision/facedet/yolov5face/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py`, which quickly completes the deployment of YOLOv5Face on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/yolov5face/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov5s-face.onnx --image test_lite_face_detector_3.jpg
## YOLOv5Face Python Interface
-```
+```python
fastdeploy.vision.facedet.YOLOv5Face(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5Face model loading and initialization, where model_file is the exported ONNX model
### predict Function
-> ```
+> ```python
> YOLOv5Face.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/faceid/insightface/README.md b/examples/vision/faceid/insightface/README.md
index 2318d24ab..d2b5b4b31 100644
--- a/examples/vision/faceid/insightface/README.md
+++ b/examples/vision/faceid/insightface/README.md
@@ -18,7 +18,7 @@
Visit the official [ArcFace](https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch) GitHub repository, follow the instructions to download and install it, download the pt model file, and use `torch2onnx.py` to obtain an `onnx` file.
* Download the ArcFace model file
- ```
+ ```bash
# Link: https://pan.baidu.com/share/init?surl=1CL-l4zWqsI1oDuEEYVhj-g code: e8pw
```
diff --git a/examples/vision/faceid/insightface/cpp/README.md b/examples/vision/faceid/insightface/cpp/README.md
index cc06f7bda..547c527a3 100644
--- a/examples/vision/faceid/insightface/cpp/README.md
+++ b/examples/vision/faceid/insightface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -40,11 +40,14 @@ wget https://bj.bcebos.com/paddlehub/test_samples/test_lite_focal_arcface_2.JPG
+The above commands only apply to Linux or macOS. For how to use the FastDeploy SDK on Windows, please refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## InsightFace C++ Interface
### ArcFace Class
-```
+```c++
fastdeploy::vision::faceid::ArcFace(
const string& model_file,
const string& params_file = "",
@@ -56,7 +59,7 @@ ArcFace model loading and initialization, where model_file is the exported ONNX model
### CosFace Class
-```
+```c++
fastdeploy::vision::faceid::CosFace(
const string& model_file,
const string& params_file = "",
@@ -68,7 +71,7 @@ CosFace model loading and initialization, where model_file is the exported ONNX model
### PartialFC Class
-```
+```c++
fastdeploy::vision::faceid::PartialFC(
const string& model_file,
const string& params_file = "",
@@ -80,7 +83,7 @@ PartialFC model loading and initialization, where model_file is the exported ONNX model
### VPL Class
-```
+```c++
fastdeploy::vision::faceid::VPL(
const string& model_file,
const string& params_file = "",
@@ -98,7 +101,7 @@ VPL model loading and initialization, where model_file is the exported ONNX model.
#### Predict Function
-> ```
+> ```c++
> ArcFace::Predict(cv::Mat* im, FaceRecognitionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
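Face recognition models such as ArcFace produce an embedding vector per face, and two faces are typically compared by the cosine similarity of their embeddings. A minimal pure-Python sketch follows; the 0.5 decision threshold in `same_person` is an illustrative assumption, not a value from this document:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two face embeddings given as plain lists."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def same_person(emb1, emb2, threshold=0.5):
    """Hypothetical decision rule; tune the threshold on your own data."""
    return cosine_similarity(emb1, emb2) >= threshold
```

Identical embeddings score 1.0 and orthogonal embeddings score 0.0, so higher scores mean more likely the same identity.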
diff --git a/examples/vision/faceid/insightface/python/README.md b/examples/vision/faceid/insightface/python/README.md
index 9bf352d57..7f61df114 100644
--- a/examples/vision/faceid/insightface/python/README.md
+++ b/examples/vision/faceid/insightface/python/README.md
@@ -8,7 +8,7 @@
Taking ArcFace as an example, `infer_arcface.py` is provided to quickly complete the deployment of ArcFace on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/faceid/insightface/python/
@@ -35,7 +35,7 @@ python infer_arcface.py --model ms1mv3_arcface_r100.onnx --face test_lite_focal_