diff --git a/README.md b/README.md
index ba6d1e981..8d414580a 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@
## Recent Updates
- 🔥 **2022.8.18: FastDeploy [release/v0.2.0](https://github.com/PaddlePaddle/FastDeploy/releases/tag/release%2F0.2.0) released**
- - **Server-side fully upgraded: one SDK covering all models**
+ - **Server-side fully upgraded: one SDK covering all models**
- Released an easy-to-use, high-performance inference engine SDK for x86 CPU and NVIDIA GPU, greatly improving inference speed
- Supports the ONNXRuntime, Paddle Inference, and TensorRT inference engines
- Supports top object detection models such as YOLOv7, YOLOv6, YOLOv5, and PP-YOLOE, with [demo examples](examples/vision/detection/)
@@ -51,7 +51,7 @@
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## NanoDetPlus C++ Interface
### NanoDetPlus Class
-```
+```c++
fastdeploy::vision::detection::NanoDetPlus(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ NanoDetPlus model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> NanoDetPlus::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/nanodet_plus/python/README.md b/examples/vision/detection/nanodet_plus/python/README.md
index 9f43c523e..007e08850 100644
--- a/examples/vision/detection/nanodet_plus/python/README.md
+++ b/examples/vision/detection/nanodet_plus/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying NanoDetPlus on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/nanodet_plus/python/
@@ -30,7 +30,7 @@ python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --devic
## NanoDetPlus Python Interface
-```
+```python
fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ NanoDetPlus model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> NanoDetPlus.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
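The `conf_threshold` and `nms_iou_threshold` parameters recur throughout these predict interfaces. As a rough illustration of what they control, here is a minimal sketch of confidence filtering plus greedy NMS; this is not FastDeploy's actual post-processing, and the helper names are invented for this example:

```python
# Illustrative sketch (not FastDeploy's implementation) of what the
# conf_threshold and nms_iou_threshold parameters of predict() control.
# Each box is (x1, y1, x2, y2) with a matching confidence score.

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def filter_detections(boxes, scores, conf_threshold=0.25, nms_iou_threshold=0.5):
    # 1) Drop boxes below conf_threshold; 2) greedy NMS on the survivors,
    # highest score first, suppressing boxes that overlap a kept box too much.
    order = sorted((i for i, s in enumerate(scores) if s >= conf_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < nms_iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.1]
print(filter_detections(boxes, scores))  # low-confidence and overlapping boxes removed
```

Raising `conf_threshold` trades recall for precision; raising `nms_iou_threshold` keeps more overlapping boxes.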
diff --git a/examples/vision/detection/paddledetection/cpp/README.md b/examples/vision/detection/paddledetection/cpp/README.md
index 875e8da32..16deb3fc6 100644
--- a/examples/vision/detection/paddledetection/cpp/README.md
+++ b/examples/vision/detection/paddledetection/cpp/README.md
@@ -9,7 +9,7 @@
Taking inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
# Taking ppyoloe as an example for inference deployment
# Download the SDK and build the model examples code (the SDK contains the examples code)
@@ -34,12 +34,15 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
```
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## PaddleDetection C++ Interface
### Model Classes
PaddleDetection currently supports multiple model series, with class names `PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, and `FasterRCNN`. The constructors and prediction functions of all these classes take exactly the same parameters; this document uses PPYOLOE as the example to explain the API
-```
+```c++
fastdeploy::vision::detection::PPYOLOE(
const string& model_file,
const string& params_file,
@@ -60,7 +63,7 @@ PaddleDetection PPYOLOE model loading and initialization, where model_file is the exported ONNX
#### Predict Function
-> ```
+> ```c++
> PPYOLOE::Predict(cv::Mat* im, DetectionResult* result)
> ```
>
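`Predict` fills in a `DetectionResult` holding parallel lists of boxes, scores, and label ids. The stand-in class below mirrors that shape for illustration only; the field names and the `summarize` helper are assumptions, not FastDeploy's API:

```python
# Hypothetical stand-in for a detection result: parallel lists of
# boxes, scores, and label_ids (field names assumed for illustration).
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    boxes: list = field(default_factory=list)      # each box: [x1, y1, x2, y2]
    scores: list = field(default_factory=list)     # confidence per box
    label_ids: list = field(default_factory=list)  # class id per box

def summarize(result, class_names):
    # Zip the parallel lists and format one line per detection.
    return ["%s %.2f at %s" % (class_names[lid], s, b)
            for b, s, lid in zip(result.boxes, result.scores, result.label_ids)]

r = DetectionResult(boxes=[[10, 20, 110, 220]], scores=[0.87], label_ids=[0])
print(summarize(r, ["person"]))  # -> ['person 0.87 at [10, 20, 110, 220]']
```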
diff --git a/examples/vision/detection/paddledetection/python/README.md b/examples/vision/detection/paddledetection/python/README.md
index cc36ce1ee..ad9587463 100644
--- a/examples/vision/detection/paddledetection/python/README.md
+++ b/examples/vision/detection/paddledetection/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer_xxx.py` as examples of quickly deploying models such as PPYOLOE/PicoDet on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/paddledetection/python/
@@ -26,13 +26,13 @@ python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439
```
After running, the visualized result is shown in the figure below
-
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## ScaledYOLOv4 C++ Interface
### ScaledYOLOv4 Class
-```
+```c++
fastdeploy::vision::detection::ScaledYOLOv4(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ ScaledYOLOv4 model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> ScaledYOLOv4::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/scaledyolov4/python/README.md b/examples/vision/detection/scaledyolov4/python/README.md
index b2055bae3..df563044a 100644
--- a/examples/vision/detection/scaledyolov4/python/README.md
+++ b/examples/vision/detection/scaledyolov4/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying ScaledYOLOv4 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/scaledyolov4/python/
@@ -30,7 +30,7 @@ python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device
## ScaledYOLOv4 Python Interface
-```
+```python
fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ ScaledYOLOv4 model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> ScaledYOLOv4.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolor/README.md b/examples/vision/detection/yolor/README.md
index 81e57fde5..ffe29f39f 100644
--- a/examples/vision/detection/yolor/README.md
+++ b/examples/vision/detection/yolor/README.md
@@ -11,7 +11,7 @@
Visit the official [YOLOR](https://github.com/WongKinYiu/yolor) GitHub repository, follow the instructions to download and install it, download the `yolor.pt` model, and use `models/export.py` to obtain the `onnx` format file. If your exported `onnx` model has accuracy issues or data dimension problems, refer to the solution in [yolor#32](https://github.com/WongKinYiu/yolor/issues/32)
- ```
+ ```bash
# Download the yolor model file
wget https://github.com/WongKinYiu/yolor/releases/download/weights/yolor-d6-paper-570.pt
diff --git a/examples/vision/detection/yolor/cpp/README.md b/examples/vision/detection/yolor/cpp/README.md
index 7ce7c2d85..2cf9a47fa 100644
--- a/examples/vision/detection/yolor/cpp/README.md
+++ b/examples/vision/detection/yolor/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOR C++ Interface
### YOLOR Class
-```
+```c++
fastdeploy::vision::detection::YOLOR(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOR model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> YOLOR::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolor/python/README.md b/examples/vision/detection/yolor/python/README.md
index 5b88ed787..7fb76f5f1 100644
--- a/examples/vision/detection/yolor/python/README.md
+++ b/examples/vision/detection/yolor/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOR on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolor/python/
@@ -30,7 +30,7 @@ python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg
## YOLOR Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOR(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOR model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOR.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov5/cpp/README.md b/examples/vision/detection/yolov5/cpp/README.md
index c430e0ebe..7a6e55335 100644
--- a/examples/vision/detection/yolov5/cpp/README.md
+++ b/examples/vision/detection/yolov5/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5 C++ Interface
### YOLOv5 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv5(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5 model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> YOLOv5::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov5/python/README.md b/examples/vision/detection/yolov5/python/README.md
index 48680e39a..9a8a44a11 100644
--- a/examples/vision/detection/yolov5/python/README.md
+++ b/examples/vision/detection/yolov5/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOv5 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu --use
## YOLOv5 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5 model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOv5.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov5lite/README.md b/examples/vision/detection/yolov5lite/README.md
index 9c6d0ece8..e8f72099b 100644
--- a/examples/vision/detection/yolov5lite/README.md
+++ b/examples/vision/detection/yolov5lite/README.md
@@ -12,7 +12,7 @@
- Automatic download
Visit the official [YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite)
GitHub repository, follow the instructions to download and install it, and download the `yolov5-lite-xx.onnx` model (tip: the officially provided ONNX files currently do not include the decode module)
- ```
+ ```bash
# Download the yolov5-lite model file (.onnx)
# Download from https://drive.google.com/file/d/1bJByk9eoS6pv8Z3N4bcLRCV3i7uk24aU/view
# The official repo also supports Baidu Cloud download
@@ -27,7 +27,7 @@
First, modify the code following the solution in [YOLOv5-Lite#189](https://github.com/ppogg/YOLOv5-Lite/pull/189).
- ```
+ ```bash
# Download the yolov5-lite model file (.pt)
# Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports Baidu Cloud download
@@ -39,7 +39,7 @@
```
- Export the ONNX file without the decode module (no code changes required)
- ```
+ ```bash
# Download the yolov5-lite model file
# Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports Baidu Cloud download
diff --git a/examples/vision/detection/yolov5lite/cpp/README.md b/examples/vision/detection/yolov5lite/cpp/README.md
index d22550aeb..f548cb1bd 100644
--- a/examples/vision/detection/yolov5lite/cpp/README.md
+++ b/examples/vision/detection/yolov5lite/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5Lite C++ Interface
### YOLOv5Lite Class
-```
+```c++
fastdeploy::vision::detection::YOLOv5Lite(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> YOLOv5Lite::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov5lite/python/README.md b/examples/vision/detection/yolov5lite/python/README.md
index 096917924..c1df09e97 100644
--- a/examples/vision/detection/yolov5lite/python/README.md
+++ b/examples/vision/detection/yolov5lite/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOv5Lite on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5lite/python/
@@ -30,7 +30,7 @@ python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device
## YOLOv5Lite Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOv5Lite.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov6/cpp/README.md b/examples/vision/detection/yolov6/cpp/README.md
index 753805fe8..ceeb286b8 100644
--- a/examples/vision/detection/yolov6/cpp/README.md
+++ b/examples/vision/detection/yolov6/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv6 C++ Interface
### YOLOv6 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv6(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv6 model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> YOLOv6::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov6/python/README.md b/examples/vision/detection/yolov6/python/README.md
index 691815072..8d769fff2 100644
--- a/examples/vision/detection/yolov6/python/README.md
+++ b/examples/vision/detection/yolov6/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOv6 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov6/python/
@@ -31,7 +31,7 @@ python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use
## YOLOv6 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -46,7 +46,7 @@ YOLOv6 model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOv6.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolov7/README.md b/examples/vision/detection/yolov7/README.md
index e3701f558..14ff1ae46 100644
--- a/examples/vision/detection/yolov7/README.md
+++ b/examples/vision/detection/yolov7/README.md
@@ -10,7 +10,7 @@
## Export the ONNX Model
-```
+```bash
# Download the yolov7 model file
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
diff --git a/examples/vision/detection/yolov7/cpp/README.md b/examples/vision/detection/yolov7/cpp/README.md
index 6c2500b67..a4f4232e7 100644
--- a/examples/vision/detection/yolov7/cpp/README.md
+++ b/examples/vision/detection/yolov7/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv7 C++ Interface
### YOLOv7 Class
-```
+```c++
fastdeploy::vision::detection::YOLOv7(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv7 model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> YOLOv7::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolov7/python/README.md b/examples/vision/detection/yolov7/python/README.md
index 19874bc75..29dbae78c 100644
--- a/examples/vision/detection/yolov7/python/README.md
+++ b/examples/vision/detection/yolov7/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOv7 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov7/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_
## YOLOv7 Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv7 model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/detection/yolox/cpp/README.md b/examples/vision/detection/yolox/cpp/README.md
index aebdca1f1..0f8a0e623 100644
--- a/examples/vision/detection/yolox/cpp/README.md
+++ b/examples/vision/detection/yolox/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOX C++ Interface
### YOLOX Class
-```
+```c++
fastdeploy::vision::detection::YOLOX(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOX model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> YOLOX::Predict(cv::Mat* im, DetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/detection/yolox/python/README.md b/examples/vision/detection/yolox/python/README.md
index 021feae86..f3471dee7 100644
--- a/examples/vision/detection/yolox/python/README.md
+++ b/examples/vision/detection/yolox/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOX on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolox/python/
@@ -30,7 +30,7 @@ python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use
## YOLOX Python Interface
-```
+```python
fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOX model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOX.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/retinaface/cpp/README.md b/examples/vision/facedet/retinaface/cpp/README.md
index c2e0429fb..b501ae4dd 100644
--- a/examples/vision/facedet/retinaface/cpp/README.md
+++ b/examples/vision/facedet/retinaface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,13 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
## RetinaFace C++ Interface
### RetinaFace Class
-```
+```c++
fastdeploy::vision::facedet::RetinaFace(
const string& model_file,
const string& params_file = "",
@@ -57,7 +59,7 @@ RetinaFace model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> RetinaFace::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/retinaface/python/README.md b/examples/vision/facedet/retinaface/python/README.md
index e03c48ccd..211fd6efc 100644
--- a/examples/vision/facedet/retinaface/python/README.md
+++ b/examples/vision/facedet/retinaface/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying RetinaFace on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/retinaface/python/
@@ -30,7 +30,7 @@ python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image test_
## RetinaFace Python Interface
-```
+```python
fastdeploy.vision.facedet.RetinaFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ RetinaFace model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> RetinaFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/scrfd/README.md b/examples/vision/facedet/scrfd/README.md
index 93ff8b998..a2aaffce8 100644
--- a/examples/vision/facedet/scrfd/README.md
+++ b/examples/vision/facedet/scrfd/README.md
@@ -8,7 +8,7 @@
## Export the ONNX Model
- ```
+ ```bash
# Download the scrfd model file
# e.g. download from https://onedrive.live.com/?authkey=%21ABbFJx2JMhNjhNA&id=4A83B6B633B029CC%215542&cid=4A83B6B633B029CC
diff --git a/examples/vision/facedet/scrfd/cpp/README.md b/examples/vision/facedet/scrfd/cpp/README.md
index 88fb29426..3d129470b 100644
--- a/examples/vision/facedet/scrfd/cpp/README.md
+++ b/examples/vision/facedet/scrfd/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## SCRFD C++ Interface
### SCRFD Class
-```
+```c++
fastdeploy::vision::facedet::SCRFD(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ SCRFD model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> SCRFD::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/scrfd/python/README.md b/examples/vision/facedet/scrfd/python/README.md
index 0a5f9ded3..7e7fea420 100644
--- a/examples/vision/facedet/scrfd/python/README.md
+++ b/examples/vision/facedet/scrfd/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying SCRFD on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/scrfd/python/
@@ -30,7 +30,7 @@ python infer.py --model scrfd_500m_bnkps_shape640x640.onnx --image test_lite_fac
## SCRFD Python Interface
-```
+```python
fastdeploy.vision.facedet.SCRFD(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ SCRFD model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> SCRFD.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/ultraface/cpp/README.md b/examples/vision/facedet/ultraface/cpp/README.md
index 79cc92334..3189c3f0b 100644
--- a/examples/vision/facedet/ultraface/cpp/README.md
+++ b/examples/vision/facedet/ultraface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## UltraFace C++ Interface
### UltraFace Class
-```
+```c++
fastdeploy::vision::facedet::UltraFace(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ UltraFace model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> UltraFace::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/ultraface/python/README.md b/examples/vision/facedet/ultraface/python/README.md
index 60c63020f..efa37290b 100644
--- a/examples/vision/facedet/ultraface/python/README.md
+++ b/examples/vision/facedet/ultraface/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying UltraFace on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/ultraface/python/
@@ -30,7 +30,7 @@ python infer.py --model version-RFB-320.onnx --image test_lite_face_detector_3.j
## UltraFace Python Interface
-```
+```python
fastdeploy.vision.facedet.UltraFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ UltraFace model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> UltraFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/facedet/yolov5face/cpp/README.md b/examples/vision/facedet/yolov5face/cpp/README.md
index c2afde648..8c0242f98 100644
--- a/examples/vision/facedet/yolov5face/cpp/README.md
+++ b/examples/vision/facedet/yolov5face/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -34,11 +34,14 @@ wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/li
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## YOLOv5Face C++ Interface
### YOLOv5Face Class
-```
+```c++
fastdeploy::vision::facedet::YOLOv5Face(
const string& model_file,
const string& params_file = "",
@@ -57,7 +60,7 @@ YOLOv5Face model loading and initialization, where model_file is the exported ONNX model format
#### Predict Function
-> ```
+> ```c++
> YOLOv5Face::Predict(cv::Mat* im, FaceDetectionResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/facedet/yolov5face/python/README.md b/examples/vision/facedet/yolov5face/python/README.md
index a029cb839..ef0f571eb 100644
--- a/examples/vision/facedet/yolov5face/python/README.md
+++ b/examples/vision/facedet/yolov5face/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying YOLOv5Face on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/facedet/yolov5face/python/
@@ -30,7 +30,7 @@ python infer.py --model yolov5s-face.onnx --image test_lite_face_detector_3.jpg
## YOLOv5Face Python Interface
-```
+```python
fastdeploy.vision.facedet.YOLOv5Face(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -45,7 +45,7 @@ YOLOv5Face model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> YOLOv5Face.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/faceid/insightface/README.md b/examples/vision/faceid/insightface/README.md
index 2318d24ab..d2b5b4b31 100644
--- a/examples/vision/faceid/insightface/README.md
+++ b/examples/vision/faceid/insightface/README.md
@@ -18,7 +18,7 @@
Visit the official [ArcFace](https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch) GitHub repository, follow the instructions to download and install it, download the pt model file, and use `torch2onnx.py` to obtain the `onnx` format file.
* Download the ArcFace model file
- ```
+ ```bash
# Link: https://pan.baidu.com/share/init?surl=1CL-l4zWqsI1oDuEEYVhj-g code: e8pw
```
diff --git a/examples/vision/faceid/insightface/cpp/README.md b/examples/vision/faceid/insightface/cpp/README.md
index cc06f7bda..547c527a3 100644
--- a/examples/vision/faceid/insightface/cpp/README.md
+++ b/examples/vision/faceid/insightface/cpp/README.md
@@ -9,7 +9,7 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/cpp/fastdeploy-linux-x64-gpu-0.2.0.tgz
@@ -40,11 +40,14 @@ wget https://bj.bcebos.com/paddlehub/test_samples/test_lite_focal_arcface_2.JPG
+The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+
## MODNet C++ Interface
### MODNet Class
-```
+```c++
fastdeploy::vision::matting::MODNet(
const string& model_file,
const string& params_file = "",
@@ -59,7 +62,7 @@ MODNet model loading and initialization, where model_file is the exported ONNX model format.
#### Predict Function
-> ```
+> ```c++
> MODNet::Predict(cv::Mat* im, MattingResult* result,
> float conf_threshold = 0.25,
> float nms_iou_threshold = 0.5)
diff --git a/examples/vision/matting/modnet/python/README.md b/examples/vision/matting/modnet/python/README.md
index 2cd02ddf6..0d441c400 100644
--- a/examples/vision/matting/modnet/python/README.md
+++ b/examples/vision/matting/modnet/python/README.md
@@ -7,7 +7,7 @@
This directory provides `infer.py` as an example of quickly deploying MODNet on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
-```
+```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/matting/modnet/python/
@@ -31,7 +31,7 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image test_l
## MODNet Python Interface
-```
+```python
fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```
@@ -46,7 +46,7 @@ MODNet model loading and initialization, where model_file is the exported ONNX model format
### predict Function
-> ```
+> ```python
> MODNet.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
diff --git a/examples/vision/segmentation/paddleseg/cpp/README.md b/examples/vision/segmentation/paddleseg/cpp/README.md
index 04bc54761..0ecf54dae 100644
--- a/examples/vision/segmentation/paddleseg/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/cpp/README.md
@@ -9,13 +9,13 @@
Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test
-```
+```bash
mkdir build
cd build
wget https://bj.bcebos.com/paddlehub/fastdeploy/libs/0.2.0/fastdeploy-linux-x64-gpu-0.2.0.tgz
tar xvf fastdeploy-linux-x64-gpu-0.2.0.tgz
cd fastdeploy-linux-x64-gpu-0.2.0/examples/vision/segmentation/paddleseg/cpp/build
-cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.2.0
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.2.0
make -j
# Download the Unet model file and test image
@@ -33,15 +33,18 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
```
After running, the visualized result is shown in the figure below
-