[Docs] rename ppseg kunlun docs -> kunlunxin (#1662)
* [Docs] rename ppseg kunlun -> kunlunxin
* [Docs] rename ppseg fastdeploy kunlun docs -> kunlunxin
@@ -30,7 +30,7 @@
 |ARM CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
 |Intel GPU (integrated graphics)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
 |Intel GPU (discrete graphics)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
-|Kunlun|✅|[Link](semantic_segmentation/kunlun)|✅|✅|
+|Kunlun|✅|[Link](semantic_segmentation/kunlunxin)|✅|✅|
 |Ascend|✅|[Link](semantic_segmentation/ascend)|✅|✅|
 |Rockchip|✅|[Link](semantic_segmentation/rockchip)|✅|✅|
 |Amlogic|✅|[Link](semantic_segmentation/amlogic)|--|✅|✅|
@@ -58,9 +58,9 @@
 - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
 - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
 - Kunlun XPU
-- [Deployment model preparation](semantic_segmentation/kunlun)
-- [Python deployment example](semantic_segmentation/kunlun/python/)
-- [C++ deployment example](semantic_segmentation/kunlun/cpp/)
+- [Deployment model preparation](semantic_segmentation/kunlunxin)
+- [Python deployment example](semantic_segmentation/kunlunxin/python/)
+- [C++ deployment example](semantic_segmentation/kunlunxin/cpp/)
 - Ascend
 - [Deployment model preparation](semantic_segmentation/ascend)
 - [Python deployment example](semantic_segmentation/ascend/python/)
@@ -97,7 +97,7 @@
 |ARM CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
 |Intel GPU (integrated graphics)|✅|[Link](matting/cpu-gpu)|✅|✅|
 |Intel GPU (discrete graphics)|✅|[Link](matting/cpu-gpu)|✅|✅|
-|Kunlun|✅|[Link](matting/kunlun)|✅|✅|
+|Kunlun|✅|[Link](matting/kunlunxin)|✅|✅|
 |Ascend|✅|[Link](matting/ascend)|✅|✅|
 
 ### 3.2 Detailed usage documentation
@@ -122,9 +122,9 @@
 - [Python deployment example](matting/cpu-gpu/python/)
 - [C++ deployment example](cpu-gpu/cpp/)
 - Kunlun XPU
-- [Deployment model preparation](matting/kunlun)
-- [Python deployment example](matting/kunlun/README.md)
-- [C++ deployment example](matting/kunlun/README.md)
+- [Deployment model preparation](matting/kunlunxin)
+- [Python deployment example](matting/kunlunxin/README.md)
+- [C++ deployment example](matting/kunlunxin/README.md)
 - Ascend
 - [Deployment model preparation](matting/ascend)
 - [Python deployment example](matting/ascend/README.md)
@@ -13,7 +13,7 @@
 |ARM CPU|✅|[Link](cpu-gpu)|✅|✅|
 |Intel GPU (integrated graphics)|✅|[Link](cpu-gpu)|✅|✅|
 |Intel GPU (discrete graphics)|✅|[Link](cpu-gpu)|✅|✅|
-|Kunlun|✅|[Link](kunlun)|✅|✅|
+|Kunlun|✅|[Link](kunlunxin)|✅|✅|
 |Ascend|✅|[Link](ascend)|✅|✅|
 
 ## 3. Detailed usage documentation
@@ -38,9 +38,9 @@
 - [Python deployment example](cpu-gpu/python/)
 - [C++ deployment example](cpu-gpu/cpp/)
 - Kunlun XPU
-- [Deployment model preparation](kunlun)
-- [Python deployment example](kunlun/README.md)
-- [C++ deployment example](kunlun/README.md)
+- [Deployment model preparation](kunlunxin)
+- [Python deployment example](kunlunxin/README.md)
+- [C++ deployment example](kunlunxin/README.md)
 - Ascend
 - [Deployment model preparation](ascend)
 - [Python deployment example](ascend/README.md)
@@ -13,7 +13,7 @@
 |ARM CPU|✅|[Link](cpu-gpu)|✅|✅|
 |Intel GPU (integrated graphics)|✅|[Link](cpu-gpu)|✅|✅|
 |Intel GPU (discrete graphics)|✅|[Link](cpu-gpu)|✅|✅|
-|Kunlun|✅|[Link](kunlun)|✅|✅|
+|Kunlun|✅|[Link](kunlunxin)|✅|✅|
 |Ascend|✅|[Link](ascend)|✅|✅|
 |Rockchip|✅|[Link](rockchip)|✅|✅|
 |Amlogic|✅|[Link](amlogic)|--|✅|
@@ -41,9 +41,9 @@
 - [Python deployment example](cpu-gpu/python/)
 - [C++ deployment example](cpu-gpu/cpp/)
 - Kunlun XPU
-- [Deployment model preparation](kunlun)
-- [Python deployment example](kunlun/python/)
-- [C++ deployment example](kunlun/cpp/)
+- [Deployment model preparation](kunlunxin)
+- [Python deployment example](kunlunxin/python/)
+- [C++ deployment example](kunlunxin/cpp/)
 - Ascend
 - [Deployment model preparation](ascend)
 - [Python deployment example](ascend/python/)
@@ -14,12 +14,12 @@
 ```bash
 # Download the deployment example code
 git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlun/cpp
+cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlunxin/cpp
 # If you want to download the example code from PaddleSeg instead, run
 # git clone https://github.com/PaddlePaddle/PaddleSeg.git
 # # Note: if the fastdeploy test code below cannot be found on the current branch, switch to the develop branch
 # # git checkout develop
-# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlun/cpp
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlunxin/cpp
 
 mkdir build
 cd build
@@ -14,12 +14,12 @@
 ```bash
 # Download the deployment example code
 git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlun/python
+cd FastDeploy/examples/vision/segmentation/semantic_segmentation/kunlunxin/python
 # If you want to download the example code from PaddleSeg instead, run
 # git clone https://github.com/PaddlePaddle/PaddleSeg.git
 # # Note: if the fastdeploy test code below cannot be found on the current branch, switch to the develop branch
 # # git checkout develop
-# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlun/python
+# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/kunlunxin/python
 
 # Download the PP-LiteSeg model file and test image
 wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
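The Python hunk ends at the model download. As a hedged sketch of how such an example is typically completed (the script name and flags follow the common FastDeploy example layout and are assumptions here, not content from this commit):

```bash
# Hedged sketch (not part of this commit's diff): unpack the downloaded model archive.
tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
# The example script and its flags are assumed, following the usual FastDeploy layout:
# python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image <your_test_image>
```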