
PaddleSeg Model Deployment
Supported Model Versions
FastDeploy currently supports deployment of the following models.
[Note] If the model you are deploying is PP-Matting, PP-HumanMatting, or ModNet, please refer to the Matting model deployment documentation instead.
Prepare PaddleSeg Deployment Models
To export a PaddleSeg model, please refer to its Model Export documentation.
Note
- An exported PaddleSeg model consists of three files: `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`. FastDeploy reads the preprocessing information required at inference time from the yaml file.
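To illustrate what kind of preprocessing information lives in `deploy.yaml`, the sketch below pulls the `transforms` list out of a simplified yaml snippet. The field layout follows PaddleSeg's exported `deploy.yaml`, but the sample content and the naive line-based parser are illustrative assumptions only; FastDeploy's actual loader is not shown here, and real code should use a proper yaml library such as PyYAML.

```python
# A simplified stand-in for the contents of a PaddleSeg deploy.yaml
# (the concrete transforms and file names here are assumptions for the demo).
sample = """\
Deploy:
  model: model.pdmodel
  params: model.pdiparams
  transforms:
  - type: Normalize
  - type: Resize
"""

def transform_types(yaml_text):
    # Naive line-based scan for the 'type:' entries under 'transforms'.
    # Good enough for this flat sample; use PyYAML for real deploy.yaml files.
    return [line.split("type:")[1].strip()
            for line in yaml_text.splitlines()
            if "type:" in line]

print(transform_types(sample))  # ['Normalize', 'Resize']
```

This is the list of preprocessing operators that a deployment runtime applies to each input image before inference.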
Download Pretrained Models
For developers' convenience in testing, some models exported by PaddleSeg are provided below.
- without-argmax export: `--input_shape` is not specified, and `--output_op none` is specified
- with-argmax export: `--input_shape` is not specified, and `--output_op argmax` is specified
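The two export variants above can be sketched as PaddleSeg `tools/export.py` invocations like the following. The config and weight paths are hypothetical placeholders; consult the PaddleSeg model-export documentation for the exact arguments for your model.

```shell
# without-argmax export: do not pass --input_shape, pass --output_op none.
# Config path and model weights below are hypothetical placeholders.
python tools/export.py \
    --config configs/your_model/your_config.yml \
    --model_path output/best_model/model.pdparams \
    --save_dir output/infer_model_without_argmax \
    --output_op none

# with-argmax export: do not pass --input_shape, pass --output_op argmax.
python tools/export.py \
    --config configs/your_model/your_config.yml \
    --model_path output/best_model/model.pdparams \
    --save_dir output/infer_model_with_argmax \
    --output_op argmax
```

The with-argmax variant folds the final argmax into the exported graph, so the model outputs label indices directly instead of per-class scores.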
Developers can download and use them directly.