English | 简体中文

PaddleClas A311D Development Board C++ Deployment Example

infer.cc in this directory helps you quickly deploy a quantized PaddleClas model on the A311D with accelerated inference.

Deployment Preparations

FastDeploy Cross-compile Environment Preparations

  1. For the software and hardware requirements and for setting up the cross-compile environment, please refer to FastDeploy Cross-compile Environment. A typical build invocation is sketched below.
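For reference, cross-compiling FastDeploy for the A311D generally looks like the sketch below. The toolchain file path and the CMake options (WITH_TIMVX, ENABLE_FLYCV, ENABLE_VISION, the fastdeploy-timvx install prefix) are assumptions; treat the cross-compile document as the authoritative source for the exact commands.

# Sketch only: cross-compile FastDeploy with the TIM-VX (NPU) backend for the A311D.
# Option names and paths are assumptions; follow the cross-compile document for the exact flags.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy && mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
      -DWITH_TIMVX=ON \
      -DTARGET_ABI=arm64 \
      -DENABLE_FLYCV=ON \
      -DENABLE_VISION=ON \
      -DCMAKE_INSTALL_PREFIX=fastdeploy-timvx \
      -Wno-dev ..
make -j8
make install   # produces the fastdeploy-timvx/ folder used in the steps below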

Quantization Model Preparations

  1. You can directly use the quantized model provided by FastDeploy for deployment.
  2. You can use the one-click auto-compression tool provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized classification model still requires the inference_cls.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder, as shown in the sketch below.)

For more information, please refer to Model Quantization.
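If you quantized the model yourself, copying the config file over is a one-line step. In the sketch below, ResNet50_vd_infer and resnet50_vd_qat are hypothetical folder names standing in for your FP32 model folder and your quantized model folder.

# Copy the preprocessing config from the FP32 model folder into the self-quantized model folder.
# ResNet50_vd_infer and resnet50_vd_qat are placeholder names; substitute your own paths.
cp ResNet50_vd_infer/inference_cls.yaml resnet50_vd_qat/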

Deploying the Quantized ResNet50_vd Classification Model on A311D

Please follow these steps to complete the deployment of the quantized ResNet50_vd classification model on A311D.

  1. Cross-compile the FastDeploy library as described in Cross-compile FastDeploy.

  2. Copy the compiled library to the current directory. You can run this line:

cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
  3. Download the model and the example image required for deployment to the current path.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
cp -r resnet50_vd_ptq models
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
cp -r ILSVRC2012_val_00000010.jpeg images
  4. Compile the deployment example. You can run the following lines (a quick check of the cross-compiled binary follows the commands):
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After success, an install folder is created containing the demo executable and the libraries required for deployment.
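Because the demo is cross-compiled for the board, it cannot run on the x86 build host. As a quick sanity check, assuming the demo executable ends up at install/infer_demo (the name used in the next step), you can inspect it with file:

# Confirm the demo was built for the 64-bit Arm target rather than for the build host.
# For a correctly cross-compiled binary, the output should mention ARM aarch64.
file install/infer_demo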
  5. Deploy the quantized ResNet50_vd classification model to A311D over adb. You can run the following lines (a sketch of the adb workflow performed by the helper script follows):
# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
# Usage of the helper script: bash run_with_adb.sh <demo to run> <model path> <image path> <device id>
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
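For context, run_with_adb.sh wraps a standard adb push-and-run workflow. The sketch below illustrates that workflow and is not the script's actual contents; the remote directory and environment setup are assumptions.

# Illustrative adb workflow only (not the real run_with_adb.sh): push the install folder to the
# board, then run the demo there with the bundled libraries on LD_LIBRARY_PATH.
adb -s $DEVICE_ID push . /data/local/tmp/fastdeploy_demo
adb -s $DEVICE_ID shell "cd /data/local/tmp/fastdeploy_demo && export LD_LIBRARY_PATH=./lib && ./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg"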

A successful run prints the classification result (label IDs and confidence scores) to the terminal.

Please note that the model deployed on A311D needs to be quantized. You can refer to Model Quantization.