
English | 简体中文
PaddleClas RV1126 Development Board C++ Deployment Example
infer.cc in this directory can help you quickly complete accelerated inference of a quantized PaddleClas model deployed on RV1126.
Deployment Preparations
FastDeploy Cross-compile Environment Preparations
- For the software and hardware requirements, and how to set up the cross-compile environment, please refer to Preparations for FastDeploy Cross-compile Environment.
Model Preparations
- You can directly use the quantized model provided by FastDeploy for deployment.
- You can also quantize a model yourself with the one-click auto-compression tool provided by FastDeploy, and deploy the resulting quantized model. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder, as shown in the sketch below.)
For more information, please refer to Model Quantization.
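A minimal sketch of the note above, using hypothetical folder names (ResNet50_vd_infer for the FP32 model, ResNet50_vd_quant for the self-quantized model):
# The folder names below are placeholders; replace them with your actual FP32 and quantized model folders.
cp ResNet50_vd_infer/inference_cls.yaml ResNet50_vd_quant/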
Deploying the Quantized ResNet50_vd Classification Model on RV1126
Please follow these steps to complete the deployment of the quantized ResNet50_vd model on RV1126.
- Cross-compile the FastDeploy library as described in Cross-compile FastDeploy.
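A rough sketch of the cross-compile step, assuming a TIM-VX (Verisilicon NPU) build as targeted by this example; the exact CMake options (e.g. WITH_TIMVX) are assumptions here and should be taken from the cross-compile guide:
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy && mkdir build && cd build
# The options below are assumed; verify them against the cross-compile guide.
cmake -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchain.cmake \
      -DWITH_TIMVX=ON \
      -DTARGET_ABI=armhf \
      -DCMAKE_INSTALL_PREFIX=fastdeploy-timvx \
      -Wno-dev ..
make -j8 && make install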
- Copy the compiled library to the current directory. You can run this line:
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
- Download the model and example images required for deployment to the current path. You can run the following lines:
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
mkdir models && mkdir images
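# Download and extract the quantized ResNet50_vd model, then place it under models/.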
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
cp -r resnet50_vd_ptq models
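# Download a test image and place it under images/.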
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
cp -r ILSVRC2012_val_00000010.jpeg images
- Compile the deployment example. You can run the following lines:
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
# On success, an install folder is created containing the demo binary and the libraries required for deployment.
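# (Optional) Inspect the install output; based on the run step below, it should at
# least contain the infer_demo binary, the run_with_adb.sh helper script, and the
# runtime libraries (the exact contents listed here are an assumption).
ls install/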
- Deploy the quantized ResNet50_vd classification model to the Rockchip RV1126 via adb. You can run the following lines:
# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
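# (Optional) List the devices visible to adb to find the serial number to use as DEVICE_ID below.
adb devices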
# The arguments of run_with_adb.sh are: the demo to run, the model path, the image path, and the device ID.
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
A successful run prints the classification result (label IDs and scores) for the test image.
Please note that the model deployed on RV1126 needs to be quantized. You can refer to Model Quantization.