
English | 简体中文
Paddle.js WeChat mini-program Demo
1. Introduction
This directory contains the text detection and text recognition mini-program demos. They use Paddle.js and the Paddle.js WeChat mini-program plugin to draw text detection boxes inside the mini-program, running inference with the computing power of the user's device.
2. Getting started
2.1 Preparations
- Apply for a WeChat mini-program account
- WeChat Mini Program Developer Tools
- Front-end development environment preparation: node, npm
- Configure the server domain name in the mini-program management console, or enable "Do not verify valid domain names" in the developer tool
For details, please refer to the documentation.
2.2 Startup steps
1. Clone the demo code
git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy/examples/application/js/mini_program
2. Enter the mini-program directory and install dependencies
# To run the text recognition demo, enter the ocrXcx directory
cd ./ocrXcx && npm install
# To run the text detection demo, enter the ocrdetectXcx directory instead
# cd ./ocrdetectXcx && npm install
3. Import the code into WeChat Developer Tools
Open WeChat Developer Tools --> Import --> Select the project directory and enter the relevant information
4. Add Paddle.js WeChat mini-program plugin
Mini Program Management Console --> Settings --> Third Party Settings --> Plugin Management --> Add Plugins --> Search for wx7138a7bb793608c3 and add it.
See the plugin reference documentation for details.
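The plugin also needs to be declared in the project's app.json, following the standard WeChat mini-program plugin mechanism (the demo projects may already contain this declaration). A minimal sketch, where the version string is a placeholder for the version shown on the plugin management page:
{
    "plugins": {
        "paddlejs-plugin": {
            "version": "x.x.x",
            "provider": "wx7138a7bb793608c3"
        }
    }
}
The plugin name declared here ("paddlejs-plugin") is the one loaded via requirePlugin in the model inference code below.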
5. Build dependencies
Click on the menu bar in the developer tools: Tools --> Build npm
Reason: the node_modules directory is not included when the project is compiled, uploaded, or packaged. If a mini-program wants to use npm packages, it must go through the "Build npm" process. After the build completes, a miniprogram_npm directory is generated that holds the built and packaged npm packages; this is what the mini-program actually uses at runtime. See the reference documentation for details.
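For reference, the npm dependencies built in this step are the Paddle.js packages imported in the model inference code below. A minimal package.json sketch under that assumption (the version ranges are placeholders, not the versions pinned by the demo):
{
    "dependencies": {
        "@paddlejs/paddlejs-core": "^2.0.0",
        "@paddlejs/paddlejs-backend-webgl": "^2.0.0"
    }
}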
2.3 Visualization

3. Model inference pipeline
// Introduce paddlejs and paddlejs-plugin, register the mini-program environment variables and the appropriate backend
import * as paddlejs from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';
const plugin = requirePlugin('paddlejs-plugin');
plugin.register(paddlejs, wx);

// Initialize the inference engine
const runner = new paddlejs.Runner({modelPath, feedShape, mean, std});
await runner.init();

// Get image information
wx.canvasGetImageData({
    canvasId: canvasId,
    x: 0,
    y: 0,
    width: canvas.width,
    height: canvas.height,
    success(res) {
        // Inference prediction
        runner.predict({
            data: res.data,
            width: canvas.width,
            height: canvas.height,
        }, function (data) {
            // Get the inference result
            console.log(data);
        });
    }
});
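The options object passed to Runner above is shown with placeholder names. A sketch of what a concrete configuration might look like, assuming a hosted detection model and common normalization values (the URL, input size, and mean/std below are illustrative, not the demo's actual settings):
const runner = new paddlejs.Runner({
    modelPath: 'https://example.com/ocr_detection/model.json', // hypothetical model location
    feedShape: { fw: 960, fh: 960 },                           // assumed input width/height
    mean: [0.485, 0.456, 0.406],                               // illustrative normalization mean
    std: [0.229, 0.224, 0.225]                                 // illustrative normalization std
});
await runner.init();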
4. FAQ
- 4.1 An error occurs:
Invalid context type [webgl2] for Canvas#getContext
A: This can be ignored; it does not affect normal operation of the code or the demo.
- 4.2 No result is shown in the preview
A: Try debugging on a real device instead.
- 4.3 The WeChat developer tool shows a black screen, followed by a flood of errors
A: Restart WeChat Developer Tools.
- 4.4 The results in the simulator and on a real device are inconsistent, or the simulator fails to detect text
A: The result on the real device takes precedence. If the simulator fails to detect text, try making a trivial code change (add or delete a character, insert a newline, etc.) and then click Compile again.
- 4.5 When debugging or running on a phone, prompts such as "no response for a long time" appear
A: Please keep waiting; model inference takes some time.