
English | 中文
# Paddle.js WeChat Mini-program Demo
- [1. Introduction](#1)
- 2. Project Start
- 3. Model Inference Pipeline
- 4. FAQ
## 1. Introduction

This directory contains text detection and text recognition mini-program demos. Using Paddle.js and the [Paddle.js WeChat mini-program plugin](https://mp.weixin.qq.com/wxopen/plugindevdoc?appid=wx7138a7bb793608c3&token=956931339&lang=zh_CN), they detect text and draw boxes around it inside the mini-program, running entirely on the computing power of the user's device.
## 2. Project Start

### 2.1 Preparations
- Apply for a WeChat mini-program account
- Install WeChat Mini Program Developer Tools
- Prepare the front-end development environment: node and npm
- Configure the server domain name in the mini-program management console, or enable [Do not verify valid domain names] in the developer tools. For details, see: https://mp.weixin.qq.com/wxamp/devprofile/get_profile?token=1132303404&lang=zh_CN
### 2.2 Startup Steps
1. Clone the demo code

```sh
git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy/examples/application/js/mini_program
```
2. Enter the mini-program directory and install dependencies

```sh
# To run the text recognition demo, enter the ocrXcx directory
cd ./ocrXcx && npm install
# To run the text detection demo, enter the ocrdetectXcx directory instead
# cd ./ocrdetectXcx && npm install
```
3. Import the code into the WeChat mini-program

   Open WeChat Developer Tools --> Import --> Select the directory and fill in the relevant information
4. Add the Paddle.js WeChat mini-program plugin

   Mini-program management console --> Settings --> Third-party settings --> Plugin management --> Add plugin --> search for wx7138a7bb793608c3 and add it

   Reference document
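After the plugin is added in the management console, it must also be declared in the mini-program's `app.json`. A minimal sketch, assuming the plugin name matches the `requirePlugin('paddlejs-plugin')` call used later; the version number here is illustrative, use the latest available release:

```json
{
  "plugins": {
    "paddlejs-plugin": {
      "version": "2.0.0",
      "provider": "wx7138a7bb793608c3"
    }
  }
}
```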
5. Build dependencies

   In the developer tools menu bar, click: Tools --> Build npm

   Reason: the node_modules directory is not included when the mini-program is compiled, uploaded, or packaged. For a mini-program to use npm packages, it must first go through the "Build npm" step. This generates a miniprogram_npm directory containing the built and packaged npm packages, which are what the mini-program actually uses at runtime.

   Reference documentation
### 2.3 Visualization

## 3. Model Inference Pipeline
```js
// Import paddlejs and paddlejs-plugin, register the mini-program
// environment variables and the appropriate backend
import * as paddlejs from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';
const plugin = requirePlugin('paddlejs-plugin');
plugin.register(paddlejs, wx);

// Initialize the inference engine
const runner = new paddlejs.Runner({ modelPath, feedShape, mean, std });
await runner.init();

// Get the image data from the canvas
wx.canvasGetImageData({
    canvasId: canvasId,
    x: 0,
    y: 0,
    width: canvas.width,
    height: canvas.height,
    success(res) {
        // Run inference
        runner.predict({
            data: res.data,
            width: canvas.width,
            height: canvas.height,
        }, function (data) {
            // Get the inference result
            console.log(data);
        });
    }
});
```
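The `mean` and `std` options passed to the `Runner` control how input pixels are normalized before inference. As a rough, framework-independent illustration (this is not the actual Paddle.js implementation), per-channel normalization of RGBA canvas data looks like:

```javascript
// Illustrative sketch of per-channel input normalization, mirroring the
// mean/std options passed to the Runner above (not Paddle.js source code).
function normalizePixels(data, mean, std) {
    // data: flat RGBA byte array (values 0-255)
    // mean/std: one value per RGB channel
    const out = [];
    for (let i = 0; i < data.length; i += 4) {
        for (let c = 0; c < 3; c++) {
            out.push((data[i + c] / 255 - mean[c]) / std[c]);
        }
    }
    return out; // the alpha channel is dropped
}

// One white pixel with mean 0.5 and std 0.5 per channel:
// (255/255 - 0.5) / 0.5 = 1 for each of R, G and B
console.log(normalizePixels([255, 255, 255, 255], [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]));
```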
## 4. FAQ
### 4.1 The error `Invalid context type [webgl2] for Canvas#getContext` occurs

This can be ignored; it does not affect normal code operation or the demo's functionality.
### 4.2 The preview shows no result

Try debugging on a real device instead.
### 4.3 The WeChat Developer Tools show a black screen, followed by many errors

Restart WeChat Developer Tools.
### 4.4 The simulator and a real device give inconsistent debugging results, e.g. the simulator fails to detect text

The real-device result is authoritative. If the simulator fails to detect text, try making an arbitrary change to the code (add, delete, or insert a newline) and then click compile again.
### 4.5 The phone shows prompts such as "no response for a long time" during debugging or running

Please keep waiting; model inference takes some time.