English | 简体中文
PP-OCRv3 WeChat Mini-program Deployment Example
This document introduces how to deploy the PP-OCRv3 model from PaddleOCR in a WeChat mini-program, and describes the JS interface of the @paddle-js-models/ocr npm package.
Deploy PP-OCRv3 Models in a WeChat Mini-program
For the deployment of PP-OCRv3 models in a WeChat mini-program, refer to the reference document.
PP-OCRv3 JS Interface
import * as ocr from "@paddle-js-models/ocr";
await ocr.init(detConfig, recConfig);
const res = await ocr.recognize(img, option, postConfig);
ocr.init loads and initializes the OCR models, which must be in the Paddle.js model format. For the conversion of JS models, refer to the document.
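For orientation, here is a minimal end-to-end sketch using the default configurations. It assumes both init arguments may be omitted to fall back to the defaults listed below; the element id ocr-img and the res.text / res.points result fields are illustrative assumptions, not guaranteed by this document.

import * as ocr from "@paddle-js-models/ocr";

// Minimal sketch: initialize with the default detection/recognition models,
// then run OCR on an image already present on the page.
await ocr.init();                                  // assumption: defaults are used when configs are omitted
const img = document.getElementById("ocr-img");    // assumed <img id="ocr-img"> element on the page
const res = await ocr.recognize(img);
console.log(res.text);                             // recognized text lines (assumed result field)
console.log(res.points);                           // detected box coordinates (assumed result field)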
init function parameters
- detConfig(dict): The configuration parameters for the text detection model. Default: {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225]}. Here, modelPath is the path of the text detection model; fill is the padding value used in image pre-processing; mean and std are the mean and standard deviation used in pre-processing.
- recConfig(dict): The configuration parameters for the text recognition model. Default: {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}. Here, modelPath is the path of the text recognition model; fill is the padding value used in image pre-processing; mean and std are the mean and standard deviation used in pre-processing. A configuration sketch is shown after this list.
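As a sketch of passing these parameters explicitly, the snippet below simply restates the default values quoted above; replace modelPath with the path of your own converted Paddle.js models if needed.

import * as ocr from "@paddle-js-models/ocr";

// Sketch: pass the detection/recognition configurations explicitly.
// The values below are the documented defaults.
const detConfig = {
  modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json',
  fill: '#fff',                   // padding value used in image pre-processing
  mean: [0.485, 0.456, 0.406],    // pre-processing mean
  std: [0.229, 0.224, 0.225],     // pre-processing standard deviation
};
const recConfig = {
  modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json',
  fill: '#000',
  mean: [0.5, 0.5, 0.5],
  std: [0.5, 0.5, 0.5],
};
await ocr.init(detConfig, recConfig);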
recognize function parameters
- img(HTMLImageElement): The input image, passed as an HTMLImageElement.
- option(dict): The canvas parameters for visualizing the detected text boxes. Optional; no need to set it by default.
- postConfig(dict): Text detection post-processing parameters. Default: {shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio: 1.5}. thresh is the binarization threshold of the predicted output map; box_thresh is the score threshold for output boxes, below which a predicted box is discarded; unclip_ratio is the expansion ratio of the output boxes. A usage sketch is shown after this list.
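Below is a hedged sketch of tuning the post-processing at recognition time. Passing an empty object for option (when no visualization canvas is needed), the box_thresh value of 0.5, the element id ocr-img, and the res.text result field are illustrative assumptions.

import * as ocr from "@paddle-js-models/ocr";

await ocr.init();                                  // assumption: defaults are used when configs are omitted
const img = document.getElementById("ocr-img");    // assumed <img id="ocr-img"> element on the page

// Override the detection post-processing parameters.
const postConfig = {
  shape: 960,          // documented default
  thresh: 0.3,         // binarization threshold of the predicted map (documented default)
  box_thresh: 0.5,     // assumed custom value; boxes scoring below it are discarded (documented default: 0.6)
  unclip_ratio: 1.5,   // expansion ratio of the output boxes (documented default)
};
// option is left as an empty object here (assumption: no visualization canvas is needed).
const res = await ocr.recognize(img, {}, postConfig);
console.log(res.text);                             // recognized text lines (assumed result field)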