# ALPR Example
## Overview
This example demonstrates a full Automatic License Plate Recognition (ALPR) pipeline using two models, chained together as sketched below:

- YOLOv8n - used to detect license plates in an image
- LPRNet - used to read the text/characters on the detected number plate
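A minimal sketch of the two-stage flow, assuming hypothetical `Detector` and `Recogniser` interfaces; the real `alpr.go` wires these stages up with two go-rknnlite runtimes, so the names below are illustrative stand-ins rather than the library's API.

```go
package alpr

import (
	"fmt"
	"image"
)

// Detection is one plate found by the detector: bounding box plus confidence.
type Detection struct {
	Box   image.Rectangle
	Score float32
}

// Detector finds license plates in a frame (stage 1, YOLOv8n).
type Detector interface {
	Detect(img image.Image) ([]Detection, error)
}

// Recogniser reads the characters from a cropped plate (stage 2, LPRNet).
type Recogniser interface {
	Read(plate image.Image) (string, error)
}

// subImager is satisfied by image types such as *image.RGBA that support
// cropping without copying pixels.
type subImager interface {
	image.Image
	SubImage(r image.Rectangle) image.Image
}

// recognisePlates runs detection, crops each detected box from the source
// image, and passes the crop to the recogniser.
func recognisePlates(img subImager, det Detector, rec Recogniser) error {
	plates, err := det.Detect(img)
	if err != nil {
		return err
	}
	for _, p := range plates {
		crop := img.SubImage(p.Box)
		text, err := rec.Read(crop)
		if err != nil {
			return err
		}
		fmt.Printf("%s (%.1f%%)\n", text, p.Score*100)
	}
	return nil
}
```

Cropping each detected box before recognition is what lets the small LPRNet input see only the plate rather than the whole frame.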
The YOLOv8n model was trained on the License Plate Recognition Computer Vision Project dataset, then converted to ONNX, and finally compiled to RKNN format. You have to use Rockchip's fork of the Ultralytics YOLOv8 repository due to NPU-specific optimizations they have made, discussed here.
Use the `ultralytics/engine/exporter.py` script in that fork to convert the PyTorch model to ONNX.
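At the time of writing, the fork's documented export flow looks roughly like the following; check the fork's own README, as the steps may change (the model to export is set in `ultralytics/cfg/default.yaml`):

```
# run from the root of Rockchip's ultralytics fork, after setting the
# trained model path in ultralytics/cfg/default.yaml
export PYTHONPATH=./
python ./ultralytics/engine/exporter.py
```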
Then compile the ONNX model to RKNN using the conversion script from the RKNN Model Zoo, making sure to provide it a subset of the images the model was trained on for the quantization process.
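As an illustration, the Model Zoo ships a per-model `convert.py` that is typically invoked along these lines; treat this as a sketch only, since the argument order and the quantization dataset (a text file of training-image paths referenced inside the script) vary between Model Zoo versions, and the ONNX file name here is a placeholder:

```
# run from the model's python/ directory in the RKNN Model Zoo:
# convert.py <onnx model> <target platform> <dtype> <output rknn file>
python convert.py ../model/lpd-yolov8n.onnx rk3588 i8 ../model/lpd-yolov8n-rk3588.rknn
```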
The LPRNet model used from the RKNN Model Zoo is for Chinese license plates. I don't think it's very good: the input size of 94x24 pixels is too small, and the dataset it was trained on doesn't include angled number plate images. These factors reduce accuracy, so training your own model would be better for real-world usage.
## Usage
Make sure you have downloaded the data files first. This only needs to be done once for all of the examples.
```
cd example/
git clone --depth=1 https://github.com/swdee/go-rknnlite-data.git data
```
Run the ALPR example on rk3588, or replace rk3588 with your platform's model:
```
cd example/alpr
go run alpr.go -p rk3588
```
This will result in output similar to:
```
Model first run speed: YOLO inference=22.395264ms, YOLO post processing=126.581µs, Plate recognition=4.295597ms, Plate post processing=437.202µs, Total time=27.254644ms
Saved object detection result to ../data/car-cn-alpr-out.jpg
Benchmark time=2.976581181s, count=100, average total time=29.765811ms
done
```
The saved JPG image shows the object detection markers and the recognised license plate. The percentage in brackets indicates the YOLO model's confidence score for the license plate detection.
See the help output for parameters you can pass to try your own images:
```
$ go run alpr.go -h
Usage of /tmp/go-build3203844595/b001/exe/alpr:
  -f string
        The TTF font to use (default "../data/fzhei-b01s-regular.ttf")
  -i string
        Image file to run object detection on (default "../data/car-cn.jpg")
  -l string
        RKNN compiled LPRNet model file (default "../data/models/rk3588/lprnet-rk3588.rknn")
  -m string
        RKNN compiled YOLO model file (default "../data/models/rk3588/lpd-yolov8n-rk3588.rknn")
  -o string
        The output JPG file with object detection markers (default "../data/car-cn-alpr-out.jpg")
  -p string
        Rockchip CPU Model number [rk3562|rk3566|rk3568|rk3576|rk3582|rk3588] (default "rk3588")
  -t string
        The text drawing mode [cn|en] (default "cn")
```
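For example, to run detection on one of your own images (the image file names here are placeholders):

```
go run alpr.go -p rk3588 -i mycar.jpg -o mycar-out.jpg
```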
## Docker
To run the ALPR example using the prebuilt Docker image, make sure the data files have been downloaded first, then run:
```
# from project root directory
docker run --rm \
  --device /dev/dri:/dev/dri \
  -v "$(pwd):/go/src/app" \
  -v "$(pwd)/example/data:/go/src/data" \
  -v "/usr/include/rknn_api.h:/usr/include/rknn_api.h" \
  -v "/usr/lib/librknnrt.so:/usr/lib/librknnrt.so" \
  -w /go/src/app \
  swdee/go-rknnlite:latest \
  go run ./example/alpr/alpr.go -p rk3588
```
## Benchmarks
The following table compares the benchmark results across three platforms.
| Platform | Benchmark Time (100 runs) | Average Total Time Per Image |
|----------|---------------------------|------------------------------|
| rk3588   | 2.97s                     | 29.76ms                      |
| rk3576   | 3.20s                     | 31.83ms                      |
| rk3566   | 6.93s                     | 69.36ms                      |
Note that these examples only run inference on a single NPU core. The results would differ when running a Pool of models across all available NPU cores; a rough sketch of that pattern is shown below.
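As a rough illustration of the idea only (not go-rknnlite's actual Pool API), the pattern is to pin one model instance per NPU core and feed them all from a shared channel; `Runtime` below is a hypothetical stand-in for one compiled-model instance:

```go
package alpr

import "sync"

// Runtime is a hypothetical stand-in for one compiled-model instance
// pinned to a single NPU core.
type Runtime interface {
	Infer(frame []byte) (string, error)
}

// runPool fans frames out across the runtimes, one goroutine per NPU
// core, and collects the results. Throughput scales roughly with the
// number of cores until CPU-side pre/post processing becomes the
// bottleneck.
func runPool(runtimes []Runtime, frames <-chan []byte) <-chan string {
	results := make(chan string)
	var wg sync.WaitGroup
	for _, rt := range runtimes {
		wg.Add(1)
		go func(rt Runtime) {
			defer wg.Done()
			for f := range frames {
				if text, err := rt.Infer(f); err == nil {
					results <- text
				}
			}
		}(rt)
	}
	go func() {
		wg.Wait()
		close(results)
	}()
	return results
}
```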