diff --git a/docs/api/runtime_option.md b/docs/api/runtime_option.md index d30a98b4e..da92b5ada 100644 --- a/docs/api/runtime_option.md +++ b/docs/api/runtime_option.md @@ -29,7 +29,7 @@ RuntimeOption( device_id : 0 # 推理硬件id(针对GPU) model_file : yolov5s.onnx # 模型文件路径 params_file : # 参数文件路径 - model_format : Frontend.ONNX # 模型格式 + model_format : ModelFormat.ONNX # 模型格式 ort_execution_mode : -1 # 前辍为ort的表示为ONNXRuntime后端专用参数 ort_graph_opt_level : -1 ort_inter_op_num_threads : -1 @@ -57,7 +57,7 @@ RuntimeOption( > * **device_id**(int): 设备id,在GPU下使用 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径 -> * **model_format**(Frontend): 模型格式, `fd.Frontend.PADDLE`/`fd.Frontend.ONNX` +> * **model_format**(ModelFormat): 模型格式, `fd.ModelFormat.PADDLE`/`fd.ModelFormat.ONNX` > * **ort_execution_mode**(int): ORT后端执行方式,0表示按顺序执行所有算子,1表示并行执行算子,默认为-1,即按ORT默认配置方式执行 > * **ort_graph_opt_level**(int): ORT后端图优化等级;0:禁用图优化;1:基础优化 2:额外拓展优化;99:全部优化; 默认为-1,即按ORT默认配置方式执行 > * **ort_inter_op_num_threads**(int): 当`ort_execution_mode`为1时,此参数设置算子间并行的线程数 @@ -100,7 +100,7 @@ model = fd.vision.classification.PaddleClasModel( > * **device_id**(int): 设备id,在GPU下使用 > * **model_file**(string): 模型文件路径 > * **params_file**(string): 参数文件路径 -> * **model_format**(fastdeploy::Frontend): 模型格式, `Frontend::PADDLE`/`Frontend::ONNX` +> * **model_format**(fastdeploy::ModelFormat): 模型格式, `ModelFormat::PADDLE`/`ModelFormat::ONNX` > * **ort_execution_mode**(int): ORT后端执行方式,0表示按顺序执行所有算子,1表示并行执行算子,默认为-1,即按ORT默认配置方式执行 > * **ort_graph_opt_level**(int): ORT后端图优化等级;0:禁用图优化;1:基础优化 2:额外拓展优化;99:全部优化; 默认为-1,即按ORT默认配置方式执行 > * **ort_inter_op_num_threads**(int): 当`ort_execution_mode`为1时,此参数设置算子间并行的线程数 diff --git a/docs/docs_en/api/runtime_option.md b/docs/docs_en/api/runtime_option.md index 3f690b977..f2280adcb 100644 --- a/docs/docs_en/api/runtime_option.md +++ b/docs/docs_en/api/runtime_option.md @@ -31,7 +31,7 @@ RuntimeOption( device_id : 0 # Inference hardware id (for GPU) model_file : yolov5s.onnx # Path to the model file params_file : # Parameter file path - model_format : Frontend.ONNX # odel format + model_format : ModelFormat.ONNX # odel format ort_execution_mode : -1 # The prefix ort indicates ONNXRuntime backend parameters ort_graph_opt_level : -1 ort_inter_op_num_threads : -1 @@ -61,7 +61,7 @@ RuntimeOption( > * **device_id**(int): Device id, used on GPU > * **model_file**(str): Model file path > * **params_file**(str): Parameter file path -> * **model_format**(Frontend): Model format, `fd.Frontend.PADDLE`/`fd.Frontend.ONNX` +> * **model_format**(ModelFormat): Model format, `fd.ModelFormat.PADDLE`/`fd.ModelFormat.ONNX` > * **ort_execution_mode**(int): ORT back-end execution mode, 0 for sequential execution of all operators, 1 for parallel execution of operators, default is -1, i.e. execution in the ORT default configuration > * **ort_graph_opt_level**(int): ORT back-end image optimisation level; 0: disable image optimisation; 1: basic optimisation 2: additional expanded optimisation; 99: all optimisation; default is -1, i.e. 
executed in the ORT default configuration > * **ort_inter_op_num_threads**(int): When `ort_execution_mode` is 1, this parameter sets the number of threads in parallel between operators @@ -106,7 +106,7 @@ model = fd.vision.classification.PaddleClasModel( > * **device_id**(int): Device id, used on GPU > * **model_file**(string): Model file path > * **params_file**(string): Parameter file path -> * **model_format**(fastdeploy::Frontend): Model format,`Frontend::PADDLE`/`Frontend::ONNX` +> * **model_format**(fastdeploy::ModelFormat): Model format,`ModelFormat::PADDLE`/`ModelFormat::ONNX` > * **ort_execution_mode**(int): ORT back-end execution mode, 0 for sequential execution of all operators, 1 for parallel execution of operators, default is -1, i.e. execution in the ORT default configuration > * **ort_graph_opt_level**(int): ORT back-end image optimisation level; 0: disable image optimisation; 1: basic optimisation 2: additional expanded optimisation; 99: all optimisation; default is -1, i.e. executed in the ORT default configuration > * **ort_inter_op_num_threads**(int): When `ort_execution_mode` is 1, this parameter sets the number of threads in parallel between operators diff --git a/examples/text/uie/cpp/README.md b/examples/text/uie/cpp/README.md index c5a2d0290..8b390aca2 100644 --- a/examples/text/uie/cpp/README.md +++ b/examples/text/uie/cpp/README.md @@ -85,20 +85,20 @@ UIEModel( const std::vector& schema, const fastdeploy::RuntimeOption& custom_option = fastdeploy::RuntimeOption(), - const fastdeploy::Frontend& model_format = fastdeploy::Frontend::PADDLE); + const fastdeploy::ModelFormat& model_format = fastdeploy::ModelFormat::PADDLE); UIEModel( const std::string& model_file, const std::string& params_file, const std::string& vocab_file, float position_prob, size_t max_length, const SchemaNode& schema, const fastdeploy::RuntimeOption& custom_option = fastdeploy::RuntimeOption(), - const fastdeploy::Frontend& model_format = fastdeploy::Frontend::PADDLE); + const fastdeploy::ModelFormat& model_format = fastdeploy::ModelFormat::PADDLE); UIEModel( const std::string& model_file, const std::string& params_file, const std::string& vocab_file, float position_prob, size_t max_length, const std::vector& schema, const fastdeploy::RuntimeOption& custom_option = fastdeploy::RuntimeOption(), - const fastdeploy::Frontend& model_format = fastdeploy::Frontend::PADDLE); + const fastdeploy::ModelFormat& model_format = fastdeploy::ModelFormat::PADDLE); ``` UIE模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)。 @@ -112,7 +112,7 @@ UIE模型加载和初始化,其中model_file, params_file为训练模型导出 > * **max_length**(int): 输入文本的最大长度。输入文本下标超过`max_length`的部分将被截断。默认为128 > * **schema**(list(SchemaNode) | SchemaNode | list(str)): 抽取任务的目标模式。 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 #### SetSchema函数 diff --git a/examples/text/uie/python/README.md b/examples/text/uie/python/README.md index 061a500fa..0133c1525 100644 --- a/examples/text/uie/python/README.md +++ b/examples/text/uie/python/README.md @@ -329,7 +329,7 @@ fd.text.uie.UIEModel(model_file, position_prob=0.5, max_length=128, schema=[], - runtime_option=None,model_format=Frontend.PADDLE) + runtime_option=None,model_format=ModelFormat.PADDLE) ``` UIEModel模型加载和初始化,其中`model_file`, `params_file`为训练模型导出的Paddle 
inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2),`vocab_file`为词表文件,UIE模型的词表可在[UIE配置文件](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)中下载相应的UIE模型的vocab_file。 @@ -343,7 +343,7 @@ UIEModel模型加载和初始化,其中`model_file`, `params_file`为训练模 > * **max_length**(int): 输入文本的最大长度。输入文本下标超过`max_length`的部分将被截断。默认为128 > * **schema**(list|dict): 抽取任务的目标信息。 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### set_schema函数 diff --git a/examples/vision/classification/paddleclas/cpp/README.md b/examples/vision/classification/paddleclas/cpp/README.md index aeddac038..5e96a635d 100644 --- a/examples/vision/classification/paddleclas/cpp/README.md +++ b/examples/vision/classification/paddleclas/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::classification::PaddleClasModel( const string& params_file, const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE) + const ModelFormat& model_format = ModelFormat::PADDLE) ``` PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) @@ -57,7 +57,7 @@ PaddleClas模型加载和初始化,其中model_file, params_file为训练模 > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 #### Predict函数 diff --git a/examples/vision/classification/paddleclas/python/README.md b/examples/vision/classification/paddleclas/python/README.md index a144f69b9..0d51afc1b 100644 --- a/examples/vision/classification/paddleclas/python/README.md +++ b/examples/vision/classification/paddleclas/python/README.md @@ -36,7 +36,7 @@ scores: 0.686229, ## PaddleClasModel Python接口 ```python -fd.vision.classification.PaddleClasModel(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) +fd.vision.classification.PaddleClasModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) @@ -47,7 +47,7 @@ PaddleClas模型加载和初始化,其中model_file, params_file为训练模 > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### predict函数 diff --git a/examples/vision/detection/nanodet_plus/cpp/README.md b/examples/vision/detection/nanodet_plus/cpp/README.md index 0575c85cf..49659407e 100644 --- a/examples/vision/detection/nanodet_plus/cpp/README.md +++ b/examples/vision/detection/nanodet_plus/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::NanoDetPlus( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& 
model_format = ModelFormat::ONNX) ``` NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/nanodet_plus/python/README.md b/examples/vision/detection/nanodet_plus/python/README.md index be4075bfa..664c39e2c 100644 --- a/examples/vision/detection/nanodet_plus/python/README.md +++ b/examples/vision/detection/nanodet_plus/python/README.md @@ -31,7 +31,7 @@ python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --devic ## NanoDetPlus Python接口 ```python -fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/paddledetection/cpp/README.md b/examples/vision/detection/paddledetection/cpp/README.md index f1ae1de0a..36c7e5e69 100644 --- a/examples/vision/detection/paddledetection/cpp/README.md +++ b/examples/vision/detection/paddledetection/cpp/README.md @@ -48,7 +48,7 @@ fastdeploy::vision::detection::PPYOLOE( const string& params_file, const string& config_file const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE) + const ModelFormat& model_format = ModelFormat::PADDLE) ``` PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -59,7 +59,7 @@ PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ON > * **params_file**(str): 参数文件路径 > * **config_file**(str): 配置文件路径,即PaddleDetection导出的部署yaml文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为PADDLE格式 +> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式 #### Predict函数 diff --git a/examples/vision/detection/paddledetection/python/README.md b/examples/vision/detection/paddledetection/python/README.md index 9657612ea..835b9a7f2 100644 --- a/examples/vision/detection/paddledetection/python/README.md +++ b/examples/vision/detection/paddledetection/python/README.md @@ -33,13 +33,13 @@ python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439 ## PaddleDetection Python接口 ```python -fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.YOLOv3(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.PPYOLO(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.FasterRCNN(model_file, params_file, config_file, 
runtime_option=None, model_format=Frontend.PADDLE) -fastdeploy.vision.detection.MaskRCNN(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) +fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.YOLOv3(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.PPYOLO(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.FasterRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) +fastdeploy.vision.detection.MaskRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` PaddleDetection模型加载和初始化,其中model_file, params_file为导出的Paddle部署模型格式, config_file为PaddleDetection同时导出的部署配置yaml文件 @@ -50,7 +50,7 @@ PaddleDetection模型加载和初始化,其中model_file, params_file为导 > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理配置yaml文件路径 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle +> * **model_format**(ModelFormat): 模型格式,默认为Paddle ### predict函数 diff --git a/examples/vision/detection/scaledyolov4/cpp/README.md b/examples/vision/detection/scaledyolov4/cpp/README.md index 2a4431173..ca6c2ab03 100644 --- a/examples/vision/detection/scaledyolov4/cpp/README.md +++ b/examples/vision/detection/scaledyolov4/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::ScaledYOLOv4( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/scaledyolov4/python/README.md b/examples/vision/detection/scaledyolov4/python/README.md index 0f99e4dc0..afd4d9a23 100644 --- a/examples/vision/detection/scaledyolov4/python/README.md +++ b/examples/vision/detection/scaledyolov4/python/README.md @@ -31,7 +31,7 @@ python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device ## ScaledYOLOv4 Python接口 ```python -fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolor/cpp/README.md 
b/examples/vision/detection/yolor/cpp/README.md index c7a17c859..1329106bc 100644 --- a/examples/vision/detection/yolor/cpp/README.md +++ b/examples/vision/detection/yolor/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOR( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolor/python/README.md b/examples/vision/detection/yolor/python/README.md index 1807437e0..0dadd72b3 100644 --- a/examples/vision/detection/yolor/python/README.md +++ b/examples/vision/detection/yolor/python/README.md @@ -31,7 +31,7 @@ python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg ## YOLOR Python接口 ```python -fastdeploy.vision.detection.YOLOR(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOR(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov5/cpp/README.md b/examples/vision/detection/yolov5/cpp/README.md index f66f38ad5..18405b37a 100644 --- a/examples/vision/detection/yolov5/cpp/README.md +++ b/examples/vision/detection/yolov5/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOv5( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv5模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv5模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov5/python/README.md b/examples/vision/detection/yolov5/python/README.md index 764aae256..b8dc88bc1 100644 --- a/examples/vision/detection/yolov5/python/README.md +++ b/examples/vision/detection/yolov5/python/README.md @@ -31,7 +31,7 @@ python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu --use ## YOLOv5 Python接口 ```python -fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv5模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ YOLOv5模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 
后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov5lite/cpp/README.md b/examples/vision/detection/yolov5lite/cpp/README.md index 212adb662..d255fa955 100644 --- a/examples/vision/detection/yolov5lite/cpp/README.md +++ b/examples/vision/detection/yolov5lite/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOv5Lite( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov5lite/python/README.md b/examples/vision/detection/yolov5lite/python/README.md index df2cef590..759f5e94f 100644 --- a/examples/vision/detection/yolov5lite/python/README.md +++ b/examples/vision/detection/yolov5lite/python/README.md @@ -31,7 +31,7 @@ python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device ## YOLOv5Lite Python接口 ```python -fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov6/cpp/README.md b/examples/vision/detection/yolov6/cpp/README.md index 019d2e26b..f195910c5 100644 --- a/examples/vision/detection/yolov6/cpp/README.md +++ b/examples/vision/detection/yolov6/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOv6( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv6模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv6模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov6/python/README.md b/examples/vision/detection/yolov6/python/README.md index c90706372..792d711c9 100644 --- a/examples/vision/detection/yolov6/python/README.md +++ b/examples/vision/detection/yolov6/python/README.md @@ -32,7 +32,7 @@ python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use ## YOLOv6 Python接口 ```python -fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, 
model_format=ModelFormat.ONNX) ``` YOLOv6模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -42,7 +42,7 @@ YOLOv6模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov7/cpp/README.md b/examples/vision/detection/yolov7/cpp/README.md index 9887d3bc2..e0cd2e7d9 100644 --- a/examples/vision/detection/yolov7/cpp/README.md +++ b/examples/vision/detection/yolov7/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOv7( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv7模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv7模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov7/python/README.md b/examples/vision/detection/yolov7/python/README.md index df3d8a7ba..4440e8049 100644 --- a/examples/vision/detection/yolov7/python/README.md +++ b/examples/vision/detection/yolov7/python/README.md @@ -33,7 +33,7 @@ python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_ ## YOLOv7 Python接口 ```python -fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv7模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -43,7 +43,7 @@ YOLOv7模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov7/python/README_EN.md b/examples/vision/detection/yolov7/python/README_EN.md index 0de92a88c..64ce3b6ed 100644 --- a/examples/vision/detection/yolov7/python/README_EN.md +++ b/examples/vision/detection/yolov7/python/README_EN.md @@ -34,7 +34,7 @@ The visualisation of the results is as follows. ## YOLOv7 Python Interface ```python -fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv7 model loading and initialisation, with model_file being the exported ONNX model format. @@ -44,7 +44,7 @@ YOLOv7 model loading and initialisation, with model_file being the exported ONNX > * **model_file**(str): Model file path > * **params_file**(str): Parameter file path. If the model format is ONNX, the parameter can be filled with an empty string. > * **runtime_option**(RuntimeOption): Back-end inference configuration. The default is None, i.e. the default is applied -> * **model_format**(Frontend): Model format. The default is ONNX format +> * **model_format**(ModelFormat): Model format. 
The default is ONNX format ### Predict Function diff --git a/examples/vision/detection/yolov7end2end_ort/cpp/README.md b/examples/vision/detection/yolov7end2end_ort/cpp/README.md index 3aadaf9ea..a6ce9f3e3 100644 --- a/examples/vision/detection/yolov7end2end_ort/cpp/README.md +++ b/examples/vision/detection/yolov7end2end_ort/cpp/README.md @@ -51,7 +51,7 @@ fastdeploy::vision::detection::YOLOv7End2EndORT( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv7End2EndORT 模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -61,7 +61,7 @@ YOLOv7End2EndORT 模型加载和初始化,其中model_file为导出的ONNX模 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov7end2end_ort/python/README.md b/examples/vision/detection/yolov7end2end_ort/python/README.md index 8118eadab..00f85a267 100644 --- a/examples/vision/detection/yolov7end2end_ort/python/README.md +++ b/examples/vision/detection/yolov7end2end_ort/python/README.md @@ -36,7 +36,7 @@ python infer.py --model yolov7-end2end-ort-nms.onnx --image 000000014439.jpg --d ## YOLOv7End2EndORT Python接口 ```python -fastdeploy.vision.detection.YOLOv7End2EndORT(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv7End2EndORT(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv7End2EndORT模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -46,7 +46,7 @@ YOLOv7End2EndORT模型加载和初始化,其中model_file为导出的ONNX模 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolov7end2end_trt/cpp/README.md b/examples/vision/detection/yolov7end2end_trt/cpp/README.md index b9d9318df..1e3792e10 100644 --- a/examples/vision/detection/yolov7end2end_trt/cpp/README.md +++ b/examples/vision/detection/yolov7end2end_trt/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOv7End2EndTRT( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv7End2EndTRT 模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv7End2EndTRT 模型加载和初始化,其中model_file为导出的ONNX模 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolov7end2end_trt/python/README.md b/examples/vision/detection/yolov7end2end_trt/python/README.md index c07b8f69c..deac93020 100644 --- a/examples/vision/detection/yolov7end2end_trt/python/README.md +++ b/examples/vision/detection/yolov7end2end_trt/python/README.md @@ -32,7 +32,7 @@ python infer.py --model yolov7-end2end-trt-nms.onnx --image 000000014439.jpg --d ## YOLOv7End2EndTRT Python接口 ```python -fastdeploy.vision.detection.YOLOv7End2EndTRT(model_file, 
params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOv7End2EndTRT(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv7End2EndTRT 模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -42,7 +42,7 @@ YOLOv7End2EndTRT 模型加载和初始化,其中model_file为导出的ONNX模 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/detection/yolox/cpp/README.md b/examples/vision/detection/yolox/cpp/README.md index bc6e040cd..10dd8d655 100644 --- a/examples/vision/detection/yolox/cpp/README.md +++ b/examples/vision/detection/yolox/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::detection::YOLOX( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOX模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOX模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/detection/yolox/python/README.md b/examples/vision/detection/yolox/python/README.md index b9963aeee..0dc9052b4 100644 --- a/examples/vision/detection/yolox/python/README.md +++ b/examples/vision/detection/yolox/python/README.md @@ -31,7 +31,7 @@ python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use ## YOLOX Python接口 ```python -fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOX模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ YOLOX模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/facedet/retinaface/cpp/README.md b/examples/vision/facedet/retinaface/cpp/README.md index 6256d2430..ca0de9776 100644 --- a/examples/vision/facedet/retinaface/cpp/README.md +++ b/examples/vision/facedet/retinaface/cpp/README.md @@ -45,7 +45,7 @@ fastdeploy::vision::facedet::RetinaFace( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` RetinaFace模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -55,7 +55,7 @@ RetinaFace模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/facedet/retinaface/python/README.md b/examples/vision/facedet/retinaface/python/README.md index 4d9879705..e42965e16 100644 --- a/examples/vision/facedet/retinaface/python/README.md +++ 
b/examples/vision/facedet/retinaface/python/README.md @@ -31,7 +31,7 @@ python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image test_ ## RetinaFace Python接口 ```python -fastdeploy.vision.facedet.RetinaFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.facedet.RetinaFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` RetinaFace模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ RetinaFace模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/facedet/scrfd/cpp/README.md b/examples/vision/facedet/scrfd/cpp/README.md index b43bcd029..4a24e93db 100644 --- a/examples/vision/facedet/scrfd/cpp/README.md +++ b/examples/vision/facedet/scrfd/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::facedet::SCRFD( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` SCRFD模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ SCRFD模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/facedet/scrfd/python/README.md b/examples/vision/facedet/scrfd/python/README.md index 09c7bd5c3..12a0140ee 100644 --- a/examples/vision/facedet/scrfd/python/README.md +++ b/examples/vision/facedet/scrfd/python/README.md @@ -31,7 +31,7 @@ python infer.py --model scrfd_500m_bnkps_shape640x640.onnx --image test_lite_fac ## SCRFD Python接口 ```python -fastdeploy.vision.facedet.SCRFD(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.facedet.SCRFD(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` SCRFD模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ SCRFD模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/facedet/ultraface/cpp/README.md b/examples/vision/facedet/ultraface/cpp/README.md index ffd1faea3..e111844f2 100644 --- a/examples/vision/facedet/ultraface/cpp/README.md +++ b/examples/vision/facedet/ultraface/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::facedet::UltraFace( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` UltraFace模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ UltraFace模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git 
a/examples/vision/facedet/ultraface/python/README.md b/examples/vision/facedet/ultraface/python/README.md index d747a8383..ce55c780f 100644 --- a/examples/vision/facedet/ultraface/python/README.md +++ b/examples/vision/facedet/ultraface/python/README.md @@ -31,7 +31,7 @@ python infer.py --model version-RFB-320.onnx --image test_lite_face_detector_3.j ## UltraFace Python接口 ```python -fastdeploy.vision.facedet.UltraFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.facedet.UltraFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` UltraFace模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ UltraFace模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/facedet/yolov5face/cpp/README.md b/examples/vision/facedet/yolov5face/cpp/README.md index a99c51ff3..8a069ad22 100644 --- a/examples/vision/facedet/yolov5face/cpp/README.md +++ b/examples/vision/facedet/yolov5face/cpp/README.md @@ -46,7 +46,7 @@ fastdeploy::vision::facedet::YOLOv5Face( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` YOLOv5Face模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -56,7 +56,7 @@ YOLOv5Face模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/facedet/yolov5face/python/README.md b/examples/vision/facedet/yolov5face/python/README.md index 75768280e..c7daf9717 100644 --- a/examples/vision/facedet/yolov5face/python/README.md +++ b/examples/vision/facedet/yolov5face/python/README.md @@ -31,7 +31,7 @@ python infer.py --model yolov5s-face.onnx --image test_lite_face_detector_3.jpg ## YOLOv5Face Python接口 ```python -fastdeploy.vision.facedet.YOLOv5Face(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.facedet.YOLOv5Face(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` YOLOv5Face模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -41,7 +41,7 @@ YOLOv5Face模型加载和初始化,其中model_file为导出的ONNX模型格 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/faceid/insightface/cpp/README.md b/examples/vision/faceid/insightface/cpp/README.md index 27e339995..82b0e8e0b 100644 --- a/examples/vision/faceid/insightface/cpp/README.md +++ b/examples/vision/faceid/insightface/cpp/README.md @@ -52,7 +52,7 @@ fastdeploy::vision::faceid::ArcFace( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` ArcFace模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -64,7 +64,7 @@ fastdeploy::vision::faceid::CosFace( const string& model_file, 
const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` CosFace模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -76,7 +76,7 @@ fastdeploy::vision::faceid::PartialFC( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` PartialFC模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -88,7 +88,7 @@ fastdeploy::vision::faceid::VPL( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` VPL模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -97,7 +97,7 @@ VPL模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/faceid/insightface/python/README.md b/examples/vision/faceid/insightface/python/README.md index 143e6c9dc..7ef852a39 100644 --- a/examples/vision/faceid/insightface/python/README.md +++ b/examples/vision/faceid/insightface/python/README.md @@ -47,10 +47,10 @@ Detect Done! Cosine 01: 0.814385, Cosine 02:-0.059388 ## InsightFace Python接口 ```python -fastdeploy.vision.faceid.ArcFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) -fastdeploy.vision.faceid.CosFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) -fastdeploy.vision.faceid.PartialFC(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) -fastdeploy.vision.faceid.VPL(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.faceid.ArcFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) +fastdeploy.vision.faceid.CosFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) +fastdeploy.vision.faceid.PartialFC(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) +fastdeploy.vision.faceid.VPL(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` ArcFace模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -60,7 +60,7 @@ ArcFace模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/matting/modnet/cpp/README.md b/examples/vision/matting/modnet/cpp/README.md index 943e162ab..c7a5074d1 100644 --- a/examples/vision/matting/modnet/cpp/README.md +++ b/examples/vision/matting/modnet/cpp/README.md @@ -53,7 +53,7 @@ fastdeploy::vision::matting::MODNet( const string& model_file, const string& params_file = "", const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::ONNX) + const ModelFormat& model_format = ModelFormat::ONNX) ``` MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式。 @@ -63,7 +63,7 @@ MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * 
**runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX格式 +> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式 #### Predict函数 diff --git a/examples/vision/matting/modnet/python/README.md b/examples/vision/matting/modnet/python/README.md index 92cd7b494..b14919295 100644 --- a/examples/vision/matting/modnet/python/README.md +++ b/examples/vision/matting/modnet/python/README.md @@ -37,7 +37,7 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image mattin ## MODNet Python接口 ```python -fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX) ``` MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式 @@ -47,7 +47,7 @@ MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为ONNX +> * **model_format**(ModelFormat): 模型格式,默认为ONNX ### predict函数 diff --git a/examples/vision/matting/ppmatting/cpp/README.md b/examples/vision/matting/ppmatting/cpp/README.md index 057ac1987..0f2fcb3cf 100644 --- a/examples/vision/matting/ppmatting/cpp/README.md +++ b/examples/vision/matting/ppmatting/cpp/README.md @@ -54,7 +54,7 @@ fastdeploy::vision::matting::PPMatting( const string& params_file = "", const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE) + const ModelFormat& model_format = ModelFormat::PADDLE) ``` PPMatting模型加载和初始化,其中model_file为导出的Paddle模型格式。 @@ -65,7 +65,7 @@ PPMatting模型加载和初始化,其中model_file为导出的Paddle模型格 > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 #### Predict函数 diff --git a/examples/vision/matting/ppmatting/python/README.md b/examples/vision/matting/ppmatting/python/README.md index 398f80652..633b3c1e3 100644 --- a/examples/vision/matting/ppmatting/python/README.md +++ b/examples/vision/matting/ppmatting/python/README.md @@ -35,7 +35,7 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg ## PPMatting Python接口 ```python -fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) +fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` PPMatting模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) @@ -46,7 +46,7 @@ PPMatting模型加载和初始化,其中model_file, params_file以及config_fi > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### predict函数 diff --git a/examples/vision/ocr/PPOCRSystemv2/cpp/README.md b/examples/vision/ocr/PPOCRSystemv2/cpp/README.md index 62563fb86..dd61ef6ed 100644 --- a/examples/vision/ocr/PPOCRSystemv2/cpp/README.md +++ b/examples/vision/ocr/PPOCRSystemv2/cpp/README.md @@ -98,7 +98,7 @@ PPOCRSystemv2 的初始化,由检测,识别模型串联构成(无分类器) ``` fastdeploy::vision::ocr::DBDetector(const std::string& model_file, const 
std::string& params_file = "", const RuntimeOption& custom_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE); + const ModelFormat& model_format = ModelFormat::PADDLE); ``` DBDetector模型加载和初始化,其中模型为paddle模型格式。 @@ -108,7 +108,7 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### Classifier类与DBDetector类相同 @@ -118,7 +118,7 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 const std::string& params_file = "", const std::string& label_path = "", const RuntimeOption& custom_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE); + const ModelFormat& model_format = ModelFormat::PADDLE); ``` Recognizer类初始化时,需要在label_path参数中,输入识别模型所需的label文件,其他参数均与DBDetector类相同 diff --git a/examples/vision/ocr/PPOCRSystemv2/python/README.md b/examples/vision/ocr/PPOCRSystemv2/python/README.md index 15f5ea36c..e2d2f6e06 100644 --- a/examples/vision/ocr/PPOCRSystemv2/python/README.md +++ b/examples/vision/ocr/PPOCRSystemv2/python/README.md @@ -75,7 +75,7 @@ PPOCRSystemv2的初始化,输入的参数是检测模型,分类模型和识别 ### DBDetector类 ``` -fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=Frontend.PADDLE) +fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` DBDetector模型加载和初始化,其中模型为paddle模型格式。 @@ -85,14 +85,14 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为PADDLE格式 +> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式 ### Classifier类与DBDetector类相同 ### Recognizer类 ``` fastdeploy.vision.ocr.Recognizer(rec_model_file,rec_params_file,rec_label_file, - runtime_option=rec_runtime_option,model_format=Frontend.PADDLE) + runtime_option=rec_runtime_option,model_format=ModelFormat.PADDLE) ``` Recognizer类初始化时,需要在rec_label_file参数中,输入识别模型所需的label文件路径,其他参数均与DBDetector类相同 diff --git a/examples/vision/ocr/PPOCRSystemv3/cpp/README.md b/examples/vision/ocr/PPOCRSystemv3/cpp/README.md index 1653fbb50..185d07785 100644 --- a/examples/vision/ocr/PPOCRSystemv3/cpp/README.md +++ b/examples/vision/ocr/PPOCRSystemv3/cpp/README.md @@ -98,7 +98,7 @@ PPOCRSystemv3 的初始化,由检测,识别模型串联构成(无分类器) ``` fastdeploy::vision::ocr::DBDetector(const std::string& model_file, const std::string& params_file = "", const RuntimeOption& custom_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE); + const ModelFormat& model_format = ModelFormat::PADDLE); ``` DBDetector模型加载和初始化,其中模型为paddle模型格式。 @@ -108,7 +108,7 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### Classifier类与DBDetector类相同 @@ -118,7 +118,7 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 const std::string& params_file = "", const std::string& label_path = "", const RuntimeOption& custom_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE); + const ModelFormat& model_format = ModelFormat::PADDLE); ``` Recognizer类初始化时,需要在label_path参数中,输入识别模型所需的label文件,其他参数均与DBDetector类相同 diff --git 
a/examples/vision/ocr/PPOCRSystemv3/python/README.md b/examples/vision/ocr/PPOCRSystemv3/python/README.md index b71e7f690..4e438fc5c 100644 --- a/examples/vision/ocr/PPOCRSystemv3/python/README.md +++ b/examples/vision/ocr/PPOCRSystemv3/python/README.md @@ -74,7 +74,7 @@ PPOCRSystemv3的初始化,输入的参数是检测模型,分类模型和识别 ### DBDetector类 ``` -fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=Frontend.PADDLE) +fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` DBDetector模型加载和初始化,其中模型为paddle模型格式。 @@ -84,14 +84,14 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。 > * **model_file**(str): 模型文件路径 > * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为PADDLE格式 +> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式 ### Classifier类与DBDetector类相同 ### Recognizer类 ``` fastdeploy.vision.ocr.Recognizer(rec_model_file,rec_params_file,rec_label_file, - runtime_option=rec_runtime_option,model_format=Frontend.PADDLE) + runtime_option=rec_runtime_option,model_format=ModelFormat.PADDLE) ``` Recognizer类初始化时,需要在rec_label_file参数中,输入识别模型所需的label文件路径,其他参数均与DBDetector类相同 diff --git a/examples/vision/segmentation/paddleseg/cpp/README.md b/examples/vision/segmentation/paddleseg/cpp/README.md index a6a2a69e0..16f267a28 100644 --- a/examples/vision/segmentation/paddleseg/cpp/README.md +++ b/examples/vision/segmentation/paddleseg/cpp/README.md @@ -50,7 +50,7 @@ fastdeploy::vision::segmentation::PaddleSegModel( const string& params_file = "", const string& config_file, const RuntimeOption& runtime_option = RuntimeOption(), - const Frontend& model_format = Frontend::PADDLE) + const ModelFormat& model_format = ModelFormat::PADDLE) ``` PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle模型格式。 @@ -61,7 +61,7 @@ PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle模 > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 #### Predict函数 diff --git a/examples/vision/segmentation/paddleseg/python/README.md b/examples/vision/segmentation/paddleseg/python/README.md index ab653679d..46fce690a 100644 --- a/examples/vision/segmentation/paddleseg/python/README.md +++ b/examples/vision/segmentation/paddleseg/python/README.md @@ -33,7 +33,7 @@ python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_ ## PaddleSegModel Python接口 ```python -fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE) +fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE) ``` PaddleSeg模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md) @@ -44,7 +44,7 @@ PaddleSeg模型加载和初始化,其中model_file, params_file以及config_fi > * **params_file**(str): 参数文件路径 > * **config_file**(str): 推理部署配置文件 > * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 -> * **model_format**(Frontend): 模型格式,默认为Paddle格式 +> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式 ### predict函数
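
The hunks above consistently rename the model-format enum from `Frontend` to `ModelFormat` in both the C++ and Python signatures. As a quick illustration of the Python side of that rename, here is a minimal sketch that loads one of the detectors documented above; it assumes `fastdeploy` and `opencv-python` are installed and reuses the file names that already appear in these READMEs (`yolov5s.onnx`, `000000014439.jpg`). The exact `predict` call follows the "predict函数" sections referenced in the diffs and is not introduced by this change.

```python
import cv2
import fastdeploy as fd

# After this rename, the format enum is fd.ModelFormat instead of fd.Frontend:
# ModelFormat.ONNX for exported ONNX models, ModelFormat.PADDLE for Paddle
# inference models (the defaults shown in the respective READMEs above).
model = fd.vision.detection.YOLOv5(
    "yolov5s.onnx",                     # model file used in the RuntimeOption example
    runtime_option=None,                # None -> default backend configuration
    model_format=fd.ModelFormat.ONNX,   # previously fd.Frontend.ONNX
)

im = cv2.imread("000000014439.jpg")     # test image used throughout the detection examples
result = model.predict(im)              # predict interface documented in each README
print(result)
```

The same substitution applies mechanically to the other constructors touched here (e.g. `model_format=fd.ModelFormat.PADDLE` for the PaddleClas, PaddleDetection, PP-Matting, PaddleSeg, and OCR models); only the enum name changes, not the argument order or defaults.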