fastdeploy::vision::ocr::Classifier Class Reference
Classifier object is used to load the classification model provided by PaddleOCR.
#include <classifier.h>


Public Member Functions

    Classifier (const std::string &model_file, const std::string &params_file="", const RuntimeOption &custom_option=RuntimeOption(), const ModelFormat &model_format=ModelFormat::PADDLE)
        Set the paths of the model files and the runtime configuration.

    virtual std::unique_ptr< Classifier > Clone () const
        Clone a new Classifier with less memory usage when multiple instances of the same model are created.

    std::string ModelName () const
        Get the model's name.

    virtual bool Predict (const cv::Mat &img, int32_t *cls_label, float *cls_score)
        Predict the input image and get the OCR classification label and score.

    virtual bool Predict (const cv::Mat &img, vision::OCRResult *ocr_result)
        Predict the input image and write the OCR classification result into an OCRResult structure.

    virtual bool BatchPredict (const std::vector< cv::Mat > &images, vision::OCRResult *ocr_result)
        Run classification on a batch of input images and write the results into an OCRResult structure.

    virtual bool BatchPredict (const std::vector< cv::Mat > &images, std::vector< int32_t > *cls_labels, std::vector< float > *cls_scores)
        Run classification on a batch of input images and get the label and score vectors.

    virtual ClassifierPreprocessor & GetPreprocessor ()
        Get a reference to the ClassifierPreprocessor.

    virtual ClassifierPostprocessor & GetPostprocessor ()
        Get a reference to the ClassifierPostprocessor.
Public Member Functions inherited from fastdeploy::FastDeployModel

    virtual bool Infer (std::vector< FDTensor > &input_tensors, std::vector< FDTensor > *output_tensors)
        Run inference with the runtime. This interface is called inside Predict(), so it rarely needs to be called directly.

    virtual bool Infer ()
        Run inference with the runtime, reading inputs from the class member reused_input_tensors_ and writing results to reused_output_tensors_.

    virtual int NumInputsOfRuntime ()
        Get the number of inputs of this model.

    virtual int NumOutputsOfRuntime ()
        Get the number of outputs of this model.

    virtual TensorInfo InputInfoOfRuntime (int index)
        Get the input information of this model.

    virtual TensorInfo OutputInfoOfRuntime (int index)
        Get the output information of this model.

    virtual bool Initialized () const
        Check whether the model was initialized successfully.

    virtual void EnableRecordTimeOfRuntime ()
        Debug interface that records the time spent in the runtime (backend + h2d + d2h); see the sketch after this list.

    virtual void DisableRecordTimeOfRuntime ()
        Stop recording the runtime time; see EnableRecordTimeOfRuntime() for details.

    virtual std::map< std::string, float > PrintStatisInfoOfRuntime ()
        Print the runtime timing statistics to the console; see EnableRecordTimeOfRuntime() for details.

    virtual bool EnabledRecordTimeOfRuntime ()
        Check whether runtime time recording is currently enabled.

    virtual double GetProfileTime ()
        Get the profiled runtime once the profiling process is done.

    virtual void ReleaseReusedBuffer ()
        Release the reused input/output buffers.
Additional Inherited Members

Public Attributes inherited from fastdeploy::FastDeployModel

    std::vector< Backend > valid_cpu_backends = {Backend::ORT}
        The model's valid CPU backends. This member lists all CPU backends that have been successfully tested for the model.

    std::vector< Backend > valid_gpu_backends = {Backend::ORT}
    std::vector< Backend > valid_ipu_backends = {}
    std::vector< Backend > valid_timvx_backends = {}
    std::vector< Backend > valid_directml_backends = {}
    std::vector< Backend > valid_ascend_backends = {}
    std::vector< Backend > valid_kunlunxin_backends = {}
    std::vector< Backend > valid_rknpu_backends = {}
    std::vector< Backend > valid_sophgonpu_backends = {}
Detailed Description

Classifier object is used to load the classification model provided by PaddleOCR.
Classifier()

    fastdeploy::vision::ocr::Classifier::Classifier (const std::string &model_file,
                                                     const std::string &params_file = "",
                                                     const RuntimeOption &custom_option = RuntimeOption(),
                                                     const ModelFormat &model_format = ModelFormat::PADDLE)

Set the paths of the model files and the runtime configuration.

Parameters
    [in]  model_file     Path of the model file, e.g. ./ch_ppocr_mobile_v2.0_cls_infer/model.pdmodel.
    [in]  params_file    Path of the parameter file, e.g. ./ch_ppocr_mobile_v2.0_cls_infer/model.pdiparams. If the model format is ONNX, this parameter is ignored.
    [in]  custom_option  RuntimeOption for inference. By default the CPU is used, with the backend chosen from valid_cpu_backends.
    [in]  model_format   Format of the loaded model; the default is the Paddle format.
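A minimal construction sketch follows. It assumes the umbrella header fastdeploy/vision.h (adjust to your install layout) and the model directory used in the parameter descriptions above; the GPU line is an assumed alternative to the CPU default.

    #include <iostream>

    #include "fastdeploy/vision.h"  // assumed umbrella header exposing fastdeploy::vision::ocr

    int main() {
      namespace ocr = fastdeploy::vision::ocr;

      // Default runtime: CPU, with the backend picked from valid_cpu_backends.
      fastdeploy::RuntimeOption option;
      // option.UseGpu(0);  // assumed alternative: run on GPU 0 if your build supports it

      ocr::Classifier classifier("./ch_ppocr_mobile_v2.0_cls_infer/model.pdmodel",
                                 "./ch_ppocr_mobile_v2.0_cls_infer/model.pdiparams",
                                 option);  // model_format defaults to ModelFormat::PADDLE

      if (!classifier.Initialized()) {
        std::cerr << "Failed to initialize the classifier." << std::endl;
        return 1;
      }
      std::cout << "Loaded model: " << classifier.ModelName() << std::endl;
      return 0;
    }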
BatchPredict() [1/2]

    virtual bool fastdeploy::vision::ocr::Classifier::BatchPredict (const std::vector< cv::Mat > &images,
                                                                    vision::OCRResult *ocr_result)    [virtual]

Run classification on a batch of input images and write the results into an OCRResult structure.

Parameters
    [in]  images      The list of input images, each coming from cv::imread() as a 3-D array in HWC layout, BGR format.
    [in]  ocr_result  The OCR classification results will be written into this structure.
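A short sketch of this overload, assuming the classifier constructed in the example above and two hypothetical image files; the cls_labels and cls_scores fields of vision::OCRResult are an assumption based on the vector overload documented next.

    // Assumes classifier from the construction example above.
    std::vector<cv::Mat> images = {cv::imread("line_0.jpg"),   // hypothetical text-line crops
                                   cv::imread("line_1.jpg")};

    fastdeploy::vision::OCRResult result;
    if (classifier.BatchPredict(images, &result)) {
      // The per-image outputs are assumed to be stored in result.cls_labels and
      // result.cls_scores, mirroring the label/score overload documented below.
      std::cout << "classified " << result.cls_labels.size() << " images" << std::endl;
    }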
BatchPredict() [2/2]

    virtual bool fastdeploy::vision::ocr::Classifier::BatchPredict (const std::vector< cv::Mat > &images,
                                                                    std::vector< int32_t > *cls_labels,
                                                                    std::vector< float > *cls_scores)    [virtual]

Run classification on a batch of input images and get the label and score vectors.

Parameters
    [in]  images      The list of input images, each coming from cv::imread() as a 3-D array in HWC layout, BGR format.
    [in]  cls_labels  The label results of the cls model will be written into this vector.
    [in]  cls_scores  The score results of the cls model will be written into this vector.
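The same batch call with plain output vectors, again assuming the classifier and images from the sketches above.

    std::vector<int32_t> cls_labels;
    std::vector<float> cls_scores;

    if (classifier.BatchPredict(images, &cls_labels, &cls_scores)) {
      for (size_t i = 0; i < cls_labels.size(); ++i) {
        std::cout << "image " << i << ": label=" << cls_labels[i]
                  << ", score=" << cls_scores[i] << std::endl;
      }
    }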
Clone()

    virtual std::unique_ptr< Classifier > fastdeploy::vision::ocr::Classifier::Clone () const    [virtual]

Clone a new Classifier with less memory usage when multiple instances of the same model are created.
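A sketch of how Clone() might be used to hand one loaded model to several workers; the memory saving is the behaviour described in the brief above, and the pool size is arbitrary.

    // Assumes classifier from the construction example above.
    std::vector<std::unique_ptr<fastdeploy::vision::ocr::Classifier>> pool;
    for (int i = 0; i < 4; ++i) {
      pool.emplace_back(classifier.Clone());  // each handle reuses the already loaded model
    }
    // Each pool[i] can now run Predict()/BatchPredict() independently.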
Predict() [1/2]

    virtual bool fastdeploy::vision::ocr::Classifier::Predict (const cv::Mat &img,
                                                               int32_t *cls_label,
                                                               float *cls_score)    [virtual]

Predict the input image and get the OCR classification label and score.

Parameters
    [in]  img        The input image, coming from cv::imread() as a 3-D array in HWC layout, BGR format.
    [in]  cls_label  The label result of the cls model will be written into this parameter.
    [in]  cls_score  The score result of the cls model will be written into this parameter.
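A sketch of the single-image call, assuming the classifier from the construction example above; the image path is hypothetical.

    cv::Mat img = cv::imread("text_line.jpg");  // hypothetical cropped text line (BGR, HWC)

    int32_t cls_label = 0;
    float cls_score = 0.0f;
    if (classifier.Predict(img, &cls_label, &cls_score)) {
      std::cout << "label=" << cls_label << ", score=" << cls_score << std::endl;
    }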
Predict() [2/2]

    virtual bool fastdeploy::vision::ocr::Classifier::Predict (const cv::Mat &img,
                                                               vision::OCRResult *ocr_result)    [virtual]

Predict the input image and write the OCR classification result into an OCRResult structure.

Parameters
    [in]  img         The input image, coming from cv::imread() as a 3-D array in HWC layout, BGR format.
    [in]  ocr_result  The OCR classification result will be written into this structure.