Fine-tuning issue of text detection in Paddle #13821
-
Config file I gave during training: the standard layout with the top-level sections Global, Architecture, Loss, Optimizer, PostProcess, Metric, Train and Eval (a rough sketch of this layout is given below).
Config file that I used for converting to the Inference model (I got this during fine-tuning of the model): same top-level structure.
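For reference, that layout sketched with placeholder comments describing what each section typically holds, not the actual values from my run:

Global:
  # run-level settings, e.g. pretrained_model, save_model_dir, epoch_num
Architecture:
  # model definition (the CML config defines Teacher and Student sub-models)
Loss:
  # detection and distillation losses
Optimizer:
  # optimizer and learning-rate schedule
PostProcess:
  # DB post-processing thresholds
Metric:
  # evaluation metric (hmean for detection)
Train:
  # training dataset paths, label file and transforms
Eval:
  # evaluation dataset paths, label file and transforms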
-
For the pursuit of accuracy, choose the teacher model; for the pursuit of speed, choose the student model.
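Concretely, assuming the export produced the usual Student/ and Teacher/ subfolders (the typical layout when exporting a CML-distillation model; the paths below are illustrative, not taken from this thread), the choice can be tried out with PaddleOCR's standalone detection script:

# speed-oriented option: the distilled student model
python tools/infer/predict_det.py --det_model_dir="./det_inference/Student/" --image_dir="./doc/imgs_en/"

# accuracy-oriented option: the larger teacher model
python tools/infer/predict_det.py --det_model_dir="./det_inference/Teacher/" --image_dir="./doc/imgs_en/"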
-
Dear Paddle Community,
I am currently fine-tuning the text detection model of PaddleOCR on a customized dataset. I have shared my YML file for your perusal.
Model I used: I am fine-tuning en_PP-OCRv3_det from this list: https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_en/models_list_en.md. I am using the trained model together with ch_PP-OCRv3_det_cml.yml as the config file during training.
Relevant Information: After training PaddleOCR on a sample dataset, I got the following results displayed on the images:

Task: I wanted to convert the best_accuracy checkpoint into an inference model so that I can use it for text detection tasks. I used tools/export_model.py with the following arguments: -c C:/Users/Adhi/paddle_fine_tuning/OCRv3_fine-tuned/config.yml -o Global.pretrained_model=C:/Adhi/I011786/paddle_fine_tuning/OCRv3_fine-tuned/best_accuracy Global.save_inference_dir=C:/Adhi/I011786/paddle_fine_tuning/det_inference
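In full, the invocation was along these lines (the arguments are exactly the ones listed above; only the python tools/export_model.py prefix is added):

python tools/export_model.py -c C:/Users/Adhi/paddle_fine_tuning/OCRv3_fine-tuned/config.yml -o Global.pretrained_model=C:/Adhi/I011786/paddle_fine_tuning/OCRv3_fine-tuned/best_accuracy Global.save_inference_dir=C:/Adhi/I011786/paddle_fine_tuning/det_inference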
Config file: the training run with my config file generated another config file (saved alongside the checkpoints), which I then used for exporting the inference model.
Problem: The export produced three folders of inference models, and I am confused about which one to use.

Each of the three folders contains its own set of inference.pdmodel, inference.pdiparams and inference.pdiparams.info files.
Please guide me on which inference model to use, thanks!
Best Regards,
Adhitya