Description
🔎 Search before asking
- I have searched the PaddleOCR Docs and found no similar bug report.
- I have searched the PaddleOCR Issues and found no similar bug report.
- I have searched the PaddleOCR Discussions and found no similar bug report.
🐛 Bug (Problem description)
Previously, I deployed PaddleOCR 2.10.0 with the v4 server models on an AWS g4dn.2xlarge instance running paddlepaddle=3.0.0rc1, and it works fine.
Now I want to switch to the v5 server models with paddleocr=3.0.2 (also tried from git), paddlex=3.0.2 (also tried from git), and the paddlepaddle nightly build (also tried 3.0.0rc1, as before), but with every prediction the GPU VRAM usage keeps growing until the process crashes.
System RAM usage stays stable. Do you have any idea what could be causing this?
🏃‍♂️ Environment (Runtime environment)
- paddleocr=3.0.2 (also tried from git)
- paddlex=3.0.2 (also tried from git)
- paddlepaddle nightly build (also tried 3.0.0rc1, as before)
🌰 Minimal Reproducible Example (Minimal demo reproducing the problem)
from paddleocr import PaddleOCR

paddle_ocr_base_path = "/path/to/models"  # placeholder: base directory containing the downloaded inference models

pretrained_ocr = PaddleOCR(
    text_detection_model_name="PP-OCRv5_server_det",
    text_detection_model_dir=f"{paddle_ocr_base_path}/pretrained/PP-OCRv5_server_det_infer",
    text_recognition_model_name="PP-OCRv4_server_rec_doc",
    text_recognition_model_dir=f"{paddle_ocr_base_path}/pretrained/PP-OCRv4_server_rec_doc_infer",
    textline_orientation_model_name="PP-LCNet_x1_0_textline_ori",
    textline_orientation_model_dir=f"{paddle_ocr_base_path}/pretrained/PP-LCNet_x1_0_textline_ori_infer",
    use_textline_orientation=True, use_doc_unwarping=False, use_doc_orientation_classify=False,
    enable_mkldnn=True, mkldnn_cache_capacity=10, device="gpu:0",
)
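For context, the leak shows up while looping over page images and calling the pipeline once per image; nothing is kept between iterations. The sketch below is only illustrative: the test_pages folder is a hypothetical input directory, the predict() call assumes the PaddleOCR 3.x API, and the paddle.device.cuda.memory_allocated logging is added purely to make the per-prediction VRAM growth visible.

import glob

import paddle

# Hypothetical folder of sample page images; any batch of inputs shows the same pattern.
image_paths = sorted(glob.glob(f"{paddle_ocr_base_path}/test_pages/*.png"))

for i, image_path in enumerate(image_paths):
    # One prediction per page; the result is consumed and dropped immediately.
    result = pretrained_ocr.predict(image_path)

    # Report the VRAM Paddle currently holds on gpu:0; on this setup the value
    # climbs after every call instead of plateauing, until the process crashes.
    allocated_mib = paddle.device.cuda.memory_allocated(0) / (1024 ** 2)
    print(f"prediction {i}: {allocated_mib:.1f} MiB allocated on gpu:0")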