RepSVTR model deployed with fastdeploy - TensorRT for inference: with the precision set to FP16 vs FP32, the single-image inference time differs only slightly (0.5%) #14975

Yins11 opened this issue Apr 3, 2025 · 1 comment

Yins11 commented Apr 3, 2025

🔎 Search before asking

  • I have searched the PaddleOCR Docs and found no similar bug report.
  • I have searched the PaddleOCR Issues and found no similar bug report.
  • I have searched the PaddleOCR Discussions and found no similar bug report.

🐛 Bug (Problem Description)

Training config file: rec_repsvtr_ch.yml
This can also be tested with the official inference model: https://paddlepaddle.github.io/PaddleOCR/main/algorithm/text_recognition/algorithm_rec_svtrv2.html

When testing with TensorRT (FP16) and TensorRT (FP32), the single-image inference time differs very little. What could be the reason?

🏃‍♂️ Environment (Runtime Environment)

Windows, RTX 3070

🌰 Minimal Reproducible Example (Minimal demo reproducing the problem)

Inference is deployed with C++ (not Python), with batch_size = 12 and thread_num = 4.
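
For reference, here is a minimal sketch of that kind of runtime configuration, assuming FastDeploy's C++ RuntimeOption / vision::ocr::Recognizer API; the input tensor name "x", the shape bounds, and the model/dict paths are placeholders, and the exact option names (e.g. trt_option.enable_fp16 vs. EnableTrtFP16()) vary between FastDeploy versions:

```cpp
// Sketch only: FastDeploy + TensorRT setup for a RepSVTR text recognizer.
// Paths, the input tensor name "x", and the shape bounds are placeholders.
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseGpu(0);                      // run on GPU 0 (RTX 3070)
  option.UseTrtBackend();                // TensorRT backend
  option.SetCpuThreadNum(4);             // thread_num = 4 as in the test above
  option.trt_option.enable_fp16 = true;  // set to false for the FP32 run
                                         // (older releases: option.EnableTrtFP16())

  // Dynamic input shape so batches of up to 12 crops can be fed at once.
  option.SetTrtInputShape("x", {1, 3, 48, 10}, {12, 3, 48, 320},
                          {12, 3, 48, 2304});

  auto rec = fastdeploy::vision::ocr::Recognizer(
      "ch_RepSVTR_rec_infer/inference.pdmodel",
      "ch_RepSVTR_rec_infer/inference.pdiparams",
      "ppocr_keys_v1.txt", option);
  if (!rec.Initialized()) {
    std::cerr << "Failed to initialize the recognizer." << std::endl;
    return -1;
  }
  return 0;
}
```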

@jingsongliujing (Collaborator)

Try changing thread_num to 1. For single-image inference, thread_num = 1 may be more efficient.
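
If it helps, that change would look roughly like this with FastDeploy's C++ runtime options (assuming SetCpuThreadNum is the setting behind thread_num):

```cpp
// For single-image inference, keep CPU pre/post-processing single-threaded.
option.SetCpuThreadNum(1);
```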
