feat: Add ONNX support for inference acceleration #276
This commit introduces ONNX-based inference capabilities to the project.
Key changes include:
- Added `onnx` and `onnxruntime` to the dependencies.
- Added an `export_onnx.py` script that converts PyTorch models to ONNX format; the script supports dynamic axes for variable sequence lengths (see the export sketch below).
- Extended `melo/api.py` with a `TTS_ONNX` class that uses `onnxruntime` for inference, mirroring the existing `TTS` class structure (see the runtime sketch below).
- Updated `melo/infer.py` to allow selection between the PyTorch and ONNX backends via the CLI flags `--use_onnx` and `--onnx_path` (see the flag-wiring sketch below).
- Added `test/test_onnx_inference.py` with basic tests for the ONNX inference pipeline, covering model export and audio generation (see the test sketch below).
- Updated `README.md` to document the new ONNX export and inference functionality, including installation, model conversion, and usage instructions.

Converting models to ONNX and running them on ONNX Runtime can yield faster inference.
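A minimal sketch of the `torch.onnx.export` pattern with dynamic axes that a script like `export_onnx.py` would follow. The input/output names, dummy shape, and opset version here are illustrative assumptions, not the actual MeloTTS model signature:

```python
import torch

def export_to_onnx(model: torch.nn.Module, onnx_path: str) -> None:
    """Export a model with dynamic batch/sequence axes (illustrative)."""
    model.eval()
    # Dummy phoneme-ID input; its concrete shape only seeds the trace.
    dummy_phonemes = torch.zeros(1, 64, dtype=torch.long)
    torch.onnx.export(
        model,
        (dummy_phonemes,),
        onnx_path,
        input_names=["phonemes"],  # assumed name, chosen for this sketch
        output_names=["audio"],
        # Dynamic axes let the exported graph accept any batch size and
        # sequence length instead of the dummy input's fixed 1x64 shape.
        dynamic_axes={
            "phonemes": {0: "batch", 1: "seq_len"},
            "audio": {0: "batch", 1: "audio_len"},
        },
        opset_version=17,
    )
```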
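On the `onnxruntime` side, a hedged sketch of the session handling a `TTS_ONNX`-style class might wrap. The real class in `melo/api.py` also carries the text front-end of the existing `TTS` class, which is omitted here, and the class and method names are assumptions:

```python
import numpy as np
import onnxruntime as ort

class TTSOnnxSketch:
    def __init__(self, onnx_path: str):
        # Prefer GPU when available; onnxruntime falls back to the CPU
        # provider automatically if CUDA is not usable.
        self.session = ort.InferenceSession(
            onnx_path,
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )

    def infer(self, phoneme_ids: np.ndarray) -> np.ndarray:
        # Input/output names must match those chosen at export time.
        (audio,) = self.session.run(
            ["audio"], {"phonemes": phoneme_ids.astype(np.int64)}
        )
        return audio
```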
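The flag wiring in `melo/infer.py` could look roughly like the following `argparse` sketch; the dispatch at the end is an assumption about how the PR selects between the two backends, and the `TTS_ONNX` constructor signature is hypothetical:

```python
import argparse

from melo.api import TTS  # TTS_ONNX would be imported alongside it

parser = argparse.ArgumentParser()
parser.add_argument("--use_onnx", action="store_true",
                    help="Run inference with the ONNX Runtime backend.")
parser.add_argument("--onnx_path", type=str, default=None,
                    help="Path to an exported .onnx model file.")
args = parser.parse_args()

# Choose the backend based on the flag; since TTS_ONNX mirrors TTS,
# the rest of the script can treat both backends uniformly.
if args.use_onnx:
    tts = TTS_ONNX(args.onnx_path)  # hypothetical constructor
else:
    tts = TTS(language="EN")
```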
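Finally, a self-contained sketch of the export-then-infer round trip that `test/test_onnx_inference.py` could check, using a stand-in module so it runs without a checkpoint; the real test would exercise an actual MeloTTS model and its audio output:

```python
import numpy as np
import onnxruntime as ort
import torch

class DummyModel(torch.nn.Module):
    # Stand-in for a real TTS model so the sketch is self-contained.
    def forward(self, phonemes: torch.Tensor) -> torch.Tensor:
        return torch.sin(phonemes.float())

def test_onnx_roundtrip(tmp_path):
    onnx_path = str(tmp_path / "model.onnx")
    torch.onnx.export(
        DummyModel(),
        (torch.zeros(1, 8, dtype=torch.long),),
        onnx_path,
        input_names=["phonemes"],
        output_names=["audio"],
        # Mark the sequence axis dynamic so inputs longer than the
        # 8-step export dummy are accepted at run time.
        dynamic_axes={"phonemes": {1: "seq_len"}, "audio": {1: "seq_len"}},
    )
    session = ort.InferenceSession(
        onnx_path, providers=["CPUExecutionProvider"]
    )
    # Feed a longer sequence than the export dummy to confirm the
    # dynamic axis works, then check that output came back.
    (audio,) = session.run(["audio"], {"phonemes": np.ones((1, 16), np.int64)})
    assert audio.shape == (1, 16)
```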