[Inference] Add ASR support for Replicate provider #1679
Conversation
Looks good to me. 👍🏼
Hi @lucataco, thanks a lot for the contribution! Could you also add the `automatic-speech-recognition` mapping for Replicate in

```ts
export const PROVIDERS: Record<InferenceProvider, Partial<Record<InferenceTask, TaskProviderHelper>>> = {
```

You can find the complete guidelines for provider/task JS integration in the documentation here: https://huggingface.co/docs/inference-providers/register-as-a-provider#2-js-client-integration
Thank you for taking a look! I've added the mapping as specified.
```ts
const out = response?.output as
	| undefined
	| {
			transcription?: string;
			translation?: string;
			txt_file?: string;
	  };
```
following the schema defined in https://replicate.com/openai/whisper/api/schema#output-schema
Thanks @lucataco for the PR! I pushed a commit to fix the response parsing part.
Also, I think the version is missing in the providerId defined in the Replicate model mapping: https://huggingface.co/api/partners/replicate/models. It should be `"openai/whisper:8099696689d249cf8b122d833c36ac3f75505c666a395ca40ef26f68e7d3d16e"`. Could you update it accordingly? Thanks 🙏
Oh good catch, thank you! Yes, of course.
Gentle bump. Anything blocking getting this shipped?
Merging. @SBrandeis @hanouticelina
Pull Request Overview
This PR adds Automatic Speech Recognition (ASR) support for the Replicate provider in the inference package. It enables users to perform speech-to-text transcription using Replicate models like OpenAI's Whisper.
- Implements a `ReplicateAutomaticSpeechRecognitionTask` class to handle ASR requests for the Replicate provider
- Removes existing output validation from the generic ASR function to allow provider-specific handling
- Registers the new ASR task in the provider configuration
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| packages/inference/src/tasks/audio/automaticSpeechRecognition.ts | Removes generic output validation to allow provider-specific response handling |
| packages/inference/src/providers/replicate.ts | Implements new ASR task class with audio input processing and response parsing |
| packages/inference/src/lib/getProviderHelper.ts | Registers the new ASR task for the Replicate provider |
```ts
if (!blob || !(blob instanceof Blob)) {
	throw new Error("Audio input must be a Blob");
}
```
The error message 'Audio input must be a Blob' is not descriptive enough. Consider providing more context about expected input formats and how to convert them to Blob.
Suggested change:

```ts
if (!blob || !(blob instanceof Blob)) {
	throw new Error(
		"Audio input must be a Blob (e.g., a File or Blob object from the browser). " +
			"Received: " + (blob === undefined ? "undefined" : typeof blob) + ". " +
			"To convert an ArrayBuffer or base64 string to a Blob, use: " +
			"`new Blob([arrayBuffer], { type: 'audio/wav' })` or " +
			"`fetch('data:audio/wav;base64,...').then(res => res.blob())`. " +
			"See documentation for supported input formats."
	);
}
```
Hello! This PR adds support for the `Automatic Speech Recognition` task type for Replicate models.

Example:
- https://huggingface.co/openai/whisper-large-v3
- https://replicate.com/openai/whisper

cc @hanouticelina

Co-authored-by: Celina Hanouti <hanouticelina@gmail.com>
Co-authored-by: Eliott C. <coyotte508@gmail.com>