Conversation

SangeetaMishr
Collaborator

@SangeetaMishr SangeetaMishr commented Aug 12, 2025

Summary by CodeRabbit

  • Documentation
    • Revamped Bhashini integration guide into an end-to-end tutorial for Speech-to-Text and Text-to-Speech.
    • Added precise step-by-step flows with clear inputs, outputs, and parameter guidance.
    • Added sample flows, a showcase video, and a blogs section with external resources.
    • Updated visuals and hosted assets; highlighted Google Cloud Storage setup for voice notes.
    • Expanded use cases (multilingual campaigns, regional responses, transliteration).
    • Updated page metadata: advanced level, ~6-minute read, and latest update timestamp.

mahajantejas and others added 5 commits June 27, 2024 15:49
Deleted extra bhashini asr documentation
Updated documentation for bhashini speech to text 
Added documentation for bhashini text to speech
Contributor

coderabbitai bot commented Aug 12, 2025

Warning

Rate limit exceeded

@akanshaaa19 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 14 minutes and 13 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 0e0e3f3 and 0307662.

📒 Files selected for processing (1)
  • docs/5. Integrations/Bhashini Integrations.md (1 hunks)

Walkthrough

Comprehensive rewrite of the Bhashini integration doc: converts a brief overview into an end-to-end, stepwise guide for Speech-to-Text (STT) and Text-to-Speech (TTS), adds explicit node/result names, parameter mappings, sample flows, video/blog links, visuals, and Google Cloud Storage guidance. Metadata updated (duration, level, last updated).

Changes

| Cohort / File(s) | Change summary |
| --- | --- |
| **Documentation overhaul**<br>`docs/4. Integrations/Bhashini Integrations.md` | Fully rewritten as an end-to-end tutorial for STT and TTS: updated header/metadata (6-minute read, Advanced, Last Updated: August 2025), explicit step-by-step STT (`speech` → `speech_to_text_with_bhasini` webhook → `bhashini_asr` result) and TTS (`text` → `nmt_tts_with_bhasini` webhook → `bhashini_tts` result) workflows, named result conventions, function body parameter examples, sample flow and showcase video/blog links, updated visuals, and a Google Cloud Storage setup note for storing voice media. |

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor User as User
    participant Flow as Glific Flow
    participant WH as Call Webhook (speech_to_text_with_bhasini)
    participant ASR as Bhashini ASR
    participant GCS as Google Cloud Storage

    rect rgba(220,235,255,0.4)
    note over User,Flow: STT — Capture audio and transcribe
    User->>Flow: Send voice note
    Flow->>Flow: Wait for Response (audio required)
    Flow->>WH: Invoke webhook with `speech` + contact
    WH->>ASR: Submit audio for transcription
    ASR-->>WH: asr_response_text
    WH-->>Flow: Result: `bhashini_asr`
    Flow->>GCS: (optional) Store voice media
    Flow-->>User: Send message using `@results.bhashini_asr.asr_response_text`
    end
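The STT flow above invokes the webhook with a `speech` parameter (Step 4's Function Body in the doc). A minimal sketch of that payload might look like the following; the `speech` key comes from the walkthrough, while the exact result expression `@results.speech` and the `@contact` mapping are assumptions based on the result name used in the Wait for Response step:

```json
{
  "speech": "@results.speech",
  "contact": "@contact"
}
```

Verify the expressions against the screenshots in the rewritten doc before copying them into a flow.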
sequenceDiagram
    autonumber
    actor User as User
    participant Flow as Glific Flow
    participant WH as Call Webhook (nmt_tts_with_bhasini)
    participant NMT as Bhashini NMT/TTS
    participant GCS as Google Cloud Storage

    rect rgba(220,255,220,0.4)
    note over User,Flow: TTS — Translate text and synthesize audio
    Flow->>User: Prompt for text
    User-->>Flow: Provide phrase
    Flow->>WH: Invoke webhook with `text`, `source_language`, `target_language`
    WH->>NMT: Translate + synthesize
    NMT-->>WH: `media_url` (+ `translated_text`)
    WH-->>Flow: Result: `bhashini_tts`
    Flow->>GCS: (optional) Store generated audio
    Flow-->>User: Send message with audio (`media_url`) and optional `translated_text`
    end
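Similarly, the TTS webhook's Function Body can be sketched as a small JSON payload. The three keys match the parameters named in the diagram; the `@results.result_3.input` expression is borrowed from the review's own suggestion for mapping the user's reply, and the lowercase language values follow the supported-language list quoted later in the review:

```json
{
  "text": "@results.result_3.input",
  "source_language": "english",
  "target_language": "hindi"
}
```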

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I twitch my whiskers at docs anew,
Steps and webhooks hop into view.
ASR hums, TTS sings bright,
Media stored and flows take flight.
A rabbit cheers for Bhashini’s light 🥕✨

✨ Finishing Touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch update-bhashinisttandtts

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
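Putting those pieces together, a minimal `.coderabbit.yaml` might look like the sketch below. The `reviews.profile` key is an assumption inferred from the "Review profile: CHILL" line shown in the review details; confirm it against the linked schema before committing:

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
reviews:
  profile: chill
```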

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
docs/4. Integrations/Bhashini Integrations.md (6)

19-19: Fix markdown heading structure.

The heading level jumps from h2 to h4, violating markdown best practices. Also remove the trailing colon from the heading.

-### This integration can be especially useful in use cases such as:
+## Use Cases
+
+This integration can be especially useful in scenarios such as:

29-31: Remove trailing punctuation from headings.

Multiple headings contain trailing periods which violate markdown best practices.

-#### Step 1: Create a `Send message` node directing users to send their responses as audio messages, based on their preference.
+#### Step 1: Create a `Send message` node directing users to send their responses as audio messages, based on their preference

-#### Step 2: In the `Wait for response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name.
+#### Step 2: In the `Wait for response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name

-#### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service.
+#### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service

-#### Step 4: Click on `Function Body` (top right corner) and add the parameters as shown in the screenshot below.
+#### Step 4: Click on `Function Body` (top right corner) and add the parameters as shown in the screenshot below

-#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user

Also applies to: 35-35, 43-43, 50-50


70-70: Fix heading structure and remove trailing punctuation.

Multiple TTS section headings have incorrect structure and trailing punctuation.

-#### Step 1: Create a `Send Message` node asking users to reply in text if they prefer.
+#### Step 1: Create a `Send Message` node asking users to reply in text if they prefer

-#### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name.
+#### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name

-#### Step 3: Create a  'Call Webhook' node.
+#### Step 3: Create a 'Call Webhook' node

-#### Step 4: Click  Function Body (top right corner) and add the parameters as shown in the screenshot below.
+#### Step 4: Click Function Body (top right corner) and add the parameters as shown in the screenshot below

-#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`.
+#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`

Also applies to: 73-73, 79-79, 88-88, 115-115


101-101: Fix heading indentation.

The heading is not properly indented and should start at the beginning of the line.

- #### Step 5: Create a `send Message` node and paste the variable.
+#### Step 5: Create a `send Message` node and paste the variable

112-112: Consider more concise wording.

The phrase "In order to get" can be simplified for better readability.

-Please note: In order to get the voice notes as outputs, the Glific instance must be linked to the Google Cloud Storage for your organization.
+Please note: To get voice notes as outputs, the Glific instance must be linked to Google Cloud Storage for your organization.

139-139: Fix emphasis used as heading.

The italic text should be converted to a proper heading or remain as regular text.

-_Watch from 25 minute mark to watch the Bhashini integration part_
+**Note:** Watch from the 25-minute mark to see the Bhashini integration demonstration.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 78a409f and e7241df.

📒 Files selected for processing (3)
  • docs/4. Integrations/Bhashini ASR (0 hunks)
  • docs/4. Integrations/Bhashini ASR.md (0 hunks)
  • docs/4. Integrations/Bhashini Integrations.md (1 hunks)
💤 Files with no reviewable changes (2)
  • docs/4. Integrations/Bhashini ASR.md
  • docs/4. Integrations/Bhashini ASR
🧰 Additional context used
🪛 LanguageTool
docs/4. Integrations/Bhashini Integrations.md

[style] ~112-~112: Consider a more concise word here.
Context: ...fb-9178-66eaee0553a7" /> Please note: In order to get the voice notes as outputs, the Gli...

(IN_ORDER_TO_PREMIUM)

🪛 markdownlint-cli2 (0.17.2)
docs/4. Integrations/Bhashini Integrations.md

19-19: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)


19-19: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


29-29: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)


29-29: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


31-31: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


35-35: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


50-50: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


70-70: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)


70-70: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


73-73: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


79-79: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


88-88: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


101-101: Headings must start at the beginning of the line

(MD023, heading-start-left)


101-101: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


115-115: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


139-139: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

🔇 Additional comments (8)
docs/4. Integrations/Bhashini Integrations.md (8)

1-11: LGTM! Clean document metadata structure.

The metadata table provides essential information about reading time, complexity level, and last updated date in a well-structured format.


12-25: LGTM! Clear introduction and context.

The introduction effectively explains Bhashini's purpose and its integration benefits for NGOs. The use cases section provides concrete examples that help users understand the value proposition.


37-37: LGTM! Critical function name specification.

Correctly specifies the exact function name speech_to_text_with_bhasini that must be used for the Bhashini STT integration.


47-48: LGTM! Clear parameter mapping documentation.

The documentation clearly explains how to map the speech parameter using the result name from the audio capture step, with a helpful alternative example for different variable names.


50-50: LGTM! Essential result variable reference.

Correctly documents the result variable @results.bhashini_asr.asr_response_text that users need to access the converted text output.


81-81: LGTM! Critical function name specification for TTS.

Correctly specifies the exact function name nmt_tts_with_bhasini that must be used for the Bhashini TTS integration.


94-99: LGTM! Comprehensive parameter documentation.

Excellent documentation of the TTS parameters including clear explanations of source/target language usage, same-language handling, and the complete list of supported languages.


58-58: LGTM! Valuable external resources.

The document includes helpful sample flows, blog references, and video demonstrations that provide additional learning resources for users.

Also applies to: 123-123, 130-130, 136-136



#### Step 5: Create a `send Message` node and paste the variable.
`@results.bhasini_tts.media_url` for the voice input. `Bhasini_tts` is the webhook result name used in the given example.
Contributor


⚠️ Potential issue

Fix variable name inconsistency.

There's a typo in the variable name - it should be bhashini_tts to match the webhook result name mentioned earlier.

- `@results.bhasini_tts.media_url` for the voice input. `Bhasini_tts` is the webhook result name used in the given example.
+ `@results.bhashini_tts.media_url` for the voice input. `bhashini_tts` is the webhook result name used in the given example.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
`@results.bhasini_tts.media_url` for the voice input. `Bhasini_tts` is the webhook result name used in the given example.
`@results.bhashini_tts.media_url` for the voice input. `bhashini_tts` is the webhook result name used in the given example.
🤖 Prompt for AI Agents
In docs/4. Integrations/Bhashini Integrations.md around line 102, the inline
variable name uses `bhasini_tts` which is a typo; change it to `bhashini_tts` so
it matches the webhook result name used earlier (i.e., replace occurrences of
`@results.bhasini_tts.media_url` with `@results.bhashini_tts.media_url`).
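The typo fix the bot suggests can also be applied mechanically across the file. Below is a hedged sketch (a hypothetical helper, not part of any tooling referenced here) that corrects the misspelled result-name references, including the capitalized `Bhasini_tts` variant, while leaving the pre-defined webhook function names untouched, since those intentionally use the single-'h' `bhasini` spelling:

```python
import re


def fix_result_name(markdown: str) -> str:
    """Replace misspelled result-name references (bhasini_tts -> bhashini_tts).

    Case-insensitive so `Bhasini_tts` is normalized too. The webhook function
    names (speech_to_text_with_bhasini, nmt_tts_with_bhasini) contain `bhasini`
    only as `..._with_bhasini`, so the `bhasini_tts` pattern never matches them.
    """
    return re.sub(r"\bbhasini_tts\b", "bhashini_tts", markdown, flags=re.IGNORECASE)
```

Running this over the doc would resolve both flagged lines in one pass.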


- Go to `Attachments` in the `Send Message` node
- Select `Expression` from the dropdown.
- Use the following expression: `@results.bhasini_tts.media_url`
Contributor


⚠️ Potential issue

Fix variable name inconsistency in expression.

The expression should use bhashini_tts to match the webhook result name.

-- Use the following expression: `@results.bhasini_tts.media_url`
+- Use the following expression: `@results.bhashini_tts.media_url`
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- Use the following expression: `@results.bhasini_tts.media_url`
- Use the following expression: `@results.bhashini_tts.media_url`
🤖 Prompt for AI Agents
In docs/4. Integrations/Bhashini Integrations.md around line 106, the example
expression uses the incorrect variable name `bhasini_tts`; update the expression
to use `bhashini_tts` to match the webhook result name (i.e., replace
`@results.bhasini_tts.media_url` with the correctly spelled
`@results.bhashini_tts.media_url`).

@akanshaaa19 akanshaaa19 requested a review from Fawas003 August 25, 2025 06:45
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
docs/4. Integrations/Bhashini Integrations.md (1)

93-97: Fix webhook result variable name and casing; also clarify “voice input” wording.

In multiple places, the result key is misspelled as bhasini_tts (missing “h”) and once capitalized as Bhasini_tts. Users will copy/paste these; the current typos will break flows. Also, “voice input” here is actually an output. Apply the following patch.

-`@results.bhasini_tts.media_url` for the voice input. `Bhasini_tts` is the webhook result name used in the given example.
+`@results.bhashini_tts.media_url` for the voice note output. `bhashini_tts` is the webhook result name used in the given example.
-- Use the following expression: `@results.bhasini_tts.media_url`
+- Use the following expression: `@results.bhashini_tts.media_url`
-#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`.
+#### Step 6: To get the translated text out, create another Send Message node and call `@results.bhashini_tts.translated_text`

Also applies to: 103-103

🧹 Nitpick comments (13)
docs/4. Integrations/Bhashini Integrations.md (13)

85-89: Normalize parameter names and list formatting for consistency.

Use snake_case consistently for parameter keys and make the language list readable.

-- `Source_language` : The original language of the text
-`target_language` : The language in which the voice note will be generated
-- If translation is not needed, keep both `Source_language` and `target_language` the same.
-- Supported Target Languages: `"tamil" "kannada" "malayalam" "telugu" "assamese" "gujarati" "bengali" "punjabi" "marathi" "urdu" "spanish" "english" "hindi"`
+- `source_language`: The original language of the text
+- `target_language`: The language in which the voice note will be generated
+- If translation is not needed, keep both `source_language` and `target_language` the same.
+- Supported target languages: "tamil", "kannada", "malayalam", "telugu", "assamese", "gujarati", "bengali", "punjabi", "marathi", "urdu", "spanish", "english", "hindi"
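A normalized parameter list like the one above can also be backed by a small pre-flight check in flow-authoring tooling. This is a hypothetical helper (not part of Glific or Bhashini) that validates parameter values against the supported-language list quoted in the doc and flags the casing issue the review calls out:

```python
# Supported target languages, as listed in the reviewed doc.
SUPPORTED_TARGET_LANGUAGES = {
    "tamil", "kannada", "malayalam", "telugu", "assamese", "gujarati",
    "bengali", "punjabi", "marathi", "urdu", "spanish", "english", "hindi",
}


def check_tts_params(source_language: str, target_language: str) -> list[str]:
    """Return a list of problems with the given TTS parameters (empty if OK)."""
    problems = []
    for name, value in (("source_language", source_language),
                        ("target_language", target_language)):
        if value != value.lower():
            problems.append(f"{name} should be lowercase: {value!r}")
        if value.lower() not in SUPPORTED_TARGET_LANGUAGES:
            problems.append(f"{name} not in supported list: {value!r}")
    return problems
```

Note the review's separate caveat that "spanish" should be verified against Bhashini's actual endpoint support before relying on this list.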

39-40: Fix comma splice in STT function description.

Split into two sentences.

-- In the `Function` field, enter `speech_to_text_with_bhasini`. The function name is pre-defined, you should always use the function `speech_to_text_with_bhasini` to call the Bhashini API for converting audio to text.
+- In the `Function` field, enter `speech_to_text_with_bhasini`. The function name is pre-defined. You should always use the function `speech_to_text_with_bhasini` to call the Bhashini API for converting audio to text.

76-77: Fix comma splice in TTS function description.

Same issue as STT.

-- In the `Function` field, enter `nmt_tts_with_bhasini` The function name is pre-defined, you should always use the function `nmt_tts_with_bhasini` to call the Bhashini API for converting text to audio.
+- In the `Function` field, enter `nmt_tts_with_bhasini`. The function name is pre-defined. You should always use the function `nmt_tts_with_bhasini` to call the Bhashini API for converting text to audio.

18-18: Address markdownlint issues: heading increments and trailing punctuation.

  • Demote/adjust heading levels to increment by one where appropriate.
  • Remove trailing punctuation from headings.
-### This integration can be especially useful in use cases such as:
+## This integration can be especially useful in use cases such as

-#### Step 1: Create a `Send message` node directing users to send their responses as audio messages, based on their preference.
+### Step 1: Create a `Send Message` node directing users to send their responses as audio messages, based on their preference

-#### Step 2: In the `Wait for response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name.
+### Step 2: In the `Wait for Response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name

-#### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service.
+### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service

-#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+### Step 5: Once the webhook is updated, you can refer to the transcribed text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user

-#### Step 1: Create a `Send Message` node asking users to reply in text if they prefer.
+### Step 1: Create a `Send Message` node asking users to reply in text if they prefer

-#### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name.
+### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name

-#### Step 3: Create a 'Call Webhook' node.
+### Step 3: Create a 'Call Webhook' node

-#### Step 4: Click Function Body (top right corner) and add the parameters as shown in the screenshot below.
+### Step 4: Click Function Body (top right corner) and add the parameters as shown in the screenshot below

-#### Step 5: Create a `send Message` node and paste the variable.
+### Step 5: Create a `Send Message` node and paste the variable

-#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`.
+### Step 6: To get the translated text out, create another Send Message node and call `@results.bhashini_tts.translated_text`

Also applies to: 30-30, 32-32, 36-36, 51-51, 67-67, 69-69, 73-73, 81-81, 91-91, 103-103


51-51: Use “transcribed” (ASR) instead of “translated” in STT section.

The variable asr_response_text indicates automatic speech recognition output (transcription), not translation.

-#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+#### Step 5: Once the webhook is updated, you can refer to the transcribed text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.

55-55: Tighten sentence and add comma after introductory phrase.

-The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
+The output depends on the user's language preference. For instance, if a user has selected Hindi, the response will be in the Hindi script.

61-61: Hyphenate “Text-to-Speech” for consistency with the title.

-## Steps to Integrate Bhashini Text To Speech in Glific Flows
+## Steps to Integrate Bhashini Text-to-Speech in Glific Flows

101-101: Polish note style and clarity.

-Please note: In order to get the voice notes as outputs, the Glific instance must be linked to the Google Cloud Storage for your organization. This is to facilitate storage of the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage [click here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/)
+Note: To receive voice notes as outputs, your Glific instance must be linked to Google Cloud Storage for your organization. This stores the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage, see the guide [here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/).

121-121: Minor grammar nit: hyphenate “25-minute.”

Also avoids “emphasis as heading” lint warning; consider making this a plain sentence or a blockquote.

-_Watch from 25 minute mark to watch the Bhashini integration part_
+Watch from the 25-minute mark to see the Bhashini integration segment.

93-93: Optional: add explicit expression example for the TTS text parameter.

To reduce guesswork, you could show how to map the Wait for Response result (e.g., result_3) to text in the Function Body.

-- `text` : It should be updated with the result name given for the response/query provided by the user.
+- `text`: Set this to the user's reply, e.g., `@results.result_3.input` (replace `result_3` with your result name).

95-99: Make attachment steps crisper.

Consider combining bullets to reduce scanning friction.

-- Go to `Attachments` in the `Send Message` node
-- Select `Expression` from the dropdown.
-- Use the following expression: `@results.bhasini_tts.media_url`
+- In the `Send Message` node, open `Attachments` → select `Expression` → set it to `@results.bhashini_tts.media_url`.

57-58: Sample flow links: add brief context for what each demonstrates.

Adding a one-liner (e.g., “STT only” / “TTS with translation”) improves discoverability.

I can propose short captions if you share the flow contents or goals.


1-9: Minor HTML-in-heading pattern — acceptable if consistent with docs site.

If this pattern isn’t standard across docs, consider moving the read-time/level/updated info into a callout or front matter to simplify the DOM and avoid heading nesting quirks.

I can provide a Markdown-only alternative that passes common linters if desired.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e7241df and eef65ad.

📒 Files selected for processing (1)
  • docs/4. Integrations/Bhashini Integrations.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/4. Integrations/Bhashini Integrations.md

[grammar] ~39-~39: There might be a mistake here.
Context: ...ashini API for converting audio to text. - Give the webhook result name - you can u...

(QB_NEW_EN)


[grammar] ~55-~55: There might be a mistake here.
Context: ...of the user. For instance if a user has selected Hindi language, the response from Glifi...

(QB_NEW_EN)


[grammar] ~76-~76: There might be a mistake here.
Context: ...ashini API for converting text to audio. - Give the webhook result name - you can u...

(QB_NEW_EN)


[grammar] ~87-~87: There might be a mistake here.
Context: ...n which the voice note will be generated - If translation is not needed, keep both ...

(QB_NEW_EN)


[grammar] ~88-~88: There might be a mistake here.
Context: ...anguage` and `target_language` the same. - Supported Target Languages: `"tamil" "ka...

(QB_NEW_EN)


[grammar] ~96-~96: There might be a mistake here.
Context: ...- Select Expression from the dropdown. - Use the following expression: `@results....

(QB_NEW_EN)


[style] ~101-~101: Consider a more concise word here.
Context: ...7fb-9178-66eaee0553a7" /> Please note: In order to get the voice notes as outputs, the Gli...

(IN_ORDER_TO_PREMIUM)

🪛 markdownlint-cli2 (0.17.2)
docs/4. Integrations/Bhashini Integrations.md

18-18: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)


18-18: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


30-30: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)


30-30: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


32-32: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


36-36: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


51-51: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


67-67: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)


67-67: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


69-69: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


73-73: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


81-81: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


91-91: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


103-103: Trailing punctuation in heading
Punctuation: '.'

(MD026, no-trailing-punctuation)


121-121: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

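An alternative to rewording every heading, if the trailing punctuation and step-style heading levels are intentional house style, is to relax these rules in the docs repo's lint config. A minimal sketch, assuming a `.markdownlint.json` at the repo root (markdownlint-cli2 also reads this file; the rule keys are standard, but disabling rather than rewording is this editor's assumption, not something the review prescribes):

```json
{
  "MD001": false,
  "MD026": { "punctuation": ",;!" },
  "MD036": false
}
```

Here `punctuation` lists only the characters MD026 still flags, so trailing `.` and `:` in headings become permitted, while MD001 (heading increments) and MD036 (emphasis as heading) are disabled outright.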
🔇 Additional comments (4)
docs/4. Integrations/Bhashini Integrations.md (4)

11-24: Solid restructure and clarity upgrade — nice work.

The doc moves from high-level to actionable, stepwise guidance for both STT and TTS with screenshots and sample flows. This will materially reduce implementation errors for users.


69-69: Verify UI label “has only the phrase.”

Confirm that the Wait for Response option label matches the current Glific UI copy; otherwise, update to the exact product wording.

Would you like me to scan other docs in this repo to align the phrasing across all flow-related pages?


89-89: Double-check supported language list (includes “spanish”).

Bhashini primarily targets Indic languages. Please verify whether “spanish” is actually supported in this specific endpoint and update the list accordingly or add a note about partial support.

If confirmed, consider linking to Bhashini’s official supported-language list for this API.


101-101: Validate GCS Setup link & Onboarding nav

  • The “click here” URL is missing the file’s 01. prefix. Update the link from
    https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/
    to
    https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/01.%20Google%20Cloud%20Storage%20Setup/
    so it matches docs/2. Onboarding/GCS Setup/01. Google Cloud Storage Setup.md.
  • Confirm that 01. Google Cloud Storage Setup.md is listed under the Onboarding → GCS Setup section in your site’s navigation configuration (e.g., mkdocs.yml).


github-actions bot commented Aug 25, 2025

@github-actions github-actions bot temporarily deployed to pull request August 25, 2025 07:17 Inactive
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
docs/4. Integrations/Bhashini Integrations.md (2)

93-99: Fix result name typo: use bhashini_tts consistently in expressions and text.

The webhook result is introduced as bhashini_tts (Line 77) but referenced as bhasini_tts and Bhasini_tts here. This will break expressions at runtime.

Apply this diff:

```diff
-`@results.bhasini_tts.media_url` for the voice input. `Bhasini_tts` is the webhook result name used in the given example.
+`@results.bhashini_tts.media_url` for the voice input. `bhashini_tts` is the webhook result name used in the given example.

- - Use the following expression: `@results.bhasini_tts.media_url`
+ - Use the following expression: `@results.bhashini_tts.media_url`
```

103-103: Fix result name typo in Step 6 and remove trailing punctuation in the heading.

Use the same result name bhashini_tts everywhere; also remove the trailing period per markdownlint MD026.

```diff
-#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`.
+#### Step 6: To get the translated text out, create another send message node, and call `@results.bhashini_tts.translated_text`
```
🧹 Nitpick comments (12)
docs/4. Integrations/Bhashini Integrations.md (12)

51-51: STT wording: this is “transcribed” text, not “translated.”

ASR returns a transcript. Recommend “transcribed text” to avoid confusion with NMT/TTS.

```diff
-#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+#### Step 5: Once the webhook is updated, you can refer to the transcribed text as `@results.bhashini_asr.asr_response_text` inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
```

30-51: Normalize heading levels and remove trailing periods in STT steps; align node names with UI.

Current headings jump levels (MD001) and several step headings end with a period (MD026). Also standardize node names: “Send Message”, “Wait for Response”.

```diff
-#### Step 1: Create a `Send message` node directing users to send their responses as audio messages, based on their preference.
+### Step 1: Create a `Send Message` node directing users to send their responses as audio messages based on their preference

-#### Step 2: In the `Wait for response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name.
+### Step 2: In the `Wait for Response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name.

-#### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service.
+### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service

-#### Step 4: Click on `Function Body` (top right corner) and add the parameters as shown in the screenshot below
+### Step 4: Click `Function Body` (top-right corner) and add the parameters as shown in the screenshot below

-#### Step 5: Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+### Step 5: Once the webhook is updated, you can refer to the transcribed text as `@results.bhashini_asr.asr_response_text` inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user
```

67-91: Normalize heading levels and punctuation in TTS steps; fix capitalization of node names.

Bring steps to h3 under the section h2, remove trailing periods (MD026), and use Send Message consistently.

```diff
-#### Step 1: Create a `Send Message` node asking users to reply in text if they prefer.
+### Step 1: Create a `Send Message` node asking users to reply in text if they prefer

-#### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name.
+### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name

-#### Step 3: Create a 'Call Webhook' node.
+### Step 3: Create a `Call Webhook` node

-#### Step 4: Click Function Body (top right corner) and add the parameters as shown in the screenshot below.
+### Step 4: Click `Function Body` (top-right corner) and add the parameters as shown in the screenshot below

-#### Step 5: Create a `send Message` node and paste the variable.
+### Step 5: Create a `Send Message` node and paste the variable

-#### Step 6: To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`.
+### Step 6: To get the translated text out, create another `Send Message` node and call `@results.bhashini_tts.translated_text`
```

Also applies to: 103-103


18-24: Fix heading level and trailing punctuation.

After the main h1, the next heading should be h2 (MD001). Also remove trailing colon (MD026).

```diff
-### This integration can be especially useful in use cases such as:
+## This integration can be especially useful in use cases such as
```

39-40: Tighten grammar and punctuation in webhook bullet points.

Use an em dash instead of repeated hyphens and clarify the sentence.

```diff
-- Give the webhook result name - you can use any name. In the screenshot example, it’s named `bhashini_asr`.
+- Give the webhook result name—use any name. In the screenshot example, it’s `bhashini_asr`.
```

Apply a similar edit in the TTS section (Lines 76–78).


55-55: Comma splice and article use.

Add a comma after “For instance” and the article “the.”

```diff
-The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
+The text output from Bhashini depends on the user’s language preference. For instance, if a user has selected the Hindi language, the response from Glific will be in the Hindi script.
```

81-81: Hyphenate “top-right” for consistency.

```diff
-#### Step 4: Click Function Body (top right corner) and add the parameters as shown in the screenshot below.
+#### Step 4: Click Function Body (top-right corner) and add the parameters as shown in the screenshot below.
```

85-90: Parameter casing and supported languages: align with function schema and verify list.

  • Use snake_case consistently for parameters to match the function schema (the doc elsewhere references text, source_language, target_language).
  • Verify the “Supported Target Languages” list against the current Bhashini capabilities; “spanish” may not be supported by Bhashini.
```diff
-- `Source_language` : The original language of the text
-`target_language` : The language in which the voice note will be generated
-- If translation is not needed, keep both `Source_language` and `target_language` the same.
-- Supported Target Languages: `"tamil" "kannada" "malayalam" "telugu" "assamese" "gujarati" "bengali" "punjabi" "marathi" "urdu" "spanish" "english" "hindi"`
+- `source_language`: The original language of the text
+- `target_language`: The language in which the voice note will be generated
+- If translation is not needed, keep both `source_language` and `target_language` the same.
+- Supported target languages: please confirm against the latest Bhashini docs. If we want to list them here, use a comma-separated, lowercase list (e.g., `tamil, kannada, malayalam, telugu, assamese, gujarati, bengali, punjabi, marathi, urdu, english, hindi`).
```

If you’d like, I can update this after you confirm the authoritative list.
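For context, the Function Body under discussion is a small JSON payload. A hypothetical sketch using the doc's parameter names — the `@results.result_3.input` reference and the language values are illustrative, not prescribed by the doc or the Bhashini API:

```json
{
  "text": "@results.result_3.input",
  "source_language": "english",
  "target_language": "hindi"
}
```

Per the doc's own note, keeping `source_language` and `target_language` identical skips translation and only synthesizes the voice note.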


101-101: Streamline the GCS note for clarity and tone.

“Please note: In order to …” is wordy. Suggest concise phrasing.

```diff
-Please note: In order to get the voice notes as outputs, the Glific instance must be linked to the Google Cloud Storage for your organization. This is to facilitate storage of the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage [click here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/)
+To receive voice notes as output, your Glific instance must be linked to Google Cloud Storage for your organization. This stores the audio generated by Bhashini via the webhook. To set up Google Cloud Storage, [click here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/).
```

57-58: Ensure long-term availability of Sample Flow links.

Google Drive links can break or change permissions. Prefer hosting sample flows in the docs site or repo (e.g., versioned assets) and linking to them.

Would you like me to create a PR to add these flows to the docs assets and update the links?

Also applies to: 107-107


111-121: Heading levels and emphasis: promote sections to h2 and convert the emphasized note to a sentence.

Bring “Blogs” and “Video of Showcase” to h2 (siblings of other top sections). Avoid using emphasis as a heading (MD036).

```diff
-### Blogs
+## Blogs

-### Video of Showcase
+## Video Showcase

-_Watch from 25 minute mark to watch the Bhashini integration part_
+Watch from the 25-minute mark to see the Bhashini integration.
```

24-24: Add terminal punctuation to the sentence.

```diff
-Bhashini specializes in Indic language translation and transliteration, supporting a wide range of languages and dialects. You can learn more about the platform [here](https://bhashini.gov.in)
+Bhashini specializes in Indic language translation and transliteration, supporting a wide range of languages and dialects. You can learn more about the platform [here](https://bhashini.gov.in).
```
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between eef65ad and 0e0e3f3.

📒 Files selected for processing (1)
  • docs/4. Integrations/Bhashini Integrations.md (1 hunks)

@github-actions github-actions bot temporarily deployed to pull request August 25, 2025 07:32 Inactive
@akanshaaa19 akanshaaa19 merged commit 6b1d283 into main Aug 25, 2025
7 checks passed