
Conversation


@roomote roomote bot commented Oct 3, 2025

Summary

This PR addresses Issue #8482 where GLM-4.6 models occasionally return empty streams, causing "Unexpected API Response" errors in Roo Code.

Problem

Users experience intermittent failures when using GLM-4.6 with both the OpenAI Compatible and Z AI providers. The error occurs when the model returns an empty stream with no text content, causing the application to fail with an unhelpful error message.

Solution

Implemented a three-layer approach to handle empty streams gracefully:

  1. Provider Level: Added GLM-specific detection and fallback responses in both OpenAI and base OpenAI-compatible providers (see the sketch after this list)
  2. Task Level: Enhanced error handling in Task.ts to detect GLM models and automatically retry with clarification prompts
  3. User Experience: Replaced generic error messages with actionable guidance for users
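
At the provider level, the behavior is roughly the sketch below: watch the stream for any text content and, when a GLM/ChatGLM model finishes without producing any, yield a fallback chunk instead of surfacing a raw error. The function name, chunk shape, and hasContent flag are illustrative stand-ins, not the exact Roo Code types.

    // Minimal sketch, assuming an async-generator streaming API that yields
    // { type: "text", text } chunks; only the GLM branch adds a fallback.
    async function* streamWithGlmFallback(
        modelId: string,
        upstream: AsyncIterable<{ type: string; text?: string }>,
    ): AsyncGenerator<{ type: string; text?: string }> {
        let hasContent = false

        for await (const chunk of upstream) {
            if (chunk.type === "text" && chunk.text) {
                hasContent = true
            }
            yield chunk
        }

        const isGLMModel =
            modelId.toLowerCase().includes("glm") || modelId.toLowerCase().includes("chatglm")

        // Non-GLM models keep the original behavior (no synthetic chunk).
        if (!hasContent && isGLMModel) {
            yield {
                type: "text",
                text: "I'm having trouble generating a response. Please try rephrasing your request or breaking it down into smaller steps.",
            }
        }
    }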

Changes

  • 🛡️ Added graceful handling for empty API responses in Task.ts
  • 🔧 Implemented GLM-specific fallback responses in OpenAI and base providers
  • 🔄 Added retry logic for GLM models when empty streams occur (a Task-level sketch follows this list)
  • ✅ Added comprehensive tests for empty stream scenarios
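
At the Task level, the retry behaves roughly like this sketch; the function shape, prompt text, and retryWithPrompt callback are illustrative placeholders for the actual wiring inside Task.ts.

    // Illustrative only: when a GLM model returns an empty stream, push a
    // clarification prompt and retry instead of failing immediately.
    async function handleEmptyAssistantResponse(
        modelId: string,
        retryWithPrompt: (clarification: string) => Promise<void>,
    ): Promise<void> {
        const isGLMModel =
            modelId.toLowerCase().includes("glm") || modelId.toLowerCase().includes("chatglm")

        if (isGLMModel) {
            await retryWithPrompt(
                "The previous response was empty. Please respond to the last request, breaking it into smaller steps if needed.",
            )
            return
        }

        // Non-GLM models keep the original error behavior.
        throw new Error("Unexpected API Response")
    }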

Testing

  • Added new test suite with 4 test cases covering both GLM and non-GLM models (an illustrative example follows this list)
  • All existing tests pass
  • Linting and type checking pass
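
The spec roughly follows the pattern below; streamWithGlmFallback refers to the provider-level sketch earlier in this description, while the real suite drives the actual OpenAI handlers with a mocked client.

    import { describe, it, expect } from "vitest"

    // Hypothetical empty upstream stream for the test.
    async function* emptyStream(): AsyncGenerator<{ type: string; text?: string }> {}

    describe("GLM empty stream handling", () => {
        it("yields a fallback text chunk for GLM models", async () => {
            const chunks: { type: string; text?: string }[] = []
            for await (const chunk of streamWithGlmFallback("glm-4.6", emptyStream())) {
                chunks.push(chunk)
            }
            expect(chunks.some((c) => c.type === "text" && c.text)).toBe(true)
        })

        it("keeps the original behavior for non-GLM models", async () => {
            const chunks: { type: string; text?: string }[] = []
            for await (const chunk of streamWithGlmFallback("gpt-4o", emptyStream())) {
                chunks.push(chunk)
            }
            expect(chunks).toHaveLength(0)
        })
    })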

Impact

  • GLM Users: Will no longer experience crashes from empty streams
  • Other Models: No impact - original error behavior preserved
  • Performance: Minimal - only adds lightweight model ID checking

Fixes #8482


Important

Fixes handling of empty stream responses from GLM models by adding fallback responses and retry logic, improving error handling and user experience.

  • Behavior:
    • Handles empty streams from GLM models in Task.ts by retrying with clarification prompts.
    • Adds fallback responses for GLM models in OpenAiHandler and BaseOpenAiCompatibleProvider.
    • Improves error messages for users when GLM models return empty streams.
  • Testing:
    • Adds glm-empty-stream.spec.ts with 4 test cases for GLM and non-GLM models.
    • Ensures all existing tests pass.
  • Impact:
    • Prevents crashes for GLM users due to empty streams.
    • No impact on other models; original error behavior preserved.

This description was created by Ellipsis for 8663724.

@roomote roomote bot requested review from cte, jr and mrubens as code owners October 3, 2025 05:57
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug Something isn't working labels Oct 3, 2025
    ) {
        yield {
            type: "text",
            text: "I'm having trouble generating a response. Please try rephrasing your request or breaking it down into smaller steps.",

The GLM fallback response here is hardcoded. To ensure consistency and proper localization, consider wrapping this error message in a translation function.

Suggested change
    - text: "I'm having trouble generating a response. Please try rephrasing your request or breaking it down into smaller steps.",
    + text: t("I'm having trouble generating a response. Please try rephrasing your request or breaking it down into smaller steps."),

This comment was generated because it violated a code review rule: irule_C0ez7Rji6ANcGkkX.

    const isGLMModel =
        modelId && (modelId.toLowerCase().includes("glm") || modelId.toLowerCase().includes("chatglm"))

    if (isGLMModel) {

In the GLM-specific branch handling empty responses (around lines 2349–2365), fallback messages and prompts are hardcoded. Consider using a translation function (e.g. t('...')) and possibly refactoring the duplicated GLM check into a helper for clarity.

This comment was generated because it violated a code review rule: irule_C0ez7Rji6ANcGkkX.

@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Oct 3, 2025
Author

@roomote roomote bot left a comment


Performing self-review: a robot grading its own homework—what could possibly be biased?

    const isGLMModel =
        modelId && (modelId.toLowerCase().includes("glm") || modelId.toLowerCase().includes("chatglm"))

    if (isGLMModel) {
Author


[P1] Potential infinite retry loop for GLM empty streams. This path pushes a clarification and immediately retries with no max attempts or backoff. If the provider keeps returning empty streams, this can loop indefinitely and spam error messages. Recommend adding a capped retry counter (e.g., 1–3 attempts) with small delay and logging, and falling back to the standard error after the cap. Also consider adding unit tests that simulate repeated empty responses to verify the retry cap and final behavior.
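
As a rough sketch of that recommendation (the cap, delay, and helper shape are illustrative, not part of this PR):

    // Cap GLM empty-stream retries, with a small backoff, then fall back to
    // the standard error path instead of looping indefinitely.
    const MAX_EMPTY_STREAM_RETRIES = 2

    async function retryGlmEmptyStream(
        attempt: number,
        retryWithPrompt: (clarification: string) => Promise<void>,
    ): Promise<void> {
        if (attempt >= MAX_EMPTY_STREAM_RETRIES) {
            throw new Error("Unexpected API Response")
        }
        // Small delay so repeated empty streams don't hammer the provider.
        await new Promise((resolve) => setTimeout(resolve, 500 * (attempt + 1)))
        await retryWithPrompt("The previous response was empty. Please answer the last request.")
    }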

    }

    // For GLM models that may return empty streams, provide a fallback response
    if (
Author


[P2] Duplicate GLM detection logic. The same modelId.includes('glm'|'chatglm') condition appears here, in BaseOpenAiCompatibleProvider, and in Task.ts. Consider extracting a small helper (e.g., isGlmModel(modelId)) to keep behavior consistent and reduce drift.
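
For instance, a small shared helper along these lines (name and location are just one option, not part of this PR):

    // Hypothetical shared helper; the three call sites would import this instead
    // of repeating the substring checks. Note that the "chatglm" check is already
    // covered by "glm", but it is kept here to mirror the original condition.
    export function isGlmModel(modelId: string | undefined): boolean {
        if (!modelId) {
            return false
        }
        const id = modelId.toLowerCase()
        return id.includes("glm") || id.includes("chatglm")
    }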

@@ -0,0 +1,193 @@
import { describe, it, expect, vi, beforeEach } from "vitest"
Author


[P3] Unused import 'beforeEach' may trip stricter lint settings; suggest removing it.

Suggested change
    - import { describe, it, expect, vi, beforeEach } from "vitest"
    + import { describe, it, expect, vi } from "vitest"




Development

Successfully merging this pull request may close these issues.

[BUG] GLM 4.6 and Roo Code usage error
