@abhishek-oai commented Aug 19, 2025

Summary

Adds “bang” command passthrough in the TUI: any input starting with ! (e.g. !ls, !cat README.md) runs in the user’s shell and the output is rendered in conversation history.

Rationale

  • Enables quick terminal commands inline without leaving the TUI.
  • Mirrors common chat UIs that support “bang” passthrough while preserving Codex’s transcript and consistent styling.

User Experience

  • Type a command prefixed with ! in the input box and press Enter.
  • The TUI immediately shows a “Running local cmd: …” entry.
  • When the command finishes, its output is appended:
    • Success: stdout is shown.
    • Failure: stderr is shown.
  • Examples:
    • !ls
    • !cat README.md
    • !rg -n TODO -S

Behavior Details

  • Executed via bash -lc "<raw command>" in the configured workspace cwd, then adapted to the detected user shell when appropriate (e.g., zsh profile sourcing, PowerShell wrapping on Windows).
  • Environment is cleared and re‑constructed via the configured shell_environment_policy (defaults to inheriting the full environment, minus filtered secret‑like vars), then passed to the
    child process.
  • Output is captured and displayed after the process exits (not streamed).
  • Normal chat and /slash commands are unchanged; @file mentions still work.
  • Ctrl+C interrupts a running local command (on Unix, sends SIGINT to the whole process group).
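The flow above can be sketched in plain Rust. This is an illustration, not the codex-rs implementation: the function name is hypothetical, and the real code spawns asynchronously in a tokio task and rebuilds the environment via the shell_environment_policy. It shows the two load-bearing pieces: wrapping the raw input in `bash -lc` and putting the child in its own process group so a Unix SIGINT can interrupt the whole group.

```rust
// Sketch of local "!" command execution (assumed names; the actual
// codex-rs code runs this in a background tokio task with a rebuilt env).
use std::process::{Command, Output};

fn run_local_cmd(raw_cmd: &str, cwd: &std::path::Path) -> std::io::Result<Output> {
    let mut cmd = Command::new("bash");
    // The raw input after "!" is passed verbatim to the login shell.
    cmd.arg("-lc").arg(raw_cmd).current_dir(cwd);
    #[cfg(unix)]
    {
        use std::os::unix::process::CommandExt;
        // New process group: Ctrl+C can later signal the entire group.
        cmd.process_group(0);
    }
    // Blocks until exit; stdout/stderr are captured, not streamed,
    // matching the "output appears after the process exits" behavior.
    cmd.output()
}

fn main() -> std::io::Result<()> {
    let out = run_local_cmd("echo hello", std::path::Path::new("."))?;
    assert!(out.status.success());
    assert_eq!(String::from_utf8_lossy(&out.stdout).trim(), "hello");
    Ok(())
}
```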

Implementation

  • Protocol:
    • New events: LocalCommandBegin { command: Vec<String> }, LocalCommandEnd { stdout, stderr, exit_code }.
  • Core:
    • On Op::LocalExec { raw_cmd }, wraps into ["bash","-lc", raw_cmd], then optionally translates via user_shell.format_default_shell_invocation.
    • Spawns the process in a background tokio task, configures a new process group on Unix, records/clears the child for interruption, and rebuilds env via create_env(...).
    • Emits LocalCommandBegin before spawn and LocalCommandEnd with captured stdout/stderr/exit code when complete.
  • TUI (ChatWidget):
    • Detects leading ! in handle_key_event and submits Op::LocalExec { raw_cmd }.
    • On LocalCommandBegin, adds a “Running local cmd: …” entry and marks a running task.
    • On LocalCommandEnd, renders output via history_cell::new_local_command_output(stdout, stderr, exit_code).
  • TUI rendering:
    • new_running_local_command(...) shows the command being executed.
    • new_local_command_output(...) displays stdout on success and stderr on failure with ANSI styling preserved.
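The event plumbing described above can be approximated as follows. The enum shape and field types are assumptions inferred from the event names in this summary; the real protocol types in codex-rs may differ. The rendering rule matches the stated UX: stdout on success, stderr on failure.

```rust
// Hypothetical shape of the new local-command events (illustrative only).
#[derive(Debug, Clone)]
enum LocalCommandEvent {
    Begin { command: Vec<String> },
    End { stdout: String, stderr: String, exit_code: i32 },
}

fn render(event: &LocalCommandEvent) -> String {
    match event {
        LocalCommandEvent::Begin { command } => {
            // Mirrors the "Running local cmd: …" history entry.
            format!("Running local cmd: {}", command.join(" "))
        }
        LocalCommandEvent::End { stdout, stderr, exit_code } => {
            // Success shows stdout; failure shows stderr, per the UX notes.
            if *exit_code == 0 { stdout.clone() } else { stderr.clone() }
        }
    }
}

fn main() {
    let begin = LocalCommandEvent::Begin {
        command: vec!["bash".into(), "-lc".into(), "ls".into()],
    };
    assert_eq!(render(&begin), "Running local cmd: bash -lc ls");
}
```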

Notes / Limitations

  • Output appears only after the process exits (no incremental streaming yet).
  • Uses bash -lc; requires a Unix‑like environment or bash in PATH; Windows is adapted via PowerShell when detected.
  • Explicitly user‑initiated; bypasses model tool approval and sandbox by design (like a local terminal), while still honoring the configured shell_environment_policy.

github-actions bot commented Aug 19, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@abhishek-oai abhishek-oai requested a review from gpeal August 19, 2025 21:51
@chatgpt-codex-connector bot left a comment
Codex Review: Here are some suggestions.

Reply with @codex fix comments to fix any unresolved comments.

About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request that is ready for review, or mark a draft as ready for review. You can also ask for a review by commenting "@codex review".

Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".

@easong-openai (Collaborator)
We need to make sure cancellation works, particularly Ctrl-C. Right now it doesn't actually cancel the command.

If possible, we should show the command we just ran + any output and delineate it as a user command instead of using the shortening UI we do for the model.

Otherwise nice work!

@easong-openai (Collaborator)
This also doesn't support Windows, which isn't necessarily a blocker but is something we might need to look at.

@abhishek-oai (Author)
> We need to make sure cancellation works, particularly Ctrl-C. Right now it doesn't actually cancel the command.
>
> If possible, we should show the command we just ran + any output and delineate it as a user command instead of using the shortening UI we do for the model.
>
> Otherwise nice work!

Happy to send a patch with both! Wanted to get something out to get buy-in first. Thanks for the prompt feedback.

@SebastianSzturo

This is great! It would also be fantastic to have a way to run background commands like with Claude Code, with the output continuously attached to any new messages.

> Background commands: (Ctrl-b) to run any Bash command in the background so Claude can keep working (great for dev servers, tailing logs, etc.)

Here is also a good description of the feature: sst/opencode#1970

@abhishek-oai abhishek-oai changed the title TUI: ! commands passthrough to shell for execution [tui][core] ! commands passthrough to shell for execution Aug 24, 2025
@abhishek-oai (Author)

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Aug 24, 2025
@abhishek-oai (Author)

> We need to make sure cancellation works, particularly Ctrl-C. Right now it doesn't actually cancel the command.
>
> If possible, we should show the command we just ran + any output and delineate it as a user command instead of using the shortening UI we do for the model.
>
> Otherwise nice work!

Alright, both of those things should work now.

Also, the TUI still acts as the "frontend" and forwards the request to the "core", in line with the existing architecture.

Summary

  • Adds local “!” command execution with session events, env isolation, and TUI/CLI display. Updates docs and config to describe usage and line-capping.

Notes

  • codex-rs/README.md: New “Run a local command with !” section and usage tips.
  • codex-rs/config.md: Documents [tui] local_shell_max_lines (example 150; default described as 100).
  • codex-rs/core: Implements local exec runtime, process-group SIGINT on Unix, env derivation via policy, and LocalCommandBegin/End events.
  • codex-rs/tui: Wires local command begin/end into chat history; updates fixture.
  • codex-rs/exec,mcp-server: Handles/ignores new local-command events appropriately.

Review
Overall looks solid: process-group signaling, env hygiene, and event plumbing are well-done. A few small inconsistencies to address:

  • README link: “See config.md#tui.local_shell_max_lines” should point to config.md#tui (no anchor for the key).
  • Apply the documented line cap: local_shell_max_lines (core/src/config_types.rs, config.md) isn’t used; tui/src/history_cell.rs::new_local_command_output currently prints all lines. Consider using existing output_lines(...) with a default of 100 (configurable via Config.tui.local_shell_max_lines) and showing the “… +N lines” summary.
  • Non-Unix interrupt: core/src/local_exec.rs sets a flag but doesn’t actually signal/terminate the process. If acceptable for now, note the limitation; otherwise consider a Windows-friendly cancellation approach.
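The line-cap fix the review asks for can be sketched as below. The function name and exact summary format are assumptions; the existing output_lines(...) helper in codex-rs/tui may behave differently, and the default of 100 comes from the config.md description above.

```rust
// Sketch of capping local command output at `max_lines` lines and
// summarizing the remainder, as the review suggests (assumed format).
fn cap_output(output: &str, max_lines: usize) -> String {
    let lines: Vec<&str> = output.lines().collect();
    if lines.len() <= max_lines {
        return output.to_string();
    }
    let shown = lines[..max_lines].join("\n");
    // "… +N lines" summary for the hidden tail, per the review note.
    format!("{shown}\n… +{} lines", lines.len() - max_lines)
}

fn main() {
    let many: String = (1..=5).map(|i| format!("line {i}\n")).collect();
    assert_eq!(cap_output(&many, 2), "line 1\nline 2\n… +3 lines");
}
```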

Follow-up (outside this PR scope)

  • Optionally align the CLI’s MAX_OUTPUT_LINES_FOR_EXEC_TOOL_CALL with the TUI default or make it configurable for consistency.


@tibo-openai (Collaborator)

@codex review in depth

@chatgpt-codex-connector bot left a comment

Codex Review: Here are some suggestions.

Reply with @codex fix comments to fix any unresolved comments.

@easong-openai (Collaborator)

Seems worth addressing the codex review notes above ^

@abhishek-oai abhishek-oai enabled auto-merge (squash) August 25, 2025 23:19
@abhishek-oai (Author)

> Seems worth addressing the codex review notes above ^

Fixed.

@abhishek-oai (Author)

@codex review in depth

@chatgpt-codex-connector bot left a comment

Codex Review: Here are some suggestions.

Reply with @codex fix comments to fix any unresolved comments.

@abhishek-oai abhishek-oai force-pushed the bang-command-support branch 2 times, most recently from 771ae55 to 8aaf9f0 Compare August 26, 2025 02:58
@abhishek-oai (Author)

> Seems worth addressing the codex review notes above ^

All fixed!
