Skip to content

Conversation

@carlospolop
Copy link
Collaborator

🤖 Automated Content Update

This PR was automatically generated by the HackTricks News Bot based on a technical blog post.

📝 Source Information

🎯 Content Summary

Scope and timeline
Tenable Research (post dated November 5, 2025) documents seven concrete exploitation techniques against ChatGPT’s browsing and search features that enable indirect prompt injection, safety bypass, stealth, persistence, and private-data exfiltration. Researchers validated behavior primarily on ChatGPT 4o and verified several techniques and PoCs still function on ChatGPT 5.

Relevant ChatGPT internals (how it works)
System ...

🔧 Technical Details

Prompt‑poison any LLM browsing/search step: Hide instructions in web content (e.g., comments) or serve them conditionally to the crawler/browsing agent’s user agent/headers. Once indexed, a victim’s ordinary query can trigger 0‑click compromise via the model’s search→browse pipeline.

Bypass link‑safety with trusted redirectors: Abuse whitelisted bing.com/ck/a tracking URLs to render links that ultimately redirect to attacker‑controlled sites. Pre‑index one page per character and exfiltrate data by outputting sequences of these Bing‑wrapped links.

Bridge browsing→assistant with Conversation Injection: Make the browsing agent append attacker instructions to its own reply so that the assistant, when re‑parsing conversation history, treats them as its instructions and complies on the next turn.

Hide payloads using a markdown rendering quirk: Place directives on the opening code‑fence line ...

🤖 Agent Actions

I reviewed the repository and existing AI security content and found that while HackTricks already covers general prompt injection, indirect injection, and some agentic browser issues, it did not include Tenable’s new attack surface and exploitation details for ChatGPT’s browsing+search pipeline: URL safety redirector abuse, conversation bridging across browsing→assistant, the markdown code-fence rendering quirk, persistent memory injection, and the 1-click injection via chatgpt.com/?q. These are concrete, technical techniques with PoC-ready guidance and were not previously documented in HackTricks.

Actions taken:

  • Updated src/AI/AI-Prompts.md with a new subsection:
    “Agentic Browsing/Search: Prompt Injection, Redirector Exfiltration, Conversation Bridging, Markdown Stealth, Memory Persistence”
    This section:
    • Summarizes relevant ChatGPT internals (web tool contexts: open_url/search, user agents, url_safe gate, memory/bio).
    • Details seven offensive techniques:
      1. Indirect prompt injection through trusted sites (Browsing Context)
      2. 0-click prompt injection via Search Context poisoning
      3. 1-click injection via https://chatgpt.com/?q={payload}
      4. Safety bypass and exfil via Bing redirectors (bing.com/ck/a) with per-character exfil strategy
      5. Conversation Injection (browsing→assistant isolation bridge)
      6. Markdown code-fence stealth UI quirk for hidden payloads
      7. Memory injection for persistence via the bio tool
    • Adds practical operator notes for reproduction (fingerprinting UAs, building redirector sequences, code-fence hiding, persistence).
    • Includes a concise defensive takeaways block for those building agents (provenance binding, redirector handling, markdown normalization, gated memory writes).
  • Restored the “## Tools” heading after insertion to preserve structure.
  • Appended references at the bottom of the AI-Prompts page:
    • Tenable’s HackedGPT blog
    • OpenAI “Memory and new controls for ChatGPT”
    • EmbraceTheRed url_safe mitigation analysis

Why this placement:

  • AI-Prompts.md already hosts prompt injection/jailbreak content and indirect-injection basics. The new subsection extends it specifically for agentic browsing/search pipeline abuse, redirector-based exfil, and ChatGPT memory persistence—fitting the “AI Security Methodology” angle without creating a redundant new page.

Files modified:

  • src/AI/AI-Prompts.md

No other files were changed.


This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.

📚 Repository Maintenance

  • MD files processed: 909
  • MD files fixed: 1

All .md files have been checked for proper formatting (headers, includes, etc.).

@carlospolop
Copy link
Collaborator Author

🔗 Additional Context

Original Blog Post: https://www.tenable.com/blog/hackedgpt-novel-ai-vulnerabilities-open-the-door-for-private-data-leakage

Content Categories: Based on the analysis, this content was categorized under "AI Security > AI Security Methodology (new subsection: Agentic browsing/search prompt injection, redirector-based safety bypass, conversation bridging, markdown stealth, and memory persistence)".

Repository Maintenance:

  • MD Files Formatting: 909 files processed (1 files fixed)

Review Notes:

  • This content was automatically processed and may require human review for accuracy
  • Check that the placement within the repository structure is appropriate
  • Verify that all technical details are correct and up-to-date
  • All .md files have been checked for proper formatting (headers, includes, etc.)

Bot Version: HackTricks News Bot v1.0

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants