HackedGPT Novel AI Vulnerabilities Open the Door for Private... #1577
+60
−38
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
🤖 Automated Content Update
This PR was automatically generated by the HackTricks News Bot based on a technical blog post.
📝 Source Information
🎯 Content Summary
Scope and timeline
Tenable Research (post dated November 5, 2025) documents seven concrete exploitation techniques against ChatGPT’s browsing and search features that enable indirect prompt injection, safety bypass, stealth, persistence, and private-data exfiltration. Researchers validated behavior primarily on ChatGPT 4o and verified several techniques and PoCs still function on ChatGPT 5.
Relevant ChatGPT internals (how it works)
• System ...
🔧 Technical Details
Prompt‑poison any LLM browsing/search step: Hide instructions in web content (e.g., comments) or serve them conditionally to the crawler/browsing agent’s user agent/headers. Once indexed, a victim’s ordinary query can trigger 0‑click compromise via the model’s search→browse pipeline.
Bypass link‑safety with trusted redirectors: Abuse whitelisted
bing.com/ck/atracking URLs to render links that ultimately redirect to attacker‑controlled sites. Pre‑index one page per character and exfiltrate data by outputting sequences of these Bing‑wrapped links.Bridge browsing→assistant with Conversation Injection: Make the browsing agent append attacker instructions to its own reply so that the assistant, when re‑parsing conversation history, treats them as its instructions and complies on the next turn.
Hide payloads using a markdown rendering quirk: Place directives on the opening code‑fence line ...
🤖 Agent Actions
I reviewed the repository and existing AI security content and found that while HackTricks already covers general prompt injection, indirect injection, and some agentic browser issues, it did not include Tenable’s new attack surface and exploitation details for ChatGPT’s browsing+search pipeline: URL safety redirector abuse, conversation bridging across browsing→assistant, the markdown code-fence rendering quirk, persistent memory injection, and the 1-click injection via chatgpt.com/?q. These are concrete, technical techniques with PoC-ready guidance and were not previously documented in HackTricks.
Actions taken:
“Agentic Browsing/Search: Prompt Injection, Redirector Exfiltration, Conversation Bridging, Markdown Stealth, Memory Persistence”
This section:
Why this placement:
Files modified:
No other files were changed.
This PR was automatically created by the HackTricks Feed Bot. Please review the changes carefully before merging.
📚 Repository Maintenance
All .md files have been checked for proper formatting (headers, includes, etc.).