# input-manipulation

Here is 1 public repository matching this topic...

🧠 Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt injection through innovative attack methods and research insights.

  • Updated Nov 13, 2025
