Tag: injection
An adversarial technique where malicious instructions are injected into an LLM's context window — via user input, retrieved documents, or tool results — to hijack the model's behaviour.
1mo ago
ai_ml advanced
Accepting file uploads without validating type, extension, and content can allow PHP shell uploads and RCE.
CWE-434 OWASP A4:2021
2mo ago
security intermediate
9.8