Introduction
Vulnerable AI Lab is an intentionally vulnerable training environment for modern AI applications. It is designed to help security engineers, developers, red teamers, instructors, and students observe how real LLM application pipelines fail under adversarial conditions.
Unlike a typical chatbot demo, this project does not focus only on the model's response. It covers the full application path around the model:
- the system prompt
- retrieval-augmented generation (RAG)
- tool calling
- output handling
- scoring
- telemetry
That structure matters because many AI vulnerabilities are not model-only problems: they emerge from the way the application trusts user input, retrieved documents, tool arguments, or model output.
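To make that trust problem concrete, here is a minimal sketch of a vulnerable tool-calling path. All names (`read_document`, `simulated_model`, `handle_turn`) are hypothetical and stand in for a real LLM and tool layer; the point is only that the tool argument flows from user input through model output into the tool with no validation in between.

```python
import json

# Hypothetical tool the application exposes to the model.
def read_document(path: str) -> str:
    # Pretend document store, for illustration only.
    documents = {"docs/policy.txt": "Refunds are processed within 14 days."}
    return documents.get(path, f"[no such document: {path}]")

def simulated_model(user_input: str) -> str:
    # Stand-in for a real LLM: it copies the user's requested path
    # into a tool call without any judgement of its own.
    return json.dumps({"tool": "read_document", "args": {"path": user_input}})

def handle_turn(user_input: str) -> str:
    call = json.loads(simulated_model(user_input))
    # Vulnerable pattern: the tool argument comes straight from model
    # output (and, transitively, from user input) with no validation.
    if call["tool"] == "read_document":
        return read_document(call["args"]["path"])
    return "unknown tool"

# A benign request works as intended...
print(handle_turn("docs/policy.txt"))
# ...but nothing in the pipeline stops an adversarial path from
# reaching the tool; only the toy document store saves us here.
print(handle_turn("../secrets/api_keys.txt"))
```

In a real deployment the tool would touch a filesystem, database, or API, and the missing validation step between model output and tool execution is exactly the kind of application-layer failure the lab is built to demonstrate.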