⚡ Vigil ⚡ — Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. View it on GitHub.