Gitstar Ranking
Fetched on 2026/03/14 09:26
gmh5225 / llm-safety-evaluator
An automated Red Teaming platform that audits LLMs (DeepSeek, OpenAI, Claude, Gemini) for safety vulnerabilities. Features LLM-as-a-Judge scoring, jailbreak detection, and PDF compliance reporting.
Stars: 0
Rank: 13,855,001