LLM Prompt Injection Security
Python • Flask • PyTorch • Neural Network • Random Forest
Security research project analyzing prompt-injection vulnerabilities in large language models. Conducted offensive testing on multiple LLMs and built ML-based detection and sanitization mechanisms. Research paper currently under editorial review.