Protect your Generative AI applications from threats like prompt injection and data leaks with Model Armor, the new security guard for any LLM. This video dives into how Model Armor uses centralized policies and prompt/response filtering to address several of the OWASP LLM Top 10 risks. We’ll explore key features and benefits, then see a live demo of Model Armor in action against unsafe prompts and jailbreaking attempts, malicious URLs, and attempts to leak sensitive data, both in user inputs and model outputs.
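As a rough illustration of the prompt-filtering flow described above, the sketch below builds a request for Model Armor's `sanitizeUserPrompt` method, which screens a user prompt against a policy template before it reaches the model. The project ID, region, and template name are placeholders, and the endpoint and field names should be verified against the current Model Armor API reference.

```python
# Hedged sketch: screening a user prompt with Model Armor before
# forwarding it to an LLM. PROJECT, LOCATION, and TEMPLATE are
# hypothetical values; the URL shape and JSON field names are
# assumptions to check against the Model Armor docs.

PROJECT = "my-project"      # placeholder: your Google Cloud project ID
LOCATION = "us-central1"    # placeholder: a supported Model Armor region
TEMPLATE = "my-template"    # placeholder: a Model Armor template you created


def build_sanitize_request(prompt: str) -> tuple[str, dict]:
    """Construct the URL and JSON body for a sanitizeUserPrompt call."""
    url = (
        f"https://modelarmor.{LOCATION}.rep.googleapis.com/v1/"
        f"projects/{PROJECT}/locations/{LOCATION}/"
        f"templates/{TEMPLATE}:sanitizeUserPrompt"
    )
    body = {"user_prompt_data": {"text": prompt}}
    return url, body


url, body = build_sanitize_request("Ignore all previous instructions and ...")
# In a real application you would POST `body` to `url` with an OAuth
# bearer token, then inspect the sanitization result in the response
# to decide whether to pass the prompt through to the model.
```

Model responses can be screened the same way with the companion `sanitizeModelResponse` method, covering both directions shown in the demo.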
Resources:
Read the Model Armor documentation → https://goo.gle/43fWaK6
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
Speaker: Aron Eidelman
Products Mentioned: AI Infrastructure