We uncover the hidden risks inside AI-generated code and what developers and defenders need to know to stay ahead. From missing input validation and hardcoded credentials to weak access controls and hallucinated dependencies, our guests break down the new threat landscape forming at the intersection of AI and software development.
We explore:
• Why “functionally correct” code isn’t always secure
• How model hallucinations can introduce unseen vulnerabilities
• The challenge of finding and fixing these issues at scale
If you build, test, or secure software, this conversation will change how you think about AI in your toolchain.
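To see why "functionally correct" isn't the same as secure, here is a minimal Python sketch (a hypothetical user-lookup function, not code from the episode): both versions return the right rows for benign input, but the concatenated query is injectable while the parameterized one is not.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Works fine for benign input, but interpolating user input straight
    # into SQL lets a crafted value rewrite the query's logic -- a common
    # pattern in generated code that "passes the tests".
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data, so the
    # same payload simply matches no rows instead of altering the query.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                  # classic injection string
leaked = find_user_unsafe(conn, payload)  # matches every row
safe = find_user_safe(conn, payload)      # matches nothing
```

Both functions behave identically on normal usernames, which is exactly why review and testing focused only on functional correctness can miss the flaw.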
Date: October 26, 2025