Most organizations treating AI security as a model problem are defending the wrong layer. Security teams filter prompts, patch jailbreaks, and tune model behavior; that work is necessary, but the actual attack surface sits largely unexamined beneath it. That surface is the API layer: the endpoints AI systems use to retrieve data, call tools, and take action on behalf of users. This isn’t a theoretical gap. According to Wallarm’s 2026 ThreatStats Report, 36% of AI…
Your legal team just handed you a 400-page document and said “figure out compliance.” The EU AI Act is live,…
Every secure API draws a line between code and data. HTTP separates headers from bodies. SQL has prepared statements. Even…
TL;DR AI risk doesn’t live in the model. It lives in the APIs behind it. Every AI interaction triggers a…
Dimitris Georgiou has been a self-professed computer geek since the early 80s. At university, he studied the convergence of educational…
Your board wants AI. Your developers are building with it. Your budget committee is asking for an ROI timeline. But…
AI systems are no longer just isolated models responding to human prompts. In modern production environments, they are increasingly chained…
Broken authorization is one of the most widely known API vulnerabilities. It features in the OWASP Top 10, AppSec conversations,…
The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner…
API security has been a growing concern for years. However, while it was always seen as important, it often came…
