- **The Governance Gap: How the EU AI Act Makes API Security a Compliance Imperative**
  Your legal team just handed you a 400-page document and said "figure out compliance." The EU AI Act is live, and your organization falls under its scope, which is broader than many expect: even non‑EU companies must comply if their AI systems are used, deployed, or produce effects within the European Union. In practice, that means that global organizations…
- **Attacking the MCP Trust Boundary**
  Every secure API draws a line between code and data. HTTP separates headers from bodies. SQL has prepared statements. Even email distinguishes the envelope from the message. The Model Context Protocol (MCP), the fast-growing standard for connecting AI agents to external services, inherits no such line from the models it sits on top of. Its central…
- **Why API Discovery Is the First Step to Securing AI**
  TL;DR: AI risk doesn’t live in the model. It lives in the APIs behind it. Every AI interaction triggers a chain of API calls across your environment. Many of those APIs aren’t documented or tracked. That’s your real exposure. Shadow API discovery gives you visibility into those hidden endpoints, so you can find them before…
- **CISO Spotlight: Dimitris Georgiou on Building Security that Serves People First**
  Dimitris Georgiou has been a self-professed computer geek since the early ’80s. At university, he studied the convergence of educational technology with computer science as part of his psychology MA – finding, to his disbelief, that systems were perilously insecure. Since then, he’s always worked in and around cybersecurity. He’s had roles as a computer…
- **The CISO’s Dilemma: How To Scale AI Securely**
  Your board wants AI. Your developers are building with it. Your budget committee is asking for an ROI timeline. But as CISO, you're the one who has to answer when the inevitable question comes up: "How do we know this is secure?" If you're like most security leaders, you're caught between two impossible positions. Say…
- **Agent-to-Agent Attacks Are Coming: What API Security Teaches Us About Securing AI Systems**
  AI systems are no longer just isolated models responding to human prompts. In modern production environments, they are increasingly chained together – delegating tasks, calling tools, and coordinating decisions with limited or no human oversight. Almost all of that communication happens through APIs. This shift offers enormous productivity benefits. But it has also complicated security. Because…
- **Everyone Knows About Broken Authorization - So Why Does It Still Work for Attackers?**
  Broken authorization is one of the most widely known API vulnerabilities. It features in the OWASP Top 10, AppSec conversations, and secure coding guidelines. Broken Object Level Authorization (BOLA) and Broken Function Level Authorization (BFLA) account for hundreds of API vulnerabilities every quarter. According to the 2026 API ThreatStats report, authorization issues ranked ninth in…
- **From Shadow APIs to Shadow AI: How the API Threat Model Is Expanding Faster Than Most Defenses**
  The shadow technology problem is getting worse. Over the past few years, organizations have scaled microservices, cloud-native apps, and partner integrations faster than corporate governance models could keep up, resulting in undocumented or shadow APIs. We’re now seeing this pattern all over again with AI systems. And, even worse, AI introduces non-deterministic behavior, autonomous actions…
- **Inside Modern API Attacks: What We Learn from the 2026 API ThreatStats Report**
  API security has been a growing concern for years. However, while it was always seen as important, it often came second to application security or hardening infrastructure. In 2025, the picture changed. Wallarm’s 2026 API ThreatStats Report revealed that APIs are now the primary attack surface for digital business, and not because bad actors discovered…
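The teaser for "Attacking the MCP Trust Boundary" points to SQL prepared statements as the canonical code/data boundary that MCP lacks. A minimal sketch of that idea, using Python's built-in `sqlite3` module (the table and payload below are illustrative, not taken from the post):

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Untrusted input that tries to smuggle a command into the data channel.
malicious = "alice'); DROP TABLE users; --"

# Parameterized query: the `?` placeholder keeps the payload on the data
# side of the boundary, so it is stored as a literal string, never parsed
# as SQL. String-concatenating it into the query would erase that line.
conn.execute("INSERT INTO users (name) VALUES (?)", (malicious,))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # the payload survives intact as data; the table still exists
```

An LLM-backed protocol has no equivalent of the `?` placeholder: instructions and retrieved content arrive in the same token stream, which is the gap these posts examine.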
