Every inline deployment introduces a tradeoff: enhanced inspection versus increased risk of downtime. Inline protection is important, especially for APIs, which are now the most targeted attack surface, but so are consistent uptime and performance. This is where a fail-open architecture comes in.
This Wallarm How-To blog outlines how to deploy Wallarm’s Security Edge platform on Azure using a fail-open design, ensuring high availability and zero disruption, even if the filtering infrastructure becomes unresponsive.
APIs drive business-critical operations. As such, their availability is non-negotiable. Any inline solution, no matter how effective, introduces the possibility of becoming a single point of failure. If the traffic filtering node goes offline or becomes unresponsive, users could face delays, broken integrations, or full application outages.
This is one of the most common objections to inline deployments. While legacy WAFs might require tradeoffs between protection and availability, modern cloud architectures allow for both. By using Azure Front Door alongside Wallarm’s distributed Security Edge nodes, organizations can architect a highly available, auto-failover system that maintains protection without jeopardizing performance.
Wallarm Security Edge is a cloud-native, managed service that deploys filtering nodes across multiple geographic regions. These nodes inspect traffic inline in real time, identifying and blocking malicious API calls before they can reach your origin servers.
Unlike traditional security appliances, Security Edge doesn’t require you to install or manage any on-prem hardware. You simply route your API and web traffic through the Wallarm filtering nodes and benefit from real-time detection of OWASP Top 10 threats, API exploits, and emerging attacks like LLM prompt injections.
But what happens if the filtering cluster becomes unreachable?
By integrating Azure Front Door’s active/passive routing capabilities, organizations can implement a resilient, fail-open architecture that bypasses the filtering nodes in the rare event of failure, thus ensuring uninterrupted API availability.
Azure Front Door acts as the global entry point for incoming traffic. When you create a Front Door instance, it provides a fully qualified domain name (FQDN) – for example, azureFrontDoor-a7ajbwefb6bza6ez.z01.azurefd.net.
Typically, you’d configure a CNAME record that maps your public subdomain (api.example.com, for example) to this FQDN, allowing all requests to route through Front Door.
To enable automatic failover, you'll add two origin endpoints to a single origin group: the Wallarm Security Edge filtering nodes and your backend origin servers themselves.
Azure Front Door lets you assign a priority level to each origin in the group. A lower number means higher priority:

- Priority 1: the Wallarm Security Edge filtering nodes (the primary, inspected path)
- Priority 2: the direct route to your origin servers (the fail-open fallback)
Traffic is always routed to the highest-priority healthy origin. If the Wallarm node cluster becomes unavailable, Azure Front Door automatically switches to the secondary origin, ensuring continuous service.
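The routing rule above can be illustrated with a short sketch. The names and data shapes here are hypothetical; Azure Front Door implements this selection logic for you:

```python
from dataclasses import dataclass

@dataclass
class Origin:
    name: str
    priority: int  # lower number = higher priority
    healthy: bool

def select_origin(origins):
    """Return the healthy origin with the lowest priority number,
    mirroring Azure Front Door's active/passive routing."""
    candidates = [o for o in origins if o.healthy]
    if not candidates:
        return None
    return min(candidates, key=lambda o: o.priority)

origins = [
    Origin("wallarm-edge", priority=1, healthy=True),
    Origin("direct-origin", priority=2, healthy=True),
]
assert select_origin(origins).name == "wallarm-edge"

# If the filtering cluster becomes unhealthy, traffic fails open:
origins[0].healthy = False
assert select_origin(origins).name == "direct-origin"
```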
This is the essence of a fail-open architecture: if security infrastructure fails, availability wins by design.
To detect failure conditions, Azure Front Door relies on health probes. These periodic checks validate whether the filtering node cluster is responsive. If the probe fails for a set number of consecutive attempts, traffic is redirected to the healthy fallback origin.
You can customize these probes with:

- Probe path (for example, a dedicated health endpoint on the filtering nodes)
- Protocol (HTTP or HTTPS) and request method (GET or HEAD)
- Probe interval, controlling how often each origin is checked
- Failure thresholds, via the origin group's sample size and required successful samples
This flexibility gives your security and infrastructure teams precise control over failover behavior.
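The consecutive-failure behavior described above can be sketched as follows. This is an illustrative simplification: the real thresholds live in the Front Door origin group's sample-based load balancing settings, and the class name here is hypothetical:

```python
class HealthProbe:
    """Marks an origin unhealthy after `failure_threshold` consecutive
    failed probes, and healthy again after one success (a simplified
    model of sample-based health evaluation)."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0

    def record(self, probe_succeeded: bool) -> bool:
        """Feed one probe result; return the origin's current health."""
        if probe_succeeded:
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
        return self.consecutive_failures < self.failure_threshold

probe = HealthProbe(failure_threshold=3)
assert probe.record(False) is True   # 1 failure: still healthy
assert probe.record(False) is True   # 2 failures: still healthy
assert probe.record(False) is False  # 3 failures: failover triggers
assert probe.record(True) is True    # recovery restores the primary
```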
Once deployed, here’s what a typical request flow looks like:

1. A client calls api.example.com, whose CNAME record points to the Azure Front Door FQDN.
2. Front Door forwards the request to the highest-priority healthy origin, normally a Wallarm Security Edge filtering node.
3. The filtering node inspects the request in real time, blocks malicious calls, and passes clean traffic to your origin servers.
4. If health probes mark the filtering nodes unhealthy, Front Door automatically routes requests directly to the secondary origin until the nodes recover.
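The end-to-end path can be sketched in a few lines. The `inspect` and `handle` functions are hypothetical stand-ins for the Wallarm filtering node and Front Door's routing decision, respectively:

```python
def inspect(request: dict) -> bool:
    """Hypothetical stand-in for a Wallarm filtering node: returns
    True if the request is allowed to reach the origin."""
    return "attack" not in request.get("payload", "")

def handle(request: dict, filtering_healthy: bool) -> str:
    """Sketch of the fail-open request path."""
    if filtering_healthy:
        # Normal path: Front Door -> Wallarm node -> origin
        if not inspect(request):
            return "403 blocked"
        return "200 from origin (inspected)"
    # Fail-open path: Front Door -> origin directly
    return "200 from origin (uninspected)"

assert handle({"payload": "attack'--"}, filtering_healthy=True) == "403 blocked"
assert handle({"payload": "hello"}, filtering_healthy=True) == "200 from origin (inspected)"
# Filtering cluster down: availability wins by design
assert handle({"payload": "hello"}, filtering_healthy=False) == "200 from origin (uninspected)"
```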
This architecture provides the best of both worlds:

- Inline protection: malicious API calls are detected and blocked in real time before they reach your origin.
- High availability: if the filtering layer ever becomes unreachable, traffic fails open and your APIs stay online.
Together, Wallarm’s Security Edge nodes and Azure Front Door offer a resilient, cloud-native security model tailored for modern API environments. To learn more about deploying Wallarm Security Edge inline with Azure and building your own fail-open architecture, check out the official Wallarm documentation.