As many of our customers know, Wallarm has been an active member of the CNCF community for more than 18 months. Many of our enterprise customers and partners are actively using Wallarm's Kubernetes-native API security, which has been supported via the Kubernetes Ingress and NGINX Plus Ingress controllers.

This fall, we are extending our capabilities to support Service Mesh architecture and Envoy Proxy.

While relatively new, Envoy proxy already has a significant following in the community. Envoy was incubated in the CNCF and officially graduated in the fall of 2018. Envoy's advantage is that it is extremely lightweight, which makes it perform very efficiently. This speed and performance have made Envoy popular both as an alternative to NGINX as an Ingress controller and as the foundation of service-to-service communication within Service Mesh infrastructure. Broad support for modern protocols including HTTP/2 and gRPC, advanced load-balancing features, and capabilities designed specifically for API introspection and observability add to Envoy's popularity.

Envoy has been adopted for major projects at companies such as Uber, Airbnb, Netflix, Twilio, Google, Verizon, and many others.

In the words of Matt Klein, an engineer at Lyft and one of the principal Envoy architects: “The network should be transparent to applications. When network and application problems do occur it should be easy to determine the source of the problem”.

Matt continues: “The proxy architecture provides two key pieces missing in most stacks moving from monolith legacy systems to SOA — robust observability and easy debugging. Having these tools in place allows developers to focus on business logic.”

The Service Mesh infrastructure itself deserves a few words, as a concept used both in Kubernetes architecture and in other distributed environments. The service mesh fabric provides the services necessary for service-to-service communication, such as service discovery, registration, authentication, and connection establishment, making these:

  • Required
  • Uniform
  • Managed via a unified set of the control plane policies

The most popular Service Mesh implementation today is Istio, which is based on the Envoy proxy deployed as a sidecar alongside each microservice in the system. The advantage of this approach is that it is minimally invasive: existing microservices can run as is, without modification.
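To illustrate how unobtrusive the sidecar approach is, here is a sketch of enabling Istio's automatic sidecar injection. It assumes a standard Istio installation; the namespace `default` and the deployment name `my-service` are hypothetical placeholders:

```shell
# Label the namespace so Istio's admission webhook injects an Envoy
# sidecar into every new pod created there.
kubectl label namespace default istio-injection=enabled

# Restart an existing microservice's deployment; its pods are recreated
# with the Envoy sidecar added, without any code changes.
kubectl rollout restart deployment/my-service -n default

# Verify: each pod should now report two containers
# (the application itself plus istio-proxy).
kubectl get pods -n default
```

The application container is unaware of the proxy; all inbound and outbound traffic is transparently redirected through the injected Envoy sidecar.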

“Istio at its core decouples developers from operations,” said Eric Brewer, Google Fellow and vice president. “If you start developing services on your laptop and have five or six services, you don’t need Istio. But when you’re an enterprise with many different teams that write services that don’t know much about each other and you want common policies across all these teams — now you need Istio. And you need to view managing your services as a world-class thing.”

Wallarm for Envoy security

Following many requests from our customers, Wallarm has extended its application and API security solution to work with distributed applications that use the Envoy proxy. Wallarm can protect North-South API traffic in applications that use Envoy as an alternative Ingress controller at the front end of a Kubernetes cluster. Wallarm can also protect edge traffic, as well as East-West Envoy API traffic within Service Mesh and Istio deployments.

One of the original premises of Envoy was observability, which is what allows large teams to monitor and troubleshoot issues in hybrid, cloud, and distributed environments. When Lyft originally built Envoy as an edge proxy, it sought to improve its understanding of latency within the AWS environment. Envoy's monitoring and observability features are built on Layer 7 API inspection. With the Wallarm Advanced Cloud-Native WAF deployed directly on Envoy, companies get full visibility into API traffic (XML, JSON, gRPC, and other formats), real-time protection against static attacks such as XSS, RCE, Path Traversal, and the rest of the OWASP Top 10, protection from behavioral attacks, and more.

Wallarm Envoy API and Kubernetes-native security support

Key Innovation in Wallarm for Envoy

  • Installs directly onto the Envoy infrastructure
  • Protects Service Mesh, including Istio
  • Protects North-South and East-West APIs
  • Extremely low overhead
  • Broad API protocol detection, including gRPC
  • Low false positives

Try it now

To run the trial you will need a Wallarm SaaS account. You can get a trial account by signing up at https://us1.my.wallarm.com/signup.

To try Wallarm protection for Envoy you will need:

  • example.com — the protected application or API
  • deploy@example.com — your login for my.wallarm.com
  • very_secret — your password for my.wallarm.com

To deploy Wallarm-secured Envoy, run a container with necessary parameters:

docker run -d -e DEPLOY_USER="deploy@example.com" -e DEPLOY_PASSWORD="very_secret" -e ENVOY_BACKEND=example.com -p 80:80 wallarm/envoy

As a result, the container should be running, the protected website should be available on port 80 of the server, and a new filter node should be registered in the Wallarm Cloud.
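A quick way to sanity-check the deployment is sketched below. These are generic Docker and curl commands, assuming the container was started with the `docker run` example above and that the host is reachable on port 80:

```shell
# Confirm the Wallarm-secured Envoy container is running.
docker ps --filter "ancestor=wallarm/envoy"

# Confirm the protected application answers through Envoy on port 80.
curl -I http://localhost:80/

# Send a request carrying a harmless test XSS payload. Depending on the
# node's mode, the attack is either blocked or recorded in the Wallarm
# Console, which confirms the node is inspecting traffic.
curl "http://localhost:80/?id=<script>alert(1)</script>"

# If the node fails to register with the Wallarm Cloud, inspect the logs.
docker logs $(docker ps -q --filter "ancestor=wallarm/envoy")
```

After the test request, the detected attack should also appear in your account at my.wallarm.com.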

Additional Resources:

Request a live demo
