by Renata Budko
Intro: About microservices
The introduction of cloud-native applications and containers changed the way people structure and provision applications. Deploying and configuring physical servers at every edge location and trying to balance the load by hand is simply not feasible for dynamic loads and hybrid infrastructures. The solution to this problem is microservices.
In addition to supporting dynamic scalability, the move from monolithic applications to microservices avoids the bottlenecks of a central database and supports portability and code reuse. It also provides the additional benefits of easier automation, integration into CI/CD pipelines, observability and high availability, although microservices security and management may create certain new challenges.

A container orchestration system like Kubernetes provides a framework for managing containers as a unified, scalable distributed system for running applications.
Google, Twitter, Netflix, Amazon and other leading SaaS and cloud companies pioneered this approach and led the movement to re-architect monolithic applications as microservices, turning it into a new standard.

What microservices bring to the table is the ease of application delivery. When the application is split into small, contained chunks running on immutable infrastructure, development suddenly becomes much simpler. Tight dependencies are reduced. Different chunks can be written in different languages. Updating an interface or an external API doesn’t require touching the entire monolithic app. There is also code reuse: a microservice that works well can be reused to deliver services across multiple Kubernetes pods and multiple applications.
There are also a number of additional benefits to this architecture:
- Automation – a microservices-based system is a much better fit for CI/CD infrastructure and can easily be integrated into a development pipeline
- Scalability – multiple instances of the same microservice can be deployed on demand or eliminated easily as request volume scales up and down
- Robustness – since most microservices are stateless, an error in one just leads to a quick re-spin of an individual service with very little downtime
What is a Service Mesh
A service mesh is a framework that helps microservices communicate with each other in the new distributed architecture. A Service Mesh integrates APIs directly into the underlying compute cluster, making service-to-service communication faster, more efficient and easier to maintain. It addresses one of the biggest challenges of the microservices architecture: the overhead of having to request services across a well-defined API. In addition to defining how microservices communicate and routing requests, a Service Mesh usually provides service-to-service authentication, monitoring and availability services.
The best thing about a Service Mesh is its awareness of the distributed nature of a microservices application and the issues related to horizontal scalability. Multiple instances of the same microservice might be running locally, in the cloud or across multiple locations. A Service Mesh helps microservices discover where another service lives, and structures requests to simplify the API logic, ensure fast connectivity, and automate horizontal scaling.
Andrew Jenkins of Aspen Mesh identifies three deployment options with regard to how a Service Mesh delivers its services:
- As a sidecar that runs alongside your microservice container (sketched in the code example after this list)
- As a library that can be built into each of the microservices
- As an agent that sits in the container infrastructure and provides the service to all the containers on that node
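To make the sidecar option concrete, here is a minimal sketch of the pattern in Go, assuming a hypothetical application listening on localhost:8080. The port numbers and the logging behavior are illustrative stand-ins for what a real mesh proxy such as Envoy does; this is not how any production mesh is implemented.

```go
// sidecar.go - a toy illustration of the sidecar pattern: a proxy that
// sits in front of a local service, forwarding traffic while adding
// cross-cutting behavior (here, just logging) outside the app's code.
// Ports and addresses are illustrative, not taken from any real mesh.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application container the sidecar fronts, reachable on localhost.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// All inbound traffic hits the sidecar first; the mesh layer can
	// observe, authenticate, or reroute before the app ever sees it.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("sidecar: %s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
		proxy.ServeHTTP(w, r)
	})

	// The mesh-facing port is an arbitrary choice for this sketch.
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

Because the proxy owns the network path, discovery, authentication and routing logic can all live here instead of being rewritten in every service.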
What is the history of the Service Mesh
The concept of separating individual functions into contained services is not new. Even in legacy Microsoft Windows applications, dynamic-link libraries (DLLs) that are loaded into the application at runtime were widespread. In the era of REST APIs, technologies like Service-Oriented Architecture (SOA) and the API bus tried to optimize intra-application communication. One of the first microservices-focused efforts in the space is an open source project called Linkerd. It was initially created by the engineering team at Twitter and is now maintained by the Cloud Native Computing Foundation.

Linkerd first introduced the idea of a proxy for each service (a sidecar) which can then discover and connect with similar proxies using a specialized network.
At the same time, Matt Klein of Lyft came up with an ingenious method to represent the network, microservices and APIs as code.
This became Envoy, which is now a widely supported project with contributors from IBM and Google and wide support from the community.
What is Envoy
Envoy is a high-performance distributed routing framework and a “universal data plane” designed for microservices and service mesh architectures. It was originally built in C++ by a Lyft engineering team to have extremely low performance overhead, and it can also be used as a distributed proxy for single-service applications. Currently, Envoy is part of the CNCF family and is supported by the foundation.
Envoy is a great example of third-generation routing infrastructure, learning from the challenges of first-generation technology, like the F5 application delivery network (ADN), and second-generation solutions such as NGINX and HAProxy.
Envoy’s main advantage is the ability to provide networking features to each service in a platform-independent manner. By abstracting the network and centralizing routing controls, Envoy enables better observability, better performance, and the ability to find bottlenecks.
What is Istio
Istio is one of the most popular Service Mesh implementations, based on a technology called Envoy.
It is designed for extensibility and works in a variety of deployments. Istio works by deploying a lightweight Envoy-based sidecar with each microservice, which intercepts all of the service-to-service communications. As a result, the Istio Service Mesh allows the cluster to instantly add discovery, authentication, smart routing and monitoring services with few or no code changes to the microservices themselves.
Istio routing services are configured and managed using its control plane and include:
- Rich routing rules, including availability and retry features (a traffic-splitting sketch follows this list)
- Protocol support for HTTP, gRPC, WebSocket and TCP, plus load balancing and metering
- Monitoring with logs and traces
- Identity-based authentication, authorization and access control lists
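In a real Istio deployment these routing rules are declared as configuration (for example, a VirtualService resource) rather than written as application code. Purely to illustrate the traffic-splitting idea behind a canary roll-out, here is a hedged Go sketch; the service names and the 90/10 weights are made up for the example.

```go
// canary.go - a conceptual sketch of weighted routing, the mechanism
// behind canary roll-outs: 90% of requests go to the stable version,
// 10% to the canary. In a real mesh this is declared in configuration,
// not hand-coded; names and weights here are illustrative.
package main

import (
	"fmt"
	"math/rand"
)

type route struct {
	backend string
	weight  int // relative weight; these sum to 100 for readability
}

// pick selects a backend in proportion to the configured weights.
func pick(routes []route) string {
	total := 0
	for _, r := range routes {
		total += r.weight
	}
	n := rand.Intn(total)
	for _, r := range routes {
		if n < r.weight {
			return r.backend
		}
		n -= r.weight
	}
	return routes[len(routes)-1].backend // unreachable safety fallback
}

func main() {
	routes := []route{
		{backend: "reviews-v1", weight: 90}, // stable version
		{backend: "reviews-v2", weight: 10}, // canary version
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pick(routes)]++
	}
	fmt.Println(counts) // roughly 9000 to v1, 1000 to v2
}
```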

How does service mesh fit with the rest of the cloud infrastructure
A Service Mesh doesn’t replace a Kubernetes infrastructure; it complements it. While the mission of Kubernetes is to manage the underlying diverse compute resources, the idea behind a Service Mesh is to be the fabric underlying service-to-service connectivity and networking. An orchestration system, like Kubernetes, is a prerequisite for a service mesh to exist.
In fact, you could think of the Kubernetes routing functionality (the “Service” resource) as a basic prototype of a service mesh, as it provides service discovery and round-robin balancing of requests. In contrast, fully implemented service meshes provide much richer functionality, including metered routing, policies, authentication and more.
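As a conceptual sketch of what that basic prototype does, the following Go snippet shows round-robin selection over a set of pod endpoints behind one stable service name. The endpoint IPs are placeholders, and in a real cluster this logic lives in the networking layer (kube-proxy), not in application code.

```go
// roundrobin.go - a conceptual sketch of what a Kubernetes "Service"
// provides for in-cluster traffic: one stable name resolving to a
// rotating set of pod endpoints. The IPs below are placeholders.
package main

import (
	"fmt"
	"sync/atomic"
)

type serviceLB struct {
	next      uint64   // round-robin counter (first for 64-bit alignment)
	endpoints []string // the pod addresses backing the Service
}

// Pick returns endpoints in round-robin order, safe under concurrency.
func (s *serviceLB) Pick() string {
	n := atomic.AddUint64(&s.next, 1)
	return s.endpoints[(n-1)%uint64(len(s.endpoints))]
}

func main() {
	svc := &serviceLB{endpoints: []string{
		"10.1.0.4:8080", "10.1.0.5:8080", "10.1.0.6:8080",
	}}
	for i := 0; i < 6; i++ {
		fmt.Println(svc.Pick()) // cycles through the three pods twice
	}
}
```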
While routing requests is an essential part of a microservices implementation, the key difference between a Service Mesh and an Ingress controller is what kinds of requests are being routed. An Ingress controller, like other API gateways, is primarily responsible for routing requests from outside the cluster, such as requests from a mobile application or a web browser, to the microservices application, while the Service Mesh takes charge of routing API calls between the microservices inside the cluster.
Benefits of service mesh architecture
Advanced routing
Most Service Meshes, for example Istio, provide advanced routing features, including the ability to control API calls and the flow of traffic between services. This turns behaviors such as retries, time-outs, A/B testing, canary roll-outs and more into simple configuration-level properties.
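To see what the mesh takes off the application's plate, here is a hedged Go sketch of retry-with-timeout logic written by hand; with a Service Mesh, these same values would be a few lines of routing configuration instead of code in every service. The URL, attempt count and timeouts are illustrative.

```go
// retry.go - the retry/timeout behavior a Service Mesh applies on
// behalf of an application. With a mesh this lives in configuration;
// without one, every service carries code like this. Values are
// illustrative, including the hypothetical in-cluster URL below.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetries issues a GET with a per-try timeout, retrying on
// failure up to `attempts` times, like a mesh-level retry policy.
func getWithRetries(url string, attempts int, perTryTimeout time.Duration) (*http.Response, error) {
	client := &http.Client{Timeout: perTryTimeout}
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success (or a non-retryable client error)
		}
		if err == nil {
			resp.Body.Close() // server error: discard the body and retry
			lastErr = fmt.Errorf("status %d", resp.StatusCode)
		} else {
			lastErr = err
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // linear backoff
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	resp, err := getWithRetries("http://reviews.default.svc:9080/health", 3, 2*time.Second)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```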
Network security
A Service Mesh additionally provides many important security features at the connectivity level.
The most important is authentication and authorization: before it is able to communicate with the cluster, every new service needs to authenticate and authorize itself with the Service Mesh.
Additionally, the Service Mesh ensures a secure, encrypted communication channel, both inside the pod and pod-to-pod.
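Below is a minimal Go sketch of the mutual TLS that a mesh negotiates automatically between sidecars: the server presents its own certificate and requires a verified client certificate, so both ends are authenticated and the channel is encrypted. The certificate file names are placeholders; in a real mesh the certificates are issued and rotated by the mesh's own certificate authority, not managed by hand.

```go
// mtls.go - a sketch of the mutual TLS a mesh such as Istio sets up
// automatically between sidecars: the server demands and verifies a
// client certificate. Certificate file names are placeholders.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only the mesh's certificate authority for client certs.
	caPEM, err := os.ReadFile("mesh-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  caPool,
			ClientAuth: tls.RequireAndVerifyClientCert, // this makes it mutual TLS
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// By the time a request arrives here, the peer is authenticated.
			w.Write([]byte("hello from an mTLS-protected service\n"))
		}),
	}
	log.Fatal(server.ListenAndServeTLS("server-cert.pem", "server-key.pem"))
}
```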
Observability
Istio and other Service Mesh implementations provide robust tracing and logging capabilities.
With improved visibility into API and packet behavior, DevOps and security teams can predict scalability and performance issues before they affect production, making the overall system more reliable and robust.
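The following Go sketch shows the kind of per-request telemetry a sidecar records for every service without any application changes; it is written here as an explicit middleware only to make the collected fields (method, path, status, latency) visible.

```go
// telemetry.go - the kind of per-request telemetry a mesh collects for
// every service automatically, shown as a hand-written middleware so
// the recorded fields are visible. Field names are illustrative.
package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the response status code for logging.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// withTelemetry wraps a handler and logs method, path, status and latency.
func withTelemetry(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, r)
		log.Printf("method=%s path=%s status=%d latency=%s",
			r.Method, r.URL.Path, rec.status, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", withTelemetry(mux)))
}
```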
Conclusion: where service mesh makes sense
When applications are implemented with microservices, the underlying infrastructure needs to provide a lot of enabling functionality, including traffic routing and management, load balancing, health monitoring, service and user authentication and more.
A Service Mesh is a great way to address most service-to-service communication issues. Whether you are just starting on the path to microservices or looking to optimize an existing deployment, a Service Mesh infrastructure is a way to improve security and scalability as you move your microservices applications from staging to production.