The Basics and Backstory of Kubernetes

Applications are the vehicle many businesses use to keep pace with today's technology. Kubernetes, aka K8s, is a container-orchestration system: a framework designed to automate the deployment, management, and scaling of containers and containerized workloads across various infrastructures. Containerization is an essential component of how fast and expansively cloud computing has grown, transforming the world into a collaborative ecology of interwoven microservices, devices, and systems.

Originally a Google invention, Kubernetes was converted into an open-source system to facilitate the vendor-agnostic expansion of the cloud. The Cloud Native Computing Foundation (CNCF) was created in 2015 to promote container technology, with Kubernetes donated to it as a seed technology.

Kubernetes plays an enormous role in helping modern businesses accelerate business and technology growth in the cloud era. It opened the way for automation and infrastructure as code (IaC), and it is a highly integrative way to make the most of cloud environments. Beyond promoting containers, the acceleration of lifecycles it enables means companies often work with more external providers and services.

Kubernetes has lastingly transformed architecture and computing. K8s solved the problem of moving workloads and data between on-premises and cloud environments, allowing companies more flexibility and easier movement. Kubernetes has also made it possible for cloud-native architectures to take root. That, in turn, sped up software deployment lifecycles exponentially and refocused business responsiveness on quickly meeting usability needs and desires. It effectively changed the idea of a "new release" from something epic into something expected.

What’s an ingress controller?

An ingress: (n.) a way in.
Ingress + controller: an add-on resource that allows control over what comes into a Kubernetes cluster. Less a physical gate, like a turnstile, the ingress controller uses an established set of rules to create routing endpoints for applications, functions, and the like.

With an ingress controller, you can route a particular type of request to a specific microservice. For instance, you create a rule for your website path http://store.com/cart and connect it to an existing Kubernetes service such as "shoppingCart". Once that rule is created, inbound API requests to http://store.com/cart are served by the "shoppingCart" microservice.
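
A minimal sketch of such a rule (the hostname and Service are hypothetical; here the cart microservice is assumed to be exposed as a Kubernetes Service named shopping-cart on port 80, since Service names must be lowercase):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress
spec:
  rules:
  - host: store.com
    http:
      paths:
      # requests to http://store.com/cart are routed to the shopping-cart Service
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: shopping-cart
            port:
              number: 80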

Ingress controllers can also add other functional enhancements and benefits through their features.

The value of routing rules with an ingress controller for Kubernetes 

The main value of a K8s ingress controller is the ability to define routing rules for accessing applications. There are also other powerful features you get with an ingress controller that don't come standard with Kubernetes.

Ingress-enabled Features

  • Routing and access rules 
  • Virtual hosts
  • Authentication
  • Single IP address for multiple apps/hosts
  • Load balancing
  • URL rewrite

Other features are also available, depending on the type of controller, such as easy Helm installation (see NGINX Plus) or transport layer security (TLS) with ACME support (Let's Encrypt).
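
For example, here is a sketch of an Ingress that combines a virtual host with TLS (the hostname, Service name, and store-tls Secret are hypothetical; the Secret would hold a certificate and key, such as one issued through an ACME client like Let's Encrypt):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  tls:
  - hosts:
    - store.example.com
    secretName: store-tls         # TLS certificate and key stored as a Kubernetes Secret
  rules:
  - host: store.example.com       # virtual host: only requests for this hostname match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront
            port:
              number: 80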

Beneath the rules: ingress controllers and proxies

In Kubernetes, the ingress is a set of resources and APIs that control how a proxy (Envoy, NGINX, or another proxy) is configured. Every proxy is configured differently, having its own set of APIs and features. Each configuration is updated by a specific controller, which may be built into Kubernetes or built by an outside vendor or community. Between an ALB and an Envoy proxy, for example, different configurations have to be used to modify URI routing, so each one has to be built custom. Note that not all features work with every proxy. For instance, NGINX Plus supports sticky sessions, but NGINX OSS does not.

This high level of variation means some ingress controllers suit particular applications better than others. Knowing your business and technological needs and objectives will determine the best ingress controller for you.

Now you need the information to make that choice.

Ingress Controller Types

While Kubernetes is itself an open-source system, there are both open-source and vended ingress controllers. Your choice should be a balance of what you can afford, what you need now, and where your business is going. 

Open-source K8s Controllers 

NGINX (Kubernetes Community)

The community-maintained ingress controller for Kubernetes, NGINX, is built on the eponymous NGINX and Lua modules that dynamically update the configuration without reloading. This makes maintenance very easy.

An example of a minimal Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
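    # rewrite-target is specific to the NGINX community controller; it rewrites matched paths to "/" before proxying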
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

There are cases where an NGINX reload is required. Try to avoid reloads: on each reload, the NGINX master spawns a new set of workers with the new configuration, temporarily doubling the NGINX workers. What is built into the image also limits the scope for adding new modules to the instance; presently, this means only the tracing and ModSecurity modules.

NGINX (NGINX Inc.)

There is NGINX—and there is NGINX Inc.

NGINX Inc. has its own free and paid ingress controllers. The free version from the Kubernetes community (NGINX) has similar functions to the paid one from NGINX Inc. However, each has its own distinct philosophy and deployment model.

The NGINX Inc. controller doesn't need to run any additional third-party modules; it runs entirely on unadulterated NGINX configurations. Thus, NGINX Inc. retains full control of everything, from NGINX itself to the controller.

This free version's largest drawback is its inability to run third-party modules. The OSS NGINX ingress controller cannot support dynamic configuration: an NGINX reload starts whenever a Kubernetes endpoint is updated or added.

Envoy (Gloo, Heptio Contour, Istio, Ambassador)

If you haven't heard the buzz about Envoy-based ingress controllers, start listening. The introduction of Envoy has earned a lot of accolades, and the features are very exciting, including:

  • dynamic configuration updates
  • configuration APIs
  • gRPC and HTTP/2 support
  • monitoring
  • authentication

Read the documentation to get more information.  
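
As one concrete example, Heptio Contour configures Envoy through its own HTTPProxy custom resource rather than only the standard Ingress API. A minimal sketch (the hostname and Service are hypothetical; check the Contour documentation for the current API version):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: store-proxy
spec:
  virtualhost:
    fqdn: store.example.com        # virtual host served by Envoy
  routes:
  - conditions:
    - prefix: /cart                # route /cart traffic to the shopping-cart Service
    services:
    - name: shopping-cart
      port: 80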

Envoy is still a young project, despite the incredible response. As with any new or cutting-edge application, you need to have your security and quality assurance teams vet the tech to see how it functions for your company's structure and maturity, as well as testing for edge-case scenarios.

All new technologies require a level of DevOps cultural maturity. Move your company towards that as soon as you can if you are in the early stages. Even the best new tech may also require a level of customization, like a custom-built add-on, or may need to rely on external endpoints (e.g., Istio Mixer in the case of Envoy) for security, tracing, etc.

Gloo Edge

Your teams need the ability and resources to vet any new tech before adopting it. Taken too far, though, this caution can leave you stuck with dangerously outdated technology if you never prioritize testing newer tech.

Purchasable K8s Ingress Controllers

NGINX Plus

Members of the larger Kubernetes community and ingress experts have lauded NGINX Plus. 

NGINX Plus has supplemental features that are unavailable in the free OSS ingress controller. You don't have to modify the base image to install third-party modules; all it takes is a Helm installation.

If you choose to purchase the commercial version of NGINX, you gain exclusive NGINX modules and features, like dynamic configuration changes without the hassle of reloading NGINX.
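
A sketch of what that Helm installation might look like (the chart name and values come from NGINX Inc.'s public Helm chart and may change between releases; an NGINX Plus image built with your license, here <your-registry>/nginx-plus-ingress, is assumed):

# add NGINX Inc.'s Helm repository and install the controller with NGINX Plus enabled
$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install my-ingress nginx-stable/nginx-ingress \
    --set controller.nginxplus=true \
    --set controller.image.repository=<your-registry>/nginx-plus-ingress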

Kong

Kong is commonly described as an API gateway, although the underlying technology is NGINX. That's not necessarily a bad thing. Kong offers some serious benefits, including being incredibly easy to customize. In fact, Kong is all about ease of use. Kong also offers an abundant ecosystem of modules and makes those modules incredibly easy to install with its dashboard.

Configuring Kong for Kubernetes is easy:

# using YAML manifests
$ kubectl apply -f https://bit.ly/k4k8s
# or using Helm
$ helm repo add kong https://charts.konghq.com
$ helm repo update
# Helm 3
$ helm install kong/kong --generate-name --set ingressController.installCRDs=false

Kong operates much like NGINX Plus: a free version is available with limited features, but the paid version is strongly recommended to harness the full power of Kong.

Hardware-based vs cloud-based controllers 

Choosing a hardware- or cloud-specific ingress controller locks you into that infrastructure, so this is a choice that reduces your flexibility. That may matter more if resources are limited. Consider the impact of losing Kubernetes portability between on-prem and cloud environments.

Google Load Balancer Controller (GLBC) for GCP

Google Load Balancer Controller (GLBC), aka ingress-gce, builds, manages, and controls Google's global cloud load balancers using Google Cloud APIs and the Ingress resources. At the moment, GLBC is a beta controller for K8s available as a managed offering.

Use GLBC with caution. Thoroughly test it with both security and quality assurance teams (as we’ve noted before). 

Do the benefits outweigh the risks? As with most things "Googly", GLBC offers a ton of great features. It comes with a single IP address for any K8s cluster in your Google Cloud Platform (GCP) projects, and you can enable kubemci, or Kubernetes multi-cluster ingress. You also get access to any proprietary add-on designed for GCLB, like Cloud Armor.
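
A sketch of how that single, stable IP is typically wired up on GKE (the annotation names follow Google's documented conventions; the reserved address web-static-ip and the Service are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gclb-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"                            # handled by GLBC / ingress-gce
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"  # reserved global static IP in GCP
spec:
  defaultBackend:
    service:
      name: storefront
      port:
        number: 80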

Application Load Balancer (ALB) for AWS

Amazon Web Services (AWS) has a Layer 7 managed load balancer called the Application Load Balancer (ALB). The advantage of the ALB controller is that while it is simpler than other hardware- or cloud-based controllers, it is more stable and can more adeptly use native ingress APIs than alternative cloud controllers.
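
A sketch of the annotations the AWS load balancer controller commonly uses to provision an ALB for an Ingress (the annotation names follow AWS's documented conventions and may vary by controller version; the Service is hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    kubernetes.io/ingress.class: alb                  # handled by the AWS load balancer controller
    alb.ingress.kubernetes.io/scheme: internet-facing # provision a public ALB
    alb.ingress.kubernetes.io/target-type: ip         # route directly to pod IPs
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront
            port:
              number: 80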

Citrix (Hybrid/Multi-cloud)

Citrix also offers a proprietary ingress controller. Sadly, you have to use a virtual appliance or additional hardware like Citrix ADC CPX. Using a virtual appliance, Citrix can deploy on-premises or to the cloud (GCP and Azure). This means you can experiment with hybrid or multi-cloud K8s deployments.

Want more features for your ingress controller?

Every ingress controller has its own pros and cons. There is a wide variety of add-ons, plugins, modules, and the like.

Up Your Security 

Wallarm

A cloud-native, automated API and application security platform, Wallarm is used to detect and block malicious requests. It builds directly into an ingress controller, acting as a WAF and offering an extremely low false-positive rate. The Wallarm security platform bundles in features like vulnerability scanning.

Supports: Envoy, NGINX OSS, NGINX Plus, Kong

Authentication/Authorization

Most ingress controllers support authentication (AuthN) with no extra work. However, some ingress controllers support external authentication, others rely on external endpoints for authentication (e.g., Istio Mixer), and still others only support JWT.

Supports: nginx-ingress, NGINX OSS, NGINX Plus, Envoy, GCLB
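
As a sketch of the external-authentication case with the community NGINX controller (the annotation names are ingress-nginx conventions; the auth service URLs are hypothetical), the controller checks each request against an auth endpoint before proxying it:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authenticated-ingress
  annotations:
    # every request is first validated by this external auth endpoint
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
    # unauthenticated users are redirected here to sign in
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/signin"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront
            port:
              number: 80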

ModSecurity

ModSecurity is the most tried-and-true option when it comes to Web Application Firewalls (WAFs). It comes pre-installed with nginx-ingress, and you can install it as a dynamic module on NGINX Plus.

The downside is that ModSecurity is not very good at parsing advanced API protocols. It is also based on regular expressions, which means each signature has to be manually added, and any preconfigured rules have to be continuously managed and vetted for false positives.

Supports: nginx-ingress, NGINX Plus
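
With nginx-ingress, ModSecurity can be switched on per Ingress through annotations, for example (a sketch added to an Ingress resource's metadata; the OWASP core rule set still needs the tuning for false positives noted above):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"       # turn on the ModSecurity module
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"  # load the OWASP core rule set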

Observability Add-ons

OpenTracing

A vendor-neutral standard for distributed tracing, OpenTracing relies on tracing headers being propagated at each microservice, which a service mesh makes easier. You can easily access these traces with Istio, NGINX Controller, or other platforms.

Supports: Envoy (Istio), NGINX Plus (NGINX controller)
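
With the community nginx-ingress controller, for instance, the tracing module mentioned earlier is typically enabled globally through the controller's ConfigMap (a sketch; the key names follow ingress-nginx conventions, and the Jaeger collector address is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-opentracing: "true"       # load the OpenTracing module
  jaeger-collector-host: jaeger-agent.observability.svc.cluster.local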

Prometheus

Prometheus is a monitoring system and time-series database. It relies on a Prometheus exporter from whichever proxy/ingress you choose. Your exporter will determine how much data is gathered, since not all exporters gather the same amount. For example, NGINX OSS only exports stub_status metrics, while other proxies export a lot more.

Supports: nginx-ingress, NGINX OSS*, NGINX Plus, Envoy
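
As a sketch, if your Prometheus is set up to honor the common prometheus.io/* scrape annotations (a widespread convention, not a Prometheus built-in), exposing the controller's metrics can be as simple as annotating its pods; 10254 is the default nginx-ingress metrics port, and other proxies expose their own:

metadata:
  annotations:
    prometheus.io/scrape: "true"   # let Prometheus service discovery scrape this pod
    prometheus.io/port: "10254"    # metrics port exposed by the ingress controller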