Security Advice from an NGINX Cloud Architect (and Yoda)
Learning from the masters can be inspiring. It can also help us set realistic expectations about what success looks like. Listening to NGINX's Rich Hogan, the main takeaway is that Kubernetes security is not something you achieve. It's something you continually take care of, with the understanding that it is constantly transforming.
Here are our highlights from the conversation—and a few bonus tips about your K8s cybersecurity from Yoda.
“Security is not binary. It is analog.”
Rich Hogan, Global Architect at NGINX
Taking even the tiniest step toward securing Kubernetes operations, and IT operations more generally, requires getting comfortable with a degree of insecurity. There is no simple, binary yes-or-no answer to the question:
Do I have security or don’t I have security?
Consider the difference between a simple wooden door with a hook lock, a steel security door with a serrated pin key lock plus a combination, and a biometrically locked government clean room. A lock provides security, but there are varying degrees of sophistication of a lock or lock system.

A full range of potential vulnerabilities exists in any platform. This was true with virtualization. It's even more true, and at a larger scale, with container orchestration. K8s means that you potentially have many diverse workloads, new entry points, and attack vectors that you need to consider. That means the answer to whether you have security is never going to be a final "yes" or "no". The question needs to be asked continually.
A wise Security master knows, “There is no real security.”
Despite the built-in dangers to K8s environments, there are masterful ways of approaching, understanding, and addressing Kubernetes security.
Thinking around security needs to evolve constantly, because the threat footprint grows ever larger as time passes and companies change. The control points, the points that can be attacked, can have a much bigger impact.
We are going to talk about three security considerations related to Kubernetes.
- CHANGE THE DEFAULTS: start with good practice and change your settings as you scale. Do not leave the settings at “factory default”.
- ENCRYPTION, RBAC, API SERVERS: Hackers will find K8s-specific entry points and vulnerabilities. Understand the nuances of Kubernetes environments and how to apply protections there.
- DEV-SEC-OPS: Making security work from start to finish, in real time.
Ultimately, when approaching K8s security, make sure you cover these three areas. We won't cover any of them exhaustively, and the list would probably grow as you read these words. Above all, stay vigilantly aware of emerging trends and tools. Even the strongest security officers may benefit from thinking WWYS: What would Yoda say?
“There is no try, only do.”
Yoda
And keep doing. There is no stopping point on the journey to total security, nor any steady state of total security lockdown.
Change the Defaults
Your foundation will crumble if you don't change the basic K8s settings from the system defaults to what's right for your organization. Do not choose convenience over security. The doors opened by the path of least resistance can leave you unwittingly exposed to attacks.
“Once you start down the dark path, forever will it dominate your destiny.”
Yoda
If you start your Kubernetes project on GKE or another cloud provider's managed service, many of the cluster settings will already be set by the hosting provider. While these defaults are more appropriate for a cloud environment than the stock K8s defaults, they are optimized for convenience and getting started quickly, not for security.
The best practice is to follow the principle of least privilege. Scale from good practice, not from the pre-existing defaults that ship with Kubernetes for the convenience of getting going. It doesn't matter whether you run Kubernetes yourself or go through a cloud platform. Once your operations become institutionalized, it is very hard to switch if you haven't put in a foundation built on the correct defaults.
This is not a one-and-done property setting.
Change the defaults as you scale and add more and more container workloads to your orchestration system. You want to make sure that you're operating in predefined, well-known namespaces. Those namespaces, in turn, have to be used to apply correct defaults when you're configuring things like pods and containers, determining what kinds of memory resources and limits will be applied to them.
Equally important is the security of storage accessed by Kubernetes workloads: pods' persistent volume claims, the associated Quality of Service, and images pulled from repositories. These storage systems can become points of vulnerability for Ingress controllers, various workloads, and even private repositories.
As a starting point, when you’re changing your defaults create your own namespaces. Use those namespaces in your context and start by creating limit ranges that can be applied to your pods. This will change the default resource settings given to containers that are created in your new namespace such that they meet criteria set by you and in compliance with your operational policies, rather than defaults.
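To make this concrete, here is a minimal sketch of a custom namespace plus a LimitRange that overrides the default resource settings for containers created in it. The names and limit values below are illustrative, not from the interview; set them to match your own operational policies.

```yaml
# Create a dedicated namespace instead of deploying into "default".
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
---
# A LimitRange that applies your own resource defaults and caps to
# every container created in this namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: payments-limits
  namespace: team-payments
spec:
  limits:
    - type: Container
      default:            # applied when a container specifies no limits
        cpu: "500m"
        memory: "256Mi"
      defaultRequest:     # applied when a container specifies no requests
        cpu: "250m"
        memory: "128Mi"
      max:                # hard ceiling no container in the namespace may exceed
        cpu: "1"
        memory: "512Mi"
```

Apply it with `kubectl apply -f` and any pod created in `team-payments` without explicit resource settings picks up these defaults instead of running unconstrained.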
CHANGE EVERYTHING. Namespaces are just one example of the many, many items changing defaults applies to. Define the parameters, the ranges, and the roles: the cluster role bindings, the ingress policies, et cetera. Make sure you're not using default certificates or any other default components.
So, that’s the first piece of wisdom.
Take Control: Encryption, RBAC, & API Servers
It’s great that everything is distributed and each microservice is focused on its own task. But how can we make sure the microservices are what they are supposed to be and not hackers masquerading as a legitimate service?
Eavesdropping on the API conversations of other microservices and pretending to be something you are not is called “spoofing”. To avoid spoofing, microservices share “secrets” with each other. Secrets are like a codeword in a speakeasy—if you don’t know the word, the door stays closed.
But what if the secret was written on the doorframe outside of the bar?
Secret storage and access are together one of the more central and pressing problems of distributed software design, including microservices. Secrets should be encrypted, and handled and stored with extreme care.
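As a sketch of what that looks like in Kubernetes, a basic Secret can be declared like this. The names and values are placeholders, and one caution is worth a loud label: Kubernetes stores this data base64-encoded, which is encoding, not encryption.

```yaml
# A basic Kubernetes Secret. Note: stringData is stored base64-encoded,
# NOT encrypted -- base64 is trivially reversible. By default this sits
# unencrypted in etcd unless encryption at rest is enabled.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: team-payments   # illustrative namespace
type: Opaque
stringData:
  username: app_user         # placeholder values
  password: change-me
```

This is the codeword at the speakeasy door; keeping it off the doorframe means restricting who can read Secrets via RBAC and encrypting them at rest.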
The other issue is working with sensitive workloads. To protect sensitive information, it is encrypted. However, encrypting and decrypting data carries a lot of computing overhead. The challenge is to have encrypted workloads perform optimally: they need to be able to interact with the wider world while safeguarded, which requires a detailed and controlled process. Access is a key part of that process, but so are practices that add dexterity to your encrypted workloads.
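One Kubernetes-specific place this shows up is encrypting Secrets at rest in etcd. A minimal sketch of the API server's encryption configuration might look like the following; the key name is illustrative, and the key material itself is something you must generate and protect.

```yaml
# Passed to kube-apiserver via --encryption-provider-config so that
# Secret objects are encrypted before being written to etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              # generate with: head -c 32 /dev/urandom | base64
              secret: <base64-encoded 32-byte key>
      - identity: {}   # fallback so existing plaintext entries stay readable
```

Provider order matters here: new writes use the first provider (`aescbc`), while `identity` at the end lets the API server still read data written before encryption was turned on.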
RBAC Backup
Another critical concept in Kubernetes use is turning on Role-Based Access Control, or RBAC. It works with namespaces to create effective rules for denying or authorizing access. Much like other controls, role-based access controls deeply influence what gets into (or stays out of) your system.
“Your path you must decide.”
Yoda
Here is how it works. In almost all Kubernetes installations that use role-based access control, an identity is passed to the API server. [See a simple diagram of Amazon Elastic Container Service for Kubernetes (Amazon EKS) application architecture on its AWS Deployment page.] The identity is verified against a token or secret server of some sort, and the action is then allowed or denied according to the authorization and access that RBAC provides.
In conjunction with namespaces, role-based access control is a very effective means of adding security to a basic Kubernetes environment. RBAC authorizations or denials involve numerous different objects in Kubernetes: a user can be assigned a cluster role, and a cluster role binding specifies the objects and the namespace to which that role applies.
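A minimal sketch of that pairing, using a hypothetical namespace and user, might look like this:

```yaml
# A Role granting read-only access to pods in one namespace...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-payments      # illustrative namespace
rules:
  - apiGroups: [""]             # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# ...and a RoleBinding tying that Role to a specific identity.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-payments
subjects:
  - kind: User
    name: jane@example.com      # hypothetical user identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role and RoleBinding live inside the namespace, `jane@example.com` can read pods in `team-payments` and nothing else; cluster-wide grants would instead use ClusterRole and ClusterRoleBinding.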
K8s API Server Vulnerability
Kubernetes arguably has one major attack vector, and looking closer, it's the Kubernetes API server that is vulnerable. The API server processes every command into the Kubernetes cluster. Really, it's the interface that determines what operations any user can perform in a given namespace or any other area of the Kubernetes system.
Let’s say you have a multi-tier application deployed in multiple containers in Kubernetes and the frontend UI server has some sort of bug in it—either a misconfiguration or URL hack that allows an attacker to grab an access token. That token can be used to then talk to the API server and potentially get privileged information.
In this scenario, the access token that an attacker grabs from the frontend UI allows them to interrogate the Kubernetes API server, obtain the appropriate secret, and then query a database in the application and begin to see sensitive data. This is just one of many vulnerabilities through the API server that you can mitigate.
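One common mitigation, not spelled out in the interview but widely recommended, is to stop mounting a service account token into pods that never need to talk to the API server, so there is no token for an attacker to steal in the first place. A sketch, with illustrative pod and image names:

```yaml
# If a workload never needs to call the Kubernetes API, don't give it
# a service account token an attacker could exfiltrate.
apiVersion: v1
kind: Pod
metadata:
  name: frontend-ui
  namespace: team-payments          # illustrative namespace
spec:
  automountServiceAccountToken: false   # no token mounted into the pod
  containers:
    - name: ui
      image: registry.example.com/frontend-ui:1.4.2   # illustrative image
```

The same field can be set on a ServiceAccount object to make this the default for every pod that uses it, which is closer to least privilege than opting out pod by pod.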

The other important issue of API security is ensuring nothing is communicated in clear text. In other words, APIs need security in flight, which is provided by Ingress controllers; these also manage API encryption (SSL/TLS) in Kubernetes. They're like the R2-D2 for APIs. The default ingress controller that comes with most Kubernetes distributions is based on NGINX Open Source.
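As a sketch of that in-flight protection, an Ingress resource can terminate TLS for an API endpoint like this; the hostname, secret, and service names are illustrative:

```yaml
# An Ingress that terminates TLS so API traffic is never clear text
# in flight. The TLS certificate lives in a Secret you provision,
# rather than any default certificate shipped with the controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: team-payments
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-example-com-tls   # your own TLS Secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
```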
There is also a version of the ingress controller called NGINX Plus that adds useful API management features:
- Session persistence, which can improve the performance of the encrypted workloads; and
- Real-time monitoring and health checks, which can be important to your security posture.
To really address API security, consider using the Wallarm-instrumented ingress controller which can be either NGINX or NGINX Plus based. Wallarm provides in-line API attack detection and filtering based on adaptive security rules, customized for each API endpoint.
Go beyond default settings by setting up good policies and limiting access. Take control of your environments.
DevSecOps: Check before, during, and after.
The broader issues that encompass many of the vulnerabilities in your Kubernetes environments fall under the DevOps umbrella. More accurately, the challenge is integrating security into DevOps: "DevSecOps." It is an integrated approach to ensuring that security runs throughout the deployment lifecycle in a container-based operational environment.
DevSecOps is a pretty broad set of tools and processes. Your security in a Kubernetes environment depends on a variety of different components that are active in DevOps.
A few key K8s components.
Choose Wisely: Registry & Repository
In Kubernetes and container-based workloads, you need to pull images from a registry. For security, you have to make sure those container images are pulled from an authorized place. You also have to have some semblance of assurance that the container image is actually the right container image and hasn’t been compromised or changed in some unintended or unauthorized way.
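One hedge against tampered images is to pull only from your authorized registry and to pin images by digest rather than by a mutable tag, so the bits cannot silently change underneath you. A sketch, where the registry, names, and digest are all illustrative:

```yaml
# Pull from an authorized private registry and pin the image by digest.
# A tag like ":latest" can be repointed; a sha256 digest cannot.
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker
  namespace: team-payments
spec:
  imagePullSecrets:
    - name: registry-credentials    # credentials for the private registry
  containers:
    - name: worker
      image: registry.example.com/payments/worker@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
```

Admission controls or image-signing tooling can enforce this policy cluster-wide, but digest pinning alone already guarantees you run exactly the image you tested.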
DevOps tools can help maintain the integrity of the images in container-based workloads. At the very foundation of the DevSecOps cycle are tools for this purpose, like Git, GitHub, and GitLab; choose the right one for your environment. JFrog has a product called Artifactory that's used for private registries, cloud vendors have their own vendor-specific registry options, and Docker Hub is another.
Registries then draw on two different types of repositories. On the one hand, you have source repositories, represented by things like GitHub, Bitbucket, and GitLab. On the other, you have binary repositories, which are things like Docker Hub and JFrog's Artifactory.
The Force of Security is Stronger Inside CI/CD
Security testing inside your DevOps lifecycle can make the difference between having security and actually using it. Choosing the right tools, ones that are easily adopted and effectively used, is a critical component of securing your environment. Another very important stage is continuous integration (CI/CD). This is where you may see tools like Jenkins, CircleCI, and CloudBees, all of which can be leveraged to perform security testing. But are they effective in your own, distinct DevOps workflow?
DevOps security tools are also a big boost to overall code health and compliance. Testing tools, such as SourceClear, can be used in the CI process to make sure that source code is compliant and doesn't include any flagged or compromised code. They can test during the container build, and after the build, tests can ensure that containers are compliant before they're published to an authorized repository.
Consider Wallarm’s FAST for integrated testing that is super adoptable as it leverages the existing tests you are already doing.
“Difficult to see. Always in motion is the future.”
Yoda
The future is uncertain, except that it is always evolving—be that in attacks, tools, or your own environment and code. You need to stay aware of the threat landscape, internal goings-on, and new technology. Not checking all new code before production can leave vulnerabilities. It may not be about malicious attackers exploiting an emerging weakness. It may instead be a vulnerability in your own code that leads to non-compliance or data breaches that leave you stranded in the muck and potentially losing the war for security.
Logging
Another big component that has gone unmentioned so far is logging.
Given all of the different limit ranges and non-default configurations you put in place, you can log any exceptions and track them. Tools such as Splunk and Datadog can be very useful in the DevSecOps process for logging and monitoring. SignalFx, recently acquired by Splunk, also has some really good tools supporting OpenTracing for microservices workloads.
Track issues that come out of CI
Issue tracking tools, like Jira, can be used to track any vulnerabilities that come out of the CI/CD process so that they can be properly worked.
Go back to the start of the process to modify the source and the binaries for the containers, and run back through the CI process until the issue count is at an acceptable level for your minimum viable product. From there, for configuration management and continuous deployment or release, a variety of tools can be used to ensure consistency and reduce exposure due to manual error. Tools like Chef, Puppet, and Ansible have been extremely useful in managing configuration files, which could include Dockerfiles and other components. Finally, that code, in turn, can be published back into your source repositories and then back through the whole process yet again.
The targets for this refined working of issues can be on-prem, public cloud providers, or a hybrid of the two leveraging the container architecture.
This process is basically a brief overview of what DevSecOps should look like:
- TEST IN DEVOPS (without interruptions or external staging for tests), using integrated testing tools that are best for your environment;
- TRACK IDENTIFIED VULNERABILITIES that come out of the CI/CD process;
- WORK THOSE IDENTIFIED ISSUES, taking them back through the CI process until you meet your minimum viable product;
- USE TOOLS AND PROCESSES to adjust and manage configurations to ensure consistency and reduce vulnerabilities prior to deployment; and
- PUBLISH back into source repositories and start again.
As you can see, the whole process starts with and returns to security as a process, beginning with changing your defaults. It is as continuous as development and operations themselves, and it needs to run alongside them.
Thinking forward about security is not about achieving the right things and letting them run. It is about understanding that this industry is in as constant an evolution as the technology itself, battling threats and vulnerabilities that must be contained.
WWYS: Continuous mastery.
If Yoda were a 900-year-old Security master, he would say: “So certain were you. Go back and closer you must look.” (To be fair, this line is not actually in the script; it is a frequent misquotation.) The Force inside you is not enough. Keep checking, and rechecking, and checking again.
…
This article is adapted from an interview with Rich Hogan, Cloud Architect for NGINX, from a Wallarm webinar. Check out the full webinar at: