You’re dizzied by hours of researching DevOps trends and hashing them out with colleagues. Take a breath and relax: this is the post you’ll want to read. We can save you time in meeting rooms and help you build efficiency and power into your CI/CD.
We’re going to cut through the fluff and show which DevOps trends are worth hitching your wagon (or budget) to in the coming years—and which should be marked with a hazard warning. From containers to chaos engineering, here are the DevOps trends to trash and the ones you’ll want to go fanboy on, starting with the “heck yesses.”
Container use is on the rise.
According to a survey by the Cloud Native Computing Foundation, 49% of companies were deploying 250 or more containers by the end of 2017 (and this number is expected to grow). The number of low-volume deployments (<50 containers) decreased while high-volume deployments (>5,000) increased.
Companies are adopting containers, and that won’t stop anytime soon. Containers offer ease and flexibility as more companies implement microservices to deliver functionality to customers. Moving to the cloud and containers is a way of future-proofing businesses that need to run slick, fast, and flexible. The increase in container use comes with new challenges:
- Reliable network communication between containers
- Security for new API-focused structures and cloud management
- The complexity of infrastructure (possibly hundreds of services deployed at one time)
- Monitoring large deployments of containers
- Scaling deployments as needed
Challenges notwithstanding, containers offer tremendous benefits that outweigh the lag time in security and organization. They are easy to deploy and easy to replace, fitting nicely with the “fail fast” mentality of sped-up CI/CD development cycles.
If you’ve implemented DevOps or want to, deploying containers gives maximum flexibility for operations and scaling with your business.
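Scaling deployments as needed is one of the challenges listed above, and it is usually solved with an autoscaler rather than by hand. As a minimal sketch, here is the kind of replica-count logic an autoscaler applies to a container deployment; the capacity figure and bounds are illustrative assumptions, not values from any real platform.

```python
# Sketch of autoscaling logic for a container deployment: pick a
# replica count from observed load. Thresholds are illustrative.
import math

def desired_replicas(requests_per_sec: float,
                     capacity_per_replica: float = 100.0,
                     min_replicas: int = 2,
                     max_replicas: int = 50) -> int:
    """Scale the deployment to handle current load, within bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(30))      # light load -> floor of 2 replicas
print(desired_replicas(1250))    # -> 13 replicas
print(desired_replicas(999999))  # capped at the 50-replica ceiling
```

Real orchestrators (Kubernetes, for example) apply the same idea to CPU or custom metrics; the point is that the policy is declared once and enforced automatically.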
Goodbye CI pipelines.
Hello, specialized CI/CD assembly lines.
Traditional CI/CD pipelines are no longer cutting it. The next generation is coming, bringing specialization and automation to faster, more secure workflows.
Continuous integration and continuous deployment (CI/CD) pipelines have been in use for many years. CI/CD pipelines began as a mechanism to automate the build and testing process of code. A developer writes some code, checks it into a source code repository, and the CI pipeline takes over. The code is packaged for delivery, tested, and readied for deployment to production.
Developer needs have become more complex, pushing CI tools to add more functionality. Applications are also more numerous and complex, but with some standardizable production components. The pipeline has become clogged and sluggish despite pushes for faster cycles. Now, a new trend has appeared: DevOps Assembly Lines.
Let’s look analogously at car assembly to better understand the movement to DevOps Assembly Lines. Car manufacturing relies on a series of specialized manufacturing lines. The engine, doors, axles, tires, transmission, and the like are each built in separate manufacturing lines. Then, one final assembly line takes the output of the others and transforms it into a new car.
DevOps Assembly Lines work in the same way for applications. A group of CI pipelines is linked together to form a single workflow. (Image below.) Assembly lines automate and connect specialized activities, performed by several teams, to create each application piece. Then, these branching pipelines are fed back into the main CI/CD pipeline.
One pipeline builds the infrastructure required to run the application, such as a group of containers or VMs running in the cloud. Another builds the application code. Another tests the application code while another scans the code for security vulnerabilities. Each separate function has its own pipeline and delivers an important piece of the final application.
Each pipeline can be maintained separately while functioning together. Looking at the whole assembly line, you can determine if any pipelines are bottlenecking. Lags or issues can be more easily identified and solved. Mundane tasks, like semantic versioning or network setup, can be automated. Each pipeline in the assembly line can be triggered automatically or by manual approval.
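Spotting a bottlenecking pipeline, as described above, can be as simple as comparing total stage times across the assembly line. The sketch below models that check; the pipeline names and timings are invented for illustration.

```python
# Sketch: model an assembly line as named pipelines with stage timings
# (in minutes), then flag the bottleneck. All data here is invented.
from typing import Dict, List

def bottleneck(pipelines: Dict[str, List[float]]) -> str:
    """Return the pipeline whose total stage time is longest."""
    return max(pipelines, key=lambda name: sum(pipelines[name]))

assembly_line = {
    "infrastructure": [4.0, 2.5],  # provision containers, configure network
    "build":          [3.0, 1.0],  # compile, package
    "test":           [6.0, 8.0],  # unit tests, integration tests
    "security-scan":  [5.0, 2.0],  # static analysis, dependency audit
}
print(bottleneck(assembly_line))   # -> test (14.0 minutes total)
```

In practice the timings would come from your CI tool’s build history, but the analysis is the same: optimize the slowest pipeline first.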
DevOps encourages automation and clear communication between the various stakeholders in any application deployment. Assembly lines encourage clear communication and cooperation among operations, development, security, product management, and testing groups. Make sure all of them feel a sense of ownership for their pipelines. Start assembly line thinking by finding manual tasks in your current workflow and exploring how an assembly line could automate them, solve or quarantine problems, or improve your DevOps efficiency.
Build in security for DevOps.
Implementing DevOps requires a shift in mindset. Developers learn more about infrastructure and operations. System admins begin to think more like developers, storing Puppet code in code repositories.
Secure DevOps also requires a shift in mindset. Developers have to start thinking about security early and often. Security experts have to learn about the processes and business goals pushing software development, so they can better protect the releases and overall business in the total business context.
Security in DevOps is accomplished in two ways. First, security needs to be a process distributed across your company and integrated into existing teams. Second, consider updating security solutions to fit modern technology landscapes and security concerns, like application-level security and AI/machine learning to combat enormous data loads.
There are tools and practices that help both developers and security sync with ongoing DevOps workflows. A big part of that is automating your security testing.
Automate security testing where it matters most.
Encryption is the backbone of many security functions. Whether you’re keeping customer data safe or encrypting network traffic with HTTPS, keys and certificates are necessary pieces of modern applications. Automate key and certificate management so that developers don’t have to wait, or worse, try to build it themselves. Repeatable, automated processes keep your applications safe without getting in the way.
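A large part of automated key and certificate management is simply never letting an expiry sneak up on you. Here is a minimal sketch of that rotation check; the hostnames, dates, and 30-day lead time are all assumptions, and a real system would read expiry data from an inventory or from the certificates themselves.

```python
# Sketch of automated certificate rotation: decide which certs need
# renewal instead of waiting for a human (or an outage) to notice.
# All cert records below are illustrative.
from datetime import datetime, timedelta

def needs_rotation(expires_at: datetime, now: datetime,
                   lead_time: timedelta = timedelta(days=30)) -> bool:
    """Renew well before expiry so deploys never block on a cert."""
    return expires_at - now <= lead_time

now = datetime(2018, 6, 1)
certs = {
    "api.example.com": datetime(2018, 6, 10),
    "www.example.com": datetime(2019, 1, 1),
}
due = [name for name, exp in certs.items() if needs_rotation(exp, now)]
print(due)  # -> ['api.example.com']
```

Run on a schedule and wired to an automated issuer, a check like this is the repeatable process the paragraph above describes: developers never wait, and nothing expires silently.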
Integrating security tests alongside functional tests in your CI pipeline increases test coverage and reduces the risk of dangerous bugs creeping into your code. Scan any open source code you use so you don’t put customer data at risk with unsafe versions.
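An open source scan of the kind described above can run as just another CI test: compare your pinned dependency versions against an advisory list and fail the build on any match. The sketch below hard-codes a hypothetical advisory list; real pipelines would pull advisories from a vulnerability database.

```python
# Sketch of a dependency check run as a CI test. The package names
# and advisory data are invented for illustration.
ADVISORIES = {
    "leftpad": {"1.0.0"},           # hypothetical unsafe versions
    "webfmw": {"2.1.0", "2.1.1"},
}

def unsafe_dependencies(pinned: dict) -> list:
    """Return (package, version) pairs that match an advisory."""
    return [(pkg, ver) for pkg, ver in pinned.items()
            if ver in ADVISORIES.get(pkg, set())]

pinned = {"leftpad": "1.0.0", "webfmw": "2.2.0"}
issues = unsafe_dependencies(pinned)
assert issues == [("leftpad", "1.0.0")]  # fail the build on any hit
print("vulnerable:", issues)
```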
Despite your best efforts, vulnerabilities may still slip into production. Runtime protection solutions provide real-time protection against attacks in the wild. Software tools like runtime application self-protection (RASP) and web application firewalls (WAFs) intercept attacks before they reach the vulnerable code. These attacks are recorded, letting you know what attackers are trying to do to your systems. Next-gen WAFs offer even more sophisticated AI and attack verification to decrease false positives and improve security while keeping your applications safe in production.
Automated systems can be purchased, and even run, yet still not deliver value or protection. Avoid solutions that over-promise and under-deliver. Some vendors claim to automate security but then demand huge resources for administration and updating. Others are disruptive or take a long time to deploy and configure. Both problems lead to low adoption.
Even when you automate, make sure your response team is equipped for success. Alerts and alarms are key to keeping your applications safe, but high false-positive rates or tedious tuning lead to overwhelmed or sloppy management. The trick is to tune alerts so teams aren’t constantly running around with their hair on fire. Instead, they get the right messages at the right time.
Once you find the balance, you’ll be able to respond before any damage is done. You can also boost code integrity and strengthen security by introducing deep learning tools that learn the baseline for your environment, traffic, and code, then identify and respond to abnormalities or weaknesses.
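One simple tuning rule that cuts false-positive noise: only page a human when the same alert fires repeatedly inside a short window, so one-off blips never wake anyone. This is a minimal sketch; the three-firings-per-minute threshold is an assumption you would tune per alert.

```python
# Sketch of alert tuning: suppress isolated firings, page on a burst.
# Threshold and window are illustrative assumptions.
from collections import deque

class AlertGate:
    def __init__(self, threshold: int = 3, window: float = 60.0):
        self.threshold, self.window = threshold, window
        self.events = deque()

    def should_page(self, timestamp: float) -> bool:
        """Record one firing; page only after `threshold` firings in `window` seconds."""
        self.events.append(timestamp)
        # Drop firings that have aged out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold

gate = AlertGate()
print(gate.should_page(0.0))   # False - a single blip
print(gate.should_page(10.0))  # False
print(gate.should_page(20.0))  # True - three firings inside one minute
```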
Artificial intelligence and machine learning for DevOps
Much has been said about the benefits of artificial intelligence (AI) and machine learning (ML). The same benefits can be seen in a DevOps environment in three major ways.
First, AI and ML greatly increase the ability to gain fast feedback from systems. AI and ML algorithms can take in massive amounts of performance metrics from our systems and predict when problems may occur ahead of time. System admins can take those warnings and fix the problem before any real damage is done. (Bonus: code health gets a boost as teams receive feedback about their code!)
Second, AI and ML will strengthen monitoring and alerting without heavy resources. Any security team is all-too-familiar with floods of alerts coming from various systems. Examining all the alerts and determining which are false positives is humanly impossible. AI and ML algorithms are much better at combing through massive amounts of data and accurately identifying which incidents are actual vulnerabilities in the application’s source code.
Finally, runtime security is revolutionized by AI and ML technologies. Learning how applications and users behave by using live data, AI and ML algorithms can detect anomalies and stop attacks in real time.
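The anomaly detection described above starts from a learned baseline. As an intentionally tiny illustration, the sketch below learns a mean and spread from historical latency samples and flags live points far outside them; the data and the three-sigma threshold are assumptions, and production ML models are considerably richer than a z-score.

```python
# Sketch of baseline anomaly detection on a performance metric.
# History, live samples, and threshold are illustrative.
import statistics

def anomalies(history, live, z_threshold: float = 3.0):
    """Flag live samples more than z_threshold deviations from baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in live if abs(x - mean) > z_threshold * stdev]

latency_ms = [100, 102, 98, 101, 99, 103, 97, 100]  # normal baseline
print(anomalies(latency_ms, [101, 99, 250, 104]))   # -> [250]
```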
So, why hasn’t everyone adopted AI security helpers? AI and ML technologies can seem complicated, especially if your DevOps processes are still maturing or your resources are dedicated to everything else. However, there are solutions ideally suited to grow with companies. From start-up to Fortune 500, look for products featuring built-in ML models that focus on easy deployment. You’ll be able to use AI and ML without training the models yourself or taking on heavy tuning and administration.
Say yes to the service mesh.
Consider a service mesh to help developers focus on application code and secure microservices.
Microservices are small chunks of code that do one thing well. Most applications today present a cohesive view to users, but in truth they are made up of smaller applications communicating over a network.
The many interdependencies between services create new failure modes. If one service goes down, it has downstream impacts on all of the services that depend on it. One small failure can lead to a large failure.
Service meshes were created to help microservices communicate with each other and prevent problems due to downtime. They provide powerful primitives that give services extra protection without developers having to write huge amounts of error handling code. Service meshes provide:
- Dynamic routing – great for canary routing, traffic shadowing, and blue/green deployments
- Resilience – use techniques such as circuit breaking and rate limiting to mitigate the risk of failures
- Observability – collect metrics and add context to service-to-service communication
Developers are freed to focus on business logic while the service mesh handles getting requests to the right services. Service meshes are great tools for enabling the speed and fast feedback DevOps requires.
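The resilience techniques listed above, circuit breaking among them, are exactly the error handling code developers no longer have to write. To show the idea, here is a toy circuit breaker in application code; in a mesh like Istio or Linkerd this behavior is configuration on the proxy, not code, and the failure threshold here is an arbitrary assumption.

```python
# Sketch of circuit breaking: after a run of failures, fail fast so a
# sick upstream service can recover instead of being hammered.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, request_fn, *args):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = request_fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result

breaker = CircuitBreaker()
def flaky():
    raise IOError("upstream timeout")

for _ in range(3):
    try:
        breaker.call(flaky)
    except IOError:
        pass
try:
    breaker.call(flaky)  # the fourth call never reaches the service
except RuntimeError as e:
    print(e)             # -> circuit open: failing fast
```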
Embrace chaos engineering.
Made famous by Netflix, chaos engineering has developed a foothold in DevOps thinking, strategy, and practices by promising resilience.
Chaos engineering builds resilience in large, distributed systems. Testing for everything that may go sideways is impossible in a large system with endless variables. Chaos engineering performs experiments in production to weed out problems you’d never notice otherwise.
Chaos engineering follows these principles:
- Build a hypothesis around steady-state behavior.
- Vary real-world events.
- Run experiments in production.
- Automate experiments to run continuously.
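The principles above reduce to a small loop: state a steady-state hypothesis, vary a real-world event, and check whether the hypothesis survives. Here is a toy sketch of that loop against a made-up redundant system; a real experiment would kill actual instances and read real metrics.

```python
# Sketch of a chaos experiment: hypothesis, injected failure, check.
# The "system" is a toy dict of replica health flags.
def steady_state(replicas) -> bool:
    """Hypothesis: users can be served as long as any replica is up."""
    return any(replicas.values())

def run_experiment(replicas, kill: str) -> bool:
    assert steady_state(replicas), "system unhealthy before the experiment"
    replicas[kill] = False        # vary a real-world event: kill one replica
    return steady_state(replicas)  # did the hypothesis survive?

replicas = {"us-east": True, "us-west": True, "eu-west": True}
print(run_experiment(replicas, kill="us-east"))  # -> True, users unaffected
```

If the experiment returns False, you have found exactly the kind of user-visible failure the section above says should be the focus.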
Chaos engineering is not chaotic engineering; it demands purpose and purposeful testing. That may mean that chaos engineers need to better understand the business logic and context underlying testing. Saying a system “works properly” under stress is too vague. A specific goal has to be at the center of using chaos engineering. For example, some services could go down as a result of tests that have no effect on the end user experience. It’s important to fix these errors, but the main experiment should focus on what could cause your users to see a failure.
In Netflix’s case, they only care that users can watch content on their platform without interruption. Their chaos engineering experiments focus on creating situations where a user cannot watch content, pinpointing unforeseen problems, and fixing them.
There are some prerequisites to chaos engineering. First, you need good monitoring in place. Next, determine your top five most critical services. Then, pick a service and create a hypothesis you want to test.
Chaos engineering revitalizes the tired concept of disaster recovery by making it a useful ongoing practice. Most companies have a disaster recovery plan in place. But, how often do they exercise it? Chaos engineering creates controlled disasters so your systems, and employees, can properly react to real problems.
Chaos engineering is not ideally suited for everyone, but it is a trend to watch in the next few years.
DevOps Trends to Ignore
Now let’s look at some DevOps trends you can safely ignore. “Trends” is a common word, especially in marketing. Companies want to know what’s coming around the corner for fear they may miss out on “the next big thing.” But some trends are more bluster than substance. While there may be some merit to espoused practices, it may not really be a new “trend” as much as a good practice wrapped in a new and exciting wardrobe.
If you jump on every single bandwagon that comes along, you’ll end up at someone else’s destination. Be the driver.
Automation is so yesterday’s fake news.
We’re being a bit funny here. We entirely believe in automation, but the idea that it is “trending” is problematic. Automation is as valuable as it is inevitable. It’s needed to keep up with the variation and volume of data. It will help run and protect your business. It may even help morale. Automation doesn’t replace humans. Assigning repeatable tasks to machines frees human resources up for more interesting, specialized work. Automation has always been a part of business strategy. From Henry Ford’s assembly line to the AWS API, automation has been the goal of business owners for decades.
When automation is labeled a trend, we focus on any and all automation instead of treating it as a means to augment and complement existing business practices and goals. Look into what the automation is streamlining and perform basic hygiene on your operations. Do you really need five approvals on that expense report? Automating a five-approval process won’t net you any speed if the process itself is broken. Likewise, businesses are changing and infrastructure is taking a new shape with the move to the cloud. Automating a WAF at the top level might still leave you vulnerable at the application layer, where a lot of new infrastructure lives.
In recent years, misgivings about automation have set in for a number of businesses because of false promises or partial automation that ends up requiring a lot of specialized upkeep. Still, automation has always been part of scaling innovation. If it was true for car production, it’s even truer under the accelerating pace of digital transformation.
Create better business processes before trying to automate. Once you’ve evaluated your operations, focus on where and how automating repeatable, mundane tasks will provide the most impact. Automation itself is not the key; using it wisely is.
Edge computing should remain on the edge.
Edge computing refers to increasing the power and capabilities of devices on the “edge” of your network, such as smart home devices and Internet of Things (IoT) devices.
Edge computing has many good applications. Many of these applications are very new, however, and tend to have a very specific use case. While some companies are investing heavily in this space, most companies will benefit from waiting a little longer to see how edge computing plays out and how it relates to their business.
Edge computing is its own field and has little to do with DevOps at all. Labeling edge computing as a DevOps trend gives the idea that all companies should be trying to build IoT devices. That simply isn’t true. Unless your company provides services based on physical objects, there likely isn’t much of an edge computing application for you.
Keep an eye on edge computing and watch how it develops. In the meantime, use your budget on something else.
Microservices are not your savior.
Anyone within a mile of the nearest IT department has heard of microservices. Microservices are small services that do one thing well. A group of microservices may form the backbone of a single application.
Yes, we did label service meshes for microservices as a trend to pay attention to. That’s because if you’re using microservices, you should build them the best way possible. Chaos engineering will also help you do that.
Microservices, like automation, are not an end in themselves. Managers should be careful not to rush to their dev team leads asking where they can squeeze in microservices.
Don’t use microservices just because you can. Only use them if they will solve a business problem for you. Remember that microservices also introduce new problems (which technologies like service meshes and application-layer security are built to solve).
As in the case of Segment, you may find yourself slowed down or overwhelmed by the hordes of microservices you’ve brought into your domain; in the end, Segment got rid of its microservices and went back to a monolith. Look for the microservices that play best with your business practices, tools, and goals.
Technologies and methodologies come and go. But your architecture is much harder to change than a trend list on the Internet. Use the right tool for the job, regardless of how “cool” a fad looks or what it promises. Being market savvy is far different from being market-led.
DevOps doesn’t require the Cloud.
Sometimes, trends lead to more problems than they solve. Or the adoption comes before the ideal time, or in the wrong way. The sense of urgency is real, but a rush job can leave a lot of messes down the line.
Cloud is an example of this race to use the latest technology. Cloud technologies give huge amounts of power and flexibility. But like microservices, the cloud should be used for what the cloud is good for. And in the right way. Cloud providers, like AWS, are very clear in their Shared Responsibility Model that the client is responsible for their own data security.
DevOps doesn’t depend on where your code sits. You can have an on-premise application follow DevOps principles. DevOps principles apply whether you are completely in the cloud, run a hybrid, or are all on-premise.
Your success does not pivot on any single buzzword. It grows outward from the trends best suited to your own groundwork.
Again, always use the right tools for the right situation. Migrating to the cloud may not always be necessary. You can follow DevOps principles no matter where the code is.
Not All DevOps Trends Are Worth Following
Your success depends on determining which trends will drive success for you. We love new technology and its promises. But sometimes it’s just not the right fit, or not the right fit right now. There’s no such thing as universal “best practices,” only best practices for your company.
Guide your company toward the DevOps practices that will drive real value. Ignore the rest.