Newly updated Wallarm Node images now natively support autoscaling in AWS, GCP, and Azure. The updated images are already available in the cloud providers' marketplaces, and deployments based on them can use each provider's native autoscaling to adjust the number of nodes based on traffic, CPU load, and other parameters.
Many of our customers rely on autoscaling capabilities to horizontally scale their apps and APIs. Autoscaling mechanisms monitor your applications and automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.
Earlier releases could already scale Wallarm Nodes out automatically based on load through native Kubernetes support. Now you can also dynamically add nodes or remove underutilized ones using the native autoscaling mechanisms of AWS, GCP, and Azure.
Native autoscaling is supported in Wallarm Node 2.12.0+. Find the images in Amazon Web Services, Google Cloud Platform, and Microsoft Azure marketplaces.
Let’s take a look at setting up autoscaling in AWS.
You can scale the number of instances based on several standard load parameters, including CPU utilization, incoming traffic volume, and other metrics exposed by the cloud provider's monitoring service.
For example, in AWS you can set up the following policy for a group of Wallarm Node instances:
If average CPU utilization exceeds 60% for more than 5 minutes, then add 2 more nodes.
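To make the policy concrete, here is a minimal sketch of the decision logic such a rule encodes. This is an illustrative model only, not Wallarm or AWS code; the names, threshold, and step size are assumptions taken from the example above:

```python
# Simplified model of a scale-out policy:
# "If average CPU utilization exceeds 60% for over 5 minutes, add 2 nodes."
# All names and values are illustrative, not a real AWS or Wallarm API.

CPU_THRESHOLD = 60.0      # percent
BREACH_PERIOD = 5 * 60    # seconds the threshold must be exceeded
SCALE_OUT_STEP = 2        # nodes to add when the condition fires

def nodes_to_add(samples, interval_seconds):
    """samples: chronological average-CPU readings, one per interval.

    Returns SCALE_OUT_STEP if the threshold was exceeded continuously
    for at least BREACH_PERIOD, else 0.
    """
    needed = BREACH_PERIOD // interval_seconds
    streak = 0
    for cpu in samples:
        streak = streak + 1 if cpu > CPU_THRESHOLD else 0
        if streak >= needed:
            return SCALE_OUT_STEP
    return 0

# 1-minute samples: five consecutive breaches trigger a scale-out
print(nodes_to_add([65, 70, 72, 68, 61], 60))  # -> 2
print(nodes_to_add([65, 70, 55, 68, 61], 60))  # -> 0
```

In a real deployment this evaluation is done for you by the cloud provider's monitoring service (e.g. CloudWatch in AWS), which triggers the scaling policy when the alarm condition holds.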
These are the steps required to set up autoscaling in AWS.
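As one hedged sketch of that setup, the example policy could be wired up with the AWS CLI roughly as follows. The Auto Scaling group name `wallarm-nodes`, the alarm name, and the ARN placeholder are hypothetical; adapt them to your environment:

```shell
# 1. Create a scale-out policy for an existing Auto Scaling group
#    of Wallarm Node instances ("wallarm-nodes" is a placeholder).
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name wallarm-nodes \
  --policy-name wallarm-scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2
# Note the PolicyARN printed by the command above.

# 2. Create a CloudWatch alarm that fires the policy when average
#    CPU utilization stays above 60% for 5 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name wallarm-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 60 \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=AutoScalingGroupName,Value=wallarm-nodes \
  --alarm-actions <PolicyARN-from-step-1>
```

The same result can be achieved through the AWS Console; see the tutorials linked below for the full walkthrough.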
Detailed tutorials on how to set up autoscaling are available in Wallarm Docs for AWS, GCP, and Azure.