Newly updated Wallarm Node images now natively support autoscaling in AWS, GCP, and Azure. The updated images are already available in the cloud provider marketplaces and can rely on each provider's native autoscaling to adjust the number of nodes based on traffic, CPU load, and other parameters.
Many of our customers rely on autoscaling capabilities to horizontally scale their apps and APIs. Autoscaling mechanisms monitor your applications and automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.
Earlier releases could already scale Wallarm Nodes out automatically based on load through native Kubernetes support. Now you can also dynamically add nodes or remove underutilized ones using the native autoscaling mechanisms of AWS, GCP, and Azure.
Native autoscaling is supported in Wallarm Node 2.12.0+. Find the images in Amazon Web Services, Google Cloud Platform, and Microsoft Azure marketplaces.
Let’s take a look at setting up autoscaling in AWS.
You can scale the number of instances based on standard load parameters such as traffic volume and CPU utilization.
For example, in AWS you can set up the following policy for a group of Wallarm Node instances:
If Average CPU Utilization exceeds 60% for over 5 minutes, then add 2 more nodes.
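The decision logic behind such a step-scaling policy can be sketched in a few lines. This is a minimal illustration, not Wallarm or AWS code: the function name, the per-minute sampling, and the sample values are assumptions; only the 60% threshold, 5-minute window, and +2 adjustment come from the example policy above.

```python
from statistics import mean

# Illustrative step-scaling sketch: if average CPU utilization over the
# evaluation window exceeds the threshold, scale out by a fixed step.
CPU_THRESHOLD = 60.0   # percent, from the example policy
SCALE_OUT_STEP = 2     # nodes to add when the threshold is breached

def desired_capacity(current_nodes: int, cpu_samples: list[float]) -> int:
    """Return the node count after evaluating one window of CPU samples."""
    if mean(cpu_samples) > CPU_THRESHOLD:
        return current_nodes + SCALE_OUT_STEP
    return current_nodes

# One sample per minute over a 5-minute window:
print(desired_capacity(3, [70, 65, 72, 68, 71]))  # averages 69.2% -> 5
print(desired_capacity(3, [40, 45, 50, 42, 48]))  # averages 45.0% -> 3
```

In practice the cloud provider evaluates the metric and adjusts the Auto Scaling group for you; a symmetric scale-in policy would remove underutilized nodes the same way.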
Detailed step-by-step tutorials on setting up autoscaling are available in Wallarm Docs for AWS, GCP, and Azure.
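In AWS, the example policy boils down to two pieces: a CloudWatch alarm on average CPU for the Auto Scaling group, and a step-scaling policy attached to that group. The sketch below builds the request parameters you would pass to boto3's `put_scaling_policy` and `put_metric_alarm` calls; the group and policy names are placeholders, and no AWS call is made here.

```python
# Hypothetical sketch of the API parameters for the example policy.
# In a real script, these dicts would be passed to boto3 clients:
#   autoscaling.put_scaling_policy(**scaling_policy)
#   cloudwatch.put_metric_alarm(**cpu_alarm)

def build_scale_out_config(group_name: str) -> tuple[dict, dict]:
    scaling_policy = {
        "AutoScalingGroupName": group_name,
        "PolicyName": f"{group_name}-cpu-scale-out",
        "PolicyType": "StepScaling",
        "AdjustmentType": "ChangeInCapacity",
        # Add 2 nodes once the alarm threshold is breached.
        "StepAdjustments": [
            {"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2},
        ],
    }
    cpu_alarm = {
        "AlarmName": f"{group_name}-cpu-high",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Dimensions": [
            {"Name": "AutoScalingGroupName", "Value": group_name},
        ],
        "Period": 300,            # one 5-minute evaluation window
        "EvaluationPeriods": 1,
        "Threshold": 60.0,        # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
        # AlarmActions would reference the scaling policy's ARN,
        # returned by the put_scaling_policy call.
    }
    return scaling_policy, cpu_alarm

policy, alarm = build_scale_out_config("wallarm-node-group")
print(policy["StepAdjustments"][0]["ScalingAdjustment"])  # 2
print(alarm["Threshold"])                                 # 60.0
```

Keeping the parameters in plain dicts like this makes the policy easy to review and reuse across groups before wiring it to the actual API calls.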