New Whitepaper On How Wallarm AI Works

“Any sufficiently advanced technology is indistinguishable from magic,”
Arthur C. Clarke

Ever wanted to look under the covers of a deep learning/artificial intelligence engine? Deep learning algorithms are generally based on neurons combined into a neural network and interacting with each other, much like in a human brain. To get good results with neural networks, it is critical to pick the right network topology, which has traditionally been a difficult manual task.

One of the most interesting concepts in modern AI architectures is using a meta-AI to find the best architecture for the task.

“Intelligence involves a great deal more than ability to follow rules… it is also the ability to make up the rules for oneself, when they are needed, or to learn new rules through trial and error.”
Steve Grand, author of Creation

This concept is called Reinforcement Learning: a parent AI reviews the efficiency of a child AI and adjusts the child's neural network (for example, the number of layers, the weights, or the regularization methods) to improve that efficiency. We covered the details of a reinforcement learning implementation, based on some of the code Google has open-sourced, in an earlier blog post.
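To make the parent/child loop concrete, here is a minimal, illustrative sketch of architecture search with reward feedback. All names are hypothetical, and the reward function is a toy stand-in for actually training a child network; it is not Wallarm's implementation.

```python
import random

# Candidate architectures: here, just the number of hidden layers.
CANDIDATE_LAYERS = [1, 2, 3, 4, 5]

def child_reward(num_layers: int) -> float:
    """Toy proxy for training a child network and measuring validation
    accuracy. It peaks at 3 layers; a real system would train on data."""
    return 1.0 - abs(num_layers - 3) * 0.2 + random.uniform(-0.05, 0.05)

def search(episodes: int = 200, epsilon: float = 0.1) -> int:
    """Parent loop: pick an architecture, observe the child's reward,
    and shift future choices toward better-performing architectures."""
    totals = {n: 0.0 for n in CANDIDATE_LAYERS}  # cumulative reward
    counts = {n: 0 for n in CANDIDATE_LAYERS}    # times each was tried

    def avg(n: int) -> float:
        return totals[n] / counts[n] if counts[n] else 0.0

    for _ in range(episodes):
        if random.random() < epsilon:
            choice = random.choice(CANDIDATE_LAYERS)       # explore
        else:
            choice = max(CANDIDATE_LAYERS, key=avg)        # exploit
        totals[choice] += child_reward(choice)
        counts[choice] += 1
    return max(CANDIDATE_LAYERS, key=avg)

if __name__ == "__main__":
    random.seed(0)
    print("best layer count:", search())
```

This reduces the parent to a simple multi-armed bandit; published neural architecture search systems use a trained controller network instead, but the feedback loop (propose, evaluate, reinforce) is the same shape.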

Wallarm Reinforcement Learning

Wallarm applies the same AI concepts to profile applications and to decide what's normal within the application profile. This is a major improvement over traditional protection methods, which become more cumbersome or break altogether as applications and attacks grow in complexity and sophistication.

Training neural networks, particularly in the case of reinforcement learning, can take some time. To speed up the process, we use NVIDIA GPUs. We are excited to work with NVIDIA and other advanced AI startups as a member of the NVIDIA Inception program.

The main task of run-time application security is to protect modern applications and APIs.

With AI, we can solve many of the application security challenges that plague legacy solutions:

• Applications are different both in structure and in content. Things that are harmful to one application may be perfectly normal for another.

• User behavior varies between both the applications and the individual application functions.

For example, several login calls per second may indicate a credential stuffing attack, while several data-layer queries per second may be a normal part of building a correlated data set.

• The number of known attacks keeps growing, with attack patterns (or signatures) sometimes hidden within nested protocols.

• A straightforward implementation of signature-based attack detection often results in a high rate of false positives and false negatives.
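The credential-stuffing example above can be sketched as a per-endpoint rate profile: the same burst of calls is anomalous for a login endpoint but normal for a data-layer endpoint. This is an illustrative toy, not Wallarm's implementation; the endpoint names and thresholds are assumed for the example.

```python
from collections import deque

# Assumed per-endpoint thresholds: max calls per client per window.
THRESHOLDS = {"/login": 3, "/query": 50}
WINDOW_SECONDS = 1.0

class RateProfile:
    """Sliding-window call counter keyed by (client, endpoint)."""

    def __init__(self):
        self.events = {}  # (client, endpoint) -> deque of timestamps

    def observe(self, client: str, endpoint: str, now: float) -> bool:
        """Record a call; return True if the rate looks anomalous."""
        q = self.events.setdefault((client, endpoint), deque())
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop events that fell out of the window
        return len(q) > THRESHOLDS.get(endpoint, 10)

profile = RateProfile()
# Five login attempts within half a second: suspicious for /login ...
alerts = [profile.observe("10.0.0.5", "/login", t * 0.1) for t in range(5)]
print(alerts)
# ... while the same burst against /query stays under its threshold.
ok = [profile.observe("10.0.0.5", "/query", t * 0.1) for t in range(5)]
print(ok)
```

Fixed thresholds like these are exactly what an AI-driven profile replaces: instead of hand-tuning a number per endpoint, the system learns what a normal rate is for each application function.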

Wallarm has recently published a whitepaper that takes an in-depth look at the inner workings of several stages of Wallarm AI, both in the Wallarm Cloud and locally on the Wallarm filtering node.

Download the whitepaper to learn more about Reinforcement Learning and Wallarm AI.