Many of the developers we speak to are interested in taking advantage of Google Cloud Platform for developing and hosting their web applications. The advantages are many: reasonable costs, built-in scalability, and a high level of availability built right into the platform.

However, these developers face a question: if my application runs in dynamic instances on Google Cloud and I have only limited control of the routing, how do I make sure the application is protected?

For example, I cannot protect the application with a traditional hardware firewall such as Imperva or F5. I also don't want to share my traffic with a third-party cloud protection service like Incapsula.

One solution is Wallarm, integrated into NGINX/NGINX Plus.

NGINX (and its commercial build, NGINX Plus) HTTP(S) load balancing is an excellent tool for scaling web applications, allowing you to route your traffic through a single IP address and distribute it across multiple dynamic backend nodes. In addition, HTTP(S) load balancing allows you to route incoming requests with a high degree of granularity.
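As a rough sketch, an NGINX HTTP load-balancing configuration looks like the following (the upstream name and backend addresses here are hypothetical placeholders, not values from this tutorial):

```nginx
# Hypothetical backend pool; replace with your instances' internal addresses.
upstream app_backend {
    server 10.128.0.2:8080;
    server 10.128.0.3:8080;
}

server {
    listen 80;

    location / {
        # Distribute incoming requests across the backend nodes.
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```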

This Google Cloud Platform implementation guide walks you through the steps of installing the NGINX load balancer in your Compute Engine environment.

Wallarm installs right into the NGINX / NGINX Plus instance as a software module, making it a perfect fit for the scalable, distributed architecture of Compute Engine.

The tutorial below walks you through the steps of retrieving the Wallarm packages from the repository, connecting to the Wallarm cloud service, and installing the Wallarm WAF into your Google Cloud environment.

How to install Wallarm on Google Cloud

1. Create a new static IP address:

gcloud compute addresses create lb-ip --region us-central1

2. Create a firewall rule to allow external HTTP traffic to reach your load balancer instance. Later in this tutorial, you will configure the load balancer to redirect this traffic through your HTTPS proxy:

gcloud compute firewall-rules create http-firewall --target-tags lb-tag --allow tcp:80

3. Create a new Compute Engine instance, assign your new static IP address to the instance, and tag the instance so that it can receive both HTTP and HTTPS traffic:

gcloud compute instances create nginx-lb --zone us-central1-f --address lb-ip --tags lb-tag,be-tag

4. Establish an SSH connection to the nginx-lb instance:

gcloud compute ssh nginx-lb

5. Add the required repositories:

apt-key adv --keyserver <keyserver-url> --recv-keys 72B865FD
echo 'deb <repository-url> jessie/' > /etc/apt/sources.list.d/wallarm.list

(The keyserver and repository URLs were dropped from the original commands; substitute the values from the Wallarm documentation for the placeholders.)
apt-get update

6. Install Wallarm packages:

apt-get install --no-install-recommends wallarm-node nginx-wallarm

7. Set up the post-analytics module:

vi /etc/default/wallarm-tarantool
systemctl restart wallarm-tarantool
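For reference, this file controls how much memory is allocated to the Tarantool post-analytics storage; the sketch below assumes the stock SLAB_ALLOC_ARENA variable, and the 2.0 GB value is purely illustrative — size it to your instance's RAM:

```shell
# /etc/default/wallarm-tarantool
# Memory (in GB) allocated to the Tarantool post-analytics storage.
# 2.0 is an illustrative value; adjust to your instance size.
SLAB_ALLOC_ARENA=2.0
```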

8. Install the license key (paste from buffer to file):

vim /etc/wallarm/license.key
chmod 0640 /etc/wallarm/license.key
chown root:wallarm /etc/wallarm/license.key

9. Connect the node to the Wallarm Cloud:


Run the node registration script shipped with the Wallarm packages. When started, the script will ask for the username and password that you use to log in to the Wallarm interface. The account used is assumed to have the privileges to connect a new filter node; if those privileges are absent, the script will display an error message.

10. Set up the filtration mode:

Uncomment the wallarm_mode directive in the /etc/nginx-wallarm/conf.d/wallarm.conf file to set up filtering. With wallarm_mode monitoring, Wallarm only marks malicious requests; with wallarm_mode block, malicious requests are blocked.

An example of the content of the file:

vi /etc/nginx-wallarm/conf.d/wallarm.conf

# Wallarm module specific parameters
wallarm_mode monitoring;
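To see how the directive applies to real traffic, here is a sketch of a site configuration that filters requests before proxying them to the backends; the upstream name and address are hypothetical placeholders, not part of the Wallarm setup above:

```nginx
# Hypothetical site config: inspect traffic before it reaches the app.
upstream backend {
    server 10.128.0.2:8080;  # replace with your backend instances
}

server {
    listen 80;

    location / {
        # Mark (monitoring) or block (block) malicious requests
        # before they are proxied to the application.
        wallarm_mode monitoring;
        proxy_pass http://backend;
    }
}
```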

So you've instantly got a WAF for NGINX and your apps on Google Cloud just by adding a dynamic module to your load balancer!

You'll find more details on setting up NGINX on Google Cloud in this Google tutorial.