The Wallarm WAF protects an organization's applications and APIs against a wide range of attacks.  However, an organization may want greater visibility into attack traffic and alerts than the Wallarm user interface provides.  In particular, it is not possible to perform a full-text search of alert data within Nginx or the Wallarm user interface.

By exporting Nginx access logs to an ELK stack, full-text search of alert data becomes possible.  This enables an organization to gain a much deeper understanding of the types of attacks being performed against their applications and APIs.

Why Export Nginx Logs?

The Wallarm user interface provides the ability to search for a number of different types of attack traffic within alert data.  More information about the supported types of searches can be found in the Wallarm documentation.

However, the search functionality of the Wallarm UI does not provide full visibility into every type of potential attack or the full details of a particular alert.  If this level of visibility is needed, the access logs can be exported from Nginx into an ELK cluster, where it is possible to perform full-text searches and other data analytics against them.  This also allows the information to be fed into other systems the organization may use for threat detection or threat intelligence generation and analysis.

A couple of different options exist for exporting this data from Nginx.  This article focuses on the approach of sending Nginx access logs to an ELK cluster.  Alternatively, it is possible to use the Wallarm API to extract request information, as described in this blog.

Sending Nginx logs to ELK


How to Export Nginx Access Logs

Sending access logs from Nginx to an ELK cluster is a multi-stage process.  In this article, we’ll focus on configuring Nginx to export the logs in the correct format and send them to a listening TCP/UDP port.  The remaining steps of the process (configuring Logstash and Elasticsearch) are well-described on the Internet, and links are provided that contain more detail on how to accomplish these steps correctly.

Step 1: Configure JSON Access Log Output

The first step in setting up the export of Nginx access logs is configuring Nginx to send the data out in the desired format.  For ease of use, we’ll be defining a JSON-based log format for the export.

The following shows the structure of a custom log format that combines some standard Nginx attributes, such as number of connection requests and response status, with some Wallarm-specific data, such as the Wallarm attack type.  More information about the standard attributes available with Nginx is available in their documentation, and the Wallarm documentation describes the set of custom attributes that Wallarm defines.

Custom Attributes
log_format json_log escape=json
    '{'
    '"status": "$status",'
    '"route_uri": "$upstream_http_route_uri",'
    '"route_cookie": "$upstream_http_route_cookie"'
    '}';

The log format described above is a general-purpose sample that should work on most systems.  However, some customization may be necessary to match the needs of an organization's unique deployment environment.

One limitation of Nginx is that request headers are not logged automatically, so they must be added to the log format manually.  To do so, take the following steps:

  1. Identify all request headers that should be included in the log file (e.g., the Cookie header)
  2. Reference each header via its built-in $http_<header> variable, such as $http_cookie for the Cookie header
  3. Add these variables to the JSON log format in key:value form (such as '"http_cookie": "$http_cookie",')
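As a sketch, the steps above might produce a format like the following.  The choice of the Cookie and User-Agent headers here is purely illustrative:

```nginx
# Illustrative only: a JSON log format extended with two request headers.
# Nginx exposes each request header as $http_<name> (lowercased, with
# dashes replaced by underscores), e.g. User-Agent -> $http_user_agent.
log_format json_log escape=json
    '{'
    '"status": "$status",'
    '"http_cookie": "$http_cookie",'
    '"http_user_agent": "$http_user_agent"'
    '}';
```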

At this point, your new log format should be defined.  For more information on configuring logging in Nginx, visit the Nginx documentation.

Step 2: Ship Access Logs to Logstash

After configuring the Nginx access log files to be stored in a JSON format, the next step is to set up the export operation.  A couple of different options exist for sending Nginx logs to Elasticsearch.

One option is to use Fluentd to perform the transfer.  A local Fluentd agent can tail the JSON-formatted access log file and forward each entry to Elasticsearch.  For more information regarding this option, check out the Fluentd documentation.
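A minimal Fluentd agent configuration for this approach might look like the following.  The file paths, tag, Elasticsearch host, and index name are all assumptions to be adjusted for your environment, and the output stanza requires the fluent-plugin-elasticsearch plugin to be installed:

```
# Hypothetical Fluentd configuration: tail the Nginx JSON access log and
# forward each parsed entry to Elasticsearch.
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/fluentd/nginx-access.pos
  tag nginx.access
  <parse>
    @type json
  </parse>
</source>

<match nginx.access>
  @type elasticsearch
  host elasticsearch.example.com
  port 9200
  index_name wallarm-access
</match>
```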

Alternatively, it is possible to use Nginx’s built-in syslog client to perform the log file export.  However, this option requires the receiving Logstash instance to be configured to use the syslog plugin to receive input.  If this is the case, use the following configuration to send the access logs via syslog:

access_log syslog:server=<ADDRESS_OR_DNS_OF_COLLECTOR>:<PORT>,tag=wallarm_access json_log;

Step 3: Set Up Logstash to Receive Logs

After selecting a method to send the Nginx access log data to Logstash, the next step is configuring Logstash to receive it.  Information on configuring Logstash to receive syslog data is available in the Elastic documentation.
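A Logstash pipeline for the syslog-based approach from Step 2 could be sketched as follows.  The listening port, Elasticsearch host, and index name are assumptions; the json filter unpacks the JSON log entry carried in the syslog message body:

```
# Hypothetical Logstash pipeline: receive Nginx access logs via the
# syslog input, parse the JSON message body, and index the result.
input {
  syslog {
    port => 5140
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "wallarm-access-%{+YYYY.MM.dd}"
  }
}
```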

Step 4: Search Stored Access Logs Using Kibana

Once the log files have been successfully sent to Logstash, it is possible to search through them using Kibana.  Extensive documentation is available on the Internet for accomplishing this.
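As an illustration, a search in Kibana's Discover view might use a KQL query like the one below.  The field names are taken from the JSON log format defined earlier, and the specific values are hypothetical:

```
status >= 400 and route_uri: *login*
```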