
Exporting Nginx Access Logs to an ELK Cluster

The Wallarm WAF protects an organization's applications and APIs against a wide range of attacks.  However, an organization may want deeper visibility into attack traffic and alerts than the Wallarm user interface provides.  In particular, neither Nginx nor the Wallarm UI supports full-text search of alert data.

By exporting Nginx access logs to an ELK (Elasticsearch, Logstash, Kibana) stack, full-text search of alert data becomes possible.  This gives an organization a much deeper understanding of the types of attacks being performed against its applications and APIs.

Why Export Nginx Logs?

The Wallarm user interface provides the ability to search for a number of different types of attack traffic within alert data.  More information about the supported types of searches can be found in the Wallarm documentation.

However, the search functionality of the Wallarm UI does not provide full visibility into every type of potential attack or the full details of a particular alert.  If this level of visibility is necessary, the access logs can be exported from Nginx into an ELK cluster, where full-text searches and other data analytics can be performed against them.  This data can also be fed into other systems that the organization uses for threat detection or for threat intelligence generation and analysis.

A couple of options exist for exporting this data from Nginx.  This article focuses on sending Nginx access logs to an ELK cluster.  Alternatively, the Wallarm API can be used to extract request information directly.



How to Export Nginx Access Logs

Sending access logs from Nginx to an ELK cluster is a multi-stage process.  In this article, we’ll focus on configuring Nginx to export the logs in the correct format and send them to a listening TCP/UDP port.  The remaining steps of the process (configuring Logstash and Elasticsearch) are well-described on the Internet, and links are provided that contain more detail on how to accomplish these steps correctly.

Step 1: Configure JSON Access Log Output

The first step in setting up the export of Nginx access logs is configuring Nginx to send the data out in the desired format.  For ease of use, we’ll be defining a JSON-based log format for the export.

The following shows the structure of a custom log format that combines some standard Nginx attributes, such as number of connection requests and response status, with some Wallarm-specific data, such as the Wallarm attack type.  More information about the standard attributes available with Nginx is available in their documentation, and the Wallarm documentation describes the set of custom attributes that Wallarm defines.

Custom Attributes
    log_format json_log escape=json '{'
        '"connection_serial_number":$connection,'
        '"number_of_requests":$connection_requests,'
        '"response_status":"$status",'
        '"body_bytes_sent":$body_bytes_sent,'
        '"content_type":"$content_type",'
        '"host":"$host",'
        '"host_name":"$hostname",'
        '"http_name":"$http_name",'
        '"https":"$https",'
        '"proxy_protocol_addr":"$proxy_protocol_addr",'
        '"proxy_protocol_port":"$proxy_protocol_port",'
        '"query_string":"$query_string",'
        '"client_address":"$remote_addr",'
        '"http_ar_real_proto":"$http_ar_real_proto",'
        '"http_ar_real_ip":"$http_ar_real_ip",'
        '"http_ar_real_country":"$http_ar_real_country",'
        '"http_x_real_ip":"$http_x_real_ip",'
        '"http_x_forwarded_for":"$http_x_forwarded_for",'
        '"http_config":"$http_config",'
        '"client_port":"$remote_port",'
        '"remote_user":"$remote_user",'
        '"request":"$request",'
        '"request_time":$request_time,'
        '"request_id":"$request_id",'
        '"request_length":$request_length,'
        '"request_method":"$request_method",'
        '"request_uri":"$request_uri",'
        '"request_body":"$request_body",'
        '"scheme":"$scheme",'
        '"server_addr":"$server_addr",'
        '"server_name":"$server_name",'
        '"server_port":"$server_port",'
        '"server_protocol":"$server_protocol",'
        '"http_user_agent":"$http_user_agent",'
        '"time_local":"$time_local",'
        '"time_iso":"$time_iso8601",'
        '"attack":"$wallarm_attack_type",'
        '"route_uri":"$upstream_http_route_uri",'
        '"route_cookie":"$upstream_http_route_cookie",'
        '"url":"$scheme://$host$request_uri",'
        '"uri":"$uri"}';

The log format described above is a general-purpose sample that should work on most systems.  However, some customization may be necessary to meet the needs of an organization's particular deployment environment.

One limitation of Nginx is that it does not automatically log request headers, so any headers of interest must be added manually.  To do so, take the following steps:

  1. Identify all request headers that should be included in the log file (e.g. the Cookie header)
  2. Reference each header via the corresponding built-in variable, such as $http_cookie for the Cookie header (Nginx exposes any request header as $http_<name>, lowercased with dashes replaced by underscores)
  3. Add these variables to the JSON log format as key:value pairs (such as '"http_cookie":"$http_cookie",')

At this point, your new log format should be defined.  For more information on configuring logging in Nginx, visit the Nginx documentation.
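With the format defined, it still needs to be attached to an access_log directive so Nginx actually writes it.  A minimal sketch follows; the log file path is an assumption, so adjust it for your environment:

```nginx
# In the http {} or server {} context, after the log_format definition above.
# The path /var/log/nginx/access_json.log is an assumed example.
access_log /var/log/nginx/access_json.log json_log;
```

After reloading Nginx, each request should produce a single-line JSON object in that file, which the following steps can ship to ELK.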

Step 2: Ship Access Logs to Logstash

After configuring the Nginx access log files to be stored in a JSON format, the next step is to set up the export operation.  A couple of different options exist for sending Nginx logs to Elasticsearch.

One option is to use fluentd to perform the transfer.  Because Nginx already writes the access log to a local file, a local fluentd agent can tail that file and forward its contents.  For more information regarding this option, check out the fluentd documentation.
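As a rough sketch of that approach, the fluentd configuration below tails the JSON access log and forwards each parsed record to Elasticsearch.  The file paths, the Elasticsearch host, and the index name are assumptions, and the fluent-plugin-elasticsearch plugin must be installed separately:

```
# fluentd configuration sketch -- paths, host, and index name are assumptions
<source>
  @type tail
  path /var/log/nginx/access_json.log        # assumed log path from Step 1
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  <parse>
    @type json                               # each log line is one JSON object
  </parse>
</source>

<match nginx.access>
  @type elasticsearch                        # requires fluent-plugin-elasticsearch
  host elasticsearch.example.local           # assumed Elasticsearch address
  port 9200
  index_name nginx-access
</match>
```

With this route, fluentd ships directly to Elasticsearch, so the Logstash step below can be skipped entirely.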

Alternatively, it is possible to use Nginx’s built-in syslog client to perform the log file export.  However, this option requires the receiving Logstash instance to be configured to use the syslog plugin to receive input.  If this is the case, use the following configuration to send the access logs via syslog:

    access_log syslog:server=<ADDRESS_OR_DNS_OF_COLLECTOR>:<PORT>,tag=wallarm_access json_log;

Step 3: Set Up Logstash to Receive Logs

After selecting a method to send the Nginx access log data to Logstash, the next step is configuring Logstash to receive them.  Information on configuring Logstash to receive syslog data is available in the Elasticsearch documentation.
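A minimal Logstash pipeline for the syslog option might look like the sketch below.  The port, Elasticsearch address, and index name are assumptions; the syslog input port must match the port used in the Nginx access_log directive above:

```
# logstash.conf sketch -- port, host, and index name are assumptions
input {
  syslog {
    port => 5140              # must match the port in the Nginx access_log directive
  }
}

filter {
  json {
    source => "message"       # parse the JSON access-log payload out of the syslog message
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch.example.local:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}
```

The json filter promotes each field of the access-log object (request_uri, attack, and so on) to a top-level event field, which makes them individually searchable in the next step.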

Step 4: Search Stored Access Logs Using Kibana

Once the log files have been successfully sent to Logstash and indexed into Elasticsearch, it is possible to search through them using Kibana.  Creating an index pattern for the new index makes the access-log fields available for full-text search and visualization; the Kibana documentation covers this process in detail.
