In the beginning there was HTTP; web pages were static and did little beyond displaying text and images. Life has changed since then…

Web applications discovered that bi-directional communication between the browser and the web server is essential. The HTTP protocol, with its short-lived, client-initiated sessions, was not a good fit for this requirement. Before WebSockets, a typical solution was to simulate server push with long polling: the client made a request that remained open until the server was finally ready to push a message.
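The long-polling pattern can be sketched as a loop that re-issues a request as soon as the previous one completes. Here a mock in-process "server" stands in for a real HTTP endpoint; all names are illustrative, not from the article:

```javascript
// Minimal long-polling sketch. A mock in-process "server" stands in for a
// real HTTP endpoint.
function mockServer() {
  const queue = ["update 1", "update 2"];
  return {
    // Each "request" resolves only when the server has a message to push
    // (a real server would hold the HTTP connection open until then).
    waitForMessage() {
      return Promise.resolve(queue.shift() ?? null);
    },
  };
}

// The client immediately re-issues the request after each response —
// this loop is what simulates a server-push channel over plain HTTP.
async function longPoll(server) {
  const received = [];
  for (;;) {
    const msg = await server.waitForMessage();
    if (msg === null) break; // server has nothing more to push
    received.push(msg);
  }
  return received;
}

longPoll(mockServer()).then((msgs) => console.log(msgs));
// logs: [ 'update 1', 'update 2' ]
```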

Once WebSockets were standardized as part of HTML5, developers finally had a way to establish a bi-directional, full-duplex, persistent, real-time connection between the client and the web server. From Uber-like applications sending real-time collated state updates to individual clients, to enterprise management applications that need individual agents to adjust their level of logging, WebSockets is what lets the server push data to clients whenever data changes on the server, without the client having to request it. The persistence and responsiveness of the protocol made it the standard for applications with lots of concurrent, quickly changing content, such as mobile apps or multiplayer online games.

Alongside the WebSocket API and native WebSocket support in browsers such as Google Chrome, Firefox and Opera, there are now WebSocket library implementations in Objective-C, .NET, Ruby, Java, node.js, ActionScript and many other languages.

Although WebSocket, like HTTP, is an application-level (L7) protocol, it is quite different from HTTP.
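One concrete difference: instead of text headers on every message, WebSocket exchanges lightweight binary frames, and RFC 6455 requires every client-to-server payload to be masked with a per-frame 4-byte XOR key. A minimal sketch of that masking step (the key here is fixed for illustration; real clients pick a random one per frame):

```javascript
// RFC 6455 client-side payload masking: every payload byte is XORed with
// a rotating 4-byte masking key. Because XOR is its own inverse, the same
// function also unmasks.
function maskPayload(payload, key) {
  const out = Buffer.alloc(payload.length);
  for (let i = 0; i < payload.length; i++) {
    out[i] = payload[i] ^ key[i % 4];
  }
  return out;
}

const key = Buffer.from([0x12, 0x34, 0x56, 0x78]); // normally random per frame
const masked = maskPayload(Buffer.from("Hello world"), key);
const unmasked = maskPayload(masked, key);
console.log(unmasked.toString()); // "Hello world"
```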

Some of the challenges stem from the fact that WebSocket is a hop-by-hop protocol, which complicates configuration with proxies. NGINX Plus is an excellent example of a proxy that has been designed to work natively with WebSockets, as described in this blog. Other challenges have to do with the need to maintain a heartbeat, reconnect dropped connections and support HTTP fallback.
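Reconnecting dropped connections is commonly done with an exponentially growing, capped retry delay, so a flapping socket is retried quickly at first and then less aggressively. A hypothetical sketch of such a delay schedule (the base and cap values are illustrative):

```javascript
// Hypothetical reconnect delay schedule: exponential backoff with a cap.
// attempt 0 retries after baseMs, each further attempt doubles the wait,
// never exceeding maxMs.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// attempts 0..6 → 500, 1000, 2000, 4000, 8000, 16000, 30000 (capped)
const schedule = [0, 1, 2, 3, 4, 5, 6].map((n) => backoffDelay(n));
console.log(schedule);
```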

To help developers deal with the technicalities of the protocols, a multitude of frameworks exist that implement most design patterns, including Publish/Subscribe, Data Sync, RPC/RMI and others. These frameworks encode application data inside WebSockets in different ways: some use JSON, some BSON or third-party formats. This variety requires a security tool that can decode all of these data formats after decoding the WebSocket / framework protocol. And because the protocol is real-time, any tool that inspects its content for security must itself perform in real time to avoid blocking the socket.
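To make the two decoding layers concrete, here is a toy inspection step over an assumed JSON envelope of the form `{"event": ..., "data": ...}` (this envelope and the regex check are illustrative only; real engines apply far more sophisticated analysis):

```javascript
// Assumed framework envelope: {"event":"...","data":"..."} as JSON text.
// A security filter must decode this layer before it can inspect the
// application data for attack payloads.
function inspectMessage(rawText) {
  const msg = JSON.parse(rawText); // framework-level decoding
  const payload = String(msg.data);
  // Trivial, illustrative signature check — not a real detection engine.
  const suspicious = /<script\b/i.test(payload);
  return { event: msg.event, payload, suspicious };
}

console.log(inspectMessage('{"event":"chat message","data":"Hello world"}'));
// → { event: 'chat message', payload: 'Hello world', suspicious: false }
console.log(
  inspectMessage('{"event":"chat message","data":"<script>alert(1)</script>"}')
);
// → suspicious: true
```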

Even for transparent proxying, WebSocket performance is a concern. The NGINX load balancer, for example, has an excellent performance track record with WebSockets. Many Web Application Firewalls follow a similar approach, implementing transparent WebSocket proxies without data analysis. That is a capability, not security.

By contrast, Wallarm analyses the protocol to detect WebSocket attacks and block them. Unlike blocking an HTTP request, blocking a WebSocket “request” means the server must drop the socket altogether.

Wallarm Node has supported WebSockets since version 2.0, as described in an earlier blog entry:

Wallarm WebSocket Example

Let’s look at an example of a chat application implemented on WebSockets that is being attacked by a hacker.

Input fields (chat message text “Hello world”) -> JavaScript framework (ratchet, SignalR, etc.) -> WebSocket -> encoded data (client side)

WebSocket (NGINX) -> framework (ratchet, SignalR, etc.) -> decoded data (server side): “Hello world”
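If the framework in this pipeline is Socket.IO (a guess, based on the node.js chat app used later; the diagram does not name it), the encoded data on the wire for a chat message looks like `42["chat message","Hello world"]` — `4` is an engine.io message packet, `2` a Socket.IO EVENT packet, followed by a JSON array of the event name and arguments. A sketch of the server-side decode step:

```javascript
// Socket.IO-style event packet (assumed framework): '4' = engine.io
// message, '2' = Socket.IO EVENT, then a JSON array [eventName, ...args].
function decodeEventPacket(packet) {
  if (!packet.startsWith("42")) return null; // not an EVENT packet
  const [event, ...args] = JSON.parse(packet.slice(2));
  return { event, args };
}

console.log(decodeEventPacket('42["chat message","Hello world"]'));
// → { event: 'chat message', args: [ 'Hello world' ] }
```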

To configure WebSocket protection, first enable WebSocket proxying in the NGINX configuration file.

Configure upstreams to the WebSocket servers:

upstream socket_nodes {
    server 192.168.1.1:3000 weight=5;    # placeholder address — use your own
    # server 192.168.1.2:3000 weight=10;
}

Then configure the location to proxy WebSockets:

location / {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://socket_nodes;
    # error_log /var/log/wallarm/ debug;
    wallarm_parse_websocket on;
}

All of these settings are typical; only wallarm_parse_websocket on is new and specific to the Wallarm module for NGINX.

Let’s test it on the chat application from the examples to see WebSocket protection in action. The code of the app is available here:

Run it on port 3000 and set only one upstream server in the NGINX config:

$ cd /usr/share/nginx-wallarm/html/
$ git clone
$ cd
$ cp -R public/ /usr/share/nginx-wallarm/html/
$ nodejs index.js &

Server listening at port 3000

Then open the web development console in your browser and visit the page to see the example in action:

Web development console

Look at the first line in the Network tab of the web development console. Note that the protocol is websocket, not http/1.1 as it would be with polling. Let’s see what happens when one of the participants sends a malicious payload into the chat. We have two users in the chat room: the first is Wallarm, and the second is an attacker who wants to send an XSS payload into the chat.
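For illustration, a typical chat XSS payload and why it is dangerous: if the client inserts the message text into the DOM as raw HTML, the script runs in every participant’s browser. The payload below and the minimal escaping function are illustrative only (server-side filtering, as Wallarm does here, protects clients that forget this step):

```javascript
// A classic chat XSS payload: executes in every viewer's browser if the
// message is inserted into the DOM as unescaped HTML.
const payload = '<script>alert(document.cookie)</script>';

// Minimal client-side mitigation: escape HTML metacharacters before
// rendering user-supplied text.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  })[c]);
}

console.log(escapeHtml(payload));
// → &lt;script&gt;alert(document.cookie)&lt;/script&gt;
```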

Let’s look at the attacker’s chat first:

Attacker chat

The attacker can see his payload in the chat room because it was added in his browser by JavaScript. But the request was dropped by Wallarm, and the other users do not receive the payload in the chat room. Let’s look at the Wallarm user’s chat:

Wallarm user chat

We can see that the attacker was dropped from the chat because Wallarm closed the socket between his browser and the node.js chat server.

Now look at the Wallarm account to see the payloads.

Wallarm account

The request looks like HTTP only because of normalization; the actual protocol is mentioned in the first line:

Wallarm account 2

Socket dropped — application protected. Happy coding!