This is a small Friday project meant to bring some fun to those who work with neural networks and have grown a bit tired of them. The idea is to emulate the impact of alcohol and other substances on a neural network. The outcomes can be quite entertaining, and the tool can also help you measure the stability of your networks, which is genuinely useful.

We build a lot of networks, and we often need to understand how much redundant information is stored inside them. How many neurons contribute almost nothing to the output layer? How oversized is our topology for a particular problem? We created NeuralDrugs to answer these questions.

The code is available on GitHub and is compatible with TensorFlow models. Enjoy!

# git clone git@github.com:wallarm/neuraldrugs.git

During the first run, you need to provide the path to the model’s meta-file as well as the operation mode. The model file will then be modified: part of the weights will change their values in a rather bizarre fashion, which leads to unusual results during computation.
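Under the hood, this kind of in-place damage amounts to loading a checkpoint, modifying its variables, and saving it back. Here is a rough sketch of that flow for a TensorFlow 1.x checkpoint; it is not the project’s actual code, and the checkpoint path is simply the one from the examples below.

import tensorflow as tf

ckpt_path = "./model/train/model.ckpt-1446068"
saver = tf.train.import_meta_graph(ckpt_path + ".meta")

with tf.Session() as sess:
    saver.restore(sess, ckpt_path)
    for var in tf.trainable_variables():
        values = sess.run(var)
        # ... damage a fraction of `values` here (see the next sketch) ...
        var.load(values, session=sess)
    saver.save(sess, ckpt_path)  # overwrite the checkpoint on disk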

The “Alcohol” mode assumes that blood clots cut off the oxygen supply to a neuron and it dies off (https://en.wikipedia.org/wiki/Short-term_effects_of_alcohol_consumption). This is emulated by assigning the minimum possible weight to the neuron. The “DMA” mode instead assigns an arbitrary weight to the neuron. Unlike “Alcohol”, such an impact can cause the network to “hallucinate”, i.e. produce unpredictable results.
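To illustrate the two modes, here is a minimal NumPy sketch, not the actual neuraldrugs implementation, that damages a chosen fraction of a weight matrix; the function and variable names are made up for the example (the real tool’s --dosage is given as a percentage, as described below).

import numpy as np

def apply_dose(weights, fraction, mode="alcohol", rng=None):
    """Damage `fraction` (0..1) of the weights in place.

    mode="alcohol": set the chosen weights to the minimum value in the
                    matrix, emulating neurons that die off.
    mode="dma":     set the chosen weights to arbitrary random values,
                    which can make the network "hallucinate".
    """
    rng = rng or np.random.default_rng()
    flat = weights.reshape(-1)          # view into the original array
    n_hit = max(1, int(flat.size * fraction))
    idx = rng.choice(flat.size, size=n_hit, replace=False)
    if mode == "alcohol":
        flat[idx] = flat.min()
    else:
        flat[idx] = rng.uniform(flat.min(), flat.max(), size=n_hit)
    return weights

w = np.random.randn(256, 512).astype(np.float32)
apply_dose(w, fraction=0.0005, mode="dma")  # damage 0.05% of the weights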

Let’s check the model’s behavior using im2txt, Google’s image captioning network (https://github.com/tensorflow/models/tree/master/research/im2txt). In our case, it had been trained for around 2 million iterations.

Here is the caption output of the clean network for our test image:


Captions for image wallarm.jpg:

  • a couple of men standing next to each other . (p=0.001899)
  • a man and a woman standing next to each other . (p=0.000383)
  • a couple of men standing next to each other in front of a sign . (p=0.000084)

Now let’s run our Friday project and see what happens. The first parameter is the path to the network’s model; the second is the percentage of the network’s neurons whose weights will be assigned random values.

# ./neuraldrugs.py ./model/train/model.ckpt-1446068 --set_weights_random --dosage 0.05

The results are quite amusing:

Captions for image wallarm.jpg:

  • a man and woman standing next to angle ornaments . (p=0.001681)
  • a man and woman standing next to each other . (p=0.000604)
  • a man and woman standing next to angle ornaments (p=0.000185)

The network’s vision has become diluted: now it sees ornaments where previously it could see only the people and the road sign.

Now let’s increase the dose and check out the result once more:

# ./neuraldrugs.py ./model/train/model.ckpt-1446068 --set_weights_random --dosage 0.07

Captions for image wallarm.jpg:

  • a a a a man warms a a a a angle dumping warms medal medal . (p=0.001371)
  • a a a a a man and a a fried a a fried dumping (p=0.000317)
  • a a a a man warms a a a a angle dumping warms medal (p=0.000254)

Now the network is talking about some “dumping”, which is quite amusing. It also stutters on the article “a”, which makes sense for a recurrent neural network.

Now let’s take WaveNet for our next example: a generative neural network architecture adapted here for image generation (https://github.com/Zeta36/tensorflow-image-wavenet). As a training set, let’s feed the network these photos of an object from different angles:


The output of the clean network looks like an averaged version of this object, easily recognizable by its outline. It is black and white because the image has been reconstructed from the network through the weights of the contrast neurons, and the resolution is only 64×64 because of the convolutions, but the object is still easy to make out:


This network consists of 16384 neurons (18 layers with varying numbers of neurons). A dosage parameter of 0.01 means that 0.01% of the total neuron count will be damaged, i.e. only one neuron after rounding.
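A quick sanity check of that arithmetic (assuming the dosage is interpreted as a percentage of the total neuron count, as described above):

total_neurons = 16384
dosage = 0.01                           # percent
damaged = int(total_neurons * dosage / 100)
print(damaged)                          # -> 1 neuron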

Now let’s run our program and see the result:

# ./neuraldrugs.py ./logdir/train/2017-01-24T06-34-00/model.ckpt-58352 --set_weights_random --dosage 0.01

Take a look at the result of a random weight change on a single neuron out of the 16384 in the network:


Despite the simplicity of the project, it is genuinely helpful for testing finished networks. As a criterion, you can use the number of weights that can be damaged without the network losing accuracy.
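Such a stability check could look roughly like the sketch below: keep increasing the dose and find the largest damage the model tolerates before its accuracy drops. Here damage_checkpoint and evaluate are hypothetical helpers, not part of neuraldrugs itself.

def stability_threshold(ckpt_path, damage_checkpoint, evaluate,
                        baseline_acc, tolerance=0.01,
                        step=0.01, max_dose=1.0):
    dose = 0.0
    while dose + step <= max_dose:
        damaged_ckpt = damage_checkpoint(ckpt_path, dosage=dose + step)
        if evaluate(damaged_ckpt) < baseline_acc - tolerance:
            break                   # accuracy loss exceeded the tolerance
        dose += step
    return dose                     # the largest dose that was still acceptable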

We invite everyone to test and work on our project! Enjoy your Friday!
