Visualising Filters and Feature Maps for Deep Learning

George Seif · May 14

Deep Neural Networks are one of the most powerful classes of machine learning models.
With enough data, their accuracy in tasks such as Computer Vision and Natural Language Processing (NLP) is unmatched.
The one drawback that many scientists comment on is that these networks are complete black boxes.
We still have very little knowledge as to how deep networks learn their target patterns so well, especially how all the neurons work together to achieve the final result.
Having some kind of inside knowledge of how an ML model works presents a few key advantages:

- They're easier to tune, since when we see an error made by the network we can point out the cause directly
- The functionality and expected behaviour of the network can be explained, especially to non-technical stakeholders
- We can further extend and improve the overall design of our models, since we'd have knowledge of the current design's strengths and weaknesses

The great thing is that researchers have begun to explore techniques for understanding what goes on inside deep networks.
In particular, we now have the ability to visualise the filters that Convolutional Neural Networks (CNNs) learn from their training for Computer Vision tasks, and the feature maps that the CNN produces when applied to an image.
This tutorial will teach you how to do just that: visualise the filters and feature maps of a CNN trained for image classification.
We'll even do it in the super easy to use Keras!

Picking Our Model

First we'll need to pick a model to visualise.
The easiest way to do this in Keras is to select a pre-trained image classification model.
Keras hosts many of these top-performing models on its Applications page.
All of them were trained on the ImageNet dataset with over a million images.
For this tutorial, I’m going to pick out the VGG16 Model.
It’s simple enough for us to go through our tutorial and still holds its own as a fairly accurate model.
The structure is shown below.
The process of loading up the model in Keras is shown below.
When you import the model from keras.applications, Keras will automatically download the weights for you to the appropriate directory.
We then load the model and print out a summary of its structure.
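The loading step can be sketched as below. This is a minimal sketch assuming the TensorFlow-bundled Keras (`tensorflow.keras`); if you use standalone Keras, drop the `tensorflow.` prefix from the imports.

```python
# Load the pre-trained VGG16 model from keras.applications.
# Keras downloads the ImageNet weights automatically on first use.
from tensorflow.keras.applications.vgg16 import VGG16

model = VGG16(weights="imagenet", include_top=True)

# Print a layer-by-layer summary of the model's structure
model.summary()
```

The summary lists each layer by name (e.g. block5_conv1), which is how we'll refer to layers in the visualisation code.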
Visualising Filters

The first visualisation we'll create is that of the CNN filters.
When Deep Learning folks talk about “filters” what they’re referring to is the learned weights of the convolutions.
For example, a single 3×3 convolution applied to a one-channel input is called a “filter”, and that filter has a total of 10 weights (9 kernel weights + 1 bias).
By visualising the learned weights we can get some idea as to how well our network has learned.
For example, if we see a lot of zeros then we’ll know we have many dead filters that aren’t doing much for our network — a great opportunity to do some pruning for model compression.
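As a quick sanity check on the weight count described above, a minimal sketch (using `tensorflow.keras`; the layer sizes here are arbitrary):

```python
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

# One 3x3 filter over a single-channel input:
# 9 kernel weights + 1 bias = 10 parameters
inp = Input(shape=(28, 28, 1))
out = Conv2D(filters=1, kernel_size=3)(inp)
model = Model(inp, out)
print(model.count_params())  # 10
```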
Take a look at the code below for visualising the filters and then scroll down for an explanation of how it works.
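A minimal sketch of the filter-visualisation code (assuming `tensorflow.keras` and matplotlib; the number of filters displayed is an arbitrary choice):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.applications.vgg16 import VGG16

model = VGG16(weights="imagenet", include_top=False)

# Build a dictionary mapping layer names to layers so we can
# access any layer's weights by name
layer_dict = {layer.name: layer for layer in model.layers}

# Pick the convolution layer whose filters we want to visualise
layer_name = "block5_conv1"

# get_weights() returns [kernel, bias]; the kernel has shape
# (kernel_h, kernel_w, in_channels, out_channels)
filters, biases = layer_dict[layer_name].get_weights()

# Normalise the filter values into the 0-1 range so they
# display cleanly as colours
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)

# Display the first 8 filters (first input channel of each)
n_filters = 8
fig, axes = plt.subplots(1, n_filters, figsize=(16, 2))
for i in range(n_filters):
    axes[i].imshow(filters[:, :, 0, i], cmap="viridis")
    axes[i].axis("off")
plt.show()
```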
After importing all of our necessary libraries and loading up our model, we build a dictionary of the layers in our model. This will allow us to access the weights at each layer by the layer name. You can print out the keys in the dictionary if you want to check out all the names. Next we set the name of the layer whose filters we want to visualise. I picked “block5_conv1”, but you can select any of the convolution blocks you like. We then call the get_weights() function to grab the filter and bias weights from our selected layer, and normalise the filter values such that they’re between 0 and 1. This will help create a clear visualisation when we show the weights as colours on the screen. Finally, a loop displays some of the filters.
For the “viridis” colour map we selected, yellow represents a value of 1 and dark blue a value of 0.
Check out the output below for VGG16’s block5_conv1 layer!

Visualising Feature Maps

The feature maps of a CNN capture the result of applying the filters to an input image. That is, at each layer, the feature map is the output of that layer.
The reason for visualising a feature map for a specific input image is to try to gain some understanding of what features our CNN detects.
Perhaps it detects some parts of our desired object and not others or the activations die out at a certain layer.
It is also interesting to directly observe a common idea in Deep Learning, that the early layers of the network detect low-level features (colours, edges, etc) and the later layers of the network detect high-level features (shapes and objects).
Take a look at the code below for visualising the feature maps and then scroll down for an explanation of how it works.
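A minimal sketch of the feature-map code (assuming `tensorflow.keras` and matplotlib; “tiger.jpg” is a placeholder path — substitute any image you like):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

# Load VGG16 without the fully connected layers and cut it off
# at the layer whose feature map we want to see
base = VGG16(weights="imagenet", include_top=False)
layer_name = "block1_conv1"
model = Model(inputs=base.input,
              outputs=base.get_layer(layer_name).output)

# Load and prepare the input image; VGG16 expects 224x224 inputs,
# so the image is re-sized on load, then pre-processed
img = image.load_img("tiger.jpg", target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

# Apply the model; the output is the feature map at our chosen layer
feature_map = model.predict(x)

# Visualise the first 8 channels of the feature map
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i in range(8):
    axes[i].imshow(feature_map[0, :, :, i], cmap="viridis")
    axes[i].axis("off")
plt.show()
```

Changing `layer_name` to a deeper layer (say, block5_conv1) lets you watch the features become more abstract as you move through the network.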
The first couple of steps remain the same: load up the model and select the name of the layer you want to visualise.
The one minor change is that we modify our model such that the final output is the output of VGG16’s last feature map, since we don’t really need the final couple of fully connected layers.
Next we’ll load up and prepare our image. You can choose any image you like; I chose an image of a tiger because tigers are awesome. That code will read in the image and apply the pre-processing that the VGG16 model requires. Don’t worry if your image is huge, like 4K resolution: VGG16 always expects an input of size 224×224, so the image will always be re-sized before processing. We then apply our VGG16 model to the input image; the output is our desired feature map.
We’ll loop through a few of the channels of the map and visualise them in a matplotlib figure.
Check out the resulting figure below.
Notice how there are many activations on the edges and textures within the image, especially the outline of our tiger!

Like to learn?

Follow me on twitter where I post all about the latest and greatest AI, Technology, and Science! Connect with me on LinkedIn too!

Recommended Reading

Want to learn more about Deep Learning? The Deep Learning with Python book will teach you how to do real Deep Learning with the easiest Python library ever: Keras!

And just a heads up, I support this blog with Amazon affiliate links to great books, because sharing great books helps everyone! As an Amazon Associate I earn from qualifying purchases.