# Image Classification using SSIM

*Simple Image Classifier with OpenCV*

Iftekher Mamun, Jan 16

*[Image: “Find the Differences”]*

As humans, we are generally very good at spotting the differences between two pictures.

For example, let’s look at the pictures above and see how they differ.

For one, the fruits, ice-creams and drinks have obviously changed.

That was pretty easy, right? For computers, however, this is not such an easy task.

Computers can only learn from the data we train their models on.

There are great frameworks, like Google’s TensorFlow and Keras, for building models that classify batches of images really well.

Thanks to image classifier libraries like these, computer vision has jumped dramatically.

Now we can create complicated models with datasets such as Kaggle’s Animals-10, which contains thousands of pictures of ten different types of animals (as well as non-animals), already cleaned and ready for training.

All you have to do is create a model and see how well it can predict each type of animal.

For tutorials on image classification models, check out Prabhu or Amitabha.

However, I wanted to create an image classifier that can tell how similar two images are.

For that, there is no need for complicated libraries like TensorFlow or image classification models like those linked above.

There are two ways to find if an image is similar to another image.

The first is Mean Squared Error (MSE) and the second is the Structural Similarity Index (SSIM).

They both look pretty scary but no need to worry.

MSE can be calculated fairly easily thanks to NumPy, and SSIM is a built-in method in scikit-image, so we can just import it.

Before we do all that, let’s define what each is calculating.

MSE calculates the mean squared error between corresponding pixels of the two images we are comparing.

SSIM does the opposite and looks for similarities between pixels, i.e., whether the pixels in the two images line up and/or have similar intensity values.

The only issue is that MSE tends to produce arbitrarily high numbers, so it is harder to standardize.

Generally, the higher the MSE, the less similar two images are, but if the MSE values vary wildly from one picture pair to the next, it is hard for us to conclude anything.

SSIM, on the other hand, puts everything on a scale of -1 to 1 (though I was not able to produce a score below 0).

A score of 1 means the images are very similar and a score of -1 means they are very different.

In my opinion this is a better metric of measurement.

Now let’s focus on the code and start by importing all the necessary libraries. I am using cv2 (part of OpenCV) to read and edit my images.

My friend Yish Lim did it in her blog for Pokemon match similarity and it is pretty awesome.

Now for the scary part: writing up our MSE formula. It turns out not to be so tough. Since SSIM is already imported through skimage, there is no need to code it manually.
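The MSE formula can be sketched in a few lines of NumPy; `mse` is just an illustrative name for the helper:

```python
import numpy as np

def mse(image_a, image_b):
    # Sum the squared per-pixel differences, then average over the image
    # area. The cast to float avoids uint8 overflow during subtraction.
    err = np.sum((image_a.astype("float") - image_b.astype("float")) ** 2)
    err /= float(image_a.shape[0] * image_a.shape[1])
    return err
```

Identical images give an MSE of 0; larger values mean bigger pixel-level differences.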

Now we create a function that takes in two images, calculates their MSE and SSIM, and shows us both values at once.
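A sketch of that comparison function, assuming the images are same-sized grayscale NumPy arrays (the function name is my own, and the MSE math is inlined so the snippet is self-contained):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare_images(image_a, image_b, title):
    # Mean squared error between corresponding pixels.
    diff = image_a.astype("float") - image_b.astype("float")
    m = float(np.mean(diff ** 2))
    # Structural similarity from scikit-image (1.0 means identical).
    s = ssim(image_a, image_b)
    print(f"{title} -- MSE: {m:.2f}, SSIM: {s:.2f}")
    return m, s
```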

The next three steps could be done all at once with a for-loop, but let’s break them down below so they are easier to follow. (I purposely chose a large picture size for better SSIM scores, but beware: it takes some time to calculate.) First, we load the images saved in our directory.

Second, we have to make sure they are all the same size, since otherwise we will get a dimension error.

The catch is that resizing can distort the images, so play around until you find dimensions that work.

Next, we write one more function so we can see what our pictures look like.
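One way to write that display helper, using matplotlib (my choice here for the sketch; the function name and layout are illustrative):

```python
import matplotlib.pyplot as plt

def show_pair(image_a, image_b, title=""):
    # Draw the two grayscale images side by side; figsize controls how
    # large they render on screen.
    fig, axes = plt.subplots(1, 2, figsize=(10, 5))
    for ax, img in zip(axes, (image_a, image_b)):
        ax.imshow(img, cmap="gray")
        ax.axis("off")
    fig.suptitle(title)
    plt.show()
```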

(The display looks very small, and I have not figured out how to increase the size.) Now let’s test whether our MSE and SSIM are working by comparing one image to itself.

If it works, then we should get MSE of 0 and SSIM of 1.

Yes, it works! Now that we know our computer can tell when the pictures being compared are identical, how about comparing different pictures?

For simplicity, I will compare three dog pictures to one another and three cat pictures to one another.
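Those pairwise comparisons can be driven by one small loop. Here is a sketch assuming the pictures are already loaded into a dict of same-sized grayscale arrays (the labels and function name are illustrative):

```python
from itertools import combinations

import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare_all(images):
    # `images` maps a label such as "dog1" to a grayscale array.
    # Returns {(label_a, label_b): (mse, ssim)} for every unordered pair.
    results = {}
    for (name_a, a), (name_b, b) in combinations(images.items(), 2):
        diff = a.astype("float") - b.astype("float")
        results[(name_a, name_b)] = (float(np.mean(diff ** 2)), ssim(a, b))
    return results
```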

Let’s look at what our simple algorithm just compared.

As you can see, the MSE varies widely so it is hard to tell what is what.

But looking at SSIM, we can see that dog 2 and dog 3 are most similar relative to the other dogs.

Visually, I agree, especially with how the ears look.

But I would have thought that dog 1 and dog 3 would have had a higher SSIM because of their pose.

In reality, before the pictures were grayscaled, dog 2 and dog 3 had similar white fur around the nose area, while dog 1 did not.

That is most likely the reason dog 2 and dog 3 have a higher SSIM value compared to dog 1.

For the cats, it is a little more difficult.

Cat 1 and cat 2 had similar shape and the picture was taken from similar distance but cat 2 and cat 3 had similar color fur.

That is probably why cat 1 and 3 are rated similarly as well as cat 2 and 3 but not cat 1 and cat 2.

There were only two more tests I wanted to conduct.

One was for how a dog may appear to a cat and second was how each animal compares to the gate picture that came with the original source code.

Dog vs. Gate vs. Cat: as expected, dogs and cats are similar to each other, especially compared to an inanimate object such as the Jurassic Park 1 entrance gate (side note: such a great movie!).

The only reason the dogs and cats score a relatively high SSIM against the gate picture at all is probably the shared size and the grayscale filter every image went through.

OpenCV is not the best when it comes to resizing and reconfiguring images.

For that, Google’s TensorFlow is the best.

However, the issue I had with TensorFlow was that I could not load single images into its modules.

TensorFlow works best with batch pictures.

For the future, I would like to use the Kaggle Animals-10 dataset I mentioned with TensorFlow to run a full image classification lab.