Now, open a new terminal window and type the following commands:

```shell
cd CSRNet-pytorch
python train.py part_A_train.json part_A_val.json 0 0
```

Again, sit back because this will take some time.
You can reduce the number of epochs in the train.py file to accelerate the process.
A cool alternate option is to download the pre-trained weights from here if you don’t feel like waiting.
Finally, let’s check our model’s performance on unseen data.
We will use the val.ipynb file to validate the results.
Remember to change the path to the pretrained weights and images.
```python
# importing libraries
import h5py
import scipy.io as io
import PIL.Image as Image
import numpy as np
import os
import glob
from matplotlib import pyplot as plt
from scipy.ndimage.filters import gaussian_filter
import scipy
import json
import torchvision.transforms.functional as F
from matplotlib import cm as CM
from tqdm import tqdm
from image import *
from model import CSRNet
import torch
%matplotlib inline

from torchvision import datasets, transforms
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# defining the location of the dataset
root = '/home/pulkit/CSRNet/ShanghaiTech/CSRNet-pytorch/'
part_A_train = os.path.join(root, 'part_A/train_data', 'images')
part_A_test = os.path.join(root, 'part_A/test_data', 'images')
part_B_train = os.path.join(root, 'part_B/train_data', 'images')
part_B_test = os.path.join(root, 'part_B/test_data', 'images')
path_sets = [part_A_test]

# collecting the image paths
img_paths = []
for path in path_sets:
    for img_path in glob.glob(os.path.join(path, '*.jpg')):
        img_paths.append(img_path)

# defining the model
model = CSRNet()
model = model.cuda()

# loading the trained weights (change this to the path of your checkpoint file)
checkpoint = torch.load('0model_best.pth.tar')
model.load_state_dict(checkpoint['state_dict'])
```

Check the MAE (Mean Absolute Error) on the test images to evaluate our model:

```python
mae = 0
for i in tqdm(range(len(img_paths))):
    img = transform(Image.open(img_paths[i]).convert('RGB')).cuda()
    gt_file = h5py.File(img_paths[i].replace('.jpg', '.h5').replace('images', 'ground-truth'), 'r')
    groundtruth = np.asarray(gt_file['density'])
    output = model(img.unsqueeze(0))
    mae += abs(output.detach().cpu().sum().numpy() - np.sum(groundtruth))
print(mae / len(img_paths))
```

We got an MAE value of 75.69, which is pretty good.
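To make the metric concrete, here is a minimal, self-contained sketch of how MAE works over per-image counts. The numbers below are made up purely for illustration; they are not taken from the actual test run:

```python
import numpy as np

# hypothetical per-image crowd counts, purely for illustration
predicted_counts = np.array([102.4, 310.7, 58.1, 771.9])
true_counts = np.array([100.0, 305.0, 60.0, 780.0])

# MAE: average absolute difference between predicted and true counts
mae = np.mean(np.abs(predicted_counts - true_counts))
print(mae)  # ~4.5: on average the model is off by about 4.5 people per image
```

So the 75.69 we obtained means the model's count is off by roughly 76 people per image on average, which is reasonable for scenes that often contain several hundred people.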
Now let's check the predictions on a single image:

```python
from matplotlib import cm as c

# pick any image from the test set (illustrative path — change it to your own)
img_path = 'part_A/test_data/images/IMG_100.jpg'

img = transform(Image.open(img_path).convert('RGB')).cuda()
output = model(img.unsqueeze(0))
print("Predicted Count : ", int(output.detach().cpu().sum().numpy()))

# plotting the predicted density map
temp = np.asarray(output.detach().cpu().reshape(output.detach().cpu().shape[2], output.detach().cpu().shape[3]))
plt.imshow(temp, cmap=c.jet)
plt.show()

# plotting the ground-truth density map (the matching .h5 file — change the path accordingly)
temp = h5py.File('part_A/test_data/ground-truth/IMG_100.h5', 'r')
temp_1 = np.asarray(temp['density'])
plt.imshow(temp_1, cmap=c.jet)
print("Original Count : ", int(np.sum(temp_1)) + 1)
plt.show()

print("Original Image")
plt.imshow(plt.imread(img_path))
plt.show()
```

Wow, the original count was 382 and our model estimated there were 384 people in the image.
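As a side note on why summing the density map yields the count: each annotated head is blurred with a Gaussian kernel that integrates to 1, so the total mass of the map equals the number of people. A minimal synthetic sketch (toy points, not dataset code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic "head annotation" map: 5 people marked as single pixels
annotation = np.zeros((100, 100))
for y, x in [(10, 10), (30, 70), (50, 50), (80, 20), (90, 90)]:
    annotation[y, x] = 1.0

# Gaussian blurring (with reflective boundaries, scipy's default)
# preserves total mass, so the density map still sums to the
# number of annotated heads
density = gaussian_filter(annotation, sigma=3)
print(round(density.sum()))  # 5
```

This is why integrating the model's predicted density map over the whole image gives the estimated crowd count.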
That is a very impressive performance!

Congratulations on building your own crowd counting model!

End Notes

I encourage you to try out this approach on different images and share your results in the comments section below.
Crowd counting has so many diverse applications and is already seeing adoption by organizations and government bodies.
It is a useful skill to add to your portfolio.
Quite a number of industries will be looking for data scientists who can work with crowd counting algorithms.
Learn it, experiment with it, and give yourself the gift of deep learning!

Did you find this article useful? Feel free to leave your suggestions and feedback for me below, and I'll be happy to connect with you.
You should also check out the below resources to learn and explore the wonderful world of computer vision:

- Certified Course: Computer Vision using Deep Learning
- A Step-by-Step Introduction to the Basic Computer Vision Algorithms
- Understanding and Building your First Object Detection Model from Scratch
- Learn Object Detection using the Popular YOLO Framework

Originally published at https://www.com on February 18, 2019.