The AI Comic: Z.A.I.N – Issue #2: Facial Recognition using Computer Vision

Z.A.I.N’s original model wouldn’t be able to detect that yet.

This is where Z.A.I.N (and we!) will discover the awesome concept of facial recognition.

We will even build a facial recognition model in Python once we see what Z.A.I.N does in this issue! I recommend reading the previous issue of the AI comic first – Issue #1: No Attendance, No Problem.

This will help you understand why we are leaning on the concept of facial recognition in this issue and also build your computer vision foundations.

Getting to Know this AI Comic’s Chief Character Z.A.I.N

Who is Z.A.I.N? And what’s the plot for issue #2 of this AI comic? Here is an illustrated summary of all that you need to know before diving into this week’s issue:

Note: Use the right and left arrow keys to navigate through the below slider.

You can download the full comic here!

Python Code and Explanation Behind Z.A.I.N’s Facial Recognition Model

Enjoyed reading Issue #2? Now let’s see how Z.A.I.N came up with that extraordinary feat! That’s right – we are going to dive deep into the Python code behind Z.A.I.N’s facial recognition model.

First, we need data to train our own model.

This comes with a caveat – we won’t be using a pre-defined dataset.

This model has to be trained on our own customized data.

How else will facial recognition work for our situation? We use the below tools to curate our dataset:

- The brilliant OpenCV library
- HaarCascadeClassifier: haarcascade_frontalface_default (frontal face detector)

We need to first initialize our camera to capture the video. Then, we will use the frontal face classifier to make bounding boxes around the face. Please note that since we have used ‘frontalface_default’ specifically, it will only detect frontal faces. So, you can choose which classifier to use according to your requirements.

After the bounding boxes have been created, we:

1. Detect these faces
2. Convert them into grayscale, and
3. Save the images with a label. In our case, these labels are either 1 or 2.

View the code on Gist.

Prefer learning through video examples? I have created the below video just for you to understand what the above code does: https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2019/06/dataset.mp4

And that’s it – our dataset is ready for action! So what’s next? Well, we will now build and train our model on those images! We’ll use two tools specifically to do that:

- LBPHFaceRecognizer: Please note that the syntax for the recognizer is different in different versions of OpenCV
- Pillow

Here, we will convert the images into NumPy arrays and train the model.

We will then save the trained model as Recogniser.yml:

View the code on Gist.

The below code essentially does three things for us:

1. It starts capturing the video
2. Makes a bounding box around the face in the frame
3. Classifies the faces into one of the two labels we trained on. If it does not match either of them, it shows ‘unknown’.

View the code on Gist.

Here’s another intuitive video to show you what the above code block does: https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2019/06/implementation.mp4

End Notes

This second issue of Analytics Vidhya’s AI comic, Z.A.I.N, covered how computer vision can change our day-to-day lives for the better.

And guess what?
