# Corners in Images and Angular Representation of Their Relationships

We will enlarge the patterns.

With the filtering patterns given below, this problem is avoided.

```
1 1 0 0 / 0 0 1 1
1 1 0 0 / 0 0 1 1
0 0 0 0 / 0 0 0 0
0 0 0 0 / 0 0 0 0
```

We have decided the size and type of the patterns.

Now it is time to merge them into a general filtering pattern that contains all types of corners we would like to detect.

General pattern:

```
0 0 0 0 0 0
0 0 0 0 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
```

Let's see what kind of 4×4 filtering patterns we can get from the general filtering pattern above.

```
0 0 0 0   0 0 0 0   1 1 0 0
0 0 1 1   0 0 0 0   1 1 0 0
0 0 1 1   0 0 1 1   0 0 0 0
0 0 0 0   0 0 1 1   0 0 0 0
```

and so on…

So, if we cut a template from our input image and we search for and find it in the general filtering pattern, then we can detect the corners in the input image.
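The search step can be sketched in pure NumPy (a minimal illustration; the full listing at the end of the article uses `skimage`'s `match_template` instead, and the example window below is made up):

```python
import numpy as np

# 6x6 general filtering pattern: a 2x2 block of ones in the center
general = np.zeros((6, 6))
general[2:4, 2:4] = 1

# A 4x4 template cut from a (hypothetical) binary input image
window = np.array([[0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [1, 1, 0, 0],
                   [1, 1, 0, 0]], dtype=float)

def found_in_general(window, general):
    # Slide the 4x4 window over every 4x4 sub-pattern of the general
    # pattern and report whether it matches one of them exactly
    h, w = window.shape
    for i in range(general.shape[0] - h + 1):
        for j in range(general.shape[1] - w + 1):
            if np.array_equal(general[i:i + h, j:j + w], window):
                return True
    return False

print(found_in_general(window, general))  # True: the window is a corner
```

A window that is not one of the nine sub-patterns (for example, all ones) would return `False`.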

What about the angular relationship of the corners? After we apply the method explained above, we will have the coordinates of the corners we found.

After this, the mean of the corner coordinates will be found.

[Figure: display of the mean coordinate. Ones are corners found in the image.]

After the mean coordinate is found, the angles of the lines between the mean coordinate and each corner will be found.

After these angles are found, we can investigate what they represent.
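As a small illustration of this step (the corner coordinates below are made up; the +90° shift and the wrap into [0, 360) follow the listing at the end of the article):

```python
import math
import numpy as np

# Hypothetical corner coordinates (row, col) found in an image
corners = np.array([[5.0, 5.0], [5.0, 20.0], [20.0, 5.0], [20.0, 20.0]])

# Mean coordinate of the corners
mean_r = corners[:, 0].mean()
mean_c = corners[:, 1].mean()

# Angle of the line between each corner and the mean coordinate,
# shifted by +90 degrees and wrapped into [0, 360)
angles = sorted(
    round((math.degrees(math.atan2(mean_c - c, mean_r - r)) + 90) % 360, 1)
    for r, c in corners)

print(angles)  # [45.0, 135.0, 225.0, 315.0]
```

The sorted angle list is the signature compared between images.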

[Figure: angle representation between the mean coordinate and the corners of the input images. Input images are digit 1 images from the MNIST dataset; the cyan curve is from a digit 5 image.]

As we see, some of the digit 1 images show similarity with each other, but the digit 5 image (cyan) shows completely different behavior.

Conclusion: the angular representation of the relationships between corners may be suitable for use as a feature descriptor.

This feature extraction may increase the accuracy rate while decreasing the time spent training classification algorithms.

Python Code of the Explained Methods:

```python
import math

import numpy as np
from PIL import Image
from skimage.feature import match_template


def tresholdEvo(img, t_value, tresholdType):
    # Binarize around t_value; "onlyb"/"onlys" threshold only one side
    if tresholdType == "both":
        img[np.where(img < t_value)] = 0
        img[np.where(img > t_value)] = 1
    elif tresholdType == "onlyb":
        img[np.where(img < t_value)] = 0
    elif tresholdType == "onlys":
        img[np.where(img > t_value)] = 1
    return img


def isCorner(kernel):
    # Score a 4x4 kernel against the 6x6 general filtering pattern
    mergedFilter = np.zeros((6, 6))
    mergedFilter[2:4, 2:4] = 1
    result = match_template(mergedFilter, kernel)
    mV = np.max(result)
    return mV


def angularRepresentation(coordinates):
    coordinates = np.asarray(coordinates)
    mean1 = np.sum(coordinates[:, 0]) / coordinates[:, 0].size
    mean2 = np.sum(coordinates[:, 1]) / coordinates[:, 1].size
    angles = []
    for cnt in range(len(coordinates)):
        diff1 = mean1 - coordinates[cnt, 0]
        diff2 = mean2 - coordinates[cnt, 1]
        ang = math.degrees(math.atan2(diff2, diff1)) + 90
        if ang < 0:
            ang = 360 + ang
        angles.append(ang)
    return sorted(angles), mean1, mean2


def findCorners(img):
    coordWithValues = []
    for cnt1 in range(0, len(img) - 3):
        for cnt2 in range(len(img[cnt1]) - 3):
            kernel = img[cnt1:cnt1 + 4, cnt2:cnt2 + 4]
            mV = isCorner(kernel)
            coordWithValues.append([cnt1, cnt2, mV])
    # Keep the coordinates of the ten best-scoring windows
    sortedCoordWithValues = sorted(coordWithValues,
                                   key=lambda column: column[2], reverse=True)
    coord = [[row[0], row[1]] for row in sortedCoordWithValues][0:10]
    return coord


def main(angleList, directoryOfElement):
    img = np.asarray(Image.open(directoryOfElement).convert('L'))
    img = tresholdEvo(img / np.max(img), 0.4, "both")
    coord = findCorners(img)
    angles, mean1, mean2 = angularRepresentation(coord)
    angleList.append(angles)
    return angleList
```
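To make the whole pipeline concrete, here is a self-contained end-to-end sketch on a synthetic image (exact NumPy matching stands in for `match_template`, and the 28×28 image with a 10×10 square is invented test input):

```python
import math
import numpy as np

# Synthetic 28x28 binary "image" with a 10x10 square (invented test input)
img = np.zeros((28, 28))
img[9:19, 9:19] = 1

# 6x6 general filtering pattern and its nine 4x4 sub-patterns
general = np.zeros((6, 6))
general[2:4, 2:4] = 1
subpatterns = [general[i:i + 4, j:j + 4] for i in range(3) for j in range(3)]

# A 4x4 window is taken as a corner if it equals one of the sub-patterns
corners = [(r, c)
           for r in range(img.shape[0] - 3)
           for c in range(img.shape[1] - 3)
           if any(np.array_equal(img[r:r + 4, c:c + 4], p)
                  for p in subpatterns)]

# Angular representation relative to the mean coordinate
coords = np.asarray(corners, dtype=float)
mean_r, mean_c = coords[:, 0].mean(), coords[:, 1].mean()
angles = sorted(
    round((math.degrees(math.atan2(mean_c - c, mean_r - r)) + 90) % 360, 1)
    for r, c in coords)

print(len(corners), angles)  # 4 [45.0, 135.0, 225.0, 315.0]
```

The four detected windows sit at the four corners of the square, and their angles relative to the mean coordinate are evenly spaced, as expected for a symmetric shape.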