Heart disease classifier using K-Nearest Neighbors Algorithm

By Nagesh Singh Chauhan

I have written this post for developers, and it assumes no background in statistics or mathematics.

The focus is mainly on how the k-NN algorithm works and how to use it for predictive modeling problems.

Classification of objects is an important area of research and application in a variety of fields.

In the presence of full knowledge of the underlying probabilities, Bayes decision theory gives optimal error rates.

In those cases where this information is not present, many algorithms make use of distance or similarity among samples as a means of classification.

The article has been divided into 2 parts.

In the first part, we will cover the K-NN machine learning algorithm itself, and in the second part, we will apply K-NN to a real problem and classify heart disease patients.

Table of contents

1. What is a K-NN algorithm?
2. How does the K-NN algorithm work?
3. When to choose K-NN?
4. How to choose the optimal value of K?
5. What is the Curse of Dimensionality?
6. Building a K-NN classifier using Python scikit-learn

What is a K-NN Algorithm?

K-NN, or K-Nearest Neighbors, is one of the most popular classification algorithms in the industry today, largely because of its simplicity and accuracy.

K-NN is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions).

KNN has been used in statistical estimation and pattern recognition as a non-parametric technique since the early 1970s.

The algorithm assumes that similar things exist in close proximity.

In other words, similar entities tend to be located close to each other.

How does the K-NN algorithm work?

In K-NN, K is the number of nearest neighbors.

The number of neighbors is the core deciding factor.

K is generally an odd number if the number of classes is 2.

When K=1, then the algorithm is known as the nearest neighbor algorithm.

This is the simplest case.

In the figure below, suppose the yellow-colored "?", let's call it P, is the point whose label we need to predict.

First, you find the single closest point to P, and the label of that nearest point is assigned to P.

Second, you find the K closest points to P and then classify P by a majority vote of those K neighbors.

Each neighbor votes for its class, and the class with the most votes is taken as the prediction.

For finding closest similar points, we find the distance between points using distance measures such as Euclidean distance, Hamming distance, Manhattan distance, and Minkowski distance.

The algorithm has the following basic steps:

1. Calculate distance
2. Find closest neighbors
3. Vote for labels

The three most commonly used distance measures for calculating the distance between point P and its nearest neighbors are shown below. In this article, we will go ahead with Euclidean distance, so let's understand it first.
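Since the figure with the distance formulas is not reproduced here, the following is a minimal Python sketch of the three standard definitions (my illustration, not code from the original article):

import numpy as np

def euclidean_distance(p, q):
    # square root of the sum of squared coordinate differences
    return np.sqrt(np.sum((np.asarray(p) - np.asarray(q)) ** 2))

def manhattan_distance(p, q):
    # sum of absolute coordinate differences
    return np.sum(np.abs(np.asarray(p) - np.asarray(q)))

def minkowski_distance(p, q, r=2):
    # generalization: r = 1 gives Manhattan distance, r = 2 gives Euclidean distance
    return np.sum(np.abs(np.asarray(p) - np.asarray(q)) ** r) ** (1.0 / r)

print(euclidean_distance([1, 2], [4, 6]))  # 5.0, the classic 3-4-5 right triangle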

Euclidean distance: This is the most commonly used distance measure, often referred to simply as "distance". Using the Euclidean distance measure is highly recommended when the data is dense or continuous, and in such cases it is usually the best proximity measure.

The Euclidean distance between two points is the length of the path connecting them.

The Pythagorean theorem gives this distance between two points.

The figure below shows how to calculate the Euclidean distance between two points in a 2-dimensional plane.

Euclidean distance between two points in 2-D

When to use the K-NN algorithm?

KNN can be used for both classification and regression predictive problems.

However, it is more widely used in classification problems in the industry.

To evaluate any technique, we generally look at 3 important aspects:

1. Ease of interpreting the output
2. Calculation time of the algorithm
3. Predictive power

Let us compare KNN with different models (source: Analytics Vidhya). As you can see, K-NN surpasses Logistic Regression, CART, and Random Forest in terms of the aspects we are considering.

How to choose the optimal value of K?

The number of neighbors (K) in K-NN is a hyperparameter that you need to choose at the time of building your model.

You can think of K as a controlling variable for the prediction model.

Now, choosing the optimal value for K is best done by first inspecting the data.

In general, a large K value is more precise as it reduces the overall noise but there is no guarantee.

Cross-validation is another way to retrospectively determine a good K value by using an independent dataset to validate the K value.

Historically, the optimal K for most datasets has been between 3–10.

That generally produces much better results than 1-NN (when K = 1).

Generally, an odd number is chosen if the number of classes is even.

You can also build the model with different values of K and compare their performance, as sketched below.
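For example, here is a minimal sketch of this idea using scikit-learn's cross_val_score (the dataset here is only a stand-in for illustration, not the heart disease data used later in this article):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # any labeled dataset works for this illustration

# Score K = 1..10 with 5-fold cross-validation and keep the mean accuracy for each value
mean_scores = {}
for k in range(1, 11):
    knn = KNeighborsClassifier(n_neighbors=k)
    mean_scores[k] = cross_val_score(knn, X, y, cv=5, scoring='accuracy').mean()

best_k = max(mean_scores, key=mean_scores.get)
print('Best K:', best_k)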

Curse of Dimensionality

K-NN performs better with a small number of features than with a large number of features.

You can say that as the number of features increases, the algorithm requires more data.

An increase in dimensions also leads to the problem of overfitting.

To avoid overfitting, the amount of data needed grows exponentially as you increase the number of dimensions.

This problem of higher dimension is known as the Curse of Dimensionality.

From the graphical representation above, it is clearly visible that the performance of your model decreases as the number of features (dimensions) increases.

To deal with the curse of dimensionality, you can perform principal component analysis (PCA) before applying any machine learning algorithm, or you can use a feature selection approach.

Research has shown that in high dimensions, Euclidean distance is no longer very useful.

Therefore, you may prefer other measures, such as cosine similarity, which are decidedly less affected by high dimensionality.
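As a sketch of how this could look in practice (assuming numeric features; this pipeline is my illustration, not part of the original article), PCA can be chained in front of K-NN with scikit-learn:

from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize, project onto the top principal components, then classify with K-NN
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),            # number of components is a tuning choice
    KNeighborsClassifier(n_neighbors=7),
)
# Usage: model.fit(X_train, y_train) and then model.score(X_test, y_test)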

The KNN algorithm can compete with the most accurate models because it makes highly accurate predictions.

Therefore, you can use the KNN algorithm for applications that require high accuracy but that do not require a human-readable model.

— source: IBM

Steps to compute the K-NN algorithm:

1. Determine the parameter K = number of nearest neighbors.
2. Calculate the distance between the query instance and all the training samples.
3. Sort the distances and determine the nearest neighbors based on the K-th minimum distance.
4. Gather the categories of the nearest neighbors.
5. Use a simple majority of the categories of the nearest neighbors as the prediction value of the query.
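To make these steps concrete, here is a minimal from-scratch sketch in NumPy (an illustrative implementation of the steps above, not the scikit-learn code used later in the article):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    # Step 1: K is passed in as a parameter.
    # Step 2: compute the Euclidean distance from the query to every training sample.
    distances = np.sqrt(np.sum((X_train - query) ** 2, axis=1))
    # Step 3: sort the distances and take the indices of the K nearest samples.
    nearest_idx = np.argsort(distances)[:k]
    # Step 4: gather the categories (labels) of those neighbors.
    nearest_labels = y_train[nearest_idx]
    # Step 5: a simple majority vote decides the predicted category.
    return Counter(nearest_labels).most_common(1)[0][0]

# Tiny usage example with made-up points
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.9, 5.1]), k=3))  # -> 1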

In the next section, we are going to solve a real-world problem using the K-NN algorithm.

Building a heart disease classifier using the K-NN algorithm

The most crucial task in the healthcare field is disease diagnosis.

If a disease is diagnosed early, many lives can be saved.

Machine learning classification techniques can significantly benefit the medical field by providing an accurate and quick diagnosis of diseases.

This, in turn, saves time for both doctors and patients.

Heart disease is the number one killer in the world today, and it is also one of the most difficult diseases to diagnose.

In this section, we are going to build a K-NN classifier that predicts whether or not a patient has heart disease.

You can download the dataset from the UCI Machine Learning repository.

This database contains 76 attributes, but all published experiments refer to using a subset of 14 of them.

In particular, the Cleveland database is the only one that has been used by ML researchers to this date.

The “goal” field refers to the presence of heart disease in the patient.

It is integer valued from 0 (no presence) to 4.

The dataset contains the following features:

age — age in years
sex — (1 = male; 0 = female)
cp — chest pain type
trestbps — resting blood pressure (in mm Hg on admission to the hospital)
chol — serum cholesterol in mg/dl
fbs — fasting blood sugar > 120 mg/dl (1 = true; 0 = false)
restecg — resting electrocardiographic results
thalach — maximum heart rate achieved
exang — exercise induced angina (1 = yes; 0 = no)
oldpeak — ST depression induced by exercise relative to rest
slope — the slope of the peak exercise ST segment
ca — number of major vessels (0–3) colored by fluoroscopy
thal — 3 = normal; 6 = fixed defect; 7 = reversible defect
target — have disease or not (1 = yes, 0 = no)

Let's load all the required libraries.

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn import metrics

Let's load our dataset:

data = pd.read_csv('/Users/nageshsinghchauhan/Downloads/ML/KNN_art/heart.csv')

Our original dataset looks like this. Let us explore the dataset and count the number of patients who have heart disease:

data.target.value_counts()

1    165
0    138
Name: target, dtype: int64

So out of all the patients, 165 actually have heart disease.

Now let's also visualize this.

sns.countplot(x="target", data=data, palette="bwr")
plt.show()

Count of patients having heart disease (target = 1)

Now let's split the target variable between male and female and visualize the result.

sns.countplot(x='sex', data=data, palette="mako_r")
plt.xlabel("Sex (0 = female, 1 = male)")
plt.show()

Count of males and females having heart disease

So from the above figure, it is evident that there are 207 males and 96 females in our dataset.

Let us also see the relation between “Maximum Heart Rate” and “Age”.

plt.scatter(x=data.age[data.target==1], y=data.thalach[data.target==1], c="green")
plt.scatter(x=data.age[data.target==0], y=data.thalach[data.target==0], c="black")
plt.legend(["Disease", "Not Disease"])
plt.xlabel("Age")
plt.ylabel("Maximum Heart Rate")
plt.show()

Scatter plot between Age and Maximum Heart Rate

So from the above, the maximum heart rates occur between the ages of 50 and 60 years.

Ok, now let's split our dataset into X (the matrix of independent variables) and y (the vector of the dependent variable).

X = data.iloc[:,:-1].values
y = data.iloc[:,13].values

Next, we split the data into a training set (75%) and a test set (25%) using the code below.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

Now, our dataset contains features which vary highly in magnitude, units, and range.

But since distance-based algorithms such as K-NN use the Euclidean distance between data points in their computations, this is a problem.

To suppress this effect, we need to bring all features to the same level of magnitudes.

This can be achieved by a method called feature scaling.

So our next step is to scale the data, which can be done using StandardScaler() from scikit-learn.

sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)

Our next step is to build the K-NN model and train it with the training data. Here n_neighbors is the value of K.

classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier = classifier.fit(X_train, y_train)

The most important point here is choosing the optimal value of K, and for that, we will start with K = 5.

Now your K-NN model is ready with K = 5. Let's predict on our test data and check the accuracy.

y_pred = classifier.predict(X_test)
# check accuracy
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))

Accuracy: 0.82

For K = 6:

classifier = KNeighborsClassifier(n_neighbors = 6, metric = 'minkowski', p = 2)
classifier = classifier.fit(X_train, y_train)
# prediction
y_pred = classifier.predict(X_test)
# check accuracy
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))

Accuracy: 0.86

For K = 7:

classifier = KNeighborsClassifier(n_neighbors = 7, metric = 'minkowski', p = 2)
classifier = classifier.fit(X_train, y_train)
# prediction
y_pred = classifier.predict(X_test)
# check accuracy
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))

Accuracy: 0.87

For K = 8:

classifier = KNeighborsClassifier(n_neighbors = 8, metric = 'minkowski', p = 2)
classifier = classifier.fit(X_train, y_train)
# prediction
y_pred = classifier.predict(X_test)
# check accuracy
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))

Accuracy: 0.87

For K = 9:

classifier = KNeighborsClassifier(n_neighbors = 9, metric = 'minkowski', p = 2)
classifier = classifier.fit(X_train, y_train)
# prediction
y_pred = classifier.predict(X_test)
# check accuracy
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))

Accuracy: 0.86

So, as we can see, the accuracy is maximum (87%) when K = 7.
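Instead of repeating the same block for every K, a short loop over the variables already defined above can do the sweep (a convenience sketch covering the same idea as the manual runs shown):

best_k, best_acc = 0, 0.0
for k in range(1, 21):
    clf = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
    clf.fit(X_train, y_train)
    acc = metrics.accuracy_score(y_test, clf.predict(X_test))
    if acc > best_acc:
        best_k, best_acc = k, acc
print('Best K = {}, accuracy = {:.2f}'.format(best_k, best_acc))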

Let's also check the confusion matrix and see how many records were predicted correctly.

# confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

array([[26,  7],
       [ 3, 40]])

In the output, 26 and 40 are correct predictions, and 7 and 3 are incorrect predictions.
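For reference (my annotation, not from the original article): scikit-learn puts the true labels on the rows and the predicted labels on the columns, so the four entries can be unpacked like this:

tn, fp, fn, tp = cm.ravel()
print('True negatives :', tn)   # 26: healthy patients correctly predicted as healthy
print('False positives:', fp)   # 7: healthy patients wrongly flagged as having disease
print('False negatives:', fn)   # 3: patients with disease predicted as healthy
print('True positives :', tp)   # 40: patients with disease correctly identified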

Conclusion

Congratulations, you have successfully built a heart disease classifier using K-NN that is capable of classifying heart patients with good accuracy.

In this article, we have learned about the K-Nearest Neighbors algorithm: how it works, the curse of dimensionality, and model building and evaluation on the heart disease dataset using the Python scikit-learn package.

Well, I hope you guys have enjoyed reading this article.

Let me know your thoughts/suggestions/questions in the comment section.

You can reach me out on LinkedIn for any query.

Thanks for reading!

