Similarly, a true negative is an outcome where the model correctly predicts the negative class.

False Positive & False Negative

The terms False Positive and False Negative are very important in determining how well the model performs on a classification task. A false positive is an outcome where the model incorrectly predicts the positive class, and a false negative is an outcome where the model incorrectly predicts the negative class. The more values on the main diagonal of the confusion matrix, the better the model; the other diagonal holds the misclassifications.

False Positive

An example in which the model mistakenly predicted the positive class. For example, the model inferred that a particular email message was spam (the positive class), but that email message was actually not spam. It is like a warning sign: the mistake should be rectified, but it is not as serious a concern as a false negative.

False positive (Type I error): when you reject a true null hypothesis.

False Positive Rate

FPR = FP / (FP + TN)

False Negative

An example in which the model mistakenly predicted the negative class. For example, the model inferred that a particular email message was not spam (the negative class), but that email message actually was spam. It is like a danger sign: the mistake should be rectified at the earliest, as it is of much more serious concern than a false positive.

False negative (Type II error): when you accept a false null hypothesis.

This picture easily illustrates the above metrics. The man's test result saying "You're pregnant" is a false positive, as a man cannot be pregnant; the pregnant woman's test result saying "Not pregnant" is a false negative, as the image makes it easy to see that the woman is pregnant.

From the confusion matrix, we can infer Accuracy, Precision, Recall, and the F-1 Score.

Accuracy

Accuracy is the fraction of predictions our model got right:

Accuracy = Number of correct predictions / Total number of predictions

In terms of the confusion matrix, accuracy can also be written as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Accuracy alone doesn't tell the full story when you're working with a
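As a minimal sketch of the definitions above, the four confusion-matrix counts and accuracy can be computed by hand for a binary problem (the label vectors below are made up for illustration, with 1 = spam/positive and 0 = not spam/negative):

```python
# Toy ground-truth and predicted labels (hypothetical example data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

# Count each cell of the 2x2 confusion matrix.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives (Type I)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives (Type II)

# Accuracy: fraction of predictions the model got right.
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(tp, tn, fp, fn, accuracy)  # → 3 4 1 2 0.7
```

In practice you would use a library routine such as scikit-learn's `confusion_matrix`, but spelling the counts out makes the diagonal/off-diagonal structure explicit.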
class-imbalanced data set, where there is a significant disparity between the number of positive and negative labels. Precision and Recall are better metrics for evaluating class-imbalanced problems.

Precision

Out of all the instances the model predicted as positive, how many did it predict correctly?

Precision = TP / (TP + FP)

Precision should be as high as possible.

Recall

Out of all the actual positive instances, how many did the model predict correctly?

Recall = TP / (TP + FN)

It is also called sensitivity or true positive rate (TPR). Recall should be as high as possible.

F-1 Score

It is often convenient to combine precision and recall into a single metric called the F1 score, in particular if you need a simple way to compare two classifiers. The F1 score is the harmonic mean of precision and recall:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

Whereas the regular mean treats all values equally, the harmonic mean gives much more weight to low values, thereby punishing extreme values more. As a result, a classifier will only get a high F1 score if both recall and precision are high.

3. More details
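The precision, recall, and F1 definitions above can be sketched directly from confusion-matrix counts (the count values below are hypothetical, chosen only for illustration):

```python
# Hypothetical confusion-matrix counts for illustration.
tp, fp, fn = 3, 1, 2

# Precision: of everything predicted positive, the fraction that was truly positive.
precision = tp / (tp + fp)  # 3 / 4

# Recall (sensitivity, TPR): of all actual positives, the fraction that was found.
recall = tp / (tp + fn)     # 3 / 5

# F1: harmonic mean of precision and recall; stays low if either one is low.
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # → 0.75 0.6 0.666...
```

Note how the harmonic mean behaves: with precision 0.75 and recall 0.6, the F1 score is about 0.67, pulled toward the lower of the two values rather than sitting at their arithmetic mean of 0.675.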