Confusion Matrix and Cyber Security Errors.

Utkarsh Pande
3 min read · Jun 6, 2021

A confusion matrix is a fairly common term when it comes to machine learning. Today I will try to relate the importance of the confusion matrix to cyber security.

So the confusion matrix is yet another classification metric that tells us how well our model is performing. Yet its ideas turn up in many places that never mention the confusion matrix by name.

This all gives us an idea that there is something more to the confusion matrix than just being called another classification metric.

So before we dive deep let’s first understand what a confusion matrix is.

Confusion Matrix?

It is a performance measurement for a machine learning classification problem where the output can be two or more classes. For binary classification, it is a 2×2 table with 4 different combinations of predicted and actual values.

True Positive:

Interpretation: You predicted positive and it’s true.

True Negative:

Interpretation: You predicted negative and it’s true.

False Positive: (Type I Error)

Interpretation: You predicted positive and it’s false.

False Negative: (Type II Error)

Interpretation: You predicted negative and it’s false.

So this would give an idea of what the four boxes in the confusion matrix are representing.
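The four cells above can be counted directly from predictions. Here is a minimal sketch in plain Python (the example labels are made up for illustration; 1 marks the positive class):

```python
# Count the four confusion-matrix cells for binary labels.
# Convention: 1 = positive (e.g. "file is a virus"), 0 = negative.
def confusion_counts(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # Type I error
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # Type II error
    return tp, tn, fp, fn

actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
print(confusion_counts(actual, predicted))  # (3, 3, 1, 1)
```

Libraries like scikit-learn provide the same thing as `sklearn.metrics.confusion_matrix`, but counting by hand makes the Type I / Type II distinction explicit.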

So what makes the confusion matrix so peculiar is the presence and distinction of type I and type II errors.

High accuracy is always the goal, be it in machine learning or any other field. But the question is: does high accuracy always mean better results? In most cases the answer is yes, but let me give you an example where we have to go beyond the common notion that we can blindly chase higher accuracy.

Let’s say an anti-virus company came out with an AI-based anti-virus that flags suspicious files. This model gives 97 percent accuracy. Let’s say the model is running on your PC while you are working on the next big thing. You just created an executable script that is very crucial to you, but the anti-virus, being an AI model, gave a “FALSE POSITIVE” and declared your file a virus.

But on the other hand, let’s say that you downloaded a few music videos that contained a malicious package, and the model was unable to detect it, giving a “FALSE NEGATIVE”.

So now you have a choice: which type of model would you prefer? The mere existence of a choice here means that accuracy alone doesn’t suffice in some cases, because in both these cases the accuracy remained the same.
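The point is easy to verify with numbers. Below is a toy sketch (the counts are invented for illustration) of two models that score the same accuracy on 100 files while making opposite kinds of mistakes:

```python
# Accuracy = correct predictions / all predictions.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Model A: all 10 mistakes are false positives (clean files flagged as viruses).
model_a = dict(tp=20, tn=70, fp=10, fn=0)
# Model B: all 10 mistakes are false negatives (real viruses missed).
model_b = dict(tp=10, tn=80, fp=0, fn=10)

print(accuracy(**model_a))  # 0.9
print(accuracy(**model_b))  # 0.9 -- same accuracy, very different risk
```

Accuracy alone cannot distinguish A from B; only the confusion matrix shows that B quietly lets viruses through.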

So you might now have a gist of the importance of the two types of error in the confusion matrix and what they mean.

Cybercrime can be anything like:

  • Stealing personal data
  • Identity theft
  • Stealing organizational data
  • Stealing bank card details
  • Hacking emails to gain information

The trade-off between type I and type II errors is very critical in cybersecurity. Let’s take another example. Consider a face recognition system installed in front of a data warehouse which holds critical data. Suppose the manager comes and the recognition system is unable to recognize him. He tries to log in again and is allowed in.

This seems a pretty normal scenario. But let’s consider another condition. A new person comes and tries to log himself in. The recognition system makes an error and allows him in. Now, this is very dangerous. An unauthorized person has made an entry. This could be very damaging to the whole company.

In both cases, the security system made an error. But the tolerance for a False Negative here is zero, although we can still bear a False Positive.

This shows how the acceptable trade-off between the two types of error varies from use case to use case.
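One common way to shift this trade-off is tuning the decision threshold on the model’s score. The sketch below uses invented scores: here the positive class means “unauthorized person”, so lowering the threshold means fewer intruders slip through (fewer false negatives) at the cost of more false alarms for legitimate staff (more false positives):

```python
# Hypothetical "intruder scores" from a recognition system (higher = more
# suspicious). Label 1 = actually unauthorized, 0 = legitimate staff.
scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.95]
labels = [0,    0,    1,    0,    1,    1]

def false_negatives(threshold):
    # An intruder is missed when their score falls below the threshold.
    return sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)

def false_positives(threshold):
    # A staff member is flagged when their score reaches the threshold.
    return sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)

# Strict threshold: no false alarms, but one intruder gets in.
print(false_negatives(0.7), false_positives(0.7))  # 1 0
# Lenient threshold: zero missed intruders, one false alarm.
print(false_negatives(0.5), false_positives(0.5))  # 0 1
```

For a data warehouse, the lenient setting is clearly the right choice: a manager occasionally retrying a scan is a nuisance, an intruder walking in is a breach.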
