Extending F1 metric, probabilistic approach

Abstract

This article explores an extension of the well-known F1 score used for assessing the performance of binary classifiers. We propose a new metric based on a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics, then demonstrate its behavior in edge cases of the confusion matrix. Finally, the properties of the metric are tested on a binary classifier trained on a real dataset.
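As background for the abstract above, the four quantities it names are all simple ratios over the confusion matrix, and F1 is the harmonic mean of the first two. The sketch below shows these standard definitions only (not the article's proposed extension); the function name and example counts are illustrative.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)              # positive predictive value
    recall = tp / (tp + fn)                 # a.k.a. sensitivity
    specificity = tn / (tn + fp)            # true negative rate
    npv = tn / (tn + fn)                    # negative predictive value
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    return precision, recall, specificity, npv, f1

# Hypothetical counts: with tp = fp = tn = fn, every ratio is 0.5.
print(confusion_metrics(10, 10, 10, 10))    # → (0.5, 0.5, 0.5, 0.5, 0.5)
```

The article's contribution is a metric that, unlike F1, also takes specificity and negative predictive value into account; see the linked paper for its definition.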

Keywords: machine learning, binary classifier, F1, MCC, precision, recall

Full article

The article can be downloaded at: https://arxiv.org/abs/2210.11997
