Statistics and Its Interface

Volume 17 (2024)

Number 3

Improved Naive Bayes with mislabeled data

Pages: 323 – 336

DOI: https://dx.doi.org/10.4310/22-SII757

Authors

Qianhan Zeng (Peking University)

Yingqiu Zhu (University of International Business and Economics, China)

Xuening Zhu (Fudan University)

Feifei Wang (Renmin University of China)

Weichen Zhao (Hohai University)

Shuning Sun (Fudan University)

Meng Su (Beijing Percent Technology Group Co., Ltd.)

Hansheng Wang (Peking University)

Abstract

Labeling mistakes are frequently encountered in real-world applications. If not handled properly, they can seriously degrade a model's classification performance. To address this issue, we propose an improved Naive Bayes method for text classification. It is analytically simple and free of subjective judgments about which labels are correct or incorrect. By specifying a generating mechanism for the incorrect labels, we optimize the corresponding log-likelihood function iteratively with an EM algorithm. Our simulation and experimental results show that the improved method substantially outperforms the standard Naive Bayes method when the data are mislabeled.
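
The paper's exact likelihood specification and updates are given in the full text; purely as a rough illustration of the general idea described in the abstract, a minimal sketch of an EM loop for a multinomial Naive Bayes model with a latent true label and an estimated label-flip matrix might look like the following (all function and variable names are hypothetical, not taken from the paper):

```python
import numpy as np

def em_naive_bayes_noisy(X, y_obs, n_classes, n_iter=50, flip_init=0.1):
    """Illustrative EM sketch (not the paper's implementation):
    multinomial Naive Bayes whose observed labels may be flipped.

    X: (n_docs, n_words) term-count matrix; y_obs: observed (possibly wrong) labels.
    The latent variable is each document's true label.
    """
    n, V = X.shape
    # Initialize parameters from the observed (noisy) labels.
    prior = np.bincount(y_obs, minlength=n_classes) / n           # P(true class)
    word_prob = np.ones((n_classes, V))                           # Laplace smoothing
    for k in range(n_classes):
        word_prob[k] += X[y_obs == k].sum(axis=0)
    word_prob /= word_prob.sum(axis=1, keepdims=True)
    # Flip matrix: P(observed label j | true label k); start nearly diagonal.
    flip = np.full((n_classes, n_classes), flip_init / (n_classes - 1))
    np.fill_diagonal(flip, 1.0 - flip_init)

    for _ in range(n_iter):
        # E-step: posterior over true labels given word counts and the noisy label.
        log_post = (X @ np.log(word_prob).T          # log P(doc | true class)
                    + np.log(prior)                  # log P(true class)
                    + np.log(flip[:, y_obs]).T)      # log P(observed | true)
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)      # shape (n, n_classes)

        # M-step: re-estimate prior, word probabilities, and the flip matrix.
        prior = post.mean(axis=0)
        word_prob = post.T @ X + 1.0                 # Laplace smoothing
        word_prob /= word_prob.sum(axis=1, keepdims=True)
        for j in range(n_classes):
            flip[:, j] = post[y_obs == j].sum(axis=0) + 1e-6
        flip /= flip.sum(axis=1, keepdims=True)

    return prior, word_prob, flip
```

This sketch only conveys the structure of the approach (an E-step over latent true labels followed by closed-form M-step updates); the paper's actual model for the incorrect-label generating mechanism and its likelihood should be taken from the article itself.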

Keywords

naive Bayes, text classification, label noise, EM algorithm

2010 Mathematics Subject Classification

Primary 62F15. Secondary 62F35.

Received 6 June 2022

Accepted 2 September 2022

Published 19 July 2024