Centroid training to achieve effective text classification

Zhang, Libiao, Li, Yuefeng, Xu, Yue, Tjondronegoro, Dian W., & Sun, Chao (2014) Centroid training to achieve effective text classification. In Cao, Longbing, Karypis, George, King, Irwin, & Wang, Wei (Eds.) Proceedings of the 2014 International Conference on Data Science and Advanced Analytics (DSAA), IEEE, Shanghai East Asia Hotel, Shanghai, pp. 406-412.

Traditional text classification based on machine learning and data mining techniques has made considerable progress. However, drawing an exact decision boundary between relevant and irrelevant objects in binary classification remains difficult because traditional algorithms produce substantial uncertainty. The proposed model, CTTC (Centroid Training for Text Classification), builds an uncertainty boundary that absorbs as many indeterminate objects as possible, thereby raising the certainty of the relevant and irrelevant groups through a centroid clustering and training process. Clustering starts from the two training subsets labelled relevant and irrelevant respectively, creating two principal centroid vectors by which all training samples are separated into three groups: POS, NEG and BND, with all indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are then trained and optimized through a subsequent iterative multi-learning process, and all of them collaborate to predict the polarity of incoming objects. For assessment of the proposed model, F1 and Accuracy were chosen as the key evaluation measures; F1 is emphasized because it reflects the overall performance improvement of the final classifier better than Accuracy. Extensive experiments were conducted with the proposed model on the Reuters Corpus Volume 1 (RCV1), an important standard dataset in the field. The results show that the proposed model significantly improves binary text classification performance in both F1 and Accuracy compared with three influential baseline models.
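The three-way split described in the abstract (POS, NEG and BND groups derived from two principal centroid vectors) can be sketched as below. This is a hypothetical illustration, not the paper's implementation: the cosine similarity measure, the `margin` threshold, and all function names are assumptions, and the subsequent iterative multi-learning step is not reproduced.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def three_way_split(samples, labels, margin=0.1):
    """Separate training samples into POS, NEG and BND index lists
    using two principal centroid vectors (a sketch; the paper's exact
    thresholding and iteration rules are assumptions here)."""
    samples = np.asarray(samples, dtype=float)
    labels = np.asarray(labels)
    c_rel = samples[labels == 1].mean(axis=0)  # centroid of relevant subset
    c_irr = samples[labels == 0].mean(axis=0)  # centroid of irrelevant subset
    pos, neg, bnd = [], [], []
    for i, x in enumerate(samples):
        # Difference in similarity to the two centroids decides the group.
        d = cosine(x, c_rel) - cosine(x, c_irr)
        if d > margin:
            pos.append(i)
        elif d < -margin:
            neg.append(i)
        else:
            bnd.append(i)  # indeterminate: absorbed into the boundary BND
    return pos, neg, bnd
```

Points clearly closer to one centroid land in POS or NEG, while points whose similarities to the two centroids differ by less than the margin are absorbed into BND, which is the role the uncertainty boundary plays in the model.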

Impact and interest:

0 citations in Scopus


Full-text downloads:

40 since deposited on 20 Apr 2015
12 in the past twelve months


ID Code: 83689
Item Type: Conference Paper
Refereed: Yes
Keywords: Data mining, Feature extraction, Optimization, Testing, Training, Uncertainty, Vectors
DOI: 10.1109/DSAA.2014.7058104
ISBN: 9781479969913
Divisions: Current > Schools > School of Electrical Engineering & Computer Science
Current > Schools > School of Information Systems
Current > QUT Faculties and Divisions > Science & Engineering Faculty
Copyright Owner: Copyright 2014 by IEEE
Deposited On: 20 Apr 2015 23:25
Last Modified: 24 Apr 2015 05:25

