Word Segmentation for Chinese Wikipedia Using N-Gram Mutual Information
Tang, Ling-Xiang, Geva, Shlomo, Xu, Yue, & Trotman, Andrew (2009) Word Segmentation for Chinese Wikipedia Using N-Gram Mutual Information. In Proceedings of the Fourteenth Australasian Document Computing Symposium, School of Information Technologies, University of Sydney, University of New South Wales, Sydney, pp. 82-89.
In this paper, we propose an unsupervised segmentation approach named "n-gram mutual information" (NGMI), which segments Chinese documents into n-character words or phrases using language statistics drawn from the Chinese Wikipedia corpus. The approach avoids the considerable effort otherwise required to prepare and maintain manually segmented Chinese text for training, and to manually maintain ever-expanding lexicons. Mutual information has previously been used for automated segmentation into 2-character words; NGMI extends this approach to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
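To make the underlying idea concrete, the following is a minimal sketch of the classic bigram mutual-information criterion that NGMI generalises: adjacent characters are kept together when their pointwise mutual information, estimated from corpus counts, exceeds a threshold, and a word boundary is inserted otherwise. This is an illustration of the 2-character baseline only, not the authors' NGMI implementation; the function names, the zero threshold, and the counting scheme are assumptions.

```python
import math
from collections import Counter

def train_counts(corpus):
    """Count character unigram and adjacent-character bigram frequencies."""
    uni, bi = Counter(), Counter()
    for text in corpus:
        uni.update(text)
        bi.update(text[i:i + 2] for i in range(len(text) - 1))
    return uni, bi

def pmi(a, b, uni, bi, n_uni, n_bi):
    """Pointwise mutual information of the adjacent character pair (a, b)."""
    p_ab = bi[a + b] / n_bi
    if p_ab == 0.0:
        return float("-inf")  # unseen pair: strong evidence for a boundary
    p_a, p_b = uni[a] / n_uni, uni[b] / n_uni
    return math.log2(p_ab / (p_a * p_b))

def segment(text, uni, bi, threshold=0.0):
    """Insert a boundary wherever adjacent-character PMI falls below threshold."""
    if not text:
        return []
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    words, current = [], text[0]
    for i in range(1, len(text)):
        if pmi(text[i - 1], text[i], uni, bi, n_uni, n_bi) >= threshold:
            current += text[i]  # strong association: stay in the same word
        else:
            words.append(current)  # weak association: start a new word
            current = text[i]
    words.append(current)
    return words
```

On a toy corpus where 中国 and 人民 recur as units, `segment("中国人民", uni, bi)` yields `["中国", "人民"]`, since the unseen pair 国人 gets PMI of negative infinity and triggers a boundary. The limitation that motivates NGMI is visible here: a fixed pairwise criterion handles 2-character words well but cannot directly score longer n-character candidates.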
Item Type: Conference Paper
Keywords: Chinese word segmentation, mutual information, n-gram mutual information, boundary confidence
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING (080100) > Natural Language Processing (080107)
Divisions: Past > QUT Faculties & Divisions > Faculty of Science and Technology; Past > Schools > School of Information Technology
Copyright Owner: Copyright 2009 The authors.
Deposited On: 18 Dec 2009 04:10
Last Modified: 27 Feb 2015 01:11