AdaBoost is Consistent
Bartlett, Peter L. & Traskin, Mikhail (2007) AdaBoost is consistent. Journal of Machine Learning Research, 8, pp. 2347-2368.
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^(1-ε) iterations---for sample size n and ε ∈ (0,1)---the sequence of risks of the classifiers it produces approaches the Bayes risk.
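The stopping rule from the abstract can be sketched in code. The following is a minimal illustrative AdaBoost with decision stumps, not the authors' implementation: it runs for floor(n^(1-ε)) rounds for sample size n, with `epsilon` standing in for ε; the function names and the exhaustive stump search are assumptions made for the sketch.

```python
import numpy as np

def adaboost_train(X, y, epsilon=0.5):
    """Illustrative AdaBoost with decision stumps.

    Stops after floor(n^(1 - epsilon)) rounds, the stopping rule from the
    paper's consistency result. Labels y must be in {-1, +1}. This is a
    sketch for exposition, not the authors' code.
    """
    n = len(y)
    t_max = max(1, int(np.floor(n ** (1.0 - epsilon))))  # stopping rule
    w = np.full(n, 1.0 / n)  # example weights, initially uniform
    stumps = []              # list of (feature, threshold, sign, alpha)
    for _ in range(t_max):
        best = None
        # Exhaustive search over stumps: feature, threshold, orientation.
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # reweight examples
        w /= w.sum()
        stumps.append((j, thr, sign, alpha))
    return stumps

def adaboost_predict(stumps, X):
    """Sign of the weighted vote over the trained stumps."""
    score = np.zeros(len(X))
    for j, thr, sign, alpha in stumps:
        score += alpha * sign * np.where(X[:, j] <= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
```

For example, with n = 4 and ε = 0.5 the rule allows floor(4^0.5) = 2 rounds, so the number of rounds grows with the sample but strictly more slowly than n, which is what the consistency argument needs.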
Item Type: Journal Article
Additional Information: The published version is also freely available via the Official URL.
Keywords: boosting, adaboost, consistency, OAVJ
Divisions: Past > QUT Faculties & Divisions > Faculty of Science and Technology; Past > Schools > Mathematical Sciences
Copyright Owner: Copyright 2007 Peter L. Bartlett and Mikhail Traskin.
Deposited On: 18 Aug 2011 13:17
Last Modified: 01 Mar 2012 00:34