AdaBoost is Consistent
Bartlett, Peter L. & Traskin, Mikhail (2007). AdaBoost is consistent. Journal of Machine Learning Research, 8, pp. 2347-2368.
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^(1-ε) iterations (for sample size n and ε ∈ (0,1)), the sequence of risks of the classifiers it produces approaches the Bayes risk.
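The stopping rule in the abstract caps the number of boosting rounds at n^(1-ε) for a training set of size n. A minimal sketch of AdaBoost with decision stumps and this cap might look as follows; the stump search, function names, and the toy clamping constant are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    # Weak learner: axis-aligned decision stump.
    return sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def best_stump(X, y, w):
    # Exhaustively pick the stump minimizing weighted training error.
    n, d = X.shape
    best, best_err = None, np.inf
    for feat in range(d):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                pred = stump_predict(X, feat, thresh, sign)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def adaboost(X, y, epsilon=0.5):
    # Stop after n**(1 - epsilon) rounds, mirroring the paper's
    # stopping strategy for universal consistency.
    n = X.shape[0]
    t_max = max(1, int(np.floor(n ** (1.0 - epsilon))))
    w = np.full(n, 1.0 / n)   # uniform initial weights
    ensemble = []
    for _ in range(t_max):
        (feat, thresh, sign), err = best_stump(X, y, w)
        err = max(err, 1e-12)  # clamp to avoid division by zero
        if err >= 0.5:
            break  # weak learner no better than chance
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thresh, sign)
        # Reweight: up-weight misclassified points, then renormalize.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of the selected stumps.
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(score)
```

With n = 100 and ε = 0.5 the cap is 10 rounds; the consistency result says that, as n grows, classifiers produced under such a cap approach the Bayes risk.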
|Item Type:||Journal Article|
|Additional Information:||The published version is also freely available via the Official URL.|
|Keywords:||boosting, adaboost, consistency, OAVJ|
|Divisions:||Past > QUT Faculties & Divisions > Faculty of Science and Technology; Past > Schools > Mathematical Sciences|
|Copyright Owner:||Copyright 2007 Peter L. Bartlett and Mikhail Traskin.|
|Deposited On:||18 Aug 2011 03:17|
|Last Modified:||29 Feb 2012 14:34|