Open problem: adversarial multiarmed bandits with limited advice
Seldin, Yevgeny, Bartlett, Peter L., & Crammer, Koby (2013) Open problem: adversarial multiarmed bandits with limited advice. In Conference on Learning Theory (COLT 2013), June 12-14, 2013, Princeton, NJ.
Adversarial multiarmed bandits with expert advice is one of the fundamental problems in studying the exploration-exploitation trade-off. It is known that if we observe the advice of all experts on every round we can achieve O(√(KT ln N)) regret, where K is the number of arms, T is the number of game rounds, and N is the number of experts. It is also known that if we observe the advice of just one expert on every round, we can achieve regret of order O(√(NT)). Our open problem is what can be achieved by asking M experts on every round, where 1 < M < N.
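The full-advice O(√(KT ln N)) bound mentioned above is achieved by the EXP4 algorithm. As a point of reference for the setting, here is a minimal sketch of EXP4, assuming experts supply probability distributions over arms and the learner observes only the loss of the arm it plays; the function names and interface (`advice_fn`, `loss_fn`) are illustrative, not from the paper.

```python
import numpy as np

def exp4(advice_fn, loss_fn, K, N, T, seed=0):
    """Minimal EXP4 sketch.

    advice_fn(t) -> (N, K) matrix whose rows are the experts' distributions
    over the K arms at round t; loss_fn(t, a) -> loss in [0, 1] of arm a.
    Maintains exponential weights over experts and updates them with
    importance-weighted loss estimates.
    """
    rng = np.random.default_rng(seed)
    # Learning rate tuned for the O(sqrt(KT ln N)) regret bound.
    eta = np.sqrt(np.log(N) / (T * K))
    w = np.ones(N)                       # expert weights
    total_loss = 0.0
    for t in range(T):
        xi = advice_fn(t)                # (N, K) advice matrix
        q = w / w.sum()                  # distribution over experts
        p = q @ xi                       # mixture distribution over arms
        a = rng.choice(K, p=p)           # play a random arm from the mixture
        loss = loss_fn(t, a)             # only the played arm's loss is seen
        lhat = np.zeros(K)
        lhat[a] = loss / p[a]            # unbiased importance-weighted estimate
        yhat = xi @ lhat                 # estimated loss of each expert
        w *= np.exp(-eta * yhat)         # exponential-weights update
        total_loss += loss
    return total_loss
```

The open problem concerns the regime where only M < N of the rows of the advice matrix can be queried per round, so the update above can no longer be formed for all experts.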
Item Type: Conference Paper
Divisions: Current > QUT Faculties and Divisions > Science & Engineering Faculty
Copyright Owner: Copyright 2013 The Author(s)