Small Loss Bounds for Online Learning with Partial Information
Publication Type:
Article; Early Access
Authors:
Lykouris, Thodoris; Sridharan, Karthik; Tardos, Eva
Affiliations:
Massachusetts Institute of Technology (MIT); Cornell University
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN/ISBN:
0364-765X
DOI:
10.1287/moor.2021.1204
Publication Date:
2022
Keywords:
Regret
algorithms
Abstract:
We consider the problem of adversarial (nonstochastic) online learning with partial-information feedback, in which, at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems in which the learner observes as feedback only the losses of a subset of the actions that includes the selected action. When the losses of actions are nonnegative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so-called small-loss $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e., utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications, such as combinatorial semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), and learning with slowly changing (shifting) comparators. In the special case of multi-armed bandit and combinatorial semi-bandit problems, we provide optimal small-loss, high-probability regret guarantees of $\tilde{O}(\sqrt{dL^{\star}})$, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for multi-armed bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.
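To make the partial-information protocol in the abstract concrete, the sketch below shows the classic Exp3 algorithm in Python for the multi-armed bandit special case, where the feedback graph contains only self-loops and the learner observes just the loss of the action it played. This is an illustrative baseline, not the paper's algorithm: Exp3's standard guarantee scales with the horizon $T$ rather than with the best action's loss $L^{\star}$, which is exactly the gap the small-loss bounds close. The loss matrix `losses` and step size `eta` are hypothetical inputs chosen for the example.

```python
import numpy as np

def exp3(losses, eta, seed=0):
    """Run Exp3 on a (T, d) matrix of per-round losses in [0, 1].

    Partial information: at round t the learner observes only
    losses[t, action] for the single action it played.
    """
    T, d = losses.shape
    weights = np.ones(d)
    total_loss = 0.0
    rng = np.random.default_rng(seed)
    for t in range(T):
        probs = weights / weights.sum()
        action = rng.choice(d, p=probs)   # sample one action to play
        loss = losses[t, action]          # the only feedback observed
        total_loss += loss
        estimate = loss / probs[action]   # unbiased importance-weighted estimate
        weights[action] *= np.exp(-eta * estimate)  # multiplicative update
        weights /= weights.max()          # renormalize to avoid underflow
    return total_loss

if __name__ == "__main__":
    # Synthetic adversary: random losses, for illustration only.
    rng = np.random.default_rng(1)
    L = rng.random((2000, 5))
    print("learner loss:", exp3(L, eta=0.05))
    print("best action :", L.sum(axis=0).min())  # L* for this loss sequence
```

Under richer feedback graphs, the same loop would additionally update every action adjacent to the played one, which is what lets the regret depend on the independence number $\alpha$ rather than on $d$.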