Zeroth-Order Learning in Continuous Games via Residual Pseudogradient Estimates
Type:
Article
Authors:
Huang, Yuanhanqing; Hu, Jianghai
Affiliations:
Purdue University System; Purdue University
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2024.3479874
Publication date:
2025
Pages:
2258-2273
Keywords:
Games
Convergence
Optimization
Mirrors
Approximation algorithms
Linear programming
Heuristic algorithms
Vectors
Decision making
Wireless communication
Bandit methods
Game theory
Iterative learning control
Multi-agent systems
Optimization methods
Variational inequality
Abstract:
A variety of practical problems can be modeled as decision-making processes in multiplayer games, where a group of self-interested players each aim to optimize a local objective that depends on the actions taken by the others. The local gradient information of each player, essential for implementing algorithms that find game solutions, is all too often unavailable. In this article, we focus on designing solution algorithms for multiplayer games using bandit feedback, i.e., the only feedback at each player's disposal is the realized objective values. To tackle the large variance of existing bandit learning algorithms that make a single oracle call per iteration, we propose two algorithms that integrate the residual feedback scheme into single-call extragradient methods. We then show that the actual sequence of play converges almost surely to a critical point if the game is pseudomonotone plus, and we characterize the convergence rate to the critical point when the game is strongly pseudomonotone. The ergodic convergence rates of the generated sequences in monotone games are also investigated as a supplement. Finally, the validity of the proposed algorithms is verified via numerical examples.
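The residual feedback idea mentioned in the abstract — reusing the previous perturbed objective evaluation so that each round needs only one new oracle call, while the estimate's magnitude shrinks as play approaches a solution — can be illustrated on a single-player stand-in. The following is a minimal sketch under stated assumptions: the quadratic objective, step size, smoothing radius, and iteration count are all illustrative choices, and it omits the single-call extragradient structure and multiplayer coupling of the paper's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in local objective (illustrative assumption): a strongly
    # convex quadratic with minimizer at the all-ones vector.
    return float(np.dot(x - 1.0, x - 1.0))

d = 2        # action dimension
delta = 0.1  # smoothing radius (illustrative)
eta = 0.01   # step size (illustrative)
x = np.zeros(d)

# Residual feedback: the pseudogradient estimate at round t is
#   (d / delta) * (f(x_t + delta*u_t) - f(x_{t-1} + delta*u_{t-1})) * u_t,
# so only ONE new bandit evaluation is made per round; the other
# value is carried over from the previous round.
u_prev = rng.standard_normal(d)
u_prev /= np.linalg.norm(u_prev)        # uniform direction on the sphere
f_prev = objective(x + delta * u_prev)  # initial oracle call

for _ in range(3000):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    f_curr = objective(x + delta * u)   # the single oracle call this round
    grad_est = (d / delta) * (f_curr - f_prev) * u
    x = x - eta * grad_est              # plain gradient step for illustration
    f_prev = f_curr

print(x, objective(x))
```

Because the estimate is built from the difference of two successive realized objective values, it vanishes as the iterates (and perturbations) settle down, which is the variance-reduction effect motivating the residual scheme over standard one-point estimates.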