Riemannian Online Optimistic Algorithms With Dynamic Regret

Publication Type:
Article
Authors:
Wang, Xi; Yuan, Deming; Hong, Yiguang; Hu, Zihao; Wang, Lei; Shi, Guodong
Affiliations:
University of New South Wales Sydney; Nanjing University of Science & Technology; Tongji University; Hong Kong University of Science & Technology; Zhejiang University; University of Sydney
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN/ISBN:
0018-9286
DOI:
10.1109/TAC.2025.3557363
Publication Date:
2025
Pages:
6481-6496
Keywords:
Heuristic algorithms; Manifolds; Convex functions; Optimization; Lower bound; Robots; Mirrors; Upper bound; Training; Machine learning; Dynamic regret; Online learning; Optimistic algorithm; Riemannian manifold
Abstract:
In this article, we consider Riemannian online convex optimization with dynamic regret, which involves minimizing the cumulative loss difference between a learner's decisions and a sequence of adaptive decisions across dynamic environments over time on Riemannian manifolds. First, we propose Riemannian online optimistic gradient descent (R-OOGD), which extends classical optimistic algorithms to Riemannian manifolds and requires only one gradient query per round. We then propose Riemannian adaptive online optimistic gradient descent (R-AOOGD) by incorporating the meta-expert framework into R-OOGD. We analyze the dynamic regret of R-OOGD and R-AOOGD in terms of the regularity of the sequence of cost functions and comparators. The dynamic regret bounds match the results in Euclidean space up to a scale adjustment of $O\!\big(\tfrac{1+\kappa D^{2}}{\sqrt{1-KD^{2}}}\big)$, which depends on the sectional curvature lower bound $\kappa$, the sectional curvature upper bound $K$, and the feasible set diameter $D$. Finally, we provide numerical experiments to illustrate the performance of the proposed algorithms.
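For context, dynamic regret measures the learner against a time-varying comparator sequence $u_1,\dots,u_T$: $\mathrm{Reg}_T^{d} = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t)$. The sketch below illustrates a generic optimistic online gradient update on a simple Riemannian manifold (the unit sphere), using the classical "last gradient" as the hint. It is a minimal illustration under these assumptions, not the paper's R-OOGD or its experiments; the function names (`exp_map`, `log_map`, `transport`, `optimistic_ogd_sphere`), the step size, and the synthetic tracking losses are all illustrative choices.

```python
import numpy as np

def proj_tangent(x, g):
    """Project a Euclidean vector onto the tangent space of the sphere at x."""
    return g - np.dot(x, g) * x

def exp_map(x, v):
    """Exponential map on the unit sphere S^{n-1} (v is tangent at x)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return x
    return np.cos(t) * x + np.sin(t) * v / t

def log_map(x, y):
    """Inverse exponential (log) map on the unit sphere."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    w = y - c * x
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return np.zeros_like(x)
    return np.arccos(c) * w / nw

def transport(x, y, u):
    """Parallel transport of tangent vector u from x to y along the geodesic."""
    v = log_map(x, y)
    t = np.linalg.norm(v)
    if t < 1e-12:
        return u
    e = v / t
    a = np.dot(e, u)
    return u + (np.cos(t) - 1.0) * a * e - np.sin(t) * a * x

def optimistic_ogd_sphere(grad_fns, x0, eta=0.1):
    """Run T rounds of a generic optimistic gradient method on the sphere.

    grad_fns[t](x) returns the Euclidean gradient of the round-t loss at x.
    The hint for round t is the previous gradient, parallel-transported to the
    current base iterate (the classical 'last gradient' form of optimism).
    """
    y = x0 / np.linalg.norm(x0)          # base iterate on the sphere
    hint = np.zeros_like(y)              # tangent hint M_t at y
    plays = []
    for grad_fn in grad_fns:
        x = exp_map(y, -eta * hint)                      # optimistic play x_t
        g = proj_tangent(x, grad_fn(x))                  # Riemannian gradient at x_t
        y_new = exp_map(y, -eta * transport(x, y, g))    # descent step on base iterate
        hint = transport(x, y_new, g)                    # carry g_t to the new base point
        y = y_new
        plays.append(x)
    return plays

# Toy usage: track a slowly moving target z_t with losses f_t(x) = ||x - z_t||^2 / 2.
targets = [np.array([np.cos(0.01 * t), np.sin(0.01 * t), 0.0]) for t in range(200)]
grads = [(lambda x, z=z: x - z) for z in targets]        # Euclidean gradients of f_t
xs = optimistic_ogd_sphere(grads, np.array([0.0, 0.0, 1.0]), eta=0.2)
print("final distance to target:", np.linalg.norm(xs[-1] - targets[-1]))
```

On the sphere, the exponential map, its inverse, and parallel transport have closed forms, which is what makes this toy version fully runnable; on a general manifold these operations would typically come from a geometry library or be replaced by retractions and vector transports.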