Stochastic Optimization with Decision-Dependent Distributions
Publication Type:
Article
Authors:
Drusvyatskiy, Dmitriy; Xiao, Lin
Affiliations:
University of Washington; University of Washington Seattle; Facebook Inc
Journal:
Mathematics of Operations Research
ISSN:
0364-765X
DOI:
10.1287/moor.2022.1287
Publication Year:
2023
Pages:
954-998
Keywords:
approximation algorithms
composite optimization
programs
online
Abstract:
Stochastic optimization problems often involve data distributions that change in reaction to the decision variables. This is the case, for example, when members of a population respond to a deployed classifier by manipulating their features so as to improve the likelihood of being positively labeled. Recent works on performative prediction identify an intriguing solution concept for such problems: find the decision that is optimal with respect to the static distribution that the decision itself induces. Continuing this line of work, we show that, in the strongly convex setting, typical stochastic algorithms, originally designed for static problems, can be applied directly to find such equilibria with little loss in efficiency. The reason is simple to explain: the main consequence of the distributional shift is that it corrupts the algorithms with a bias that decays linearly with the distance to the solution. Using this perspective, we obtain convergence guarantees for popular algorithms, such as stochastic gradient, clipped gradient, proximal point, and dual averaging methods, along with their accelerated and proximal variants. In realistic applications, deploying a decision rule is often much more expensive than sampling. We show how to modify the aforementioned algorithms so that they maintain their sample efficiency while performing only logarithmically many deployments.
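To make the abstract's bias-decay mechanism concrete, here is a minimal, hypothetical sketch (not taken from the paper) of repeated stochastic gradient on a one-dimensional quadratic problem whose data distribution shifts linearly with the deployed decision; the names mu0, eps, and sample, and all constants, are illustrative assumptions. The second loop sketches the lazy-deployment idea: the induced distribution is frozen over geometrically growing epochs, so only logarithmically many deployments are performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decision-dependent model (illustrative, not from the paper):
# loss l(x; z) = 0.5 * (x - z)^2, with samples z ~ D(x) = N(mu0 + eps * x, 1).
# The stable point solves x = mu0 + eps * x, so x_star = mu0 / (1 - eps)
# whenever eps < 1 (the distributional shift is a contraction).
mu0, eps = 1.0, 0.5
x_star = mu0 / (1.0 - eps)


def sample(x, n=1):
    """Draw n samples from the distribution induced by deploying decision x."""
    return rng.normal(mu0 + eps * x, 1.0, size=n)


# Greedy deployment: redeploy (i.e., resample from D(x_k)) at every step.
x = 0.0
for k in range(1, 5001):
    z = sample(x)[0]
    x -= (1.0 / k) * (x - z)  # stochastic gradient of 0.5 * (x - z)^2
print(f"greedy-deploy estimate: {x:.3f}  (stable point: {x_star:.3f})")

# Lazy deployment: freeze D(x) over geometrically growing epochs, so the
# number of deployments is only logarithmic in the total sample count.
x, t = 0.0, 1
for epoch in range(13):  # 13 deployments cover 2**13 - 1 = 8191 samples
    for z in sample(x, n=2 ** epoch):  # all drawn from the frozen D(x)
        x -= (1.0 / t) * (x - z)
        t += 1
print(f"lazy-deploy estimate:   {x:.3f}  (stable point: {x_star:.3f})")
```

In this toy model the expected greedy gradient at x is (1 - eps) * x - mu0, which vanishes exactly at the stable point, and its gap from the static gradient taken under the stable distribution is eps * (x - x_star): a bias decaying linearly with the distance to the solution, matching the mechanism described in the abstract.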
Source URL: