Running Errands in Time: Approximation Algorithms for Stochastic Orienteering
Publication Type:
Article
Authors:
Gupta, Anupam; Krishnaswamy, Ravishankar; Nagarajan, Viswanath; Ravi, R.
Affiliations:
Carnegie Mellon University; Princeton University; International Business Machines (IBM); IBM USA; Carnegie Mellon University
Journal:
MATHEMATICS OF OPERATIONS RESEARCH
ISSN:
0364-765X
DOI:
10.1287/moor.2014.0656
Publication Date:
2015
Pages:
56-79
Keywords:
Bounds; LP
Abstract:
In the stochastic orienteering problem, we are given a finite metric space in which each node holds a job with a deterministic reward and a random processing time. The processing time distributions are known and independent across nodes; however, the actual processing time of a job is revealed only once the job has been completely processed. The objective is to compute a nonanticipatory policy for visiting nodes (and running the corresponding jobs) that maximizes the total expected reward, subject to the total distance traveled plus the total processing time being at most a given budget B. This problem combines aspects of the stochastic knapsack problem with uncertain item sizes and of the deterministic orienteering problem. In this paper, we consider both nonadaptive and adaptive policies for stochastic orienteering. We present a constant-factor approximation algorithm for the nonadaptive version and an O(log log B)-approximation algorithm for the adaptive version. We extend both results to directed metrics and to a more general sequence orienteering problem. Finally, we address the stochastic orienteering problem in which the node rewards are also random and possibly correlated with the processing times, and we obtain an O(log n log B)-approximation algorithm, where n is the number of nodes in the metric. All our results for adaptive policies also bound the corresponding adaptivity gaps.
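To make the budget accounting in the abstract concrete, the following minimal sketch estimates, by Monte Carlo simulation, the expected reward of a fixed (nonadaptive) visit order under the distance-plus-processing-time budget B. All identifiers (Node, simulate_nonadaptive, the toy line-metric instance) are illustrative assumptions, not from the paper, and the sketch is not the authors' approximation algorithm.

# Minimal sketch (hypothetical names, not from the paper): Monte Carlo estimate
# of the expected reward of a fixed, nonadaptive visit order in the stochastic
# orienteering setting described in the abstract.
import random
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Node:
    reward: float                     # deterministic reward, earned only if the job completes within budget
    sample_size: Callable[[], float]  # draws a processing time from the node's known distribution

def simulate_nonadaptive(order: Sequence[int],
                         nodes: Sequence[Node],
                         dist: Callable[[int, int], float],
                         budget: float,
                         start: int,
                         trials: int = 10_000) -> float:
    """Estimate the expected reward of visiting `order` from `start`, where the
    budget is consumed by total distance traveled plus total processing time."""
    total = 0.0
    for _ in range(trials):
        remaining, here, collected = budget, start, 0.0
        for v in order:
            remaining -= dist(here, v)           # travel cost to reach v
            if remaining <= 0:
                break
            remaining -= nodes[v].sample_size()  # size is revealed only by running the job
            if remaining < 0:                    # budget exhausted mid-job: no reward, stop
                break
            collected += nodes[v].reward
            here = v
        total += collected
    return total / trials

if __name__ == "__main__":
    # Toy instance: four points on a line, exponentially distributed processing times.
    pts = [0.0, 1.0, 2.5, 4.0]
    nodes = [Node(r, lambda m=m: random.expovariate(1.0 / m))
             for r, m in [(0.0, 0.1), (3.0, 1.0), (5.0, 2.0), (8.0, 3.0)]]
    dist = lambda u, v: abs(pts[u] - pts[v])
    print(simulate_nonadaptive([1, 2, 3], nodes, dist, budget=10.0, start=0))

An adaptive policy could instead choose the next node based on the processing times observed so far; the abstract's results bound how much such adaptivity can gain over a fixed order (the adaptivity gap).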
Source URL: