Byzantine-Resilient Distributed Hypothesis Testing With Time-Varying Network Topology
Document type:
Article
Authors:
Wu, Bo; Carr, Steven; Bharadwaj, Suda; Xu, Zhe; Topcu, Ufuk
Affiliations:
University of Texas System; University of Texas Austin; Arizona State University; Arizona State University-Tempe
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2021.3102474
Publication year:
2022
Pages:
3243-3258
Keywords:
Testing
Network topology
Trajectory
Convergence
Mobile agents
Task analysis
Surveillance
Byzantine attacks
Distributed hypothesis testing
Multiagent system
Abstract:
In this article, we study the problem of distributed hypothesis testing over a network of mobile agents with limited communication and sensing ranges that aim to collaboratively infer the true hypothesis. In particular, we consider a scenario in which an unknown subset of compromised agents may deliberately share altered information to undermine the team objective. We propose two distributed algorithms in which each agent maintains and updates two sets of beliefs (i.e., probability distributions over the hypotheses), namely local and actual beliefs (LB and AB, respectively, for brevity). In both algorithms, at every time step, each agent shares its AB with other agents within its communication range and makes a local observation to update its LB. Both algorithms then use the shared information to update the AB under certain conditions: one requires receiving a certain number of shared ABs at each time instant, whereas the other accumulates shared ABs over time and updates once their number exceeds a prescribed threshold. Otherwise, both algorithms rely on the agent's current LB and AB to compute the new AB. We prove under mild assumptions that the AB of every noncompromised agent converges almost surely to the true hypothesis, without requiring connectivity in the underlying time-varying network topology. Using a simulation of a team of unmanned aerial vehicles aiming to classify adversarial agents among themselves, we illustrate and compare the proposed algorithms. Finally, we show experimentally that the second algorithm consistently outperforms the first in terms of convergence speed.
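To make the two-belief update scheme described above concrete, the following is a minimal Python sketch of one agent's step. It assumes a Bayesian update for the LB and a normalized geometric-mean fusion of received ABs gated by a neighbor-count threshold; the function names, the fusion rule, the fallback form, and the threshold parameter `min_neighbors` are illustrative assumptions, not the paper's exact update rules or guarantees.

```python
import numpy as np

def bayesian_update(lb, likelihoods):
    """Update the local belief (LB) with a new observation's likelihoods."""
    posterior = lb * likelihoods
    return posterior / posterior.sum()

def update_actual_belief(lb, ab, received_abs, min_neighbors):
    """Update the actual belief (AB).

    If enough ABs were received at this step (first algorithm), fuse them
    with the agent's LB via a normalized geometric mean (an assumed rule);
    otherwise fall back to a combination of the agent's own LB and AB.
    The second algorithm would instead accumulate received ABs across
    steps and fuse once their count exceeds the threshold.
    """
    if len(received_abs) >= min_neighbors:
        stacked = np.vstack(received_abs + [lb])
        fused = np.exp(np.mean(np.log(stacked + 1e-12), axis=0))
    else:
        fused = np.sqrt(lb * ab)  # assumed fallback mixing LB and AB
    return fused / fused.sum()

# Example with 3 hypotheses and 2 neighbors in communication range.
lb = np.ones(3) / 3
ab = np.ones(3) / 3
likelihoods = np.array([0.7, 0.2, 0.1])  # P(observation | hypothesis)
received = [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.4, 0.1])]

lb = bayesian_update(lb, likelihoods)
ab = update_actual_belief(lb, ab, received, min_neighbors=2)
print(lb, ab)
```

In this sketch, the threshold on received ABs is what limits the influence of compromised neighbors; when too few ABs arrive, the agent falls back on its own beliefs rather than fusing potentially altered information.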