Stabilization of Probabilistic Boolean Networks via State-Flipped Control and Reinforcement Learning
Publication Type:
Article
Authors:
Liu, Yang; Liu, Zejiao; Yerudkar, Amol; Del Vecchio, Carmen
Affiliations:
Zhejiang Normal University; East China University of Science & Technology; University of Sannio
Journal:
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
ISSN:
0018-9286
DOI:
10.1109/TAC.2023.3327618
Publication Date:
2024
Pages:
1858-1865
Keywords:
Q-learning (QL)
finite-time global stabilization
probabilistic Boolean networks (PBNs)
semitensor product (STP)
state-flipped control
Abstract:
In this article, the state-flipped control technique is explored to investigate the stabilization of probabilistic Boolean networks (PBNs). State-flipped control changes the values of selected nodes from 0 to 1 (or from 1 to 0). The concepts of fixed points, reachable sets, and finite-time global stabilization of PBNs under state-flipped control are proposed. Several necessary and sufficient conditions for global stabilization are then derived based on the reachable sets of a given state. Furthermore, a model-free reinforcement learning algorithm, namely Q-learning (QL), is presented to design, for any state, a flip sequence that steers that state to a given destination state, thereby achieving finite-time global stabilization via state-flipped control. In addition, a procedure for finding the minimum flip set is proposed based on the semitensor product (STP) and QL methods. Finally, the viability of the results is demonstrated on a 12-gene hepatocellular cancer cell tumor network.
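The QL-based design described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the 2-node PBN, its update rules and selection probabilities, and the reward shaping are toy stand-ins (far smaller than the paper's 12-gene network), used only to show the idea of tabular Q-learning over flip actions, not the authors' exact algorithm.

```python
import itertools
import random

# Hypothetical 2-node PBN: each node draws one of its Boolean update
# rules at random each step, with the given probabilities. The state
# (1, 1) happens to be a fixed point under every rule choice.
FUNCS = {
    0: [(lambda x: x[0] or x[1], 0.7), (lambda x: x[1], 0.3)],
    1: [(lambda x: x[0] and x[1], 0.6), (lambda x: x[0], 0.4)],
}
TARGET = (1, 1)
# Flip actions: subsets of nodes whose values are toggled before the update.
ACTIONS = [(), (0,), (1,), (0, 1)]

def step(state, flip, rng):
    """Apply a flip action, then one random PBN update."""
    flipped = tuple(s ^ (i in flip) for i, s in enumerate(state))
    nxt = []
    for node in sorted(FUNCS):
        rules, probs = zip(*FUNCS[node])
        f = rng.choices(rules, weights=probs)[0]
        nxt.append(int(f(flipped)))
    return tuple(nxt)

def q_learning(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Model-free tabular QL: learn which flip action to take in each state."""
    rng = random.Random(seed)
    states = list(itertools.product((0, 1), repeat=2))
    Q = {(s, a): 0.0 for s in states for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(20):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda b: Q[(s, b)])
            s2 = step(s, ACTIONS[a], rng)
            r = 1.0 if s2 == TARGET else -0.1
            best_next = max(Q[(s2, b)] for b in range(len(ACTIONS)))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if s == TARGET:
                break
    return Q
```

The learned greedy policy yields a flip sequence from any initial state to the target fixed point, mirroring the finite-time global stabilization objective; the paper's minimum-flip-set procedure (via STP and QL) is not reproduced here.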