Abstract

This paper considers the design of fully distributed Nash equilibrium seeking strategies for networked games. To develop fully distributed seeking strategies, two adaptive control laws, including a node-based control law and an edge-based control law, are proposed. In the node-based adaptive strategy, each player dynamically adjusts its own weight on its procurable consensus error. Moreover, in the edge-based algorithm, the fully distributed strategy is designed by adding an adaptively adjusted scaling parameter to the weights of the edges of the communication graph. With the adaptive control laws, the proposed methods are free of any centralized control gains, i.e., the players can update their actions without knowing any centralized information. By utilizing LaSalle’s invariance principle, it is shown that the Nash equilibrium is globally asymptotically stable under both strategies. In addition, we further show that the edge-based method can be easily adapted to accommodate time-varying communication conditions, in which the communication network switches among a set of undirected and connected graphs. Finally, numerical examples are given to illustrate the effectiveness of the proposed methods.

Introduction

Distributed Nash equilibrium seeking for networked games has become a thriving research topic in recent years. In particular, continuous-time consensus-based algorithms have been shown to be powerful and effective methods for solving networked games with incomplete information. For example, singularly perturbed dynamics were first designed in Ye and Hu (2017b) for aggregative games, which were adapted for general networked games in Ye and Hu, 2017a, Ye and Hu, 2018 and N-cluster games in Ye et al., 2018, Ye et al., 2019, Ye et al., 2020. In the singularly perturbed algorithms, a singular perturbation parameter was included to ensure that the consensus part would be faster than the optimization part. In this way, the consensus-based algorithms actually operate on two time scales, and they are advantageous due to their simplicity and effectiveness. However, one drawback of this class of methods is that the explicit quantification of the control gain depends on the players’ objective functions, the topologies of the communication graphs and the number of players in the game. Similarly, the method in Deng and Nian (2019) requires the determination of not only the singular perturbation parameter but also some other shared control gains. In Liang, Yi, and Hong (2017), the authors adopted several coordinated control gains among all the players’ seeking strategies, whose quantification requires the number of players, the players’ gradient information and the constraint information. In Gadjov and Pavel (2019), a passivity perspective was proposed for distributed Nash equilibrium seeking, with both a single-time scale method and a two-time scale method established. The single-time scale method requires that the second smallest eigenvalue of the Laplacian matrix associated with the communication graph be larger than some value determined by the players’ objective functions. Moreover, the two-time scale method requires the quantification of a centralized singular perturbation parameter for its implementation. In Ye (0000), distributed robust Nash equilibrium seeking algorithms were proposed by designing extended-state observers for disturbance and uncertainty estimation. The quantification of the singular perturbation parameter and some other observer gains in the seeking strategy requires centralized graph and game information. In Li and Ding (2019), a fixed-time Nash equilibrium seeking algorithm was given with the control gains therein depending on the number of players in the game. In Lu, Jing, and Wang (2019), a decaying control gain, which is not integrable but square integrable, was shared among all the players’ control strategies, indicating that the players need to coordinate on it. Based on an extremum seeker, a fast Nash equilibrium seeking algorithm without steady-state oscillation was proposed in Zahedi, Arefi, and Khayatian (2019), where some parameters are shared among the players. In fact, the authors in Koshal, Nedic, and Shanbhag (2016) showed that in the discrete-time scenario, exact convergence to the Nash equilibrium can be obtained by utilizing the method proposed therein if a coordinated stepsize is adopted, while only convergence to a neighborhood of the Nash equilibrium can be concluded if the stepsizes are uncoordinated.
In short, most existing continuous-time consensus-based Nash equilibrium seeking strategies require the determination of coordinated control gains, which possibly depend on centralized information, to ensure the convergence of the closed-loop system. In this regard, these methods are not fully distributed.

It should be noted that in many practical situations, centralized information can hardly be obtained by all the players in games over distributed communication networks. Moreover, as the choices of the control gains depend on the Lipschitz constants of the gradients associated with the players as well as the network topology, the control gains might need to be re-quantified to ensure the stability of the new Nash equilibrium if players join or leave the game. To avoid the usage of centralized control gains, an interesting fully distributed Nash equilibrium seeking algorithm was established in De Persis and Grammatico (2019) based on passivity properties and averaging integral controllers by tuning the weights of the fixed undirected communication graph online, which was extended to the multi-integrator case in Bianchi and Grammatico (0000) by utilizing two-hop communication among the players. Motivated by the above observations, this paper intends to design alternative fully distributed Nash equilibrium seeking strategies for games under both fixed and switching communication topologies without utilizing two-hop communication among the players.

To establish fully distributed Nash equilibrium seeking strategies, the main idea of this paper is to adjust the control gains adaptively. Fully distributed consensus via the design of adaptive control gains has been widely studied in the existing literature. For example, in Li, Ren, Liu, and Fu (2013) and Li, Wen, Duan, and Ren (2015), distributed consensus of linear or Lipschitz nonlinear multi-agent systems was explored by utilizing adaptive protocols. Consensus of second-order nonlinear multi-agent systems was investigated in Yu, Ren, Zheng, and Chen (2013) by designing distributed control gains, and heterogeneous unknown inertias were further addressed in Mei, Ren, and Chen (2016). In Li, Liu, Buss, and Hirche (2020), two reduced-order adaptive protocols were proposed for fully distributed consensus of linear multi-agent systems. Note that although the proposed methods are motivated by the aforementioned works, which suggest that the usage of centralized control gains might be avoided by adaptively adjusting the parameters, this paper studies distributed Nash equilibrium seeking problems, which are different from the consensus problems considered in those works. In the considered problem, consensus of the players’ estimates on the other players’ actions and the optimization of the players’ objective functions need to be achieved simultaneously, which makes the problem challenging.
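
In the spirit of the adaptive consensus protocols cited above, the following minimal sketch, which is our own illustration under assumed dynamics and not the seeking strategy proposed in this paper, shows the node-based adaptive-gain idea on single-integrator consensus: each agent scales its locally measurable disagreement by a gain that it grows on its own, so no gain bound involving the whole graph or the number of agents has to be computed in advance. The graph, the gain adaptation law and the step size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a node-based adaptive consensus protocol (illustration only,
# not the Nash equilibrium seeking strategy of this paper). Each agent i runs
#   x_i' = -c_i * e_i,   e_i = sum_j a_ij (x_i - x_j),
# and adapts its own gain as c_i' = e_i^2, so no centralized bound on the graph
# or on the number of agents needs to be quantified offline.

A = np.array([[0, 1, 0, 0, 1],   # adjacency matrix of an undirected 5-node ring (assumed)
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
N = A.shape[0]

rng = np.random.default_rng(0)
x = rng.standard_normal(N)       # scalar agent states
c = np.ones(N)                   # adaptive gains, one per node
dt = 1e-3                        # Euler step size (assumed)

for _ in range(20000):
    e = A.sum(axis=1) * x - A @ x    # local consensus error e_i = sum_j a_ij (x_i - x_j)
    x = x - dt * c * e               # node-based adaptive consensus dynamics
    c = c + dt * e**2                # gains grow with the locally measurable disagreement

print("final disagreement:", x.max() - x.min())   # close to zero at consensus
```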

In comparison with the existing works, the main contributions of the paper are summarized as follows.

(1)

A node-based adaptive strategy and an edge-based adaptive strategy are proposed to achieve fully distributed Nash equilibrium seeking in networked games. The node-based adaptive strategy achieves the goal by adaptively updating each player’s weight on its overall procurable consensus error. Moreover, the edge-based adaptive strategy achieves fully distributed Nash equilibrium seeking by adding an adaptively updated scaling parameter to the weights of the edges of the communication graph. Compared with Bianchi and Grammatico (0000) and De Persis and Grammatico (2019), which established their algorithms from a passivity perspective, the methods proposed in this paper provide new insight into the design of fully distributed seeking algorithms. Moreover, a noticeable distinction between the algorithms proposed in this paper and those in Bianchi and Grammatico (0000) and De Persis and Grammatico (2019) is that the algorithms proposed in this paper do not require two-hop communication.

(2)

Based on LaSalle’s invariance principle, it is theoretically shown that the Nash equilibrium is globally asymptotically stable under both seeking strategies, provided that the stated assumptions are satisfied.

(3)

An extension to time-varying communication topologies, in which the communication network switches among a set of undirected and connected graphs, is discussed for the edge-based Nash equilibrium seeking algorithm. It is shown that the edge-based algorithm can be easily adapted to accommodate time-varying communication graphs. This extension is new to the community, as the related existing works only considered fixed communication topologies.

Motivation and problem statement

Consider a game with $N$ players that are equipped with a communication graph. Denote the player set as $\mathcal{V} = \{1, 2, \ldots, N\}$. Correspondingly, the players are labeled from $1$ to $N$. In the considered game, each player $i$ aims to solve $\min_{x_i} f_i(x)$, where $f_i(x)$ and $x_i \in \mathbb{R}^{n_i}$ are the objective function and action of player $i$, respectively, $x = [x_1^T, x_2^T, \ldots, x_N^T]^T$, and $n_i$ is a positive integer. Then, the Nash equilibrium of the game is defined as follows.

Definition 1

An action profile $x^* = (x_i^*, x_{-i}^*)$ is a Nash equilibrium if, for every player $i \in \mathcal{V}$ and every $x_i \in \mathbb{R}^{n_i}$, $f_i(x_i^*, x_{-i}^*) \le f_i(x_i, x_{-i}^*)$, i.e., no player can decrease its objective value by unilaterally deviating from its equilibrium action.

Main results

To achieve fully distributed Nash equilibrium seeking, we suppose that the players are equipped with a communication graph represented by $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, where $\mathcal{V}$ and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ are the vertex set and edge set, respectively. The adjacency matrix associated with $\mathcal{G}$ is defined as $\mathcal{A} = [a_{ij}]$, where $a_{ij} = 1$ if $(j, i) \in \mathcal{E}$ and $a_{ij} = 0$ if $(j, i) \notin \mathcal{E}$. The network is said to be undirected if $a_{ij} = a_{ji}$ for all $i, j \in \mathcal{V}$. Correspondingly, the Laplacian matrix of $\mathcal{G}$ is $L = D - \mathcal{A}$, where $D$ is a diagonal matrix whose $i$th diagonal element is $\sum_{j=1}^{N} a_{ij}$ for $i \in \mathcal{V}$.
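
The following short sketch, included purely as an illustration of these graph quantities and not as part of the proposed algorithms, builds the Laplacian matrix from an assumed adjacency matrix and checks the two properties used in the analysis: the rows of the Laplacian sum to zero, and its second-smallest eigenvalue is positive when the undirected graph is connected.

```python
import numpy as np

# Illustration of the graph quantities defined above (not part of the paper's algorithm).
# Build the Laplacian L = D - A for a small undirected graph and check the properties
# used in the analysis: zero row sums and a positive second-smallest eigenvalue
# whenever the graph is connected.

A = np.array([[0, 1, 1, 0],      # adjacency matrix of an assumed 4-node undirected graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))       # degree matrix: D_ii = sum_j a_ij
L = D - A                        # graph Laplacian

eigenvalues = np.sort(np.linalg.eigvalsh(L))
print("row sums of L:", L.sum(axis=1))                  # all zeros, i.e., L @ 1 = 0
print("second-smallest eigenvalue:", eigenvalues[1])    # > 0 since the graph is connected
```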

Discussions

In this section, we provide some discussions on the proposed algorithms.

A numerical example

In this section, we consider the connectivity control game for a network of 5 mobile sensors considered in Ye (0000). In the considered game, $x_i \in \mathbb{R}^2$. For notational convenience, denote $x_i$ as $x_i = [x_{i1}, x_{i2}]^T$. Moreover, the players’ objective functions are defined as $f_1(x) = x_1^T x_1 + x_{11} + x_{12} + 1 + \|x_1 - x_2\|^2$, $f_2(x) = 2x_2^T x_2 + 2x_{21} + 2x_{22} + 2 + \|x_2 - x_3\|^2$, $f_3(x) = 3x_3^T x_3 + 3x_{31} + 3x_{32} + 3 + \|x_3 - x_2\|^2$, $f_4(x) = 4x_4^T x_4 + 4x_{41} + 4x_{42} + 4 + \|x_4 - x_2\|^2 + \|x_4 - x_5\|^2$, and $f_5(x) = 5x_5^T x_5 + 5x_{51} + 5x_{52} + 5 + \|x_5 - x_1\|^2$, by which the Nash equilibrium is unique and is given by $x_{ij}^* = -0.5$ for all $i \in \{1, \ldots, 5\}$ and $j \in \{1, 2\}$.
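
As a sanity check on the reported equilibrium, the sketch below, which is our own verification code and not the seeking algorithm of the paper, evaluates each player’s partial gradient at $x_{ij} = -0.5$ and confirms that it vanishes; it then runs a centralized, full-information pseudo-gradient descent, which settles at the same point for this game. The step size and initialization are assumptions.

```python
import numpy as np

# Verification script for the 5-player connectivity control game above
# (our own sanity check, not the distributed seeking algorithm of the paper).

def partial_grad(i, x):
    """Gradient of f_i with respect to player i's own action x_i (each x_i in R^2)."""
    x1, x2, x3, x4, x5 = x
    if i == 0:
        return 2 * x1 + 1 + 2 * (x1 - x2)
    if i == 1:
        return 4 * x2 + 2 + 2 * (x2 - x3)
    if i == 2:
        return 6 * x3 + 3 + 2 * (x3 - x2)
    if i == 3:
        return 8 * x4 + 4 + 2 * (x4 - x2) + 2 * (x4 - x5)
    return 10 * x5 + 5 + 2 * (x5 - x1)

# 1) The pseudo-gradient vanishes at the reported Nash equilibrium x_ij = -0.5.
x_star = [np.full(2, -0.5) for _ in range(5)]
print([partial_grad(i, x_star) for i in range(5)])     # five zero vectors

# 2) A centralized, full-information pseudo-gradient descent converges to the same point.
rng = np.random.default_rng(1)
x = [rng.standard_normal(2) for _ in range(5)]
for _ in range(5000):
    grads = [partial_grad(i, x) for i in range(5)]
    x = [xi - 0.01 * gi for xi, gi in zip(x, grads)]
print(np.round(np.concatenate(x), 3))                   # approximately -0.5 in every entry
```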

Conclusions

This paper develops fully distributed Nash equilibrium seeking algorithms for networked games. A node-based adaptive algorithm and an edge-based adaptive algorithm are proposed. The node-based adaptive algorithm achieves the distributed strategy design by adaptively adjusting the weight on each player’s accessible consensus error. The edge-based adaptive algorithm dynamically adjusts the weights on the edges of the communication graph to achieve fully distributed Nash equilibrium seeking. Both algorithms are shown to render the Nash equilibrium globally asymptotically stable, and the edge-based algorithm is further adapted to accommodate switching communication topologies.

References (23)

  • Zahedi et al. (2019). Fast convergence to Nash equilibria without steady-state oscillation. Systems & Control Letters.

  • Yu, W., et al. (2013). Distributed control gains design for consensus in multi-agent systems with second-order nonlinear dynamics. Automatica.

  • Ye, M., et al. (2020). An extremum seeking-based approach for Nash equilibrium seeking in N-cluster noncooperative games. Automatica.

  • Ye, M., et al. (2018). Nash equilibrium seeking for N-coalition non-cooperative games. Automatica.

  • Liang, S., et al. (2017). Distributed Nash equilibrium seeking for aggregative games with coupled constraints. Automatica.

  • De Persis, C., et al. (2019). Distributed averaging integral Nash equilibrium seeking on networks. Automatica.

  • Bianchi, M., & Grammatico, S. (0000). Continuous-time fully distributed generalized Nash equilibrium seeking for...

  • Deng, Z., et al. (2019). Distributed generalized Nash equilibrium seeking algorithm design for aggregative games over weight-balanced digraphs. IEEE Transactions on Neural Networks and Learning Systems.

  • Gadjov, D., et al. (2019). A passivity-based approach to Nash equilibrium seeking over networks. IEEE Transactions on Automatic Control.

  • Khalil, H. (2002). Nonlinear Systems.

  • Koshal, J., et al. (2016). Distributed algorithms for aggregative games on graphs. Operations Research.

Maojiao Ye received the B.Eng. degree in Automation from the University of Electronic Science and Technology of China, Sichuan, China, in 2012 and the Ph.D. degree from Nanyang Technological University, Singapore, in 2016. She is currently a Professor in the School of Automation, Nanjing University of Science and Technology. Prior to her current position, she was a research fellow in the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore from 2016 to 2017.

Dr. Ye was a recipient of Guan Zhao-Zhi Award in the 36th Chinese Control Conference and the Best Paper Award in the 15th IEEE International Conference on Control and Automation. Her research interests include distributed optimization, games and their applications.

Guoqiang Hu received the B.Eng. degree in Automation from the University of Science and Technology of China, Hefei, China, in 2002, the M.Phil. degree in Automation and Computer-Aided Engineering from the Chinese University of Hong Kong in 2004, and the Ph.D. degree in Mechanical Engineering from the University of Florida, Gainesville, FL, USA, in 2007. He joined the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore in 2011, and is currently an Associate Professor with tenure and the Director of the Centre for System Intelligence and Efficiency (EXQUISITUS). He was an Assistant Professor at Kansas State University, Manhattan, KS, USA, from 2008 to 2011.

Dr. Hu was a recipient of the Best Paper in Automation Award in the 14th IEEE International Conference on Information and Automation, and a recipient of the Best Paper Award (Guan Zhao-Zhi Award) in the 36th Chinese Control Conference. He serves as Subject Editor for International Journal of Robust and Nonlinear Control, Associate Editor for IEEE Transactions on Automatic Control, Associate Editor for IEEE Transactions on Control Systems Technology, Associate Editor for IEEE Transactions on Automation Science and Engineering, and Technical Editor for IEEE/ASME Transactions on Mechatronics. His research interests include distributed control and optimization with applications to energy and robotic systems.

© 2021 Elsevier Ltd. All rights reserved.
