How a network reaches a goal (a consensus value) can be as important as reaching it. While prior methods focus on rapidly converging to a new consensus value, maintaining cohesion during the transition between consensus values, or during tracking, remains challenging and has not been addressed. The main contributions of this work address the problem of maintaining cohesion by: (i) proposing a new delayed self-reinforcement (DSR) approach; (ii) extending it to agents with higher-order, heterogeneous dynamics; and (iii) developing stability conditions for the DSR-based method. With DSR, each agent uses current and past information from its neighbors to infer the overall goal and modifies its update law to improve cohesion. The advantage of the proposed DSR approach is that it improves cohesion using only information already available in a given network: it requires neither modifications to the network connectivity (which might not always be feasible) nor increases in the system's overall response speed (which can require larger inputs). Moreover, illustrative simulation examples are used to comparatively evaluate performance with and without DSR. The simulation results show substantial improvement in cohesion with DSR.
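The DSR idea of reusing each agent's own delayed update alongside current neighbor information can be sketched in a toy simulation. The following is a minimal, hypothetical illustration (not the paper's exact update law): five follower agents on a path graph track a pinned source value, with the DSR term modeled as a momentum-like reuse of each agent's previous update; the parameter names (`gamma`, `beta`) and the cohesion metric (worst-case disagreement during the transition) are assumptions for illustration.

```python
# Illustrative comparison of standard discrete-time consensus vs. a
# DSR-style update that reinforces each agent's own delayed update.
# NOTE: this is a hypothetical sketch, not the paper's exact DSR law.
import numpy as np

def simulate(n_steps=2000, gamma=0.1, beta=0.0):
    """Run a path network of 5 followers pinned to a source value."""
    # Grounded Laplacian of the path: source - f1 - f2 - f3 - f4 - f5.
    K = np.array([[ 2., -1.,  0.,  0.,  0.],
                  [-1.,  2., -1.,  0.,  0.],
                  [ 0., -1.,  2., -1.,  0.],
                  [ 0.,  0., -1.,  2., -1.],
                  [ 0.,  0.,  0., -1.,  1.]])
    b = np.array([1., 0., 0., 0., 0.])  # only the first follower sees the source
    s = 1.0                             # source (goal) value: a step change
    x = np.zeros(5)                     # initial agent states
    prev_update = np.zeros(5)           # delayed self term (zero for beta=0)
    spread = 0.0                        # worst-case disagreement observed
    for _ in range(n_steps):
        # Standard diffusive update plus a delayed self-reinforcement term.
        update = gamma * (-K @ x + b * s) + beta * prev_update
        x = x + update
        prev_update = update
        spread = max(spread, x.max() - x.min())
    return x, spread

x_std, spread_std = simulate(beta=0.0)  # without DSR
x_dsr, spread_dsr = simulate(beta=0.5)  # with the DSR-style delayed term
```

Both variants converge to the source value for these (assumed) gains; comparing `spread_std` and `spread_dsr` mirrors the kind of with/without-DSR cohesion evaluation the abstract describes.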
- Pub Date: March 2020
- Electrical Engineering and Systems Science - Systems and Control;
- Computer Science - Multiagent Systems
- Updated simulations from the journal version and MATLAB code for the simulations are included