Efficient network formation by distributed reinforcement

Georgios C. Chasparis, Jeff S. Shamma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



We consider the problem of efficient network formation in a distributed fashion. Network formation is modeled as an evolutionary process, where agents can form and sever unidirectional links and derive direct and indirect benefits from these links. We formulate and analyze an evolutionary model in which each agent's choices depend on its own previous links and benefits, and link selections are subject to random perturbations. Agents reinforce the establishment of a link if it was beneficial in the past, and suppress it otherwise. We illustrate the flexibility of the model to incorporate various design criteria, including dynamic cost functions that reflect link establishment and maintenance, and distance-dependent benefit functions. We show that the evolutionary process assigns positive probability to the emergence of multiple stable configurations (i.e., strict Nash networks), which need not emerge under alternative processes such as best-reply dynamics. We analyze the specific case of so-called frictionless benefit flow, and show that a single agent can reinforce the emergence of an efficient network through an enhanced evolutionary process known as dynamic reinforcement.
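The abstract describes an evolutionary process in which each agent reinforces a unidirectional link that proved beneficial and suppresses it otherwise, with link selections subject to random perturbations. The following is a minimal toy sketch of that kind of reinforcement dynamics, not the paper's exact model: the payoff function (a fixed direct benefit minus a fixed link cost), the perturbation scheme, and all parameter values are illustrative assumptions.

```python
import random

def simulate(n=4, rounds=200, benefit=1.0, cost=0.4, eps=0.05, seed=0):
    """Toy reinforcement dynamics for unidirectional link formation.

    Each agent i keeps a propensity p[i][j] of forming the link i -> j.
    Links that yielded a positive payoff are reinforced; others decay.
    The payoff (direct benefit minus link cost) is an illustrative
    stand-in, not the benefit/cost structure analyzed in the paper.
    """
    rng = random.Random(seed)
    p = [[0.5] * n for _ in range(n)]  # initial link propensities
    for _ in range(rounds):
        # Agent i forms link i -> j with probability p[i][j],
        # mixed with a small uniform perturbation eps.
        links = [[j != i and rng.random() < (1 - eps) * p[i][j] + eps * 0.5
                  for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j or not links[i][j]:
                    continue
                payoff = benefit - cost  # toy payoff for this link
                step = 0.1               # reinforcement step size
                if payoff > 0:
                    p[i][j] += step * (1 - p[i][j])  # reinforce the link
                else:
                    p[i][j] -= step * p[i][j]        # suppress the link
    return p

props = simulate()  # with benefit > cost, link propensities drift toward 1
```

With the benefit exceeding the cost, every off-diagonal propensity drifts toward 1 (the complete network); swapping the inequality (e.g. `cost=1.5`) makes propensities decay toward 0, illustrating the reinforce/suppress mechanism in the simplest possible setting.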

Original language: English (US)
Title of host publication: Proceedings of the 47th IEEE Conference on Decision and Control, CDC 2008
Number of pages: 6
State: Published - Dec 1 2008
Event: 47th IEEE Conference on Decision and Control, CDC 2008 - Cancun, Mexico
Duration: Dec 9 2008 - Dec 11 2008

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0191-2216



ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization


