TY - GEN
T1 - Fair scheduling in common-pool games by aspiration learning
AU - Chasparis, Georgios C.
AU - Arapostathis, Ari
AU - Shamma, Jeff S.
PY - 2012
Y1 - 2012
N2 - We propose a distributed learning algorithm for fair scheduling in common-pool games. Common-pool games are strategic-form games in which multiple agents compete over the use of a limited common resource. A characteristic example is the medium access control problem in wireless communications, where multiple users must decide how to share a single communication channel so that there are no collisions (situations in which two or more users transmit in the same time slot). We introduce a payoff-based learning algorithm, namely aspiration learning, according to which agents learn how to play the game based only on their own prior experience, i.e., their previous actions and received rewards. Decisions are also subject to a small probability of mistakes (or mutations). We show that when all agents apply aspiration learning, then as time increases and the probability of mutations goes to zero, the expected percentage of time that agents utilize the common resource is divided equally among agents, i.e., fairness is established. When the step size of the aspiration learning recursion also approaches zero, the expected frequency of collisions approaches zero as time increases.
KW - Aspiration learning
KW - Common-pool games
KW - Medium-access control
KW - Resource allocation
UR - http://www.scopus.com/inward/record.url?scp=84866901588&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84866901588
SN - 9783901882456
T3 - 2012 10th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, WiOpt 2012
SP - 386
EP - 390
BT - 2012 10th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, WiOpt 2012
T2 - 2012 10th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, WiOpt 2012
Y2 - 14 May 2012 through 18 May 2012
ER -