Approximate Nash Equilibria in Mean-Field Games with Discounted Cost
Naci Saldı
Özyeğin University, Türkiye
In this talk, I will present a general theory for discrete-time mean-field games
with discounted infinite-horizon cost. I will cover both perfect state and partial state information
structures. The state space of each player is a Polish space, and at each time
the players are coupled through the empirical distribution of their states, which affects both
the players' individual costs and their state transition probabilities. I will first discuss
the difficulties encountered in any attempt to obtain an exact Nash equilibrium in
such dynamic games with decentralized information and a finite number of players.
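To make the coupling concrete, here is a minimal sketch of the finite-player model, with notation that is mine rather than the speaker's: player i, with state x_i^N(t) in a Polish space and action a_i^N(t), incurs the discounted cost

\[
J_i^N(\pi^1,\dots,\pi^N) \;=\; \mathbb{E}\Big[\sum_{t=0}^{\infty} \beta^t\, c\big(x_i^N(t),\, a_i^N(t),\, e_t^N\big)\Big],
\qquad
e_t^N(\,\cdot\,) \;=\; \frac{1}{N}\sum_{j=1}^{N} \delta_{x_j^N(t)}(\,\cdot\,),
\]

where \beta \in (0,1) is the discount factor and e_t^N is the empirical distribution of the players' states at time t; the same empirical measure also enters the transition kernel, x_i^N(t+1) \sim p(\,\cdot \mid x_i^N(t),\, a_i^N(t),\, e_t^N). Because each player's best response depends on the joint behavior of all the others through e_t^N, while players act only on decentralized information, computing an exact Nash equilibrium for finite N is what becomes difficult.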
mean-field approach offers a way out of this difficulty. First focusing on the perfect state
information, and using the solution concept of Markov-Nash equilibrium, I will show under
some mild conditions the existence of a mean-field equilibrium in the infinite population
limit. I will then show that the policy obtained from the mean-field equilibrium is approximately
Markov-Nash when the number of players is sufficiently large. Following this, I will
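Informally, and again with purely illustrative notation, a mean-field equilibrium is a pair (\pi^*, \mu^*) of a policy and a flow of state measures such that \pi^* is optimal for the single-agent problem in which the empirical measure is replaced by the deterministic flow \mu^*, and \mu^* is in turn the law of the state process generated by \pi^*; in other words, a fixed point of the best-response/measure-update map. The approximation statement then reads: for every \varepsilon > 0 there exists N(\varepsilon) such that, for all N \ge N(\varepsilon) and each player i,

\[
J_i^N\big(\pi^*,\dots,\pi^*\big) \;\le\; \inf_{\pi^i}\, J_i^N\big(\pi^*,\dots,\pi^{i},\dots,\pi^*\big) \;+\; \varepsilon,
\]

i.e., no player can lower its discounted cost by more than \varepsilon through a unilateral deviation from the mean-field equilibrium policy.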
Following this, I will turn to the class of discrete-time partially observed mean-field games. Using the technique
of converting the original partially observed stochastic control problem into a fully observed
one on the belief space, together with the dynamic programming principle, I will establish the existence
of Nash equilibria under mild technical conditions. I will again show, as in the perfect state
information case, that the mean-field equilibrium policy, when adopted by each player, forms
an approximate Nash equilibrium for games with sufficiently many players.
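The belief-space conversion mentioned above can be sketched as follows, once more with illustrative notation: each player replaces its unobserved state x(t) by the belief z_t(\,\cdot\,) = \Pr\big(x(t) \in \cdot \mid y(0),\dots,y(t),\, a(0),\dots,a(t-1)\big), computed from its observations y(\cdot) and past actions. Under the mean-field measure flow (\mu_t), the belief evolves by the usual nonlinear filtering recursion,

\[
z_{t+1}(dx') \;\propto\; q\big(y(t+1) \mid x'\big) \int p\big(dx' \mid x,\, a(t),\, \mu_t\big)\, z_t(dx),
\]

where q is the observation kernel and p the controlled transition kernel. The resulting problem is a fully observed control problem whose state is the belief z_t, so the dynamic programming principle can be applied on the space of probability measures to obtain the equilibrium and approximation results described above.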