Efficiency comparison between Fictitious Play and Alternate Fictitious Play algorithms on the restricted set of zero-sum games

From LRDE


Abstract

The Fictitious Play algorithm is an iterative learning process designed to compute Nash equilibria. At each iteration, every player "strengthens" the strategy with the highest utility in the current context. For some specific game classes this algorithm converges to a Nash equilibrium, thus providing an efficient approximation method. However, convergence can only be proved for a small number of game classes. The Alternate Fictitious Play algorithm (introduced last year) is a variant in which only one player at a time strengthens one of his strategies: the player who is "furthest" from his maximum payoff. This study focuses on a comparison of these two approaches on the restricted set of zero-sum games. It also presents the notions of game classification used for this comparison.
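To illustrate the iterative process described above, here is a minimal sketch of classical Fictitious Play on a two-player zero-sum game, using NumPy. At each round, each player plays a best response to the opponent's empirical mixed strategy (the frequency of past plays), which is one common formalisation of "strengthening" the highest-utility strategy; the function name and parameters are illustrative, not taken from the study:

```python
import numpy as np

def fictitious_play(A, iterations=2000):
    """Fictitious Play on a zero-sum game with payoff matrix A.

    The row player maximises x @ A @ y; the column player minimises it.
    Returns the empirical mixed strategies of both players, which for
    zero-sum games converge to a Nash equilibrium.
    """
    m, n = A.shape
    # counts of how often each pure strategy has been played
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    # arbitrary initial pure strategies
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iterations):
        # empirical mixed strategies observed so far
        x = row_counts / row_counts.sum()
        y = col_counts / col_counts.sum()
        # each player "strengthens" a best response to the opponent's history
        row_counts[np.argmax(A @ y)] += 1
        col_counts[np.argmin(x @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching Pennies: the unique equilibrium plays both strategies with
# probability 1/2, so the empirical frequencies should approach (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = fictitious_play(A)
```

The alternate variant discussed in the abstract would differ here by updating only one of the two count vectors per round, chosen by how far each player is from his maximum payoff.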