
Munich Personal RePEc Archive
Using Genetic Algorithms to Develop Strategies for the Prisoner's Dilemma
Adnan Haider
Department of Economics, Pakistan Institute of Development Economics, Islamabad
12 November 2005
Online at http://mpra.ub.uni-muenchen.de/28574/ MPRA Paper No. 28574, posted 4 February 2011 06:40 UTC

USING GENETIC ALGORITHMS TO DEVELOP STRATEGIES FOR THE PRISONER'S DILEMMA
ADNAN HAIDER
PhD Fellow
Department of Economics
Pakistan Institute of Development Economics, Islamabad, Pakistan.
(The author wishes to thank an anonymous referee for providing useful comments.)
The Prisoner's Dilemma, a simple two-person game invented by Merrill Flood and Melvin Dresher in the 1950s, has been studied extensively in Game Theory, Economics, and Political Science because it can be seen as an idealized model for real-world phenomena such as arms races (Axelrod, 1984). In this paper, I describe a GA to search for strategies to play the Iterated Prisoner's Dilemma, in which the fitness of a strategy is its average score in playing 100 games with itself and with every other member of the population. Each strategy remembers the three previous turns with a given player. The GA uses a population of 20 strategies, fitness-proportional selection, single-point crossover with Pc = 0.7, and mutation with Pm = 0.001.
JEL Classifications: C63, C72
Keywords: GA, Crossover, Mutation, Fitness-proportional selection.
1. Introduction
The Prisoner's Dilemma can be formulated as follows: Two individuals (call them Mr. X and Mr. Y) are arrested for committing a crime together and are held in separate cells, with no communication possible between them. Mr. X is offered the following deal: if he confesses and agrees to testify against Mr. Y, he will receive a suspended sentence with probation, and Mr. Y will be put away for 5 years. However, if at the same time Mr. Y confesses and agrees to testify against Mr. X, his testimony will be discredited, and each will receive 4 years for pleading guilty. Mr. X is told that Mr. Y is being offered precisely the same deal. Both Mr. X and Mr. Y know that if neither testifies against the other they can be convicted only on a lesser charge, for which they will each get 2 years in jail. Should Mr. X defect against Mr. Y and hope for the suspended sentence, risking a 4-year sentence if Mr. Y defects? Or should he cooperate with Mr. Y (even though they cannot communicate), in the hope that Mr. Y will also cooperate so each will get only 2 years, thereby risking a defection by Mr. Y that will send him away for 5 years?
The game can be described more abstractly. Each player independently decides which move to make, i.e., whether to cooperate or defect. A game consists of each player making a decision (a move). The possible results of a single game are summarized in a payoff matrix like the one shown in Table 1. Here the goal is to get as many points (as opposed to as few years in prison) as possible. In Table 1, the payoff in each case can be interpreted as 5 minus the number of years in prison. If both players cooperate, each gets 3 points. If player A defects and player B cooperates, then player A gets 5 points and player B gets 0 points, and vice versa if the situation is reversed. If both players defect, each gets 1 point. What is the best strategy to use in order to maximize one's own payoff? If you suspect that your opponent is going to cooperate, then you should surely defect. If you suspect that your opponent is going to defect, then you should defect too. No matter what the other player does, it is always better to defect. The dilemma is that if both players defect, each gets a worse score than if they cooperate. If the game is iterated (that is, if the two players play several games in a row), both players always defecting will lead to a much lower total payoff than the players would get if they cooperated.
Table 1. The Payoff Matrix (each cell lists Player A's payoff, Player B's payoff)

                              Player B
                        Cooperate      Defect
Player A   Cooperate      3, 3          0, 5
           Defect         5, 0          1, 1
Assume a rational player is faced with playing a single game (known as a one-shot game) of the Prisoner's Dilemma described above, and that the player is trying to maximize their reward. If the player thinks his/her opponent will cooperate, the player will defect to receive a reward of 5 points, as opposed to cooperation, which would have earned him/her only 3 points. However, if the player thinks his/her opponent will defect, the rational choice is to also defect and receive 1 point rather than cooperate and receive the sucker's payoff of 0 points. Therefore the rational decision is to always defect.
But assuming the other player is also rational, he/she will come to the same conclusion as the first player. Thus both players will always defect, earning rewards of 1 point rather than the 3 points that mutual cooperation could have yielded. Therein lies the dilemma. In game theory the Prisoner's Dilemma can be viewed as a two-player, non-zero-sum, simultaneous game. Game theory has proved that always defecting is the dominant strategy for this game (the Nash equilibrium). This holds true as long as the payoffs follow the relationship T > R > P > S (where T is the temptation payoff for defecting against a cooperator, R the reward for mutual cooperation, P the punishment for mutual defection, and S the sucker's payoff; in Table 1, T = 5, R = 3, P = 1, S = 0), and the gain from mutual cooperation is greater than the average of the payoffs for defecting and being defected against, R > (S + T)/2. While this game may seem simple, it can be applied to a multitude of real-world scenarios. Problems ranging from businesses interacting in a market, personal relationships, and superpower negotiations to the trench-warfare "live and let live" system of World War I have all been studied using some form of the Prisoner's Dilemma.
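As a quick check (an illustrative snippet, not part of the original system), the Table 1 payoffs satisfy both conditions:

    // Verifies T > R > P > S and R > (S + T)/2 for the Table 1 payoffs.
    public class PayoffCheck {
        public static void main(String[] args) {
            int T = 5;  // temptation: defect while the opponent cooperates
            int R = 3;  // reward for mutual cooperation
            int P = 1;  // punishment for mutual defection
            int S = 0;  // sucker's payoff: cooperate while the opponent defects
            System.out.println(T > R && R > P && P > S);   // true
            System.out.println(R > (S + T) / 2.0);         // true
        }
    }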
2. Iterated Prisoner's Dilemma
The Iterated Prisoner's Dilemma (IPD) is an interesting variant of the above game in which two players play repeated games of the Prisoner's Dilemma against each other. In the above discussion of the Prisoner's Dilemma, the dominant mutual-defection strategy relies on the fact that it is a one-shot game, with no future. The key to the IPD is that the two players may play each other again; this allows the players to develop strategies based on previous game interactions. Therefore a player's move now may affect how his/her opponent behaves in the future and thus affect the player's future payoffs. This removes the single dominant strategy of mutual defection, as players use more complex strategies, dependent on game histories, in order to maximize the payoffs they receive. In fact, under the correct circumstances mutual cooperation can emerge. The length of the IPD (i.e., the number of repetitions of the Prisoner's Dilemma played) must not be known to either player: if it were, the last iteration would become a one-shot play of the Prisoner's Dilemma, and since the players know they would not play each other again, both players would defect. Thus the second-to-last game would be a one-shot game not influencing any future and would incur mutual defection, and so on until all games are one-shot plays of the Prisoner's Dilemma.
This paper is concerned with modeling the IPD described above and devising strategies to play it. The fundamental Prisoner's Dilemma will be used without alteration. A player may interact with many others, but is assumed to interact with them one at a time. The players will have a memory of the previous three games only (a memory-3 IPD).
3. Genetic Algorithms
Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. John Holland at the University of Michigan originally developed them. They usually work by beginning with an initial population of random solutions to a given problem. The success of these solutions is then evaluated according to a specially designed fitness function. A form of natural selection is then performed, whereby solutions with higher fitness scores have a greater probability of being selected. The selected solutions are then mated using genetic operators such as crossover and mutation. The children produced from this mating go on to form the next generation. The theory is that, as fitter genetic material is propagated from generation to generation, the solutions will converge towards an optimal solution. This research utilizes Genetic Algorithms to develop successful strategies for the Prisoner's Dilemma.
A simple GA works on the basis of the following steps (a minimal code sketch follows the list):
Step 1.
Start with a randomly generated population of n l-bit chromosomes (candidate solutions to a problem).
Step 2.
Calculate the fitness f(x) of each chromosome x in the population.
Step 3.
Repeat the following steps until n offspring have been created:
o Select a pair of parent chromosomes from the current population, the probability of selection being an increasing function of fitness. Selection is done with replacement, meaning that the same chromosome can be selected more than once to become a parent.
o With probability Pc (the crossover probability), cross over the pair at a randomly chosen point to form two offspring. If no crossover takes place, form two offspring that are exact copies of their respective parents. Note: crossover may be single-point or multi-point, depending on the version of the GA.
o Mutate the two offspring at each locus with probability Pm (the mutation probability, or mutation rate), and place the resulting chromosomes in the new population. Note: if n is odd, one new population member can be discarded at random.
Step 4.
Replace the current population with the new population.
Step 5.
Go to Step 2.
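The following Java sketch illustrates the loop above in a self-contained form. The population size, chromosome length, crossover rate and mutation rate follow the values used in this paper, but the fitness function is a deliberately trivial stand-in (it counts 1-bits) rather than the IPD payoff described later; all class and method names are illustrative.

    import java.util.Random;

    // Minimal sketch of the simple GA (Steps 1-5). The fitness function is a
    // toy (number of 1-bits), not the IPD payoff used in the actual system.
    public class SimpleGA {
        static final int N = 20;        // population size
        static final int L = 71;        // chromosome length in bits
        static final double PC = 0.7;   // crossover probability
        static final double PM = 0.001; // mutation probability per locus
        static final Random rng = new Random();

        // Toy fitness: number of 1-bits in the chromosome.
        static double fitness(boolean[] chrom) {
            int ones = 0;
            for (boolean b : chrom) if (b) ones++;
            return ones;
        }

        // Fitness-proportional (roulette-wheel) selection.
        static boolean[] select(boolean[][] pop, double[] fit, double total) {
            double r = rng.nextDouble() * total;
            for (int i = 0; i < pop.length; i++) {
                r -= fit[i];
                if (r <= 0) return pop[i];
            }
            return pop[pop.length - 1];
        }

        public static void main(String[] args) {
            // Step 1: random initial population.
            boolean[][] pop = new boolean[N][L];
            for (boolean[] c : pop)
                for (int j = 0; j < L; j++) c[j] = rng.nextBoolean();

            for (int gen = 0; gen < 50; gen++) {
                // Step 2: evaluate the fitness of each chromosome.
                double[] fit = new double[N];
                double total = 0;
                for (int i = 0; i < N; i++) { fit[i] = fitness(pop[i]); total += fit[i]; }

                // Step 3: create N offspring.
                boolean[][] next = new boolean[N][L];
                for (int k = 0; k < N; k += 2) {
                    boolean[] p1 = select(pop, fit, total);
                    boolean[] p2 = select(pop, fit, total);
                    boolean[] c1 = p1.clone(), c2 = p2.clone();
                    if (rng.nextDouble() < PC) {       // single-point crossover
                        int point = rng.nextInt(L);
                        for (int j = point; j < L; j++) { c1[j] = p2[j]; c2[j] = p1[j]; }
                    }
                    for (int j = 0; j < L; j++) {      // per-locus mutation
                        if (rng.nextDouble() < PM) c1[j] = !c1[j];
                        if (rng.nextDouble() < PM) c2[j] = !c2[j];
                    }
                    next[k] = c1;
                    if (k + 1 < N) next[k + 1] = c2;   // second child dropped if N were odd
                }
                // Steps 4-5: replace the population and repeat.
                pop = next;
            }
        }
    }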
4. Experimental Setup
Genetic Algorithms provide the means by which strategies for the Prisoner's Dilemma are developed in this paper. As this is the principal objective of the research, the genetic algorithm used is naturally one of the system's major components. The other system components have been designed to suit the Genetic Algorithm. As such, in describing the genetic algorithm's implementation, most of the other components will also be described. What follows is a description of how a genetic algorithm was implemented to evolve strategies to play the Iterated Prisoner's Dilemma.
4.1. Figuring out Strategies
The first issue is figuring out how to encode a strategy as a string. Suppose the memory of each player is one previous game. There are four possibilities for the previous game:
Case 1: CC Case 2: CD Case 3: DC Case 4: DD
Where C denotes cooperate and D denotes defect. Case 1 is when both players cooperated in the previous game, Case 2 is when player A cooperated and player B defected, and so on.
A strategy is simply a rule that specifies an action in each of these cases.

If CC (Case 1) Then C
If CD (Case 2) Then D
If DC (Case 3) Then C
If DD (Case 4) Then D
If the cases are ordered in this canonical way, this strategy can be expressed compactly as the string CDCD. To use the string as a strategy, the player records the moves made in the previous game (e.g., CD), finds the case number i by looking up that case in a table of ordered cases like that given above (for CD, i = 2), and selects the letter in the i-th position of the string as its move in the next game (for i = 2, the move is D). If, as in the tournament considered here, strategies remember the three previous games, then there are 64 possibilities for the previous three games:
CC CC CC (Case 1), CC CC CD (Case 2), CC CC DC (Case 3), ..., DD DD DC (Case 63), DD DD DD (Case 64).
Thus, a 64-letter string, e.g., CCDCDCDCDCDC..., can encode a strategy. Since using the strategy requires the results of the three previous games, we can use a 70-letter string, where the six extra letters encode three hypothetical previous games used by the strategy to decide how to move in the first actual game. Since each locus in the string has two possible alleles (C and D), the number of possible strategies is 2^70. The search space is thus far too big to be searched exhaustively.
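A minimal sketch of this lookup, assuming the bit layout shown in Table 2 below (bit 0 for the first move, bits 7-70 for the 64 full three-game histories, C stored as 1) and using illustrative names:

    import java.util.BitSet;

    // Sketch: look up the next move in a memory-3 strategy chromosome.
    // The history is six letters, ordered: my move 1, opponent move 1, my move 2,
    // opponent move 2, my move 3, opponent move 3 (C = cooperate, D = defect).
    public class StrategyLookup {
        static char nextMove(BitSet chromosome, String history) {
            int index = 0;
            for (int i = 0; i < 6; i++)
                index = (index << 1) | (history.charAt(i) == 'D' ? 1 : 0);
            // Bits 0-6 cover the first three games (shorter histories);
            // the 64 full-history cases start at bit 7 in this layout.
            return chromosome.get(7 + index) ? 'C' : 'D';
        }

        public static void main(String[] args) {
            BitSet tft = new BitSet(71);
            tft.set(0);                             // bit 0: cooperate on the first move
            for (int idx = 0; idx < 64; idx += 2)   // even index = opponent's last move was C
                tft.set(7 + idx);                   // bits 1-6 omitted for brevity
            System.out.println(nextMove(tft, "CCCCCC")); // prints C
            System.out.println(nextMove(tft, "CCCCCD")); // prints D
        }
    }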
The history in Table 2 is stated in the order: your first move, opponent's first move, your second move, opponent's second move, your third move, opponent's third move. The Move column indicates what move to play for the given history. Table 2 also shows how the TFT (Tit-for-Tat) strategy is encoded using this scheme. The resulting TFT chromosome is the 71-letter string (grouped in tens for readability):
CCDCDCDCDC DCDCDCDCDC DCDCDCDCDC DCDCDCDCDC DCDCDCDCDC DCDCDCDCDC DCDCDCDCDC D
This is actually stored as a BitSet in computer memory (with C stored as 1 and D as 0) as follows:

1101010101 0101010101 0101010101 0101010101 0101010101 0101010101 0101010101 0
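A small sketch of this conversion between the letter string and its BitSet form (C stored as 1, D as 0); the method names are illustrative, not the system's actual API:

    import java.util.BitSet;

    // Sketch: convert a C/D strategy string to and from a BitSet.
    public class ChromosomeCodec {
        static BitSet toBitSet(String strategy) {
            BitSet bits = new BitSet(strategy.length());
            for (int i = 0; i < strategy.length(); i++)
                if (strategy.charAt(i) == 'C') bits.set(i);  // C -> 1, D -> 0
            return bits;
        }

        static String toLetters(BitSet bits, int length) {
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) sb.append(bits.get(i) ? 'C' : 'D');
            return sb.toString();
        }

        public static void main(String[] args) {
            String tft = "C" + "CD".repeat(35);              // the 71-letter TFT pattern above
            BitSet b = toBitSet(tft);
            System.out.println(toLetters(b, tft.length()).equals(tft)); // true
        }
    }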
Table 2. Encoding Strategies.

Bit   History                    Move (TFT)
0     First move (no history)    C
1     Opponent played C          C
2     Opponent played D          D
3     Opponent played C C        C
4     Opponent played C D        D
5     Opponent played D C        C
6     Opponent played D D        D
7     CC CC CC                   C
8     CC CC CD                   D
9     CC CC DC                   C
...   ...                        ...
69    DD DD DC                   C
70    DD DD DD                   D

Bits 7 to 70 cover the 64 possible three-game histories in the canonical order given above; for the TFT strategy the move in each of these rows is simply the opponent's most recent move (the last letter of the history).
4.2. Fitness Function
The next problem faced when designing a Genetic Algorithm is how to evaluate the success of each candidate solution. The Prisoner's Dilemma provides a natural means of evaluating the success, or fitness, of each solution: the game payoffs. These payoffs are stored in Rules objects within the system. We can state that the strategy which earns the highest payoff score according to the rules of the IPD is the fittest, while the lowest-scoring strategy is the weakest. Thus fitness can be evaluated by playing the Prisoner objects in some form of IPD. The Game object was implemented to play a game of IPD for a specified number of rounds between two players. This object simply keeps track of the two Prisoners' scores and game histories while asking them for new moves until the rounds limit is met. The Tournament object uses this class to organize a round-robin IPD tournament, akin to Axelrod's computer tournaments (Axelrod, 1990). In such a tournament an array of Prisoners is supplied as the population, and every Prisoner plays an IPD Game against every other Prisoner and themselves. Each player's payoff after these interactions have completed is deemed to be that player's fitness score.
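The following simplified sketch illustrates this round-robin evaluation. Strategies are represented here as plain functions of the two game histories rather than memory-3 chromosomes, and the class and method names are illustrative rather than the system's actual objects:

    // Sketch of round-robin IPD fitness evaluation.
    public class Tournament {
        static final int ROUNDS = 100;   // rounds per game (illustrative)

        interface Strategy { char move(String myHistory, String oppHistory); }

        // Payoff to the first player for (myMove, opponentMove), from Table 1.
        static int payoff(char me, char opp) {
            if (me == 'C' && opp == 'C') return 3;
            if (me == 'C' && opp == 'D') return 0;
            if (me == 'D' && opp == 'C') return 5;
            return 1;
        }

        // Play one IPD game of ROUNDS rounds; returns the two total scores.
        static int[] playGame(Strategy a, Strategy b) {
            String histA = "", histB = "";
            int scoreA = 0, scoreB = 0;
            for (int r = 0; r < ROUNDS; r++) {
                char moveA = a.move(histA, histB);
                char moveB = b.move(histB, histA);
                scoreA += payoff(moveA, moveB);
                scoreB += payoff(moveB, moveA);
                histA += moveA;
                histB += moveB;
            }
            return new int[] { scoreA, scoreB };
        }

        // Every strategy plays every other strategy and itself; fitness is total payoff.
        static long[] roundRobin(Strategy[] pop) {
            long[] fitness = new long[pop.length];
            for (int i = 0; i < pop.length; i++)
                for (int j = i; j < pop.length; j++) {
                    int[] s = playGame(pop[i], pop[j]);
                    fitness[i] += s[0];
                    if (i != j) fitness[j] += s[1];
                }
            return fitness;
        }

        public static void main(String[] args) {
            Strategy allD = (mine, opp) -> 'D';
            Strategy tft  = (mine, opp) -> opp.isEmpty() ? 'C' : opp.charAt(opp.length() - 1);
            long[] fit = roundRobin(new Strategy[] { allD, tft });
            System.out.println(fit[0] + " " + fit[1]);
        }
    }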
4.3. Fitness Scaling
The fitness scores calculated above will serve to determine which players go on to reproduce and which players die off. However, these raw fitness values present some problems. The initial populations are likely to have a small number of very high-scoring individuals in a population of ordinary colleagues. If fitness-proportional selection is used, these high scorers will take over the population rapidly and cause the population to converge on one strategy. This strategy will be a mixture of the high scorers' strategies; however, as the population did not get time to develop, the resulting strategy may be sub-optimal, and the population will have converged prematurely. In the later generations of evolution the individuals should have begun to converge on a strategy. Thus they will all share very similar chromosomes, and the population's average fitness will likely be very close to the population's best fitness. In this situation average members and above-average members will have a similar probability of reproduction; the natural selection process has effectively ended, and the algorithm is merely performing a random search among the strategies.
It is useful to scale the raw fitness scores to help avoid the above situations. This algorithm uses linear scaling as described by Goldberg (1989). Linear scaling produces a linear relationship between raw fitness, f, and scaled fitness, f', as follows:

f' = a f + b    (1)

Coefficients a and b may be calculated as follows:

a = (c - 1) f_avg / (f_max - f_avg)    (2)

b = f_avg (f_max - c f_avg) / (f_max - f_avg)    (3)

Where c is the number of times the fittest individual should be allowed to reproduce. A value of c = 2 was found to produce accurate scaling in this method. The effect of this fitness scaling is shown in Fig. 1.

Fig. 1. Linear Fitness Scaling

This scaling works fine for most situations; however, in the later stages of evolution, when there are relatively few low-scoring strategies, problems may arise. The average and best scoring strategies have very close raw fitness, and extreme scaling is required to separate them. Applying this scaling to the few low scorers may result in their scaled fitness becoming negative (Fig. 2).

Fig. 2. Linear Fitness Scaling (Negative Values)
This can be overcome by adjusting the scaling coefficients to scale the weak strategies to zero and scale the other strategies as much as is possible. In the case of negative scaled fitness values the coefficients may be calculated as follows:
a = f_avg / (f_avg - f_min)    (4)

b = -f_min f_avg / (f_avg - f_min)    (5)
This scaling helps prevent the early dominance of high scorers, while later on it distinguishes between mediocre and above-average strategies. It is implemented in the Genetic object and applied to all raw fitness scores (i.e., the IPD payoffs) before performing genetic algorithm selection.
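A sketch of this scaling, using formulas (2)-(3) in the normal case and (4)-(5) when the weakest strategy would otherwise scale to a negative value (the switching condition follows Goldberg's prescaling procedure; names are illustrative):

    // Sketch of linear fitness scaling, f' = a*f + b.
    public class LinearScaling {
        static final double C = 2.0;   // expected copies of the fittest individual

        static double[] scale(double[] raw) {
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
            for (double f : raw) { min = Math.min(min, f); max = Math.max(max, f); sum += f; }
            double avg = sum / raw.length;
            if (max == avg) return raw.clone();   // all scores equal: nothing to scale
            double a, b;
            if (min > (C * avg - max) / (C - 1)) {
                // Normal case, equations (2) and (3): f'_max = C * f_avg, f'_avg = f_avg.
                a = (C - 1) * avg / (max - avg);
                b = avg * (max - C * avg) / (max - avg);
            } else {
                // Negative-value case, equations (4) and (5): scale the weakest to zero.
                a = avg / (avg - min);
                b = -min * avg / (avg - min);
            }
            double[] scaled = new double[raw.length];
            for (int i = 0; i < raw.length; i++) scaled[i] = a * raw[i] + b;
            return scaled;
        }

        public static void main(String[] args) {
            double[] raw = { 240, 255, 250, 252, 248 };
            for (double s : scale(raw)) System.out.printf("%.1f%n", s);
        }
    }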
4.4. Reproduction
Having selected two strategies from the population, the Genetic Algorithm proceeds to mate these two parents and produce their two children. This reproduction mirrors sexual reproduction, in which the genetic material of the parents is combined to produce the children. In this research, reproduction allows exploration of the search space and provides a means of reaching new and hopefully better strategies. Reproduction is accomplished using two simple yet effective genetic operators: crossover and mutation.
Crossover is an artificial implementation of the exchange of genetic information that occurs in real-life reproduction. The algorithm implements it by breaking both parent chromosomes at the same randomly chosen point and then rejoining the parts (Fig. 3).
Fig. 3. Crossover
This crossover action, when applied to strategies selected proportionally to their fitness, constructs new ideas from high-scoring building blocks. The genetic algorithm implemented in this research performs crossover a large percentage of the time; however, occasionally (5% of the time by default) crossover will not be performed and simple natural selection will occur. In nature, small mutations of the genetic material exchanged during reproduction occur a very small percentage of the time. However, if these mutations produce an advantageous result, they will be propagated throughout the population by means of natural selection. The possibility of small mutations occurring was included in this system. A very small percentage of the time (0.1% of the time by default) a bit copied between the parent and the child will be flipped, representing a mutation. These mutations provide a means of exploration to the search.
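A sketch of this reproduction step on BitSet chromosomes, using the default rates mentioned above (no crossover 5% of the time, 0.1% per-bit mutation); names and structure are illustrative:

    import java.util.BitSet;
    import java.util.Random;

    // Sketch of reproduction: single-point crossover plus per-bit mutation.
    public class Reproduction {
        static final double NO_CROSSOVER_RATE = 0.05;
        static final double MUTATION_RATE = 0.001;
        static final int LENGTH = 71;              // chromosome length in bits
        static final Random rng = new Random();

        // Produces two children from two parent chromosomes.
        static BitSet[] reproduce(BitSet p1, BitSet p2) {
            BitSet c1 = (BitSet) p1.clone();
            BitSet c2 = (BitSet) p2.clone();
            if (rng.nextDouble() >= NO_CROSSOVER_RATE) {
                int point = rng.nextInt(LENGTH);   // crossover point
                for (int i = point; i < LENGTH; i++) {
                    c1.set(i, p2.get(i));          // swap the tails of the two parents
                    c2.set(i, p1.get(i));
                }
            }
            mutate(c1);
            mutate(c2);
            return new BitSet[] { c1, c2 };
        }

        // Flips each bit with a small probability, modelling mutation.
        static void mutate(BitSet chromosome) {
            for (int i = 0; i < LENGTH; i++)
                if (rng.nextDouble() < MUTATION_RATE) chromosome.flip(i);
        }

        public static void main(String[] args) {
            BitSet allC = new BitSet(LENGTH);
            allC.set(0, LENGTH);                   // a chromosome of all cooperates
            BitSet allD = new BitSet(LENGTH);      // a chromosome of all defects
            BitSet[] children = reproduce(allC, allD);
            System.out.println(children[0].cardinality() + " " + children[1].cardinality());
        }
    }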
4.5. Replacement
The genetic algorithm is run across the population until it has produced enough children to build a new generation. The children then replace all of the original population. More complicated replacement techniques, such as fit-weak and child-parent replacement, were researched, but they were unsuitable for the round-robin tournament nature of the system.
4.6. Search Termination
The only termination criterion implemented is a limit on the maximum number of generations that will run; the user may set this limit. Other termination criteria were investigated, for example detecting when a population has converged and strategies are receiving equal payoffs; however, these criteria resulted in many false positives, and it was decided it was better to allow the user to judge when the algorithm had reached the end of useful evolution.
5. Conclusion
The GA often found strategies that scored substantially higher than any other algorithm. But it would be wrong to conclude that the GA discovered strategies that are better than any human-designed strategy. The performance of a strategy depends very much on its environment, that is, on the strategies with which it is playing. Here the environment was fixed; it did not change over the course of a run. Therefore it may be concluded that the above-mentioned environment is static (unchanged).

Appendix:
The following flowchart describes the complete genetic algorithm for the Prisoner's Dilemma problem.
Fig. 4. Flow Chart of the Prisoner's Dilemma Problem

References:
Axelrod, R., 1990. The Evolution of Cooperation, Penguin Books.
B. Routledge, 1993. Co-Evolution and Spatial Interaction, mimeo, University of British Columbia.
Chambers, L. (ed.), 1995. Practical Handbook of Genetic Algorithms: Applications, Volume II, CRC Press.
Conor Ryan, 1995. Niche and Species Formation in Genetic Algorithms, in L. Chambers (ed.), Practical Handbook of Genetic Algorithms: Applications, Volume I, CRC Press.
David E. Goldberg, 1989. Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Publishing.
Frank Schweitzer, Laxmidhar Behera, Heinz Muhlenbein, 2002. Evolution of Cooperation in a Spatial Prisoner's Dilemma, Advances in Complex Systems, vol. 5, no. 2-3, pp. 269-299.
John R. Koza, 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press.
Peter J.B. Hancock, 1995. Selection Methods for Evolutionary Algorithms, in L. Chambers (ed.), Practical Handbook of Genetic Algorithms: Applications, CRC Press.
Geoff Bartlett, 1995. Genie: A First GA, in L. Chambers (ed.), Practical Handbook of Genetic Algorithms: Applications, Volume I, CRC Press.
Shaun P. Hargreaves Heap and Yanis Varoufakis, 1995. Game Theory: A Critical Introduction, McGraw Hill Press.
