
 Minimax Explained  By ai-depot | July 28, 2002

Discusses how search trees can be applied to logic games with full information, and the algorithm that can search them. Pseudo code is given and optimisations like alpha-beta are explained. Written by Paulo Pinto.

The Algorithm

Introduction

There are plenty of applications for AI, but games are the most interesting to the public. Nowadays every major OS comes with some games. So it is no surprise that there are some algorithms that were devised with games in mind.

The Min-Max Algorithm

The Min-Max algorithm is applied in two-player games, such as tic-tac-toe, checkers, chess, go, and so on. All these games have at least one thing in common: they are logic games. This means that they can be described by a set of rules and premises. With them, it is possible to know, from a given point in the game, what the next available moves are. They also share another characteristic: they are 'full information games'. Each player knows everything about the possible moves of the adversary.

Figure 1: A representation of a search tree for a logic game.

Before explaining the algorithm, a brief introduction to search trees is required. Search trees are a way to represent searches. The squares are known as nodes and they represent points of decision in the search. The nodes are connected with branches. The search starts at the root node, the one at the top of the figure. At each decision point, nodes for the available search paths are generated, until no more decisions are possible. The nodes that represent the end of the search are known as leaf nodes.

There are two players involved, MAX and MIN. A search tree is generated depth-first, starting with the current game position up to the end game position.

 

Then, the final game positions are evaluated from MAX's point of view, as shown in Figure 1. Afterwards, the inner node values of the tree are filled bottom-up with the evaluated values. The nodes that belong to the MAX player receive the maximum value of their children. The nodes for the MIN player will select the minimum value of their children. The algorithm is described in Listing 1.

    MinMax (GamePosition game) {
      return MaxMove (game);
    }

    MaxMove (GamePosition game) {
      if (GameEnded(game)) {
        return EvalGameState(game);
      }
      else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
          move <- MinMove(ApplyMove(game));
          if (Value(move) > Value(best_move)) {
            best_move <- move;
          }
        }
        return best_move;
      }
    }

    MinMove (GamePosition game) {
      best_move <- {};
      moves <- GenerateMoves(game);
      ForEach moves {
        move <- MaxMove(ApplyMove(game));
        if (Value(move) < Value(best_move)) {
          best_move <- move;
        }
      }
      return best_move;
    }

So what is happening here? The values represent how good a game move is. The MAX player will try to select the move with the highest value in the end. But the MIN player also has something to say about it: they will try to select the moves that are best for them, thus minimizing MAX's outcome.
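The listing above is pseudocode; as a minimal runnable sketch, here is the same max/min recursion in Python for a toy game (a hypothetical Nim variant, not from the article: players alternately take 1 or 2 sticks, and whoever takes the last stick wins). Values are from MAX's point of view: +1 means MAX wins, -1 means MIN wins.

```python
def minimax(pile, max_to_move):
    if pile == 0:
        # Terminal position: the previous player took the last stick and won.
        return -1 if max_to_move else +1
    children = [pile - take for take in (1, 2) if take <= pile]
    scores = [minimax(child, not max_to_move) for child in children]
    # MAX picks the maximum of its children's values, MIN the minimum.
    return max(scores) if max_to_move else min(scores)

print(minimax(4, True))  # → 1 (MAX wins a pile of 4 by taking one stick)
```

With perfect play, piles that are multiples of 3 are lost for the side to move, which the recursion rediscovers without being told.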


Optimisations

 

However, only very simple games can have their entire search tree generated in a short time. For most games this isn't possible; the universe would probably vanish first. So there are a few optimizations to add to the algorithm.

First a word of caution: optimization comes with a price. When optimizing we are trading the full information about the game's events for probabilities and shortcuts. Instead of knowing the full path that leads to victory, the decisions are made with the path that might lead to victory. If the optimization isn't well chosen, or it is badly applied, we could end up with a dumb AI, and it would have been better to use random moves.

One basic optimization is to limit the depth of the search tree. Why does this help? Generating the full tree could take ages. If a game has a branching factor of 3, which means that each node has three children, the tree will have the following number of nodes per depth:

    Depth:      0  1  2  3   ...  n
    Node count: 1  3  9  27  ...  3^n
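As a sanity check on these counts, the per-depth values and their running total can be computed with a couple of lines of Python (the function name is ours, not the article's):

```python
# Number of nodes at each depth, and the cumulative total, for a tree
# with a given branching factor.
def total_nodes(branching, depth):
    return sum(branching ** d for d in range(depth + 1))

print([3 ** d for d in range(6)])  # → [1, 3, 9, 27, 81, 243]
print(total_nodes(3, 5))           # → 364
```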

The sequence shows that at depth n the tree will have 3^n nodes. To know the total number of generated nodes, we need to sum the node count at each level, so the total number of nodes for a tree of depth n is sum(0, n, 3^n). For many games, like chess, that have a very big branching factor, this means that the tree might not fit into memory. Even if it did, it would take too long to generate. If each node took 1 s to be analyzed, each search tree would take sum(0, n, 3^n) × 1 s. For a search tree of depth 5, that would mean 1+3+9+27+81+243 = 364 nodes, so 364 × 1 s = 364 s ≈ 6 minutes! This is too long for a game. The player would give up playing if he had to wait 6 minutes for each move from the computer.

The second optimization is to use a function that evaluates the current game position from the point of view of some player. It does this by giving a value to the current state of the game: counting the number of pieces on the board, for example, or the number of moves left to the end of the game, or anything else that we might use to give a value to the game position. Instead of evaluating the current game position, the function might calculate how the current game position might help to end the game, or in other words, how probable it is that, given the current game position, we will win the game. In this case the function is known as an estimation function.

This function will have to take into account some heuristics. Heuristics are knowledge that we have about the game, and they can help generate better evaluation functions. For example, in checkers, pieces at corners and side positions can't be eaten. So we can create an evaluation function that gives

 

higher values to pieces that lie on those board positions, thus giving higher outcomes to game moves that place pieces in those positions.

One of the reasons that the evaluation function must be able to evaluate game positions for both players is that you don't know which player the depth limit will be reached on. However, having two functions can be avoided if the game is symmetric. This means that the loss of one player equals the gain of the other. Such games are also known as zero-sum games. For these games one evaluation function is enough; one of the players just has to negate its return value. The revised algorithm is described in Listing 2.

    MinMax (GamePosition game) {
      return MaxMove (game);
    }

    MaxMove (GamePosition game) {
      if (GameEnded(game) || DepthLimitReached()) {
        return EvalGameState(game, MAX);
      }
      else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
          move <- MinMove(ApplyMove(game));
          if (Value(move) > Value(best_move)) {
            best_move <- move;
          }
        }
        return best_move;
      }
    }

    MinMove (GamePosition game) {
      if (GameEnded(game) || DepthLimitReached()) {
        return EvalGameState(game, MIN);
      }
      else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
          move <- MaxMove(ApplyMove(game));
          if (Value(move) < Value(best_move)) {
            best_move <- move;
          }
        }
        return best_move;
      }
    }
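The zero-sum shortcut (one evaluation function, negated between levels) is commonly written in the "negamax" form. Below is a minimal runnable sketch in Python, reusing the hypothetical take-1-or-2 Nim variant rather than the article's GamePosition API; the depth limit and the heuristic (multiples of 3 are lost for the side to move) are invented for illustration.

```python
def evaluate(pile):
    # Invented heuristic, scored for the side to move:
    # piles that are multiples of 3 are lost in this Nim variant.
    return -1 if pile % 3 == 0 else +1

def negamax(pile, depth):
    if pile == 0:
        return -1  # side to move has no sticks left: the opponent just won
    if depth == 0:
        return evaluate(pile)  # depth limit reached: estimate instead of search
    # Each level negates the child's score, so one function serves both players.
    return max(-negamax(pile - take, depth - 1)
               for take in (1, 2) if take <= pile)

print(negamax(7, 3))  # → 1 (a pile of 7 is winning for the side to move)
```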

Even so, the algorithm has a few flaws; some of them can be fixed, while others can only be solved by choosing another algorithm.

 

One of the flaws is that if the game is too complex, the answer will always take too long, even with a depth limit. One solution is to limit the time for the search: if the time runs out, choose the best move found so far. A bigger flaw is the limited-horizon problem. A game position that appears to be very good might turn out very bad. This happens because the algorithm wasn't able to see that, a few game moves ahead, the adversary will be able to make a move that brings him a great outcome. The algorithm missed that fatal move because it was blinded by the depth limit.

Speeding up the algorithm

There are a few things that can still be done to reduce the search time. Take a look at Figure 2. The value for node A is 3, and the first value found for the subtree starting at node B is 2. Since node B is at a MIN level, we know that the selected value for B must be less than or equal to 2. But we also know that node A has the value 3, and both A and B share the same parent at a MAX level. This means that the game path starting at node B would never be selected, because 3 is better than 2 for the MAX node. So it isn't worth pursuing the search below node B, and we can safely ignore all its remaining children.

Figure 2: Minimax search showing branches that can be cut off.

This all means that sometimes the search can be aborted, because we find out that the search subtree won't lead us to any viable answer. This optimization is known as alpha-beta cutoffs, and it works as follows:

1. Pass two values around the tree nodes:
   - the alpha value, which holds the best MAX value found;
   - the beta value, which holds the best MIN value found.
2. At a MAX level, before evaluating each child path, compare the returned value of the previous path with the beta value. If the value is greater, abort the search for the current node.
3. At a MIN level, before evaluating each child path, compare the returned value of the previous path with the alpha value. If the value is lesser, abort the search for the current node.

Listing 3 shows the full pseudocode for MinMax with alpha-beta cutoffs.

    MinMax (GamePosition game) {
      return MaxMove (game, -INFINITY, INFINITY);
    }

    MaxMove (GamePosition game, Integer alpha, Integer beta) {
      if (GameEnded(game) || DepthLimitReached()) {
        return EvalGameState(game, MAX);
      }
      else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
          move <- MinMove(ApplyMove(game), alpha, beta);
          if (Value(move) > Value(best_move)) {
            best_move <- move;
            alpha <- Value(move);
          }
          // Ignore remaining moves
          if (alpha >= beta) {
            return best_move;
          }
        }
        return best_move;
      }
    }

    MinMove (GamePosition game, Integer alpha, Integer beta) {
      if (GameEnded(game) || DepthLimitReached()) {
        return EvalGameState(game, MIN);
      }
      else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
          move <- MaxMove(ApplyMove(game), alpha, beta);
          if (Value(move) < Value(best_move)) {
            best_move <- move;
            beta <- Value(move);
          }
          // Ignore remaining moves
          if (beta <= alpha) {
            return best_move;
          }
        }
        return best_move;
      }
    }

How much better does MinMax with alpha-beta cutoffs behave compared with normal MinMax? It depends on the order in which the moves are searched. If the way the game positions are generated doesn't create situations where the algorithm can take advantage of alpha-beta cutoffs, the improvements won't be noticeable. However, if the evaluation function and the generation of game positions lead to alpha-beta cutoffs, the improvements might be great.
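The cutoff logic can be sketched in Python over a toy game tree given as nested lists (inner lists are internal nodes, plain numbers are leaf evaluations). The tree shape below is invented to mirror Figure 2: node A evaluates to 3, and node B's subtree is abandoned as soon as its first child returns 2.

```python
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node  # leaf: return its evaluation
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # remaining children cannot change the result
                break
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:   # cutoff at a MIN node
                break
        return value

# Root is MAX over two MIN nodes: A = [3, 5] and B = [2, 9].
# B's second child (9) is never visited.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # → 3
```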


Adding Alpha-Beta Cutoffs

With all this talk about search speed, many of you might be wondering what this is all about. Well, search speed is very important in AI because if an algorithm takes too long to give a good answer, it may not be usable. For example, a good MinMax implementation with an evaluation function capable of giving very good estimates might be able to search 1000 positions per second. In tournament chess each player has around 150 seconds to make a move, so the program would probably be able to analyze 150,000 positions during that period. But in chess each position has around 35 possible branches! In the end, the program would only be able to analyze around 3 to 4 moves ahead in the game [1]. Even humans with very little practice in chess can do better than this. But if we use MinMax with alpha-beta cutoffs, again with a decent implementation and a good evaluation function, the resulting behaviour might be much better. In this case, the program might be able to double the number of analyzed positions and thus become a much tougher adversary.

An example implementation

An example is always a good way to show how an algorithm might be implemented. Back in 1999, a friend and I implemented a checkers game as a Java application for the AI class at university. I have recently ported the game to C#. The MinMax implementation isn't great; in fact, the best thing to be said about it is that it works. However, I think it presents one way the algorithm might be implemented, and as an example it is good enough.

Figure 3: Example of a board with the values estimated for each position.

The game uses MinMax with alpha-beta cutoffs for the computer moves. The evaluation function is a weighted average of the positions occupied by the checker pieces. Figure 3 shows the values for each board position. The value of each board position is multiplied by the type of the piece that rests on that position, described in Table 1. Listing 4 shows the Java implementation of the evaluation function; it has been slightly modified for the article. Please note that the code uses a vector, 0-31 based, for the board game positions. The game code is available at:

- Java version - http://www.progtools.org/games/projects/checkers/checkers.html
- C# version - http://www.progtools.org/games/projects/sharp_checkers/sharp_checkers.html
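Listing 4 itself is not reproduced here. As an illustration of the idea (square weight times piece value, summed over the board), here is a hypothetical Python sketch. The weights, piece values, and board encoding below are invented for illustration; they are not the ones from Figure 3 and Table 1.

```python
# Hypothetical positional evaluation for checkers, in the spirit of
# Listing 4: squares are indexed 0-31, each square carries a weight
# (edge squares weighted higher, since edge pieces can't be captured),
# and each piece type carries a signed value.
SQUARE_WEIGHT = [4 if i % 8 in (0, 7) else 2 for i in range(32)]
PIECE_VALUE = {'man': 1, 'king': 3, 'enemy_man': -1, 'enemy_king': -3}

def evaluate(board):
    """board: dict mapping a square index (0-31) to a piece type."""
    return sum(SQUARE_WEIGHT[sq] * PIECE_VALUE[piece]
               for sq, piece in board.items())

print(evaluate({0: 'man', 12: 'king', 31: 'enemy_man'}))  # → 6
```

A positive score favours the side whose pieces are 'man'/'king' here; in a zero-sum setting the opponent simply negates it.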

Conclusion

MinMax might not be the best answer for every kind of computer game that needs AI resembling human behaviour, but given a good implementation it can create a tough adversary. I hope this article has given you some insight into the MinMax algorithm and how you might use it in your games.

 

Eight queens puzzle From Wikipedia, the free encyclopedia

One solution to the eight queens puzzle

The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens attack each other. Thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n-queens problem of placing n queens on an n×n chessboard, where solutions exist for all natural numbers n with the exception of 2 and 3.[1]

Contents

 

1 History
2 Constructing a solution
  2.1 Solutions to the eight queens puzzle
3 Explicit solutions
4 Counting solutions
5 Related problems
6 The eight queens puzzle as an exercise in algorithm design
7 An animated version of the recursive solution
8 Sample program
9 See also
10 References
11 External links
  11.1 Links to solutions

History

The puzzle was originally proposed in 1848 by the chess player Max Bezzel, and over the years, many mathematicians, including Gauss, have worked on this puzzle and its generalized n-queens problem. The first solutions were provided by Franz Nauck in 1850. Nauck also extended the puzzle to the n-queens problem (on an n×n board, a chessboard of arbitrary size). In 1874, S. Günther proposed a method of finding solutions by using determinants, and J.W.L. Glaisher refined this approach. Edsger Dijkstra used this problem in 1972 to illustrate the power of what he called structured programming. He published a highly detailed description of the development of a depth-first backtracking algorithm.[2]

Constructing a solution

The problem can be quite computationally expensive, as there are 4,426,165,368 (i.e., 64 choose 8) possible arrangements of eight queens on an 8×8 board, but only 92 solutions. It is possible to use shortcuts that reduce computational requirements, or rules of thumb that avoid brute-force computational techniques. For example, just by applying a simple rule that constrains each queen to a single column (or row), though still considered brute force, it is possible to reduce the number of possibilities to just 16,777,216 (that is, 8^8) possible combinations. Generating the permutations that are solutions of the eight rooks puzzle[2] and then checking for diagonal attacks further reduces the possibilities to just 40,320 (that is, 8!). The following Python code uses this technique to calculate the 92 solutions:[3]

    from itertools import permutations

    n = 8
    cols = range(n)
    for vec in permutations(cols):
        if n == len(set(vec[i] + i for i in cols)) \
             == len(set(vec[i] - i for i in cols)):
            print(vec)

 

These brute-force algorithms are computationally manageable for n = 8, but would be intractable for problems of n ≥ 20, as 20! = 2.433 × 10^18. Advancements for this and other toy problems are the development and application of heuristics (rules of thumb) that yield solutions to the n-queens puzzle at a small fraction of the computational requirements.

This heuristic solves n queens for any n ≥ 4. It forms the list of numbers for vertical positions (rows) of queens with horizontal position (column) simply increasing. n is 8 for the eight queens puzzle.

1. If the remainder from dividing n by 6 is not 2 or 3, then the list is simply all even numbers followed by all odd numbers ≤ n.
2. Otherwise, write separate lists of even and odd numbers (i.e. 2,4,6,8 - 1,3,5,7).
3. If the remainder is 2, swap 1 and 3 in the odd list and move 5 to the end (i.e. 3,1,7,5).
4. If the remainder is 3, move 2 to the end of the even list and 1,3 to the end of the odd list (i.e. 4,6,8,2 - 5,7,9,1,3).
5. Append the odd list to the even list and place queens in the rows given by these numbers, from left to right (i.e. a2, b4, c6, d8, e3, f1, g7, h5).

For n = 8 this results in the solution shown above. A few more examples follow.

- 14 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 3, 1, 7, 9, 11, 13, 5.
- 15 queens (remainder 3): 4, 6, 8, 10, 12, 14, 2, 5, 7, 9, 11, 13, 15, 1, 3.
- 20 queens (remainder 2): 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 3, 1, 7, 9, 11, 13, 15, 17, 19, 5.
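The five steps above can be sketched directly in Python (the function name is ours). It returns the row for each column, 1-based, left to right:

```python
def queens_heuristic(n):
    evens = list(range(2, n + 1, 2))
    odds = list(range(1, n + 1, 2))
    r = n % 6
    if r == 2:
        # Step 3: swap 1 and 3 in the odd list, then move 5 to the end.
        odds = [3, 1] + odds[2:]
        odds.remove(5)
        odds.append(5)
    elif r == 3:
        # Step 4: move 2 to the end of the even list, 1 and 3 to the end
        # of the odd list.
        evens = evens[1:] + [2]
        odds = odds[2:] + [1, 3]
    # Steps 1/5: append the odd list to the even list.
    return evens + odds

print(queens_heuristic(8))  # → [2, 4, 6, 8, 3, 1, 7, 5]
```

This reproduces the 14-, 15-, and 20-queen sequences listed above.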

 

Solutions to the eight queens puzzle

The eight queens puzzle has 92 distinct solutions. If solutions that differ only by symmetry operations (rotations and reflections) of the board are counted as one, the puzzle has 12 unique (or fundamental) solutions. A fundamental solution usually has 8 variants (including its original form) obtained by rotating 90, 180, or 270 degrees and then reflecting each of the four rotational variants in a mirror in a fixed position. However, should a solution be equivalent to its own 90-degree rotation (as happens to one solution with 5 queens on a 5×5 board), that fundamental solution will have only 2 variants. Should a solution be equivalent to its own 180-degree rotation, it will have 4 variants. Of the 12 fundamental solutions to the problem with 8 queens on an 8×8 board, exactly 1 is equal to its own 180-degree rotation, and none is equal to its 90-degree rotation. Thus the number of distinct solutions is 11×8 + 1×4 = 92. The unique solutions are presented below:
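Before presenting them, the arithmetic 11×8 + 1×4 = 92 can be checked programmatically, by generating all distinct solutions with the permutation filter shown earlier and collapsing them under the board's eight rotations and reflections. This check is ours, not part of the original text:

```python
from itertools import permutations

n = 8
# All 92 distinct solutions, as row-per-column tuples.
solutions = [vec for vec in permutations(range(n))
             if n == len(set(vec[i] + i for i in range(n)))
                  == len(set(vec[i] - i for i in range(n)))]

def symmetries(vec):
    """Yield the solution's 8 rotated/mirrored variants as tuples."""
    board = {(c, r) for c, r in enumerate(vec)}
    for _ in range(4):
        board = {(r, n - 1 - c) for c, r in board}  # rotate 90 degrees
        for b in (board, {(n - 1 - c, r) for c, r in board}):  # and mirror
            yield tuple(r for c, r in sorted(b))

# Each equivalence class is represented by its lexicographically
# smallest variant; the number of classes is the fundamental count.
fundamental = {min(symmetries(vec)) for vec in solutions}
print(len(solutions), len(fundamental))  # → 92 12
```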

[Diagrams: unique solutions 1 through 12, each shown on an 8×8 board]

Explicit solutions

Explicit solutions exist for placing n queens on an n×n board, requiring no combinatorial search whatsoever.[4] The explicit solutions exhibit stair-stepped patterns, as in the following examples for n = 8, 9 and 10:

[Diagrams: explicit stair-stepped solutions for 8, 9 and 10 queens]

Counting solutions

The following table gives the number of solutions for placing n queens on an n × n board, both unique (sequence A002562 in OEIS) and distinct (sequence A000170 in OEIS), for n = 1–14, 24–26:

    n    unique                 distinct
    1    1                      1
    2    0                      0
    3    0                      0
    4    1                      2
    5    2                      10
    6    1                      4
    7    6                      40
    8    12                     92
    9    46                     352
    10   92                     724
    11   341                    2,680
    12   1,787                  14,200
    13   9,233                  73,712
    14   45,752                 365,596
    ..   ..                     ..
    24   28,439,272,956,934     227,514,171,973,736
    25   275,986,683,743,434    2,207,893,435,808,352
    26   2,789,712,466,510,289  22,317,699,616,364,044

Note that the six queens puzzle has fewer solutions than the five queens puzzle. There is currently no known formula for the exact number of solutions.
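That five-versus-six anomaly is easy to confirm by brute force, reusing the permutation filter from the earlier Python snippet (the function name here is ours):

```python
from itertools import permutations

# Count distinct n-queens solutions by filtering row permutations
# for diagonal attacks.
def count_solutions(n):
    return sum(1 for vec in permutations(range(n))
               if n == len(set(vec[i] + i for i in range(n)))
                    == len(set(vec[i] - i for i in range(n))))

print(count_solutions(5), count_solutions(6))  # → 10 4
```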
