Engines Navigating The Tree: AB-Pruning Minimax
Some trees grow as big as castles. (Photo by Ferdinand Reus; Cropped; CC 2.0 license)


the_real_greco

On his first move, white can play any of 20 legal moves. Several of these moves are good; a few more are probably playable; the rest are bad. But whichever white chooses, black will have 20 responses available to him. This means that by white's second turn, there could be any of (20*20)=400 positions on the board. And that's just from the starting position! The set of all possible moves and countermoves in a game of chess is called its game tree.

After generating all the moves in the current position, your engine needs to look ahead at what might happen next in the game. Using an estimate of 20 legal moves per position (called the branching factor), navigating the game tree is a monumental task. Looking at every position quickly becomes hopeless, as this table shows:

Depth (ply) | Positions in tree | Time to evaluate at 100 Mnps
2 | 421 | 0.0000042 seconds
4 | 168,421 | 0.0016 seconds
6 | 67,368,421 | 0.67 seconds
8 | 26,947,368,421 | 4 minutes 29 seconds
10 | 10,778,947,368,421 | 1 day 5 hours 56 minutes 29 seconds
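
If you want to check these numbers yourself, a few lines of Python will do it. The uniform branching factor of 20 and the speed of 100 million positions per second are just the assumptions behind the table, not measurements of any real engine.

```python
# Back-of-the-envelope numbers for the table above.
# Assumptions: a uniform branching factor of 20 legal moves per position,
# and an engine that evaluates 100 million positions per second (100 Mnps).
BRANCHING_FACTOR = 20
NODES_PER_SECOND = 100_000_000

def positions_in_tree(depth: int) -> int:
    """Every position from the root down to `depth` ply: 1 + 20 + 20**2 + ..."""
    return sum(BRANCHING_FACTOR ** d for d in range(depth + 1))

for depth in (2, 4, 6, 8, 10):
    nodes = positions_in_tree(depth)
    seconds = nodes / NODES_PER_SECOND
    print(f"depth {depth:2d}: {nodes:>17,} positions, about {seconds:.3g} seconds")
```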

A brute-force approach is obviously not going to work. If we want a strong engine, we'll need to figure out which positions are worth evaluating and which positions can be ignored. Modern engines use one of two approaches: alpha-beta pruning minimax (AB) or Monte-Carlo tree search (MCTS). In this post I focus on AB-pruning minimax.


AB-pruning minimax is the traditional approach that engines have taken towards guiding their searches. Every strong engine before AlphaZero, such as Stockfish, Rybka, Crafty, and Fritz, was an AB engine. It is a combination of two techniques: minimax (the older approach) and AB-pruning (an improvement on the minimax algorithm).

To begin, it's helpful to look at a generic game tree:

Positions following black's move are black circles; positions following white's move are white squares.

This game tree begins after white has made a move to generate position 0. It is now black's turn, and he has two legal moves that lead to positions 1.1 and 1.2. If black chooses position 1.1, his opponent now can choose between 2.1 and 2.2. Position 2.2 loses; black is checkmated and the game ends. Position 2.1 continues the game and gives black three choices of position (3.1, 3.2, 3.3). 

(Was that last paragraph badly-worded? Probably. But I hope it makes sense when paired with the tree diagram! Try to decipher the other half of the tree, in which black chooses position 1.2.)

The 'minimax' part of AB-pruning minimax is conceptually simple. Using its evaluation function, an engine evaluates the last positions it reaches in its search (what I call pseudoterminal positions). In my example tree these final positions are {2.2, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6}. Once those are assigned centipawn values, the engine works back 'up' the tree.

A half-completed naive (non-AB) minimax search. Can you complete it? Note that all positions should eventually be labelled with a centipawn value, even though not all were evaluated using the engine's evaluation algorithm.

In words: The engine evaluates 3.1, 3.2, and 3.3, and finds that the best outcome is -9 (maybe the engine wins a queen!). The engine knows that if it can get to position 2.1, it can get a -9 position... so 2.1 is also given an evaluation of -9.

Next the engine evaluates position 2.2 (because 2.2 is also a pseudoterminal node), and realizes that 2.2 is a loss for black! That's the worst outcome possible, so it is given an arbitrarily high value (here +100). If position 1.1 occurs, white will play checkmate. So 1.1 is given a value of +100 also. It doesn't matter that there is also a -9 underneath 1.1; white will never opt for 2.1 instead of 2.2.

Go ahead and try to make sense of the other side of the tree! Evaluations of pseudoterminal nodes are provided. What value is assigned to each node? What evaluation is given for the initial position?
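
To make the "working back up the tree" idea concrete, here is a minimal sketch of naive minimax in Python, run on a hard-coded toy tree shaped like the diagram. Only the -9 and the +100 mate come from the walkthrough above; the other centipawn values, and the exact shape of the 1.2 half of the tree, are stand-ins I made up for illustration. A real engine would generate child positions from an actual board rather than look them up in a dictionary.

```python
# A toy game tree: each position maps either to a list of child positions or,
# if it is pseudoterminal, to a centipawn evaluation (positive favours white).
# Only the -9 and the +100 checkmate come from the walkthrough; every other
# value, and the shape of the 1.2 half, is an invented stand-in.
TREE = {
    "0":   ["1.1", "1.2"],
    "1.1": ["2.1", "2.2"],
    "2.1": ["3.1", "3.2", "3.3"],
    "2.2": +100,                 # checkmate: black has been mated
    "3.1": -9,                   # e.g. black wins a queen
    "3.2": +25,
    "3.3": +40,
    "1.2": ["2.3", "2.4"],
    "2.3": ["3.4", "3.5"],
    "2.4": ["3.6"],
    "3.4": -2,
    "3.5": +12,
    "3.6": -15,
}

def minimax(tree, node, white_to_move):
    """Naive minimax: evaluate every pseudoterminal node, then pass the best
    value for the side to move back up the tree (max for white, min for black)."""
    entry = tree[node]
    if not isinstance(entry, list):          # pseudoterminal: a centipawn value
        return entry
    children = [minimax(tree, child, not white_to_move) for child in entry]
    return max(children) if white_to_move else min(children)

# Position 0 arises after white's move, so black is to move there.
print(minimax(TREE, "0", white_to_move=False))   # -> -2 with these stand-in values
```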


It turns out that the naive minimax algorithm doesn't save us much time. The exponential growth of the game tree, together with chess's high branching factor, ensures that most of the positions in the tree are pseudoterminal nodes. With a branching factor of 20:

Depth (ply) | Pseudoterminal nodes | Total nodes | Fractional savings using minimax
1 | 20 | 21 | 0.0476
2 | 400 | 421 | 0.0499
3 | 8,000 | 8,421 | 0.0500
4 | 160,000 | 168,421 | 0.0500
5 | 3,200,000 | 3,368,421 | 0.0500
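
(You can reproduce this table in a couple of lines; the branching factor of 20 is again just an assumption.)

```python
# How much of the tree does naive minimax let us skip?  Internal nodes get
# their values backed up from below, so only the pseudoterminal (leaf)
# nodes need the evaluation function.  Branching factor of 20 assumed.
B = 20

for depth in range(1, 6):
    leaves = B ** depth
    total = sum(B ** d for d in range(depth + 1))      # root included
    savings = 1 - leaves / total                       # fraction never evaluated
    print(f"depth {depth}: {leaves:>9,} leaves / {total:>9,} nodes, savings {savings:.4f}")
```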

This means that even with our minimax strategy, we still have to run the evaluation function on roughly 95% of the game tree: with 20 moves per position, the pseudoterminal nodes always outnumber everything above them by about 19 to 1. Obviously, we haven't come close to reaching our goal. To get there we need the 'AB-pruning' part of AB-pruning minimax.


Note: Different sources give different (opposite) definitions of alpha and beta. It doesn't matter which one is which, just so long as one doesn't confuse them.

In essence, alpha-beta pruning is an attempt to prove beforehand that a position or branch does not need to be evaluated. It works on the same principle as a naive minimax, starting by evaluating some pseudoterminal nodes and passing those values 'up' the tree. However, AB pruning does not evaluate all the pseudoterminal nodes initially; it is constantly trying to prove that some branches can be safely skipped.

I'll switch to a new generic tree to give another example. Let's say we have the following game tree:

A second example game tree. Note that the engine does not know the shape of the tree before it begins its search.

Initially the engine doesn't know how many positions are below the root node (position 0). So it picks one branch to begin; we will assume it always starts on the left of our diagram. The algorithm begins with position 1.1.

But after reaching position 1.1 the algorithm must continue down the tree since it isn't at a pseudoterminal node.  Down the algorithm goes until it hits a predetermined depth. Here we'll assume it halts at depth=3, passing through 1.1 and 2.1 and reaching position 3.1. Next, both 3.1 and its sister 3.2 are given an evaluation using the engine's evaluation function. At this point we are in the following situation:

The first step in the AB-pruning algorithm. The engine has only visited positions {0, 1.1, 2.1, 3.1, 3.2}. The rest of the tree is unknown.

Here comes the AB-pruning trick! We can't yet assign a final value to position 1.1, since it might be influenced by positions on the 2.2 branch. However, we can deduce that 1.1 will have an evaluation of at least -1. If the game reaches position 1.1, white can (if he wishes) choose to steer the game into position 2.1, where the worst black can do to him is the -1 position (3.2). If white finds a better alternative, he will take it... but if he doesn't find a better alternative, he is guaranteed that -1 position.

'Alpha' in 'alpha-beta pruning minimax' refers to this kind of bound: the minimum value the maximizing side (here white) is already guaranteed. In this case the alpha value for position 1.1 is -1.

Our algorithm continues! The next position we must evaluate is 3.3:

Position 3.3 has been evaluated. What does that mean for the rest of the tree?

Consider the tree from white's perspective. You have already been guaranteed an evaluation of at worst -1 from the 2.1 branch. If you go down the 2.2 branch instead, you allow black to steer towards the -5 position, so the 2.2 branch is worth at most -5 to you. This upper bound is the 'beta' value: the maximum possible value a position might have.

Therefore: why would white ever go down the 2.2 branch? White wouldn't. White would choose the 2.1 branch every time. In fact, white doesn't even need to know anything further about the 2.2 branch. How many moves are legal from 2.2? It doesn't matter. What evaluations do they have? After seeing that one is -5, no other evaluations matter. The rest of that branch can be pruned, and an evaluation of -1 can be assigned to position 1.1.

While we only pruned one position from the 1.1 branch, something more drastic happens on the 1.2 branch. Consider:

Beginning to prune the 1.2 branch. Notice that I've kept an evaluation of -1 for position 1.1.

Position 2.3 is checkmate and a loss for black, so it is given an evaluation of +100. We can use +100 as the alpha (minimum) value for position 1.2. If 1.2 arises on the board, white is guaranteed at least +100. Obviously this is bad for black.

In fact, it doesn't really matter what values are on {2.4, 3.5, 3.6, 3.7}. White will never opt for position 2.4, since it can never give him a better evaluation than the +100 he has already secured through 2.3 (and black, knowing that 1.2 leads to mate, will avoid it entirely in favour of the -1 line through 1.1). For this reason we can do some radical pruning and finish our AB-pruning minimax algorithm:

The final product of our AB pruning algorithm.
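
To tie the walkthrough together, here is a minimal alpha-beta sketch over a toy tree shaped like the second diagram. The node names follow the figure, but apart from the -1, the -5, and the +100 mate the centipawn values are stand-ins I invented, and I've guessed at the parts of the tree's shape the walkthrough doesn't spell out. A real engine would generate moves from a board rather than read a dictionary.

```python
import math

# Toy tree shaped like the second diagram.  The -1, the -5, and the +100
# checkmate are from the walkthrough; the remaining values are invented.
TREE2 = {
    "0":   ["1.1", "1.2"],
    "1.1": ["2.1", "2.2"],
    "2.1": ["3.1", "3.2"],
    "2.2": ["3.3", "3.4"],
    "1.2": ["2.3", "2.4"],
    "2.3": +100,                       # checkmate: black has been mated
    "2.4": ["3.5", "3.6", "3.7"],
    "3.1": +3,  "3.2": -1,
    "3.3": -5,  "3.4": +7,
    "3.5": 0,   "3.6": +2,  "3.7": -4,
}

def alphabeta(tree, node, white_to_move, alpha=-math.inf, beta=math.inf, visited=None):
    """Alpha-beta pruning minimax.
    alpha: the value white (the maximiser) is already guaranteed elsewhere.
    beta:  the value black (the minimiser) can already hold white to elsewhere.
    Once a branch is proven no better than an existing alternative
    (alpha >= beta), its remaining children are skipped."""
    if visited is not None:
        visited.append(node)
    entry = tree[node]
    if not isinstance(entry, list):                  # pseudoterminal or mate
        return entry
    if white_to_move:
        best = -math.inf
        for child in entry:
            best = max(best, alphabeta(tree, child, False, alpha, beta, visited))
            alpha = max(alpha, best)
            if alpha >= beta:                        # black has a better option upstream
                break                                # prune the remaining children
        return best
    else:
        best = math.inf
        for child in entry:
            best = min(best, alphabeta(tree, child, True, alpha, beta, visited))
            beta = min(beta, best)
            if alpha >= beta:                        # white has a better option upstream
                break
        return best

visited = []
print(alphabeta(TREE2, "0", white_to_move=False, visited=visited))   # -> -1
print(visited)   # 3.4, 2.4, 3.5, 3.6 and 3.7 never appear: they were pruned
```

With these stand-in values the search skips exactly the nodes pruned in the walkthrough ({3.4, 2.4, 3.5, 3.6, 3.7}) and assigns -1 to the root.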

A few final observations!

1.     If we want to go deeper with our search (and we should; depth=3 is terrible), the process is easy to extend: select the depth=4 positions, evaluate them, and continue pruning more branches as we go. Branches that were already pruned can stay pruned for as long as the comparisons that justified the pruning still hold; if the deeper evaluations overturn one of those comparisons, the engine has to go back and search the branch after all.

2.     AB-pruning minimax and naive minimax always give identical results: the pruning only discards branches that have been proven irrelevant, so the root evaluation and the chosen move are the same at any given depth. What does change is how much of the tree gets pruned: the more often the engine examines the strongest moves first, the earlier the cutoffs arrive. Therefore, if you want your engine to prune more, improve the order in which it considers moves!

3.     We started our AB-pruning algorithm from the left and worked towards the right. However, the algorithm can start anywhere and proceed in either direction. This does not affect the final evaluation, but it can change which (and how many) branches are pruned. What would the final tree look like if we had proceeded right to left? (The sketch below is one way to check.)
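
To see observation 3 in action, you can reuse the alpha-beta sketch from above and simply reverse every child list before searching. (This assumes the TREE2 dictionary and alphabeta() function defined earlier; the values are still my stand-ins.)

```python
# Same search, but visiting each node's children right-to-left instead.
# Reuses TREE2 and alphabeta() from the sketch above.
reversed_tree = {node: list(reversed(entry)) if isinstance(entry, list) else entry
                 for node, entry in TREE2.items()}

visited = []
print(alphabeta(reversed_tree, "0", white_to_move=False, visited=visited))  # still -1
print(visited)   # with these stand-in values, every node is visited: nothing is pruned
```

The evaluation is unchanged, but with this particular ordering nothing gets pruned at all, which is exactly why real AB engines work so hard to examine their most promising moves first.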


Thanks for reading! I'd love it if you left a comment, and I'll try to answer any questions. Next post I'll cover the other, newer search procedure: MCTS.