
Null-Move Forward Pruning

Extra speed with no down-side

Null-move forward pruning gives a chess program a dramatic reduction in branching factor, with some manageable risk of missing something important.  It results in a dramatic improvement in search depth, and in most cases markedly decreases the amount of time required to find tactics.  It does this by pruning out a lot of useless crap and leaving mostly good stuff.

The technique has been described in several publications, but the article that got everyone thinking about null-move is an ICCA Journal article by Chrilly Donninger, published in September 1993.

Imagine a chess position in a tree somewhere.  Your program is going to search each of the legal moves in this position to a depth of D.  If any of those moves result in a score that exceeds beta, you are going to return beta immediately.  If any of them exceed alpha, but don't exceed beta, you are going to remember the move and the score, since it could be part of the principal variation.  If all of them are <= alpha, you are going to return alpha.

Null-move forward pruning is a step you perform prior to searching any of the moves.  You ask the question, "If I do nothing here, can the opponent do anything?"

Note that in the regular case described two paragraphs ago, you are not asking this question.  You are asking about the best way to hurt the opponent.  Asking if the opponent can hurt you is something very different.

It turns out that there are a lot of cases where the opponent can't hurt you.  Let's say that you are up a rook, and none of your pieces are en prise.  In this case, the opponent makes some random move, and you still bash them, since you are up a rook.  The point of null-move forward pruning is simply to get rid of such nonsense without having to spend a lot of time on it.

This is kind of like a fighter expressing supreme confidence in his ability by giving the opponent a free shot.  If the opponent can't knock the fighter down with a free shot, chances are he's going to lose once the fighter actually goes to the trouble of hitting him.

I'll stop talking around the problem now and explain how this works in computer chess.  Before you search any moves, and in fact before you generate any moves, you do a reduced-depth search, with the opponent to move first, and if that search results in a score >= beta, you simply return beta without searching any moves.

The idea is that you give the opponent a free shot at you, and if your position is still so good that you exceed beta, you assume that you'd also exceed beta if you went and searched all of your moves.

The reason this saves time is that this initial search is made with reduced depth.  The depth reduction factor is called R.  So rather than searching all of your moves to depth D, you search your opponent's moves with a depth of D-R.  An excellent value of R is 2.  So, if for instance you were going to search all of your moves to depth 6, you end up searching all of your opponent's moves to depth 4.

This results in a massive time savings, and a practical speedup of a ply or two of search depth.  That's an enormous gain!

Here is the source, added to an alpha-beta search; the null-move code is the commented block at the top of the function:

#define R   2

 

int AlphaBeta(int depth, int alpha, int beta)
{
    int val;

    if (depth == 0)
        return Evaluate();
    MakeNullMove();              // The null-move search:  give the
    val = -AlphaBeta(depth - 1 - R, -beta, -beta + 1);
    UnmakeNullMove();            // opponent a free shot, reduced
    if (val >= beta)             // depth, minimal window.
        return beta;
    GenerateLegalMoves();
    while (MovesLeft()) {
        MakeNextMove();
        val = -AlphaBeta(depth - 1, -beta, -alpha);
        UnmakeMove();
        if (val >= beta)
            return beta;
        if (val > alpha)
            alpha = val;
    }
    return alpha;
}

I'm using a trick in this code.  I want to see if the score of the null-move search is beta or better.  If it's not beta or better, I don't care how much worse than beta it is, so I use a minimal window, in order to try to get more cutoffs faster.  Essentially I'm calling it with (beta - 1, beta), but since you have to reverse alpha and beta and negate them when you recurse, it comes out the way it appears in the source code.
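The window bookkeeping is easy to get backwards, so here is the swap-and-negate spelled out as a tiny standalone helper (nothing engine-specific is assumed; the function name is just for illustration):

```c
/* Negamax recursion negates and swaps the window: a parent window
   (alpha, beta) becomes (-beta, -alpha) in the child.  For the
   minimal window (beta - 1, beta), that comes out to
   (-beta, -beta + 1), which is exactly what the call in the source
   above passes down. */
void ChildWindow(int alpha, int beta, int *childAlpha, int *childBeta)
{
    *childAlpha = -beta;
    *childBeta  = -alpha;
}
```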

It goes without saying that this doesn't work if the side to move is in check (because the opponent immediately takes the king).  You must also decide how far null-move forward pruning will be allowed to recurse, since if you let several null moves follow in a row the search degenerates down to nothing.  One obvious approach is to not allow two null-move searches to occur without an intervening "real" move.  Another idea is to allow exactly two null-move searches before forcing a "real" move.  In practice these seem to work about equally well.
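The first rule is commonly implemented by threading an allowNull flag through the search: the null-move search is called with the flag cleared, and every real move is called with it set, so two null moves can never occur back to back.  The sketch below shows that control flow on a toy "game" (the position is just an integer score from the side to move's point of view, and a move adds -1, 0, or +1), so everything chess-specific is stubbed out and none of this is real engine code:

```c
#define R 2

static const int deltas[3] = { -1, 0, 1 };

int AlphaBeta(int pos, int depth, int alpha, int beta, int allowNull)
{
    int i, val;

    if (depth <= 0)     /* depth can go negative after the -1-R cut */
        return pos;     /* stands in for Evaluate() */
    if (allowNull) {
        /* Null move: hand the turn over (the score flips sign),
           search shallower with a minimal window, and clear the
           flag so the child can't answer with another null move. */
        val = -AlphaBeta(-pos, depth - 1 - R, -beta, -beta + 1, 0);
        if (val >= beta)
            return beta;
    }
    for (i = 0; i < 3; i++) {
        /* A "real" move re-enables the null move for the child. */
        val = -AlphaBeta(-(pos + deltas[i]), depth - 1, -beta, -alpha, 1);
        if (val >= beta)
            return beta;
        if (val > alpha)
            alpha = val;
    }
    return alpha;
}
```

Note the depth test is `<= 0` rather than `== 0`, because depth - 1 - R can jump past zero.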

Well, almost no down-side

Unfortunately, null-move forward pruning doesn't work in some cases.  A big assumption is made -- the assumption that making a move will result in a higher score than not making a move.  Unfortunately, there is a whole class of positions where this is not true, and they are common enough that there is even a name for them.  These positions are called zugzwangs.  A zugzwang is a position where you are fine if you don't move, but your position collapses if you are forced to move.  Here is a classic example:

In this position, if white is forced to move, he has to play Kb2, and black plays Kd2 and the pawn queens.  If white doesn't have to move, then black plays Kc3 and the position is a draw.  (This is actually a mutual zugzwang, since both sides suffer from having to move, but that's beside the point.)

If this position were reached, and null-move forward pruning were attempted, it could indicate that black has no threat, because in fact black doesn't have a threat.  Black is waiting for white to wreck his own position, which is something different.

For this reason, null-move forward pruning is not used, at least in a straightforward way, in endgames.  If you do try to use it in endgames, very bad things will happen very often.
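A simple guard, sketched below with hypothetical bookkeeping (inCheck and a count of the side to move's non-pawn pieces are assumed to be maintained elsewhere in the engine), is to refuse the null move when in check or when only kings and pawns remain:

```c
/* Returns nonzero if attempting a null move looks safe.  inCheck
   and nonPawnPieces (knights, bishops, rooks, and queens belonging
   to the side to move) are assumed to be tracked by the engine. */
int NullMoveAllowed(int inCheck, int nonPawnPieces)
{
    if (inCheck)
        return 0;   /* the opponent would just take the king   */
    if (nonPawnPieces == 0)
        return 0;   /* pawn endgame -- zugzwang is too likely  */
    return 1;
}
```

The pawn-endgame test is crude; it doesn't catch zugzwangs that happen with pieces on the board, but it cheaply avoids the worst cases.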

Another, nastier example is as follows:

This position is taken from Goetsch and Campbell's "Experiments with the Null-Move Heuristic" published in Computers, Chess, and Cognition, ISBN 0-387-97415-6.

In this position, with white to move, white can't do anything and is mated on the next move.  A two-ply search will spot this with no problem:  1. <any> Qg2#.

If you try to use null-move forward pruning here, something bad will happen.  We'd been planning to do a full-width two-ply search, so let's instead do a zero-ply search for the other side and try to detect threats.  In zero plies, black can't make any moves, so the "Evaluate" function is called, and returns +5 or so, since white is up a rook.  This is probably above beta, so the search returns beta.

This is not what should have happened!  The search should have returned a very small value, indicating that white is mated.

What we see here is a type of horizon effect that is common when null-move forward pruning is used.  Without null-move forward pruning, white makes a move that doesn't achieve anything, and then there are enough plies of search depth (one in this case) for black to kill white.  With null-move forward pruning and with an R of sufficiently high value (for instance R=2), white can do nothing, and yet black doesn't have enough depth left to see the win.

This example may strike you as odd, given that it occurs with a very shallow search, but there are a great many cases where this happens with just enough search depth that the tactic could have been seen by a normal search, but is missed with null-move forward pruning.  In fact, this pattern of a black queen on h3 and a pawn on f3 is an extremely dangerous case for chess programs.

Another problem with null-move forward pruning is that it introduces search instability.  Whether the null-move search succeeds or fails depends in part upon the value of beta.  The next time this node is visited, the value of beta might be different, so the search might return an irrationally different value.  You could pass in an alpha-beta window of (7, 30) and the search might fail high, but if you pass in (7, 50), it could fail low.

In practice, the search speedup outweighs the occasional tactical mistake and the addition of search instability.

Some ideas

It's interesting to fiddle around trying to find a better value for R than two.  You can try one, three, four, some larger number, or a number that changes depending upon depth of search, how much material is on the board, etc.  I've never been able to do better than straight R=2, but there has been some research that suggests that other values might be good, and there are a few people who are using R=3 in at least part of the tree.
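As a sketch of the "R depends upon depth and material" idea, here is one shape such a rule might take.  The thresholds are illustrative guesses, loosely in the spirit of the adaptive schemes in the literature, not tuned values:

```c
/* Illustrative adaptive null-move reduction: use the riskier R = 3
   where there is plenty of remaining depth and enough non-pawn
   material that the shallower search is still meaningful, and fall
   back to the conservative R = 2 everywhere else.  The cutoffs
   (depth > 6, three pieces) are made-up examples. */
int NullReduction(int depth, int nonPawnPieces)
{
    if (depth > 6 && nonPawnPieces >= 3)
        return 3;
    return 2;
}
```

The null-move search would then be made to depth - 1 - NullReduction(depth, nonPawnPieces) instead of a fixed depth - 1 - R.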

It's also tempting to try to figure out a way to use null-move forward pruning in endgames, through some sort of verification search.  If your null-move search fails high, you do a normal search to reduced depth, and see if that fails high as well.  This seems to me like it might be very expensive, but it could be worth looking at.
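The verification idea reduces to a small decision rule: take the cutoff only if both the null-move search and a confirming normal search fail high.  The sketch below takes the two scores as plain arguments so it stands alone; in a real engine the verification score would come from a lazy call back into the search, made only after the null move fails high (since that's the whole point of not paying for it up front):

```c
/* Verified null move: the fast null-move fail-high alone is not
   trusted.  verifyVal is the score of a normal reduced-depth
   search of the real moves; we cut off only if it agrees. */
int VerifiedNullCutoff(int nullVal, int verifyVal, int beta)
{
    if (nullVal < beta)
        return 0;              /* null move failed low: no cutoff */
    return verifyVal >= beta;  /* cut only if the verification
                                  search also failed high         */
}
```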

There are other enhancements that are worth trying, but I'm not trying to make a complete list here.  You can look in the Donninger article, the article in Computers, Chess, and Cognition, or at any of Ernst Heinz's articles about null-move forward pruning and related topics.

 
Send mail to brucemo@seanet.com with questions or comments about this web site.
Copyright © 2001 Bruce Moreland
Last modified: 11/04/02