Code:
int search(MOVE move, int depth)
{
    DO_MOVE(move);
    for(depth_cnt = 1; depth_cnt < depth; depth_cnt++)
    {
        best_score = -INFINITY;
        FOR_ALL_MOVES
        {
            score = -search(nextmove, depth_cnt);
            move_list[nextmove].score = score;
            best_score = max(score, best_score);
            if(score > beta) break;
        }
        SORT_MOVES_BY_SCORE;
    }
    UNDO_MOVE(move);
    return best_score;
}
In words: first we search all moves to depth n, so that we know in which order we have to search them to depth n+1.
Is this really the way that everyone does it?
It has the disadvantage that alpha-beta pruning strongly interferes with the underlying idea:
Suppose on iteration 1 we find the best move, and it is good enough to cause a beta cutoff. This remains so up to depth 5: each time we get a quick and efficient cutoff. But on iteration 6 the move turns out not so good after all, and no longer gives the cutoff. So we have to search the other moves to 6 ply, to see what the best move is there (and hopefully find another one that gives the cutoff). But we know almost nothing about those moves: they were not searched on iteration 5, and some of them (or even all of them) might not have been searched even to a depth of 1, if we hit our cutoff move early. We run the risk of picking a total blunder as the next move to search to 6 ply; the move sorting is then based on purely static criteria.
It seems much smarter to slowly build up the depth of the other moves starting from 1:
Code:
int search(MOVE move, int depth)
{
    DO_MOVE(move);
    for(depth_cnt = 1; depth_cnt < depth; depth_cnt++)
    {
        best_score = -INFINITY;
        FOR_ALL_MOVES_IN_LIST
        {
            k = move_list[nextmove].depth;
            if(k < depth_cnt - 1) depth_cnt = k + 1; /* should not happen without pruning */
            while((move_list[nextmove].score > beta && k < depth - 1) || k < depth_cnt)
            {
                score = -search(nextmove, ++k);
                move_list[nextmove].score = score;
            }
            move_list[nextmove].depth = k; /* remember how deep this move was searched */
            best_score = max(score, best_score);
            if(score > beta) goto done;    /* cutoff: break out of both loops */
            if(move_list[nextmove].score < best_score - MARGIN) break;
        }
        SORT_MOVES_BY_SCORE;
    }
done:
    UNDO_MOVE(move);
    return best_score;
}
In words: as long as there is a move in the list that seems to give us a cutoff, deepen that move towards the maximum depth. But stop as soon as it no longer gives us a cutoff, and then continue with the other moves one by one at the depth we were originally working at, until another one crosses the beta threshold.
If we want to prune we can make MARGIN < INFINITY, so that moves that lag too far behind in score (i.e. are proven very bad at low depth) are ignored until either best_score drops, as the moves responsible for it are devalued by deeper search, or we change our mind and increase MARGIN again (e.g. when all other moves have been searched to the requested depth, in the hope that this will never happen because a cutoff occurs first).
The window is determined by best_score, which is based on the scores of the moves at the greatest depth currently known for each move, rather than on the scores at the depth we are currently working on.