Search questions

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde


Postby Sven Schüle » 17 Jul 2007, 11:25

Hi,

A common piece of advice for improving an engine is to remove "unusual" code. I am now scanning through my search code and have found a couple of issues which I would like to clarify. Although I currently don't think these are really "unusual", I would like to get some opinions on them.

1. Many people seem to use either PVS or plain AlphaBeta. I use FAB (Fail-soft AlphaBeta) because I read that it is sometimes superior to plain AlphaBeta (it returns tighter bounds when an iteration fails high or low), it always searches the same nodes as AlphaBeta when given the same window, and - unfortunately - I have been too lazy up to now to rewrite my search using PVS.

Is anyone else using FAB, or does anyone have severe objections against it?

2. I use a small optimization (at least I think it is one) when defining the alpha-beta window, and also when deciding on the conditions for a repeated iteration after a fail high/fail low. I determine the minimum and maximum possible values of a position p, given an iteration number i, as follows:
Code: Select all
int minPossibleValue = hasMatingMaterial(p.opponent) ? -(MAX_VALUE-i) : 0;
int maxPossibleValue = hasMatingMaterial(p.side) ? MAX_VALUE-i : 0;
// in reality it's even more precise, e.g. 'side' can only mate in an odd number of plies,
// but I omit this here for simplicity
ASSERT(maxPossibleValue >= minPossibleValue); // of course

I do this mainly to save a little effort when a forced mate is found. The 'hasMatingMaterial' stuff seems to help in some special endgames where one side can't win anymore but the opponent struggles to win the game because the score often drops close to a draw.

The basic assumption is that a mate in fewer than i plies must already have been found during one of the previous iterations, so the current iteration cannot return a mate shorter than i. I know this might be broken by some aggressive pruning methods, but up to now I don't use them.

I initialize the window with
Code: Select all
int const windowSize = 30; // for example; signed, so the subtraction below cannot wrap around as unsigned
int alpha = max(minPossibleValue, bestValueOfLastIteration-windowSize);
int beta  = min(maxPossibleValue, bestValueOfLastIteration+windowSize);

Finally, when an iteration fails low or high, I apply the following modified technique:
Code: Select all
int v = falphabeta(p, alpha, beta, maxDepth);
if      (v >= beta  && v < maxPossibleValue) v = falphabeta(p, v, maxPossibleValue, maxDepth);
else if (v <= alpha && v > minPossibleValue) v = falphabeta(p, minPossibleValue, v, maxDepth);

This also makes it possible to stop the whole search when (v == minPossibleValue && v < 0) or (v == maxPossibleValue && v > 0), because this indicates that the shortest possible mate has been found and no further iteration is necessary.

My construction implies a slightly extended understanding of a beta cutoff during search. The standard case is that at any node deep in the tree, v >= beta means that the current node's value will have no impact on the value of the root node. My extension is that beta may also be maxPossibleValue (of the root node, or even of the current node), in which case v >= beta may also mean that no further search is necessary because no remaining move can turn out to have a better value than v, even though it is not necessarily a refutation of the opponent's previous move.

Now when writing this description, I feel that there may be some important points which I missed and which might classify my modifications as "unusual". But I can't yet see them.

Any opinions about this?

Thanks in advance,
Sven
Sven Schüle
 
Posts: 240
Joined: 26 Sep 2004, 20:19
Location: Berlin, Germany

Re: Search questions

Postby Pradu » 17 Jul 2007, 14:43

Sven Schüle wrote:Hi,

A common piece of advice for improving an engine is to remove "unusual" code. I am now scanning through my search code and have found a couple of issues which I would like to clarify. Although I currently don't think these are really "unusual", I would like to get some opinions on them.

1. Many people seem to use either PVS or plain AlphaBeta. I use FAB (Fail-soft AlphaBeta) because I read that it is sometimes superior to plain AlphaBeta (it returns tighter bounds when an iteration fails high or low), it always searches the same nodes as AlphaBeta when given the same window, and - unfortunately - I have been too lazy up to now to rewrite my search using PVS.

Is anyone else using FAB, or does anyone have severe objections against it?
I use it.

2. I use a small optimization (at least I think it is one) when defining the alpha-beta window, and also when deciding on the conditions for a repeated iteration after a fail high/fail low. I determine the minimum and maximum possible values of a position p, given an iteration number i, as follows:
Code: Select all
int minPossibleValue = hasMatingMaterial(p.opponent) ? -(MAX_VALUE-i) : 0;
int maxPossibleValue = hasMatingMaterial(p.side) ? MAX_VALUE-i : 0;
// in reality it's even more precise, e.g. 'side' can only mate in an odd number of plies,
// but I omit this here for simplicity
ASSERT(maxPossibleValue >= minPossibleValue); // of course

I do this mainly to save a little effort when a forced mate is found. The 'hasMatingMaterial' stuff seems to help in some special endgames where one side can't win anymore but the opponent struggles to win the game because the score often drops close to a draw.
I think this was described in the Recognizers section of "Scalable search in computer chess" by Heinz.

The basic assumption is that a mate in fewer than i plies must already have been found during one of the previous iterations, so the current iteration cannot return a mate shorter than i. I know this might be broken by some aggressive pruning methods, but up to now I don't use them.
I think this is called mate-distance pruning by Fabien. I use it as well.

I initialize the window with
Code: Select all
int const windowSize = 30; // for example; signed, so the subtraction below cannot wrap around as unsigned
int alpha = max(minPossibleValue, bestValueOfLastIteration-windowSize);
int beta  = min(maxPossibleValue, bestValueOfLastIteration+windowSize);

Finally, when an iteration fails low or high, I apply the following modified technique:
Code: Select all
int v = falphabeta(p, alpha, beta, maxDepth);
if      (v >= beta  && v < maxPossibleValue) v = falphabeta(p, v, maxPossibleValue, maxDepth);
else if (v <= alpha && v > minPossibleValue) v = falphabeta(p, minPossibleValue, v, maxDepth);

This also makes it possible to stop the whole search when (v == minPossibleValue && v < 0) or (v == maxPossibleValue && v > 0), because this indicates that the shortest possible mate has been found and no further iteration is necessary.
Perhaps a quick suggestion here: if you get a fail low, re-search with (v-1) instead of v. If you get a fail high, re-search with (v+1) instead of v. This will make the hash table store exact scores instead of bounded scores, which in practice leads to faster searches. At least this is the case with "scout-search" re-searches.

Pradu
 
Posts: 343
Joined: 12 Jan 2005, 19:17
Location: Chandler, Arizona, USA

Re: Search questions

Postby H.G.Muller » 17 Jul 2007, 14:47

I use FAB in Joker and uMax. I never tried to compare it with PVS. You need pretty good move ordering to make PVS better (or there will be too many re-searches). But if you have perfect move ordering, there is again no difference.

With LMR, however, PVS is more natural, as you would need a re-search on most moves anyway if they score above alpha, even in FAB.

It seems you actually introduce an alpha cutoff here. This is a form of futility pruning.

My engines treat mate in an unusual way: they add 1 score point to every node score below eval before returning it, and thus have to pre-adapt the window by subtracting one score point from bounds below the (current) eval. As a to-be-mated-in-N score is always below the current eval, this means that the losing side is awarded one point for every move by which he can delay the mate (and that the opponent will try to deny him that score by going for the fastest mate). A nice thing is that it also works for other gains than checkmate.

A consequence of this mechanism is that, in branches radiating from a forced to-be-mated position, the window might shift to below the checkmated score (-INF), so that even the initial BestScore can cause a beta cutoff before any moves have been searched (similar to a stand-pat cutoff in QS). This automatically prevents the search from going deeper than the quickest mate that can be forced.
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Search questions

Postby Chan Rasjid » 17 Jul 2007, 15:10

Hello,

Is anyone else using FAB, or does anyone have severe objections against it?


I seem to remember that using fail-soft needs some care in certain circumstances, but I'm not sure if they are critical. If a node fails low, it may be bad to return best < alpha-start if some (or even one) of the moves were not searched (e.g. skipped by futility pruning). We may then need to fail hard (and hash hard) here and return alpha-start (why?). I think it is very easy to have bugs when hashing is added to the search.

Rasjid
Chan Rasjid
 
Posts: 73
Joined: 23 Feb 2005, 16:36
Location: Singapore

Re: Search questions

Postby Sven Schüle » 17 Jul 2007, 15:13

Pradu wrote:Perhaps a quick suggestion here: if you get a fail low, re-search with (v-1) instead of v. If you get a fail high, re-search with (v+1) instead of v. This will make the hash table store exact scores instead of bounded scores, which in practice leads to faster searches. At least this is the case with "scout-search" re-searches.

Hi Pradu,

Thanks for your suggestion. Actually I already do it this way (I wrote my posting without looking at the code), but I did not know the reasons for it yet.

Sven
Last edited by Sven Schüle on 17 Jul 2007, 21:59, edited 1 time in total.

Re: Search questions

Postby H.G.Muller » 17 Jul 2007, 15:21

Chan Rasjid wrote:Hello,
I seem to remember that using fail-soft needs some care in certain circumstances, but I'm not sure if they are critical. If a node fails low, it may be bad to return best < alpha-start if some (or even one) of the moves were not searched (e.g. skipped by futility pruning). We may then need to fail hard (and hash hard) here and return alpha-start (why?). I think it is very easy to have bugs when hashing is added to the search.

You indeed have to be a bit careful. But if the assumption is that the score of a move can never be higher than PieceValue[Victim]+MARGIN, you can simply use that latter value to update your BestScore in a fail-soft scheme for futility-pruned moves. The hashed score then is usable (as upper bound) in probes with other alpha as well, as in such cases you would also have pruned the corresponding move.

Re: Search questions

Postby Chan Rasjid » 17 Jul 2007, 18:52

Hello,

I forgot to mention null-move as I have not been using it for a long time.

The side that passes (makes the null move) and fails high should only return beta, i.e. revert to fail-hard. This ensures that if the child node below fails low, it will also fail hard. I don't exactly know why it should be this way, nor how this will interact with null windows, PVS, hashing, etc.

Rasjid

Re: Search questions

Postby Sven Schüle » 17 Jul 2007, 20:54

H.G.Muller wrote:It seems you actually introduce an alpha cutoff here. This is a form of futility pruning.
Hi H.G.,

I do not quite understand which part of my posting you refer to when you say that I introduce an alpha cutoff. Could you explain this briefly?

Sven

Re: Search questions

Postby Sven Schüle » 17 Jul 2007, 21:55

Pradu wrote:
Sven Schüle wrote:The basic assumption is that a mate in fewer than i plies must already have been found during one of the previous iterations, so the current iteration cannot return a mate shorter than i. I know this might be broken by some aggressive pruning methods, but up to now I don't use them.
I think this is called mate-distance pruning by Fabien. I use it as well.
Hi Pradu,

In Fruit 2.1 I see mate-distance pruning being applied at qsearch nodes; I use it at full-search nodes as well.

But what I do here is different from mate-distance pruning. While the latter is based on the current node's height (distance from root), here I care about the current iteration number and modify the alpha-beta window before starting a new iteration.

Of course the ideas are similar.

Sven

Re: Search questions

Postby Sven Schüle » 17 Jul 2007, 22:12

Chan Rasjid wrote:I seem to remember that using fail-soft needs some care in certain circumstances, but I'm not sure if they are critical. If a node fails low, it may be bad to return best < alpha-start if some (or even one) of the moves were not searched (e.g. skipped by futility pruning). We may then need to fail hard (and hash hard) here and return alpha-start (why?).

Chan Rasjid wrote:The side that passes (makes the null move) and fails high should only return beta, i.e. revert to fail-hard. This ensures that if the child node below fails low, it will also fail hard. I don't exactly know why it should be this way, nor how this will interact with null windows, PVS, hashing, etc.
Hi Rasjid,

Thanks for your hints. I will think about them and perhaps even test whether this is an improvement for me.

Now we only need someone who also knows the "why" :D

Sven

