HFL pruning

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde

HFL pruning

Postby Tord Romstad » 19 Jul 2005, 14:47

Hi all,

I have just started experimenting with something I have wanted to do for a long time: using the shape of the search tree to make extension and pruning decisions. During iteration N, I collect statistics about the shape of the subtree below every node, and use these statistics to decide whether this node should be pruned, extended or searched with normal depth at iteration N+1. The idea is to try to use the subtree shape to estimate the probability that adding one to the search depth would yield a different outcome.

A very simple first experiment is included in Glaurung 0.2.5j FRC: A new pruning technique which I call HFL pruning. My results so far have been very confusing and contradictory, but at least they are good enough that I think it is possible to make the idea work with some tuning and extra conditions.

First, two definitions: A move is "easily refuted" if it is refuted by the null move or the first real move searched at the child node. A fail low node is a "hopeless fail low node", or "HFL node", if all the moves at this node are easily refuted.
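As a sketch, the classification above might be expressed like this (hedged: the `refutation_move_number` statistic is a hypothetical bookkeeping value, not Glaurung's actual code; 0 means the move was refuted by the null move, 1 by the first real move at the child node):

```python
# Sketch of classifying a fail-low node as "hopeless" (HFL).
# refutation_move_number is a hypothetical per-move statistic:
#   0 = refuted by the null move at the child node
#   1 = refuted by the first real move searched at the child node
#   2+ = refutation required searching deeper into the move list

def is_easily_refuted(refutation_move_number: int) -> bool:
    """A move is "easily refuted" if the null move or the first
    real move at the child node refuted it."""
    return refutation_move_number <= 1

def is_hfl_node(refutation_move_numbers: list[int]) -> bool:
    """A fail-low node is an HFL node if every move was easily refuted."""
    return all(is_easily_refuted(n) for n in refutation_move_numbers)
```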

It seems reasonable to assume that HFL nodes will normally still fail low when searched to a deeper search depth. My experiments confirm this. In Glaurung, nodes which fail low at depth N fail high at depth N+1 in about 98.5% of the cases. For HFL nodes, the number is around 99.7%. I use this observation in the following way: My transposition table entries have an HFL flag. When I look up a position in the transposition table and the HFL flag is set, I allow an alpha hash table cutoff even if the depth of the transposition table entry is one ply less than the current search depth.
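The probe logic described above could look roughly like this (a hedged sketch for clarity; `TTEntry`, `allows_alpha_cutoff` and the field layout are illustrative, not Glaurung's actual implementation):

```python
# Sketch of an HFL-aware transposition-table probe. All names are
# illustrative; a real engine's TT entry layout differs.
from dataclasses import dataclass

BOUND_UPPER = 0  # stored score is an upper bound (the node failed low)

@dataclass
class TTEntry:
    depth: int    # draft (remaining depth) the entry was searched to
    score: int
    bound: int
    hfl: bool     # every move at this node was "easily refuted"

def allows_alpha_cutoff(entry: TTEntry, depth: int, alpha: int) -> bool:
    """True if the stored fail-low result may stand in for a search
    of the current depth."""
    if entry.bound != BOUND_UPPER or entry.score > alpha:
        return False
    if entry.depth >= depth:
        return True  # ordinary depth-sufficient hash cutoff
    # HFL pruning: accept an entry that is one ply too shallow,
    # but only if the node was flagged as hopeless.
    return entry.hfl and entry.depth >= depth - 1
```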

This technique saves a lot of nodes. In most positions, the difference is somewhere between 20 and 40%. It seriously hurts the tactical abilities of the program, however. Many combinations are found one or even two plies later with HFL pruning enabled. I am still not sure whether the net effect is positive or negative. If I manage to find a way to reduce the tactical defects, I think HFL pruning will be very effective.

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: HFL pruning

Postby Alessandro Scotti » 19 Jul 2005, 16:04

Hi Tord,
I played around with a similar idea a lot for Kiwi 0.5b, as I was trying to save not only nodes but also re-searches from MTDf. The results were very interesting, although I've never managed to include it in an "official" version because Kiwi has way too many problems to solve first.
My approach is a little bit different from yours, and I mainly tested it for reductions. Direct cutoffs did not work too well, but that depends a lot on what the definition of a "bad" node was of course.
At any rate, here's the method.
When searching a node, I collect statistics for every move, including score, node count and so on. Then after searching all moves, I analyze the collected data and try to locate which nodes were "hopelessly" bad, as you say. Criteria I tried included for example the percentage of nodes searched.
Now, here comes a nice property of a bitboard engine: it always generates moves in the same order, so each move can be identified by an integer number. So it is possible to store one bit of information for up to 64 moves using a 64-bit integer, which I put into the hash table. (The move generator was also modified to return the original move index that would be destroyed by sorting.)
When the node is searched again, I probe the "bad move" bit and, if set, reduce the search or skip the node.
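The bookkeeping for such a bitmask could be sketched as follows (assuming a deterministic move generator so the generation index identifies the move across searches; the function names are mine, not Kiwi's):

```python
# Sketch of the "bad move" bitmask. Each move's generation index
# (0..63) addresses one bit of a 64-bit integer stored in the hash
# table alongside the position.

def mark_bad(mask: int, move_index: int) -> int:
    """Set the bit for a move judged hopelessly bad after the search."""
    assert 0 <= move_index < 64
    return mask | (1 << move_index)

def is_bad(mask: int, move_index: int) -> bool:
    """Probe the bit when the node is searched again; if set, the
    move can be reduced or skipped."""
    return bool(mask & (1 << move_index))
```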
I've tried to put this in a few words, but I spent weeks testing... version 0.5b (without pruning... sigh! :() was out more than two months after 0.5a!
Alessandro Scotti
 
Posts: 306
Joined: 20 Nov 2004, 00:10
Location: Rome, Italy

Idea for move ordering?

Postby Joachim Rang » 19 Jul 2005, 16:12

Did somebody try to use such information not for reductions but for move ordering? Perhaps that is where such information is beneficial.

regards Joachim
Joachim Rang
 
Posts: 69
Joined: 26 Sep 2004, 22:00

Re: HFL pruning

Postby Matthias Hartwich » 19 Jul 2005, 18:08

The idea is nice, but how do you count nodes in the presence of a hashtable?
Or is it sufficient to just include a node count in the hashtable and use this number in all positions with draft >= depth?

Matthias
Last edited by Matthias Hartwich on 19 Jul 2005, 18:12, edited 1 time in total.
Matthias Hartwich
 
Posts: 3
Joined: 12 Jul 2005, 16:13
Location: Dormagen / Germany

Re: HFL pruning

Postby Dann Corbit » 19 Jul 2005, 18:11

Tord Romstad wrote:Hi all,

I have just started experimenting with something I have wanted to do for a long time: using the shape of the search tree to make extension and pruning decisions. During iteration N, I collect statistics about the shape of the subtree below every node, and use these statistics to decide whether this node should be pruned, extended or searched with normal depth at iteration N+1. The idea is to try to use the subtree shape to estimate the probability that adding one to the search depth would yield a different outcome.

A very simple first experiment is included in Glaurung 0.2.5j FRC: A new pruning technique which I call HFL pruning. My results so far have been very confusing and contradictory, but at least they are good enough that I think it is possible to make the idea work with some tuning and extra conditions.

First, two definitions: A move is "easily refuted" if it is refuted by the null move or the first real move searched at the child node. A fail low node is a "hopeless fail low node", or "HFL node", if all the moves at this node are easily refuted.

It seems reasonable to assume that HFL nodes will normally still fail low when searched to a deeper search depth. My experiments confirm this. In Glaurung, nodes which fail low at depth N fail high at depth N+1 in about 98.5% of the cases. For HFL nodes, the number is around 99.7%. I use this observation in the following way: My transposition table entries have an HFL flag. When I look up a position in the transposition table and the HFL flag is set, I allow an alpha hash table cutoff even if the depth of the transposition table entry is one ply less than the current search depth.

This technique saves a lot of nodes. In most positions, the difference is somewhere between 20 and 40%. It seriously hurts the tactical abilities of the program, however. Many combinations are found one or even two plies later with HFL pruning enabled. I am still not sure whether the net effect is positive or negative. If I manage to find a way to reduce the tactical defects, I think HFL pruning will be very effective.

Tord


I don't really understand how HFL is different from verified null move pruning. Do you deepen the reductions for HFL?
Dann Corbit
 

Re: HFL pruning

Postby Tord Romstad » 20 Jul 2005, 10:55

Alessandro Scotti wrote:I played around with a similar idea a lot for Kiwi 0.5b, as I was trying to save not only nodes but also re-searches from MTDf. The results were very interesting, although I've never managed to include it in an "official" version because Kiwi has way too many problems to solve first.
My approach is a little bit different from yours, and I mainly tested it for reductions. Direct cutoffs did not work too well, but that depends a lot on what the definition of a "bad" node was of course.

You could describe what I do as reductions rather than direct cutoffs too, if you prefer. It is mostly a matter of point of view. In my case, reducing the depth by one ply is exactly equivalent to a direct cutoff, because the result of a one ply shallower search is already found in the transposition table.

Alessandro Scotti wrote:At any rate, here's the method.
When searching a node, I collect statistics for every move, including score, node count and so on. Then after searching all moves, I analyze the collected data and try to locate which nodes were "hopelessly" bad, as you say. Criteria I tried included for example the percentage of nodes searched.

I agree with Matthias here: Percentage of nodes searched is probably a bad measure, because of the transposition table.

Alessandro Scotti wrote:Now, here comes a nice property of a bitboard engine: it always generates moves in the same order, so each move can be identified by an integer number. So it is possible to store one bit of information for up to 64 moves using a 64-bit integer, which I put into the hash table. (The move generator was also modified to return the original move index that would be destroyed by sorting.)
When the node is searched again, I probe the "bad move" bit and, if set, reduce the search or skip the node.

Nice idea. Eight bytes is more than I want to spend for storing this kind of information, though. I prefer to mark positions as good or bad, rather than every single move in a position.

Alessandro Scotti wrote:I've tried to put this in a few words, but I spent weeks testing... version 0.5b (without pruning... sigh! :() was out more than two months after 0.5a!

Yeah, I know how it feels. It is hard to make something like this work well without careful tuning, which takes a lot of time.

Joachim Rang wrote:Did somebody try to use such information not for reductions but for move ordering? Perhaps that is where such information is beneficial.

Perhaps (at least in the case of Alessandro's scheme), but I am not very optimistic about this. In practice, I have found it surprisingly hard to improve noticeably on the standard move ordering heuristics (hash table move, captures, killers, remaining moves ordered by history counters).

Dann Corbit wrote:I don't really understand how HFL is different from verified null move pruning. Do you deepen the reductions for HFL?

I am not sure exactly how to start explaining this, because I'm not quite sure what you didn't understand. I cannot see any similarities between HFL pruning and verified null move pruning at all. The techniques are entirely unrelated, and are used for different purposes. Verified null move pruning (and null move pruning in general) is used to reduce the amount of work at fail high nodes, while HFL pruning is used to reduce the amount of work at fail low nodes.

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: HFL pruning

Postby Uri Blass » 21 Jul 2005, 12:22

I think that HFL pruning is already done by chess programs.

History-based pruning is a type of HFL pruning, and the same goes for pruning based on the fact that the evaluation does not give much hope of failing high.

The only new idea that I find in your post is using the fact that the same node failed low at reduced depth.

Uri
Uri Blass
 
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

Re: HFL pruning

Postby Tord Romstad » 21 Jul 2005, 12:41

Uri Blass wrote:I think that HFL pruning is already done by chess programs.

History-based pruning is a type of HFL pruning, and the same goes for pruning based on the fact that the evaluation does not give much hope of failing high.

Clearly I did a very poor job at explaining the idea. HFL pruning has almost nothing in common with the techniques you mention, except that they all apply to nodes where we expect the search to fail low.

I'll try to write a better explanation later, if I end up deciding that the idea actually works.

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: HFL pruning

Postby Sune Fischer » 21 Jul 2005, 13:20

Tord Romstad wrote:Clearly I did a very poor job at explaining the idea. HFL pruning has almost nothing in common with the techniques you mention, except that they all apply to nodes where we expect the search to fail low.


Hi Tord,

I understand your idea :)

Last week I tried a somewhat similar thing.
I tried using tactical aspects of the search to better select the root move.

The basic idea is this: if we have two moves with an almost equal score at the root, but one of them has a subtree that is filled with threats and mate variations against our opponent, then choose that move instead.
This would be a bit similar to how humans choose a move, I think: even if you can't find anything concrete right now, you can certainly predict a lot of problems for your opponent.

The technical difficulty here is that one does not have the absolute score on the second best root move (not usually anyway).
One could try searching with a lower window but it gets technical...

-S.
Sune Fischer
 
Posts: 126
Joined: 07 Oct 2004, 11:12
Location: Denmark

Re: HFL pruning

Postby diepeveen » 21 Jul 2005, 15:53

Tord Romstad wrote:Hi all,

I have just started experimenting with something I have wanted to do for a long time: using the shape of the search tree to make extension and pruning decisions. During iteration N, I collect statistics about the shape of the subtree below every node, and use these statistics to decide whether this node should be pruned, extended or searched with normal depth at iteration N+1. The idea is to try to use the subtree shape to estimate the probability that adding one to the search depth would yield a different outcome.

A very simple first experiment is included in Glaurung 0.2.5j FRC: A new pruning technique which I call HFL pruning. My results so far have been very confusing and contradictory, but at least they are good enough that I think it is possible to make the idea work with some tuning and extra conditions.

First, two definitions: A move is "easily refuted" if it is refuted by the null move or the first real move searched at the child node. A fail low node is a "hopeless fail low node", or "HFL node", if all the moves at this node are easily refuted.

It seems reasonable to assume that HFL nodes will normally still fail low when searched to a deeper search depth. My experiments confirm this. In Glaurung, nodes which fail low at depth N fail high at depth N+1 in about 98.5% of the cases.
Tord


Tord, I guess I read it wrong, or you wrote it down wrong. Let's not write it so much in prose, but a bit more in math.

PreCondition : position P, Iteration N, Depthleft D, status : fail low
Claim : position P, Iteration N+1, Depthleft D+1 : 98.5% odds for fail high.

I assume you mean 98.5% odds for fail low again.

By the way, I call that a FLIPrate of 1.5%; see older postings over the past 10 years.
Fliprate = the chance that a node stored in the hashtable with score <= alpha now flips to > alpha.

Odds in Diep are < 1% for nodes at small search depths, and a LOT smaller than 1% for bigger depthlefts.
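Vincent's fliprate statistic could be tallied with a pair of counters, roughly like this (a generic sketch; the names are illustrative, not Diep's code):

```python
# Sketch of measuring the fliprate: among nodes that failed low
# (score <= alpha) at depth N, the fraction that fail high when
# re-searched at depth N+1.

class FliprateStats:
    def __init__(self) -> None:
        self.fail_low_rechecks = 0  # re-searches of previously fail-low nodes
        self.flips = 0              # of those, how many now fail high

    def record(self, prev_failed_low: bool, now_fails_high: bool) -> None:
        if prev_failed_low:
            self.fail_low_rechecks += 1
            if now_fails_high:
                self.flips += 1

    def fliprate(self) -> float:
        """Fraction of fail-low nodes that flipped to fail high."""
        return self.flips / max(1, self.fail_low_rechecks)
```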

Tord Romstad wrote:For HFL nodes, the number is around 99.7%. I use this observation in the following way: My transposition table entries have an HFL flag. When I look up a position in the transposition table and the HFL flag is set, I allow an alpha hash table cutoff even if the depth of the transposition table entry is one ply less than the current search depth.


Alternatively, you could decide to search only the 'patzermoves' in this node to normal depth and the rest of the moves to depth-1.

Tord Romstad wrote:This technique saves a lot of nodes. In most positions, the difference is somewhere between 20 and 40%. It seriously hurts the tactical abilities of the program, however. Many combinations are found one or even two plies later with HFL pruning enabled. I am still not sure whether the net effect is positive or negative. If I manage to find a way to reduce the tactical defects, I think HFL pruning will be very effective.

Tord


Fixing tactical defects in forward pruning is not such a problem; tactical defects are very easy to fix. The real problem is finding something that works well positionally.

What type of nullmove do you use? R=3 everywhere, or some R=3/R=2 mixture? Double nullmove?

The general idea is very interesting.

However, it will prune a lot less when your evaluation function is pretty aggressive. Additionally, the real problem is finding a good IMPLEMENTATION of what an HFL node is.

For example, a real weakness in the above definition is that the first move giving a cutoff counts as an 'easy' refutation. It might be that the hashtable or the killer table supplied that cutoff move.

An additional weakness is that you probably tested it on tactical positions. In positions where you threaten a mate in 1, of course you can do 4 nullmoves in a row without problems, but in real life this isn't always possible. In most games there aren't non-stop mate-in-1 threats that can prune entire subtrees.

Tactical testsets are really ugly for testing algorithms, because any non-normal thing is going to work there: we know in advance that there is a trick that is going to work.

Vincent
diepeveen
 
Posts: 116
Joined: 28 Jun 2005, 01:09
Location: Netherlands

Re: HFL pruning

Postby Uri Blass » 21 Jul 2005, 17:21

Tord Romstad wrote:
Uri Blass wrote:I think that HFL pruning is already done by chess programs.

History-based pruning is a type of HFL pruning, and the same goes for pruning based on the fact that the evaluation does not give much hope of failing high.

Clearly I did a very poor job at explaining the idea. HFL pruning has almost nothing in common with the techniques you mention, except that they all apply to nodes where we expect the search to fail low.

I'll try to write a better explanation later, if I end up deciding that the idea actually works.

Tord


I think that I did a bad reading job and drew conclusions from the name.
Maybe it is better to change the name, because pruning hopeless fail-low nodes is already done by programs, but that is not what you mean by HFL; what you describe is a special case of "hopefully fail low".

Maybe it is better to call it ERP, for easily refuted pruning.

If I understand correctly, you prune a position that you expect to fail low if, the previous time you searched it at reduced depth, all the moves of the side to move were refuted by the opponent's first move or by the null move (no threat).

Uri
Uri Blass
 
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

