Question regarding transposition tables (to clear or not)


Postby Roman Hartmann » 18 Jul 2008, 20:57

Hi,
I finally implemented a transposition table in my engine. It works and does help in the endgame quite a bit. There is one little problem, though, which I'm not sure how to handle.

If I have a position like Fine #70, the key move is shown after about 1 s at depth 25 with a convincing score. Everything is as expected, but as soon as I make a move I get a lot of cut-offs due to now-outdated data in the hash table and the search is messed up. This is easy to solve by clearing the hash table after a move is made and a new search is started, of course, but then I also lose the advantage of a transposition table in the middle game.

Actually I have already worked around this problem by clearing the transposition table only in the endgame phase. This works but looks like an ugly hack to me.

So what's the best solution for an always-overwrite scheme?

best regards
Roman

Re: Question regarding transposition tables (to clear or not)

Postby Ron Murawski » 18 Jul 2008, 23:37

Roman Hartmann wrote:
So what's the best solution for an always-overwrite scheme?

best regards
Roman


Hi Roman,

The best solution is to always overwrite! :D :) :D

Many engines have several overwrite entries and one depth-based entry for each hash table slot. The hash table needs all its depth-based entries decremented by 2 after the engine decides on its move. (Because when it is the engine's turn again, the depth-based entries in the hash are from the last search, which was 2 plies earlier.)
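
For illustration, here is one way such a bucket could look in C — a minimal sketch with invented names (HashEntry, HashSlot, hash_store), not code from any of the engines discussed here; the depth aging mentioned above is left out.
Code:

    #include <stdint.h>

    typedef struct {
        uint64_t key;    /* full Zobrist key for verification */
        int16_t  depth;  /* draft: remaining depth of the stored search */
        int16_t  score;
        uint8_t  type;   /* bound type: exact, lower or upper */
        uint16_t move;   /* packed from/to squares */
    } HashEntry;

    typedef struct {
        HashEntry deep;       /* replaced only by deeper searches */
        HashEntry always[3];  /* always-replace ring */
    } HashSlot;

    /* Hypothetical store: keep the deepest result, spill the rest. */
    void hash_store(HashSlot *slot, HashEntry e)
    {
        static unsigned rotor;
        if (e.depth >= slot->deep.depth)
            slot->deep = e;                /* depth-preferred entry */
        else
            slot->always[rotor++ % 3] = e; /* always-overwrite entries */
    }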

How many entries do you have for each hash slot?

Best regards,
Ron

Re: Question regarding transposition tables (to clear or not)

Postby Dann Corbit » 19 Jul 2008, 02:27

Roman Hartmann wrote:Hi,
I finally implemented a transposition table in my engine. It works and does help in the endgame quite a bit. There is one little problem, though, which I'm not sure how to handle.

If I have a position like Fine #70, the key move is shown after about 1 s at depth 25 with a convincing score. Everything is as expected, but as soon as I make a move I get a lot of cut-offs due to now-outdated data in the hash table and the search is messed up. This is easy to solve by clearing the hash table after a move is made and a new search is started, of course, but then I also lose the advantage of a transposition table in the middle game.

Actually I have already worked around this problem by clearing the transposition table only in the endgame phase. This works but looks like an ugly hack to me.

So what's the best solution for an always-overwrite scheme?

best regards
Roman


If your search gets messed up then you have a bug. The worst thing that should ever happen is that the search runs slower, and if the position is simply a forward position from the current node, we should not even see that happen.

Do you have a cross-check piece of the hash key in your hash table element?

My suggestion for clearing the hash: if you are given a position and it is not a PV node in your current hash table, then clear the hash; otherwise, never clear it. This is handy for processing a set of EPD records serially that do not belong to the same game. But if (for instance) you are analysing the moves of a single chess game serially, then you should not clear the hash, because doing so will slow down your search quite a bit.
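
As a rough sketch of that rule in C (probe_hash, clear_hash and BOUND_EXACT are hypothetical helpers, not a real API):
Code:

    #include <stdint.h>

    #define BOUND_EXACT 2

    int  probe_hash(uint64_t key, int *type);   /* hypothetical */
    void clear_hash(void);                      /* hypothetical */

    void on_new_root_position(uint64_t root_key)
    {
        int type;
        /* If the new root was never stored as a PV (exact) node, we are
           probably starting an unrelated position (e.g. a fresh EPD
           record), so the stale entries are useless: clear the table. */
        if (!probe_hash(root_key, &type) || type != BOUND_EXACT)
            clear_hash();
        /* Otherwise keep everything: in a normal game the old entries
           describe subtrees the new search is about to revisit. */
    }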

Ron's suggestion of "always overwrite" is certainly a serviceable method to keep the hash fairly current. What are you currently doing?

Re: Question regarding transposition tables (to clear or not)

Postby Zach Wegner » 19 Jul 2008, 03:58

Ron Murawski wrote:Many engines have several overwrite entries and one depth-based entry for each hash table slot. The hash table needs all its depth-based entries decremented by 2 after the engine decides on its move. (Because when it is the engine's turn again, the depth-based entries in the hash are from the last search, which was 2 plies earlier.)
Why would you need to do that? Hash entries are stored with the depth they were searched with, which is independent of the root position. Don't decrement by 2; it's a waste of time and efficiency.

Re: Question regarding transposition tables (to clear or not)

Postby P.K. Mukhopadhyay » 19 Jul 2008, 07:48

Roman Hartmann wrote:If I have a position like Fine #70, the key move is shown after about 1 s at depth 25 with a convincing score. Everything is as expected, but as soon as I make a move I get a lot of cut-offs due to now-outdated data in the hash table and the search is messed up. This is easy to solve by clearing the hash table after a move is made and a new search is started, of course, but then I also lose the advantage of a transposition table in the middle game.

Actually I have already worked around this problem by clearing the transposition table only in the endgame phase. This works but looks like an ugly hack to me.
Would it be possible to post the Fine #70 position? I use depth + age to decide replacement in the TT, and I am interested to see how this scheme fares in this position.

Regards
P.K. Mukhopadhyay

Re: Question regarding transposition tables (to clear or not)

Postby H.G.Muller » 19 Jul 2008, 09:16

It sounds like you don't update your hash key properly in the root (when you make the move at game level). Normally, what is left over from a previous search should still be perfectly valid. The entries are simply the results of searches from the given positions to the specified depth; redoing such a search to the same depth would reproduce the result at some effort, while taking it from the TT gives it for free.

The information might not be useful if the opponent did not play the PV move and you have a different search window, but in that case the entries are simply ignored when they don't carry a useful score bound. They should never harm the search.

Re: Question regarding transposition tables (to clear or not)

Postby Harald Johnsen » 19 Jul 2008, 10:00

Roman Hartmann wrote:
If I have a position like Fine #70, the key move is shown after about 1 s at depth 25 with a convincing score. Everything is as expected, but as soon as I make a move I get a lot of cut-offs due to now-outdated data in the hash table and the search is messed up.


I don't know what you call outdated entries. You *want* those cut-offs for transpositions, and you want those entries for your move ordering.
There is no such thing as an outdated entry: if you hit one, it cannot be outdated, because you need it.

Potential bug: you are supposed to store the search depth in the entries, i.e. the depth of the subtree growing from this position (the draft). You must not use the relative depth from the root of your search.
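
A minimal illustration of the difference, with invented names; only the stored number matters here:
Code:

    #include <stdint.h>

    void hash_store(uint64_t key, int draft, int score);           /* hypothetical */
    int  evaluate_subtree(uint64_t key, int alpha, int beta,
                          int draft, int ply);                     /* hypothetical */

    int search(uint64_t key, int alpha, int beta, int draft, int ply)
    {
        int score = evaluate_subtree(key, alpha, beta, draft, ply);
        /* Store 'draft', the remaining depth of the subtree below this
           node: it means the same thing in any later search. Storing
           'ply', the distance from the current root, would make the
           entry meaningless once the root moves on. */
        hash_store(key, draft, score);
        return score;
    }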


This is easy to solve by clearing the hash table after a move is made and a new search is started, of course, but then I also lose the advantage of a transposition table in the middle game.

Actually I have already worked around this problem by clearing the transposition table only in the endgame phase. This works but looks like an ugly hack to me.

I don't know what end game or middle game have to do with it.

So what's the best solution for an always-overwrite scheme?


Your bug has nothing to do with always-overwrite. Anyway, any replacement scheme will work with the same efficiency while the hash table is not full.

HJ.

Re: Question regarding transposition tables (to clear or not)

Postby Roman Hartmann » 19 Jul 2008, 11:49

Hi all,
thanks for all your comments so far. Below I answer the questions and also give additional details on what I'm currently doing.

The best solution is to always overwrite!

That's what I'm already doing.

How many entries do you have for each hash slot?

Only one so far, and it is always overwritten regardless of depth.

If your search gets messed up then you have a bug. The worst thing that should ever happen is that the search runs slower, and if the position is simply a forward position from the current node, we should not even see that happen.

It happens only in 'extreme' positions like Fine #70 (8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w - - 0 1). Initially it gives the right score, move and line. Then, when I make a move, the search seems to get misguided by previous hash entries.
This does not happen if the hash table is cleared before a new search is started after a move is made; then I can play the position out and the moves correspond with those of other engines.

My understanding of the problem is that the hash table does not take the path into account. Putting the king on b1 only makes sense if it is currently on a1 (Fine #70), but when this position is found in the hash table again later, it will just return a positive score even though the position is no good anymore. Other, more important entries might already have been overwritten. OK, this is what I think is going on, but I'm rather new to hashing, so I might be completely wrong about that.

Do you have a cross-check piece of the hash key in your hash table element?

Yes, not only a cross-check piece of the hash key: the whole 64-bit key is stored.

It sounds like you don't update your hash key properly in the root (when you do the move at game level).

It shouldn't matter, as I update the hash key whenever a move is made, regardless of where the move is made. But anyway, you're right: it shouldn't matter whether I make a move at the root or somewhere in the tree. Good point!
I will have to check whether my output routine at the root might somehow be messing things up ...

Potential bug: you are supposed to store the search depth in the entries, i.e. the depth of the subtree growing from this position (the draft). You must not use the relative depth from the root of your search.

I do store the depth of the subtree in the entries, and I use an entry later only if its depth is >= the current depth of the subtree.

I don't know what end game or middle game have to do with it.

When I clear the hash table after a move is made, I lose the advantage of having a hash table in the middle game, while clearing it after every move in the endgame phase doesn't seem to hurt the play too much.

H.G. Muller is probably on the right track: there might be something wrong with how I make the moves at the root. There should be no difference between making a move in the tree and making one at the root ...

Summary of what I'm doing:

Zobrist key:
- update the hash key whenever a move is made. To make sure this is done properly, I currently use a function that regenerates the key from scratch every time a move is made.
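
As an aside, one way to have both speed and safety is to keep the incremental update but assert it against a from-scratch recomputation while debugging. A minimal sketch with invented names; castling and en passant terms are omitted for brevity:
Code:

    #include <assert.h>
    #include <stdint.h>

    extern uint64_t zobrist_piece[12][64]; /* random number per piece/square */
    extern uint64_t zobrist_side;          /* side-to-move key */

    /* piece_on[sq] is a piece index 0..11, or -1 for an empty square. */
    uint64_t key_from_scratch(const int piece_on[64], int black_to_move)
    {
        uint64_t key = 0;
        for (int sq = 0; sq < 64; sq++)
            if (piece_on[sq] != -1)
                key ^= zobrist_piece[piece_on[sq]][sq];
        if (black_to_move)
            key ^= zobrist_side;
        return key;  /* castling and en passant terms omitted here */
    }

    /* While debugging, after every make_move():
       assert(incremental_key == key_from_scratch(piece_on, black_to_move));
       Once the incremental update is trusted, drop the recomputation --
       regenerating the key on every move is expensive. */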

Hash table:
- always overwrite, regardless of depth
- full 64-bit key is stored
- depth
- score (with special handling of mate scores)
- type (alpha, beta, exact)
- best move (only from- and to-square, for move ordering later)

Probe:
- check for an identical key
- depth of stored entry >= current depth
- cut-off for beta or exact entries (adjust alpha with alpha entries)

A sketch of this entry layout and probe logic follows below.
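Something like the following, in C, with invented names — not Roman's actual code. Note that bound-naming conventions vary between engines; in this sketch an 'alpha' entry is an upper bound, so besides giving a fail-low cutoff it can only tighten beta, while a 'beta' entry (lower bound) can tighten alpha. Mate-score ply adjustment is omitted.
Code:

    #include <stdbool.h>
    #include <stdint.h>

    enum { BOUND_ALPHA, BOUND_BETA, BOUND_EXACT };

    typedef struct {
        uint64_t key;    /* full 64-bit Zobrist key */
        int      depth;  /* draft of the stored search */
        int      score;
        int      type;   /* BOUND_ALPHA, BOUND_BETA or BOUND_EXACT */
        uint16_t move;   /* from/to squares for move ordering */
    } HashEntry;

    /* Returns true and sets *score when the entry allows an immediate
       cutoff; otherwise it may tighten the alpha/beta window. */
    bool hash_probe(const HashEntry *e, uint64_t key, int depth,
                    int *alpha, int *beta, int *score)
    {
        if (e->key != key || e->depth < depth)
            return false;                     /* miss, or too shallow */
        if (e->type == BOUND_EXACT) {
            *score = e->score;
            return true;
        }
        if (e->type == BOUND_BETA) {          /* lower bound */
            if (e->score >= *beta) { *score = e->score; return true; }
            if (e->score > *alpha) *alpha = e->score;
        } else {                              /* BOUND_ALPHA: upper bound */
            if (e->score <= *alpha) { *score = e->score; return true; }
            if (e->score < *beta) *beta = e->score;
        }
        return false;
    }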

best regards
Roman

Re: Question regarding transposition tables (to clear or not)

Postby H.G.Muller » 19 Jul 2008, 13:02

What you describe sounds fully correct. Is the side to move properly worked into the hash key (and in the same way irrespective of who has the move in the root)? It is indeed hard to imagine how something that worked within the previous search would cease to work when you make one of its moves in the root. Especially if it is the PV move, the new search should be able to use everything that was left over from the previous one.

Re: Question regarding transposition tables (to clear or not)

Postby Roman Hartmann » 19 Jul 2008, 18:51

I guess I found the problem. As usual, it was something other than what I expected.

My 3-fold repetition detection is polluting my hash table with a lot of 0 values. If I disable any cut-off whenever the stored value is 0, the hash table works as expected.

So it seems I now have to solve another question: how do I prevent the 3-fold repetition detection from polluting my hash table with all those zeros?

I'm using a hash stack for repetition detection: if there are 2 identical positions in the tree or in the game history, a draw score (0) is returned. A draw score is also returned if a position occurs in the tree that is already in the game history.

Not storing 0 values in the hash table looks like a simple solution to my problem. But if there are better solutions, I would like to know about them.

best regards
Roman

Re: Question regarding transposition tables (to clear or not)

Postby Ron Murawski » 20 Jul 2008, 08:28

Zach Wegner wrote:
Ron Murawski wrote:Many engines have several overwrite entries and one depth-based entry for each hash table slot. The hash table needs all its depth-based entries decremented by 2 after the engine decides on its move. (Because when it is the engine's turn again, the depth-based entries in the hash are from the last search, which was 2 plies earlier.)
Why would you need to do that? Hash entries are stored with the depth they were searched with, which is independent of the root position. Don't decrement by 2; it's a waste of time and efficiency.


Hi Zach,

Just to remind everyone: we're talking about depth-based hash entries.

You're right about "Hash entries are stored with the depth they were searched with, which is independent of the root position". But you are wrong about decrementing the depth being a waste of time. (I had forgotten *why* I was decrementing the depth! :shock: )

Think about this: if you put a very deeply searched entry into your hash table and you never decrement its depth, the entry will take up a slot in your table for the remainder of the game. A lot of search paths that are never taken get searched quite deeply. For instance, my engine will sometimes use huge amounts of time on a move it identifies as "problematic" -- the search depth is much deeper for "problem moves" than for the moves just before or after. Without decrementing depths, these deep hash entries were clogging up my table until some material got swapped off, at which point the new searches usually went deeper and hash slots eventually became available again.

The efficient way to implement the "decrement strategy" is to store the search depth *plus* the game's move number in the hash. So, at move #13, if your entry gets searched 12 deep, you store (13 + 12 = 25) as its "depth".

I looked at my source code -- I haven't looked at this section of code in a long time -- and it is not quite the way I explained above. I add the move number (measured as 1 for White's first move, 2 for Black's first move, 3 for White's second move, etc.) to the search depth measured in fractional plies. So I actually store (13 + (searchdepth * 8)) as the hash depth and will store (15 + (searchdepth * 8)) for the next move's search. With each new game move the engine is adding 2/8 = 1/4 ply to each new hash entry. Adding 1/4 ply to new entries is equivalent to subtracting 1/4 ply from the old ones when all you care about is the largest depth, not exact values.
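
For concreteness, a small sketch of this packing with invented names; PLY is the fractional-ply resolution Ron mentions:
Code:

    #include <stdint.h>

    #define PLY 8  /* fractional-ply resolution: 8 units per ply */

    /* game_ply: 1 for White's first move, 2 for Black's, and so on. */
    static inline int aged_depth(int search_depth_plies, int game_ply)
    {
        return game_ply + search_depth_plies * PLY;
    }

    /* Replacement then just compares aged depths: an entry stored
       2 game plies ago needs 2/8 = 1/4 ply more real depth to survive
       against a fresh entry of equal depth -- the aging effect
       described above. */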

I came up with the decrementing strategy on my own but, of course, it was quite well known to other engine authors. I experimented and discovered that using a value of 2 worked best for me. (That's why I remember decrementing and the value of '2'.)

The efficient "incrementing instead of decrementing idea" is not new. I first saw it posted by Vincent Diepeveen and he said it was a well known trick.

Ron

Re: Question regarding transposition tables (to clear or not)

Postby Zach Wegner » 21 Jul 2008, 03:40

OK, that's something different from what I thought you were saying. I got the impression that you only stored the depth in the hash table and that you ran through the entire table after each move, decrementing each entry by 2.

What you describe is quite different. What I, and I think most others, do is to have a separate field in the hash table entry of just "age", which would be something like the move number. Then you base replacement decisions on age and depth, but you base cutoff decisions on depth alone. This is cleaner IMO and involves minimal loss of information.

Re: Question regarding transposition tables (to clear or not)

Postby H.G.Muller » 21 Jul 2008, 08:23

Pollution of the hash table with draw scores is an unsolved problem. In Joker I try to avoid it, and store only scores as if the position were being visited for the first time. This is not simple to achieve, and it might lead to the opposite problem: the engine overlooking a draw line because it used the hashed values for a side branch.

According to Bob Hyatt, any cure here is worse than the disease.

But in any case you should avoid storing scores that you know to be heavily affected by draw scores. So you should not store the node that was the repetition itself. And preferably, you should also not store the scores of nodes whose best move leads to a repeated position. Much trickier are the nodes where you cannot play the normally best move because the opponent can refute it with a move to a position that occurred earlier in the path. There you have to play a worse move and underestimate the position, which does not even have a zero score.

In Joker I continue searching even after hitting a tree repeat (this is not very expensive: all moves will lead to sufficient-depth hash hits), but together with the score I return a bitmap indicating the first occurrence of repeated positions in the path. A score based on a planned continuation that repeats the current position is then topped off at zero. This avoids path-dependent scores in the hash table; the score only depends on the deeper part of the branch.

In micro-Max I use a much simpler solution, which solves 95% of the rep-draw problem: I only count repeats of positions from the game history as draws. This means it cannot plan sacrifices that force perpetuals, but that is hardly ever a problem. The main function of rep-draw detection is to prevent senseless back-and-forth moving in a won position. In a lost position most engines are very keen on perpetual checks anyway, as these lines prevent the situation from deteriorating (which it usually does, if you search deep enough in a badly lost position).

As to the replacement problem: in uMax I do always-replace, so the problem does not occur. In Joker I 'scrub' the depth-preferred part of the hash table after each search, to find positions that did not get hit (presumably because they occur in a part of the game tree made inaccessible by the previous move at game level), and set their depth to zero. This requires one extra bit in the hash entry, set on a hit. In Joker obsolete positions in the hash table are not a real problem anyway, as it even replaces deep entries that have more than their fair share of the table (equi-distributed-depth replacement). So scrubbing would only matter for the very deep searches, of which there are never enough to reach their fair-share quota, and which would then live forever. But as long as they don't reach their fair share, they are only a minor problem anyway. So the scrubbing is not really urgent, and can be skipped under time pressure.
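
A sketch of such a scrubbing pass, with invented names; the single 'hit' bit is the one extra bit mentioned above:
Code:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint64_t key;
        uint8_t  depth;
        uint8_t  flags;   /* bit 0: entry was hit during the last search */
    } ScrubEntry;

    #define FLAG_HIT 1u

    void scrub_table(ScrubEntry *table, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (!(table[i].flags & FLAG_HIT))
                table[i].depth = 0;      /* unreachable subtree: demote */
            table[i].flags &= ~FLAG_HIT; /* reset for the next search */
        }
    }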

Re: Question regarding transposition tables (to clear or not)

Postby Roman Hartmann » 21 Jul 2008, 12:34

Thanks for your elaboration on this topic, much appreciated. In the meantime I have also read several other threads on the problem of draw scores from 3-folds in the hash table.

My (maybe wrong) understanding is that the problem mainly concerns engines that rely on their hash tables to find a 3-fold. As I don't rely on the hash table for recognizing 3-folds, I think I can get away cheaply by just not storing any draw scores in the hash table.

This is what I currently do to detect a 3-fold (actually to detect a 2-fold):
Before I probe the hash table, I loop through the move history (the current game tree included, taking the 50-move counter into account) and check for identical positions (identical hash keys). If there are two identical positions, a draw score is returned. But if I now store these positions with a draw score in the hash table, I'm in trouble: when such a position is found again in a new search, it will just return a draw score from the hash table, even though it might not even be a 2-fold in the new search. The search gets more and more crippled by misleading draw entries.
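
A minimal sketch of such a history loop in C, with invented names (hist, is_repetition); it returns true as soon as one earlier identical position is found, i.e. it treats a 2-fold as a draw, as described above:
Code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Zobrist keys of every position since the start of the game,
       followed by the positions on the current search path. */
    static uint64_t hist[1024];

    /* hist_top: index of the current position in hist[];
       fifty:    plies since the last irreversible move. */
    bool is_repetition(int hist_top, int fifty)
    {
        int first = hist_top - fifty;   /* older positions cannot repeat */
        if (first < 0)
            first = 0;
        /* Identical positions with the same side to move lie an even
           number of plies apart, so step back 2 at a time. */
        for (int i = hist_top - 2; i >= first; i -= 2)
            if (hist[i] == hist[hist_top])
                return true;            /* 2-fold found: score as draw */
        return false;
    }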

I think there are at least two solutions to this problem in my case, although I guess only one really makes sense. The obvious and simple solution is not to store in the hash table any draw scores that come from a 3-fold.

The other solution would be to remove all the draw scores coming from a 3-fold before a new search is started. But I guess this would cause too much overhead, so I didn't try it.

A variation of the second solution would be to ignore draw-by-repetition scores if they come from a previous search.

Right now I just don't store any position with a draw score in the hash table, and it seems I have got rid of the problem. This probably also means that some deep 3-folds might not be found at all, of course.
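
A slightly narrower variant of that guard, sketched below with invented names, skips only scores that came out of repetition detection, so genuine 0 evaluations remain hashable; whether this refinement matters in practice is not settled in this thread:
Code:

    #include <stdbool.h>
    #include <stdint.h>

    void hash_store(uint64_t key, int depth, int score, int type); /* hypothetical */

    /* Wrapper that refuses to store scores derived from repetition
       detection, so path-dependent draws never enter the table. */
    void hash_store_guarded(uint64_t key, int depth, int score,
                            int type, bool from_repetition)
    {
        if (from_repetition && score == 0)
            return;  /* path-dependent draw score: don't pollute */
        hash_store(key, depth, score, type);
    }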

best regards
Roman

Re: Question regarding transposition tables (to clear or not)

Postby H.G.Muller » 21 Jul 2008, 14:47

I am not sure that how you detect the repetitions has much to do with it (provided you do it correctly). It is the way the repetition causes a difference between the score of the repeated position and that of its predecessors that creates the problem, when these scores are later read back in situations where the repetition is no longer a repetition.

This immediately tells you that repetitions of game positions can safely be scored as draws, as in that same game they will always remain repetitions.

You definitely should not store the draw score of the repeated position in the hash. That would be tantamount to storing the information "this position is a repeat", which will almost certainly no longer be true when you hit that position again through another path. Plus, in the cases where it was still true, the information would be useless, as you would already know it was a repetition without probing the hash.

Re: Question regarding transposition tables (to clear or not)

Postby Ron Murawski » 21 Jul 2008, 19:37

Zach Wegner wrote:OK, that's something different from what I thought you were saying. I got the impression that you only stored the depth in the hash table and that you ran through the entire table after each move, decrementing each entry by 2.


Originally, that was *exactly* what I did! And it worked! (Remember that '2' here means 2 fractional plies = 2/8 = 1/4 ply.)

I had totally forgotten that the measurement was in fractional plies, so I made a bad mistake in claiming that the depth itself should be decremented. My mistake!

For aging the hash, there is no difference between decrementing every hash entry's depth by 2 after each move and incrementing new entries' depths by 2 for each game move played.

Zach Wegner wrote:What you describe is quite different. What I, and I think most others, do is to have a separate field in the hash table entry of just "age", which would be something like the move number. Then you base replacement decisions on age and depth, but you base cutoff decisions on depth alone. This is cleaner IMO and involves minimal loss of information.


There's no loss of information in combining these terms. For any depth-based replacement decision using an aging scheme, you need to calculate a "relative depth" from depth and move number. The choice is whether to store two numbers in the hash (to make the calculation) or simply to store the calculated result in the table. It's important to realize that the real depth doesn't matter; it's the calculated "relative depth" that determines the replacement decision.

The advantage of combining the terms:
1) you can retrieve all the relevant depth information from the hash in one fetch instruction -- same for stores.
2) sometimes you can save bits -- this depends on how "packed" your hash entries are

The disadvantage of combining the terms:
1) the hash depths look very mysterious in a debugger
2) when you post about this years later it's easy to get confused! :)

Ron

Re: Question regarding transposition tables (to clear or not)

Postby Dann Corbit » 21 Jul 2008, 22:10

Roman Hartmann wrote:I guess I found the problem. As usual, it was something other than what I expected.

My 3-fold repetition detection is polluting my hash table with a lot of 0 values. If I disable any cut-off whenever the stored value is 0, the hash table works as expected.

So it seems I now have to solve another question: how do I prevent the 3-fold repetition detection from polluting my hash table with all those zeros?

I'm using a hash stack for repetition detection: if there are 2 identical positions in the tree or in the game history, a draw score (0) is returned. A draw score is also returned if a position occurs in the tree that is already in the game history.

Not storing 0 values in the hash table looks like a simple solution to my problem. But if there are better solutions, I would like to know about them.

best regards
Roman


For this sort of thing, it is not uncommon to use a ring buffer of the last 100 plies that were played (you can store the whole game, but there is no point in it unless you want to be able to play the whole game back for learning purposes or something like that).

You can use your halfmove clock to figure out how far backwards you have to look.

You can use the information in this structure to figure out whether there have been 100 reversible plies (the fifty-move rule) or a 3-fold repetition (obviously, once an irreversible move has been played, the 3-fold repetition does not have to be investigated beyond that boundary either).
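
A minimal sketch of that buffer in C, with invented names (RING, push_position, is_draw); the halfmove clock bounds both checks:
Code:

    #include <stdbool.h>
    #include <stdint.h>

    #define RING 128                /* power of two >= 100 for cheap wrap */

    static uint64_t ring[RING];     /* Zobrist keys of recent positions */
    static unsigned ply_count;      /* total plies played so far */

    void push_position(uint64_t key)
    {
        ring[ply_count++ & (RING - 1)] = key;
    }

    /* halfmove_clock: plies since the last capture or pawn move. */
    bool is_draw(uint64_t current_key, int halfmove_clock)
    {
        if (halfmove_clock >= 100)
            return true;                        /* fifty-move rule */
        int repeats = 0;
        /* Scan back only to the last irreversible move, 2 plies at a
           time (same side to move); 4 plies is the shortest repeat. */
        for (int back = 4; back <= halfmove_clock; back += 2)
            if (ring[(ply_count - 1 - back) & (RING - 1)] == current_key
                && ++repeats == 2)
                return true;                    /* 3-fold repetition */
        return false;
    }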

Re: Question regarding transposition tables (to clear or not)

Postby Zach Wegner » 22 Jul 2008, 02:29

Ron Murawski wrote:There's no loss of information in combining these terms. For any depth-based replacement decision using an aging scheme, you need to calculate a "relative depth" from depth and move number. The choice is whether to store two numbers in the hash (to make the calculation) or simply to store the calculated result in the table. It's important to realize that the real depth doesn't matter; it's the calculated "relative depth" that determines the replacement decision.
No, there's a loss of information when you make the cutoff decision. Imagine a 12-ply search from the starting position. After the e4 e5 Nf3 Nc6 subtree is searched (to 8 ply), you store the result in the hash table with draft 0 + (8 * 8). Then, after e4 e5 are played, you are doing a 10-ply search, and you hit the same position with a depth of 8 left. You will only accept hash entries with a draft of 2 + (8 * 8) or more, so the old entry won't cause a cutoff when it should.

Re: Question regarding transposition tables (to clear or not)

Postby Ron Murawski » 23 Jul 2008, 05:05

Zach Wegner wrote:
Ron Murawski wrote:There's no loss of information in combining these terms. For any depth-based replacement decision using an aging scheme, you need to calculate a "relative depth" from depth and move number. The choice is whether to store two numbers in the hash (to make the calculation) or simply to store the calculated result in the table. It's important to realize that the real depth doesn't matter; it's the calculated "relative depth" that determines the replacement decision.
No, there's a loss of information when you make the cutoff decision. Imagine a 12-ply search from the starting position. After the e4 e5 Nf3 Nc6 subtree is searched (to 8 ply), you store the result in the hash table with draft 0 + (8 * 8). Then, after e4 e5 are played, you are doing a 10-ply search, and you hit the same position with a depth of 8 left. You will only accept hash entries with a draft of 2 + (8 * 8) or more, so the old entry won't cause a cutoff when it should.


You are in error when you assume I would only accept hash entries with a draft of 2 + (8 * 8) or more. Again, this is my fault for not being clear enough. The aging is 1/4 ply per game ply, or 1/2 ply per game move. The integer depth is unaffected until the quarter plies add up to a whole ply. (For pruning decisions I ignore fractional plies.)

In my testing it was quite rare to find missed pruning decisions. After playing several thousand test games I decided that the method worked fine in practice.

Maybe someday I'll re-try your method and see if it makes any difference.

Zach, you said you base replacement decisions on age and depth --
What is your formula?

Ron

Re: Question regarding transposition tables (to clear or not)

Postby Zach Wegner » 23 Jul 2008, 05:58

Ron Murawski wrote:You are in error when you assume I would only accept hash entries with a draft of 2 + (8 * 8) or more. Again, this is my fault for not being clear enough. The aging is 1/4 ply per game ply, or 1/2 ply per game move. The integer depth is unaffected until the quarter plies add up to a whole ply. (For pruning decisions I ignore fractional plies.)
OK, I see. There is a loss of information here, though. When you ignore the fractional plies, you are rounding down at some point. If the move numbers were not 0 and 2 but 3 and 4, then you would lose a full ply of useful hash information. It is also dangerous in that fractional plies can make a difference in the search, and you can accept cutoffs from entries with too low a draft. This last part I haven't tested personally, but I was told it by Bob Hyatt and Tord Romstad a few years ago on CCC, and it makes sense to me: a quarter or an eighth of a ply here ends up allowing a full ply later down the tree, which leads to seeing a checkmate earlier, and so on. It's also just a philosophical thing for me: I would rather not lose some good cutoffs and make some bad ones just to save some bits in the hash table.
Ron Murawski wrote:In my testing it was quite rare to find missed pruning decisions. After playing several thousand test games I decided that the method worked fine in practice.

Maybe someday I'll re-try your method and see if it makes any difference.
That is of course true. It's quite possible that there's no real measurable difference between the methods. If it works fine for you, then there's no reason to mess with it. But in this case I think it would be worth testing; it might bring you some precious Elo.
Ron Murawski wrote:Zach, you said you base replacement decisions on age and depth --
What is your formula?

Ron


My replacement formula is pretty complicated. I'll post some simplified code, and an explanation.
Code:

    best_slot = 0;
    best_depth = MAX_PLY * PLY + 1024;   /* sentinel: worse than any real draft */
    for (x = 0; x < HASH_SLOT_COUNT; x++)
    {
        hashkey = entry->entry[x].hashkey;
        data = entry->entry[x].data;
        if (hashkeys_match())
        {
            /* Same position already stored: keep the entry if it is
               deeper than the new search, else overwrite it in place. */
            if (HASH_DEPTH(data) > sb->depth)
                return;
            else
            {
                best_slot = x;
                break;
            }
        }
        /* Otherwise track the slot with the lowest effective draft:
           stored depth minus an aging penalty of 2*PLY per search
           since the entry was written. */
        else if (HASH_DEPTH(data) - age_difference(HASH_AGE(data)) * 2 * PLY < best_depth)
        {
            best_slot = x;
            best_depth = HASH_DEPTH(data) - age_difference(HASH_AGE(data)) * 2 * PLY;
        }
    }
    /* Check to see if we should overwrite the entry: never replace a
       current-search PV (exact) entry with a non-PV one. */
    if (HASH_TYPE(entry->entry[best_slot].data) != HASH_EXACT_BOUND || type == HASH_EXACT_BOUND ||
        age_difference(HASH_AGE(entry->entry[best_slot].data)) > 0)
    {
        store_position();
    }
}   /* closes the (elided) enclosing hash-store function */
This means a position is stored as follows:
1. If the same position is in the hash table already, it is only overwritten if the depth is greater now.
2. Otherwise, the slot in the hash table with the lowest sum of depth + age is chosen to be overwritten. It is only overwritten when one of these conditions is met:
   - the new entry is a PV node;
   - the position stored in the table isn't a PV node;
   - the position stored in the table is a PV node, but from an old search.

The number of slots currently selected from is 4, which adds up to a cache line. As you can see, the age is multiplied by 2*PLY to be on the same scale as depth. This scaling is only proper when pondering is off, since the age is incremented after every search. I doubt it really makes a difference, but I should probably do something about it.
