cost of memory lookup

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde

Re: cost of memory lookup

Postby Ron Murawski » 05 Jun 2007, 17:42

Dann Corbit wrote:
Daniel Shawul wrote:Okay, I will try 64-byte alignment (I made a typo in my previous post) and do the test again.

This is my hashtable entry.

struct HASH_ENTRY {
    uint64 lock;   /* hash key, used to verify the entry */
    int32  move;
    int16  score;
    uint8  depth;
    uint8  flag;
};

When probing, the first byte accessed is the lock, so no problem, I guess. But when storing, the depth is checked first, which means (score, depth & flag) are in cache but not lock & move. Will this require an additional memory lookup?

Looking at your tt entries, I see that you store only 32 bits for the lock. I don't remember why I store the whole 64 bits :o Maybe because it is what I got from Bruce Moreland's tutorial. In your case, I guess you use the upper or lower half as the key, right?

I think you should have an 'age' counter in your entry. Or otherwise you have to clear the transposition table after a move?
Testing on a set of test positions really doesn't measure the performance of a replacement scheme, IMO. However, I do not think that every transposition-table entry from an old search is unimportant. In the most common scheme, the depth-preferred entry is replaced when the new depth is greater, or when the stored position is from a previous search. I don't totally agree with the latter, because the stored entry might be from the immediately preceding search and might have a large depth.


There is no reason to store the entire hash as a cross-check.
The bottom bits already match by definition if you use a modulus or AND-mask to determine the hash position. Therefore, the only bits we do not know about are the bits above the ones consumed by the hash index itself.

Suppose that you use the top 32 bits as a cross check to be sure that your board position is identical to the one in hash.
Suppose further that you have 2 million hash entries, so your hash-table address is 21 bits wide. That gives 21 bits + 32 bits = 53 bits of confirmation. Now, it is possible that some of the other 11 bits did not match, but it is very unlikely. It is so improbable that a faulty hash lookup is rare enough not to affect the search in practical terms. So hash tables storing the full 64 bits are very rare. I have seen some chess engines that dispose of the cross-check altogether, but that seems a bad idea if your hash index is only (for instance) 21 bits wide. In such a case, I am sure there will be real problems in long searches.
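The scheme described above can be sketched as follows (a minimal illustration; the names and sizes are mine, not from any particular engine):

```c
#include <stdint.h>

/* With 2M entries the index consumes the low 21 bits of the key, so
   storing only the top 32 bits as a cross-check still verifies
   21 + 32 = 53 of the 64 key bits. */

#define HASH_BITS 21
#define HASH_SIZE ((uint32_t)1 << HASH_BITS)   /* 2,097,152 entries */
#define HASH_MASK (HASH_SIZE - 1)

static uint32_t table_check[HASH_SIZE];        /* stored upper 32 bits */

static void tt_store(uint64_t key)
{
    table_check[key & HASH_MASK] = (uint32_t)(key >> 32);
}

/* Returns 1 on a hit: the low 21 bits matched by construction of the
   index, and the high 32 bits matched via the stored cross-check. */
static int tt_probe(uint64_t key)
{
    return table_check[key & HASH_MASK] == (uint32_t)(key >> 32);
}
```

Only the 11 bits between the index and the stored check go unverified, which is where the "very unlikely" mismatch lives.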


Hi Dann,

Robert Hyatt tried using a 32-bit pawnhash key and found it inadequate. I've tested various bit lengths for the main hash signature -- using only the top 32 bits causes a measurable drop in playing strength for my engine. The smallest number of bits that seemed okay was 48, but masking/unmasking the lower 16 bits was too slow and cumbersome, so I use a full 64-bit signature.

Ron
Ron Murawski
Posts: 352
Joined: 26 Sep 2004, 21:50
Location: Schenectady, NY, USA

Re: cost of memory lookup

Postby Uri Blass » 06 Jun 2007, 07:59

Ron Murawski wrote:

Hi Dann,

Robert Hyatt tried using a 32-bit pawnhash key and found it inadequate. I've tested various bit lengths for the main hash signature -- using only the top 32 bits causes a measurable drop in playing strength for my engine. The smallest number of bits that seemed okay was 48, but masking/unmasking the lower 16 bits was too slow and cumbersome, so I use a full 64-bit signature.

Ron


Hi Ron,

Dann Corbit did not suggest using a 32-bit hash key.
He only suggested storing 32 bits of the hash key in the entry, rather than the full 64-bit key.

Note that this is not very important: people agree that doubling the hash size gives only 6-7 Elo, and saving 32 bits per entry does not even allow doubling the number of entries in the same memory.

Uri
Uri Blass
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

Re: cost of memory lookup

Postby Ron Murawski » 06 Jun 2007, 18:32

Uri Blass wrote:

Hi Ron,

Dann Corbit did not suggest using a 32-bit hash key.
He only suggested storing 32 bits of the hash key in the entry, rather than the full 64-bit key.

Note that this is not very important: people agree that doubling the hash size gives only 6-7 Elo, and saving 32 bits per entry does not even allow doubling the number of entries in the same memory.

Uri


Hi Uri,

You have misunderstood me because I was not clear. I said:
Ron Murawski wrote:I've tested various bit lengths for the main hash signature -- using only the top 32-bits causes a measurable drop in playing strength for my engine.

I was talking about storing the top 32-bits of the 64-bit Zobrist key.

In my post I *did* make an error when I said:
Ron Murawski wrote:Robert Hyatt tried using a 32-bit pawnhash key and found it inadequate.

I should have said "Robert Hyatt tried using only the top 32-bits of a 64-bit pawnhash key and found it inadequate."

Ron

Re: cost of memory lookup

Postby H.G.Muller » 06 Jun 2007, 21:39

Ron Murawski wrote:Robert Hyatt tried using a 32-bit pawnhash key and found it inadequate. I've tested various bit lengths for the main hash signature -- using only the top 32-bits causes a measurable drop in playing strength for my engine. The least amount of bits that seemed okay was 48, but I found it too slow and cumbersome masking/unmasking the lower 16 bits, so I use a full 64-bit signature.

Joker only stores 32 bits, and with 2M hash buckets (a 128-MB table) that effectively uses 53 bits of the key. At 2M nodes per second there should be only one key collision every ~4 billion seconds. It is hard to believe that this would cause a measurable drop in playing strength. If it does, it suggests that there is something wrong with your key, making some of the bits you do use not independent.

You should try to count the number of 'false matches', where the 32 upper bits match but the lower 32 do not. This number should be very low.
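H.G.'s estimate can be checked with a little arithmetic (a sketch; the function name is illustrative):

```c
#include <stdint.h>

/* With c stored check bits and i index bits, a false match requires
   c + i key bits to coincide, so at n probes per second we expect
   roughly one collision every 2^(c+i) / n seconds. */
static uint64_t seconds_between_collisions(int check_bits, int index_bits,
                                           uint64_t probes_per_sec)
{
    return ((uint64_t)1 << (check_bits + index_bits)) / probes_per_sec;
}
```

For Joker's numbers, seconds_between_collisions(32, 21, 1 << 21) gives 2^32, about 4.3 billion seconds (roughly 136 years), matching the figure in the post.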
H.G.Muller
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: cost of memory lookup

Postby Ron Murawski » 06 Jun 2007, 21:47

Daniel Shawul wrote:

I think you should have an 'age' counter in your entry. Or otherwise you have to clear the transposition table after a move?


Hi Daniel,

For depth-preferred hash entries it is not necessary to use a flag or an age counter to sweep through and mark old entries. In fact, it is possible to age the hash without any hash sweeps at all.

You can do this by storing (search_depth + game_movenumber) instead of depth. In my engine I store a 16-bit (depth+movenumber). My depth is in fractional plies where 8 fractional plies == 1 full ply and my movenumber is in half-moves. (The upper 4 bits of depth+movenumber are unused and can be used to store other information.)

This trick has been mentioned several times by Diep author Vincent. Conceptually it is almost equivalent to subtracting 2 from the depth of every depth-preferred entry after every computer move. The method works as well for me as any other method I've tried. The big advantage, of course, is never wasting time searching through the entire hash table looking for old entries.
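The aging trick described above might be sketched like this (the field layout and names are illustrative, not from any particular engine):

```c
#include <stdint.h>

/* Instead of an explicit age counter, store depth + game move number.
   Old entries then lose replacement fights automatically: an entry of
   equal depth from the current search carries a larger move number,
   with no hash-table sweep ever needed. */

typedef struct {
    uint16_t depth_plus_move;   /* search depth + game half-move count */
    /* ... key check, move, score, bound flag ... */
} TTEntry;

/* Replace when the new entry's (depth + move number) is at least as
   large as the stored one. */
static int should_replace(const TTEntry *old, int new_depth, int game_halfmove)
{
    return new_depth + game_halfmove >= old->depth_plus_move;
}
```

For example, an entry stored at half-move 20 with depth 10 (sum 30) is replaced two half-moves later by a depth-6 result (sum 30) but not by a depth-5 one (sum 29), which is the "subtract 2 per computer move" effect described above.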

Ron

Re: cost of memory lookup

Postby Ron Murawski » 06 Jun 2007, 22:59

H.G.Muller wrote:
Joker only stores 32 bits, and with 2M hash buckets (a 128-MB table) that effectively uses 53 bits of the key. At 2M nodes per second there should be only one key collision every ~4 billion seconds. It is hard to believe that this would cause a measurable drop in playing strength. If it does, it suggests that there is something wrong with your key, making some of the bits you do use not independent.

You should try to count the number of 'false matches', where the 32 upper bits match but the lower 32 do not. This number should be very low.


Hi H.G.,

I gained about 20-40 Elo points storing all 64 bits vs storing only the high 32 bits. At the time I switched to storing a 64-bit key, I had two major uncorrected hash bugs: one allowed overwriting the base-position move, and the other was that I was using a very inferior PRNG, the C library function rand().

Since then I have made additional changes to the hash structure as well as to the probe/store code, so I should retest this one more time. It would be a great advantage not to store the extra 32 bits! Luckily I have time to work on this idea now. I will report back to this thread when I have some results, probably in a week's time.

I think my PRNG is good. I use ISAAC64 to generate my Zobrist numbers: http://www.burtleburtle.net/bob/rand/isaacafa.html
Bob Jenkins has some other interesting math-related stuff here: http://www.burtleburtle.net/bob/index.html
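For illustration, a Zobrist table can be filled from any good 64-bit PRNG. The sketch below uses a splitmix64 stand-in rather than ISAAC64, and the names are mine; the structure (one random 64-bit number per piece/square pair, XORed in and out incrementally) is the same either way:

```c
#include <stdint.h>

/* splitmix64: a small, statistically solid 64-bit PRNG, used here
   only as a stand-in for a stronger generator such as ISAAC64. */
static uint64_t splitmix64(uint64_t *state)
{
    uint64_t z = (*state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

static uint64_t zobrist[12][64];   /* 12 piece types x 64 squares */

static void init_zobrist(void)
{
    uint64_t seed = 0x2545F4914F6CDD1DULL;   /* arbitrary fixed seed */
    for (int p = 0; p < 12; p++)
        for (int s = 0; s < 64; s++)
            zobrist[p][s] = splitmix64(&seed);
}
```

The key property the thread relies on is that XORing a piece/square number in and then out restores the original key, so keys can be updated incrementally as moves are made and unmade.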

Ron

Re: cost of memory lookup

Postby Ron Murawski » 11 Jun 2007, 20:18


I've played enough games to convince me that storing the high 32 bits is sufficient. I could detect no increase in Elo, but test positions solve slightly faster and nps is up, both due to the increased number of hash entries.

I suspect that my former troubles with storing just the high 32 bits were due to using the inferior compiler-supplied rand().

Ron
