hash collision issue?

Post by vancekic » 18 Feb 2008, 07:32

Hi all!

I am writing a bitboard-based chess engine as a hobby, and I've run into an issue with my hash table. I am not sure if I am doing anything wrong, or even whether I have a bug, but I need some explanations :-)

My hash table size is 20940347 (a prime in the 20 millions). My hash keys are 64-bit numbers.

As my alpha-beta searches, I "new" entries for positions that do not yet have a hash table entry. I had the computer play a few moves, and after having newed roughly 5400 entries, I got a hash collision.

Basically my engine has found two totally different positions (I am sure they are totally different, as I can see them in the debugger) with two totally different hash keys (I am sure of that too) that give the same remainder when divided by 20940347, and hence the same index into my hash table.

These numbers are as follows:
3843009106291839884 % 20940347 = 17414108
7315253179216087815 % 20940347 = 17414108

So I get the same index in the table and am about to overwrite the previous value. Should I implement a strategy to overcome this (like a linked list), or is this not normal and should it be very unlikely?

I use Zobrist keys, and compute my first hash key (the position my alpha-beta starts from) by visiting every occupied square and XORing in the corresponding random Zobrist key:
m_ZobristKey[pieceType][colour][square]

Then, as I create positions by generating moves, I hash in and out, as described in the literature (handling the different cases of captures, promotions, en passant, castling, regular moves, etc.).
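For a quiet (non-capture) move, for example, the update is just a few XORs, roughly like this (simplified; m_ZobristSideToMove is an illustrative name, not necessarily what I call it):

Code: Select all
// quiet move: XOR the moving piece out of 'from', back in on 'to',
// and flip the side-to-move component of the key
hashKey ^= m_ZobristKey[pieceType][colour][from];
hashKey ^= m_ZobristKey[pieceType][colour][to];
hashKey ^= m_ZobristSideToMove;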

If I keep running the program, collisions get more and more frequent...

I append two random 32-bit numbers to create a 64-bit hash key.
To get the random 32-bit numbers, I use the algorithm given by Knuth, The Art of Computer Programming, Vol. 2, pp. 26-27.
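Schematically the combination is just this (rand32() standing in for that Knuth generator):

Code: Select all
#include <cstdint>

extern uint32_t rand32();   // the 32-bit generator from the book

// build one 64-bit Zobrist key from two independent 32-bit randoms
uint64_t rand64()
{
    return ((uint64_t)rand32() << 32) | (uint64_t)rand32();
}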

So I am wondering:
Is it a hashing bug?
Is it a randomness bug?
Is it both?
Is it even a bug?

Thanks in advance!
vancekic
 
Posts: 5
Joined: 12 Feb 2008, 22:01

Re: hash collision issue?

Post by Ilari Pihlajisto » 18 Feb 2008, 11:18

vancekic wrote:So I get the same index in the table and am about to overwrite the previous value. Should I implement a strategy to overcome this (like a linked list), or is this not normal and should it be very unlikely?


It is absolutely normal; you can't really prevent collisions from happening. What you can do is use a replacement scheme which favors entries that have a better chance of reducing your tree size. In my program (Sloppy) I favor deep and new entries over shallow ones (which don't cut the tree much) and old ones (which have a small chance of a hit).
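In code such a policy might look roughly like this (a sketch; the field names are illustrative, not Sloppy's actual code):

Code: Select all
#include <cstdint>

struct Entry
{
    uint64_t key;    // full Zobrist key, for verification on probe
    int      depth;  // draft of the stored search
    int      age;    // search (generation) counter when the entry was stored
    // score, bound type, best move, ...
};

// prefer new entries over old ones, and deep drafts over shallow ones
bool shouldReplace(const Entry& stored, int newDepth, int currentAge)
{
    if (stored.age != currentAge)   // stale entry from an earlier search
        return true;
    return newDepth >= stored.depth;
}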
Ilari Pihlajisto
 
Posts: 78
Joined: 18 Jul 2005, 06:58

Re: hash collision issue?

Post by H.G.Muller » 18 Feb 2008, 11:29

This is quite normal. With 5400 entries you have 5400 x 5399 / 2 = ~14.5M pairs of entries. The probability that a given pair collides (i.e. the two positions in the pair accidentally map to the same table entry) is ~1/21M. So the expected number of colliding pairs is already 14.5/21 = ~0.7, which makes the probability that at least one pair collides about 50%. And that is what you see.

If you hash, you will have collisions, so you have to program a way to handle them. The most primitive way is simply to overwrite the old entry, under the philosophy that newly generated positions are more likely to be still relevant than older ones. This way is completely safe: your table will never fill up with obsolete entries.

Another way is to store the search depth in the table, compare it to that of the entry you now want to store, and keep the one from the deepest search (on the assumption that a future hit will save more work if it occurs for the deeper entry). This 'depth-preferred' replacement carries the danger of the table filling up completely with deep but obsolete entries (positions that can no longer be reached from the root because of an irreversible move in the game). So you can only use it if you also add a mechanism to purge obsolete entries (e.g. clear the entire table, or clear it of entries that were not used in the last search).

More advanced schemes do not probe one entry of the hash table per position, but two (or more), and then overwrite the one with the smallest depth. That way you always have both the most recent and the deepest entry in the table. You still have to purge the depth-preferred part, though, to remove obsolete entries. Some people make this automatic by storing the search number of the entry's last use in the table, and using not plain depth but depth minus search number as the replacement criterion.
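A sketch of that automatic variant (illustrative, not any particular engine's code):

Code: Select all
#include <cstdint>

struct Entry { uint64_t key; int depth; int searchNr; /* score, move, ... */ };

// two entries per index: overwrite the one whose draft, discounted by its age
// (number of searches since it was stored), is lowest
void store(Entry bucket[2], const Entry& fresh, int currentSearchNr)
{
    int v0 = bucket[0].depth - (currentSearchNr - bucket[0].searchNr);
    int v1 = bucket[1].depth - (currentSearchNr - bucket[1].searchNr);
    bucket[(v0 <= v1) ? 0 : 1] = fresh;
}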
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: hash collision issue?

Post by vancekic » 18 Feb 2008, 19:43

Thanks a lot for your help guys!

The most primitive way is simply to overwrite the old entry, under the philosophy that newly generated positions are more likely to be still relevant than older ones. This way is completely safe: your table will never fill up with obsolete entries.


If I go with simplicity and safety and just overwrite the old entry, doesn't that mean I can still delete a relevant entry? The worst case is that I keep recomputing the evaluations for those two unrelated positions which hash to the same index, back and forth, until a move in the game excludes at least one of them, potentially both, right?
vancekic
 
Posts: 5
Joined: 12 Feb 2008, 22:01

Re: hash collision issue?

Post by H.G.Muller » 18 Feb 2008, 20:15

Right. It can only lead to sub-optimal efficiency, never to errors. uMax uses 'always replace', and it works quite well. You might need a larger hash table to get the same hit-rate, though. But the search speed is a very weak function of hash size anyway.
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: hash collision issue?

Post by Harald Lüßen » 18 Feb 2008, 21:38

Just to make sure you have this right: you do store the rest of
your hash key in the hash entry, don't you? That is, rest = key / 20940347.

You need that number to compare the whole keys (index and rest)
when you probe the hash table. Because index collisions are quite
normal, the entry counts as found and usable only if the whole key
is the same. And even then, some people do not trust a stored move
and check its legality before they use it. It is also a good idea to replace
the exact same entry with information from a deeper search when you use
the more-than-one-entry-per-index scheme that was described in other
replies.

There are two kinds of collisions:
- the common and normal collisions of the same index. With today's
nodes-per-second rates the hash table is full after a few seconds, and
from then on every probe of a new position is an index collision.
- and the very rare collisions of different positions with the same whole
hash key. You can just live with the occasional error when you use the
score or bounds. Make sure, though, that your program does not crash
when it uses a stored best move from the hash.

Some people believe partitioning the hash key into bit fields is faster
or easier or whatever:
index = key & 0x0000000000ffffff;
rest = (key & 0xffffffffff000000) >> 24;
and a little more work with the bits and bytes.
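Either way, the probe should only trust an entry whose stored rest
matches. A minimal sketch with the divide/modulo variant (m_HashTable
being the table of entries):

Code: Select all
#include <cstdint>

struct HashEntry { uint64_t rest; /* score, depth, move, flags ... */ };
extern HashEntry m_HashTable[20940347];

HashEntry* probe(uint64_t key)
{
    HashEntry& e = m_HashTable[key % 20940347];
    if (e.rest == key / 20940347)   // index and rest together identify the position
        return &e;
    return 0;                       // index collision or empty slot: not usable
}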

Harald
Harald Lüßen
 
Posts: 29
Joined: 09 Oct 2004, 22:39

Re: hash collision issue?

Post by bob » 19 Feb 2008, 00:17

This is completely normal. The easiest solution is the "Belle" approach, which uses two tables: one where you compare the draft of the old and new entries and always keep the largest, and one where you always store (if you could not store in the first table due to insufficient draft on the new entry).

Easy to do, and works as well as anything I have ever tried...
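Schematically it looks like this (a sketch, not Crafty's actual code):

Code: Select all
#include <cstdint>
#include <cstddef>

struct Entry { uint64_t key; int draft; int score; /* move, bound, ... */ };

const size_t TABLE_SIZE = 20940347;
Entry* depthTable  = new Entry[TABLE_SIZE]();   // keeps the deepest draft
Entry* alwaysTable = new Entry[TABLE_SIZE]();   // always overwritten

void store(uint64_t key, int draft, int score)
{
    size_t i = key % TABLE_SIZE;
    if (draft >= depthTable[i].draft)
        depthTable[i] = Entry{ key, draft, score };   // deepest draft wins table 1
    else
        alwaysTable[i] = Entry{ key, draft, score };  // fall back to always-replace
}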
bob
 
Posts: 156
Joined: 10 May 2006, 17:59

Re: hash collision issue?

Post by vancekic » 19 Feb 2008, 00:32

You might need a larger hash table to get the same hit-rate, though.


I think 20+ million entries in the hash table, each entry being 16 bytes, is decent enough.

Just to make sure you have this right: You certainly store the rest of
your hash key in the hash entry, do you? That is rest = key / 20940347.


I do:
Code: Select all
index = hashFromCurrentPosition % 20940347;
HashPosition* hashPosition = m_HashTable[index];
if (hashPosition)
{
    if (hashPosition->key != index)
    {
        // that's the case where I am getting a hash collision
        // two different positions with two different keys
        // hash to the same index in the hash table

        // I will just replace the old one by the new one
        m_HashTable[index] = hashPosition;
    }
    else
    {
        // we found a hit, return the value we stored
        // if we're at the same depth kind of logic, etc etc...
    }
}
else
{
   // otherwise, create new entry
   m_HashTable[index] = new HashPosition;
}


There's a lot of confusion between the terms index, hash code, hash key, key, etc...
vancekic
 
Posts: 5
Joined: 12 Feb 2008, 22:01

Re: hash collision issue?

Post by Sven Schüle » 19 Feb 2008, 16:46

vancekic wrote:I do:
Code: Select all
index = hashFromCurrentPosition % 20940347;
HashPosition* hashPosition = m_HashTable[index];
if (hashPosition)
{
    if (hashPosition->key != index)
    {
        // that's the case where I am getting a hash collision
        // two different positions with two different keys
        // hash to the same index in the hash table

        // I will just replace the old one by the new one
        m_HashTable[index] = hashPosition;
    }
    else
    {
        // we found a hit, return the value we stored
        // if we're at the same depth kind of logic, etc etc...
    }
}
else
{
   // otherwise, create new entry
   m_HashTable[index] = new HashPosition;
}


There's a lot of confusion between the terms index, hash code, hash key, key, etc...


Hi,
hash key, key, and hash code should all mean the same thing in this context. The hash key is a property of a position, so I would propose renaming your variable "hashFromCurrentPosition" to "hashKeyFromCurrentPosition".

Then, if you store the whole hash key in a hash table entry of type "HashPosition" (hashPosition->key), you must also compare it to the whole "hashKeyFromCurrentPosition", not just to the "index", which is a relatively small number (only the lower 25 bits can be nonzero). Comparing the full key to the index will almost always return "!=", provided you have a normal way of creating your hash key.

Another issue, "m_HashTable[index] = hashPosition;" seems not to store a new hash entry but to overwrite it with itself. If this is right, your intent may have been to overwrite the old entry by doing
Code: Select all
delete m_HashTable[index];
m_HashTable[index] = new HashPosition;
// fill new entry with data, store the key, ...


This could lead to a semantically correct solution. Note, however, that the runtime overhead of doing many new's and delete's can be really significant. So you could consider making "m_HashTable" a table of whole "HashPosition" entries rather than pointers, thereby omitting the new's and delete's entirely. The hash key "0" could be reserved to indicate an unused entry, so that you can check:
Code: Select all
if (m_HashTable[index].key != 0)
{
    ...
}
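For example (a sketch, assuming HashPosition is your entry struct; one std::vector makes the whole table a single allocation):

Code: Select all
#include <vector>

std::vector<HashPosition> m_HashTable(20940347);  // zero-initialized: key == 0 means unused

// storing then becomes a plain assignment, with no new/delete per entry:
// m_HashTable[index] = newEntry;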



I hope this helps you to solve your hash table problem.

Sven
Sven Schüle
 
Posts: 240
Joined: 26 Sep 2004, 20:19
Location: Berlin, Germany

Re: hash collision issue?

Post by vancekic » 19 Feb 2008, 17:44

Thanks Sven! I agree that your naming is better!
Plus, I just double-checked my code, and what I wrote in my post was not what is in my code (I wrote the post's "code" section from memory and got it wrong).

I don't do

Code: Select all
if (existingHashPosition->key != index)

but rather
Code: Select all
if (existingHashPosition->key != hashKeyFromCurrentPosition)


The index is just, well... the index! A number between 0 and HASHTABLE_SIZE-1. It has nothing to do with the key except that it's derived from it (many other 64-bit numbers yield the same index).

Which raises the question: why would one test the key against the index? It makes no sense (and what I wrote did not make sense; thanks for pointing it out!). Get the index from the hash key of the position being explored, and check that the element in the hash table at that index really has the same key (because it could be a different position yielding the same index). That makes much more sense :-) Hence the code above: test 64 bits against 64 bits, not 64 bits against an index (which is just the remainder of the 64-bit number divided by the prime 20940347).

You're right about the memory leak; the old entry is meant to be deleted before being overwritten. In fact I will revert to an array of plain objects rather than pointers. My exe will just increase 300+ Mb but who cares :-) It's much less error-prone.
vancekic
 
Posts: 5
Joined: 12 Feb 2008, 22:01

Re: hash collision issue?

Post by Sven Schüle » 19 Feb 2008, 17:56

Another remark: your code combines the hash table lookup and the storing of new data within one function. What if you split this into two functions, one for lookup and one for storing new data? Otherwise I don't see how you can call your one single function from the different places in your search that need it.
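For example, the interface could be as simple as this (a sketch; Move stands for your move type):

Code: Select all
#include <cstdint>

// lookup: return the stored entry if the whole key matches, otherwise 0
const HashPosition* probeHash(uint64_t hashKey);

// store: apply the replacement policy at hashKey's index
void storeHash(uint64_t hashKey, int depth, int score, Move bestMove);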
Sven
Sven Schüle
 
Posts: 240
Joined: 26 Sep 2004, 20:19
Location: Berlin, Germany

Re: hash collision issue?

Post by Sven Schüle » 19 Feb 2008, 18:09

vancekic wrote:You're right about the memory leak; the old entry is meant to be deleted before being overwritten.

The point was not the memory leak, but that your code snippet does not look as if you do a "new" when overwriting; instead you store the old pointer over itself. But now I guess your real code does this right :-)

In fact I will revert to an array of plain objects rather than pointers. My exe will just increase 300+ Mb but who cares :-) It's much less error-prone.

The size of your exe should not increase. Instead, at runtime it will even use less memory than before: not 20M pointers plus 20M individually allocated 16-byte entries, but just one array of 20M x 16 bytes. You might as well do one "new" for the whole "m_HashTable" at program startup, which would also be future-proof with respect to a configurable hash table size. (Of course you would need different prime numbers for different table sizes.)
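For example (a sketch; nearestPrimeBelow is a hypothetical helper that finds a prime no larger than its argument):

Code: Select all
void allocateHash(size_t megabytes)
{
    // pick a prime number of entries that fits in the requested memory
    m_HashTableSize = nearestPrimeBelow(megabytes * 1024 * 1024 / sizeof(HashPosition));
    m_HashTable = new HashPosition[m_HashTableSize]();   // one allocation, zero-initialized
}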

Sven
Sven Schüle
 
Posts: 240
Joined: 26 Sep 2004, 20:19
Location: Berlin, Germany

Re: hash collision issue?

Post by vancekic » 19 Feb 2008, 20:01

The point was not the memory leak, but that your code snippet does not look as if you do a "new" when overwriting; instead you store the old pointer over itself. But now I guess your real code does this right :-)


Right!

The size of your exe should not increase. Instead, at runtime it will even use less memory than before: not 20M pointers plus 20M individually allocated 16-byte entries, but just one array of 20M x 16 bytes. You might as well do one "new" for the whole "m_HashTable" at program startup, which would also be future-proof with respect to a configurable hash table size. (Of course you would need different prime numbers for different table sizes.)


The exe DOES increase - not the exe size, mind you; I meant its memory usage, because it stores a 300Mb struct - try it for yourself, check your task manager. Thinking about it, this should be the subject of a different thread, "table of pointers vs table of objects". If you have a pool of pre-allocated entries, I think it's cheaper to use pointers, because you don't pay the extra cost of copying the object each time you assign it in the table.

Why would I need a configurable hash table size? What does this achieve?
I think I will implement different hash tables though (one for pawn structures seems like a very good idea for computing positional evaluations more efficiently).
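The pawn hash entry could be tiny, keyed by a Zobrist key computed over pawns only (a sketch, with illustrative fields):

Code: Select all
#include <cstdint>

struct PawnHashEntry
{
    uint64_t pawnKey;     // Zobrist key over pawn locations (and colours) only
    int16_t  score;       // cached pawn-structure evaluation
    uint8_t  passedFiles; // e.g. bitmask of files containing passed pawns
};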
vancekic
 
Posts: 5
Joined: 12 Feb 2008, 22:01

Re: hash collision issue?

Post by Teemu Pudas » 19 Feb 2008, 21:04

vancekic wrote:The exe DOES increase - not the exe size, mind you; I meant its memory usage, because it stores a 300Mb struct - try it for yourself, check your task manager.


Whereas with your method the memory usage will rise to 375Mb over time - or 450Mb on 64-bit systems, where pointers are twice as large. With smaller entries, the effect is even more dramatic - mine are 8 bytes...

Why would I need a configurable hash table size? What does this achieve?

Some users have more memory than others. Or less. Or they are religious in their belief that the size absolutely has to be smaller in fast games. Plenty of other reasons, too.
Teemu Pudas
 
Posts: 124
Joined: 16 Apr 2007, 14:03

Re: hash collision issue?

Post by Onno Garms » 21 Feb 2008, 08:49

H.G.Muller wrote:This is quite normal. With 5400 entries you have 5400 x 5399 / 2 = ~14.5M pairs of entries. The probability that a given pair collides (i.e. the two positions in the pair accidentally map to the same table entry) is ~1/21M. So the expected number of colliding pairs is already 14.5/21 = ~0.7, which makes the probability that at least one pair collides about 50%. And that is what you see.


For a more precise calculation see
http://en.wikipedia.org/wiki/Birthday_problem
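For these numbers the standard approximation P = 1 - exp(-n(n-1)/2m) indeed gives about 50%; a quick check in C++:

Code: Select all
#include <cmath>
#include <cstdio>

int main()
{
    double n = 5400.0;       // stored entries
    double m = 20940347.0;   // table slots
    // birthday approximation: P(at least one index collision)
    double p = 1.0 - std::exp(-n * (n - 1.0) / (2.0 * m));
    std::printf("P = %.2f\n", p);   // prints about 0.50
    return 0;
}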
Onno Garms
 
Posts: 128
Joined: 17 Feb 2007, 11:17
Location: Bonn, Germany

