A Few General Questions on Parallel Search
Posted: 31 Aug 2007, 02:02
It would take a lot of effort to reinvent techniques and to make (and learn from) mistakes that others have already painstakingly worked through; so I have a few questions, whose answers I could not easily find on the internet, for people who already have a lot of experience with parallel search.
Thanks in advance.
- How would you handle move ordering differently in a parallel search? For example, would you share killers for the next ply at the split point? What is the best way to handle hash tables (transposition table, pawn hash, etc.)? How would you make the history heuristic work best? Is it worth spending time copying all the move-ordering data (like history and hash entries) when splitting?
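To make the copy-versus-share question concrete, here is a hypothetical sketch of one way the trade-off could look at a split point: killer slots are small, so copying them is cheap, while a full history table is large, so sharing a pointer to the master thread's table avoids the copy. All names and structures here are illustrative, not Buzz's actual code.

```c
#include <string.h>

#define MAX_PLY 64

/* Illustrative killer-move storage: two slots per ply. */
typedef struct {
    int killer1[MAX_PLY];
    int killer2[MAX_PLY];
} Killers;

/* Hypothetical split-point record for move-ordering state. */
typedef struct {
    Killers killers;      /* copied: small, and helpers may overwrite it */
    const int *history;   /* shared: too big to copy at every split      */
    int ply;
} SplitPoint;

void split_init(SplitPoint *sp, const Killers *master_killers,
                const int *shared_history, int ply)
{
    sp->killers = *master_killers;  /* struct copy of the killer slots */
    sp->history = shared_history;   /* just alias the master's table   */
    sp->ply = ply;
}
```

Sharing the history table read-only tolerates benign races; whether that hurts or helps move ordering in practice is exactly the open question.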
- For Buzz, I only split when alpha+1 == beta, so I don't have to handle bound updates; however, I still have to check for cutoffs. To do this I loop through the entire search stack at every new node and poll the split points for a cutoff flag. Is there a more efficient or elegant way to do this?
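For concreteness, here is a hypothetical sketch of the polling scheme described above: each search-stack entry may carry a pointer to a split point, and at every new node the thread scans the whole stack checking each split point's cutoff flag. The names are illustrative, not Buzz's actual code.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PLY 64

/* Illustrative split point: one shared flag per split. */
typedef struct {
    volatile bool cutoff;   /* set by whichever thread fails high */
} SplitPoint;

/* Illustrative search-stack entry. */
typedef struct {
    SplitPoint *split;      /* NULL if no split at this ply */
} StackEntry;

/* Scan the entire stack up to the current ply, as described above. */
bool cutoff_pending(const StackEntry *stack, int ply)
{
    for (int i = 0; i <= ply; i++)
        if (stack[i].split != NULL && stack[i].split->cutoff)
            return true;
    return false;
}
```

The cost of this scan grows with search depth even when no splits are active, which is presumably what prompts the question.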
- When creating a split point I use malloc, and when destroying it I use free. Does the choice between dynamic and static memory allocation matter for performance?
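As a hedged sketch of the static alternative being asked about: since splits nest like recursive calls, a small preallocated pool per thread can replace malloc/free in the hot path with a simple index bump. Everything here (names, pool size) is an assumption for illustration.

```c
#include <stddef.h>

/* Bound on simultaneously active splits per thread (assumed value). */
#define MAX_ACTIVE_SPLITS 8

typedef struct { int dummy; } SplitPoint;  /* stand-in for real fields */

/* Hypothetical per-thread pool: pool[0..used-1] are live. */
typedef struct {
    SplitPoint pool[MAX_ACTIVE_SPLITS];
    int used;
} SplitPool;

SplitPoint *split_alloc(SplitPool *p)
{
    if (p->used >= MAX_ACTIVE_SPLITS)
        return NULL;               /* refuse to split: pool is full */
    return &p->pool[p->used++];
}

void split_release(SplitPool *p)
{
    p->used--;                     /* LIFO: splits nest like calls */
}
```

Besides avoiding allocator latency, a static pool sidesteps any locking malloc may do internally when several threads allocate at once.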
- I have seen postings here about "buffering 64 bytes between data structures" or something similar. What is this, and why would you do it?
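One common reason to pad structures out to 64 bytes (a typical x86 cache-line size) is avoiding false sharing: if two threads write fields that happen to sit on the same cache line, the line ping-pongs between cores even though the threads never touch each other's data. A minimal sketch, assuming 64-byte lines and illustrative names:

```c
/* Assumed cache-line size for typical x86 hardware. */
#define CACHE_LINE 64

/* Each thread's hot counter is padded to fill a whole cache line,
 * so adjacent array entries never share a line. */
typedef struct {
    long nodes;                           /* written constantly by one thread */
    char pad[CACHE_LINE - sizeof(long)];  /* fills out the rest of the line   */
} PaddedCounter;

/* One counter per thread; without the padding, several of these
 * would pack into one cache line and ping-pong between cores. */
PaddedCounter node_counts[16];
```

The same idea applies to split-point structures themselves: keeping each one on its own cache line stops the threads working at one split from invalidating lines used by threads at another.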
- Are there other coding tips you would give for better scaling?