Hi all -
With server systems becoming so incredibly powerful, with fast processors, multiple CPUs, multiple cores, loads of RAM, etc., the average nodes per second an engine can achieve keeps growing. A 128 MB hash table may have been sufficient in the past, but I believe more memory is needed today; my personal belief is that more is better.
I have recently implemented a function in xyclOps that optimizes the size of the hash table for the specific system it is running on. I believe Crafty originated this idea (called "adaptive hash") and has been using it for some time now (since v19, I think). The concern is that on fast systems with long time controls, the hash table can fill up, with possible negative performance consequences. This seems especially likely if the engine uses some form of dynamic move-time allocation, i.e. giving extra time to search a difficult or critical position.
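For illustration, here is a minimal sketch of how such adaptive sizing might work, assuming a Linux/glibc host (sysconf with _SC_PHYS_PAGES is a common glibc extension, not strict POSIX). This is not Crafty's or xyclOps' actual code, just the general idea: query physical RAM at startup and pick the largest power-of-two table that fits within some fraction of it.

/* minimal sketch of "adaptive hash" sizing; assumes Linux/glibc,
 * and is NOT Crafty's or xyclOps' actual implementation */
#include <stdio.h>
#include <unistd.h>

#define ENTRY_SIZE 16     /* bytes per hash entry (typical) */
#define RAM_FRACTION 4    /* use at most 1/4 of physical RAM */

static size_t adaptive_hash_bytes(void)
{
    /* _SC_PHYS_PAGES is a common glibc extension, not strict POSIX */
    long pages = sysconf(_SC_PHYS_PAGES);
    long page_size = sysconf(_SC_PAGE_SIZE);
    if (pages <= 0 || page_size <= 0)
        return 128UL * 1024 * 1024;   /* fall back to the old 128 MB */

    size_t budget = (size_t)pages * (size_t)page_size / RAM_FRACTION;

    /* round down to a power of two so the table index is a cheap mask */
    size_t size = 1;
    while (size * 2 <= budget)
        size *= 2;
    return size;
}

int main(void)
{
    size_t bytes = adaptive_hash_bytes();
    printf("hash table: %zu MB (%zu entries of %d bytes)\n",
           bytes >> 20, bytes / ENTRY_SIZE, ENTRY_SIZE);
    return 0;
}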
if hash table entry size = 16 bytes (typical)
if nps = 700,000 (many programs today run much faster than this)
and if time control = 40 moves/40 minutes (1 minute per move for simplicity)
then
60 secs x 700,000 nps = 42,000,000 nodes per move
and
42,000,000 nodes x 16 bytes = 672,000,000 bytes = 656,250 KB ≈ 640 MB
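As a quick sanity check, here is the same arithmetic in code, with one added step that is my own convention rather than part of the numbers above: hash tables are typically sized to a power of two so the table index can be a cheap bit mask, which means the ~640 MB figure rounds up to a full 1 GB in practice.

/* the back-of-the-envelope arithmetic from above, in C */
#include <stdio.h>

int main(void)
{
    const unsigned long long nps        = 700000ULL; /* nodes per second */
    const unsigned long long secs       = 60;        /* one minute per move */
    const unsigned long long entry_size = 16;        /* bytes per hash entry */

    unsigned long long nodes = nps * secs;           /* 42,000,000 nodes/move */
    unsigned long long bytes = nodes * entry_size;   /* 672,000,000 bytes */

    /* round up to the next power of two, the usual hash table convention */
    unsigned long long pow2 = 1;
    while (pow2 < bytes)
        pow2 <<= 1;

    printf("ideal: %llu bytes (~%llu MB), power-of-two size: %llu MB\n",
           bytes, bytes >> 20, pow2 >> 20);
    return 0;
}

Run on the numbers above, this prints ~640 MB ideal and a 1024 MB power-of-two size, which lines up with the 1 GB recommendation below.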
Robert Hyatt recommends a 1 GB hash for Crafty at time controls of 1 minute per move or more. Of course, Crafty achieves something like 6,000,000 nps (often more), a substantially higher rate than most engines.
If interested, here's a link to Dr. Hyatt's post in another forum:
http://64.68.157.89/forum/viewtopic.php?t=18490
My final thought here: if the server being used has abundant RAM (as most do), why not allocate it for engine use? As far as I know, in chess engine tournaments only two engines are playing on a given machine at any time, so if a server has 4 GB of RAM, one could allocate 1024 MB to each engine and still have 2 GB left over.
Best regards-
Norm Schmidt
www.xyclOps.com