Hello, I am the developer of an amateur chess engine. I have gotten it to work crudely with time control, but I want it to be a little smarter, and I need some suggestions. Here is how my engine works now:
Let's say the engine looks at the current clock, the current move count, and the time control (X moves in Y minutes), and from these calculates a number of seconds (say T) to spend on this move.
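Roughly, that calculation is something like the following simplified sketch (the names are placeholders, and my real code has more safety checks):

    // Simplified sketch of my time allocation; movesPerSession etc.
    // are placeholder names, not my engine's actual identifiers.
    class TimeAllocator {
        /** Seconds to spend on this move, given the remaining clock in ms,
         *  the move number, and an "X moves in Y minutes" control. */
        static double allocate(long clockMillis, int moveNumber, int movesPerSession) {
            // Moves still to play before the next time-control point.
            int movesLeft = movesPerSession - (moveNumber % movesPerSession);
            // Spend an even share of the remaining clock, keeping a small margin.
            return Math.max(0.1, clockMillis / 1000.0 / movesLeft - 0.1);
        }
    }

With T in hand, my approach is this: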
Start searching with iterative deepening from d = 2 to d = maxDepth. Also schedule a timed task that interrupts the search after T seconds and retrieves the best move found so far (provided the search has not already finished up to maxDepth early, and the user has not forced the AI to move now with the '?' command).
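In outline it looks like this (simplified: my real code interrupts the search thread, but a volatile stop flag stands in for that here, and searchRoot() and Move are placeholders for my engine's own search and types):

    import java.util.Timer;
    import java.util.TimerTask;

    // Simplified sketch of the search driver described above.
    class SearchDriver {
        volatile boolean stopSearch = false;

        Move think(double tSeconds, int maxDepth) {
            stopSearch = false;
            Timer timer = new Timer(true);               // daemon timer thread
            timer.schedule(new TimerTask() {
                @Override public void run() { stopSearch = true; }
            }, (long) (tSeconds * 1000));

            Move bestMoveSoFar = null;
            for (int d = 2; d <= maxDepth && !stopSearch; d++) {
                Move m = searchRoot(d);                  // polls stopSearch internally
                if (!stopSearch) bestMoveSoFar = m;      // keep only completed depths
            }
            timer.cancel();
            return bestMoveSoFar;
        }

        Move searchRoot(int depth) { /* alpha-beta search, omitted */ return null; }
    }

    class Move { /* my engine's move type */ }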
My timer works perfectly: it interrupts the ongoing depth-D search after T seconds and returns the bestMoveSoFar from the depth D-1 search. But my problem is this:
Let's say, for example, that I have decided to spend at most T = 40 seconds on this move. My engine searches to 6 plies in about 20 seconds, but the depth-7 search would only have completed after roughly 90 seconds. So after 40 seconds the timer task interrupts the search and returns the bestMoveSoFar from depth 6, as expected. But I have then wasted the last 20 seconds searching a depth that was never going to finish. I want to know how I can minimize this waste. Can someone point out a good way of deciding beforehand, "Will a deeper search complete within the given time frame, or should I save that extra time and just return now?"
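To make the question concrete, the kind of check I imagine (purely a guess on my part) predicts the next iteration's time from the ratio of the last two completed iteration times, a rough effective branching factor, and skips the next depth if the estimate blows the budget:

    // A rough guess at the check I want, called before starting depth d+1.
    // All parameters are in seconds; the fallback factor of 4.0 is just a
    // placeholder until two iterations have completed.
    boolean shouldStartNextDepth(double elapsedSec, double lastIterSec,
                                 double prevIterSec, double budgetSec) {
        double ebf = (prevIterSec > 0) ? lastIterSec / prevIterSec : 4.0;
        double predictedNextSec = lastIterSec * ebf;
        // Start the next depth only if it is likely to finish in time.
        return elapsedSec + predictedNextSec <= budgetSec;
    }

Is something along these lines sound, or is there a better-known approach?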
Thanks for any help...