Tord Romstad wrote:
H.G.Muller wrote: Also the 'depth' argument on qsearch seems superfluous, since at that point depth is known to be zero.
Yes, you are right again. It is not superfluous in Glaurung, which is why I put it there in the pseudo code by reflex.
You make me curious! What in your qsearch is depth-dependent?
My 25-year-old chess engine 'Usurpator' has a QS that considers far too many moves (such as capturing pieces whose defense count was lowered in the previous ply), which gives it a strong tendency to explode. That problem became painfully obvious when I competed with this program in the 25th Dutch Computer Chess Championship on a modern computer, and to combat it I am considering introducing a depth parameter in QS.
The purpose would be to do iterative deepening in QS, in order to force a breadth-first rather than a depth-first traversal of the QS tree. This would restore the efficiency of alpha-beta pruning, which in a depth-first search of a tree of very uneven depth can be totally destroyed by even an occasional wrong guess at the best move. Of course the search would be a very unstable one, since it is the very nature of QS that branches not yet explored to their full depth have scores that are totally meaningless. Nevertheless I expect much better performance from ID than from depth-first search, since many very shallow branches will quickly be brought within the horizon, narrowing the window and making the very deep branches (which often lead far outside the window) manageable.
One would have to do minimax and alpha-beta a bit differently from usual, though, since a very high but meaningless score would be no reason to carry out a beta cutoff: in QS the score is likely to be revised at larger depth by an unlimited amount, and that would bring all the moves we skipped with a beta cutoff back into view. So the sub-tree search at a depth d would return a score range, rather than a single value, if some of its branches that ran beyond depth d are still relevant (according to alpha-beta).

The tentative 'value' (pes, opt) at a certain node (the equivalent of the best-score-so-far of minimax) would then be updated on a move that returned the range (a, b) to (max(a, pes), max(b, opt)), and the search window (alpha, beta) would be narrowed to (max(pes, alpha), beta). A beta cutoff would only occur if pes >= beta. After all (acceptable QS) moves have been considered, 'opt' really gives the maximum score that can be expected from this node, and the range (pes, opt) is returned. Which, by negamax, is turned into (-opt, -pes) at the parent node. After a beta cutoff 'opt' is meaningless, because the pruned moves might be arbitrarily good, and (pes, inf) is returned. (But since pes >= beta no one cares, and at the parent the (-inf, -pes) will be a move that failed low and can be completely ignored.) A move at the enforced horizon that does belong to the QS repertoire would 'score' a range (-inf, +inf), so that a node from which such moves are possible has no upper limit to its score (and the lower limit might be set by the static evaluation).