Daniel Shawul wrote: Do you do a non-zero-depth null move in qsearch? I only do a zero-depth, i.e. eval(), null move.
No, that is part of the problem. To recognize a threat, I have to do a depth of at least one. In principle this is no problem: to get the score of a recapture you also do a depth-1 search on it (so a depth-0 = QS search on the reply). So it is not any more expensive than any other QS move. It is just that the old version frequently cuts off on the 'depth-0 null-move search' without considering any move, at the expense of being totally blind to any threat.
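The idea can be sketched on a toy game tree (my own illustration, not the actual engine code; the node layout with 'eval', 'caps', 'evasions' and 'null' children is made up): instead of cutting off in QS on the bare stand-pat score (the 'depth-0 null move'), we pass and give the opponent one ply to demonstrate a threat, and only trust the cutoff if no threat shows up.

```python
def qs(pos, alpha, beta, allow_null=True):
    """Toy quiescence search with depth-1 null-move threat detection.
    pos is a hypothetical node: {'eval': score for the side to move,
    'caps': capture children, 'evasions': threat-evasion children,
    'null': the node after passing (side to move flipped)}."""
    stand = pos['eval']
    if stand >= beta:
        if not allow_null:
            return stand                  # old behaviour: blind depth-0 cutoff
        # depth-1 null move: the opponent replies once before we cut off
        score = -qs(pos['null'], -beta, -beta + 1, allow_null=False)
        if score >= beta:
            return score                  # no threat found: fail high
        best = score                      # threat found: distrust stand-pat
    else:
        best = stand
    alpha = max(alpha, best)
    for child in pos['caps'] + pos.get('evasions', []):
        score = -qs(child, -beta, -alpha)
        if score > best:
            best = score
            if score > alpha:
                alpha = score
                if alpha >= beta:
                    break
    return best

# Toy example: we are +5, but our queen (worth 9) hangs, so -4 if lost.
after_qxq = {'eval': -4, 'caps': []}           # queen is gone, no recapture
opp = {'eval': -5, 'caps': [after_qxq]}        # opponent to move after the pass
safe = {'eval': -5, 'caps': []}                # queen saved (opponent's view)
trapped = {'eval': 5, 'caps': [], 'null': opp}   # queen has no escape
root = dict(trapped, evasions=[safe])            # queen can run to safety
```

With the blind depth-0 cutoff the trapped position fails high at +5; the depth-1 null move sees the queen fall and returns -4, while in the position with an escape square the evasion restores the score.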
Since I search all non-losing captures in qsearch, what I need to consider is only taking my queen to a safe square. I guess in your case you only do recaptures of the last-moved piece.
That, and lower takes higher (i.e. 'obviously winning' captures). Otherwise I could not recognize any threat after null move, since there is nothing to recapture then.
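This selection rule is simple enough to write down; here is a hypothetical illustration (the move representation and piece values are my own, not from the posts):

```python
# Which captures to try in QS: recaptures on the square of the last
# move, plus 'lower takes higher' captures that are obviously winning.
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def try_in_qs(attacker, victim, to_square, last_to_square):
    if to_square == last_to_square:
        return True                   # recapture of the last-moved piece
    return PIECE_VALUE[attacker] < PIECE_VALUE[victim]   # lower x higher
```

Note that equal exchanges (e.g. R x R) and higher x lower are skipped unless they happen to be recaptures.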
I think that you can't easily detect hung pieces once you start considering threats on pieces other than King and Queen, and hidden attacks (not just by the last-moved piece).
This might be one of the reasons that it did not work in the minimalist approach. I have no SEE there; basically the QS is a recursive implementation of SEE, with only recaptures. The lower x higher does not catch higher x undefended lower, making the threat detection overlook lots of threats (though not those on King or Queen).
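A recaptures-only QS acting as a recursive SEE can be sketched like this (my own illustration under assumed conventions: each side's remaining attackers of the square are given as piece values, cheapest first):

```python
def exchange(victim, own, opp):
    """Value of capturing `victim` on one square, recaptures only.
    `own`/`opp` are the remaining attacker values of each side,
    cheapest first. A side may always decline to recapture (stand
    pat), which is what makes this a recursive min/max, like QS."""
    if not own:
        return 0
    # capture with the cheapest attacker, then the opponent recaptures
    gain = victim - exchange(own[0], opp, own[1:])
    return max(0, gain)               # declining the capture scores 0
```

For example, pawn takes hanging queen wins the full 9, while queen takes a pawn defended by a pawn is correctly declined.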
I do it 0x88-style, but it is still very expensive. Especially the generation of moves that take a hung piece to a safe square asks for a lot of effort.
Yes, without a SEE I simply try the moves and search. Very expensive... In my serious engine I have a very light recapture search that is hardly more work than a SEE, but even there it is expensive.
This sounds interesting. I never tried different search depths for the two sides. How does this interact with alpha-beta? The reason why I ask is that alpha-beta assumes that what is good for us is equally bad for the opponent.
But if the search depths are different, this may no longer hold true.
This is not different from when you use a single search depth; there, too, the score depends on the search depth. There always is only a single score, which you use in the ordinary way. (I am also working on a 'contingency minimax', which works with score ranges, but this is very experimental...) The scores are simply parameters that control the tree that you will search, but within the allowed tree you do ordinary minimax and alpha-beta.
The big problem is in the hash. To satisfy a request on a hit, the stored depth should be upward compatible, i.e. the depth for both sides should be larger than or equal to the requested depths. The biggest problem, though, is when you have to decide about replacement. The depth becomes a two-dimensional quantity, and there is no total ordering of 2D quantities... So if I have depth (5,3) in the table, and I needed and calculated (4,4), should I replace? If you replace based only on least-recently-used it is no problem, but if you want to make the decision based on depth, it is not clear what you should do. With a single depth, when I do a search at a lower depth than was in the table (because the bound in the table was not good enough to be used with the current window), I overwrite the larger depth, to prevent useless deep entries with worthless bounds (after a score readjustment due to deepening) from poisoning the hash table. I am not sure that this would be the best here.
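The 'upward compatible' hit test is easy to state in code (a sketch under the assumption that an entry stores the pair of depths it was searched to):

```python
def satisfies(stored, requested):
    """A stored result can be used for a probe only if the entry was
    searched at least as deep for BOTH sides; the depth pairs are only
    partially ordered."""
    return stored[0] >= requested[0] and stored[1] >= requested[1]
```

Note that neither (5,3) nor (4,4) satisfies the other, which is exactly why a depth-based replacement decision between them has no obvious answer.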
So I plan to go for the most-equal depth, because I expect the most balanced score for that. Giving one side more depth will tip the score in its favor (not too much, if you include adequate defensive moves in QS). This is not so bad: it will discourage each side from wasting its ply budget on delaying tactics, because that would hurt itself more than the opponent. I would perform (internal) iterative deepening in such a way that I deepen the side with the smallest depth first, to steer the search and the hash content towards balanced entries.
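The deepening order described can be sketched as follows (a hypothetical schedule generator, not from the posts):

```python
def deepening_schedule(target):
    """Iterative deepening with two-sided depths: always extend the
    side that currently has the smaller depth, so the (d1, d2) pairs
    stay as balanced as possible on the way to the target depth."""
    d = [0, 0]
    steps = []
    while min(d) < target:
        side = 0 if d[0] <= d[1] else 1   # deepen the shallower side
        d[side] += 1
        steps.append((d[0], d[1]))
    return steps
```

Every pair visited differs by at most one ply between the sides, so the hash table fills with the balanced entries the scheme prefers.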