
Selectivity at depth == 1

PostPosted: 16 Jan 2006, 21:36
by smcracraft
Hi - does this make any sense?

#ifdef SELECTIVE
    // If depth is 1, with no reductions or extensions, and the eval is bad
    // relative to alpha, then search only captures, promotions, and
    // checking moves.
    if (depth == 1 && reduction == 0 && extension == 0)
        if (eval(bd, QUIET) <= alpha + (1 * uparams[pawn]))
            if (!(sml[mvi].cap != 0 || sml[mvi].pro != 0 || incheck(bd))) {
                unmakemv(bd);
                continue;
            }
#endif

I don't see any great improvement with a margin of 1*uparams[pawn] (one pawn), 3*uparams[pawn] (a bishop/knight), or 5*uparams[pawn] (a rook).

The point is that at depth == 1, with no reductions or extensions, if the eval is worse than alpha and the move is not a capture, a promotion, or a check, then skip searching it, since the move is unlikely to improve on the score we already have at alpha.

Perhaps someone can see what is wrong with the above, or can tell me whether
this is the wrong way to do it, or whether I should toss the idea entirely.

Stuart

Re: Selectivity at depth == 1

PostPosted: 16 Jan 2006, 21:56
by David Weller
Hi Stuart,

This looks a lot like futility pruning. Or am I missing something?

BTW- since you're pruning on the estimated value of the move, why not prune captures too, when they are equally futile ...

-David

Re: Selectivity at depth == 1

PostPosted: 17 Jan 2006, 17:53
by H.G.Muller
I guess the main problem, even with this kind of futility pruning, is that even a non-capture might be at the root of a material gain large enough to bring the score back into the window. Think of a pawn attacking a piece that has nowhere to go, or a move that gives a fork.

Of course, even if you did search the move, it is not obvious that such a threat would be expressed in the score. That depends on the quality of your QS + eval (e.g. whether it includes null move).

Re: Selectivity at depth == 1

PostPosted: 22 Jan 2006, 00:47
by David Weller
Stuart, it looks like you're not using a 'lazy' eval() + MARGIN

===================

A 'safe' version of futility pruning is guaranteed NOT to miss anything that the subsequent quies() would get [assuming standard stand-pat]

Because if the move cannot possibly raise alpha, then quies() will certainly fail high with its stand-pat score

Of course MARGIN must be the maximum possible contribution of the positional terms left out of your 'lazy' eval

Also, it seems to me this is still valid after any reductions or extensions

The benefit of futility pruning over lazy eval in eval itself [i.e., cutting out of eval when you realize the score is so good/bad that nothing else matters] is saving a few function calls/returns
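To make the 'safe' condition concrete, here is a minimal sketch of such a futility test. The names (`is_futile`, `lazy_eval`) and the `MARGIN` value are illustrative assumptions, not taken from any particular engine; the logic is simply Weller's argument: if even the maximum positional bonus cannot lift the lazy score to alpha, the child quies() must stand-pat fail high, so a quiet move can be skipped.

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed bound (centipawns) on the positional terms omitted from the
 * lazy (material-only) evaluation. */
#define MARGIN 200

/* Returns true when a quiet move at depth 1 can safely be pruned:
 * even adding the most optimistic positional bonus, the lazy eval
 * cannot reach alpha. Tactical moves are never pruned. */
static bool is_futile(int lazy_eval, int alpha, bool is_capture,
                      bool is_promotion, bool gives_check)
{
    if (is_capture || is_promotion || gives_check)
        return false;               /* always search tactical moves */
    return lazy_eval + MARGIN <= alpha;
}
```

Note that, per the discussion above, this only matches what quies() would return if MARGIN really is an upper bound on everything the full eval can add.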

Re: Selectivity at depth == 1

PostPosted: 22 Jan 2006, 20:53
by smcracraft
David Weller wrote:Stuart, it looks like you're not using a 'lazy' eval() + MARGIN

===================

A 'safe' version of futility pruning is guaranteed NOT to miss anything that the subsequent quies() would get [assuming standard stand-pat]

Because if the move cannot possibly raise alpha, then quies() will certainly fail high with its stand-pat score

Of course MARGIN must be the maximum possible contribution of the positional terms left out of your 'lazy' eval

Also, it seems to me this is still valid after any reductions or extensions

The benefit of futility pruning over lazy eval in eval itself [i.e., cutting out of eval when you realize the score is so good/bad that nothing else matters] is saving a few function calls/returns


You're right. I don't use futility at quiescence nodes nor lazy evaluation at all. The reason is that my evaluation is still pretty simple.

Also, Vincent says lazy evaluation is very bad (I think).

Do others share that view, and why?

Re: Selectivity at depth == 1

PostPosted: 22 Jan 2006, 21:58
by H.G.Muller
I think the idea of lazy evaluation is basically sound: If you have large and small terms in your evaluation, and you can put an upper bound on the sum of the small terms, and the difference between the sum of the large terms and what you need in that node to matter is larger than that upper bound, evaluating the small terms is a waste of time.

That wisdom might not help at all if your evaluation is such that there is no clear distinction between large and small terms, or if there are not enough small terms to benefit significantly from skipping them. If your eval is mainly material (very easy to calculate), on top of which you have very complicated positional terms that hardly contribute, lazy eval can save you a lot.

I think what Vincent meant is that good evaluation functions are not structured like that at all, they have to anticipate material gain from positional characteristics (hanging pieces, presence of fork squares, 'subtraction' threats, trapped pieces), which makes the (complicated) positional terms necessarily of the same order of magnitude as material. And then it is not clear how you can be lazy.
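The "large terms cheap, small terms bounded" structure H.G.Muller describes can be sketched in a few lines. Everything here is an assumed illustration (the stubs return constants, and `LAZY_MARGIN` is a made-up bound), but it shows the shape of the early-out:

```c
#define LAZY_MARGIN 150  /* assumed bound on the sum of the small terms */

/* Stubbed 'large' term: material, cheap to compute. */
static int material_score(void) { return 100; }

/* Stubbed 'small' terms: the expensive positional evaluation. */
static int slow_positional(void) { return 40; }

/* Lazy evaluation: if the cheap score is already so far outside the
 * (alpha, beta) window that the bounded small terms cannot bring it
 * back, return early and skip the expensive part. */
static int evaluate(int alpha, int beta)
{
    int score = material_score();
    if (score + LAZY_MARGIN <= alpha || score - LAZY_MARGIN >= beta)
        return score;               /* small terms cannot matter */
    return score + slow_positional();
}
```

Muller's caveat applies directly: if the positional terms can be as large as material (hanging pieces, forks, trapped pieces), no useful LAZY_MARGIN exists and the early-out never fires safely.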

Re: Selectivity at depth == 1

PostPosted: 23 Jan 2006, 09:55
by David Weller
It is true that lazy eval works best with a more complicated eval, but in your original code example you are implementing what seems to be futility pruning, albeit without an estimated eval

My point is that unless you use a statement such as, e.g.,

if (material[stm] - material[snm] + MAXPOS <= alpha) {
    ....
}

where 'lazy eval' here is just the material difference, there is no point to your code, which explains why there has been no gain from it

Because you're really saving nothing. In fact, it may be costing you.
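Weller's suggested test can be written out as a self-contained check. `stm`/`snm` (side to move / side not to move) and `MAXPOS` come from his snippet above; the array layout and the MAXPOS value are assumptions for illustration. Unlike Stuart's original version, this never calls the full eval() at all, which is where the saving comes from.

```c
#include <assert.h>
#include <stdbool.h>

#define MAXPOS 150  /* assumed upper bound on the total positional score */

/* material[0] and material[1]: running material totals for each side,
 * kept incrementally so this test costs only a subtraction and compare.
 * Returns true when even a maximal positional bonus cannot lift the
 * material balance to alpha, so a quiet move here is futile. */
static bool futility_cutoff(const int material[2], int stm, int alpha)
{
    int snm = 1 - stm;  /* side not to move */
    return material[stm] - material[snm] + MAXPOS <= alpha;
}
```

The design point is that the cheap material difference replaces a full eval() call in the pruning condition, so the prune saves work instead of spending it.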