Extended futility reduction

Postby Michael Sherwin » 18 Nov 2006, 11:00

Using extended futility reduction at remaining depth = 3 together with null move makes no sense to me. If the depth is reduced by one and a move is made, and then the other side does a null move and prunes the line, nothing was gained. If the null move does not prune (which should not happen), then the line was reduced in a dangerous position, and that makes no sense either. The same goes for futility at remaining depth = 2.

If a program is going to have futility and extended futility reductions, then it makes sense (to me) not to do them if a null move is going to be done. So delay the futility decision until after a null move was, for whatever reason, not made.
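
Roughly, the ordering I have in mind is this (only a sketch; the names are placeholders, not from any particular program):

Code: Select all
/* inside a node, before looping over the moves (sketch, invented names) */

/* 1. try the null move first */
if (!in_check && null_move_allowed) {
    make_null_move();
    score = -search(-beta, -beta + 1, depth - 1 - R_NULL);
    unmake_null_move();
    if (score >= beta)
        return score;        /* pruned; futility was never even considered */
}

/* 2. only now, with the null move skipped or failed, decide whether
      (extended) futility applies to the moves of this node              */
do_futility = (depth <= 3) && !in_check
           && static_eval() + futility_margin[depth] <= alpha;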

What am I missing?
Michael Sherwin
 
Posts: 168
Joined: 21 Jul 2005, 06:10
Location: USA

Re: Extended futility reduction

Postby mridul » 18 Nov 2006, 19:58

For depth <= 3, you will drop into qsearch after the null move.
So make decisions based on the null move only if your qsearch is good enough.
If it only searches captures, it would not be a good idea.


Mridul
mridul
 
Posts: 48
Joined: 09 Dec 2004, 11:34
Location: Bangalore , India

Re: Extended futility reduction

Postby H.G.Muller » 19 Nov 2006, 13:13

Futility reduction never makes sense. Reducing the move at depth=3 can only make the move more futile. There is no way to discover afterwards that it was a winner, if you reduce it and clip the follow-up ply off the tree. So you either prune it completely, or you search it to full depth. Searching it at a reduced depth is purely a waste of time, since you know the outcome in advance.

Futility pruning at depth=3 can only be based on the fact that the opponent can reply with the null move. So you prune moves that cannot possibly get above alpha against a null-move reply, just as at depth=1 you prune what cannot possibly get above eval against a stand-pat reply.

If null moves are not enabled, you can never decide at depth=3 that a move is futile: the opponent might be in zugzwang on the reply, and might have to play an arbitrarily bad move.
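
In pseudo-C the argument looks roughly like this (only a sketch; eval(), best_gain() and the margins are placeholders, not code from any real program):

Code: Select all
/* inside the move loop: */

/* d=1: a move is futile if even its best possible material gain cannot
   lift the static score above alpha against a stand-pat reply           */
if (depth == 1 && !in_check
    && eval() + best_gain(move) + MARGIN_1 <= alpha)
    continue;                       /* prune the move completely */

/* d=3: the same test, but the opponent's "stand pat" is now his null
   move (searched two plies shallower), so the margin must also cover
   what both sides can gain in those extra plies                          */
if (depth == 3 && !in_check
    && eval() + best_gain(move) + MARGIN_3 <= alpha)
    continue;                       /* futile against a null-move reply */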
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Extended futility reduction

Postby Pradu » 19 Nov 2006, 13:31

H.G.Muller wrote:Futility reduction never makes sense. Reducing the move at depth=3 can only make the move more futile. There is no way to discover afterwards that it was a winner, if you reduce it and clip the follow-up ply off the tree. So you either prune it completely, or you search it to full depth. Searching it at a reduced depth is purely a waste of time, since you know the outcome in advance.

Futility pruning at depth=3 can only be based on the fact that the opponent can reply with the null move. So you prune moves that cannot possibly get above alpha against a null-move reply, just as at depth=1 you prune what cannot possibly get above eval against a stand-pat reply.

If null moves are not enabled, you can never decide at depth=3 that a move is futile: the opponent might be in zugzwang on the reply, and might have to play an arbitrarily bad move.
Do you have test results asserting your claims? I have no conclusive results myself, but I am currently experimenting with futility reductions at all plies, and at least it isn't as bad as you say for my program. Futility reduction is basically LMR, except that positions are reduced based on the bound instead of moves being reduced based on move ordering. I don't see any logical reason why it won't work if LMR works for some chess programs. In fact you could say nullmove is a kind of futility reduction, except you add an extra move to the futility margin.

Code: Select all
Nullmove
Condition: do when not in check ... maybe when qsearch() > beta ... etc.
if ( searchscore(ply - R_NULLMOVE + margin of extra opponent move) > beta )
    "cut"
otherwise do regular search


Likewise with futility reductions
Condition: not sure yet, but possibly when qsearch + a margin < alpha, or qsearch - a margin >= beta, and not in check, and the last move was not a check and was not ordered with a value >= KILLER_MOVE
if ( searchscore(ply - R_FUTILITY) + FUTILITY_MARGIN < alpha )
    "cut"
otherwise do regular search

And the "futility" condition for beta, which is pretty much the nullmove condition except using futility margins instead of an extra opponent move
if ( searchscore(ply - R_FUTILITY) - FUTILITY_MARGIN >= beta )
    "cut"
otherwise do regular search


I believe "futility reductions", perhaps "out of bounds reductions" is a better term, will become a popular technique in computer chess. You can extremely low futility margins here, perhaps even 0 can work. You can also do additional reductiosn depending on how far away qsearch is from the bounds. For 10 pawns for example you can probalby reduce 3 times without too much a loss in tactical ability.
Last edited by Pradu on 19 Nov 2006, 21:26, edited 2 times in total.
Pradu
 
Posts: 343
Joined: 12 Jan 2005, 19:17
Location: Chandler, Arizona, USA

Re: Extended futility reduction

Postby Uri Blass » 19 Nov 2006, 14:40

H.G.Muller wrote:Futility reduction never makes sense. Reducing the move at depth=3 can only make the move more futile. There is no way to discover afterwards that it was a winner, if you reduce it and clip the follow-up ply off the tree. So you either prune it completely, or you search it to full depth. Searching it at a reduced depth is purely a waste of time, since you know the outcome in advance.


This is not correct.
I do not know the result of reducing a move in advance.

My qsearch includes checks at the first ply.
Even if I used a normal qsearch without checks, I might still discover checkmate in the qsearch, because if a capture in the qsearch threatens the opponent's king, the first thing I check is whether the move is checkmate, and I return a mate score in that case. So there is no situation in which I can be sure that a move is going to fail low based on static evaluation.
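
Roughly like this (pseudo-C; not my actual code, just the idea, with placeholder names):

Code: Select all
/* inside qsearch: a capture that gives check is tested for mate at once,
   so mates are never hidden behind a stand-pat cutoff (sketch only)      */
for (i = 0; i < num_captures; i++) {
    make_move(capture[i]);
    if (gives_check(capture[i]) && no_legal_moves()) {
        unmake_move(capture[i]);
        return MATE_SCORE - ply;    /* checkmate found in the qsearch */
    }
    score = -qsearch(-beta, -alpha);
    unmake_move(capture[i]);
    if (score >= beta) return score;
    if (score > alpha) alpha = score;
}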

Uri
Uri Blass
 
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

Re: Extended futility reduction

Postby H.G.Muller » 19 Nov 2006, 16:11

Well, then it is not really futility reduction. A move is futile if it cannot possibly recoup what you are short compared to alpha before you hit the horizon. So if a move is already futile for the current search depth, that will not change by reducing it.

Indeed, if you do not allow stand pat when in check, checking moves at d=1 cannot be futile. And as a consequence, as long as checks are possible, no move at d=3 can be futile.

LMR reduces moves because they seem bad from a long-term perspective. But bad is not the same as futile. It still makes sense to search them, at a lower depth, exactly because they are non-futile: at low depth you might discover that, despite the static judgement that they were poor moves, they might have tactical consequences that your static judgement could not foresee. If they have, it should become clear at low search depth, since the probability that a move which already seemed poor by static criteria will unexpectedly improve becomes progressively smaller the longer it takes.
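
In code the difference comes down to something like this (again only a sketch, with invented names):

Code: Select all
/* LMR: the move is NOT futile, it merely looks poor by static/ordering
   criteria, so search it shallower first and deepen only if it surprises */
new_depth = depth - 1;
if (moves_searched >= LATE_MOVES && !in_check
    && !is_capture(move) && !gives_check(move))
    new_depth -= LMR_R;                          /* reduce, do not prune */

score = -search(-alpha - 1, -alpha, new_depth);
if (new_depth < depth - 1 && score > alpha)      /* unexpected improvement */
    score = -search(-alpha - 1, -alpha, depth - 1);  /* re-search full depth */

/* futility, in contrast, is a statement about the score itself: if
   eval() + best_gain(move) <= alpha even at the full remaining depth,
   searching the move shallower cannot change that verdict               */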
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Extended futility reduction

Postby Pradu » 19 Nov 2006, 20:19

H.G.Muller wrote:Well, then it is not really futility reduction. A move is futile if it cannot possibly recoup what you are short compared to alpha before you hit the horizon. So if a move is already futile for the current search depth, that will not change by reducing it.
It can with tactical combinations. This is also the reason why you don't do beta cuts with qsearch plus the extra-move margin when doing nullmove, and instead do a reduced-depth search when the conditions are met (not in check, qsearch >= beta, etc.).

LMR reduces moves because they seem bad from a long-term perspective. But bad is not the same as futile.


They are identical: after you play a bad move you will eventually enter a "beta futile" position, and one more ply later an "alpha futile" position. If two bad moves are played in a row, LMR would (incorrectly) reduce the second one, but it shouldn't be reduced, because there is a possibility that it is within the bounds plus margins. But of course, if the reduced-depth search returns a contrary result because of tactical situations, you do a re-search to full depth, just like what you do with nullmove.

...at low depth you might discover that, despite the static judgement that they were poor moves, they might have tactical consequences that your static judgement could not foresee. If they have, it should become clear at low search depth, since the probability that a move which already seemed poor by static criteria will unexpectedly improve becomes progressively smaller the longer it takes.
Good explanation of why reduced-depth searches are done in general: because there are tactical combinations possible where you are momentarily out of bounds. This is probably also why regular futility pruning has been done near leaf nodes, with greatly increasing margins for each ply away from the leaf. With reduced-depth "futility" pruning you can do it at all nodes, and for each ply away from the leaves the margins only need to increase slightly, or not at all, compared to regular futility pruning.

I will make a new topic here on futility/out-of-bounds reductions after I'm done experimenting with them. I suspect the beta futility condition and nullmove will be doing pretty much the same thing, but we'll see how to handle it. Nullmove is also very useful for detecting threats, so I don't think we can throw it away completely.
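
The verification would look roughly like this (a sketch; R_FUTILITY and the margins are whatever they end up being tuned to):

Code: Select all
/* reduced "out of bounds" search, verified when it disagrees (sketch) */
score = -search(-beta, -alpha, depth - 1 - R_FUTILITY);
if (score > alpha)      /* contrary result despite the static judgement */
    score = -search(-beta, -alpha, depth - 1);  /* re-search at full depth */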
Last edited by Pradu on 19 Nov 2006, 21:17, edited 2 times in total.
Pradu
 
Posts: 343
Joined: 12 Jan 2005, 19:17
Location: Chandler, Arizona, USA

Re: Extended futility reduction

Postby Uri Blass » 19 Nov 2006, 21:15

H.G.Muller wrote:Well, then it is not really futility reduction. A move is futile if it cannot possibly recoup what you are short compared to alpha before you hit the horizon. So if a move is already futile for the current search depth, that will not change by reducing it.

Indeed, if you do not allow stand pat when in check, checking moves at d=1 cannot be futile. And as a consequence, as long as checks are possible, no move at d=3 can be futile.


In my case, even if I do not allow stand pat when in check, checking moves at d=1 cannot be futile, because my static evaluation knows whether the position is mate or not, so even without extending check replies in the qsearch a mate score is going to be produced.

Uri
Uri Blass
 
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

Re: Extended futility reduction

Postby H.G.Muller » 19 Nov 2006, 21:40

But if you don't allow stand pat, and you know it is not checkmate, you would have to extend for the evasion. And after the evasion the checking side would be in QS, and could perhaps capture a full queen (one that was hiding behind the king, which had to step aside from the check).

Or do you not allow stand pat only when you see it is checkmate? In that case only the checkmating moves would not be futile (and quite justly so! :wink: ).

As to Pradu's remarks:

Indeed, bad moves might lead to later moves becoming futile. But why make life difficult for yourself by trying to predict this in advance?

This is what I never understood about reductions: a reduction at an internal node will in the end result in a hard pruning when you approach the leaves. So why would you want to take the decision to make that pruning so far in advance? You will still have the opportunity to do it up to the point where the reduction would amount to a hard pruning, and at that point you have all the information that you would otherwise have to guess.

So suppose you see a move at d=7 and you suspect that it is a poor one, one that will depress your score so much that at d=3 you might still be so far below alpha that all moves there will be futile... Why not just play on? If you guessed right, futility pruning at d=3 will do its job. But if you decide to reduce, and you guessed wrong because an early unforeseen tactical exchange brings you back inside the window, you are stuck with the reduction. In the best case you would have to do a re-search. In the worst case, the tactical exchange would occur at the end of the tree, in the part that you reduced away, because the moves at the old d=1 are not really futile (they capture a high piece), and you would not detect it at all.

It seems you have absolutely nothing to win by reducing, only to lose!
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

