(Internal) Iterative Deepening


(Internal) Iterative Deepening

Postby H.G.Muller » 18 Feb 2006, 20:48

I have a question about iterative deepening. I thought the standard implementation was as follows (very schematically):
Code:
int search(MOVE move, int depth)
{
    DO_MOVE(move);
    for(depth_cnt = 1; depth_cnt < depth; depth_cnt++)
    {   /* each iteration searches all moves one ply deeper than the previous one */
        best_score = -INFINITY;
        FOR_ALL_MOVES
        {
             score = -search(nextmove, depth_cnt);
             move_list[nextmove].score = score;

             best_score = max(score, best_score);
             if(score > beta) break;            /* beta cutoff */
        }
        SORT_MOVES_BY_SCORE;   /* best moves first for the next, deeper iteration */
    }
    UNDO_MOVE(move);
    return best_score;
}

In words: first we search all moves to depth n, so that we know in which order we have to search them to depth n+1.

Is this really the way that everyone does it?

It has the disadvantage that alpha-beta pruning strongly interferes with the underlying idea:

Suppose on iteration 1 we find the best move, and it is good enough to cause a beta cutoff. This remains so up to depth 5: we get a quick and efficient cutoff. But on iteration 6 the move turns out not to be so good after all, and no longer gives the cutoff. So we have to search the other moves to 6 ply, to see what the best move is there (and hopefully find another one that gives the cutoff). But we know almost nothing about those moves: they were not searched on iteration 5, and some of them (or even all of them) might not even have been searched to a depth of 1 (if we picked our cutoff move there early). We run the risk of picking a total blunder as the next move to search to 6 ply. The move sorting is based on purely static criteria.

It seems much smarter to slowly build up the depth of the other moves starting from 1:

Code:
int search(MOVE move, int depth)
{
    DO_MOVE(move);
    for(depth_cnt = 1; depth_cnt < depth; depth_cnt++)
    {
        best_score = -INFINITY;
        FOR_ALL_MOVES_IN_LIST
        {
             k = move_list[nextmove].depth;     /* depth this move was already searched to */
             if(k < depth_cnt-1) depth_cnt = k+1; /* should not happen without pruning */

             /* keep deepening this move while it seems to give a cutoff (up to
                the full depth), or while it lags behind the current iteration */
             while((move_list[nextmove].score > beta && k < depth-1) || k < depth_cnt)
             {
                    score = -search(nextmove, ++k);
                    move_list[nextmove].score = score;
                    move_list[nextmove].depth = k;  /* remember how deep it has now been searched */
             }
             best_score = max(move_list[nextmove].score, best_score);
             if(move_list[nextmove].score > beta) goto cutoff;   /* break out of both loops */

             if(move_list[nextmove].score < best_score - MARGIN) break;  /* prune: too far behind */
        }
        SORT_MOVES_BY_SCORE;
    }
cutoff:
    UNDO_MOVE(move);
    return best_score;
}

In words: as long as there is a move in the list that seems to give us a cutoff, deepen that move towards the maximum depth. But stop doing so as soon as it no longer gives us a cutoff, and then continue with the other moves one by one at the depth we were originally working on, until another move crosses the beta threshold.

If we want to prune we can make MARGIN < INFINITY, so that moves that fall too far behind in score (i.e. are proven very bad at low depth) are ignored, either until best_score drops because the moves responsible for it lose value on deeper search, or until we change our mind and increase MARGIN again (e.g. when all other moves have been considered to the requested depth, in the hope that this will never happen because a cutoff occurs first).

The window is determined by best_score, which is based on the scores of the moves at the largest depth currently known for each move, rather than on the scores at the depth we are currently working on.

Re: (Internal) Iterative Deepening

Postby Chan Rasjid » 18 Feb 2006, 21:15

It seems your topic is not iterative deepening.

I think most of us understand ID as searching only the root move-list at increasing depth until time-out. Beta would be INFINITY and there is no cutoff.

Aspiration at the root will require a re-search if the score exceeds the upper bound.
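
For concreteness, root iterative deepening with an aspiration window, as described here, might look like the following sketch; search_root(), time_left(), ASPIRATION and MAX_DEPTH are illustrative names, written in the same schematic style as the code above.

Code:
/* Schematic root iterative deepening with an aspiration window.
   search_root(), time_left(), ASPIRATION and MAX_DEPTH are illustrative. */
int score = 0;

for(int d = 1; d <= MAX_DEPTH && time_left(); d++)
{
    int alpha = score - ASPIRATION;   /* window around the previous score */
    int beta  = score + ASPIRATION;

    score = search_root(alpha, beta, d);

    if(score >= beta)                 /* failed high: re-search with an open upper bound */
        score = search_root(score, +INFINITY, d);
    else if(score <= alpha)           /* failed low: re-search with an open lower bound */
        score = search_root(-INFINITY, score, d);
}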

Best Regards,
Rasjid

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 18 Feb 2006, 21:21

OK, I should have made it clear that I am talking about internal iterative deepening here. That is done in every node. I changed the title of the topic accordingly. (Nice that you can do that if you started the thread. :D )

Oh, and if there is a hash table I suppose you would consult it before calling search recursively, so that in case of a hit you would know if the stored depth was actually higher than the one you needed at that time, and record that higher depth in the move_list. If at the end of the search routine you store the value of the position in the hash table, you have a choice of which depth to assign to it. If the move that gave you the cutoff had a higher depth than the requested search, you could report a depth one higher than that, even though the position itself was only officially searched to a lower depth!
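
A minimal sketch of that per-move hash probe, in the style of the pseudocode above; tt_probe(), child_key() and the HASHENTRY fields are assumed names, not from any particular engine.

Code:
/* Before deepening a move to depth d, probe the hash table for the child
   position; on a hit with sufficient depth, reuse the stored score and
   record the (possibly higher) stored depth in the move list.
   tt_probe(), child_key() and the HASHENTRY fields are assumed names. */
int deepen_move(MOVE m, int d)
{
    HASHENTRY *e = tt_probe(child_key(m));

    if(e != NULL && e->depth >= d)
    {
        move_list[m].score = -e->score;   /* stored from the child's point of view */
        move_list[m].depth = e->depth;    /* may be deeper than we asked for */
    }
    else
    {
        move_list[m].score = -search(m, d);
        move_list[m].depth = d;
    }
    return move_list[m].score;
}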

Re: (Internal) Iterative Deepening

Postby Alessandro Scotti » 18 Feb 2006, 23:03

Hi H.G.,
I'm a bit confused by your definition of IID. The one I've always used is: within an internal node, if we don't have a good move to try first (like from the transposition table or the PV array), search the current node at reduced depth and pick the best move from that search to try first. In my engine it is something like:

Code:
    if( UseIID && depth >= IID_Depth && pv_move == 0 ) {
        search( tree, alpha, beta, ply, depth - IID_Reduction );

        pv_move = node->pv[0]; // Best move found by above search
    }


I will then search the node normally and try "pv_move" first.

Re: (Internal) Iterative Deepening

Postby Robert Allgeuer » 19 Feb 2006, 09:37

Alessandro Scotti wrote:Hi H.G.,
I'm a bit confused by your definition of IID. The one I've always used is: within an internal node, if we don't have a good move to try first (like from the transposition table or the PV array), search the current node at reduced depth and pick the best move from that search to try first. In my engine it is something like:

Code:
    if( UseIID && depth >= IID_Depth && pv_move == 0 ) {
        search( tree, alpha, beta, ply, depth - IID_Reduction );

        pv_move = node->pv[0]; // Best move found by above search
    }


I will then search the node normally and try "pv_move" first.


Hi,
but couldn't you imagine also a scheme where, in the interior nodes, recursively the same is done as in the root node, i.e. in each interior node you start at depth 1 and go up to the current remaining depth? What then happens in the tree is probably a bit mind-boggling, but on the other hand: why should a scheme that is good for the root node not also be good for interior nodes?

Robert

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 19 Feb 2006, 12:25

Indeed, when I talk about internal iterative deepening, I had in mind what Robert says. From Alessandro's remark I understand that this is not generally applied. But it seems to follow logically from the idea behind iterative deepening that one should do this:

If the most efficient way to search a position to N ply, when you don't have a clue about the best move, is to first do a shallower search to depth N-R to get that clue... then at depth N-R you are faced with the same dilemma! Because you still don't have a clue. So rather than starting the (N-R)-ply search with random move ordering, it would on average save time to first get a best move from an (N-2R)-ply search. Etcetera...

Of course in the larger scheme of things at some point it might not matter much: if N=10 and I have to choose whether to get my move clue from a stupid 3-ply search, or from a smarter 3-ply search that used the 2-ply move clue (and perhaps aspiration), the difference might be so small compared to the work of even the smartest 10-ply search that you would not notice it. But if in principle it is better to do the smaller searches first, it will never hurt you to do it. Hence I always start iterating from depth 1 (if I have no hash move to rely on).

The best value for R should be found through experimentation. For me it was 1 ply, but with another QS the results might oscillate between even and odd depths, and then R=2 would probably be a better choice.

All of this is especially true without hashing, where you never have a move clue from the table, and the internal deepening would traverse the tree in an utterly jittery way. But despite that, the number of nodes visited would still be smaller than with a depth-first tree walk (unless you had some move-ordering oracle). With hashing you don't really notice anything of this: if an N-ply search has been done from the root, and we are now working on an (N+1)-ply search, every node in the tree will in general be in the hash table to a depth that is at most one less than we need now (provided the search is sufficiently stable). So in every node we just have to stack one iteration on top of the retrieved hash result, which is exactly the iteration we would have done if we had not done any internal iterative deepening at all. So the internal deepening has absolutely no consequence then! But you will notice it when the PV switches from one iteration to the next: then you will suddenly need lots of nodes that you have never visited before, at high depth. Progressing carefully in those nodes, rather than plunging in depth-first, might save huge amounts of time.

If I do have a hash-table hit that gives me a move clue, but the depth is insufficient, I still might do iterative deepening despite the clue. Because a clue based on a 3-ply search (retrieved from the table) is in general not reliable enough to confidently base a 9-ply search on. If it were, one could do iterative deepening with R=7. But if the optimal R is known to be 2, you know that on average it saves nodes to do 3-, 5-, 7-ply searches first; that this is better than doing 3-, 6-, 9-ply or 3-, 9-ply. Otherwise R=2 would not have been optimal! And the fact that you happen to get the 3-ply search for free from the table does not change that. So I iterate the retrieved hash-table result up in the normal way until I reach the requested depth for that node.
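
A sketch of that policy; hash_hit, hash_depth, hash_score, R and search_this_node() are illustrative names, and each loop pass stands for one internal iteration of the current node.

Code:
/* Start the internal deepening where the hash table left off, and iterate
   up in the normal steps of R instead of jumping straight to full depth.
   hash_hit, hash_depth, hash_score, R and search_this_node() are
   illustrative names. */
int d     = 1;
int score = -INFINITY;

if(hash_hit)
{
    d     = hash_depth + R;   /* e.g. continue a retrieved 3-ply result */
    score = hash_score;
}
for( ; d <= depth; d += R)
    score = search_this_node(alpha, beta, d);  /* each pass starts with the
                                                  best move of the previous one */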

Re: (Internal) Iterative Deepening

Postby David Weller » 20 Feb 2006, 13:46

Hi HG,

Because of the nature of a depth-first search, IID effectively DOES the same thing as iterative deepening at the root.

The minimal depth is searched first, because each successive call is relegated until the minimal depth is reached...

-David

Re: (Internal) Iterative Deepening

Postby Richard Pijl » 20 Feb 2006, 14:29

If you apply IID unrestricted, it will not do any good, but will burn a lot of nodes instead. Doing IID in nodes that will not provide a cutoff (so where the order of the moves is not important) basically wastes all of the nodes used in the IID search. So one of the main tricks is to only try IID when a cutoff can be expected (or more precisely, where a score > alpha is expected).
Second: Keep the cost of IID as low as possible. A shallow depth search is usually sufficient to find the best move, so reducing by only a fixed R is again a waste of nodes. I do depth/2 for depth>4, depth-2 otherwise.
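
Put schematically (expect_fail_high(), best_move_from_iid() and NO_MOVE are assumed names; the reduction follows the rule just stated):

Code:
/* Only try IID where a score > alpha is plausible, and use a larger
   reduction at larger depths: depth/2 for depth > 4, depth-2 otherwise.
   expect_fail_high(), best_move_from_iid() and NO_MOVE are assumed names. */
if(hash_move == NO_MOVE && expect_fail_high(node))
{
    int iid_depth = (depth > 4) ? depth/2 : depth-2;

    if(iid_depth > 0)
    {
        search(node, alpha, beta, iid_depth);
        hash_move = best_move_from_iid(node);   /* try this move first in the real search */
    }
}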

Richard.

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 20 Feb 2006, 18:39

Good point. An alpha node close to the root might lie in a part of the tree that had not been visited until the iteration from the root reached a high total depth, and then suddenly becomes significant due to a change in the PV. So if the current eval is so far below alpha that this seems a dead end, we might as well go for the full depth immediately, even without knowing anything about any move yet.

The question of course is how reliable your expectation for this can be. If a node looked very bad based on the current eval, but there was a single move out of 40 that unexpectedly saved the day... The potential savings are enormous if I could be sure to start searching that move, rather than waiting until I happen to pick it by accident. That does not have to happen too often to pay back the investment of the shallower search.

Another consideration is this:
If I am in an alpha node, every node of mine in the tree behind it is also an alpha node (if the opponent's move ordering is good). If I am unexpectedly diverted to an alpha node at high remaining depth that I encounter for the first time, I might refrain from IID. But all moves from there lead to the opponent's beta nodes, and he most likely would love to do IID, to make sure he picks the cutoff move at the earliest possible stage. In doing the IID he will slowly push out the horizon over my alpha nodes that lie behind that (perhaps behind his null move, although if he does too many null moves in a row his luck will probably not last long...). So although these alpha nodes might not be doing IID by choice, they are successively called at increasing depth, and since they remember through the hash table how far they got last time, they will simply continue from there. So in a sense the ID is forced upon them by the underlying opponent's beta node. It seems that the choice to do IID or not is only non-illusory in alpha nodes where the underlying beta node decides to try a completely new move that is going to earn it a cutoff immediately at large depth. And that sounds a bit like over-confidence...

The latter is exactly the situation I was worrying about. If IID only pays off in beta nodes, you can read my posting as a question of how to handle it in such beta nodes. The IID 'per move', rather than iteration by iteration, still seems a lot safer to me. That, as a side effect, it also slowly increases the depth of the alpha nodes behind it does not hurt. Although these alpha nodes did not need it for their move ordering, the underlying beta node benefits from it to make sure it is diverting the search time to the alpha node that is most worth it; to compare these alpha nodes it first asks their value at 1 ply, then at 3, etc., so that it can start avoiding a node at the earliest possible time, without having spent unnecessarily much effort on it. And when an alpha node is interrogated for 3 ply, it now happens to have a 1-ply result stocked in the hash table. Completely for free.

Re: (Internal) Iterative Deepening

Postby Daniel Shawul » 21 Feb 2006, 08:29

We need to do IID at exact and fail-high nodes, but how do you know a node is a fail-low (alpha) node before you search it?
Because of this, and since it improves move ordering, I do IID at every node where I don't get a hash-table move. At blitz this will be a slowdown, as you will search more nodes, but with enough time it will help, because move ordering matters there.

I have not fully understood what you are doing, but I have read that some people do iterative deepening like this:
if we discover that the PV move fails low at iteration 10, for example,
we restart the iteration from depth = 1. This might have the same effect as internal IID, but I am not sure.

Re: (Internal) Iterative Deepening

Postby Richard Pijl » 21 Feb 2006, 11:11

Daniel Shawul wrote: We need to do IID at exact and fail-high nodes, but how do you know a node is a fail-low (alpha) node before you search it?

You don't know, but you can guess.
In PVS pv-nodes, one of the children is a pv node, the rest are expected to be zero-window cutnodes.
Children of cutnodes are expected to be all-nodes.
Using this simple scheme you can reduce many of the unneeded IID tries. Once in a while you'll skip a few useful ones too, but that's just bad luck.
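
A sketch of that guessing scheme; the enum and function names are illustrative, and the ALL-to-CUT case is the usual completion of the classification rather than something stated above.

Code:
/* Predicted node types in a PVS framework: the first child of a PV node is
   predicted PV and its siblings CUT; children of CUT nodes are predicted
   ALL; the ALL -> CUT case is the usual completion of the classification.
   All names are illustrative. */
enum NodeType { NODE_PV, NODE_CUT, NODE_ALL };

static enum NodeType predict_child(enum NodeType parent, int is_first_move)
{
    switch(parent)
    {
    case NODE_PV:  return is_first_move ? NODE_PV : NODE_CUT;
    case NODE_CUT: return NODE_ALL;
    default:       return NODE_CUT;   /* parent is NODE_ALL */
    }
}

/* IID would then be skipped whenever the predicted type is NODE_ALL. */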
Richard.

Re: (Internal) Iterative Deepening

Postby Tord Romstad » 21 Feb 2006, 11:29

Richard Pijl wrote: If you apply IID unrestricted, it will not do any good, but will burn a lot of nodes instead. Doing IID in nodes that will not provide a cutoff (so where the order of the moves is not important) basically wastes all of the nodes used in the IID search. So one of the main tricks is to only try IID when a cutoff can be expected (or more precisely, where a score > alpha is expected).

Yes. At least it should be avoided when a score > alpha is very unlikely. I use IID in the following two cases:
  1. PV nodes where the remaining depth is more than 3 plies.
  2. Nodes where (static eval) > (alpha - 2 pawns) and the remaining depth is more than 6 plies
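
Written out as a condition sketch (is_pv_node, static_eval, PAWN_VALUE and NO_MOVE are assumed names; the 'no hash move' precondition is the one mentioned earlier in the thread):

Code:
/* The two cases above as a single test; is_pv_node, static_eval, PAWN_VALUE
   and NO_MOVE are assumed names. */
int want_iid = hash_move == NO_MOVE
            && (   (is_pv_node && depth > 3)
                || (static_eval > alpha - 2*PAWN_VALUE && depth > 6));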


Richard Pijl wrote: Second: Keep the cost of IID as low as possible. A shallow depth search is usually sufficient to find the best move, so reducing by only a fixed R is again a waste of nodes. I do depth/2 for depth>4, depth-2 otherwise.

This makes a lot of sense, but is totally wrong - at least in my program. I always reduce by exactly two plies, regardless of the remaining depth. I have tried lots of other schemes, some of them similar to the one you suggest, and they all seem to be clearly inferior to the simple depth-2. As always, YMMV.

Tord

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 21 Feb 2006, 14:00

Well, it seems that IID is a controversial subject, for which the optimal solution depends on the details of the program... :D

Still I want to pose a question to those who want to avoid IID in alpha nodes:

If you do iterative deepening from the root, and there is an alpha node somewhere in your tree, after 3 ply (say)... how do you prevent it from being iteratively deepened? The node will first be touched when the search from the root goes 3 ply deep, with a depth_remaining of 0, and in the next iteration (say 5 ply deep from the root) with a depth_remaining of 2, etcetera.

So the node will automatically go through the deepening sequence 2, 4, 6, ..., and if your hash table does a good job it will remember all the information from the 4-ply search (score, best move...) when it is called for a 6-ply search. So the information from the (N-2)-ply search is always available.

Only when the node suddenly becomes visible at high depth due to a PV change does there seem to be a difference (plus perhaps in the case of hash-table entries that were lost due to overwriting). And this is exactly the situation where you can be least certain that you are indeed dealing with an alpha node, because you know nothing about that node at all.

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 21 Feb 2006, 14:17

Tord Romstad wrote:
  1. PV nodes where the remaining depth is more than 3 plies.
  2. Nodes where (static eval) > (alpha - 2 pawns) and the remaining depth is more than 6 plies


About the second case: when you are more than 2 pawns away from alpha, do you make special provisions for tactics in progress? I can imagine that I am a Rook behind simply because the opponent saw fit to exchange it, and that my next move recaptures the hanging Rook, so that this in effect is a beta node 2 pawns up, rather than 3 down...

Or does your initial move ordering take care of this so efficiently that you will never miss such a recapture anyway? It could be a recapture through a trick, like a family check, or first attacking with a Pawn the (defended) Rook that is pinned by a Bishop against the King:

[diag]3k4/6q1/6r1/6R1/4P2B/3PKP2/2R6 b[/diag]

1. ... , RxR
2. f2f3, any
3. BxR+

This would be hard for the initial move ordering to recognize, chances are that you will be wasting a (6+)-ply search on 2. BxR, QxB+ as an alpha node...

Or does this kind of thing occur so infrequently that it does not matter?

Re: (Internal) Iterative Deepening

Postby Richard Pijl » 21 Feb 2006, 15:35

Unfortunately there is no way to determine for certain whether a node is an all-node or not before searching it. Otherwise we would not need to search it at all and could just return alpha...
Tord's conditions for performing IID are similar to the ones I have, and they are aimed at reducing the IID cost while keeping as much of the benefit as possible. That certain cases where IID could have been useful are skipped is just too bad. The good thing is that many cases where it would not have helped are avoided.
Avoiding IID at all-nodes is not the same as always doing IID at cut-nodes. Sometimes cut-nodes have a very simple ordering.

Now to your example:
If I understand you correctly, you say that basing the decision whether or not to do IID on the static eval can miss possible cut-nodes, and your example is one where you're in the middle of a recapture sequence.
But when you're in the middle of such a recapture sequence, usually the normal move ordering is good enough to select the best move. IID would not have helped here. So not doing IID is in fact a good thing here.

Edited:
I read your example again, and realize the above is not true for this position, and IID could have helped here. Too bad. Then it is just one of the exceptions!
Richard.

Re: (Internal) Iterative Deepening

Postby Tord Romstad » 21 Feb 2006, 15:45

H.G.Muller wrote:
Tord Romstad wrote:
  1. PV nodes where the remaining depth is more than 3 plies.
  2. Nodes where (static eval) > (alpha - 2 pawns) and the remaining depth is more than 6 plies


About the second case: when you are more than 2 pawns away from alpha, do you make special provisions for tactics in progress? I can imagine that I am a Rook behind simply because the opponent saw fit to exchange it, and that my next move recaptures the hanging Rook, so that this in effect is a beta node 2 pawns up, rather than 3 down...

Or does your initial move ordering take care of this so efficiently that you will never miss such a recapture anyway? It could be a recapture through a trick, like a family check, or first attacking with a Pawn the (defended) Rook that is pinned by a Bishop against the King:

[diag]3k4/6q1/6r1/6R1/4P2B/3PKP2/2R6 b[/diag]

1. ... , RxR
2. f2f3, any
3. BxR+

This would be hard for the initial move ordering to recognize, chances are that you will be wasting a (6+)-ply search on 2. BxR, QxB+ as an alpha node...

Or does this kind of thing occur so infrequently that it does not matter?

I think so. For normal recaptures, there is no problem: I order winning recaptures directly after the hash move anyway. Tricks based on forks and pins are relatively rare, and would be a bit messy to detect without a search in any case.

Something else which could be worth trying is to avoid IID if the score is too far above beta. In such cases, it is likely that almost any move would be good enough to cause a beta cutoff, and doing an IID search just to improve the move ordering might be a waste of time. I have never managed to make this work well, but perhaps I just haven't found the right limit.
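
As a sketch, with IID_BETA_MARGIN a hypothetical tuning constant (no value is suggested above):

Code:
/* Skip IID when the static eval is already far above beta: almost any move
   is then likely to cut off anyway.  IID_BETA_MARGIN is hypothetical. */
if(static_eval > beta + IID_BETA_MARGIN)
    try_iid = 0;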

Tord

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 21 Feb 2006, 17:10

Tord Romstad wrote: Something else which could be worth trying is to avoid IID if the score is too far above beta. In such cases, it is likely that almost any move would be good enough to cause a beta cutoff, and doing an IID search just to improve the move ordering might be a waste of time. I have never managed to make this work well, but perhaps I just haven't found the right limit.

I guess this is a situation where you would use null-move pruning. So what you are saying is that the null-move search should be done immediately at the full depth, rather than through deepening. The node behind it would be an all-node, and should also avoid deepening. Beyond that we would try another null move at full depth, as long as our advantage lasts. (Which will probably not be very long if we do only null moves. :wink: )

I guess this makes sense.

So the secret is all in the move ordering. If that is good enough all by itself, IID becomes a liability rather than an asset. I'm probably just lazy, in that I want IID to figure out the move ordering for me... :?

Re: (Internal) Iterative Deepening

Postby Daniel Shawul » 28 Feb 2006, 15:34

Richard Pijl wrote: You don't know, but you can guess.
In PVS pv-nodes, one of the children is a pv node, the rest are expected to be zero-window cutnodes.
Children of cutnodes are expected to be all-nodes.
Using this simple scheme you can reduce many of the unneeded IID tries. Once in a while you'll skip a few useful ones too, but that's just bad luck.
Richard.


I measured the number of IID tries at different node types.
Note that I do IID at all nodes, whether there is a winning move or not.
The total number of IID tries is on average 1% of the total nodes, excluding quiescence nodes. Most of these (about 80%) are at CUT nodes. I also measured the percentage of wasted IIDs (I consider an IID wasted if I find later that there is a winning capture), and it seems that 40% of them are wasted! So I think I should add this criterion. One other observation is that the number of IID tries increases significantly at ALL nodes in positions where IID helps,
usually when there is a continuous fail high/low at the root.
So I am not sure whether or not to do IIDs at ALL nodes.
Daniel

Re: (Internal) Iterative Deepening

Postby H.G.Muller » 28 Feb 2006, 18:34

I am not sure I understand what you say.

How do you define an 'IID try'? Is that a node where you actually had to do IID, rather than relying on a hashed, less deep result for the node and using the retrieved data to immediately start the search at the full requested depth?

About your definition of 'wasted IIDs': do you mean that these are beta nodes where the winning capture caused the cutoff? How is the IID wasted there? I suppose that it helped you to establish at low depth what the winning capture was, so that you could search it at full depth. Or do you mean that you applied static criteria to identify the winning capture, and might as well have searched it directly at full depth? But I suppose you would start the lower-depth pilot search with this identified capture as well, so that it would also have led to an immediate cutoff there with very small effort. This makes the presence of a later-confirmed winning capture a meaningless statistic, since not much can be saved there by doing or not doing the IID. Much more important would be the number of cases where your static criteria identify a 'winning capture' that turns out bad, while the IID comes up with the proper refutation. In those cases it could make a huge difference in effort (at least doubling it) if you had NOT done the IID.

Re: (Internal) Iterative Deepening

Postby Daniel Shawul » 01 Mar 2006, 06:12

H.G.Muller wrote: I am not sure I understand what you say.

How do you define an 'IID try'? Is that a node where you actually had to do IID, rather than relying on a hashed, less deep result for the node and using the retrieved data to immediately start the search at the full requested depth?


If I have a hash-table move I never try IID, whatever the depth is.
I just measured how much it is done at every node type.

H.G.Muller wrote: About your definition of 'wasted IIDs': do you mean that these are beta nodes where the winning capture caused the cutoff? How is the IID wasted there?


It is wasted because the next thing I try after the hash-table move is winning captures. I use SEE for this, and most of the time it predicts the situation correctly. Of course there are cases where a winning capture is not necessarily the best move. Probably at PV nodes it is better not to rely on this, but at CUT & ALL nodes it might be safe not to try the IID. According to my statistics, in 40% of the IIDs tried at CUT nodes the move we get from the IID is a capture.
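
A sketch of that ordering; see(), is_capture() and history_score() are assumed helpers and the priority constants are arbitrary.

Code:
/* Order: hash-table move first, then captures that SEE considers winning,
   then the remaining moves by history.  see(), is_capture() and
   history_score() are assumed helpers; the constants are arbitrary. */
int order_score(MOVE m)
{
    if(m == hash_move)
        return 2000000;               /* always first */
    if(is_capture(m) && see(m) >= 0)
        return 1000000 + see(m);      /* winning captures, best first */
    return history_score(m);          /* everything else */
}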

H.G.Muller wrote: Much more important would be the number of cases where your static criteria identify a 'winning capture' that turns out bad, while the IID comes up with the proper refutation.


There are winning captures, as predicted by SEE, which are really losing, but I don't see why identifying these cases is important (other than for statistics).
