self deepening: an improved implementation of IID

Postby H.G.Muller » 24 Apr 2006, 16:36

I just tried a very simple implementation of Internal Iterative Deepening that seems superior to the usual implementation:
Code:
/* elementary IID */
int search(int requested_depth, int alpha, int beta)
{
    int iter_depth, score, best_score;

    iter_depth = HASH->depth;  /* gives zero on hash miss */
    best_score = HASH->score;  /* -INF on hash miss       */

    while(iter_depth++ < requested_depth)
    {
        best_score = -INF;
        for(ALL_MOVES)
        {
            make(next_move);
            score = -search(iter_depth-1, -beta, -max(alpha,best_score));
            unmake(next_move);
            best_score = max(score, best_score);
            if(best_score >= beta) break; /* beta cutoff */
        }
        STORE_HASH(iter_depth);
    }
    return best_score;
}

The main incentive for doing IID is to discover as quickly as possible (i.e. at low search depth) if a node is a beta node. As soon as a move that causes a beta cutoff is found, it is sorted in front, and each subsequent iteration consists of only searching that single move. If the cutoff survives all the way to the requested depth, we are done.
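
The move ordering that puts the cut move in front is hidden in for(ALL_MOVES). A minimal sketch of what it is assumed to do, with a hypothetical HASH->move field and next_generated_move() helper:
Code:
/* hypothetical expansion of for(ALL_MOVES): the cut move of the      */
/* previous iteration is stored in the hash entry and searched first, */
/* so an iteration in which it still cuts searches only that move     */
move = HASH->move;                        /* 0 on hash miss           */
if(move == 0) move = next_generated_move();
while(move != 0)
{
    make(move);
    score = -search(iter_depth-1, -beta, -max(alpha,best_score));
    unmake(move);
    best_score = max(score, best_score);
    if(best_score >= beta) break;         /* cut move held: one move  */
    move = next_generated_move();         /* (suppressing a duplicate */
}                                         /*  of the hash move omitted) */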

But if the beta cutoff was only an illusion due to insufficient search depth, and disappears at a later iteration, we might not have any idea which was the second-best move: it was either never searched, or it was searched at an early iteration and did not give a cutoff there. The whole philosophy behind IID, of hunting for potential cutoff moves at low depth and only searching them at full depth when we are pretty sure we have one, is subverted when a single move that proves a bust is allowed to effectively skip the iterations that might have found the true cutoff move.

What we really would like is to resume the deepening at the point where we first started skipping moves because of the (false) cutoff. This can easily be achieved by moving the deepening of a cutoff move to the daughter node, so that the node itself remembers up to where it got with cutoff-free iterations (iterations where really all moves were considered). I implemented this by passing not only a 'requested_depth' to the search routine, the depth that it should minimally search, but also a second depth parameter that tells it how much deeper we would like it to search if the move leading to it causes a fail high (i.e. if the replies fail low).
Code:
/* self-deepening IID */
int search(int requested_depth, int ultimate_depth, int alpha, int beta)
{
    int iter_depth, score, best_score;

    iter_depth = HASH->depth;  /* gives zero on hash miss */
    best_score = HASH->score;  /* -INF on hash miss       */

    while(iter_depth++ < requested_depth ||
          (iter_depth <= ultimate_depth && best_score <= alpha))
    {
        best_score = -INF;
        for(ALL_MOVES)
        {
            make(next_move);
            score = -search(iter_depth-1, requested_depth-1, -beta, -max(alpha,best_score));
            unmake(next_move);
            best_score = max(score, best_score);
            if(best_score >= beta) break; /* beta cutoff */
        }
        STORE_HASH(iter_depth);
    }
    return best_score;
}

As soon as the IID encounters a move that fails high, the search of that move deepens itself to the full depth, to be done with this node. After that, the IID in the parent node is just a formality: all subsequent iterations simply get the move's score from the value that is now in the hash table, and don't have to search any other moves.

But if the self-deepening does not reach the full requested depth, search() will not return a fail-high score. The only way a fail-high score can be returned is when the fail high is valid for all subsequent iterations, and we are done with this node. Apart from such a final cutoff, no cutoffs can occur: we are either done with the node at some point, or we continue doing deepening iterations that skip no move at any depth, to the very end.
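
In code, this 'formality' is just a hash cutoff at the top of search(); a minimal sketch, assuming the entry layout used above and omitting the usual bound-type checks:
Code:
/* after the cut move self-deepened to the full depth, every further   */
/* parent iteration finds a sufficient-depth entry and returns at once */
if(HASH->depth >= requested_depth && HASH->score >= beta)
    return HASH->score;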

If a false cutoff is encountered, the move that caused it is still a pretty good move at the depth where it first stops failing high. It might still be the best move in the iterations to come, and it provides an automatic aspiration for the search of the other moves by being 'searched' first (i.e. simply retrieved from the hash table).

This implementation of IID seems superior to the regular one: in 'self play' of micro-Max, where the only difference between the versions was this way of doing IID, the new version scored 57.5-42.5 (57.5%) at 15 sec per game for each side (it used 3.6% more time, though), and 26.5-23.5 (53%) at 60 sec per game (where it used 2.8% less time). I am now testing at longer time controls.

Re: self deepening: an improved implementation of IID

Postby Gerd Isenberg » 24 Apr 2006, 19:33

While it sounds interesting, I don't get it exactly. I guess the +search in the classical IID is a typo? And what about the initial requested_depth and ultimate_depth parameters in an ID framework?

I am also a bit confused by the use of a post-increment in such a way, even if the compiler might be happy with it ;-)
Code:
while(iter_depth++ < requested_depth || iter_depth <= ...


Thanks,
Gerd

Re: self deepening: an improved implementation of IID

Postby H.G.Muller » 24 Apr 2006, 20:56

Sorry, the +search was indeed a typo. I corrected it now.

As for the increment: it might indeed have been clearer if I had written it as a for loop with a separate increment:
Code:
for(... ; iter_depth<=requested_depth ||
          (iter_depth<=ultimate_depth && best_score<=alpha) ; iter_depth++)

The idea is that the search depth is at least stepped up to requested_depth, but then, as long as the node keeps failing low, stepped up further until ultimate_depth is reached.

In the root you would need a different condition in the deepening loop, because there is (probably) no pre-set maximum depth: you want to deepen indefinitely until time is up (or a stack overflow occurs). This could simply be fixed by adding to the continuation condition:
Code:
|| (node == ROOT && clock() < max_time)

The values of requested_depth and ultimate_depth could then simply be set to 1 (so that you are guaranteed to do a 1-ply search and get a move, even if that 1-ply search goes overtime).
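
Put together, a sketch of the root invocation under these assumptions (the node == ROOT flag and max_time above are hypothetical names):
Code:
/* depth parameters of 1 guarantee a completed 1-ply search; after */
/* that the (node == ROOT && clock() < max_time) clause keeps the  */
/* internal deepening loop running until time is up                */
best_score = search(1, 1, -INF, +INF);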

In the null-move search, I call search with requested_depth=1 and ultimate_depth=n-R-1. This aborts the deepening of the null-move search as soon as the null move fails low. This is in fact what gave me the idea to try the same for other moves. I am still not sure if it is a good idea for the null move, though. In engines where the score oscillates depending on which side has the last move, it is conceivable that the ordinary search ends with my own ply, and with R=2 the associated null-move search then also ends with my ply. But one ply less deep it ends with the opponent's ply, and thus gives a score that is significantly worse and does not fail high, where the full depth would have failed high. This could be solved by re-trying the null move in every iteration, even though it failed low in the previous iteration (and another move from the real search was best in that iteration). For now I try the null move just once, before the deepening loop.
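
A sketch of that null-move call as described, assuming a make_null()/unmake_null() pair and n for the remaining depth of the node:
Code:
/* null move tried once, before the deepening loop, with a null      */
/* window; requested_depth=1 and ultimate_depth=n-R-1 make the reply */
/* search stop deepening as soon as the null move stops failing high */
make_null();
score = -search(1, n-R-1, -beta, -beta+1);
unmake_null();
if(score >= beta) return score;           /* null-move cutoff        */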

The idea was that in a situation where eval>beta there might be a trivial threat (say, P attacks Q) that already becomes apparent at a search depth of 1 ply. In that case it seems an obvious waste of nodes to continue deepening the null-move search. But if you also want to prune on a positional advantage, it is more tricky: with a null window you have no idea by how much the null move fails low. Best of course would be if the scoring were not really sensitive to who makes the last move, but fiddling with that really affects the move that is chosen. An alternative would be to apply a margin, and not break off the deepening on an iteration that ends with your own move if you are only above alpha by a little bit, anticipating that the score will go below alpha again on the next iteration, where the opponent has the last move. You could even make this dependent on who will have the last move at the ultimate depth. Or you could of course deepen in steps of 2 ply, which for a strongly oscillating score might be preferable anyway.
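
The margin idea could, for instance, be grafted onto the deepening condition like this (MARGIN and LAST_MOVE_IS_OURS are hypothetical):
Code:
/* keep deepening while a fail low on an iteration that ends with our */
/* own move is within MARGIN above alpha, expecting the score to dip  */
/* below alpha again when the opponent has the last move              */
while(iter_depth++ < requested_depth ||
      (iter_depth <= ultimate_depth &&
       best_score <= alpha + (LAST_MOVE_IS_OURS(iter_depth) ? MARGIN : 0)))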

I now tested the scheme by timing searches of various depths on the same position, with self-deepening for any move or just for the null move. There is a clear speedup, which seems to increase with depth: without self-deepening it takes 14% longer at 9 ply, 17% at 10 ply, 25% at 11 ply and 45% at 12 ply. This is only for one test position, so I don't know how statistically significant it is, but it is certainly encouraging.

Re: self deepening: an improved implementation of IID

Postby H.G.Muller » 24 Apr 2006, 21:30

For fairness I should perhaps add that all this was tested on micro-Max, which for move ordering relies totally on iterative deepening; otherwise the moves would simply come in the order the board scan delivers them. So this tends to exaggerate the impact of better iterative deepening: if the so-far best move dies, micro-Max has no idea what to try as second-best move, and could start a deep search of a totally stupid move in the presence of strong cut-move candidates!

On the other hand, programs that do en bloc move generation benefit more from moving the deepening control to the daughter node than micro-Max does: if the move that failed high returned all the way to the parent, to be immediately searched again at a larger depth, the move generator in the daughter would have to be re-run every time. In the proposed scheme it has to run only once.
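
A sketch of that difference, assuming a hypothetical en bloc generate_moves() that fills a local list:
Code:
/* the daughter generates its move list once, and every deepening    */
/* iteration reuses it; if the fail-high returned to the parent for  */
/* an immediate re-search, this generation would be redone per visit */
n_moves = generate_moves(move_list);
while(iter_depth++ < requested_depth ||
      (iter_depth <= ultimate_depth && best_score <= alpha))
{
    best_score = -INF;
    for(i = 0; i < n_moves; i++)
    {
        /* make / search / unmake as before */
    }
    STORE_HASH(iter_depth);
}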

Re: self deepening: an improved implementation of IID

Postby Michael Sherwin » 26 Apr 2006, 11:26

deleted

Re: self deepening: an improved implementation of IID

Postby Michael Sherwin » 26 Apr 2006, 14:13

deleted

Re: self deepening: an improved implementation of IID

Postby H.G.Muller » 26 Apr 2006, 22:36

I am not sure that there is such a big difference. If a hash move is available, but the (stored) depth at which this move was found is not sufficient to satisfy the currently requested depth, I use the hash result as if it were the previous iteration. This means I start the first iteration with the hash move, at the next-higher depth, and I don't necessarily have to run the move generator before doing so. This is just hidden in the unspecified for(ALL_MOVES).

Other programs might not consider it necessary to deepen in small steps in such a case, but try the full depth immediately. I consider this risky, because especially on a hash hit you have no idea whatsoever about the ordering of the other moves. Even if the hash result was calculated from a 5-ply search, and you now need 9 ply, so that at 5 ply you could have had a quite reliable rejection of poor moves, none of that information was transferred through the hash. So basically you would be thrown back to the static move ordering you would employ at the first ply.

In fact the entire move ordering is hidden in the hash, in the entries for the daughter nodes. To recover it, you might (on a hash hit with stored remaining depth d) start with a search at depth d, rather than d+1. Because this search was already done before (filling the hash entries), it does basically nothing more than peek in the hash, retrieving the complete move ordering. You would of course still start with the hash move; if it remains a cut move at the new requested depth n (>d), the search of it at any depth, be it d or d+1, will self-deepen to n, and you are done with the node. Only when it is not a cut move will you have to consider the other moves.
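
In the self-deepening scheme above, that amounts to starting the deepening loop one iteration earlier; a sketch, assuming HASH->depth holds the stored draft d:
Code:
/* start the IID at the stored depth d rather than d+1: the depth-d  */
/* iteration replays the daughters' hash entries almost for free and */
/* thereby recovers the complete move ordering of the earlier search */
iter_depth = HASH->depth;                 /* zero on hash miss       */
if(iter_depth > 0) iter_depth--;          /* redo last stored iteration */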

Re: self deepening: an improved implementation of IID

Postby diepeveen » 13 May 2006, 03:12

Hi Harm-Geert,

In a private email I already explained that no matter how I test, IID simply doesn't make Diep finish its plies quicker when testing single-CPU.

Another big problem of IID is when you search in parallel.

Suppose I'm in a PV node and I have no PV move.
If I apply IID there, then all my other CPUs are sitting idle.

Very bad.

Very BIG performance penalty with 4 CPUs.

My scaling goes down from 3.92 to quite a bit less.

The real problem is the condition that you go up in ply depth, via iter_depth++, to the wished depth.

All that time it is very difficult to split to other CPUs, as near the leaves it's very dangerous to split.

So I assume you have no plans to go parallel yet?

Vincent

Re: self deepening: an improved implementation of IID

Postby bob » 13 May 2006, 17:47

diepeveen wrote: In a private email I already explained that no matter how I test, IID simply doesn't make Diep finish its plies quicker when testing single-CPU.

Another big problem of IID is when you search in parallel. Suppose I'm in a PV node and have no PV move. If I apply IID there, then all my other CPUs are sitting idle. Very BIG performance penalty with 4 CPUs. My scaling goes down from 3.92 to quite a bit less.


All that shows is that you don't know how to design parallel software. I use IID. I don't have _my_ processors sitting idle when I do that either. I quite happily split the work up _below_ the IID node and have all processors work on the IID search together.

I can't figure out why you make these asinine statements that are so obviously wrong.

This "if I can't make it work then it can't possibly work for anybody" attitude is a poor way to develop anything at all.

Re: self deepening: an improved implementation of IID

Postby H.G.Muller » 13 May 2006, 19:57

Indeed I have no intention of buying a multi-core machine, so I have not studied or thought much about how to parallelize the search. If anyone asked me right now how I would do it with a large number of CPUs, I would probably say: use a PVS search, and when a thread has searched the first move in a node and it is not a cut-move, fork off threads for every other move, to be searched with a null window. Each thread would start by probing the hash, and lock the entry it creates until it completes (or aborts) the search and stores the result there. If a thread did a hash probe on a locked entry, it would put itself to sleep until the thread working on that entry unlocked it (and then it would wake up to use the result and continue the search). If one of the moves searched by the child threads failed high, the parent would immediately kill all its children and start a re-search of the fail-high move as if it were the first move (i.e. wait for the search result, and fork off new child threads for the other moves only after that).

The available CPUs would be put to work on non-sleeping threads by a general scheduler. This seems a quite general and robust way to distribute the workload over the CPUs, one that is not very sensitive to the details of the search algorithm.

But since I am making this up just from the top of my head, there might be lots of problems with such an implementation that I do not foresee. (Obviously, I would need some arbitration scheme to avoid deadlocks, which should somehow be related to repetition draws, since the positions then occur in each other's search trees.) Multiprocessing does not appeal to me very much anyway: it would give me little satisfaction to win just because I can afford faster hardware. What interests me is to prove that a search guided by chess knowledge can find the same PV by searching 10 or even 100 times fewer nodes. Basically, beat Hydra with a computer in a matchbox, that would be fun... 8-)

Re: self deepening: an improved implementation of IID

Postby diepeveen » 13 May 2006, 19:58

bob wrote: All that shows is that you don't know how to design parallel software. I use IID. I don't have _my_ processors sitting idle when I do that either. I quite happily split the work up _below_ the IID node and have all processors work on the IID search together.


Diep needs fewer nodes per ply than Crafty (when not turning history pruning on), despite the fact that you don't do checks in qsearch.

Diep has a better parallel speedup.

IID does not work for Diep at all. It never did.

I retested regularly.

It does work for Napoleon, my draughts program, but I must admit that Napoleon is a very inefficient searcher, just like Crafty.

Note that your fliprate is a lot higher than mine, which might explain it.

Of course that is it. Your fliprate.

Crafty is so so so so unsure of itself that a position has something like a 5% chance to flip from <= alpha to >= beta.

In Diep this is < 1%.

Your search is unstable.

Vincent

Re: self deepening: an improved implementation of IID

Postby Gerd Isenberg » 13 May 2006, 21:26

diepeveen wrote: IID does not work for Diep at all. It never did.

Maybe your non-recursive search with double nullmove accidentally already behaves like Harm-Geert's self-deepening IID? ;-)

diepeveen wrote: Note that your fliprate is a lot higher than mine, which might explain it. Of course that is it. Your fliprate.

I like your fliprate idea - changing moves at the root in positions where the program has no clue what to do and the search becomes inefficient. It probably over-evaluates the current root position - but there is no progress, or the opponent can equalize conveniently with a lot of safe moves.

But I think this is no special feature of Crafty; it is inherent in all chess programs. The good ones are more often right with their eval and search heuristics - so in general, if you play a more knowledgeable opponent, your fliprate may increase, because it is more likely that you have no clue, and eval noise together with extensions/reductions may cause "random" changes at the root.

Re: self deepening: an improved implementation of IID

Postby diepeveen » 13 May 2006, 21:57

Gerd Isenberg wrote: Maybe your non-recursive search with double nullmove accidentally already behaves like Harm-Geert's self-deepening IID? ;-)

Actually I tried to save the best move from such searches, in case I didn't already have a best move.

Testing on 213 positions from the world champs 2004, the result was that this didn't help move ordering. The net result was worse.

The best way to do search is simply to pick a good move at once, and try to search that one first.

Gerd Isenberg wrote: I like your fliprate idea - changing moves at the root in positions where the program has no clue what to do and the search becomes inefficient. [...]

If I posted WHY my fliprate is better than that of Crafty, then Bob would search 3 ply deeper within a week.

So I'll keep my mouth shut.

Vincent

Re: self deepening: an improved implementation of IID

Postby diepeveen » 13 May 2006, 22:01

Gerd Isenberg wrote: Maybe your non-recursive search with double nullmove accidentally already behaves like Harm-Geert's self-deepening IID? ;-) [...]

Gerd, if IID works so well for you, you had better ask yourself why you are using an iterative search at all.

Why not simply select a 15-ply search depth directly, and let IID get you that 15-ply search?

What I mean is: I'm using iterative search BECAUSE it works as a kind of IID.

If you need a lot of IID, that obviously means you search nodes which you didn't search in the previous iteration.

That means, in short, that the assumption of iterative deepening is incorrect.

So perhaps try something else?

Vincent

Re: self deepening: an improved implementation of IID

Postby H.G.Muller » 13 May 2006, 22:34

An alpha-beta tree also gets larger when you don't start with the best move in the root. It means you have more all-nodes at ply 1. IID in the deeper nodes does not protect you from that, and without ID in the root you would be committed to the first move you picked, at full depth. Besides, you might have no idea what that depth should be; for that latter reason alone, what else could you do?

I agree that in a stable search with ID from the root, IID is basically a no-op: it is never called for in any node. But that means it also doesn't hurt you. So it only comes into action when there is a switch in best line, but that is exactly the situation where it might save you big.

Re: self deepening: an improved implementation of IID

Postby bob » 14 May 2006, 02:48

diepeveen wrote: If I posted WHY my fliprate is better than that of Crafty, then Bob would search 3 ply deeper within a week.

So I'll keep my mouth shut.


First, I don't believe there is anything you could say that would give me 3 extra plies, so this entire thread of discussion is crap. But in spite of that, it is nice to see you show your true lack of integrity with such statements. Let me remind you: do you remember when you _started_ a parallel search for your program? Do you remember how many questions you sent me via private email? Do you remember that I answered each and every one of them clearly, carefully and accurately? And this is the way you repay such responses?

Nice touch, Vincent. True class. One of a kind.

BTW I'd be happy to post all the old emails if you want to deny they exist. Our email server has been saving every email I sent for 15+ years now...

But that was mainly just to show your lack of character. I don't need any advice from you; we seem to have been doing well enough against your program at recent WCCCs, and I suspect we will continue to do just fine with this "inefficient parallel search, inefficient alpha/beta search, primitive evaluation, amateur-booked piece of trash" program...

Re: self deepening: an improved implementation of IID

Postby bob » 14 May 2006, 02:50

H.G.Muller wrote: An alpha-beta tree also gets larger when you don't start with the best move in the root. [...] I agree that in a stable search with ID from the root, IID is basically a no-op: it is never called for in any node. But that means it also doesn't hurt you. So it only comes into action when there is a switch in best line, but that is exactly the situation where it might save you big.


Not much point in arguing with him. I hardly ever see an IID search triggered - mainly after a fail high at the root following a really big search on the first move, so that the transposition table has no move to suggest. And there you are correct: it can be a _big_ saver. For most positions it is not even used and costs zero.

Re: self deepening: an improved implementation of IID

Postby Gerd Isenberg » 14 May 2006, 10:46

diepeveen wrote: Gerd, if IID works so well for you, you had better ask yourself why you are using an iterative search at all. Why not simply select a 15-ply search depth directly, and let IID get you that 15-ply search?

I think Harm-Geert's IID framework is indeed intended to be used without external iterative deepening. Otoh, as already mentioned, IID does not really hurt if you have something from the hash with sufficient depth stored. I used to do IID with something like depth/2 if there was no hash hit in non-null-window nodes, but found that always doing IID, with a depth increment of two, seems to work better for me...
Code:
stored_depth = 0;
probe(..., &stored_depth);
for (depth = stored_depth; depth < requested_depth; depth+=2) {
  ... search(depth, ...
}
... search(requested_depth, ...

diepeveen wrote: If you need a lot of IID, that obviously means you search nodes which you didn't search in the previous iteration. That means, in short, that the assumption of iterative deepening is incorrect.

Not exactly. Sometimes during the next iteration your best move so far turns out to become worse, and I think in such situations IID helps to avoid huge tree explosions.

diepeveen wrote: So perhaps try something else?

Yes, I always appreciate tips to improve my search - what do you suggest?

Re: self deepening: an improved implementation of IID

Postby diepeveen » 14 May 2006, 19:07

Gerd Isenberg wrote: Yes, I always appreciate tips to improve my search - what do you suggest?


I suggest comparing node counts with other programs that do not forward prune. You can simply ask programmers for example outputs of positions and compare.

I assume of course that equal things get compared.

Not doing checks in qsearch is worth +2 ply, of course.

I'm doing them in Diep, though, and I believe Zappa is doing them too.

Fruit you can easily compare with too; just turn off history pruning.

Oh wait, you don't even need to turn that off; fritz9 is already doing that automatically, if I remember a posting about that correctly. :)

Vincent

Re: self deepening: an improved implementation of IID

Postby Michael Sherwin » 15 May 2006, 10:18

deleted
