Do you evaluate internal nodes?


Do you evaluate internal nodes?

Yes: 19 (39%)
No: 30 (61%)

Total votes: 49

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 23 Jan 2006, 10:19

Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 4 ply*), then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.
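In other words, for a nominal depth of 6 and R=1: the first reading leaves 6-1-1 = 4 plies after the null move (the null move itself consumes a ply), while the second reading leaves 6-1 = 5 plies after it.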

*) unfortunately I made a typo here initially, I meant to say 4 but wrote 5. :( This would make the two cases I distinguish the same! It seems the following posters were not confused by this, however.
Last edited by H.G.Muller on 23 Jan 2006, 11:59, edited 1 time in total.

Re: Do you evaluate internal nodes?

Postby Uri Blass » 23 Jan 2006, 11:45

H.G.Muller wrote:Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 5 ply, then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.


R=1 means the first case and I believe that R=3 is more common.

If the remaining depth before the null move is n plies, then the remaining depth after the null move is n-R-1 plies.
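In pseudocode the convention amounts to something like this (ok_to_do_nullmove(), do_nullmove(), undo_nullmove() and search() are just assumed engine primitives, so treat it as a sketch):
Code:
/* Null-move pruning with the convention above: remaining depth n here,
   n-R-1 in the subtree behind the null move. */
if(ok_to_do_nullmove(board) && depth >= R+1) {
  do_nullmove(board);
  value = -search(board, -beta, -(beta-1), depth-(R+1));
  undo_nullmove(board);
  if(value >= beta) return value;   /* doing nothing already refutes this node */
}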

Uri

Re: Do you evaluate internal nodes?

Postby mridul » 23 Jan 2006, 11:50

Uri Blass wrote:
H.G.Muller wrote:Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 5 ply, then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.


R=1 means the first case and I believe that R=3 is more common.

If the remaining depth before the null move is n plies, then the remaining depth after the null move is n-R-1 plies.

Uri


R=1 would still mean that you are reducing by a ply.
R=3 is more common? I thought it was R=2 (or some adaptive version of 2/3).
My impression was that R=3 was tried only by those who
1) were more adventurous,
2) had checks in qsearch,
3) did not have much other pruning.

I could be wrong on all three :)
Just 'cos of 1, I use R=3 8-)

More seriously, R=3 prunes a hell of a lot, so you should have sufficient safeguards to ensure that it is not making you tactically blind.

Mridul

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 23 Jan 2006, 12:07

Note that there was a typo in my previous post in a rather essential place. Now the definition is unambiguous, though.

R=3 seems indeed a lot, but one should also take into account whose plies those are. If the 3 plies you reduce are two of your own, and only one of the opponent's, it is a lot safer than the other way around.
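(For instance, under the n-R-1 convention with R=3: at a nominal depth of 7 the dropped plies 5, 6 and 7 are yours, the opponent's and yours again, while at a nominal depth of 6 the dropped plies 4, 5 and 6 are the opponent's, yours and the opponent's.)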

The same of course holds for all odd R, specifically R=1. If it is your own ply, it seems completely safe. (If the ply you want to prune is a forced one, such as a recapture, QS presumably includes it anyway.) But if it is your opponent's ply, you might miss some tactical shots.

Re: Do you evaluate internal nodes?

Postby Uri Blass » 23 Jan 2006, 12:23

mridul wrote:
Uri Blass wrote:
H.G.Muller wrote:Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 5 ply, then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.


R=1 means the first case and I believe that R=3 is more common.

If the remaining depth before the null move is n plies, then the remaining depth after the null move is n-R-1 plies.

Uri


R=1 would still mean that you are reducing by a ply.
R=3 is more common? I thought it was R=2 (or some adaptive version of 2/3).
My impression was that R=3 was tried only by those who
1) were more adventurous,
2) had checks in qsearch,
3) did not have much other pruning.

I could be wrong on all three :)
Just 'cos of 1, I use R=3 8-)

More seriously, R=3 prunes a hell of a lot, so you should have sufficient safeguards to ensure that it is not making you tactically blind.

Mridul


Fruit uses R=3 and checks in the qsearch.
Glaurung uses R=3, except in the endgame, where R=2 is used.

I think that the strong programs simply do checks in the qsearch and use R=3.

You say that R=3 is only for people who

1) are more adventurous,
2) have checks in qsearch,
3) do not have much other pruning.

I have only 2.

I used R=3 simply because I found that it improved results in test suites after having checks in the qsearch.

Later I added aggressive pruning, which helps because R=3 alone is not enough to avoid being outsearched by other programs.

Note that in the endgame I use verified null move pruning with R=3, and in pawn endgames I do not use null move pruning.

Uri

Re: Do you evaluate internal nodes?

Postby Alessandro Scotti » 23 Jan 2006, 13:04

Uri Blass wrote:Note that in the endgame I use verified null move pruning with R=3, and in pawn endgames I do not use null move pruning.


Hi Uri,
do you use verified null move like in the Tabibi paper or null move followed by a verification search?

Re: Do you evaluate internal nodes?

Postby Uri Blass » 23 Jan 2006, 17:52

Alessandro Scotti wrote:
Uri Blass wrote:Note that in the endgame I use verified null move pruning with R=3, and in pawn endgames I do not use null move pruning.


Hi Uri,
do you use verified null move like in the Tabibi paper or null move followed by a verification search?


I use it in a similar way to the Tabibi paper. The only difference is that I do not do a re-search in case of an unexpected result: in the rare cases when there is no threat (val>=beta) but the search to a depth reduced by 1 suggests a zugzwang, I do not do a re-search.
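Roughly, and glossing over the exact flag handling of the paper, the scheme described here has this shape (verify is the flag passed down the search, search_moves() is only a placeholder for the node's normal move loop; a sketch, not actual engine code):
Code:
/* Verified null-move pruning with R=3, with the variation described
   above: when the verification at depth-1 does not confirm the
   null-move cutoff (a zugzwang is suspected), no re-search at the
   original depth is done. */
if(ok_to_do_nullmove(board)) {
  do_nullmove(board);
  value = -search(board, -beta, -(beta-1), depth-(3+1));
  undo_nullmove(board);
  if(value >= beta) {
    if(!verify) return value;        /* cutoff already verified higher up */
    /* verify the cutoff one ply shallower, with verification switched off */
    value = search_moves(board, beta-1, beta, depth-1, false);
    return value;   /* >= beta: cutoff confirmed; < beta: suspected zugzwang,
                       and per the post above no re-search is done */
  }
}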

Uri

Re: Do you evaluate internal nodes?

Postby László Gáspár » 23 Jan 2006, 21:53

H.G.Muller wrote:Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 4 ply*), then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.

*) unfortunately I made a typo here initially, I meant to say 4 but wrote 5. :( This would make the two cases I distinguish the same! It seems the following posters were not confused by this, however.


Hi H.G.,

In my interpretation R=1 is your second variation. I think you are right if you use any move instead of the null move when R=1. With recursive R=1 null move I thought I would get the usual (R>=2) null-move advantages without sacrificing correctness, since the depth reductions accumulate. Now it seems to me that it gives just plain alpha-beta, and I can only save a few move generations. :? This is not where I would like to get to...

Regards,
László

Re: Do you evaluate internal nodes?

Postby Uri Blass » 23 Jan 2006, 23:42

László Gáspár wrote:
H.G.Muller wrote:Before I can answer that one, I have to make sure how you define R. In the description of null-move pruning I saw, I was a bit confused by its exact meaning: do you count the null-move as 1 ply or doesn't it count in the depth at all? If R=1 means that, in a node where you would normally search 6 ply, you would search the null-move branch only 5 ply deep, i.e. the initial null-move followed by 4 ply*), then I would say that it hurts in positions where the 6th ply (where the opponent moves!) holds an unpleasant surprise. If it means you search 6-1=5 plies after the null move, then I would agree: there is no pain, but I also see little gain. Any other move you would have selected to try first would have been searched equally deep, and if you had not picked a really stupid one, it would have caused the cut-off as well.

*) unfortunately I made a typo here initially, I meant to say 4 but wrote 5. :( This would make the two cases I distinguish the same! It seems the following posters were not confused by this, however.


Hi H.G.,

In my interpretation R=1 is your second variation. I think you are right if you use any move instead of the null move when R=1. With recursive R=1 null move I thought I would get the usual (R>=2) null-move advantages without sacrificing correctness, since the depth reductions accumulate. Now it seems to me that it gives just plain alpha-beta, and I can only save a few move generations. :? This is not where I would like to get to...

Regards,
László


I do not understand why you thought that you would get advantages from the null move when the depth reductions accumulate.

It is obvious that in this case you only extend the tree by the null move and prune nothing.

The whole idea of the null move is to search smaller trees, and if the depth reductions accumulate you do not get a smaller tree; you probably get a bigger tree, simply because you assume one more move for the players (the null move).

Note that in that case searching null first is a very bad idea because doing nothing is a bad candidate to be the best move.

In cases where the null move is not good enough to generate a cutoff but another move is good enough, you search both the null move and the other move to the same depth, whereas without your null extension (it is obviously not pruning) you would search fewer moves to get a cutoff.

Uri

Re: Do you evaluate internal nodes?

Postby László Gáspár » 24 Jan 2006, 15:36

Hi Uri,

Thank you for the lesson. I missed some points, but now it's clear. (In fact it was already clear after HG's post.)

Regards,
László

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 24 Jan 2006, 16:44

Uri Blass wrote:Note that in that case searching null first is a very bad idea because doing nothing is a bad candidate to be the best move.

Well, if you do it only when eval>beta, this objection might not be valid. Because in that case there might be a good chance that doing nothing, although it will almost certainly not be the best, will still be good enough to cause the cutoff we are aiming for.

In the other thread about tree size we both had examples where tree size benefited from searching a non-best move first, as long as it is good enough to fail high, and is expected to have a small tree. Picking the null move as a first try when there is no need to improve on the current eval, as opposed to a real move, gives you a guarantee that you will not make your position worse. A real move might leave other pieces undefended, go to an unsafe square, allow passage of enemy pieces over the square it evacuated, etc. So even if eval>beta you should exercise some care in picking the move that you want to produce the fail high with. This might require testing of quite a lot of conditions, which you don't need by simply picking the null move.

It would be nice if you could just pick the best move from an earlier iteration as one that most likely will not lower the current eval, but remember that this is a situation where the first move you try should cause the cut-off. So any choice that you make is likely to be 'frozen-in' for the rest of the deepening process (if it was not a bad one), and you must effectively make the choice at the earliest possible time, without the benefit of earlier iterations. If in that case you pick the null move, and it causes the fail high, none of the real moves will be tried, and you would have to make this choice between null-move and a real move that might worsen your position at later iterations as well.

In addition, if the null-move does not fail high, it might give you a clue as to how to sort the rest of your moves. Because apparently you are dealing with a threat in that case, like the capture of a piece that came under attack in the previous ply, so withdrawing that piece to a safe square suddenly becomes a very attractive idea, where that move would have lacked any appeal in the absence of the threat. Of course you could also try to extract such information from unexpectedly poor performance of a real move, but this is much more difficult: if a real move turns out bad you might have provoked the punishment by the move itself (e.g. moving a pinned piece), not necessarily that there was a threat that would refute any other move except a few specific defensive ones.

Re: Do you evaluate internal nodes?

Postby Uri Blass » 24 Jan 2006, 17:07

H.G.Muller wrote:
Uri Blass wrote:Note that in that case searching null first is a very bad idea because doing nothing is a bad candidate to be the best move.

Well, if you do it only when eval>beta, this objection might not be valid. Because in that case there might be a good chance that doing nothing, although it will almost certainly not be the best, will still be good enough to cause the cutoff we are aiming for.

In the other thread about tree size we both had examples where tree size benefited from searching a non-best move first, as long as it is good enough to fail high, and is expected to have a small tree. Picking the null move as a first try when there is no need to improve on the current eval, as opposed to a real move, gives you a guarantee that you will not make your position worse. A real move might leave other pieces undefended, go to an unsafe square, allow passage of enemy pieces over the square it evacuated, etc. So even if eval>beta you should exercise some care in picking the move that you want to produce the fail high with. This might require testing of quite a lot of conditions, which you don't need by simply picking the null move.

It would be nice if you could just pick the best move from an earlier iteration as one that most likely will not lower the current eval, but remember that this is a situation where the first move you try should cause the cut-off. So any choice that you make is likely to be 'frozen-in' for the rest of the deepening process (if it was not a bad one), and you must effectively make the choice at the earliest possible time, without the benefit of earlier iterations. If in that case you pick the null move, and it causes the fail high, none of the real moves will be tried, and you would have to make this choice between null-move and a real move that might worsen your position at later iterations as well.

In addition, if the null-move does not fail high, it might give you a clue as to how to sort the rest of your moves. Because apparently you are dealing with a threat in that case, like the capture of a piece that came under attack in the previous ply, so withdrawing that piece to a safe square suddenly becomes a very attractive idea, where that move would have lacked any appeal in the absence of the threat. Of course you could also try to extract such information from unexpectedly poor performance of a real move, but this is much more difficult: if a real move turns out bad you might have provoked the punishment by the move itself (e.g. moving a pinned piece), not necessarily that there was a threat that would refute any other move except a few specific defensive ones.


The problem is that even if the evaluation is bigger than beta, the null move may allow the opponent a good capture, when escaping with the piece that is under threat would have been better.

If you try the null move first, you only waste nodes in that case.
If you have a good order of moves, you can hope that usually the null move is not going to be worse than the first move.

I do not have a good enough order of moves to try escaping from the threat first, but I think that using the search after the null move to detect a threat is not a good idea, and it is better to have a function that detects threats without making moves and undoing them.

The main reason that I do not have the relevant function is that I was too lazy to write the relevant code and check for bugs, not that I think that detecting that an important piece is under threat is too expensive.

Note that detecting that a piece is under threat is not enough; I also need a function to help me get a better order of moves based on that information.

Uri

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 24 Jan 2006, 18:50

Sure, if you know that a piece is under threat it makes no sense to do a null-move, unless you are prepared to abandon that piece (i.e. if eval is so much larger than beta that even the loss of that piece would not prevent the fail high). I am assuming that the eval is such that it is not smart enough to anticipate such losses in a static manner.

The problem is: can you make a static analysis that recognizes such threats faster than the search? When I experimented with this, I had very poor results. My static judgement of the situation would completely fail in a large fraction of the cases, because some of the pieces it counted as defenders were in fact pinned, could be exchanged, or were burdened with defending other pieces (so that their use in recapture for this piece would lead to the loss of another). So I kind of gave up on that idea, and now I am trying to decide such matters (if a piece is sufficiently defended or under threat) by search only. The search can still be guided by static clues, of course.

So the very obvious threats, like P attacks Q, are very easy to detect, but the search tree to detect them (in a QS-like manner) would also be tiny. To recognize a threat with 3 attackers amongst which a doubled Rook, and 3 defenders with one tied up in more urgent duties, is much more difficult, statically as well as through search.

The effect the recognition of a threat should have on the move ordering is the easy part: moving away the threatened piece or blocking the attack should be treated as if they capture a piece of the threatened value.
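A sketch of that ordering rule (threat_square, threat_value, move_from() and blocks_attack() are made-up names for illustration, not taken from any engine discussed here):
Code:
/* Give moves that deal with a detected threat the same ordering weight
   a capture of a piece of that value would get. */
int threat_evasion_bonus(const Board *board, Move m,
                         int threat_square, int threat_value) {
  if(move_from(m) == threat_square)            /* withdraw the attacked piece  */
    return threat_value;
  if(blocks_attack(board, m, threat_square))   /* interpose on the attack line */
    return threat_value;
  return 0;                                    /* move ignores the threat      */
}
The bonus would just be added to whatever score the move already gets from the normal ordering, so an evasion competes directly with captures of the same value.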

Re: Do you evaluate internal nodes?

Postby Uri Blass » 25 Jan 2006, 08:18

I think that in my case I can evaluate faster than searching, and the main problem is writing the code without bugs and using it without bugs for a better order of moves.

Searching is expensive for me and I try to avoid it when possible.

Reasons:

1) I do not have an incremental move generator, so I usually generate all moves before making a move in the search (I also have a special generator for captures).

2) My make-move includes updating my pin array.
It means that I can easily see if a piece is pinned to the king, and I use that later in the search.

I think that a good and expensive evaluation that includes detecting tactical tricks is a good idea, because it can later save you work in the search, since you can use it better for pruning and reductions.

The problem is of course to write it with no bugs, and if you want to have a minimalistic program you cannot do it.

Uri

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 25 Jan 2006, 10:11

I guess it is a choice in philosophy that you have to make initially and then had better stick to: if you want to have the information available that is needed to do a good static analysis of the tactical situation (like attack tables, or your pin array), it takes a lot of time to update it when you do a move for the purpose of searching. If you don't keep any records, search becomes very fast, but statically you are totally blind.

By the way, I am not bound to minimalistic methods, although such methods appeal to me because they are less error prone and more transparent than complicated algorithms. This is why it is kind of a sport for me to see how small you can get. But I also develop a 'serious' engine.

A question:

If you have a reliable way to identify threats statically, that is competitive with a null-move search, why do you do such a search to make the pruning decision in null-move pruning (or the reduction decision in verified null-move pruning)? If you can identify which piece is under threat statically, this must also tell you if there is no threat at all. And when that is the case, you can act as if the null-move search would have returned a score above eval, i.e. prune. Especially in verified null-move pruning the risk seems small, because if you miss a threat now and then, there is a good chance that the verification search will still recognize it.

Re: Do you evaluate internal nodes?

Postby Tord Romstad » 25 Jan 2006, 10:30

H.G.Muller wrote:Sure, if you know that a piece is under threat it makes no sense to do a null-move, unless you are prepared to abandon that piece (i.e. if eval is so much larger than beta that even the loss of that piece would not prevent the fail high). I am assuming that the eval is such that it is not smart enough to anticipate such losses in a static manner.

I used to do this. It worked in my previous program, but not in my current program. I'm not sure why, but the fact that I had much more sophisticated static threat detection in my previous program is probably part of the explanation.

The problem is: can you make a static analysis that recognizes such threats faster than the search? When I experimented with this, I had very poor results. My static judgement of the situation would completely fail in a large fraction of the cases, because some of the pieces it counted as defenders were in fact pinned, could be exchanged, or were burdened with defending other pieces (so that their use in recapture for this piece would lead to the loss of another).

Similar to what I do in the last few plies of the main search, except that I don't do this in order to test whether I should do a null move search, but rather as a replacement for the null move search. The code you tried, if I understand correctly, looked like this:
Code:
if(ok_to_do_nullmove(board)) {
  /* only try the null move when the eval stays >= beta even after the
     biggest statically detected threat */
  if(static_eval(board) - biggest_threat(board, them) >= beta) {
    do_nullmove(board);
    value = -search(board, -beta, -(beta-1), depth-(R+1));
    undo_nullmove(board);
    if(value >= beta) return value;
  }
}

What I do is this:
Code:
if(ok_to_do_nullmove(board)) {
  static_value = static_eval(board);
  if(depth <= 2 && static_value - biggest_threat(board, them) >= beta + margin)
    return beta;
  else if(static_value >= beta) {
    do_nullmove(board);
    value = -search(board, -beta, -(beta-1), depth-(R+1));
    undo_nullmove(board);
    if(value >= beta) return value;
  }
}

Considering the extremely primitive biggest_threat() function I am using, it is surprising that this works at all. I just loop over all available captures and compute their SEE values, without considering pins or overloaded pieces. Nevertheless, it does seem to work. I am sure it could easily be improved, though.
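A guess at the shape of such a biggest_threat(), assuming a capture generator and a see() routine with the obvious meanings (not Tord's actual code):
Code:
/* Largest SEE gain among the given side's captures: no pins, no
   overloaded pieces -- exactly as primitive as described above. */
int biggest_threat(const Board *board, int side) {
  Move moves[256];
  int n = generate_captures(board, side, moves);
  int best = 0;
  for(int i = 0; i < n; i++) {
    int gain = see(board, moves[i]);         /* static exchange evaluation */
    if(gain > best) best = gain;
  }
  return best;
}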

So I kind of gave up on that idea, and now I am trying to decide such matters (if a piece is sufficiently defended or under threat) by search only. The search can still be guided by static clues, of course.

A quite common trick which you may want to try is doing a pre-nullmove search before the full nullmove search when the remaining depth is big. This works even in programs which don't evaluate internal nodes. The pseudo code would look roughly like this:
Code:
  if(ok_to_do_nullmove(board)) {
    int value = -Infinity;
    bool do_nullmove_search = true;
    do_nullmove(board);
    if(depth >= 7) {
      value = -qsearch(board, -beta, -(beta-1));
      if(value < beta) do_nullmove_search = false;
    }
    if(do_nullmove_search)
      value = -search(board, -beta, -(beta-1), depth-(R+1));
    undo_nullmove(board);
    if(value >= beta) return beta;
  }

I like the idea, but it never worked for me.

Tord

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 25 Jan 2006, 12:09

I guess it depends on how clever your biggest_threat() (i.e. your SEE()) is whether you still need the null-move. To completely replace the null-move search it must be very accurate, and I would not be surprised if the simple version (without pinnings and overloads) is not good enough. (I am a bit confused which part of your message pertains to your old code and which to the new one. The looping over SEE() without pins and overloads, was that the 'more sophisticated' threat detection of your old code, or is this the new code which you say does not work?)

If your static threat detection overlooks some threats (involving pinnings or overloads), using it as the sole criterion for pruning makes you completely blind to such threats. Verified pruning should be more forgiving in this respect, especially if you do a re-search.

About the interesting idea of the pre-search: Since I do internal iterative deepening, in principle any null-move search would start as a qsearch(), since the initial depth_left always starts out as zero. So a generalization of this idea could be to pass a parameter to the search routine that searches the tree beyond the null move, to tell it that it should abort the deepening on the first iteration that does fail high (implying that the null move would not fail high). This guarantees that you will find any threat (that would be within the scope of your reduced null-move search) with minimal effort. It seems worth trying! 8-)
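One way to read that idea as a sketch (the deepening of the null-move subtree is written out here as an explicit loop; R and the engine primitives as before):
Code:
/* Deepen the null-move subtree iteratively and stop as soon as some
   iteration already refutes the null move, i.e. reveals a threat. */
do_nullmove(board);
for(d = 0; ; d++) {
  value = -search(board, -beta, -(beta-1), d);
  if(value < beta) break;            /* a threat was found: stop deepening */
  if(d >= depth-(R+1)) break;        /* reached the usual reduced depth    */
}
undo_nullmove(board);
if(value >= beta) return value;      /* no refutation at any depth: prune  */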

Re: Do you evaluate internal nodes?

Postby Uri Blass » 25 Jan 2006, 12:47

H.G.Muller wrote:
A question:

If you have a reliable way to identify threats statically, that is competitive with a null-move search, why do you do such a search to make the pruning decision in null-move pruning (or the reduction decision in verified null-move pruning)? If you can identify which piece is under threat statically, this must also tell you if there is no threat at all. And when that is the case, you can act as if the null-move search would have returned a score above eval, i.e. prune. Especially in verified null-move pruning the risk seems small, because if you miss a threat now and then, there is a good chance that the verification search will still recognize it.


Some of the threats are not threats against a piece but positional threats, so detecting threats against pieces is not enough to avoid the null move search.

Detecting threats against pieces by evaluation is also not enough to be sure that there is a threat, because a pawn may threaten the queen but the opponent may answer the capture of the queen with checkmate, so the plan to capture the queen is not a threat according to the search.

Uri

Re: Do you evaluate internal nodes?

Postby H.G.Muller » 25 Jan 2006, 14:21

OK, so basically you are saying that for null-move pruning you need a much more accurate determination of the score, since in many cases the decision to prune should be taken on positional grounds.

But I could imagine that you could also benefit a lot from a null-move search to recognize positional threats. In situations where no captures are good, and where this is recognized by SEE, you would have to sort non-captures in front. An obvious way to sort those initially would be by the number of positional points they are likely to earn, e.g. from piece-square tables, which then takes the place of 'most valuable victim' in tactical sequences.

But the concept of 'threat' works pretty much the same in that case: you might have a move available that improves your own position a bit, but if the opponent has a move that improves his positional score much more, then your overall score might be better if you direct your effort to preventing his move rather than pursuing your own. Just as it is better to save your Rook than to capture his Pawn. And the means available to prevent his positional move are very much the same as for defending yourself against captures: you can increase the attack on the square he wants to go to, so that it is no longer safe for him (the equivalent of defending your piece under threat), you could block the move, you could exchange the piece that can do the move. (The only mode of defense that has no analogy in ordinary chess is withdrawal of the threatened piece.) Moves that do such things are easily recognized statically and can be upgraded in priority by the value of the move they might prevent (or by how much that value sticks out above the average positional advantage of a move).
