Irrefutable laws of chess programming !

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde

Irrefutable laws of chess programming !

Postby GeoffW » 21 Jan 2006, 17:00

Hi

Law 1
Take an idea that works for everyone else, implement it in your program, and it will never work the first time, or ever!

I have just tried the late move reductions that Tord and Stuart have been talking about recently.

Code:
if (depth > 1 && !disablePruning && !weAreInCheck &&
    !(gen_dat[i].m.b.bits & (CAPTURE_MOVE | PROMOTE_MOVE)) &&
    history[gen_dat[i].m.b.from][gen_dat[i].m.b.to].count == 0 &&
    !checkingMove(gen_dat[i].m))
{
    if (legalMoveCount > 3)
        reduction = 1;
    else
        reduction = 0;
}
/* history[].count measures how often this move scored > alpha;
   to be precise, it is incremented by depth squared each time */



This looked promising at first, as it reduced node counts drastically, but it made the program play significantly worse.
Any ideas what I could do to improve that bit of code? I have added history array variables to count nodes that score > alpha and score > beta, but so far no other experiments have helped.
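The bookkeeping described above can be sketched roughly like this; a minimal C illustration, with all names and the exact update rule assumed from the description in this post rather than taken from the real engine:

```c
/* Hypothetical sketch of the history counters described above:
   count grows by depth*depth whenever a quiet move scores > alpha,
   and fail_high tracks how often it caused a beta cutoff. */
#define SQUARES 64

typedef struct {
    unsigned count;      /* weighted "scored > alpha" counter */
    unsigned fail_high;  /* times the move caused a beta cutoff */
} HistoryEntry;

static HistoryEntry history[SQUARES][SQUARES];

/* Called when a quiet move from 'from' to 'to' returned score > alpha
   at the given remaining depth. */
void history_good_move(int from, int to, int depth, int caused_cutoff)
{
    history[from][to].count += (unsigned)(depth * depth);
    if (caused_cutoff)
        history[from][to].fail_high++;
}
```

With a layout like this, the `count == 0` test in the snippet above reduces exactly those quiet moves that have never beaten alpha anywhere in the tree.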

Law 2
If you discover a bug that has been in your program for several months, fixing the bug is guaranteed to make it play worse!

I fixed a couple of typos while I was adding mobility code; it didn't seem to help me at all. From reading the other messages here today, it sounds like Naum has fallen foul of this law too :)

Still testing, but my simple mobility code is not doing much to help either :(

Regards Geoff
GeoffW
 
Posts: 37
Joined: 01 Oct 2004, 18:36

Re: Irrefutable laws of chess programming !

Postby mridul » 21 Jan 2006, 20:24

Hi,

Your post reminded me of something I had been meaning to post about - maybe it is relevant to your case here too...
I have seen a lot of people use this idiom, and until recently so did I, sometimes in very non-obvious ways (example: the cheap singular extension code posted in CCC some time back).

Now I try to avoid code which uses:
* current_move[ply - k], current_victim[ply - k], etc. - that is, depending on the previous kth move, where k >= 1.
* The number of valid moves made at this ply, like num_valid[ply] > y, etc. (other than for mate detection, of course :) )

The first introduces path dependencies into your search, which can get amplified across multiple hashtable stores/probes.

The second makes the search dependent on move ordering - sometimes to an unexpectedly high degree.

Even with constant move ordering for a given position (based on pcsq, static analysis of the position, and so on), I feel I am still introducing variations and dependencies between two unrelated aspects.

For that matter, I would prefer not to use the "legalMoveCount > 3" in your code below.
But then,
* I am highly unsuccessful at writing fast engines and in my experiments with pruning, so take my observations with a bag of salt :-)
* Practical aspects usually outweigh a 'clean search'

On the pruning code itself I am woefully ignorant - I remember trying and failing...
The only pruning I have is an ifdef'ed variation of very heavily dumbed-down futility - I think the analysis needed to turn it on outweighs the potential benefits anyway!

Enjoy,
Mridul
mridul
 
Posts: 48
Joined: 09 Dec 2004, 11:34
Location: Bangalore , India

Re: Irrefutable laws of chess programming !

Postby Alessandro Scotti » 21 Jan 2006, 20:46

Hi Geoff,
this is what I have in Kiwi:

Code:
if( haveHistoryPruning &&
    depthExtension == 0 &&
    validMoves >= 4 &&
    ! curr.isCaptureOrPromotion() &&
    ! givesCheck &&
    depth >= (4*FullPlyDepth) &&
    true )
{
    int n = histTable[ hindex ].count;
    int f = histTable[ hindex ].fail_high;

    if( f < (n / 8) ) {
        depthExtension = -FullPlyDepth;
    }
}


The first version had validMoves >= 3 and depth >= 3, and gave me 15-20 Elo. The version above is probably worth some 10-15 extra Elo.
Alessandro Scotti
 
Posts: 306
Joined: 20 Nov 2004, 00:10
Location: Rome, Italy

Re: Irrefutable laws of chess programming !

Postby smcracraft » 21 Jan 2006, 22:24

GeoffW wrote:Hi

Law 1
Take an idea that works for everyone else, implement it in your program, and it will never work the first time, or ever!

I have just tried the late move reductions that Tord and Stuart had been talking about recently

<snip>


Tord also said that it would work best in simple-eval programs.
Is your program a simple-eval program?

My program was at the time I added it, and it produced a 1-2%
improvement in basic test suite score, an improved branching factor,
and greater depth. My program then did only material + pcsq.

As my eval gets more complex, I'll keep testing late move reduction
with and without to make sure it doesn't swing too far the other way.
Since adding late move reduction I have added passed pawns and doubled pawns
at terminal nodes (no file/rank multipliers for those yet, but
a game-stage multiplier), a queen-foraging-in-opening penalty,
basic king castling, and mobility.

I went back and ran a test for the purposes of this post.

The first two results are with late move reduction at 1 second per
position on wac.epd (revised) and Dann Corbit's ecmgcp.epd, and
then without late move reduction at the same time control for the same tests:

$ ./td_testshort
time 1 wac.epd
+ 7.06/18.88 81% 243/300 bf=2.30 ha=34.29% pha=96.04% 244.46 29659060 98864/1/121325 44961/74622/385188/983312/6086660/146079
- 7.22/20.45 19% 35/183 bf=2.62 ha=29.31% pha=97.06% 182.81 20108780 109884/1/109999 23477/52305/27428/471522/5670896/47576

Stuart Cracraft@CRACRAFT ~/src/td
$ ./td_testshort
time 1 wac.epd
+ 6.07/18.79 80% 241/300 bf=3.06 ha=31.80% pha=95.96% 245.81 37070824 123569/1/150808 52641/82607/226060/1035093/10161720/240549
- 5.94/20.03 17% 32/183 bf=3.86 ha=27.91% pha=96.97% 183.66 24156228 132001/1/131525 23947/52399/38485/440327/8880315/46771


bf (branching factor) improved measurably, while test results
improved very slightly.

This result is good enough for me.

The evaluation in the first pair above (with late move reduction) is
the default version of the present "TeeDee" program, which has only
these terms at terminal nodes:

material
pcsq all (based on Fruit, but hard transitions not smoothed yet)
doubled pawns penalty
passed pawns bonus
mobility
queen foraging in opening penalty
king presently in castled position bonus

Specifically:

material 1000.0000 3000.0000 3100.0000 5000.0000 9000.0000
params[2311] passed pawn = 100.000000 for each passed pawn
without respect to file or rank but multiplied by game stage 1-10
params[2312] doubled pawn = -100.000000 for each doubled pawn
without respect to file or rank but multiplied by game stage 1-10
params[2313] mobility = 100.000000 for each pseudo-legal move;
there is no variance per piece type for mobility, nor for
any other aspect (game stage, centrality, etc.)
params[2314] queen_foraging = -100.000000
a penalty applied if the queen is seen off its opening square in
the opening of the game without respect to development before
the first third of material is traded off
params[2315] king castled = 100.000000
king and rook must be on their castled square queen or kingside

Sorry for the format of the above; it is a side effect of the program's
research design, which is still in progress.

Stuart
smcracraft
 
Posts: 65
Joined: 15 Jan 2006, 05:38

Re: Irrefutable laws of chess programming !

Postby GeoffW » 22 Jan 2006, 13:09

Hi

Thanks for the replies, guys. I have just been doing some more testing with this.

My eval is very simple, so I hoped it would work.

Code:

1 Sec per pos'n

WACNEW No reductions
Test results for 300 Positions. Solved 279. Failed on 21
Total time for test suite 4:04 Minutes
Solved = 93.0 Percent     Total nodes: 367333564

ECMGCP  No reductions
Test results for 183 Positions. Solved 45. Failed on 138
Total time for test suite 3:05 Minutes
Solved = 24.6 Percent     Total nodes: 239836997



WACNEW  with reductions as original post
Test results for 300 Positions. Solved 276. Failed on 24
Total time for test suite 4:03 Minutes
Solved = 92.0 Percent     Total nodes: 366648329

ECMGCP with reductions as original post
Test results for 183 Positions. Solved 52. Failed on 131
Total time for test suite 3:05 Minutes
Solved = 28.4 Percent     Total nodes: 240203514



WACNEW With Alessandro's reductions
Test results for 300 Positions. Solved 281. Failed on 19
Total time for test suite 4:04 Minutes
Solved = 93.7 Percent     Total nodes: 367557545

ECMGCP With Alessandro's reductions
Test results for 183 Positions. Solved 51. Failed on 132
Total time for test suite 3:05 Minutes
Solved = 27.9 Percent     Total nodes: 244648328



I implemented Alessandro's example as it looked promising. I am still running a test against some engines, but I am sure it is going to give a worse result in real games :( It was particularly noticeable against Aice: with no reductions I had won 9-1, with reductions that is now running at 4-4.

I followed Alessandro's code apart from the line
Code:
if( f < (n / 8) )


Isn't that saying that if we haven't had many fail-high cutoffs on this move before, then we reduce depth? Isn't this the reverse of the conventional idea: if this move has caused lots of fail highs before, we anticipate it will again and can afford to reduce depth?

Geoff
GeoffW
 
Posts: 37
Joined: 01 Oct 2004, 18:36

Re: Irrefutable laws of chess programming !

Postby Alessandro Scotti » 22 Jan 2006, 14:33

GeoffW wrote:I implemented Alessandro's example as it looked promising. I am still running a test against some engines, but I am sure it is going to give a worse result in real games :( It was particularly noticeable against Aice: with no reductions I had won 9-1, with reductions that is now running at 4-4.

I followed Alessandro's code apart from the line
Code:
if( f < (n / 8) )


Isn't that saying that if we haven't had many fail-high cutoffs on this move before, then we reduce depth? Isn't this the reverse of the conventional idea: if this move has caused lots of fail highs before, we anticipate it will again and can afford to reduce depth?


Hi Geoff,
well, if the Elo difference is small, you need to test with more games. I usually start with 400 games for every new feature, then if it looks promising I play a few hundred more.
And yes, my code does exactly that: if a move failed low often in the past, I expect a fail low again and search it with reduced depth. There are a couple of things I forgot to include in the snippet above, though:
- the history table is indexed by piece/square rather than square/square;
- if a reduced search fails high, I re-search the position at full depth.
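A rough sketch of the first refinement and the f < n/8 test, assuming a hypothetical encoding (12 piece codes, 64 squares), since the post does not give one:

```c
/* Illustrative only: [piece][to-square] history indexing combined
   with the "failed high in fewer than 1/8 of trials" reduction test. */
#define PIECE_CODES 12   /* assumed: 6 piece types x 2 colors */
#define SQUARES     64

typedef struct {
    unsigned count;      /* times the move was tried */
    unsigned fail_high;  /* times it failed high */
} HistEntry;

static HistEntry histTable[PIECE_CODES * SQUARES];

static int hist_index(int piece, int to)
{
    return piece * SQUARES + to;
}

/* A move is considered safe to reduce when it has failed high in
   fewer than one-eighth of its trials. */
int reduce_ok(int piece, int to)
{
    const HistEntry *e = &histTable[hist_index(piece, to)];
    return e->fail_high < e->count / 8;
}
```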
Alessandro Scotti
 
Posts: 306
Joined: 20 Nov 2004, 00:10
Location: Rome, Italy

Re: Irrefutable laws of chess programming !

Postby Tord Romstad » 22 Jan 2006, 15:02

smcracraft wrote:Tord also said that it would work best in simple-eval programs.

I have never really meant to say that. What I meant to say was only that it is possible that this reduction technique, like many other selective search tricks, works better in programs with a simple evaluation function. I have no idea, really.

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: Irrefutable laws of chess programming !

Postby Tord Romstad » 22 Jan 2006, 16:51

Here are my results in WAC and ECMGCP on a Pentium IV 2.4 GHz:

At 1 second/position:
With late move reductions: 292/300 at WAC, 93/183 at ECMGCP.
Without late move reductions: 290/300 at WAC, 80/183 at ECMGCP.

At 2 seconds/position:
With late move reductions: 295/300 at WAC, 115/183 at ECMGCP
Without late move reductions: 295/300 at WAC, 104/183 at ECMGCP

At 3 seconds/position:
With late move reductions: 297/300 at WAC, 128/183 at ECMGCP
Without late move reductions: 296/300 at WAC, 114/183 at ECMGCP

At 5 seconds/position:
With late move reductions: 298/300 at WAC, 141/183 at ECMGCP
Without late move reductions: 298/300 at WAC, 128/183 at ECMGCP.

At 10 seconds/position:
With late move reductions: 298/300 at WAC, 159/183 at ECMGCP
Without late move reductions: 298/300 at WAC, 145/183 at ECMGCP

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: Irrefutable laws of chess programming !

Postby Naum » 22 Jan 2006, 18:42

I agree with both laws GeoffW mentioned. Unfortunately, they are so true that I would call them axioms.
For almost a year I have had a bug in the backward-pawn eval that I can't fix, because Naum plays worse with the fix. It is like a thorn in my side.

I also agree with Mridul about using something like if (lastMoveWasCapture) in your search. It seems to be introducing unwanted side-effects.

History pruning (or whatever you wanna call it) gives Naum 40-50 ELO.
Naum
 
Posts: 87
Joined: 10 Oct 2004, 04:23
Location: Toronto

Re: Irrefutable laws of chess programming !

Postby smcracraft » 22 Jan 2006, 20:58

Tord Romstad wrote:
smcracraft wrote:Tord also said that it would work best in simple-eval programs.

I have never really meant to say that. What I meant to say was only that it is possible that this reduction technique, like many other selective search tricks, works better in programs with a simple evaluation function. I have no idea, really.

Tord


Sorry - my ear seems to hear what it wants to sometimes.

Thanks for clarifying your point.

Stuart
smcracraft
 
Posts: 65
Joined: 15 Jan 2006, 05:38

Re: Irrefutable laws of chess programming !

Postby Ron Murawski » 22 Jan 2006, 22:16

Naum wrote:I agree with both laws GeoffW mentioned. Unfortunately, they are so true that I would call them axioms.
For almost a year I have had a bug in the backward-pawn eval that I can't fix, because Naum plays worse with the fix. It is like a thorn in my side.
<snip>


I once had an error in my eval confusing black bishops with black rooks. When I fixed it, my engine played about 150 Elo weaker. Time went by (3 years!) and I found and fixed many other bugs, and then, suddenly, I was able to fix the bishop/rook confusion but gained no strength from it. But at least I lost no strength, either!

Ron
Ron Murawski
 
Posts: 352
Joined: 26 Sep 2004, 21:50
Location: Schenectady, NY, USA

Re: Irrefutable laws of chess programming !

Postby GeoffW » 24 Jan 2006, 02:17

Hi

Thanks for the comparative tests, Tord. It looks like LMR (late move reductions) is a big help for Glaurung. I have it working a little better now, I think, thanks to Alessandro's explanation and his tip about re-searching if an LMR node fails high; I have been playing some games with that change and getting slightly improved results.

Code:
    Engine        Score        Wa
01: Waster 0.18   77.0/119 ??????
02: Postmodernist 6.5/14    4-5-5
02: GES_136       6.5/13    3-3-7
04: Hermann       5.0/14    4-8-2
04: Knightx192    5.0/13    5-8-0
04: Dorky         5.0/13    4-7-2
07: Terra34       4.5/13    2-6-5
08: Djinn884b     4.0/13    3-8-2
09: Aice          3.5/13    2-8-3
10: Matacz        2.0/13   1-10-2

Level: Blitz 3/1


Oh, and that reminds me - a complaint for Alessandro :D. I had to drop Kiwi from that list of test engines; I used to score about 40% or so against it, but since your last version update I can't win a game! Congrats on a big improvement - it must have been quite a lot of work, I imagine.

The other thing you mentioned, the history table being indexed by [piece][move to], sounded interesting. I just use the conventional [from][to]; I will try that idea when I get a bit of time and see if it improves my horribly bad move ordering.


Regards Geoff
GeoffW
 
Posts: 37
Joined: 01 Oct 2004, 18:36

Re: Irrefutable laws of chess programming !

Postby Tord Romstad » 24 Jan 2006, 11:55

GeoffW wrote:Hi

Thanks for the comparative tests Tord. Looks like LMR (late move reductions) is a big help for Glaurung.

They are. Of course the test suite results by themselves mean nothing, but LMR help a lot in my test games, too.

I have it working a little better now, I think, thanks to Alessandro's explanation and his tip about re-searching if an LMR node fails high; I have been playing some games with that change and getting slightly improved results.

Most people do such re-searches, I think. I prefer to think of the reduced depth searches as a sort of internal verification search which tests whether it is safe to prune the move. If the reduced move fails high, it means that it is not safe to prune, and we do a full depth search.
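The control flow of that verification idea can be shown with stubbed searches; everything here (the stub, the one-ply reduction, the function names) is a simplified assumption for illustration, not Glaurung's actual code:

```c
/* The real search() is replaced by a stub returning canned scores,
   so only the reduce-then-verify control flow is demonstrated. */
static int reduced_score;  /* score the reduced search will return */
static int full_score;     /* score the full-depth search will return */
static int full_searches;  /* counts full-depth searches performed */

static int search_stub(int depth, int full_depth)
{
    if (depth < full_depth)
        return reduced_score;
    full_searches++;
    return full_score;
}

/* Late move step: search one ply shallower first; if that reduced
   search fails high (score > alpha), re-search at full depth. */
int lmr_step(int depth, int alpha)
{
    int score = search_stub(depth - 2, depth - 1);
    if (score <= alpha)
        return score;   /* stayed below alpha: the reduction stands */
    return search_stub(depth - 1, depth - 1);   /* full-depth re-search */
}
```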

The other thing you mentioned, the history table being indexed by [piece][move to], sounded interesting. I just use the conventional [from][to]; I will try that idea when I get a bit of time and see if it improves my horribly bad move ordering.

Unfortunately, I would be surprised if it did. In my experience there is little difference between [piece][to] history and [from][to] history. If you don't mind wasting some memory, you could even try [piece][from][to]. I have never tried this myself, but I know one strong commercial program which uses it.

By the way, if your move ordering is as horrible as you say, I would expect LMR to be less effective than in a program with good move ordering. The main point of LMR is that a beta cutoff will usually either happen at one of the first moves searched, or not at all. If your move ordering is sufficiently bad, this is no longer true.
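One cheap way to check whether move ordering is good enough for LMR, in the spirit of the point above, is to instrument the search with a pair of counters; the names here are made up for the example:

```c
/* Track what fraction of beta cutoffs happen on the first move
   searched; well-ordered searches are often quoted near 90%. */
static unsigned long cutoffs_total;
static unsigned long cutoffs_first_move;

void record_cutoff(int move_number)   /* 1-based index of the cutoff move */
{
    cutoffs_total++;
    if (move_number == 1)
        cutoffs_first_move++;
}

double first_move_cutoff_rate(void)   /* percentage; 0 if no cutoffs yet */
{
    if (cutoffs_total == 0)
        return 0.0;
    return 100.0 * (double)cutoffs_first_move / (double)cutoffs_total;
}
```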

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

