Drop of nps: why?

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde

Drop of nps: why?

Postby YvesLejeail » 21 Feb 2006, 18:14

Hello all,
My chess engine is very weak, so I have a lot of work to do to improve it. But very often, when I try to improve the code, there is a huge nps drop (often a factor of 3 to 10). My question is: why? Could someone explain it to me, so that I can stop making silly changes in the code? It's time-consuming and very dispiriting :?
It is very hard for me to describe exactly when it arises, otherwise I would not post here. Still, could it have something to do with passing tables byref or byval through subroutines? I use tables in move generation (move_from, move_to, move_expected_score by SEE) and in sorting. Recently I tried to change something in the arguments of the quiescent move generation subroutine, and the nps dropped: the code seems to prefer passing ByVal instead of ByRef, as far as I can tell. Could it be due to VB.net, and why? I know a lot of you are using C or C++, but the problem should be common to those languages (I still suppose). Could it be a stack limitation or something like that (sorry to ask about things I don't really understand)? :( Is the code structure very important? Is it better to avoid passing tables through subroutines?

I'm not a guru programmer and sure of nothing, so any idea is welcome. Thanks for any help, :)
Yves
YvesLejeail
 
Posts: 48
Joined: 03 Aug 2005, 17:36
Location: Pertuis, France

Re: Drop of nps: why?

Postby Uri Blass » 21 Feb 2006, 18:25

Yes, it is better not to pass tables to routines.

You should pass pointers to tables or use global variables.

Passing pointers is practically better, but I still use a lot of global variables, and it is not easy to change that without a rewrite, so I do not plan to do it in the near future.
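
In C, the two options look something like this (just a sketch; the table names echo the move_from/move_to Yves mentioned, and the sizes are invented):

Code:
#include <stdio.h>

#define MAX_PLY   64
#define MAX_MOVES 256

/* Option 1: global move tables indexed by ply -- nothing is ever copied. */
int move_from[MAX_PLY][MAX_MOVES];
int move_to[MAX_PLY][MAX_MOVES];

/* Option 2: pass only a pointer; one machine word travels to the callee,
   never the table itself. */
static int count_moves_to(const int *to, int n, int square)
{
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (to[i] == square)
            hits++;
    return hits;
}

int main(void)
{
    move_to[0][0] = 28;   /* two moves landing on the same square */
    move_to[0][1] = 28;
    printf("%d\n", count_moves_to(move_to[0], 2, 28));   /* prints 2 */
    return 0;
}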

Uri
Uri Blass
 
Posts: 727
Joined: 09 Oct 2004, 05:59
Location: Tel-Aviv

Re: Drop of nps: why?

Postby Roman Hartmann » 21 Feb 2006, 18:54

Hi,
I had that problem often enough: one tiny change and the nps dropped drastically.
I tried to implement a qs several times, but for a long time I failed, because every time I added one my nps dropped like a stone and the search couldn't reach any significant depth any more.
Well, I was finally able to solve it. In my case it was a design issue: I passed a structure to the search instead of a pointer to the structure, and since I passed that structure on from the search to the qs, the overhead was just too much.
Passing a pointer will probably work much better than passing a whole structure, or tables in your case.
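
In C, the difference is just the parameter type (a sketch with an invented Position; a real engine's structure is bigger):

Code:
typedef struct {
    int board[64];         /* ~256 bytes, copied on every by-value call */
    int side_to_move;
    int castling_rights;
} Position;

/* By value: the compiler copies the whole structure into the callee's
   stack frame on every call -- deadly when search calls qs millions
   of times. */
static int eval_by_value(Position pos)        { return pos.board[0]; }

/* By pointer: a single machine word is passed; 'const' says the callee
   may not modify the caller's position. */
static int eval_by_pointer(const Position *p) { return p->board[0]; }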

best regards
Roman
Roman Hartmann
 
Posts: 155
Joined: 11 Oct 2004, 14:21

Re: Drop of nps: why?

Postby YvesLejeail » 21 Feb 2006, 19:12

Thanks for the replies! So it seems that passing a table (or a structure) to a subroutine is very bad... I wonder if making my tables global is worth the try; that's a very interesting idea... It is an easier change for me than structures (I have never tried them), but if I understand you correctly, Roman, my problem should be the same: better to pass pointers than structures or tables...
Hope is back; the nps will rise this time :wink: I will let you know the effect of the change.
Thanks again,
Thanks again,
Yves
YvesLejeail
 
Posts: 48
Joined: 03 Aug 2005, 17:36
Location: Pertuis, France

Re: Drop of nps: why?

Postby H.G.Muller » 22 Feb 2006, 13:42

You should always keep in mind how difficult the things you ask the computer to do really are. Things that do not fit in a single memory word, like tables and structures, require time for copying if you pass them 'by value'. In such a case passing them 'by reference', i.e. passing a pointer (which is the same thing), is much quicker. But of course this only works well for 'read-only' data: if you change the data through the pointer, you change the original, and that might not be what you want at all.

By the way, a lower nps does not necessarily mean less efficient code. Depending on your program design, the time a node requires might depend strongly on whether it is a cut-node or an all-node. (E.g., if you do incremental move generation, the time might be roughly proportional to the number of moves actually generated before the cutoff occurred.) So changes that affect the efficiency of your pruning might also result in a different nps.
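
One way to see this in your own engine (a sketch with hypothetical counter names, to be incremented in the move loop of your search):

Code:
/* Increment g_fail_high whenever a node produces a beta cutoff, and
   g_fail_high_first when already the first move did. If the ratio is
   near 0.9 the ordering is healthy and cut-nodes are cheap; when it
   sinks, more moves are searched per node and the nps drops without
   the code itself being any slower. */
long g_fail_high       = 0;
long g_fail_high_first = 0;

double ordering_quality(void)
{
    return g_fail_high ? (double)g_fail_high_first / g_fail_high : 0.0;
}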
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Drop of nps: why?

Postby YvesLejeail » 23 Feb 2006, 09:10

Hi!
It is said everywhere on the web that passing byref is much more efficient than passing byval, but sometimes the contrary is also said... As far as I remember there are some cases which can lead to overhead, when you pass something byref, then pass it to another sub byref, then to another byref... So I suppose it depends on the program. Some time ago I tried to pass everything byref, but the result was very disappointing, a big decrease in search speed, so I went back to byval for the variables which are not modified...
What you are saying about the pruning and move ordering effect is very interesting, because yesterday, observing my engine during play, I noticed that the nps was sensitive to the "efficiency" of the search:
- obvious move to find = moves well ordered = not so bad nps (20000 to 40000)
- very difficult move, computer under high pressure = moves not well ordered = very bad nps (drop to 3000)
It's difficult to put that into mathematical words, so it's just a zoological observation :)
Also, I noticed the nps is strongly dependent on the search depth (if I remember well, for "even" depths the nps is lower than for odd depths, sometimes by a factor of 3). Of course the search becomes very inefficient when the engine drops to 3000 nps, because the branching factor is around 6-8 (at least). But this discussion drives me to the conclusion that I should focus on move ordering...
Moreover, yesterday I tried to change the structure of my tables for move generation. I made two big global tables (20*75 of 32-bit integers) to track the generated moves (from and to), indexed by depth. I removed all the tables passed byval and byref. But again the result was very disappointing: no apparent improvement, and still the same "instability" depending on the depth, with big apparent "overheads", as described above.
So I think I will keep my code as it is today. Maybe I will try, as you suggest, to avoid copying the board: it is better to have a makemove and an undomove. Can we say, to summarize, that the more nodes we prune (i.e. the more efficient the move ordering and the search), the higher the nps will be? At least it will be as high as possible? If I can't improve the nps, I can try to improve the move ordering :)
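
In C notation a minimal makemove/undomove pair, with invented fields and ignoring castling and en passant, would look something like this:

Code:
typedef struct { int board[64]; int side_to_move; } Position;
typedef struct { int from, to; } Move;

/* Edit the one shared position in place on the way down the tree... */
static void make_move(Position *pos, Move m, int *captured)
{
    *captured          = pos->board[m.to];    /* remember what we erase */
    pos->board[m.to]   = pos->board[m.from];
    pos->board[m.from] = 0;
    pos->side_to_move ^= 1;
}

/* ...and restore it exactly on the way back up: no copying at all. */
static void undo_move(Position *pos, Move m, int captured)
{
    pos->board[m.from] = pos->board[m.to];
    pos->board[m.to]   = captured;
    pos->side_to_move ^= 1;
}
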
Thanks!
Yves
YvesLejeail
 
Posts: 48
Joined: 03 Aug 2005, 17:36
Location: Pertuis, France

Re: Drop of nps: why?

Postby Tord Romstad » 23 Feb 2006, 10:17

YvesLejeail wrote: Hi!
It is said everywhere on the web that passing byref is much more efficient than passing byval, but sometimes the contrary is also said... As far as I remember there are some cases which can lead to overhead, when you pass something byref, then pass it to another sub byref, then to another byref... So I suppose it depends on the program.


I think it depends more on what type of data you are trying to pass. Passing big objects (like structs) by value is probably not a good idea. Try to pass basic data types like numbers and characters by value, and everything bigger by reference.

Tord
Tord Romstad
 
Posts: 639
Joined: 09 Oct 2004, 12:49
Location: Oslo, Norway

Re: Drop of nps: why?

Postby H.G.Muller » 23 Feb 2006, 10:39

If you look at how alpha-beta works with perfect move ordering, you will see that in any branch other than the PV there are two kinds of nodes, depending on who is to move:

Alpha nodes (aka all-nodes) will search every move that is possible from that position. (The move ordering does not matter there.) Beta nodes (aka cut-nodes) will only consider a single move, which is so good that it produces a cutoff (fail high), so all other moves can be skipped. (Here the move ordering is important: with wrong move ordering you would consider other moves before the one that gives the cutoff.)

Your code might need a much shorter time to finish the beta nodes, hence your nps will be higher if you have more of them. For a tree of fixed depth and branching ratio you can calculate how many alpha and beta nodes there are in the tree (with perfect move ordering). It turns out that for even depth the two kinds are about equally numerous, while for odd depth almost all nodes are alpha nodes. So your observations with odd/even depth make sense. How it works out depends a little on the details of your program, in particular on what you do at the horizon. If you always do a static eval there, without generating moves, you should not count those nodes in the depth (so that odd and even interchange). If you generate moves for the purpose of deciding if they are good for QS, but allow the static eval, which you calculate first, to effect the beta cutoff, there is a strong difference between alpha and beta nodes even at the horizon, and you should count them.
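
For reference (standard theory, not specific to any engine): Knuth & Moore (1975) give the number of leaves of this minimal tree for branching factor b and depth d as

Code:
% leaf count of the minimal alpha-beta tree (Knuth & Moore, 1975)
L(b,d) = b^{\lceil d/2 \rceil} + b^{\lfloor d/2 \rfloor} - 1

For even d the two terms are equal, while for odd d they differ by a factor b; that parity asymmetry is the same effect as described above.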

For judging the efficiency of your code, independent of the adeptness of your search, it is better to count the total number of moves generated in all nodes, rather than the nodes themselves. If your search routine has a large overhead that it incurs even when it executes without trying any moves, you might take a weighted combination of the two (i.e. overhead * N_nodes + N_moves, where the factor 'overhead' expresses how many moves the overhead is equivalent to).
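
As a sketch (hypothetical counter names; the overhead factor must be calibrated for your particular engine):

Code:
long g_nodes = 0;   /* bump once at every entry of search()       */
long g_moves = 0;   /* bump for every move the generator produces */

/* 'overhead' is the fixed per-node cost expressed in
   generated-move equivalents. */
double effective_work(double overhead)
{
    return overhead * (double)g_nodes + (double)g_moves;
}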

Passing byref should only be inefficient for objects that fit in a single memory word: passing the pointer is then exactly as much work as passing a copy of the object itself, but every time the object is used the pointer has to be dereferenced, which takes an extra instruction (the first memory fetch only tells you where the object is, rather than what it is). For arrays this does not matter, because arrays are pointers anyway, and even accessing a local copy of the array would require a dereference to get at the elements.
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Drop of nps: why?

Postby Tony van Roon-Werten » 23 Feb 2006, 17:19

YvesLejeail wrote: [...] Recently I tried to change something in the arguments of the quiescent move generation subroutine, and the nps dropped: the code seems to prefer passing ByVal instead of ByRef, as far as I can tell. Could it be due to VB.net, and why? [...]


In C++ and Delphi, passing arrays by reference is always faster.

In a .NET environment it doesn't have to be, because passing by ref may result in making a copy, sending it to the function, and copying it back to the original array: basically twice as much copying as sending it by val in the first place.

The best way to solve this is not to use a .NET environment for your engine. It's only good for the interface.

Tony
Tony van Roon-Werten
 
Posts: 99
Joined: 02 Oct 2004, 15:31
Location: 's Hertogenbosch, Netherlands

Re: Drop of nps: why?

Postby Casper W. Berg » 23 Feb 2006, 18:10

If you are unsure where your program spends its time, try using a profiling tool. This is invaluable for optimizing your code for speed.

I can't recommend any particular profiler for VB.Net, because I'm not a VB programmer, but I know they exist.
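
Even without a profiler, a crude stopwatch around the search (a C sketch; the node counter is assumed to exist in your engine) already distinguishes "slower code" from "different tree":

Code:
#include <stdio.h>
#include <time.h>

long g_nodes = 0;   /* incremented once per node inside search() */

/* Call with the clock() value taken just before the search started.
   Note clock() measures CPU time, not wall time. */
static void report_nps(clock_t start)
{
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    if (secs > 0.0)
        printf("%ld nodes in %.2f s = %.0f nps\n",
               g_nodes, secs, (double)g_nodes / secs);
}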

Casper
Casper W. Berg
 
Posts: 27
Joined: 11 Jan 2006, 22:33

Re: Drop of nps: why?

Postby Dann Corbit » 23 Feb 2006, 20:19

The profiler advice is excellent.

Passing objects by reference has an evil side effect: modifying them in the called function also modifies them in the caller.

So you must be very careful about what you are doing.

A change from call by value to call by reference is very drastic. You should do that only if it is absolutely necessary.

The profiler will tell you if that is the case.
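
A two-function C illustration of that side effect (hypothetical names): the callee "just sorts its argument", yet the caller's own array is reordered too.

Code:
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

/* 'scores' IS the caller's array: after this call the caller's data
   has changed behind its back -- fine if intended, a bug if not. */
static void sort_scores(int *scores, int n)
{
    qsort(scores, n, sizeof scores[0], cmp_desc);
}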
Dann Corbit
 

Re: Drop of nps: why?

Postby Alessandro Scotti » 23 Feb 2006, 20:33

Dann Corbit wrote: Passing objects by reference has an evil side effect: modifying them in the called function also modifies them in the caller.

So you must be very careful about what you are doing.

A change from call by value to call by reference is very drastic. You should do that only if it is absolutely necessary.


In this case it really helps, if the language supports it, to declare references and pointers as "const", so the compiler won't let you modify them by mistake.
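
For instance, in C (with an invented Position):

Code:
typedef struct { int board[64]; } Position;

/* 'const' turns accidental writes into compile-time errors: */
int evaluate(const Position *pos)
{
    /* pos->board[0] = 0;   <-- the compiler would reject this line */
    return pos->board[0];
}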
Alessandro Scotti
 
Posts: 306
Joined: 20 Nov 2004, 00:10
Location: Rome, Italy

Re: Drop of nps: why?

Postby YvesLejeail » 23 Feb 2006, 22:34

As a profiler I'm using NProf, which works with VB, is free, and is simple enough for me. At least I can see roughly where the bottlenecks in the code are... But at the moment I can't get beyond about ~40000 nps. It seems to be a kind of limit for me, maybe due to the VB code or to some bad code structure, as I'm not a guru :wink: I may consider using another language, but is it worth the try (I mean, for a non-professional programmer even a good language is maybe a waste of time and money)? But this is for the far future; I don't plan it for the next weeks... So I will focus more on the pruning and move ordering, as I said previously, than on the code structure (of course I will come back to that as soon as possible :) )
This afternoon I was working on alpha-beta, on paper. I think I understand very well now how it works theoretically, but putting it into practice is another thing, isn't it? I was also playing with pruning (the 3 kinds including razoring, as I don't have null-move). I'm interested in some recent discussions I saw here about pruning after a beta cutoff (if I remember well). I will study it carefully.
Thanks for all the answers, guys; they will also be carefully examined, several times, to understand them completely!
Yves
YvesLejeail
 
Posts: 48
Joined: 03 Aug 2005, 17:36
Location: Pertuis, France

Re: Drop of nps: why?

Postby Dann Corbit » 24 Feb 2006, 01:50

Alessandro Scotti wrote:
Dann Corbit wrote: [...]

In this case it really helps, if the language supports it, to declare references and pointers as "const", so the compiler won't let you modify them by mistake.


However, you often need to modify them without modifying the versions in the calling program (which is why passing by value is the default in many languages).
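
When the callee really does need its own modifiable version, the copy can at least be made explicit (a C sketch in copy-make style, with an invented Position):

Code:
typedef struct { int board[64]; int side_to_move; } Position;

int search(const Position *pos, int depth)
{
    Position scratch = *pos;      /* one explicit, visible copy...   */
    scratch.side_to_move ^= 1;    /* ...modified freely from here on */
    /* apply a move to 'scratch' here, then recurse on &scratch;     */
    /* *pos itself is never touched.                                 */
    return depth > 0 ? -search(&scratch, depth - 1) : 0;
}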
Dann Corbit
 

Re: Drop of nps: why?

Postby YvesLejeail » 24 Feb 2006, 15:40

I'm not sure about the pointer feature in VB.net: something very limited seems to exist, so I should look at it to see if it can be useful for me.
Also, the use of const in VB.net seems to be very limited, to simple things (it is not possible to have an array of constants; maybe I could declare each value of the array as a constant, I should also take a look at that).
This morning I removed most of the unnecessary tables passed through subs (sometimes I passed the board, which is global, through subs!!!). Now I feel better :) ; I don't know about the code, I must test it. During the tests (I'm using the test suites Combinations, Mates, Final positions), I discovered some major bugs, like:
- castling rights not "forgotten" after the 1st iteration, in some cases;
- pruning in the quiescence search causing the program to evaluate the position badly (maybe some further conditions have to be checked; see the sketch below).
Milady 1.xx is rather young (1 year + 3 months), so it is still full of bugs!
It seems that careful testing is a hard task in computer chess... Someone on the net said that "computer chess is the art of debugging", which seems really true to me.
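
About the quiescence pruning bug above: the textbook safeguard (a sketch in C, not Milady's actual code) is to let a "stand pat" static evaluation bound the score before any capture is pruned or searched:

Code:
typedef struct Position Position;   /* defined elsewhere */
int evaluate(Position *pos);        /* static eval, defined elsewhere */

int quiesce(Position *pos, int alpha, int beta)
{
    int stand_pat = evaluate(pos);  /* the side to move may decline   */
    if (stand_pat >= beta)          /* to capture at all...           */
        return beta;
    if (stand_pat > alpha)          /* ...so never score the position */
        alpha = stand_pat;          /* below its static eval          */

    /* generate captures only; make/undo each one and recurse with
       -quiesce(pos, -beta, -alpha), raising alpha on improvement */
    return alpha;
}
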
Yves
YvesLejeail
 
Posts: 48
Joined: 03 Aug 2005, 17:36
Location: Pertuis, France

