The "memory" command


The "memory" command

Postby Ilari Pihlajisto » 25 Feb 2009, 18:46

As someone who's writing a GUI for both Winboard and UCI engines, I find the "memory" command problematic for the following reasons:

- The UCI protocol already has standard options "Hash" and "NalimovCache". Winboard should follow their lead. It would make life easier for us GUI authors too.
- Forcing the engine to use X MB of RAM for everything can be difficult. What if I want to load the 4-men Scorpio bitbases to RAM, and also allocate a certain amount of memory for the hash table?

I'm not saying that we should get rid of the memory command. It could be nice for the few engines that would support such a feature. But I'd really like to have separate options for setting the hash size and egt cache size. I suggest that we add the following options (a rough parsing sketch follows the list):

- "hash N", where N is the hash table size in megabytes
- "egtcache TYPE N" where TYPE is the egt type (same with the egtpath option) and N is the egt cache size in megabytes
Ilari Pihlajisto
 
Posts: 78
Joined: 18 Jul 2005, 06:58

Re: The "memory" command

Postby H.G.Muller » 25 Feb 2009, 22:00

Ilari Pihlajisto wrote:- Forcing the engine to use X MB of RAM for everything can be difficult. What if I want to load the 4-men Scorpio bitbases to RAM, and also allocate a certain amount of memory for the hash table?

Well, so what if you want that? Just subtract the size of the bitbases, together with all other tables you need, from the memory quota, and use that as hash size. That seems easy enough.
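As an illustration of that subtraction (not any particular engine's code, and with made-up table sizes), a short C sketch:

    #include <stddef.h>

    /* Derive the hash size from the single memory quota the GUI sends. */
    size_t hash_size_from_quota(size_t memory_quota_mb)
    {
        const size_t bitbases_mb     = 20;  /* e.g. 4-men bitbases loaded into RAM */
        const size_t egt_cache_mb    = 4;   /* EGT probe cache */
        const size_t other_tables_mb = 1;   /* pawn hash, move-generator tables, ... */
        size_t reserved_mb = bitbases_mb + egt_cache_mb + other_tables_mb;

        /* if the quota is too small, fall back to a minimal hash rather than overcommit */
        return memory_quota_mb > reserved_mb ? memory_quota_mb - reserved_mb : 1;
    }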

The philosophy of WinBoard protocol is that engines should be allowed to optimize their own use of the available resources. Having the GUI tell them what hash size to use is about as desirable as having it tell them the value of a Knight or a Bishop.

I don't see why it should be problematic in the GUI. In WinBoard I just add the UCI hash size and EGTB cache size, plus 1MB for crumbs, and send the total to the engine. (It seems that UCI engines do not include the EGTB cache in the hash size, but allocate it on top of it, although the UCI protocol specs don't explicitly say you may do that.)
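In code that computation is trivial; a sketch (variable names assumed, not actual WinBoard source):

    #include <stdio.h>

    void send_memory_command(FILE *engine_in, int uci_hash_mb, int egtb_cache_mb)
    {
        int quota_mb = uci_hash_mb + egtb_cache_mb + 1;  /* +1 MB "for crumbs" */
        fprintf(engine_in, "memory %d\n", quota_mb);     /* e.g. "memory 69" for 64+4+1 */
    }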

The division of memory between main hash, Pawn hash, move-generator tables, EGBB, EGTB cache and what have you belongs in the domain of engine tuning.

If we want the GUI to act as an interface for engine tuning through an engine-specific menu, we should define and use a WinBoard equivalent of the UCI option and setoption commands. I already have an experimental XBoard version that implements this, through the option feature. If engines want to be told on installation how much EGTB cache to use, they could define a feature option="EGTBcache -spin 0 8 4", and WinBoard can then send option EGTBcache 4 to set it to the user-requested size. I don't think it makes much sense to make this a standard command, as every engine will require a different value for optimal functioning. My engine, for example, does not use EGTBs, and thus requires size 0. But other engines would not be very happy with that setting.
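To make that concrete, a sketch of how an engine might declare and receive such an option, following the syntax used in this post (the feature is experimental, so the exact wording may still change):

    #include <stdio.h>

    static int egtb_cache_mb = 4;   /* default taken from the feature string below */

    void announce_features(void)    /* sent once in response to "protover 2" */
    {
        printf("feature option=\"EGTBcache -spin 0 8 4\"\n");
        printf("feature done=1\n");
    }

    void handle_option_command(const char *line)   /* e.g. "option EGTBcache 4" */
    {
        int value;
        if (sscanf(line, "option EGTBcache %d", &value) == 1)
            egtb_cache_mb = value;
    }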
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: The "memory" command

Postby Ilari Pihlajisto » 26 Feb 2009, 00:27

H.G.Muller wrote:
Ilari Pihlajisto wrote:- Forcing the engine to use X MB of RAM for everything can be difficult. What if I want to load the 4-men Scorpio bitbases to RAM, and also allocate a certain amount of memory for the hash table?

Well, so what if you want that? Just subtract the size of the bitbases, together with all other tables you need, from the memory quota, and use that as hash size. That seems easy enough.


It's not always that simple. My chess engine doesn't know (and probably shouldn't know) how much memory the EGBB library uses, unless none of the bitbases are loaded to RAM and only the EGBB cache is used.

The philosophy of WinBoard protocol is that engines should be allowed to optimize their own use of the available resources. The desirability of the GUI telling them what hash size to use is just as large as to have it tell them the value of a Knight and a Bishop.


That's a good philosophy. But since the memory command already violates the principle of letting engines decide for themselves, and they aren't required to support the command, why not do what UCI does? And it shouldn't just be a matter of philosophy, but also one of practicality. The de facto standard is that engines have a separate setting for hash table size; we should take that into account.

The division of memory between main hash, Pawn hash, move-generator tables, EGBB, EGTB cache and what have you belongs in the domain of engine tuning.


Sure. I think pawn hash, move-generator tables, piece-square tables, etc. should be left out of these memory limits entirely; they're too insignificant to matter. The main hash table and endgame tables are something else.

If we want the GUI to act as an interface for engine tuning through an engine-specific menu, we should define and use a WinBoard equivalent of the UCI option and setoption commands. I already have an experimental XBoard version that implements this, through the option feature. If engines want to be told on installation how much EGTB cache to use, they could define a feature option="EGTBcache -spin 0 8 4", and WinBoard can then send option EGTBcache 4 to set it to the user-requested size. I don't think it makes much sense to make this a standard command, as every engine will require a different value for optimal functioning.


I agree, Winboard doesn't need dynamic configuration options.
Ilari Pihlajisto
 
Posts: 78
Joined: 18 Jul 2005, 06:58

Re: The "memory" command

Postby H.G.Muller » 26 Feb 2009, 09:48

Ilari Pihlajisto wrote:It's not always that simple. My chess engine doesn't know (and probably shouldn't know) how much memory the EGBB library uses, unless none of the bitbases are loaded to RAM and only the EGBB cache is used.

I think it is very important to know that. We cannot have engines using unlimited amounts of memory in addition to what is available. There might be several terabytes of 8-men bitbases on the machine they are running on; would you want to allow my engine to load them all into memory (preferably during ponder time, of course)? :D

That's a good philosophy. But since the memory command already violates the principle of letting engines decide for themselves, and they aren't required to support the command, why not do what UCI does? And it shouldn't just be a matter of philosophy, but also one of practicality. The de facto standard is that engines have a separate setting for hash table size; we should take that into account.

I don't see how making engines aware of the resources available to them violates the principle of having them use those resources in the way that is best for them. Can you elaborate on that? I also don't see the practicality issue. The de facto standard is actually that every engine uses its own method for setting hash-table or EGTB cache size (hard-coded, through command-line options, in ini files, whatever). Setting the size through the protocol will require a change in the engine. When people make that change, they might as well do it in such a way that it is actually useful.

Why not do it like UCI does? Because what UCI engines do sucks: What if UCI engine A plays UCI engine B on a machine with 512MB RAM, and A uses 256MB worth of bitbases, and B does not use bitbases, and the OS needs 100MB? How would you create fair conditions to play them against each other? Do you think it is fair to let A use 320MB, and B only 64MB, by setting the hash size to 64MB? The current interpretation of the UCI specs used by engine authors, where the Hash option gives the size of the main hash, just does not work in practice. IMO the interpretation is flawed anyway, making use of a loophole in the formulation of the protocol specs, by saying "well, strictly speaking this huge table in my engine is not a hash table, so I don't have to include its size in the memory that can be used for hash tables according to the Hash-option setting". This is tantamount to a clever way of cheating.

Suppose I developed an engine that does not use hashing for its main transposition table, but, say, a binary tree (which could be very competitive, if the ordered way of storing the entries would guarantee that almost all accesses to it would be cache hits, rather than cache misses, as they are with Zobrist hashing). Would you want to allow my engine to use 2GB for it, when the hash setting was 64MB, and use that 64MB in addition to the 2GB for Pawn hash, just under the pretext that a binary tree is not a hash table? Perhaps more realistically: my future engines will have on-the-fly EGTBs, and they will need a 1GB buffer to build a 6-men tablebase. And a tablebase is also not a hash table. Should they be allowed to use the 1GB next to their hash tables to build the EGT in ponder time?

The only way to enforce fair conditions when engines are allowed all kinds of extras that use memory, besides the number given to them by the GUI, is to hand-tune the value given to them for each engine separately. Using a standardized command to convey an engine-specific parameter can only be detrimental, as it will encourage the sending of wrong values, meant for another engine than the one receiving it. So I think standardized options for specifying the size of just the main hash table do more harm than good.

Sure. I think pawn hash, move-generator tables, piece-square tables, etc. should be left out of these memory limits entirely; they're too insignificant to matter. The main hash table and endgame tables are something else.

This is why I added 1MB for 'crumbs'. But technology progresses, and other memory-consuming techniques might be adopted. For bitbases this has already happened. On-the-fly EGTBs will be another one that is sure to come. No standardized UCI options exist to limit memory use for those.
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: The "memory" command

Postby Ilari Pihlajisto » 27 Feb 2009, 00:44

H.G.Muller wrote:
Ilari Pihlajisto wrote:It's not always that simple. My chess engine doesn't know (and probably shouldn't know) how much memory the EGBB library uses, unless none of the bitbases are loaded to RAM and only the EGBB cache is used.

I think it is very important to know that. We cannot have engines using unlimited amounts of memory in addition to what is available. There might be several terabytes of 8-men bitbases on the machine they are running on; would you want to allow my engine to load them all into memory (preferably during ponder time, of course)? :D


I don't know of any elegant and easy ways to make my engine aware of, not to mention control the EGBB library's memory usage, unless only the cache is used. I think it's the operator/user's job to choose what EGBBs are loaded to memory, and he should of course have some idea of the memory usage. Considering the design and functionality of the Scorpio EGBB library, the memory command is not that practical for controlling EGBB size. Or are there any engines that are doing it?

I don't see how making engines aware of the resources available to them violates the principle of having them use those resources in the way that is best for them. Can you elaborate on that?


Well, we don't tell human players how many brain cells they may use; they'll use every one they can. So if we really wanted chess engines to be completely independent we'd let them use all the memory they want. But that's just the philosophy; of course it's not practical. So this is kind of a moot point.

I also don't see the practicality issue. The de facto standard is actually that every engine uses its own method for setting hash-table or EGTB cache size (hard-coded, through command-line options, in ini files, whatever). Setting the size through the protocol will require a change in the engine. When people make that change, they might as well do it in such a way that it is actually useful.


For an engine that already has a hash size option, it's very easy to implement a command for setting said option. And because it's so easy, we'd have a better chance of getting engine authors to do it.

Why not do it like UCI does? Because what UCI engines do sucks: What if UCI engine A plays UCI engine B on a machine with 512MB RAM, and A uses 256MB worth of bitbases, and B does not use bitbases, and the OS needs 100MB? How would you create fair conditions to play them against each other? Do you think it is fair to let A use 320MB, and B only 64MB, by setting the hash size to 64MB?


If fairness is defined by the amount of memory allocated for the engines, then no, that's not fair. But the point of using endgame tables is that they should give an advantage to the engines that use them. Punishing the engine for having endgame tables by reducing its hash size is counter-productive if we just want both engines to play as well as possible, which is exactly what I want when I'm setting up a tournament. I know that many others think the same way; take a look at CCRL for example. Their testing conditions are clearly unfair by your definition, and they give an advantage to SMP engines and engines with endgame tables. The people who organize engine tournaments are an important part of my target market (for my upcoming GUI at least), so I want to take their needs into account.


Suppose I developed an engine that does not use hashing for its main transposition table, but, say, a binary tree (which could be very competitive, if the ordered way of storing the entries would guarantee that almost all accesses to it would be cache hits, rather than cache misses, as they are with Zobrist hashing). Would you want to allow my engine to use 2GB for it, when the hash setting was 64MB, and use that 64MB in addition to the 2GB for Pawn hash, just under the pretext that a binary tree is not a hash table?


That would be just semantics. Even if the configuration option is called "Hash size", I would consider it the same as transposition table size, and would give your engine the same amount of memory that I give to the opponent. As for the pawn hash, I'd really wonder why you'd need it to be that big. But I'd probably allow it if there was enough free memory (there's plenty on my system).

Perhaps more realistically: my future engines will have on-the-fly EGTBs, and they will need a 1GB buffer to build a 6-men tablebase. And a tablebase is also not a hash table. Should they be allowed to use the 1GB next to their hash tables to build the EGT in ponder time?


I would want a separate configuration option (in an ini file, by a command line argument, etc.) which would let me choose which tables to load. And if I wanted to run a match with endgame tables and I had enough RAM, then sure, your engine would get its 1GB of egt memory. I don't know how building the tables during pondering is relevant.

The only way to enforce fair conditions when engines are allowed all kinds of extras that use memory, besides the number given to them by the GUI, is to hand-tune the value given to them for each engine separately. Using a standardized command to convey an engine-specific parameter can only be detrimental, as it will encourage the sending of wrong values, meant for another engine than the one receiving it. So I think standardized options for specifying the size of just the main hash table do more harm than good.


IMO you're pushing the pursuit of fairness so far that it actually hinders progress and handicaps the stronger engines. I guess this is one of those "We agree to disagree" things.
Ilari Pihlajisto
 
Posts: 78
Joined: 18 Jul 2005, 06:58

Re: The "memory" command

Postby Miguel A. Ballicora » 27 Feb 2009, 05:44

Ilari Pihlajisto wrote:
H.G.Muller wrote:
Ilari Pihlajisto wrote:It's not always that simple. My chess engine doesn't know (and probably shouldn't know) how much memory the EGBB library uses, unless none of the bitbases are loaded to RAM and only the EGBB cache is used.

I think it is very important to know that. We cannot have engines using unlimited amounts of memory in addition to what is available. There might be several terabytes of 8-men bitbases on the machine they are running on; would you want to allow my engine to load them all into memory (preferably during ponder time, of course)? :D


I don't know of any elegant and easy ways to make my engine aware of, not to mention control the EGBB library's memory usage, unless only the cache is used. I think it's the operator/user's job to choose what EGBBs are loaded to memory, and he should of course have some idea of the memory usage. Considering the design and functionality of the Scorpio EGBB library, the memory command is not that practical for controlling EGBB size. Or are there any engines that are doing it?

I don't see how making engines aware of the resources available to them violates the principle of having them use those resources in the way that is best for them. Can you elaborate on that?


Well, we don't tell human players how many brain cells they may use; they'll use every one they can. So if we really wanted chess engines to be completely independent we'd let them use all the memory they want. But that's just the philosophy; of course it's not practical. So this is kind of a moot point.

I also don't see the practicality issue. The de facto standard is actually that every engine uses its own method for setting hash-table or EGTB cache size (hard-coded, through command-line options, in ini files, whatever). Setting the size through the protocol will require a change in the engine. When people make that change, they might as well do it in such a way that it is actually useful.


For an engine that already has a hash size option, it's very easy to implement a command for setting said option. And because it's so easy, we'd have a better chance of getting engine authors to do it.

Why not do it like UCI does? Because what UCI engines do sucks: What if UCI engine A plays UCI engine B on a machine with 512MB RAM, and A uses 256MB worth of bitbases, and B does not use bitbases, and the OS needs 100MB? How would you create fair conditions to play them against each other? Do you think it is fair to let A use 320MB, and B only 64MB, by setting the hash size to 64MB?


If fairness is defined by the amount of memory allocated for the engines, then no, that's not fair. But the point of using endgame tables is that they should give an advantage to the engines that use them. Punishing the engine for having endgame tables by reducing its hash size is counter-productive if we just want both engines to play as well as possible, which is exactly what I want when I'm setting up a tournament.


That's not punishment!
You are rewarding an engine that created a necessity for more resources. Why is that fair?
What if I want more memory because I have a special table for move generation that nobody else has?
Fairness comes from giving equal resources.

I know that many others think the same way; take a look at CCRL for example. Their testing conditions are clearly unfair by your definition, and they give an advantage to SMP engines and engines with endgame tables. The people who organize engine tournaments are an important part of my target market (for my upcoming GUI at least), so I want to take their needs into account.


Suppose I developed an engine that does not use hashing for its main transposition table, but, say, a binary tree (which could be very competitive, if the ordered way of storing the entries would guarantee that almost all accesses to it would be cache hits, rather than cache misses, as they are with Zobrist hashing). Would you want to allow my engine to use 2GB for it, when the hash setting was 64MB, and use that 64MB in addition to the 2GB for Pawn hash, just under the pretext that a binary tree is not a hash table?


That would be just semantics. Even if the configuration option is called "Hash size", I would consider it the same as transposition table size, and would give your engine the same amount of memory that I give to the opponent. As for the pawn hash, I'd really wonder why you'd need it to be that big. But I'd probably allow it if there was enough free memory (there's plenty on my system).

Perhaps more realistically: my future engines will have on-the-fly EGTBs, and they will need a 1GB buffer to build a 6-men tablebase. And a tablebase is also not a hash table. Should they be allowed to use the 1GB next to their hash tables to build the EGT in ponder time?


I would want a separate configuration option (in an ini file, by a command line argument, etc.) which would let me choose which tables to load. And if I wanted to run a match with endgame tables and I had enough RAM, then sure, your engine would get its 1GB of egt memory. I don't know how building the tables during pondering is relevant.

The only way to enforce fair conditions when engines are allowed all kinds of extras that use memory, besides the number given to them by the GUI, is to hand-tune the value given to them for each engine separately. Using a standardized command to convey an engine-specific parameter can only be detrimental, as it will encourage the sending of wrong values, meant for another engine than the one receiving it. So I think standardized options for specifying the size of just the main hash table do more harm than good.


IMO you're pushing the pursuit of fairness so far that it actually hinders progress and handicaps the stronger engines. I guess this is one of those "We agree to disagree" things.


I do not understand how a command that informs the engine of the amount of resources allocated to it hinders programs and handicaps stronger engines. If an engine cannot figure out how to use its resources, it is not very smart in my opinion.

Miguel
Miguel A. Ballicora
 
Posts: 160
Joined: 03 Aug 2005, 02:24
Location: Chicago, IL, USA

Re: The "memory" command

Postby Ilari Pihlajisto » 27 Feb 2009, 11:38

Miguel A. Ballicora wrote:That's not punishment!
You are rewarding an engine that created a necessity for more resources. Why is that fair?
What if I want more memory because I have a special table for move generation that nobody else has?
Fairness comes from giving equal resources.


It's fair because the engine can actually put the extra resources to good use. Is it unfair that SMP engines get to use more CPU time than single CPU engines? To me, it's the same thing.

I do not understand how a command that informs the engine of the amount of resources allocated to it hinders programs and handicaps stronger engines. If an engine cannot figure out how to use its resources, it is not very smart in my opinion.


Let's say we have engine A that loads the 4-men EGBBs to RAM (about 20MB), and engine B that doesn't use endgame tables. Then we set up a tournament where both engines get 32MB of memory. Engine A gets a 12MB hash table, and engine B gets a 32MB hash table. This is not good for engine A, the hash size is too small, and A would probably be better off without using the EGBBs at all. You could call these conditions fair, but they would still put a handicap on the engine that can use endgame tables.
Ilari Pihlajisto
 
Posts: 78
Joined: 18 Jul 2005, 06:58

Re: The "memory" command

Postby H.G.Muller » 27 Feb 2009, 12:10

Ilari Pihlajisto wrote:I don't know of any elegant and easy ways to make my engine aware of, not to mention control the EGBB library's memory usage, unless only the cache is used. I think it's the operator/user's job to choose what EGBBs are loaded to memory, and he should of course have some idea of the memory usage. Considering the design and functionality of the Scorpio EGBB library, the memory command is not that practical for controlling EGBB size. Or are there any engines that are doing it?

I have no idea how EGBBs work. Using code written by others, giving the engine the status of a semi-clone, is dubious practice in the first place. It certainly cannot be used to derive rights from. If the Scorpio EGBB library is defective in the sense that it does not contain calls to limit its resource usage, or at least inform the engine of its needs, that does not exempt the author from obeying the constraints. He can either refrain from using EGBB altogether, or write his own routines for this.

Well, we don't tell human players how many brain cells they may use; they'll use every one they can. So if we really wanted chess engines to be completely independent we'd let them use all the memory they want. But that's just the philosophy; of course it's not practical. So this is kind of a moot point.

You are overlooking an important point here that makes your human analogy fail completely: a computer contains two (or four, or eight) Chess-playing entities that have to share the available memory. Brains in a human Chess match are not a shared commodity; each player brings along his own brain. You should compare it to the food that is allowed to be consumed in the tournament hall by the players. I have yet to see the candidates match where two meals are brought in, and one of the GMs says: "I'll take both, please! All this thinking has famished me, so if you don't mind, I'll eat my opponent's meal too, and let him starve. If I eat more, I can play better Chess." And that the organizers would then allow this.

When a single engine is running on a computer (e.g. in an OTB or on-line tourney), the memory command is not needed at all, and the operator can set it to whatever value he wants (depending on what else he wants to do on that computer!).

For an engine that already has a hash size option, it's very easy to implement a command for setting said option. And because it's so easy, we'd have a better chance of getting engine authors to do it.

I do not consider a better chance of someone doing something detrimental a good thing. It is better to have a small chance that they do something good than to create a large chance that they will do something bad.

If fairness is defined by the amount of memory allocated for the engines, then no, that's not fair. But the point of using endgame tables is that they should give an advantage to the engines that use them. Punishing the engine for having endgame tables by reducing its hash size is counter-productive if we just want both engines to play as well as possible, which is exactly what I want when I'm setting up a tournament.

Well, this is not possible then, when using a GUI to play them against each other on the same PC, because each engine will always play "as well as possible" when it grabs all resources of the machine, leaving nothing (no memory, no CPU time) for its opponent. Whether you like it or not, memory is a physical resource that is in finite supply on any machine. End-game tables require memory, and that memory cannot be used as main hash. You don't think the engine using the EGT should be punished for it by reducing its hash. So apparently you think it is OK to punish its opponent for it, by reducing this opponent's hash?

I don't share that view, because it is in fact this view that would be very counter-productive to your stated goal of producing good Chess. Adopting this philosophy would mean that it pays for an engine to allocate a huge array next to its hash table, which it clears at maximum speed during ponder time, because that would be a good way to reduce its opponent's hash size and take away CPU time from it. And the opponent would quickly learn to do the same. So in the end neither of the engines would get to spend much effort on producing Chess, being hindered too much by the opponent's wasting of resources...

I know that many others think the same way; take a look at CCRL for example. Their testing conditions are clearly unfair by your definition, and they give an advantage to SMP engines and engines with endgame tables. The people who organize engine tournaments are an important part of my target market (for my upcoming GUI at least), so I want to take their needs into account.

I see that mostly as their problem. I have not made an in-depth study of what exactly they do, but I am inclined to give them the benefit of the doubt: if they create unfair conditions, it is only because technical difficulties preclude them from creating fair conditions, rather than because they intentionally aim to mess up their rating lists. This makes it all the more important to create the possibility for them to conduct their tests in a fair way, allowing them to make a conscious decision whether they want to do fair or unfair testing. I am very confident they will do the right thing, then. And if not, I see it as their problem, not mine. I don't see it as my calling to aid and abet cheaters, no matter how large a market they might offer...

That would be just semantics. Even if the configuration option is called "Hash size", I would consider it the same as transposition table size, and would give your engine the same amount of memory that I give to the opponent. As for the pawn hash, I'd really wonder why you'd need it to be that big. But I'd probably allow it if there was enough free memory (there's plenty on my system).

If you would read the UCI protocol specs carefully, you would see that the Hash option definition speaks of "hash tables" (plural). This makes it clear that it cannot just be read as "transposition table", as there is supposed to be only one of those. I agree that "hash" in this context should not be taken so literally that it provides an excuse to dodge the limit by adopting an alternative storage scheme that strictly speaking is not hashing. But the consequence of this is that the EGTB cache should be considered part of the memory limit specified in the Hash option. The EGTB cache is just another "hash table". (It is in fact very likely to use hashing for allocating the buffers to EGTB compression blocks, so it even counts as a hash table in the strict sense.)

So I don't think there really is any difference between the UCI Hash option and the way I defined memory in WB protocol. Both pertain to maximum memory usage. It is just that cheating is so common in UCI engines that it has become the norm! CPW, for instance, allocates 80MB when you set its Hash option to 64MB. I inquired with the authors, and they told me this is because it allocates a Pawn hash table 1/4 the size of the TT, next to the latter.

If you have RAM to spare, it just means you don't make the hash tables large enough.

I would want a separate configuration option (in an ini file, by a command line argument, etc.) which would let me choose which tables to load. And if I wanted to run a match with endgame tables and I had enough RAM, then sure, your engine would get its 1GB of egt memory. I don't know how building the tables during pondering is relevant.

If you have enough memory, the whole memory command is not relevant. It is only provided to handle cases where memory use has to be limited, and it should be optimized for that situation.

IMO you're pushing the pursuit of fairness so far that it actually hinders progress and handicaps the stronger engines. I guess this is one of those "We agree to disagree" things.

We indeed seem to have a different definition of 'stronger engines'. In your definition, A would be stronger than B, despite the fact that A played weaker Chess, just because A takes away so many resources from its opponent that that opponent would be weakened even more than the A-B difference. In my definition, B would be the stronger engine, and A would be a cheater. And I can assure you that my pursuit of fairness handicapping cheaters is fully intentional! :D
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: The "memory" command

Postby H.G.Muller » 27 Feb 2009, 12:32

Ilari Pihlajisto wrote:Let's say we have engine A that loads the 4-men EGBBs to RAM (about 20MB), and engine B that doesn't use endgame tables. Then we set up a tournament where both engines get 32MB of memory. Engine A gets a 12MB hash table, and engine B gets a 32MB hash table. This is not good for engine A, the hash size is too small, and A would probably be better off without using the EGBBs at all. You could call these conditions fair, but they would still put a handicap on the engine that can use endgame tables.


We had this discussion before the memory command was introduced. My conclusion was that it would make no sense to try keeping non-competitive techniques alive by making them competitive artificially, by awarding extra resources to those engines that use them. If A would be better off not using EGBB, it should not use them. There are zillions of search or evaluation techniques that do not work, in the sense that they reduce engine strength. They all could be made competitive by arbitrarily declaring that engines that do use them get extra resources. "A recapture extension costs you 10 Elo? Too bad no one uses it. Let us award 10% extra CPU time to engines that do use it! Of course their opponents get 10% less time to make up for that; we don't want longer games."

You are directly contradicting yourself. If you want the best possible Chess, you should be happy that engines do not waste memory on EGBB anymore. If you wanted the best possible Chess, you would not set a 32MB memory limit and leave the rest of your memory unused. You would do it because you only had 128MB on that machine (64MB being used by the OS). Or perhaps you had a quad with 256MB, and are playing 4 games simultaneously. So the 64MB for A and B together is non-negotiable. Apparently you want to divide it as 22MB hash each, and 20MB EGBB for A. This might be good for A (although not as good as giving it 42MB hash and no EGBB), but it is bad for B. The quality of the Chess produced by A already suffers, and the average quality of the game suffers even more.

But it is worse: you cannot expect B to let this happen. If those are the rules by which tourneys are conducted, B would of course be quickly altered to claim that it was using EGBB as well, even if that were not true. Because that would ensure they now both get 20MB of "EGBB memory" and 12MB of hash. (If you were playing an automated tourney with more than a single EGBB-using participant, where you wanted each game to use the same settings, you would even have to use a 12MB hash size for both when B had not been claiming anything, to allow for two EGBB users playing against each other.) So now both programs incur the maximum performance hit, and the quality of the Chess you produce would suffer in the worst possible way!

(Fortunately, if B was my engine, it would of course cheat, and use the memory it said was for EGBB as hash anyway. Who is to know...)
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: The "memory" command

Postby Miguel A. Ballicora » 28 Feb 2009, 00:22

H.G.Muller wrote:
Ilari Pihlajisto wrote:Let's say we have engine A that loads the 4-men EGBBs to RAM (about 20MB), and engine B that doesn't use endgame tables. Then we set up a tournament where both engines get 32MB of memory. Engine A gets a 12MB hash table, and engine B gets a 32MB hash table. This is not good for engine A, the hash size is too small, and A would probably be better off without using the EGBBs at all. You could call these conditions fair, but they would still put a handicap on the engine that can use endgame tables.


We had this discussion before the memory command was introduced. My conclusion was that it would make no sense to try keeping non-competitive techniques alive by making them competitive artificially, by awarding extra resources to those engines that use them. If A would be better off not using EGBB, it should not use them. There are zillions of search or evaluation techniques that do not work, in the sense that they reduce engine strength. They all could be made competitive by arbitrarily declaring that engines that do use them get extra resources. "A recapture extension costs you 10 Elo? Too bad no one uses it. Let us award 10% extra CPU time to engines that do use it! Of course their opponents get 10% less time to make up for that; we don't want longer games."

You are directly contradicting yourself. If you want the best possible Chess, you should be happy that engines do not waste memory on EGBB anymore. If you wanted the best possible Chess, you would not set a 32MB memory limit and leave the rest of your memory unused. You would do it because you only had 128MB on that machine (64MB being used by the OS). Or perhaps you had a quad with 256MB, and are playing 4 games simultaneously. So the 64MB for A and B together is non-negotiable. Apparently you want to divide it as 22MB hash each, and 20MB EGBB for A. This might be good for A (although not as good as giving it 42MB hash and no EGBB), but it is bad for B. The quality of the Chess produced by A already suffers, and the average quality of the game suffers even more.

But it is worse: you cannot expect B to let this happen. If those are the rules by which tourneys are conducted, B would of course be quickly altered to claim that it was using EGBB as well, even if that were not true. Because that would ensure they now both get 20MB of "EGBB memory" and 12MB of hash. (If you were playing an automated tourney with more than a single EGBB-using participant, where you wanted each game to use the same settings, you would even have to use a 12MB hash size for both when B had not been claiming anything, to allow for two EGBB users playing against each other.) So now both programs incur the maximum performance hit, and the quality of the Chess you produce would suffer in the worst possible way!

(Fortunately, if B was my engine, it would of course cheat, and use the memory it said was for EGBB as hash anyway. Who is to know...)


Moreover, an engine that declares that it uses EGBB, and actually does use it, might be better off using the 32 MB as a hash table during the opening and middlegame.

Miguel
Miguel A. Ballicora
 
Posts: 160
Joined: 03 Aug 2005, 02:24
Location: Chicago, IL, USA

Re: The "memory" command

Postby Miguel A. Ballicora » 28 Feb 2009, 00:42

Ilari Pihlajisto wrote:
Miguel A. Ballicora wrote:That's not punishment!
You are rewarding an engine that created a necessity for more resources. Why is that fair?
What if I want more memory because I have a special table for move generation that nobody else has?
Fairness comes from giving equal resources.


It's fair because the engine can actually put the extra resources to good use. Is it unfair that SMP engines get to use more CPU time than single CPU engines? To me, it's the same thing.



The "cores" parameter lets the engine know what resources are available. It is up to the engine to use them or not.
The "memory" parameter lets the engine know what resources are available. It is up to the engine to use them or not.

It will be unfair, in a match, to send an SMP engine the parameter cores=2 (just because you know it can use it) and send the single-threaded opponent cores=1 because you assume it cannot use two cores. If you are in a match and you send cores=2 to one engine, you should send cores=2 to the other, regardless of your assumption of what that engine will do with the extra core. If the engine cannot use the second core, it is its problem.

That is a better analogy for what you are proposing with RAM, and it is unfair.
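A tiny sketch of that rule from the GUI's side; send_to_engine() is a hypothetical stand-in for writing a command to an engine's stdin:

    #include <stdio.h>

    static void send_to_engine(int engine, const char *cmd)
    {
        printf("engine %d <- %s\n", engine, cmd);   /* placeholder for a real pipe write */
    }

    void configure_match(int cores, int memory_mb)
    {
        char buf[64];
        for (int engine = 0; engine < 2; engine++) {
            snprintf(buf, sizeof buf, "cores %d", cores);
            send_to_engine(engine, buf);            /* same core count for both sides */
            snprintf(buf, sizeof buf, "memory %d", memory_mb);
            send_to_engine(engine, buf);            /* same memory quota for both sides */
        }
    }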


I do not understand how a command that informs the engine of the amount of resources allocated to it hinders programs and handicaps stronger engines. If an engine cannot figure out how to use its resources, it is not very smart in my opinion.


Let's say we have engine A that loads the 4-men EGBBs to RAM (about 20MB), and engine B that doesn't use endgame tables. Then we set up a tournament where both engines get 32MB of memory. Engine A gets a 12MB hash table, and engine B gets a 32MB hash table. This is not good for engine A, the hash size is too small, and A would probably be better off without using the EGBBs at all. You could call these conditions fair, but they would still put a handicap on the engine that can use endgame tables.


The handicap is not imposed by the memory command; it is set by the lack of resources. If the engine wants to use EGBBs on a mobile phone, do not blame the memory command.

It is not about chopping memory from the engine that supports EGBB; it is about giving the same amount of extra memory to the opponent, which will use it for whatever it pleases. You are not handicapping any engine. You are informing it of what is available.

Miguel
Miguel A. Ballicora
 
Posts: 160
Joined: 03 Aug 2005, 02:24
Location: Chicago, IL, USA

Re: The "memory" command

Postby Roger Brown » 28 Feb 2009, 15:00

Miguel A. Ballicora wrote:

SNIP

It is not about chopping memory from the engine that supports EGBB; it is about giving the same amount of extra memory to the opponent, which will use it for whatever it pleases. You are not handicapping any engine. You are informing it of what is available.

Miguel



Hello Miguel Ballicora,

I had an interesting exchange with H.G. on some of these points so permit me to obtain clarification on something.

(1) There are some engines which do not (or cannot) adapt their hash to exact integers (or even fractions of integers, as I think Yace could do). Examples abound - Crafty and the Baron come to mind. So if I set the memory per engine at, say, 90 MB, Crafty will use the next lower setting, which is 48 MB. The other engine is using 96 MB. Are you saying that the memory command is simply a menu and that the engines are responsible for allocating it as they see fit?

So in that example it would simply be Crafty's tough luck, correct? So how would a TD ensure that engines are playing level in terms of available resources?


(2) There are engines with different hashes (main, pawn, bitbases, etc.). So if I assign a fixed number to each process (engine), how does the engine allocate this to stay under the limit? Personally I believe it is wrong to assign the entire memory allotment to the main hash and then grab some more for the pawn hash and other tables.

Your thoughts?

Later.
Roger Brown
 
Posts: 346
Joined: 24 Sep 2004, 12:31

Re: The "memory" command

Postby Miguel A. Ballicora » 28 Feb 2009, 17:17

Roger Brown wrote:
Miguel A. Ballicora wrote:

SNIP

It is not about chopping memory from the engine that supports EGBB; it is about giving the same amount of extra memory to the opponent, which will use it for whatever it pleases. You are not handicapping any engine. You are informing it of what is available.

Miguel



Hello Miguel Ballicora,

I had an interesting exchange with H.G. on some of these points so permit me to obtain clarification on something.

(1) There are some engines which do not (or cannot) adapt their hash to exact integers (or even fractions of integers, as I think Yace could do). Examples abound - Crafty and the Baron come to mind. So if I set the memory per engine at, say, 90 MB, Crafty will use the next lower setting, which is 48 MB. The other engine is using 96 MB. Are you saying that the memory command is simply a menu and that the engines are responsible for allocating it as they see fit?



Thanks, Roger, for bringing this up. This is an excellent case to analyze.

First, let me be very specific. There is no engine that "cannot" adapt hash to any integer. Those engines "choose" not to do it by design.
In other words, Crafty chooses not to use the memory you are providing. So, short answer = "Tough luck for Crafty" if it wastes resources.

Why do engines choose to waste memory? This is a "speed vs flexibility" design decision. Flexibility has its price. Using fixed numbers such as 2^n (or 2^n + 2^(n+1), as in Crafty), the engine is basically saying, "I want to calculate hash decisions fast". It is much faster to divide by 2, 4, 8, 16, 32, etc. than by any other number. So primitive hash designs, such as Crafty's, sacrifice flexibility for speed. Dieter (YACE's author) did some research on this and found that he could be flexible, accepting any number, without sacrificing much speed. So, if you choose a number that is suitable for Crafty, thinking that this is fair, what you are doing is penalizing more sophisticated engines such as YACE. Dieter spent some time making sure that his engine is flexible enough to satisfy its users. They can now use ALL THE RAM that is available, which was paid for by THEM, the USERS. Crafty, like many other engines (including mine), is saying "I do not care if you bought more memory, I will use what I want, because I say so, based on a personal design decision, or maybe out of laziness".

In other words, YACE may perform relatively better than Crafty at any amount of memory, EXCEPT if you choose 48, 96, etc. At those particular amounts of memory, Dieter's work is wasted. No matter what amount of memory you choose, you will be favoring one engine over the other.
What memory should you choose, then? Simple: whatever amount of memory you have available for that match. Anything else is the author's responsibility. Crafty gambles on speed, YACE on flexibility. Those have been Bob's and Dieter's decisions.
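For readers wondering what the speed/flexibility trade-off looks like in code, a small C sketch of the hash-index calculation (not taken from Crafty or YACE; the multiply-high variant assumes GCC/Clang's unsigned __int128):

    #include <stdint.h>

    /* fast, but num_entries must be a power of two (odd amounts of RAM go to waste) */
    static uint64_t slot_pow2(uint64_t key, uint64_t num_entries)
    {
        return key & (num_entries - 1);
    }

    /* flexible: any entry count works, e.g. whatever fits in 90 MB exactly */
    static uint64_t slot_any(uint64_t key, uint64_t num_entries)
    {
        return key % num_entries;
    }

    /* division-free alternative: maps the key uniformly onto [0, num_entries) */
    static uint64_t slot_mulhi(uint64_t key, uint64_t num_entries)
    {
        return (uint64_t)(((unsigned __int128)key * num_entries) >> 64);
    }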

Imagine that there is an engine that can allocate only 96 MB and you bought a system with 16 GB of RAM. Why would you want to give both engines only 96 MB? If the engine refuses to use the resources, TOUGH!!!


So in that example it would simply be Crafty's tough luck, correct? So how would a TD ensure that engines are playing level in terms of available resources?


(2) There are engines with different hashes (main, pawn, bitbases, etc.). So if I assign a fixed number to each process (engine), how does the engine allocate this to stay under the limit? Personally I believe it is wrong to assign the entire memory allotment to the main hash and then grab some more for the pawn hash and other tables.



It is the author's responsibility to find whatever is best, given the resources. Until now, authors trusted users to do all this dirty low-level job. I believe that is wrong. Chess players should not be dealing with hashes etc. They do not care! They care about spending money on RAM and telling the engine what is available to be used.

Miguel
PS: Having said all this, I have not implemented memory yet :-)
PS2: But I will in my next release.


Miguel A. Ballicora
 
Posts: 160
Joined: 03 Aug 2005, 02:24
Location: Chicago, IL, USA

Re: The "memory" command

Postby Roger Brown » 28 Feb 2009, 19:34

Hello Miguel,

Thank you for your answer!

Impatiently awaiting the new release.

Later.
Roger Brown
 
Posts: 346
Joined: 24 Sep 2004, 12:31

