Richard Pijl wrote:
Volker Pittlik wrote:
I played some games with a few engines. The opponent was Fruit in all cases, using Marc's "performance" book.
Each engine played 2*100 games at bullet time control (1+1): one series with all learning disabled, then the same with all learning enabled. In the "learn off" series, Romi's learn file was deleted after every game.
It seems there wasn't any effect of learning within 100 games.
I do not think this is very surprising. In most of the test matches I received from Rodolfo, the learning effect started to kick in only after a few hundred games. In some games it may even have a negative effect, as the engine might try the next-best move instead of the best move when the score is dropping. In the next game it will find out that the next-best move is really worse than the best move, and select the best move again.
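The back-and-forth Richard describes could be sketched roughly like this. This is a toy model, not RomiChess's actual implementation; the class names, the penalty scheme, and the score values are all assumptions for illustration only:

```python
# Toy sketch of score-driven position learning (an assumption about the
# general idea, not RomiChess's real code or file format).

class LearnFile:
    """Maps (position, move) -> a learned score adjustment."""
    def __init__(self):
        self.adjust = {}

    def penalty(self, pos, move):
        return self.adjust.get((pos, move), 0)

    def record(self, pos, move, score_before, score_after):
        # If the score dropped after this move was played, penalize it
        # so the next game prefers an alternative line.
        if score_after < score_before:
            key = (pos, move)
            self.adjust[key] = self.adjust.get(key, 0) - (score_before - score_after)

def pick_move(pos, scored_moves, learn):
    # scored_moves: list of (move, engine_score) pairs. Choose the move
    # whose engine score plus learned adjustment is highest.
    return max(scored_moves, key=lambda m: m[1] + learn.penalty(pos, m[0]))[0]
```

With this toy model, a big penalty on the best move makes the engine switch to the next-best move in the following game; if that move then scores even worse, its larger penalty sends the engine back to the original best move, which is exactly the temporary negative effect described above.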
If you play with a book for both engines as well, the number of out-of-book positions that the engine has to deal with is also quite large. This too reduces the chance that it will learn something useful. Rodolfo's Manhattan book solves half of this problem by giving the learning engine just one choice while in book.
Therefore I think that the main strength of (position) learning is its use during analysis, moving back and forth in a game while using the results that were learned earlier.
This, and dealing with repeaters on chess servers, of course.
Richard.
Hi Richard,
you always have good and valuable opinions. I'm the first to admit that learning has its limits, but it also has some good applications. During analysis, of course. But now, thanks to this first, very beta Manhattan book, it could have an effect in tournaments too. That's something worth trying, anyway.
Let me tell you about the project I'm working on with RomiChess. I'm building a very large learn data file with RomiChess, to be used as a Manhattan book extension in the next WBEC tournament. So far, Romi has stored about 5,000 games, all played with the Manhattan book against several engines. Thanks to Romi's merge command, I can also import into the data file games played by other strong engines, as well as by human GMs. The target is to have at least 50,000 learned games as an extension of the Manhattan book. Then we'll check the effect of all this work on Romi's WBEC tournament games (4th division). Just an experiment. If it works, it would mean my work isn't useless.
Maybe I'm a bit emotional about all of this. But just consider that the work took several months of attempts. All positions were Rybka-checked, not only for blunders but also to find the best move. Each of the 7,125 book positions took me time. When Rybka didn't suffice, I used combined analysis with Glaurung and Spike, making moves and taking them back, until I felt good about every single choice. I don't assume I always found the best moves, as my own playing strength is only around CM level (I'm a dozen Italian Elo points short of CM). But my effort was great, and my tests tell me the work was worth it.
Bye,