Don't take anything for granted! Uri explained why.
tnesham wrote:For example, a pawn move to e4 - I do not need to check whether the path to e4 (from e2) is obstructed, because I take the PGN's text as fact that it is legal.
tnesham wrote:However, given the PGN text "5. c4", does that mean the pawn on c2 to c4, or the pawn on c3 to c4? What if there is a pawn on *both* c2 *and* c3? In the latter case, c4 means the pawn on c3, because the pawn on c2 cannot legally move.
That was an instance of my general question: what rules must a PGN parser observe? Have those rules been defined? Is there documentation, something like a flow chart?
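To make the disambiguation rule above concrete, here is a minimal sketch, assuming a plain 64-square board array and white pawn pushes only. The function and piece names are hypothetical, and it ignores pins, which a full legality check would also have to consider:
[code]
// Minimal sketch (hypothetical names): resolving a SAN pawn push such as "c4"
// when pawns may sit on both c2 and c3. White to move; squares are indexed
// 0..63 as file + 8*rank. Pins are ignored; a full legality check needs them too.
#include <optional>

enum Piece { EMPTY = 0, W_PAWN = 1 /* other piece codes omitted */ };

std::optional<int> resolveWhitePawnPush(const int board[64], int file, int toRank) {
    int to = file + 8 * toRank;
    if (board[to] != EMPTY) return std::nullopt;       // the push target must be empty
    int oneBack = file + 8 * (toRank - 1);
    if (board[oneBack] == W_PAWN) return oneBack;      // single push, e.g. c3-c4
    // Double push: only to rank 4, and only through an empty intermediate square.
    if (toRank == 3 && board[oneBack] == EMPTY) {
        int twoBack = file + 8 * 1;                    // the pawn's home rank (rank 2)
        if (board[twoBack] == W_PAWN) return twoBack;  // e.g. c2-c4
    }
    return std::nullopt;                               // no legal origin: the PGN is broken
}
[/code]
With pawns on both c2 and c3, the single-push branch wins, which is exactly the rule tnesham states.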
Peter Fendrich wrote:Take a look at the PGN spec: http://www.very-best.de/pgn-spec.htm
It's all there, almost.
/Peter
tnesham wrote:I was wondering whether performance is a consideration, particularly when determining if the King is in check, or whether a square between the King and Rook is under attack during a castling move.
Dan Honeycutt wrote:No. The slowest routine I can conceive of to make this kind of test would consume negligible time compared to what you spend on file I/O.
Dieter Bürßner wrote:Dan, did you check it?
Dieter Bürßner wrote:
Parsing PGNs is more or less CPU bound here. The program will parse about 3.5 MB/s. The disk is much faster. While I type this message, I see that the parsing uses 94% of the CPU time. To make it disk bound, it would need to be almost 20 times faster - you'd need pretty efficient algorithms for that. Or, in other words: a modern disk reads 60 MB/s. Say one half move needs 12 bytes (move number and the overhead of headers, which I assume need no CPU time); that is 5 million moves/s. How do you check legality? By generating legal moves and comparing: with, say, 40 legal moves on average, you must generate 200 million moves/s (and must not have any other overhead ...).
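Dieter's arithmetic, spelled out as a tiny sketch using his own figures (60 MB/s disk, 12 bytes per half move, about 40 legal moves per position):
[code]
// Back-of-the-envelope numbers from the post above.
#include <cstdio>

int main() {
    const double diskBytesPerSec  = 60e6;  // a modern disk: ~60 MB/s
    const double bytesPerHalfMove = 12.0;  // SAN move plus move-number/header overhead
    const double avgLegalMoves    = 40.0;  // rough average branching factor

    double halfMovesPerSec = diskBytesPerSec / bytesPerHalfMove;  // ~5 million half moves/s
    double generatedPerSec = halfMovesPerSec * avgLegalMoves;     // ~200 million generated moves/s

    std::printf("half moves/s needed to keep up with the disk: %.0f\n", halfMovesPerSec);
    std::printf("moves/s to generate for legality-by-generation: %.0f\n", generatedPerSec);
    return 0;
}
[/code]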
Uri Blass wrote:I do not think that generating a book takes a long time, but the question is how you generate the book.
Uri Blass wrote:I do not see why it would be O(N).
I think that adding a move, or to be more exact a book entry, is O(log N) because of binary search.
Uri
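A minimal sketch of the O(log N) binary search Uri has in mind, assuming book entries are kept sorted by a 64-bit position hash; the BookEntry layout is an assumption, not anyone's actual format:
[code]
// Book lookup by binary search over entries sorted by position hash.
#include <algorithm>
#include <cstdint>
#include <vector>

struct BookEntry {
    std::uint64_t key;    // Zobrist-style hash of the position
    std::uint16_t move;   // encoded book move
    std::uint32_t count;  // how often the move was played
};

// Returns the first entry for `key`, or nullptr if the position is not in the book.
const BookEntry* findBookEntry(const std::vector<BookEntry>& book, std::uint64_t key) {
    auto it = std::lower_bound(book.begin(), book.end(), key,
                               [](const BookEntry& e, std::uint64_t k) { return e.key < k; });
    return (it != book.end() && it->key == key) ? &*it : nullptr;   // O(log N) comparisons
}
[/code]
The search itself is O(log N); the cost Sune objects to below is keeping the array sorted while entries are being added.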
Sune Fischer wrote:I don't see how.
A linked list can't be indexed efficiently, and an array can't be kept ordered without expensive shifting.
There is actually a way to have both in C++, but according to the documentation the container supporting insertion (forgot its name at the moment) can require a re-allocation at any point when there is no room for the insertion, and this will be very expensive too.
-S.
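The operation Sune is warning about, sketched under the assumption that the unnamed container is std::vector: the search is O(log N), but the insert shifts every later element and can trigger a reallocation of the whole buffer.
[code]
// Keeping a sorted std::vector up to date one entry at a time.
#include <algorithm>
#include <cstdint>
#include <vector>

struct BookEntry { std::uint64_t key; std::uint16_t move; std::uint32_t count; };

void insertSorted(std::vector<BookEntry>& book, const BookEntry& e) {
    auto it = std::lower_bound(book.begin(), book.end(), e,
                               [](const BookEntry& a, const BookEntry& b) { return a.key < b.key; });
    book.insert(it, e);  // O(N) element shift, plus an occasional full reallocation and copy
}
[/code]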
Dieter Bürßner wrote:Sune, you quoted everything I said. Note that I was only replying to Dan, and I doubted that PGN parsing is I/O bound (at least when not using very efficient methods). I did not argue at all that it is important to have a fast PGN parser.
Uri Blass wrote:I do not add a single move at a time, but first get all the book moves into an array and later sort them in O(n log n) time.
The average time per move is O(log n).
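A sketch of the collect-then-sort approach Uri describes: append every entry while parsing, sort once at the end, and answer later lookups with binary search. The entry layout is the same hypothetical one as above.
[code]
// Build the whole book first, then sort once.
#include <algorithm>
#include <cstdint>
#include <vector>

struct BookEntry { std::uint64_t key; std::uint16_t move; std::uint32_t count; };

void finalizeBook(std::vector<BookEntry>& book) {
    // While parsing the PGN collection, entries were appended with:
    //     book.push_back({positionHash, encodedMove, 1});   // amortized O(1) per move
    std::sort(book.begin(), book.end(),
              [](const BookEntry& a, const BookEntry& b) { return a.key < b.key; });
    // One O(n log n) sort; averaged over the n moves, that is O(log n) per move.
}
[/code]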
Sune Fischer wrote:Uri Blass wrote:I do not add a single move at a time, but first get all the book moves into an array and later sort them in O(n log n) time.
The average time per move is O(log n).
The problem is that this is memory inefficient: it needs ~5-10 times as much memory, and the post-processing also requires a large structure.
I don't understand what you mean by this:
"Even with adding a single move every time it is possible to have an array of pointers when pointer can in theory point to a small list of positions but the length of the list will usually be small(only 1 or 2 positions)."
-S.
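The quoted idea reads like a hash table with separate chaining - an array of bucket pointers, each heading a short list of positions that collide in the same slot. That is only a guess at what Uri means; a minimal sketch under that assumption:
[code]
// Hypothetical reading of the quoted idea: an array of pointers where each pointer
// heads a short chain of positions that hash to the same slot (separate chaining).
#include <cstdint>
#include <vector>

struct Node {
    std::uint64_t key;    // position hash
    std::uint16_t move;   // encoded book move
    std::uint32_t count;  // times the move was played
    Node*         next;   // with a big enough table the chain is usually 1-2 nodes long
};

struct BookTable {
    std::vector<Node*> buckets;                                // the array of pointers
    explicit BookTable(std::size_t n) : buckets(n, nullptr) {}

    Node* find(std::uint64_t key) const {
        for (Node* p = buckets[key % buckets.size()]; p != nullptr; p = p->next)
            if (p->key == key) return p;                       // expected O(1) with short chains
        return nullptr;
    }
};
[/code]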