Taking advantage of the extra bit

Programming Topics (Computer Chess) and technical aspects such as test techniques, book building, program tuning, etc.

Moderator: Andres Valverde

Taking advantage of the extra bit

Postby Daniel Shawul » 13 Oct 2008, 05:12

I have three questions, so please bear with me.

Question 1) Taking advantage of the extra bit

I currently store 4 positions per byte in my bitbases. Theoretically it is possible to
store 5 positions in one byte, since each position takes one of three values: 3^5 = 243,
which leaves 13 unused byte values (256 - 3^5 = 13) with the latter scheme.
So I wanted to check it out, and to my dismay the database
produced after compression is larger!! I don't know what the reason is, but
I suspect the following:
* the pattern produced by the first method is better for compression.
Also, the square enumerations usually result in a database size divisible by 4.
So in short, I think more entropy is introduced in the second case.
I use LZ + Huffman, btw.
The broken positions are replaced in such a way that they continue the previous run in both cases.

Code: Select all
case ILLEGAL:
#      ifdef FIVE_IN_ONE
         if(q >= 1) myval |= (COMPT[*(v - 1)] & (3 << (r << 1)));
#      else
         if(q >= 1) myval |= (*(v - 1) & (3 << (r << 1)));
#      endif
         break;

    q, r -> quotient and remainder; v points at the q-th byte, which holds the packed values for the positions.
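For concreteness, here is a minimal sketch of the two packing schemes (not the actual generator code; the function names and digit order are my own assumptions), with each WDL result taking a value 0..2:

```c
/* 4-per-byte: 2 bits per result, all 256 byte values reachable. */
unsigned char pack4(const int v[4]) {
    return (unsigned char)(v[0] | (v[1] << 2) | (v[2] << 4) | (v[3] << 6));
}
int unpack4(unsigned char b, int r) {      /* r = remainder, 0..3 */
    return (b >> (r << 1)) & 3;
}

/* 5-per-byte: base-3 digits, only 3^5 = 243 of the 256 byte values
   are used, leaving the 13 spare codes discussed above. */
unsigned char pack5(const int v[5]) {
    int b = 0;
    for (int i = 4; i >= 0; i--)
        b = b * 3 + v[i];                  /* v[0] becomes the low digit */
    return (unsigned char)b;
}
int unpack5(unsigned char b, int r) {      /* r = remainder, 0..4 */
    static const int pow3[5] = {1, 3, 9, 27, 81};
    return (b / pow3[r]) % 3;
}
```

Note the asymmetry: the 4-per-byte unpack is a shift and a mask, while the 5-per-byte one needs a division and a modulo (or a 256-entry lookup such as a COMPT-style table), and the two layouts can also behave quite differently under LZ + Huffman.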
 



Next I tried to use prediction for both. For the first one I used the fourth value
to indicate whether the result is correctly predicted or not.
Code: Select all
            00 - draw
            01 - win
            10 - loss
            11 - predicted

For the second case, the 13 remaining values are used to indicate everything from correct prediction for
all of the positions down to correct prediction for all but one of them. That is the maximum
the 13 remaining values allow. So I guess the predictor should predict correctly
at least 80% of the time (1 - 1/5 = 0.8). After that the results became much closer, but the first one is still better!

So I am deciding to choose the first one, and apparently not only for its simplicity!
Also, I think that for 6 men and above prediction will be much harder; in that
case I want to use the extra bit to indicate whether the WDL result is achieved in more than 50 moves.
Many such positions occur even in some 5-men, so if the table has that info, engines can use
it to choose between moves that lead to longer WDLs within the 50-move rule.


What is your experience with it?

Question 2) is left over from the previous thread.

I am ashamed to say I did not figure out how to extend the formula given for 2 similar pieces
to 3 similar pieces. So I am still using the table.
The rule given for the 2 similar pieces is

Code: Select all
q1 = Q;
q2 = 1;
while (q1 >= q2) {
  q1 -= q2;
  q2++;
}


which works, by the way. The enumeration I got from the paper tries to count from
the bottom of Pascal's triangle, which makes it dependent on the chosen N (64, 63, 48, 47, etc.).
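To spell out what that loop inverts (a sketch under my own naming, not the paper's): with two like pieces on squares s1 < s2, the combined index is the triangular number C(s2,2) plus s1, and the loop walks back down the triangle:

```c
/* Forward map for two like pieces on squares s1 < s2:
   index = s2*(s2-1)/2 + s1  (i.e. C(s2,2) + s1). */
int index2(int s1, int s2) {
    return s2 * (s2 - 1) / 2 + s1;
}

/* Inverse, using the quoted loop: peel off triangle rows of size
   1, 2, 3, ... until the remainder q1 is smaller than the row
   number q2; then (q1, q2) are the two squares. */
void deindex2(int Q, int *s1, int *s2) {
    int q1 = Q, q2 = 1;
    while (q1 >= q2) {
        q1 -= q2;
        q2++;
    }
    *s1 = q1;
    *s2 = q2;
}
```

For example, index 7 decodes to squares (1, 4), and C(4,2) + 1 = 7 maps them back.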

Question 3)
Is it possible to generate 6-men bitbases on 32-bit PCs?
There are 2^31 = 2147483648 possible indices. The maximum table size is 462 * 62 * 61 * 60 * 59 = 6185385360 = S.
So it is not possible to generate DTM tables if we use one byte per position. But it turns out
to be just safe if we store 4 or 5 positions per byte (which I intend to use during generation as well), like in the bitbases:
2^31 - S/4 = 601137308
2^31 - S/5 = 910406576
The only problem I see is that generation methods which keep track of the *number of visits* need one byte, so
I'll have to use the forward retrograde method, I guess.
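As a throwaway sanity check on the arithmetic above (the constants are the ones in the post):

```c
#include <stdint.h>

/* Largest pawnless 6-men: 462 king-king slices times the placements
   of the four remaining pieces on the free squares. */
int64_t max_6men_positions(void) {
    return 462LL * 62 * 61 * 60 * 59;
}
```

At 4 or 5 positions per byte, S/4 and S/5 both fit under the 2^31 limit with the margins quoted above.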

thanks in advance
Daniel Shawul
User avatar
Daniel Shawul
 
Posts: 366
Joined: 28 Sep 2004, 09:33
Location: Ethiopia

Re: Taking advantage of the extra bit

Postby H.G.Muller » 13 Oct 2008, 10:57

To your third question:

I don't really understand your arithmetic, or what it means. But I am currently working on a tablebase generator for 7-men end-games, which is necessarily disk-based. Applying a similar algorithm to a 6-men, I see no problem building a 6-men DTM tablebase in RAM, provided you are prepared to write out the raw results of each cycle to disk. If you were interested only in the bitbase, even that is not necessary, as you could merge the results of each subsequent cycle with the results you had from previous cycles, without the total ever getting very large.

The RAM requirements for the building process are 1 bit/pos for the wtm positions and up to ~0.5 bit/pos for the btm positions. A pawnless 6-men contains 518*2^24 = 8.08G positions. So the wtm bitmaps would require 1.01GB, storing the btm info for the two most recent cycles would require another 1GB, and accumulating the btm info as another bitmap would require another 1.01GB. On a 4GB machine that would leave enough room for some working space and the OS.
User avatar
H.G.Muller
 
Posts: 3453
Joined: 16 Nov 2005, 12:02
Location: Diemen, NL

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 13 Oct 2008, 11:43

I don't really understand your arithmetic, or what it means.


I was trying to calculate the number of bytes required for generating the largest 6-men tb. If I use a one-to-one mapping (one byte per result, as is the case when generating DTM tables), that will exceed the address space of 32-bit PCs. But in the case of bitbases, 4 positions are stored in one byte, which decreases the tb size by a factor of 4 and brings it into the addressable range, hence *no paging with enough RAM*. I am probably wrong, as I am not good at OS stuff...

But applying a similar algorithm to a 6-men I see no problems in building a 6-men DTM tablebase in RAM provided you are prepared to write out the raw results for each cycle to the disk.


That is what I was trying to say. I want to avoid writing to / reading from disk, at least for the tb currently being generated. For positions that are reached after captures and promotions it could read from disk. I think avoiding disk reads for the current tb might make a difference.

So I think you might have a problem generating 6-men DTM, as you have to read results back from disk in the case of transpositions from later positions. I don't know whether the algorithm you use can avoid that.

If you were interested only in the bitbase, even this is not neessary, as you could merge the results from each subsequent cycle with the results you had from previous cycles, without the total ever getting very large.


So it is possible then.

The RAM requirements for the building process are 1 bit/pos for the wtm positions and upto ~0.5 bit/pos for the btm poition.

If I use 2 bits/pos for either side to move, would a 32-bit PC be able to address it, provided I install the required RAM? I have only a 2GB 32-bit machine for now.

Daniel

Re: Taking advantage of the extra bit

Postby xinix » 13 Oct 2008, 12:42

Sure you can. You just have to split the file/array up :)

Tony
xinix
 
Posts: 22
Joined: 28 Aug 2008, 21:42

Re: Taking advantage of the extra bit

Postby H.G.Muller » 13 Oct 2008, 14:14

My algorithm calculates the won and the lost positions separately. I guess for probing this would also be optimal: if you are in a better position (score > 0), you are only interested in the distinction won / not-won, as both draws and losses lead to a fail low. But even if you wanted to merge the won and the lost data to get WDL, you could do that afterwards. Writing the 1GB bitmaps once, and later reading them both back to merge into a single 2GB file (which you might compress on the fly), would not be too time-consuming.

I have never tried it, but I think 32-bit computers should have no problems installing and using 4GB of memory. There will be some holes in the physical address space for your video card and other stuff, but I would be surprised if you couldn't use at least 3.5GB for user data.

Perhaps it is good to note that programs use virtual memory, which is mapped onto physical memory in units of 4KB (pages). A 3-men chunk of the bitmap would occupy 64^3 = 256K bits = 32KB = 8 pages. The bitmap would have holes larger than a page in it for broken states, where the 'third' piece (the one providing the highest-order index bits except for the two pieces used to define symmetry) coincides with one of the symmetry pieces (which provide the highest bits of the index). These holes make up 3% (1/32) of the bitmap. The virtual addresses in these holes will never be accessed, so the corresponding virtual pages never make a demand on physical memory. So the 6-men bitmaps only occupy 0.98GB of physical memory. That still leaves 20MB of physical memory for other stuff.

Perhaps the calculation can even be done in 2GB: you might not have to accumulate the lost (btm) positions, only the new ones of the previous two cycles. (These are very sparse, and thus can be highly compressed through run-length coding.) This is because you already accumulate the won (wtm) positions. And when you finally have all won positions, you can do a single extra pass to determine all lost (btm) positions. At that time you no longer need any lost-info for the last two cycles, so the 1GB you reserved for storing it can be used to build the lost (btm) bitmap. I guess you would not even need that, as you could stream out the lost (btm) bitmap in much smaller chunks as you generate it, requiring a much smaller buffer.

So 2GB might actually be sufficient: 0.98GB of physical memory for the won (wtm) bitmap, and then hoping the run-length-compressed lost (btm) info for the latest two cycles will never exceed ~0.6GB, so that you still have 400MB left for working space and the OS. Whether this works for any pawnless 6-men depends a little on the maximum number of positions that turn into lost during any one cycle. Usually I count on this being no larger than about 5% of all positions. (Note that even for totally won end-games usually 60% or more of the btm positions are not lost, due to white hanging pieces or white being in check. So only 40% will eventually be declared lost (btm), and if this happens distributed over 8 cycles, each cycle adds only 5%.) If such sparse sets can be run-length coded with 6 bits per lost position, this would take 0.05*6 = 0.3 bits per position, so only 0.3GB per cycle, or 0.6GB for two very busy cycles. For most of the cycles the number of lost (btm) positions would be much less, and hardly require any storage. But, unfortunately, it is the worst-case cycle that determines whether we can do it.
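One way to store such a sparse "newly lost" set cheaply is gap coding: keep only the distances between consecutive set positions. Here is a byte-aligned sketch (coarser than the ~6 bits/entry figure above, which would need a bit-level code; the names are mine, not from any generator):

```c
#include <stddef.h>
#include <stdint.h>

/* Encode sorted position indices as varint-coded gaps (7 data bits
   per byte, high bit = continuation). Returns bytes written. */
size_t encode_gaps(const uint64_t *idx, size_t n, uint8_t *out) {
    size_t o = 0;
    uint64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t gap = idx[i] - prev;   /* first gap is measured from 0 */
        prev = idx[i];
        while (gap >= 0x80) { out[o++] = (uint8_t)((gap & 0x7F) | 0x80); gap >>= 7; }
        out[o++] = (uint8_t)gap;
    }
    return o;
}

/* Decode back into absolute indices. Returns how many were decoded. */
size_t decode_gaps(const uint8_t *in, size_t nbytes, uint64_t *idx) {
    size_t i = 0, n = 0;
    uint64_t prev = 0;
    while (i < nbytes) {
        uint64_t gap = 0;
        int shift = 0;
        while (in[i] & 0x80) { gap |= (uint64_t)(in[i++] & 0x7F) << shift; shift += 7; }
        gap |= (uint64_t)in[i++] << shift;
        prev += gap;
        idx[n++] = prev;
    }
    return n;
}
```

Since the gaps for a 5%-dense set are small, most entries fit in one or two bytes, and the stream can be flushed to disk per cycle.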

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 13 Oct 2008, 14:17

Ok, how to split? I think the purpose of it is to get independent sub-classes which can be generated in parallel. For bitbases with pawns, maybe that is possible by starting from advanced positions and going backwards. For bitbases like KbnKbn, how would you split? If I randomly split it into two and try to generate the first half, there would be positions in the second half I have to refer to.
I think the Chinook 10-men checkers database was split into many parts to generate them in parallel. But I am not sure about chess, though.

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 13 Oct 2008, 14:59

The most important 6-men are the KxyKxy types, where you can't tell whether one side is ahead
except for having the right to move. In that case there might not be a dominant value in the database.
I guess trying to generate with 2GB of RAM is too much hassle. Even with naive RLE compression it
can take too much time to decompress, and that is assuming the data compresses well.

I have downloaded your tablebase homepage as a whole, to try to figure out the tricks you
use to generate the 7-men.

thanks
Daniel

Re: Taking advantage of the extra bit

Postby H.G.Muller » 13 Oct 2008, 15:42

Note that a large part of the 7-men effort is directed at minimizing the disk traffic. This is complicated because a 6-men slice of a 7-men EGT in general does not have any symmetry left (the fixed position of the 7th man breaks the symmetry). As a consequence, such a slice has 64G positions, and even at 1 bit per position would overflow the 4GB address space of a 32-bit computer. This is why you need tricks like K-slice streaming.

For a pawnless 6-men none of this plays a role, as the 8-fold symmetry reduces the bitmap size to 1GB, which does fit. So basically you only have to perform the in-RAM part of the algorithm, which could be split into a white and a black pass.

End-games with Pawns have a much smaller memory demand, although they take more effort in total: each P-slice (with the Pawns fixed in a given configuration) is logically a separate tablebase, each pawn move is a conversion, and the only thing you have to worry about is building them in the correct order. But a 6-men with a single Pawn has the memory requirements of a 5-men without symmetry, i.e. only 1G positions, and 128MB for a bitmap. With two pawns this shrinks by another factor of 64. But the number of P-slices you have to crank through of course increases, to balance that.

Re: Taking advantage of the extra bit

Postby xinix » 15 Oct 2008, 12:35

Daniel Shawul wrote:Ok how to split? I think that the purpose of it is to get independent sub-classes which can be generated in parallel. For bitbases with pawns maybe that is possible by starting from advanced positions and going backwards. For bitbases like KbnKbn how would you split? If I randomly split it in to two and try to generate the first half , there would be positions in the second half I have to refer to.
I think that the chinook 10men checkers database was splitted in many parts to generate them in paralell. But I am not sure about chess though.


In checkers, splitting up is a bit easier, since there is a "natural order": you can't move a checker backwards.

For chess, one could split up by the king-king enumeration. So with no pawns, 462 slices.
How many would you have to keep in memory? If we move only backward or only forward: at most 18 (your current king position, plus its maximum of 8 squares of mobility, plus mine), times 2 (btm and wtm).

Now the big trick: keep looping over the maximum number of slices you can keep in memory until no more changes occur. Only then flush to disk and advance a king to the next position (i.e. to the next slice).

This works because with bitbases we are not interested in the shortest mate.

In general, you can split up by the squares of any piece. However, pieces with the lowest mobility are preferred first, since you get a slice-size reduction of (square count div average mobility).
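The 462 figure can be checked by brute force. A sketch under the usual convention (white king confined to the a1-d1-d4 triangle, ties on the long diagonal broken by restricting the black king's half, kings never equal or adjacent):

```c
#include <stdlib.h>

/* Squares are (file f, rank r), both 0..7. */
static int in_triangle(int f, int r) { return f <= 3 && r <= f; }
static int adjacent(int f1, int r1, int f2, int r2) {
    return abs(f1 - f2) <= 1 && abs(r1 - r2) <= 1;  /* includes equality */
}

/* Count the king-king slices of a pawnless table under 8-fold symmetry. */
int count_kk_slices(void) {
    int count = 0;
    for (int wf = 0; wf < 8; wf++)
    for (int wr = 0; wr < 8; wr++) {
        if (!in_triangle(wf, wr)) continue;
        for (int bf = 0; bf < 8; bf++)
        for (int br = 0; br < 8; br++) {
            if (adjacent(wf, wr, bf, br)) continue;   /* illegal king pair */
            if (wf == wr && br > bf) continue;        /* diagonal tie-break */
            count++;
        }
    }
    return count;
}
```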
Tony

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 15 Oct 2008, 15:39

For pawn slices (P-slices), slicing is straightforward, especially with a single pawn: generate positions with the pawn on RANK7 first,
write the result to disk, then go backwards. For other pieces, including the K-slices, the best we can hope for is a streaming order
that increases cache efficiency (locality of reference). The king might have moved several squares, which asks for
a read of data we already wrote. That's why I think your suggestion will not completely avoid random seeks.
Originally I was only aware of P-slices (pawns move forward only), which completely avoid inter-references between slices.
After looking at H.G. Muller's page, I understood that other slicing methods like K-slice streaming might be helpful
during the generation phase. But I may have misunderstood you; is it possible to generate completely independent K-slices?

On a side note, I think the K-slice streaming order can also be used during the write-to-disk phase
to increase the compression ratio. Recently I changed the streaming order so that the loser's king
is enumerated first, and that resulted in a significantly better compression ratio. This idea, together with
the enumeration of like pieces, decreased the 5-men bitbases by 400MB.

Daniel

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 16 Oct 2008, 05:50

Okay, for the first part: it seems the 5-to-1 mapping is slightly better for 5-men. Don't judge me :) How would I know that it works so badly for 4-men and then suddenly becomes good for 5-men? I did the test on 4-men previously.

        4-to-1    5-to-1
4-men   3.8MB     5MB
5-men   826MB     794MB

Daniel

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 24 Oct 2008, 07:29

Ok, I got the solution to my question 2 from the horse's mouth, i.e. syzygy.
For those of you interested, here is the code I wrote for
indexing/de-indexing of k similar pieces (collapsing the k!
permutations of each placement onto one index). It could
be optimized by unrolling the loops or whatever. Let me know if
it has a bug.
Code: Select all
/*
** Indexing and de-indexing functions for k similar pieces: the k!
** permutations of each placement are collapsed onto a single index.
** This method of binomial indexing is courtesy of syzygy of the CCRL forum.
*/

/*
Accepts:
    integers n,k >= 0
Returns:
    combination(n,k) = C(n, k + 1) = n! / ((k + 1)! (n - k - 1)!)
    (note the shift by one: get_index below relies on it)
*/
int combination(const int n,const int k) {
   int product = n;
   for(int j = 1;j <= k;j++)
      product = (product * (n - j)) / (j + 1);
   return product;
}
/*
Accepts:
    square[] = set of squares in ascending order
    N        = number of squares
Returns:
    unique index calculated using binomial coefficients
*/
int get_index(const int* square, const int N) {
   int index = 0;
   for(int i = 0;i < N;i++)
      index += combination(square[i],i);
   return index;
}
/*
Accepts:
    index = unique index
Returns:
    sq[]  = set of N squares corresponding to the index,
            in ascending order
*/
void get_squares(int* sq,const int N,const int index) {
   int comb;
   sq[0] = index;
   for(int i = N - 1;i >= 1;i--) {
      sq[i] = i;
      comb = 1;
      while(sq[0] >= comb) {
         sq[0] -= comb;
         comb = combination(++sq[i],i-1);
      }
   }
}
/*
test
*/
#include <stdio.h>
int main() {
   const int N = 5;
   int i,sq[N],index = 106;

   get_squares(sq,N,index);
   for(i = 0;i < N;i++) printf("%d ",sq[i]);
   i = get_index(sq,N);
   printf("\nindex: in = %d out = %d\n",index,i);
   return 0;
}

Re: Taking advantage of the extra bit

Postby Dann Corbit » 24 Oct 2008, 19:48

Daniel Shawul wrote:Ok I got the solution for my question 2 from the horse's mouse i.e syzygy.
For those of you interested , here is the code I wrote for
indexing/de-indexing of the k! unique placements. It could
be optimized by unrolling the loops or whatever. Let me know if
it has a bug.


Suggestion on combination:
create a Pascal's triangle and do a simple lookup.

For instance, here is a small one used for probability calculations with a 54-card deck of playing cards (52 + 2 jokers):

Code: Select all
static const double d0[] ={1.};
static const double d1[] ={1.};
static const double d2[] ={1., 2.};
static const double d3[] ={1., 3.};
static const double d4[] ={1., 4., 6.};
static const double d5[] ={1., 5., 10.};
static const double d6[] ={1., 6., 15., 20.};
static const double d7[] ={1., 7., 21., 35.};
static const double d8[] ={1., 8., 28., 56., 70.};
static const double d9[] ={1., 9., 36., 84., 126.};
static const double d10[] ={1., 10., 45., 120., 210., 252.};
static const double d11[] ={1., 11., 55., 165., 330., 462.};
static const double d12[] ={1., 12., 66., 220., 495., 792., 924.};
static const double d13[] ={1., 13., 78., 286., 715., 1287., 1716.};
static const double d14[] ={1., 14., 91., 364., 1001., 2002., 3003., 3432.};
static const double d15[] ={1., 15., 105., 455., 1365., 3003., 5005., 6435.};
static const double d16[] ={1., 16., 120., 560., 1820., 4368., 8008., 11440., 12870.};
static const double d17[] ={1., 17., 136., 680., 2380., 6188., 12376., 19448., 24310.};
static const double d18[] ={1., 18., 153., 816., 3060., 8568., 18564., 31824., 43758., 48620.};
static const double d19[] ={1., 19., 171., 969., 3876., 11628., 27132., 50388., 75582., 92378.};
static const double d20[] =
{1., 20., 190., 1140., 4845., 15504., 38760., 77520., 125970., 167960., 184756.};
static const double d21[] =
{1., 21., 210., 1330., 5985., 20349., 54264., 116280., 203490., 293930., 352716.};
static const double d22[] =
{1., 22., 231., 1540., 7315., 26334., 74613., 170544., 319770., 497420., 646646., 705432.};
static const double d23[] =
{1., 23., 253., 1771., 8855., 33649., 100947., 245157., 490314., 817190., 1144066., 1352078.};
static const double d24[] =
{
  1., 24., 276., 2024.,
  10626., 42504., 134596., 346104.,
  735471., 1307504., 1961256., 2496144.,
  2704156.};
static const double d25[] =
{
  1., 25., 300., 2300.,
  12650., 53130., 177100., 480700.,
  1081575., 2042975., 3268760., 4457400.,
  5200300.};
static const double d26[] =
{
  1., 26., 325., 2600.,
  14950., 65780., 230230., 657800.,
  1562275., 3124550., 5311735., 7726160.,
  9657700., 10400600.};
static const double d27[] =
{
  1., 27., 351., 2925.,
  17550., 80730., 296010., 888030.,
  2220075., 4686825., 8436285., 13037895.,
  17383860., 20058300.};
static const double d28[] =
{
  1., 28., 378., 3276.,
  20475., 98280., 376740., 1184040.,
  3108105., 6906900., 13123110., 21474180.,
  30421755., 37442160., 40116600.};
static const double d29[] =
{
  1., 29., 406., 3654.,
  23751., 118755., 475020., 1560780.,
  4292145., 10015005., 20030010., 34597290.,
  51895935., 67863915., 77558760.};
static const double d30[] =
{
  1., 30., 435., 4060.,
  27405., 142506., 593775., 2035800.,
  5852925., 14307150., 30045015., 54627300.,
  86493225., 119759850., 145422675., 155117520.};
static const double d31[] =
{
  1., 31., 465., 4495.,
  31465., 169911., 736281., 2629575.,
  7888725., 20160075., 44352165., 84672315.,
  141120525., 206253075., 265182525., 300540195.};
static const double d32[] =
{
  1., 32., 496., 4960.,
  35960., 201376., 906192., 3365856.,
  10518300., 28048800., 64512240., 129024480.,
  225792840., 347373600., 471435600., 565722720.,
  601080390.};
static const double d33[] =
{
  1., 33., 528., 5456.,
  40920., 237336., 1107568., 4272048.,
  13884156., 38567100., 92561040., 193536720.,
  354817320., 573166440., 818809200., 1037158320.,
  1166803110.};
static const double d34[] =
{
  1., 34., 561., 5984.,
  46376., 278256., 1344904., 5379616.,
  18156204., 52451256., 131128140., 286097760.,
  548354040., 927983760., 1391975640., 1855967520.,
  2203961430., 2333606220.};
static const double d35[] =
{
  1., 35., 595., 6545.,
  52360., 324632., 1623160., 6724520.,
  23535820., 70607460., 183579396., 417225900.,
  834451800., 1476337800., 2319959400., 3247943160.,
  4059928950., 4537567650.};
static const double d36[] =
{
  1., 36., 630., 7140.,
  58905., 376992., 1947792., 8347680.,
  30260340., 94143280., 254186856., 600805296.,
  1251677700., 2310789600., 3796297200., 5567902560.,
  7307872110., 8597496600., 9075135300.};
static const double d37[] =
{
  1., 37., 666., 7770.,
  66045., 435897., 2324784., 10295472.,
  38608020., 124403620., 348330136., 854992152.,
  1852482996., 3562467300., 6107086800., 9364199760.,
  12875774670., 15905368710., 17672631900.};
static const double d38[] =
{
  1., 38., 703., 8436.,
  73815., 501942., 2760681., 12620256.,
  48903492., 163011640., 472733756., 1203322288.,
  2707475148., 5414950296., 9669554100., 15471286560.,
  22239974430., 28781143380., 33578000610., 35345263800.};
static const double d39[] =
{
  1., 39., 741., 9139.,
  82251., 575757., 3262623., 15380937.,
  61523748., 211915132., 635745396., 1676056044.,
  3910797436., 8122425444., 15084504396., 25140840660.,
  37711260990., 51021117810., 62359143990., 68923264410.};
static const double d40[] =
{
  1., 40., 780., 9880.,
  91390., 658008., 3838380., 18643560.,
  76904685., 273438880., 847660528., 2311801440.,
  5586853480., 12033222880., 23206929840., 40225345056.,
  62852101650., 88732378800., 113380261800., 131282408400.,
  137846528820.};
static const double d41[] =
{
  1., 41., 820., 10660.,
  101270., 749398., 4496388., 22481940.,
  95548245., 350343565., 1121099408., 3159461968.,
  7898654920., 17620076360., 35240152720., 63432274896.,
  103077446706., 151584480450., 202112640600., 244662670200.,
  269128937220.};
static const double d42[] =
{
  1., 42., 861., 11480.,
  111930., 850668., 5245786., 26978328.,
  118030185., 445891810., 1471442973., 4280561376.,
  11058116888., 25518731280., 52860229080., 98672427616.,
  166509721602., 254661927156., 353697121050., 446775310800.,
  513791607420., 538257874440.};
static const double d43[] =
{
  1., 43., 903., 12341.,
  123410., 962598., 6096454., 32224114.,
  145008513., 563921995., 1917334783., 5752004349.,
  15338678264., 36576848168., 78378960360., 151532656696.,
  265182149218., 421171648758., 608359048206., 800472431850.,
  960566918220., 1052049481860.};
static const double d44[] =
{
  1., 44., 946., 13244.,
  135751., 1086008., 7059052., 38320568.,
  177232627., 708930508., 2481256778., 7669339132.,
  21090682613., 51915526432., 114955808528., 229911617056.,
  416714805914., 686353797976., 1029530696964., 1408831480056.,
  1761039350070., 2012616400080., 2104098963720.};
static const double d45[] =
{
  1., 45., 990., 14190.,
  148995., 1221759., 8145060., 45379620.,
  215553195., 886163135., 3190187286., 10150595910.,
  28760021745., 73006209045., 166871334960., 344867425584.,
  646626422970., 1103068603890., 1715884494940., 2438362177020.,
  3169870830126., 3773655750150., 4116715363800.};
static const double d46[] =
{
  1., 46., 1035., 15180.,
  163185., 1370754., 9366819., 53524680.,
  260932815., 1101716330., 4076350421., 13340783196.,
  38910617655., 101766230790., 239877544005., 511738760544.,
  991493848554., 1749695026860., 2818953098830., 4154246671960.,
  5608233007146., 6943526580276., 7890371113950., 8233430727600.};
static const double d47[] =
{
  1., 47., 1081., 16215.,
  178365., 1533939., 10737573., 62891499.,
  314457495., 1362649145., 5178066751., 17417133617.,
  52251400851., 140676848445., 341643774795., 751616304549.,
  1503232609098., 2741188875414., 4568648125690., 6973199770790.,
  9762479679106., 12551759587422., 14833897694226., 16123801841550.};
static const double d48[] =
{
  1., 48., 1128., 17296.,
  194580., 1712304., 12271512., 73629072.,
  377348994., 1677106640., 6540715896., 22595200368.,
  69668534468., 192928249296., 482320623240., 1093260079344.,
  2254848913647., 4244421484512., 7309837001104., 11541847896480.,
  16735679449896., 22314239266528., 27385657281648., 30957699535776.,
  32247603683100.};
static const double d49[] =
{
  1., 49., 1176., 18424.,
  211876., 1906884., 13983816., 85900584.,
  450978066., 2054455634., 8217822536., 29135916264.,
  92263734836., 262596783764., 675248872536., 1575580702584.,
  3348108992991., 6499270398159., 11554258485616., 18851684897584.,
  28277527346376., 39049918716424., 49699896548176., 58343356817424.,
  63205303218876.};
static const double d50[] =
{
  1., 50., 1225., 19600.,
  230300., 2118760., 15890700., 99884400.,
  536878650., 2505433700., 10272278170., 37353738800.,
  121399651100., 354860518600., 937845656300., 2250829575120.,
  4923689695575., 9847379391150., 18053528883775., 30405943383200.,
  47129212243960., 67327446062800., 88749815264600., 108043253365600.,
  121548660036300., 126410606437752.};
static const double d51[] =
{
  1., 51., 1275., 20825.,
  249900., 2349060., 18009460., 115775100.,
  636763050., 3042312350., 12777711870., 47626016970.,
  158753389900., 476260169700., 1292706174900., 3188675231420.,
  7174519270695., 14771069086725., 27900908274925., 48459472266975.,
  77535155627160., 114456658306760., 156077261327400., 196793068630200.,
  229591913401900., 247959266474052.};
static const double d52[] =
{
  1., 52., 1326., 22100.,
  270725., 2598960., 20358520., 133784560.,
  752538150., 3679075400., 15820024220., 60403728840.,
  206379406870., 635013559600., 1768966344600., 4481381406320.,
  10363194502115., 21945588357420., 42671977361650., 76360380541900.,
  125994627894135., 191991813933920., 270533919634160., 352870329957600.,
  426384982032100., 477551179875952., 495918532948104.};
static const double d53[] =
{
  1., 53., 1378., 23426.,
  292825., 2869685., 22957480., 154143080.,
  886322710., 4431613550., 19499099620., 76223753060.,
  266783135710., 841392966470., 2403979904200., 6250347750920.,
  14844575908435., 32308782859535., 64617565719070., 119032357903550.,
  202355008436035., 317986441828055., 462525733568080., 623404249591760.,
  779255311989700., 903936161908052., 973469712824056.};
static const double *PascalsTriangle[55] =
{
  d0, d1, d2, d3, d4, d5, d6, d7,
  d8, d9, d10, d11, d12, d13, d14, d15,
  d16, d17, d18, d19, d20, d21, d22, d23,
  d24, d25, d26, d27, d28, d29, d30, d31,
  d32, d33, d34, d35, d36, d37, d38, d39,
  d40, d41, d42, d43, d44, d45, d46, d47,
  d48, d49, d50, d51, d52, d53
};
#define ABS(x) ((x) > 0 ? (x) : -(x))
#define MIN(a,b) ((a) < (b) ? (a) : (b))
#ifdef UNIT_TEST
#include <stdio.h>
#endif
double
nCr (int n, int r)
{
  if (n < r)
    {
      int temp;
      temp = n;
      n = r;
      r = temp;
    }
  r = MIN (ABS (n - r), r);
#ifdef UNIT_TEST
  printf ("Used n = %d, r = %d\n", n, r);
#endif
  return PascalsTriangle[n][r];
}
#ifdef UNIT_TEST
int
main (void)
{
  int n;
  int r;
  for (n = 0; n < 52; n++)
    for (r = 0; r < 50; r += 10)
      printf ("%d Choose %d = %.0f\n", n, r, nCr (n, r));
  return 0;
}
#endif
Dann Corbit
 

Re: Taking advantage of the extra bit

Postby wgarvin » 25 Oct 2008, 04:30

For indexing two or three like pieces, I suggest "pseudo diagonalization". Using one (or two) table lookups in a small table, and some cheap min/max operations, you can map two (or three) like pieces onto an enumeration of all the unique combinations (i.e. 64*63/2 combinations for two like pieces, or (64*63*62)/(2*3) combinations for three like pieces).

The way it works is: suppose you have pieces on two squares, let's say 19 and 14. You use the larger one as an index into the table (i.e. pseudoD2[19]). The other piece can only be on the 19 squares between 0 and 18, so pseudoD2[19] gives the start of a range of 19 indices. In other words, pseudoD2[19] + 19 == pseudoD2[20].

To avoid having gaps in the resulting enumeration, the square indices should start at zero (so for pawns, you'd want to subtract 8 from them before the lookups). The same tables should work for pawns and other men, though.

Code: Select all
// with 2 like men on squares S1 and S2:
ASSERT(S1 < S2);
int combinedIndex = pseudoD2[S2] + S1;

// with 3 like men on squares S1, S2 and S3:
ASSERT(S1 < S2);
ASSERT(S2 < S3);
int combinedIndex = pseudoD6[S3] + pseudoD2[S2] + S1;


And here is some code to make the needed tables:

Code: Select all
static void InitPseudoDiagonalizationTables()
{
   U32 a=0, b=0;
   for (U32 i = 0; i<64; i++)
   {
      PseudoD2Table[i] = (U16)a;
      PseudoD6Table[i] = (U16)b;
      a += i;
      b += PseudoD2Table[i];
   }
   ASSERT(a == (64*63)/2);
   ASSERT(b == (64*63*62)/6);
}

// The tables it produces look like this:

USHORT const PseudoD2Table[64] = {
    0x0000, 0x0000, 0x0001, 0x0003, 0x0006, 0x000A, 0x000F, 0x0015,
    0x001C, 0x0024, 0x002D, 0x0037, 0x0042, 0x004E, 0x005B, 0x0069,
    0x0078, 0x0088, 0x0099, 0x00AB, 0x00BE, 0x00D2, 0x00E7, 0x00FD,
    0x0114, 0x012C, 0x0145, 0x015F, 0x017A, 0x0196, 0x01B3, 0x01D1,
    0x01F0, 0x0210, 0x0231, 0x0253, 0x0276, 0x029A, 0x02BF, 0x02E5,
    0x030C, 0x0334, 0x035D, 0x0387, 0x03B2, 0x03DE, 0x040B, 0x0439,
    0x0468, 0x0498, 0x04C9, 0x04FB, 0x052E, 0x0562, 0x0597, 0x05CD,
    0x0604, 0x063C, 0x0675, 0x06AF, 0x06EA, 0x0726, 0x0763, 0x07A1
};

USHORT const PseudoD6Table[64] = {
    0x0000, 0x0000, 0x0000, 0x0001, 0x0004, 0x000A, 0x0014, 0x0023,
    0x0038, 0x0054, 0x0078, 0x00A5, 0x00DC, 0x011E, 0x016C, 0x01C7,
    0x0230, 0x02A8, 0x0330, 0x03C9, 0x0474, 0x0532, 0x0604, 0x06EB,
    0x07E8, 0x08FC, 0x0A28, 0x0B6D, 0x0CCC, 0x0E46, 0x0FDC, 0x118F,
    0x1360, 0x1550, 0x1760, 0x1991, 0x1BE4, 0x1E5A, 0x20F4, 0x23B3,
    0x2698, 0x29A4, 0x2CD8, 0x3035, 0x33BC, 0x376E, 0x3B4C, 0x3F57,
    0x4390, 0x47F8, 0x4C90, 0x5159, 0x5654, 0x5B82, 0x60E4, 0x667B,
    0x6C48, 0x724C, 0x7888, 0x7EFD, 0x85AC, 0x8C96, 0x93BC, 0x9B1F
};
wgarvin
 
Posts: 2
Joined: 22 Oct 2008, 19:54
Location: Montreal, Canada

Re: Taking advantage of the extra bit

Postby Daniel Shawul » 25 Oct 2008, 12:25

Yes I could use pre-computed values of the combination encountered in the indexing
step which could be quite few for a fixed number of similar pieces(For 6men maximum similar
pieces is 4). For the current de-indexing mehtod however some more combinations might
be required.
As wgarvin suggested a number of small lookup tables could be used to
avoid the functions. But I don't know if this also works for the de-indexing as well.
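De-indexing can in fact reuse the same small table: scan (or binary-search) for the largest entry not exceeding the index to recover the bigger square, then subtract. A sketch for the two-piece case, using a PseudoD2-style table as wgarvin described (the function names are mine):

```c
static int pseudoD2[64];

static void init_pseudoD2(void) {
    int a = 0;
    for (int i = 0; i < 64; i++) {
        pseudoD2[i] = a;     /* pseudoD2[i] = C(i,2) = i*(i-1)/2 */
        a += i;
    }
}

/* Inverse of: index = pseudoD2[s2] + s1, with s1 < s2. */
static void deindex_pseudoD2(int index, int *s1, int *s2) {
    int hi = 63;
    while (pseudoD2[hi] > index)     /* linear scan; binary search also works */
        hi--;
    *s2 = hi;                        /* largest square whose range starts <= index */
    *s1 = index - pseudoD2[hi];
}
```

The same idea extends to three like pieces: first recover S3 from the PseudoD6 table, subtract, then recover S2 and S1 as above.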

