I think I understand what bitmasking is: you zero out the bits you don't want in the result.
For example:
// RRRR RGGG GGGB BBBB -- color values stored in a 16-bit value
const unsigned short redMask = 0xF800;
unsigned short lightGray = 0x7BEF;
unsigned short redComponent = (lightGray & redMask) >> 11;
would mean:

0xF800                      = 1111 1000 0000 0000
0x7BEF                      = 0111 1011 1110 1111
(lightGray & redMask)       = 0111 1000 0000 0000
(lightGray & redMask) >> 11 = 0000 0000 0000 1111  (i.e. 15, the 5-bit red component)
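To check my understanding, here is how I would pull out the other two components with the same trick (my own sketch, assuming the RRRR RGGG GGGB BBBB layout above):

#include <stdio.h>

int main(void)
{
    const unsigned short greenMask = 0x07E0;   /* bits 5-10 hold the 6 green bits */
    const unsigned short blueMask  = 0x001F;   /* bits 0-4 hold the 5 blue bits   */
    unsigned short lightGray = 0x7BEF;

    unsigned short green = (lightGray & greenMask) >> 5;  /* zero everything else, then shift down */
    unsigned short blue  =  lightGray & blueMask;         /* already in the low bits, no shift     */

    printf("green = %u, blue = %u\n", green, blue);       /* prints green = 31, blue = 15 */
    return 0;
}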
However, I don't understand the bit patterns on this page: https://chessprogramming.wikispaces.com/Encoding+Moves
Why AND with 0x3f? And how do I choose the pattern and shift in something like TARGET SQUARE (Move) = (some bit pattern) >> 8?
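My guess (please correct me if I'm wrong) is that it works just like the colour example: a square index needs 6 bits because there are 64 squares, so ANDing with 0x3f (binary 111111) keeps just those 6 bits, and the shift amount depends on where in the word that field was stored. Something like this, assuming a made-up 16-bit layout with the from square in bits 0-5, the to square in bits 6-11 and flags in bits 12-15 (not necessarily what the page actually uses):

/* Hypothetical 16-bit move layout (my assumption):
 *   bits  0-5  : from square (0-63)
 *   bits  6-11 : to square   (0-63)
 *   bits 12-15 : flags
 */
unsigned short encodeMove(unsigned from, unsigned to, unsigned flags)
{
    return (unsigned short)((flags << 12) | (to << 6) | from);
}

unsigned fromSquare(unsigned short move) { return move & 0x3F; }         /* keep the low 6 bits       */
unsigned toSquare(unsigned short move)   { return (move >> 6) & 0x3F; }  /* shift down, then mask     */
unsigned moveFlags(unsigned short move)  { return move >> 12; }          /* top 4 bits                */

So, for example, encodeMove(12, 28, 0) would pack e2-e4 (squares 12 and 28 with a1 = 0), and toSquare() would give 28 back. Is that the right way to read the page, and where does the >> 8 come from?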