Help! Why don't fairy chess pieces have values?
I did test RF (7) and BW (5.25), and these have about the same difference as R and B.
Hello Master Muller! I am a newcomer, but I am interested in your theory and have known of you for a long time.
I have seen a similar claim before that RF is better than BW, but the author didn't say why. May I ask what kind of test you are using now to reach this conclusion?
Also, has your piece-value formula based on square control received any updates in recent years? I can find very little information on it.
What I did is play matches with the Fairy-Max engine from modified start positions, where in the normal (FIDE) setup some pieces of one player are replaced by the pieces I want to test, sometimes also deleting a Pawn of one player to make the outcome more equal. Just deleting a Pawn typically results in a 66% score.
E.g. replacing both Rooks of one player by BW causes that player to win a match of a few hundred games (played with alternating colors to eliminate the first-move advantage, and with shuffling of Q, N and B on the back rank to increase game diversity) by about 58%; when a Pawn of that same player is also deleted, the score inverts to 58% for the other player. So the strength difference is half of what the extra Pawn gives, and that for two pieces. Hence a single BW is about a quarter of a Pawn stronger than a Rook, i.e. 5.25 Pawn units.
For the RF I did tests replacing one player's Queen by it and deleting a Pawn of the opponent, or replacing a Rook by it and deleting a Pawn of the same player. This still causes around 65% scoring for the strong player, putting the RF about half-way between Q and R. I also tried replacing one player's Bishop pair (nominally worth 2 x 3.25 + 0.50 = 7.00) with a single RF; this causes a result close to 50%.
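To make the score-to-value conversion explicit, here is a minimal Python sketch of the arithmetic (not Fairy-Max code; it assumes the excess score over 50% scales linearly with the material advantage, one Pawn being worth the 16-percentage-point swing implied by the 66% Pawn-odds result):

```python
# Minimal sketch of the score-to-value arithmetic above (not Fairy-Max code).
# Assumption: for small imbalances the excess score over 50% scales linearly
# with the material advantage, one Pawn = a 16-point swing (the 66% figure).

PAWN_SWING = 16.0  # score points per Pawn of advantage

def pawns_from_score(score_percent: float) -> float:
    """Convert a match score in % into a material advantage in Pawn units."""
    return (score_percent - 50.0) / PAWN_SWING

R = 5.00  # Rook baseline in Pawn units

# BW test: both Rooks of one side replaced by BW -> ~58% for that side,
# so the 8-point excess is shared by two BW-for-R substitutions.
bw = R + pawns_from_score(58.0) / 2
print(f"BW ~ {bw:.2f} Pawn")                 # ~5.25

# RF test: one Rook replaced by RF while a Pawn of the same side is deleted,
# and that side still scores ~65%; RF thus buys back the Pawn and almost
# another one on top of it.
rf = R + 1.0 + pawns_from_score(65.0)
print(f"RF ~ {rf:.2f} Pawn")                 # ~6.9, about half-way between R and Q
```

With R = 5.00 as the baseline this reproduces the 5.25 for BW and roughly 7 for RF quoted above.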
I have extended the empirical formula for the piece value of short-range leapers, (33+0.7*N)*N centi-Pawn for a leaper with N potential moves (or N = (nr_of_captures*2 + nr_of_noncaptures)/3 for divergent pieces), to a computer model for guesstimating values in the Interactive Diagram. This determines the required value of N for other piece types (e.g. long-range or lame leapers, sliders, hoppers) by averaging the number of moves over a number of randomly generated positions with 25% filling, where the density of pieces of a given color increases linearly towards its own back rank. It then adds one standard deviation of this number of moves to the average, to account for the fact that players tend to place their pieces on squares where these have good mobility rather than poor mobility. (E.g. one would almost never find Knights on a corner square in a chess game, and in any case much less often than in 6% of the positions, even though corners make up 6% of the board.)
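As an illustration, here is a rough Python sketch of that averaging procedure for a Rook on an 8x8 board (not the actual Interactive Diagram code; the exact density profile, the sample count and the move generator are my own simplifications for the example):

```python
# Rough sketch of the mobility-averaging procedure for a Rook on 8x8
# (not the Interactive Diagram code; density profile, sample count and
# move generator are simplifications for illustration).

import random
from statistics import mean, stdev

SIZE = 8
FILL = 0.25  # average fraction of occupied squares over the whole board

def color_density(rank: int, own_back_rank: int) -> float:
    """Per-color occupation probability, rising linearly towards that
    color's own back rank and averaging FILL/2 over the board."""
    dist = abs(rank - own_back_rank)              # 0 at own back rank
    return FILL * (SIZE - 1 - dist) / (SIZE - 1)

def random_position() -> dict:
    """Square -> color map with ~25% total filling."""
    occ = {}
    for r in range(SIZE):
        for f in range(SIZE):
            if random.random() < color_density(r, 0):
                occ[(r, f)] = 'w'
            elif random.random() < color_density(r, SIZE - 1):
                occ[(r, f)] = 'b'
    return occ

def rook_moves(sq, occ, color='w') -> int:
    """Count orthogonal slider moves; an enemy piece can be captured,
    an own piece (or the board edge) blocks the ray."""
    n = 0
    for dr, df in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, f = sq
        while True:
            r, f = r + dr, f + df
            if not (0 <= r < SIZE and 0 <= f < SIZE):
                break
            if (r, f) in occ:
                if occ[(r, f)] != color:
                    n += 1                        # capture counts as a move
                break
            n += 1
    return n

samples = []
for _ in range(10000):
    occ = random_position()
    sq = (random.randrange(SIZE), random.randrange(SIZE))
    occ.pop(sq, None)                             # keep the test square empty
    samples.append(rook_moves(sq, occ))

n_eff = mean(samples) + stdev(samples)            # average + one standard deviation
print(f"effective N ~ {n_eff:.2f}")
print(f"value ~ {(33 + 0.7 * n_eff) * n_eff:.0f} centi-Pawn")
```

The mean-plus-one-sigma step is what accounts for pieces being placed on better-than-average squares; the same procedure, with a different move generator, applies to hoppers or lame leapers.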
This works reasonably well, even though it still gives too small a value for the BN compound. I guess I should add some extra bonus for attacking orthogonally adjacent squares to refine the model further.
Excerpt means “part of” i.e. not the whole thing. Taylor defines the “safe check” on page 2 of his paper, had you even bothered to read it. Way to demonstrate not only your bias, but your ignorance also. Here is the proof that Taylor used the term “safe check.”
A non-capture Kraken (universal leaper) + non-royal King would be value 0 then.