
same moves get rated differently depending on the game?

AngusByers

I suspect it will have to do with how much time the computer has to evaluate each move. Game Review completes in a matter of seconds, so each move/ply doesn't get much time. Computer analysis can vary when it doesn't have enough time to settle on a choice: the engine may take up the legal moves in a different order on each run, and without searching deep enough, the ranking of the moves can differ.

The analysis board allows a lot more time to be spent on a given position, and you can watch how its "top choice" changes over time. Let it sit for a while and it should be more stable than Game Review. Game Review is really just a quick once-over.
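For anyone who wants to see this for themselves, here is a minimal sketch using the python-chess library and a local Stockfish binary (both are my assumptions; chess.com doesn't expose its review engine, and the "stockfish" path and the position are just placeholders). It asks the engine for its verdict at increasing depths, and at shallow depths the eval and top move often differ from the deeper ones:

```python
# Minimal sketch: watch Stockfish's eval and top choice shift with depth.
# Assumes python-chess (pip install chess) and a Stockfish binary on PATH;
# neither reflects chess.com's actual review setup.
import chess
import chess.engine

# Position after 1 e4 c5 2 a3 Nc6 (the 2 a3 Sicilian discussed below).
board = chess.Board("r1bqkbnr/pp1ppppp/2n5/2p5/4P3/P7/1PPP1PPP/RNBQKBNR w KQkq - 1 3")

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for depth in (6, 10, 14, 18, 22):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        cp = info["score"].white().score(mate_score=100000)  # centipawns
        pv = info.get("pv")
        top = board.san(pv[0]) if pv else "?"
        print(f"depth {depth:2d}: {cp:+5d} cp, top move {top}")
```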

Falkentyne

The evaluation time could be a clue, since the entire game has to be analyzed rather quickly, and cloud engines are also limited in depth. (I believe there is a way to run Stockfish on your own CPU cores rather than in the cloud, which increases its strength drastically; I saw that mentioned recently but forget whether it was here or on lichess.)
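One way to do this outside the site is to drive a local Stockfish from python-chess (the library, binary path, and option values below are my assumptions, not anything chess.com or lichess provides). More threads and a bigger hash table are how a local engine gets that extra strength:

```python
# Sketch: a locally configured Stockfish, free of cloud depth caps.
import chess
import chess.engine

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    # More search threads and hash memory generally mean deeper, more
    # stable analysis than a time-capped cloud evaluation.
    engine.configure({"Threads": 4, "Hash": 512})
    info = engine.analyse(chess.Board(), chess.engine.Limit(time=5.0))
    print(info["score"], "at depth", info.get("depth"))
```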

The move ...d5 is anything but a blunder in that Sicilian wing gambit. It leads to equality with best play (eval: 0.10-0.16), though at shorter depths the eval rises to +0.35. The main issue with ...d5, instead of the more restrained ...d6, is that Black's queen comes out early while he is already behind in development, and White is trying to fish for compensation for the pawn he gave up in this marginal opening. With best play, Black ends up with a worse pawn structure but at least keeps his pawn: 7 exd5 Qxd5 8 Na3 Qa5 (pinning the Na3 and making the obvious 9 Bc4 less desirable) 9 Nf3 e6 (9...Qxc3?? loses to 10 Bd2) 10 Bb5 Nf6 11 Bxc6 bxc6 12 0-0 Be7 13 Ne5 Bd7 14 c4, with compensation for the pawn. White will follow with Qf3 and Rd1.

I'm not sure, however, about the question marks in analysis mode. It could just be wild evaluation swings from lack of depth, but as to why this happened in a 2500 vs 1100 game, no idea.

BTW, if you're trying to win with Black after 2 a3, don't play 2...Nc6; play 2...g6. A Dragon setup against the non-developing 2 a3 puts this entire variation into question. Then if White tries 3 b4, he is left wondering why he bothered, as 3...Bg7 followed by 4...d6 leaves Black already better. It's a similar idea to a kingside fianchetto being a strong response to the Larsen/Reti openings where White plays b3 (various tactical issues with that unprotected bishop on b2 and the long diagonal after Black castles short).

prplt

first game: Qg6 is a great move

second game: Qg6 is an inaccuracy 🤷‍♂️

prplt
prplt wrote:

first game: Nh3 is best, Rh6 best, Qxg2 great

second one: Nh3 great, Rh6 great, Qxg2 best

also Nd4+ is an inaccuracy for some reason even though it's the best move 🤔

here Nh3 is great, Rh6 best, Qxg2 great

and now it's showing that Nd4+ is the best move instead of Qg2+ 🤷‍♂️

AngusByers

I just noticed that the review evaluation is influenced by which computer I run the review on. When I reviewed a game logged in from one computer, I was given an accuracy of 93.6 (it was against this month's 2nd bot, Clyde, so not really anything to crow about), but when I showed my son the game from a different computer, the review gave it a 91.6. I've now rechecked by going back to the other machine, and sure enough, it's re-evaluated it to 93.6. While my estimated Elo didn't change, the estimated Elo for the bot flipped between 700 and 850, depending upon which computer I checked from.

To me, this strongly suggests that the evaluations are simply unstable because of how quickly the whole game has to be analyzed, so perhaps it is not too surprising that the evaluation of some moves/games changes from time to time.
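A quick way to convince yourself of this instability (the sketch again assumes python-chess plus a local Stockfish; the 0.1 s budget is an arbitrary stand-in for whatever Game Review actually spends per move): analyze the same position several times under a tight wall-clock limit, and the reported score and depth will often differ run to run, because the search gets cut off at a slightly different point each time:

```python
# Sketch: repeated short analyses of the same position rarely agree exactly.
import chess
import chess.engine

board = chess.Board()  # any position shows the effect; the start works fine

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for trial in range(5):
        # 0.1 s is an arbitrary stand-in for a bulk review's per-move budget.
        info = engine.analyse(board, chess.engine.Limit(time=0.1))
        cp = info["score"].white().score(mate_score=100000)
        print(f"trial {trial + 1}: {cp:+d} cp at depth {info.get('depth')}")
```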

prplt

had this game today with a best and an excellent move inside the book moves 🤯

so

  • 4. d4 c5 - book
  • 5. f4 e6 - best/excellent
  • 6. Nf3 Nf6 - book

rmklabourhire

It's fascinating how the same moves can receive different ratings depending on the game. Each game's mechanics, strategies, and scoring systems assign unique value to certain moves, making them effective in one context but less so in another. This variation truly emphasizes the diversity in game design.

Vonbishoffen
mikewier wrote:

I encountered this a while back. In a game review, one move was suggested as best, with a positive evaluation of the position. The suggested move seemed strange to me and so I played the game through on an Analysis board. Here, the suggested move was considered a mistake, leading to a negative evaluation. This was just a few minutes after the game review, so it is unlikely that a software update occurred.

So, the all-knowing Stockfish is not infallible.

My theory has always been that it's to render cheating via the chess.com engine impossible; you'd then be forced to use an external source for moves. It doesn't stop repeat offenders, but it means one-time cheaters get disenchanted with cheating if they're dumb enough to use the chess.com engine and find the "best" moves are actually blunders. Obviously this only appears once in a while. I've only seen it in review, but I imagine it works for running analysis too...

Kind of like how speed cameras work: most of the time you won't get caught, but when you do, it's worse than if you hadn't been speeding at all.

prplt

again there are other moves "inside" the book moves

BigChessplayer665
Vonbishoffen wrote:
mikewier wrote:

I encountered this a while back. In a game review, one move was suggested as best, with a positive evaluation of the position. The suggested move seemed strange to me and so I played the game through on an Analysis board. Here, the suggested move was considered a mistake, leading to a negative evaluation. This was just a few minutes after the game review, so it is unlikely that a software update occurred.

So, the all-knowing Stockfish is not infallible.

My theory has always been that it's to render cheating via the chess.com engine impossible; you'd then be forced to use an external source for moves. It doesn't stop repeat offenders, but it means one-time cheaters get disenchanted with cheating if they're dumb enough to use the chess.com engine and find the "best" moves are actually blunders. Obviously this only appears once in a while. I've only seen it in review, but I imagine it works for running analysis too...

Kind of like how speed cameras work: most of the time you won't get caught, but when you do, it's worse than if you hadn't been speeding at all.

No, you can just watch the game with an alt account and use Stockfish. I don't think it's about cheating; more likely they scale the ratings based on your Elo, which means you'd get different results for each Elo even if the same game were played.

Falkentyne

So what was the final verdict on this?

prplt

had this game today and I remembered Nxc2+ being a brilliant move, but now it was rated only good

then I went to analysis and there it indeed showed up as brilliant 🤷‍♂️

another game for reference where it is brilliant in the review

scopegranites

The algorithm might have been fine-tuned in the period between the games to make it more accurate. The classification of a move is a judgment call, and the criteria for making it can change over time.

prplt
scopegranites wrote:

The algorithm might have been fine-tuned in the period between the games to make it more accurate. The classification of a move is a judgment call, and the criteria for making it can change over time.

I doubt that's the case here, though, and the variation shouldn't be that big (good to brilliant) just because the engine was slightly adjusted 😈

KIKEN6912

Nope

prplt

here basic development moves are rated great for some reason 🤯

and on a lower level they aren't 🤷‍♂️

magipi

This last one strongly smells like a bug.