
same moves get rated differently depending on the game?

AngusByers

I suspect it has to do with how much time the computer has to evaluate each move. Game Review completes in a matter of seconds, so each move/ply doesn't get much time, and computer analysis can vary when it doesn't have enough time to settle on its choice. It probably evaluates the legal moves in different orders from run to run, and without searching deep enough, the ranking of the moves can differ.

The analysis board lets a lot more time be spent on a given position, and you can see how its "top choice" changes over time. Let it sit for a while and it should be more stable than Game Review, which is really just a quick go-over.
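If you want to see this effect outside chess.com, here's a minimal sketch using the python-chess library and a local Stockfish binary (the engine path, depths, and MultiPV count are just example values, not anything chess.com actually uses). At shallow depth the top-three ranking often reshuffles between iterations, which is exactly the kind of instability a seconds-per-game review pass is exposed to:

```python
# Minimal sketch (not chess.com's actual pipeline): watch an engine's move
# ranking shift as search depth grows. Assumes the python-chess library and
# a local "stockfish" binary on your PATH.
import chess
import chess.engine

board = chess.Board()  # substitute any position you want to probe

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    for depth in (6, 12, 18, 24):
        # multipv=3 asks the engine for its three best lines at this depth.
        infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=3)
        top3 = [info["pv"][0].uci() for info in infos]
        print(f"depth {depth:2d}: top 3 = {top3}, eval = {infos[0]['score'].white()}")
```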

Falkentyne

The evaluation time could be a clue, since the entire game has to be analyzed rather quickly, and cloud engines are also limited in depth. (I believe there is a way to have Stockfish run using your own CPU cores rather than the cloud, which increases its strength drastically; I saw that recently but forget whether it was here or on lichess.)
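For what it's worth, here's a sketch of the "run it on your own CPU cores" idea over UCI, again via python-chess. The binary path and the Threads/Hash values are placeholders for your own setup; the point is that a local engine given real thinking time searches far deeper than a quick cloud pass:

```python
# Sketch: a local Stockfish over UCI with more resources than a fast cloud
# review gets. Assumes python-chess and a "stockfish" binary; the Threads
# and Hash numbers are examples, not recommendations.
import chess
import chess.engine

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    engine.configure({"Threads": 8, "Hash": 1024})  # 8 cores, 1 GB hash table
    board = chess.Board()  # set up the position under discussion instead
    info = engine.analyse(board, chess.engine.Limit(time=30.0))  # 30 s think
    print("eval:", info["score"].white(), "depth:", info["depth"])
```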

The move ...d5 is anything but a blunder in that Sicilian wing gambit. It leads to equality with best play (eval: 0.10-0.16), but at shallower depths it goes up to +0.35. The main issue with ...d5, compared to the more restrained ...d6, is that Black's queen gets brought out early when he is already behind in development, while White is fishing for compensation for the pawn given up in this marginal opening. With best play, Black ends up with a worse pawn structure but at least keeps his pawn: 7 exd5 Qxd5 8 Na3 Qa5 (pinning the Na3 and making the obvious 9 Bc4 less desirable) 9 Nf3 e6 (9...Qxc3?? loses to 10 Bd2) 10 Bb5 Nf6 11 Bxc6 bxc6 12 0-0 Be7 13 Ne5 Bd7 14 c4, with compensation for the pawn. White will follow with Qf3 and Rd1.

I'm not sure, however, about the question marks in analysis mode. It could just be wild evaluation swings from lack of depth, but as to why this happened in a 2500 vs 1100 game, no idea.

BTW, if you're trying to win with Black after 2 a3, don't play 2...Nc6; play 2...g6. A Dragon setup against the non-developing 2 a3 puts this entire variation into question. Then if White tries 3 b4, he is left wondering why he bothered, as 3...Bg7 followed by 4...d6 leaves Black already better. It's a similar idea to a kingside fianchetto being a strong response to the Larsen/Reti openings where White plays b3 (various tactical issues with the unprotected bishop on b2 and the long diagonal after Black castles short).

prplt

first game: Qg6 is a great move

second game: Qg6 is an inaccuracy 🀷‍♂️

prplt
prplt wrote:

first game: Nh3 is best, Rh6 best, Qxg2 great

second one: Nh3 great, Rh6 great, Qxg2 best

also Nd4+ is an inaccuracy for some reason even though it's the best move πŸ€”

here Nh3 is great, Rh6 best, Qxg2 great

and now it's showing that Nd4+ is the best move instead of Qg2+ 🀷‍♂️

AngusByers

I just noticed that the review evaluation is influenced by which computer I run the review on. Reviewing a game while logged in from one computer, I was given an accuracy of 93.6 (it was against this month's 2nd bot, Clyde, so not really anything to crow about), but when I showed my son the game from a different computer, the review gave it 91.6. I've now rechecked by going back to the first machine, and sure enough, it's back to 93.6. While my own "estimated Elo" didn't change, the estimated Elo for the bot flipped between 700 and 850, depending on which computer I checked from.

To me, this strongly suggests that the evaluations are simply unstable because of how quickly the whole game has to be analyzed, so perhaps it's not too surprising that the rating of some moves/games changes from time to time.
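You can reproduce the same kind of drift on a single machine: under a fixed time budget, a multithreaded engine reaches a slightly different depth and eval on every run, so two computers of different speeds will disagree even more. A sketch, again assuming python-chess and a local Stockfish; the FEN is just the position after 1 e4 c5 2 a3 Nc6 from earlier in the thread:

```python
# Sketch: repeated fixed-time analyses of one position rarely agree exactly,
# because thread timing and hash-table state vary between runs. Assumes
# python-chess and a local "stockfish" binary.
import chess
import chess.engine

# Position after 1 e4 c5 2 a3 Nc6 (the line discussed above).
board = chess.Board("r1bqkbnr/pp1ppppp/2n5/2p5/4P3/P7/1PPP1PPP/RNBQKBNR w KQkq - 1 3")

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    engine.configure({"Threads": 4})
    for run in range(3):
        info = engine.analyse(board, chess.engine.Limit(time=0.5))
        print(f"run {run + 1}: depth {info['depth']}, eval {info['score'].white()}")
```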

prplt

had this game today with a best & an excellent move sandwiched inside the book moves 🀯

so

  • 4. d4 c5 - book
  • 5. f4 e6 - best/excellent
  • 6. Nf3 Nf6 - book