Evolution of a Chess Fish: What is NNUE, anyway?
Stockfish 12 introduced the chess world to NNUE, a new and improved type of neural network. And since its release there has been a wave of projects implementing NNUE: Dragon, Igel, Minic, RubiChess, and others. All reported huge strength improvements upon switching.
But what is NNUE, and why is it such a strong technique? In this post I'll give a nontechnical answer to that question, by answering a number of smaller questions...
What does NNUE stand for?
NNUE stands for Efficiently Updatable Neural Network. It's "NNUE" instead of "EUNN" because the technique was adopted from a shogi engine, and the reversed name (the original paper even styles it ƎUИИ) is a bit of wordplay in Japanese.
Wait, there's such a thing as computer shogi?
Yes! Initially people (looking at you, Viz!) thought that computer chess was much more advanced than computer shogi, and that chess engines had nothing to learn from shogi engines. The Stockfish developers had known about NNUE for a year, but assumed it wasn't worth trying because it came from shogi.
So what made the Stockfish people try it?
Eventually someone decided to fork Stockfish and put NNUE into it, just for fun. Their fork wasn't immediately stronger than regular Stockfish, but it was close enough that other people decided to try. Within a month or two it was much stronger than Stockfish, so NNUE was put into the main Stockfish.
What do you mean that NNUE was "put into" Stockfish?
All chess engines have evaluation functions. That part of the engine looks at a single position, and assigns it a score. A score of 0.00 is equal, positive is good for white, and negative is good for black.
Traditional chess engines have handcrafted evaluation functions: scores are calculated from material, king safety, pawn structure, and so on. Pure NNUE engines replace that handcrafted code with a neural network: the position goes into the net, and a score comes out. So it's just one part of the engine (the evaluation function) that changes.
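To make that concrete, here is a minimal sketch of what one handcrafted evaluation term looks like. The Position type and the piece values are made up for illustration; this is not Stockfish's actual code.

```cpp
// Toy handcrafted evaluation: count material in centipawns.
// Position and the piece values are illustrative, not Stockfish's real code.
#include <array>

enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

struct Position {
    // pieceCount[color][piece]: how many of each piece each side has (0 = White, 1 = Black).
    std::array<std::array<int, 6>, 2> pieceCount{};
};

// Rough classical piece values in centipawns (an assumption, like everything here).
constexpr std::array<int, 6> Value = {100, 320, 330, 500, 900, 0};

// Positive = good for White, negative = good for Black, 0 = equal.
int evaluate(const Position& pos) {
    int score = 0;
    for (int p = PAWN; p <= QUEEN; ++p)
        score += Value[p] * (pos.pieceCount[0][p] - pos.pieceCount[1][p]);
    // A real handcrafted evaluation adds many more terms here:
    // king safety, pawn structure, mobility, piece placement, ...
    return score;
}
```

A pure NNUE engine deletes terms like these and instead feeds the position into its network, which hands back the same kind of centipawn-style score.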
But I thought Leela and AlphaZero already did that?
You're right; both of those engines use neural networks to evaluate positions. But they also use a different search algorithm. Leela and AlphaZero use a modified Monte Carlo Tree Search, whereas the NNUE engines all use traditional alpha-beta (AB) minimax search with pruning, which seems to work better for chess.
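For the curious, here is what a bare-bones alpha-beta search looks like in its negamax form. Position, Move, legal_moves(), make/unmake and evaluate() are hypothetical stand-ins, not any real engine's API; the point is only where the evaluation function sits.

```cpp
// Bare-bones alpha-beta search (negamax form). All the chess-specific types
// below are placeholders; only the search skeleton itself is the point.
#include <algorithm>
#include <vector>

struct Move {};
struct Position {
    std::vector<Move> legal_moves() const { return {}; }  // placeholder
    void make(Move) {}                                     // placeholder
    void unmake(Move) {}                                   // placeholder
};
int evaluate(const Position&) { return 0; }  // score from the side to move's point of view

int alphabeta(Position& pos, int depth, int alpha, int beta) {
    if (depth == 0)
        return evaluate(pos);  // leaf node: this is where NNUE (or a handcrafted eval) plugs in
    for (Move m : pos.legal_moves()) {
        pos.make(m);
        int score = -alphabeta(pos, depth - 1, -beta, -alpha);  // flip the point of view
        pos.unmake(m);
        if (score >= beta)
            return beta;                 // opponent would never allow this line: prune it
        alpha = std::max(alpha, score);  // best score found so far
    }
    return alpha;
}
```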
If AB-minimax is so great, why doesn't Lc0 use it?
Compare the speeds of the two engines on CCC hardware: Stockfish NNUE searches something like 100 Mnps (million nodes per second), whereas Lc0 manages about 75 Knps. Stockfish can run an effective AB search because it can look at so many positions; Lc0 is far slower, so it has to settle for a PUCT algorithm.
Jeepers! How is Stockfish so fast? I thought neural-network engines were slow.
The simplest explanation is that Stockfish NNUE has a tiny network (around 50 KB) that takes very little time to evaluate, while Lc0's networks are much, much larger (50-100 MB). There's also the efficient-updating part, which speeds the NNUE engines up further.
OK, tell me about this "efficient updating."
Each time A0 or Lc0 wants to evaluate a position, it calculates the entire network from scratch. Even if both sides just pointlessly shuffle bishops around the board, the whole network is recalculated for every new position. That's a lot of work.
Stockfish NNUE, on the other hand, only recalculates the parts of the network affected by the move, and simply reuses the previous position's values for everything else. So it has a small network to begin with, and only has to recompute part of it for each move.
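Here is a rough sketch of that idea, with layer sizes that are illustrative rather than the real ones. The output of the first (and by far largest) layer is kept around in an "accumulator", and a quiet move just subtracts the weight column for the feature that switched off and adds the column for the one that switched on.

```cpp
// Sketch of NNUE's efficient update: the first-layer output (the "accumulator")
// is patched when a piece moves instead of being recomputed from scratch.
// HIDDEN, NUM_FEATURES and the weight layout are illustrative, not Stockfish's.
#include <array>
#include <cstdint>
#include <vector>

constexpr int HIDDEN = 256;          // first-layer width (assumed)
constexpr int NUM_FEATURES = 41024;  // number of possible input features (assumed)

using Accumulator = std::array<int16_t, HIDDEN>;
std::vector<Accumulator> W(NUM_FEATURES);  // one weight column per input feature

// Full recompute: sum the columns of every feature that is "on" in this position.
void refresh(Accumulator& acc, const std::vector<int>& activeFeatures) {
    acc.fill(0);
    for (int f : activeFeatures)
        for (int i = 0; i < HIDDEN; ++i)
            acc[i] += W[f][i];
}

// Incremental update: a quiet move turns one feature off and one feature on,
// so we subtract one column and add another instead of re-summing all of them.
void update(Accumulator& acc, int removedFeature, int addedFeature) {
    for (int i = 0; i < HIDDEN; ++i)
        acc[i] += W[addedFeature][i] - W[removedFeature][i];
}
```

The handful of small layers after the accumulator still get recomputed every time, but they are cheap; the accumulator is the expensive part, and it's the part that gets reused.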
Wait, NNUE just ignores that some part of the board has changed? Isn't that really risky?
I'm sure it is! But NNUE's input features describe every piece relative to both kings, and anything a move actually touches does get recalculated, so that probably helps it not hang checkmate. Also, a lot of problems get solved by looking at 100 million positions per second.
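Concretely, each input feature of the original net describes a piece on a square as seen from one side's king square, a scheme known as HalfKP. Here's a sketch of that kind of indexing; the exact encoding in real engines differs.

```cpp
// HalfKP-style feature indexing (sketch): every feature is a combination of
// (our king's square, a non-king piece kind, that piece's square).
// The exact layout in real NNUE implementations differs from this.
constexpr int NUM_SQUARES = 64;
constexpr int NUM_PIECE_KINDS = 10;  // 5 non-king piece types x 2 colors

int featureIndex(int kingSquare, int pieceKind, int pieceSquare) {
    return (kingSquare * NUM_PIECE_KINDS + pieceKind) * NUM_SQUARES + pieceSquare;
}

// Because every feature is tied to a king square, a normal piece move only flips
// two features (a cheap incremental update), while a king move changes every
// feature for that side and forces a full refresh of its half of the accumulator.
```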
I tried using Lc0 on a CPU and it was really weak. How is it that NNUE is strong using only a CPU?
Again: the NNUE network is tiny compared to Lc0's networks. And NNUE updates efficiently. So it can be evaluated very quickly.
But why does NNUE not use GPUs anyway? Those seem better!
It takes a long time for data to be handed from the CPU to the GPU, processed, and then handed back to the CPU. This is fine if you're running a huge Lc0-sized network: it would be so slow to evaluate on the CPU that it's worth all the trouble of going through the GPU.
But the NNUE network is so small (and efficiently updating!) that it's faster to keep everything on the CPU than to ship it off to a GPU.
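A back-of-envelope sketch makes the trade-off clearer. The latencies below are assumed ballpark figures for illustration, not measurements.

```cpp
// Assumed ballpark latencies, for illustration only (not measurements).
#include <cstdio>

int main() {
    const double gpu_round_trip_us = 10.0;  // assumed: CPU -> GPU -> CPU latency per call
    const double cpu_nnue_eval_us  = 0.5;   // assumed: one incremental NNUE eval on one core

    // Even if the GPU's actual compute time were zero, paying the round-trip
    // latency once per evaluation would cap a single search thread far below
    // what the CPU manages entirely on its own.
    std::printf("cap if every eval crosses the bus: ~%.0f evals/sec\n", 1e6 / gpu_round_trip_us);
    std::printf("one core evaluating NNUE on-CPU:   ~%.0f evals/sec\n", 1e6 / cpu_nnue_eval_us);
    return 0;
}
```

Lc0 sidesteps the latency by batching many positions into each GPU call, which suits its search; a deep alpha-beta search wants one evaluation at a time, right now, so staying on the CPU wins.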
All your answers seem to be "because the network is small and efficiently updating."
Yep. That's all NNUE is.
Does NNUE have any weaknesses?
Stockfish NNUE rarely loses tactically, because it can evaluate huge numbers of positions very quickly. It also has more positional knowledge stored in its network than the old handcrafted evaluation function ever had.
However, Lc0 and other engines with big networks still have more positional knowledge in their evaluations than NNUE does, so they can still sometimes win by recognizing subtle positional features.
One such game is analyzed by IM Daniel Rensch:
What has NNUE meant for computer chess in general?
There are a number of doomsayers who say NNUE is going to kill computer chess. They don't like that the art of handcrafting evaluations has entirely disappeared. Indeed, the top engines are now either GPU-based neural-network engines or NNUE engines.
But it must be admitted that computer chess had gotten stale before NNUE: Stockfish and Lc0 were so far ahead of the competition that developers had given up chasing them. The hope is that innovation in how to use NNUE will allow for tournaments with more than two legitimate contenders.
Can I get myself an NNUE engine?
Of course! There are a number of options depending on how much you want to spend and how technically minded you are. Keep in mind that these are all meant to be run inside a GUI.
Stockfish 12: https://stockfishchess.org/download/
Dragon: https://komodochess.com/
Scorpio: https://github.com/dshawul/Scorpio
RubiChess: https://github.com/Matthies/RubiChess
Thanks for the explanation! How do I thank you for such a fantastic article?
You're welcome! I'd appreciate it if you left a comment with questions, feedback, or just to tell me how great I am.
[Image credit: Yu Nasu (2018). ƎUИИ: Efficiently Updatable Neural-Network-based Evaluation Functions for Computer Shogi. Ziosoft Computer Shogi Club. PDF (Japanese with English abstract).]