but I'd assume that for real-world energy usage the KV cache would be very impactful, if not eventually dominant.
They talk about this in the appendix, where they go over the (estimated) effects of large amounts of input tokens (up to 100k). This isn't really relevant for Gemini Nano, because it only has a 32k max context window, and the Chrome deployment probably caps it at far less than that.
I'm inclined to believe the main analysis is reasonably accurate. The numbers are similar to what I get on my local machine with local models. Granted, I tested with a smaller model (7B-parameter Mistral in this case) on weaker hardware (AMD 6700XT), but on a quick test I get about 50 tok/s locally at 180 W power draw, which works out to about 0.5 Wh per 500 tokens. AMD GPUs suck for AI, so I think it's plausible that dedicated compute hardware would get basically the same energy efficiency on a frontier model.
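If you want to check that 0.5 Wh figure yourself, the arithmetic is just rate, power, and a unit conversion (the inputs below are my rough local measurements, not anything authoritative):

```python
# Sanity-checking the local test: ~50 tok/s generation at ~180 W GPU draw.
tokens = 500          # tokens generated
tok_per_s = 50.0      # measured generation speed
power_w = 180.0       # measured power draw during generation

seconds = tokens / tok_per_s    # 10 s of generation
energy_j = power_w * seconds    # 1800 J consumed
energy_wh = energy_j / 3600.0   # joules -> watt-hours

print(f"{energy_wh:.2f} Wh for {tokens} tokens")  # 0.50 Wh
```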
Gemini Nano on a phone NPU is obviously going to be far more efficient -- by all accounts it gets the same or better tok/s than I'm getting, at roughly 1/50th the TDP.

No it's not. You clearly have zero perspective on energy consumption.
The power draw on a phone with an NPU (where Gemini Nano is mostly used) is comparable to watching a video on your phone, maybe a couple of watts. On devices without NPUs (e.g. PCs) it will be more, but not dramatically so. The power use of this is absolutely zilch in the grand scheme of things.
To be extremely generous, let's say the average power draw is 50 watts, and that the model generates on average 10 tok/s, and that the average user has it generate 500 tokens per day (about 400 words). That's 50 seconds of 50 watts for every user, and let's say this is done by a billion users. This is a very generous estimate: in reality the average power draw is lower, the average tokens generated is likely lower (the intended use is generating short snippets like, say, email titles based on the email's content), and this definitely won't be used by a billion people.
WolframAlpha tells us that this works out to 694 MWh of energy, and helpfully mentions that this is 74% of the fuel energy of an Airbus A330-300 -- so this energy use is roughly in the ballpark of one transatlantic flight. There are about 500 transatlantic flights every day. Two offshore wind turbines will generate this much energy on a windy day.
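For anyone who'd rather not trust WolframAlpha, here's the same estimate in a few lines of Python, using the deliberately generous inputs from above:

```python
# Worst-case daily energy estimate for on-device generation.
users = 1_000_000_000       # generous: a billion daily users
seconds_per_user = 50.0     # 500 tokens/day at 10 tok/s
power_w = 50.0              # generous average power draw

energy_j = users * seconds_per_user * power_w  # total joules per day
energy_mwh = energy_j / 3.6e9                  # J -> MWh (1 MWh = 3.6e9 J)

print(f"{energy_mwh:.0f} MWh/day")  # ~694 MWh/day
```

Halve any of the three inputs and the total halves with it, which is why the real-world number is almost certainly well below this.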
In all likelihood an order of magnitude more energy is spent every day watching short-form videos. I'm not going to do the napkin math on that, though.
edit: in reality, local models like this will likely reduce net power consumption, since fewer API calls are made to cloud LLMs, which are both less power-efficient and carry overhead from the whole internet thing.