Yeah, bloating the install size is the main problem with this. Users running out of storage space is inconvenient, but has no real bearing on climate or privacy.
Yes it is. Small models like this are on the order of 100x more efficient than the big models backing ChatGPT or Gemini proper.
It's a local model. It uses a fraction of the power a cloud AI query uses, and cloud AI queries already use much less power than you obviously think they do (it is AI training -- specifically training frontier models -- that burns power like crazy).
If it is not immediately obvious to you how negligible the cost is going to be, you have no clue how little compute small models like this require. Apply a bit of common sense: this is a model designed to run locally on smartphones. If it used a lot of power, the phone would run out of battery.
It's hard (if not impossible) to find power usage figures for Gemini Nano, because they're going to depend on the efficiency of the device it's running on. If it's on a phone (where most Chrome installs are), that phone likely has an NPU, in which case the power draw will be negligible. If it has to run on the CPU, it'll be more.
So let's instead assume every user is running a model comparable to ChatGPT, for which we do have reasonable estimates. According to this estimate, 500 output tokens use about 0.3 Wh of energy. 500 output tokens is about 400 words, which is probably more than the average user will get out of Gemini Nano (it's intended for small tasks), but let's take that as the average daily use anyway. 1 billion users times 0.3 Wh is 300 MWh. That's fuck all on a global scale: about 0.0015% of the world's energy production (20 TWh per day).
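The napkin math above can be reproduced in a few lines; the 0.3 Wh/query, the billion daily users, and the 20 TWh/day figure are all the estimate's own assumptions, not measurements:

```python
# Assumptions from the comment above, not measured values.
wh_per_user_per_day = 0.3        # ~500 output tokens on a ChatGPT-class model
users = 1_000_000_000            # deliberately generous daily-user count
world_twh_per_day = 20           # assumed world energy production per day

daily_mwh = wh_per_user_per_day * users / 1e6     # Wh -> MWh
share = daily_mwh / (world_twh_per_day * 1e6)     # 1 TWh = 1e6 MWh

print(f"{daily_mwh:.0f} MWh/day, {share:.4%} of daily production")
# -> 300 MWh/day, 0.0015% of daily production
```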
Keep in mind that figure is for the full ChatGPT, which runs on 1500-watt GPUs. Gemini Nano runs on chips that draw more like 1.5 W, and on devices that physically cannot draw more than 15 W. It's thus reasonable to estimate that it is on the order of 100x more efficient.
That's ridiculous. How is it "illegal abuse" for an application to install new features on your computer? If you don't like the feature then uninstall the application. This is how it works for all software.
It's a local model so it doesn't even have the privacy concerns a cloud model would have. Not that that really matters because Chrome is a privacy concern in and of itself already.
The blockers are in Gnome's design guidelines, which many Gnome-related apps tend to follow.
The quintessential app I am thinking of here is Bottles, which has one of the worst UIs I've had the displeasure of using in recent memory.
Open context menus.
I know the reasons. I do not agree with the reasons.
The climate costs aren't "insane". One billion devices receiving the push (probably an overestimate) represents about 0.02% of global internet traffic.
The guy kind of proves this in his own post. The annual emissions of 13 000 cars (which is what this would equate to on 1 billion devices) is fuck all on a global level. One city pushing for bike-friendly infrastructure will have 10x that effect.
This is not to say this isn't kind of a stupid update, but the only thing insane about the climate costs is how insanely contrived bringing them up here is.
Funny, for me it's the exact opposite. The design language of most of the apps is stupid. I'm on a PC. I have a mouse and a widescreen monitor. Why does the app have a single column smartphone app layout where everything is gigantic and the right mouse button is never used for anything?
Cleverbot's trick was that it made humans respond to one another; it's actually kind of similar to this. The difference is that Cleverbot stored the responses, and whenever a query came in it picked the closest stored match.
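A minimal sketch of that retrieval trick: store (query, human response) pairs and answer a new query with the response attached to the most similar stored query. `difflib` stands in here for whatever matching Cleverbot actually used; that detail is an assumption.

```python
import difflib

# Toy memory of past human exchanges (invented examples).
memory = {
    "hello there": "hi! how are you?",
    "what is your name": "people call me all sorts of things",
    "do you like music": "I love music, especially jazz",
}

def reply(query: str) -> str:
    # Pick the stored query most similar to the new one and
    # return the human response that was recorded for it.
    match = difflib.get_close_matches(query.lower(), memory, n=1, cutoff=0.0)
    return memory[match[0]]

print(reply("hello here"))  # closest stored query is "hello there"
# -> hi! how are you?
```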
Back in my day this was called Cleverbot.
No it's not. You clearly have zero perspective on energy consumption.
The power draw on a phone with an NPU (where Gemini Nano is mostly used) is comparable to watching a video on your phone, maybe a couple of watts. On devices without NPUs (e.g. PCs) it will be more, but not dramatically so. The power use of this is absolutely zilch in the grand scheme of things.
To be extremely generous, let's say the average power draw is 50 watts, and that the model generates on average 10 tok/s, and that the average user has it generate 500 tokens per day (about 400 words). That's 50 seconds of 50 watts for every user, and let's say this is done by a billion users. This is a very generous estimate: in reality the average power draw is lower, the average tokens generated is likely lower (the intended use is generating short snippets like, say, email titles based on the email's content), and this definitely won't be used by a billion people.
WolframAlpha tells us that this comes to 694 MWh of energy, and helpfully mentions that this is 74% the fuel energy of an Airbus A330-300; indeed this energy use is roughly in the ballpark of one transatlantic flight. There are about 500 transatlantic flights every day. Two offshore wind turbines will generate this much power on a windy day.
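The generous napkin math above, spelled out (all inputs are the comment's deliberately pessimistic assumptions):

```python
# Assumptions from the comment above.
watts = 50                       # generous average power draw
tokens_per_second = 10           # assumed generation speed
tokens_per_user_per_day = 500    # ~400 words per user per day
users = 1_000_000_000            # generous user count

seconds = tokens_per_user_per_day / tokens_per_second  # 50 s of inference
wh_per_user = watts * seconds / 3600                   # ~0.694 Wh each
total_mwh = wh_per_user * users / 1e6                  # Wh -> MWh

print(f"{total_mwh:.0f} MWh per day")
# -> 694 MWh per day
```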
In all likelihood an order of magnitude more energy is spent every day watching short form videos. I'm not going to do the napkin math on that though.
edit: in reality, local models like this will likely reduce net power consumption as fewer API calls are made to cloud LLMs, which are both less power efficient and have overhead from the whole internet thing.