Mozilla lays off 60 people, wants to build AI into Firefox
(arstechnica.com)
From what I understand, they're divesting from work that isn't Firefox or at least part of a trustworthy, open-source AI project.
I see a lot of people in this thread are upset at this, but I'm tentatively excited. If they can pull off a good AI engine, especially built into the browser, that would be nice. If it had offline capabilities, that would be amazing.
Even if they can pull off a good AI solution that isn't built into Firefox but is offline, I'd be really excited. I'm not crazy about having especially detailed and intimate information thrown to some vendor out there with no idea where it's going. Modern AI can do some amazing things, but a lot of the services reserve the right to have a human read whatever you put into them, and they warn you about that. That's too limiting for my preferred use cases.
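To be concrete about what "offline" means to me: something like the sketch below, where a small model runs entirely from local files and nothing is sent off the machine. This is just an illustration using the Hugging Face transformers library; the model path is a placeholder, not anything Mozilla has announced.

```python
# Rough sketch of the fully-offline usage I have in mind, using the
# Hugging Face transformers library. The model directory is a placeholder
# for any small model already downloaded to disk; with local files only,
# nothing you type ever leaves the machine.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./models/local-small-model",  # hypothetical local path
)

result = generator("Summarize the page I'm reading:", max_new_tokens=60)
print(result[0]["generated_text"])
```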
One concern I have is that Firefox and its engine are one of the last non-Chromium browser platforms that are both a household name and FOSS. So to me, keeping that healthy has to be the first goal. Maybe the AI work will help in that respect.
Though, it's tough to tell from the headline/discussion whether this pivot is explicitly meant to refocus on the browser.
As far as the AI stuff goes, Mozilla has long been the most ethical player in this space. All of their datasets/models are open source and usually crowdsourced. Not to mention, their existing work is primarily in improving accessibility. It's really hard to see how this is a bad thing.
There is no such thing as a good AI engine… All I really want from any AI engine is the ability to watermark everything it outputs as AI-generated, so it can later be filtered out when it's discovered to be inaccurate or simply plagiarized.
Why has opposition to AI become so ideological? I had to show my dad how easy it is to unintentionally induce very confident hallucinations in Google Bard when it was giving him false medical information, but that doesn't make it any less useful than a search engine in general. The only difference is that rather than blindly trusting a "reliable" site, you have to think critically and investigate the content. I personally find AI most useful for giving me the names of solutions to problems, which lets me search for information on them much more effectively.
Honestly, you're correct that a search engine companion is probably its least offensive use case. Mostly, it makes me so mad because they are polluting our entire collective knowledge base: there is no way to watermark anything as AI-generated (especially text, as opposed to images), which means every search you make from here on out returns worse results. It's like being forced to share the road with self-driving Teslas because the self-driving car companies (especially Tesla) have made us all involuntary participants in their beta test.
The "screw everyone else trying to use the same public resource" mentality is out of control.
The thing is, all those SEO-bait articles existed long before modern LLMs; they basically just filled in templates. I agree, though, that I'm a bit worried it will get worse now.
I mean you're also part of testing a human driver.
Yeah, but that's unavoidable, whereas Tesla, Waymo, etc. getting to use our roads for self-driving testing is just our government not doing its job to protect the roads adequately, IMHO. This is veering way off topic, but I recently watched a video with stats saying Teslas are something like 8.2x more likely to be in a crash than cars with a standard Level 2 driver-assistance system.
If they just built a browser and started acting like a foundation, I'd support them in a heartbeat. As things stand today, I feel like I'm pouring money into a set of holes that neither I, nor seemingly the rest of the world, has much interest in.