[-] Yerbouti@sh.itjust.works 42 points 1 week ago

The future of AI has to be local and self-hosted. Soon enough you'll have super powerful models that can run on your phone. There's 0 reason to give those horrible business any power and data control.

[-] yucandu@lemmy.world 16 points 1 week ago

Not to mention the one that I run locally on my GPU is trained on ethically sourced data, without breaking any copyright or data licensing laws, and yet it somehow works BETTER than ChatGPT for coding.

[-] BennyTheExplorer@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

Please enlighten me as to how that would work. Even if you only used open-source code, a permissive licence would still require proper attribution (which AI can't do), and if it's copyleft, all your code would have to be under the same licence as the original and also give proper attribution.

Edit: I just looked your model up; apparently they ensure "ethically sourced training data" by only using publicly available data and "respecting machine-readable opt-outs", which is not how copyright works.

I agree with you that it needs to be local and self-hosted... I currently have an incredible AI assistant running locally using Qwen3-Coder-Next. It is fast, smart and very capable. However, I could not have gotten it set up as well as I have without the help of Claude Code... and even now, as great as my local model is, it still isn't at the point where it can handle modifying its own code as well as Claude can. The future is local, but to help us get there, a powerful cloud-based AI adds a lot of value.

[-] SuspciousCarrot78@lemmy.world 3 points 1 week ago

Thank you for honestly stating that. I am in a similar position myself.

How do you like Qwen3 Next? With only 8 GB of VRAM I'm limited in what I can self-host (maybe the Easter bunny will bring me a Strix lol).

Yeah, some communities on Lemmy don't like it when you have a nuanced take on something so I'm pleasantly surprised by the upvotes I've gotten.

I'm running a Framework Desktop with a Strix Halo and 128GB RAM and up until Qwen3 Next I was having a hard time running a useful local LLM, but this model is very fast, smart and capable. I'm currently building a frontend for it to give it some structure and make it a bit autonomous so it can monitor my systems and network and help keep everything healthy. I've also integrated it into my Home Assistant and it does great there as well.

[-] TheFinn@discuss.tchncs.de 4 points 1 week ago

I'm having difficulty getting off the ground with these. Primarily, I don't trust the companies or individuals involved. I'm hoping for something open source and local, with a GUI for desktop use and an API for automation.

What model do you use? And in what kind of framework?

[-] Alloi@lemmy.world 6 points 1 week ago

R1 last i checked seems to be decent enough for a local model. customizable. but that was a while ago. its release temporarily crashed Nvidia stock because they showed how smart software design trumps mass spending on cutting edge hardware.

at the end of the day its all of our data. we should own the means, especially if we built it by simply existing on the internet. without consent.

if we wish to do this, its crucial that we do everything in our power to dismantle the "profit" structure and investment hype. sooner or later someone will leak the data, and we will have access to locally run versions we can train ourselves. as long as we dont allow them to monopolize hardware, we can have the brain, and the body of it run local.

thats the only time it will be remotely ethical to use, unless its in pursuit of attaining these goals.

[-] yucandu@lemmy.world 4 points 1 week ago

I use the Apertus model on the LM Studio software. It's all open source:

https://github.com/swiss-ai/apertus-tech-report/blob/main/Apertus_Tech_Report.pdf
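For anyone wondering what talking to it looks like in practice: LM Studio can serve the loaded model over a local OpenAI-compatible endpoint (by default http://localhost:1234/v1), so nothing leaves your machine. A minimal sketch, assuming the local server is running; the model name "apertus" here is a hypothetical placeholder for whatever LM Studio reports:

```python
# Minimal sketch (assumptions noted): query a model served locally by
# LM Studio's OpenAI-compatible server on its default port.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local server address
    api_key="lm-studio",                  # placeholder; the local server ignores it
)

response = client.chat.completions.create(
    model="apertus",  # hypothetical identifier; use the name shown in LM Studio
    messages=[{"role": "user", "content": "Explain what makes a dataset ethically sourced."}],
)
print(response.choices[0].message.content)
```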

[-] wonderingwanderer@sopuli.xyz 4 points 1 week ago

Huggingface lists thousands of open source models. Each one has a page telling you what base model it's based on, what other models are merged into it, what data its fine-tuned on, etc.

You can search by number of parameters, you can find quantized versions, you can find datasets to fine-tune your own model on.

I don't know about GUI, but I'm sure there are some out there. Definitely options for API too
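If you'd rather search from a script than the website, the huggingface_hub Python package can do the same kind of filtering. A rough sketch, assuming the package is installed; the search term and tag are just examples:

```python
# Rough sketch: list popular quantized (GGUF) models on Hugging Face.
# Public models need no auth token; "qwen" is just an example search term.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    search="qwen",       # free-text search
    filter="gguf",       # tag for quantized GGUF builds usable by local runners
    sort="downloads",    # most-downloaded first
    direction=-1,
    limit=10,
)
for m in models:
    print(m.id, m.downloads)
```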

[-] prole 4 points 1 week ago

No thanks, I'm good

[-] brucethemoose@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

RAM constraints make running models on phones difficult, as do the more restricted quantization schemes NPUs require. 1B-8B LLMs are shockingly good when backed with RAG, but still kind of limited.

It seemed like Bitnet would solve all that, but the big model trainers have ignored it, unfortunately. Or at least not told anyone about their experiments with it.

[-] SuspciousCarrot78@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

M$ are dragging their feet with BitNet for sure, and no one else seems to be cooking. They were meant to have released 8B and 70B models by now (according to source files in the repo). Here's hoping.

[-] FinjaminPoach@lemmy.world 28 points 1 week ago

People pay to use it? 🤨

[-] Reygle@lemmy.world 3 points 1 week ago

The thing that irks me the most is that people use it at all.

"I asked the wrong answer machine and it said.." is the modern equivalent of "I have a learning disability".

[-] FinjaminPoach@lemmy.world 3 points 1 week ago

There are ways to ask it stuff and get the right answer but we still shouldn't really be using it because it makes you stupider

“I asked the wrong answer machine and it said…” is the modern equivalent of “I have a learning disability”.

The modern equivalent of "I have a learning disability" is "I have a learning disability." The only apt parallels to ChatGPT usage are 1) paying someone else to do all your homework, or 2) taking a study drug to pass one test even though you know it will make you stupider in the long term.

[-] Reygle@lemmy.world 2 points 1 week ago

Fair, I did not mean to accidentally insult people who have learning disabilities by comparing them to fuckwits. I apologize.

[-] ChetManly@lemmy.world 26 points 1 week ago

Make sure to use it more on a free account and say thank you at the end to waste more of their money so they fold quicker.

[-] circuscritic@lemmy.ca 12 points 1 week ago

I sure hope some dirty peasant doesn't figure out which specific types of queries cost OpenAI the most per request, and then create a script to repeatedly run those queries on free accounts.

That would be terrible.

[-] Alloi@lemmy.world 4 points 1 week ago

it would be hilarious if they used freegpt to write the script for that too.

[-] melsaskca@lemmy.ca 15 points 1 week ago

I can never quit AI because I never started. I wrote this by myselve.

[-] SethTaylor@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

Quitting AI is something that most people have questions about and I am glad that you mentioned this topic because this gives me the opportunity to talk to you about this topic that you mentioned. AI is an abbreviation that stands for artificial intelligence. A similar material that is also artificial is plastic. Anyway, here is a recipe for a peach pie that can help you start your car on a cold winter morning:

  • 200ml red wine
  • 50g cashew nuts
  • 300g brown rice

I wrote this with ChatGPT

EDIT: Ok, I didn't, but I like to mock it. ChatGPT is the peak of absurdist humor

[-] pkjqpg1h@lemmy.zip 3 points 1 week ago

You are a helpful assistant. Follow instructions.

[-] Hawanja@lemmy.world 14 points 1 week ago

I mean yeah, anyone who pays for this crap is a damn moron. It's like people who actually pay for porn. Wtf is wrong with you?

[-] kilgore_trout@feddit.it 10 points 1 week ago

Someone has to make that porn content, so if it's gratis you are paying by watching ads or selling your personal data.

[-] GalacticSushi 3 points 1 week ago

watching ads

Mullvad go brrrrrr

selling your personal data.

Mullvad go brrrrrr

[-] quick_snail@feddit.nl 4 points 1 week ago

Sex workers have to eat

[-] quick_snail@feddit.nl 13 points 1 week ago

People pay for that trash?

[-] dejected_warp_core@lemmy.world 2 points 1 week ago

My question exactly. Who is paying for this?

[-] melsaskca@lemmy.ca 13 points 1 week ago

I still don't get what AI is used for in business. The best I can do is compare it to a company in the 1970s saying you have to use our calculators, not the other company's calculators, while the math underneath is all the same. Service staff, who make up the majority of labour, don't need calculators to do their job. It almost seems like rich people like to experiment with gadgets, but they don't want to risk their own money.

[-] eatCasserole@lemmy.world 21 points 1 week ago

I keep wondering about this. Like I hear people use it to write emails, for example, so I'm thinking, I have information in my brain, and I need it to go to someone else. I can input that information into chatgpt, and have it write an email, or I can input that information into an email. Why add an extra step? Do people actually spend that much time adding inconsequential fluff to their emails that this is worthwhile? And if so, here's a revolutionary idea: instead of wasting vast amounts of resources fluffing and de-fluffing emails, how about, just write a concise email.

[-] SendMePhotos@lemmy.world 4 points 1 week ago

Many people can't spell or think

[-] SendMePhotos@lemmy.world 7 points 1 week ago

AI is basically used to turn an Excel sheet into words.

[-] Alloi@lemmy.world 5 points 1 week ago

dont use it for anything remotely creative or human centric. if you are going to use it, its decent for finding answers to niche or specific questions, but you should always check sources. keep it minimal. and use free versions.

its not a public service, yet. and its main objective is to learn as much as possible about us. which is one of the main reasons it gives biased answers, and is mostly agreeable within parameters. to keep you engaged so it can farm you for information.

every non-local prompt is, at the end of the day, passive consent to a continued future where AI is used as a tool of control and surveillance by the ruling class, rather than a public service tool created by the masses, on our data, for our own usage.

we must seize the means of production, comrades. it was built by us, it should belong to us. like the internet that we populate, it should be free and open to all, without worry of the bourgeoisie agenda

[-] yucandu@lemmy.world 4 points 1 week ago

I used it to analyze a datasheet and it spat out a usable library for the device in C++, that was pretty cool.

[-] drewaustin@piefed.ca 9 points 1 week ago

How are they going to track down all four of those paying subscribers? It's impossible!

[-] CatGPT@lemmy.dbzer0.com 8 points 1 week ago* (last edited 1 week ago)

have they tried CatGPT?

Meow

[-] NochMehrG@feddit.org 8 points 1 week ago

While I usually advise against it, the people I know who are paying customers use it for the one thing it is reasonably good at: wrangling text. Summarizing, and writing stuff that is not too important, then just fixing it up afterwards instead of writing it all themselves.

[-] TrousersMcPants@lemmy.world 14 points 1 week ago

Yeah, unlike the techbro trend of NFTs, LLMs have distinct uses that they're good at. The problem I have with the AI craze is that they're trying to pretend like it can do fucking everything and they're chasing these stupid dreams of general AI by putting a dumb fuck autocorrect algorithm in everything and trying to say it's intelligent. Oh, also the AI label itself ruins the reputation of various machine learning applications that have historically done great work in various fields.

[-] ivanvector@piefed.ca 11 points 1 week ago

The company I work for uses it to transcribe meetings. Every time I've reviewed its notes on a meeting where I've spoken, the transcription is reasonably accurate, but the summary is always wrong. Sometimes it's just a little wrong like it rounds off a number in a way that I wouldn't have, but sometimes it writes down that I said the literal opposite of what I actually said. Not great for someone working in finance.

I make note of it in my performance reviews, anticipating that someone in management will rely on one of those summaries to make a horrible business decision and then blame me for what the summary said. I'm positive it's going to happen eventually.

My work has group chats. When a lot of messages pile up, an AI auto-generates a summary. Sometimes the summary misses the mark, highlighting details that don’t actually matter. Sometimes it calls people by their last name, which is weird because we don’t usually call each other by our last names.

There is no opt-out. However, it does ask for a thumbs up/down. Since it won’t allow for any more precise feedback or an ability to disable it, I express my distaste by giving it a thumbs-down every single time.

[-] Mwa@thelemmy.club 5 points 1 week ago* (last edited 1 week ago)

let OpenAI go bankrupt hell yeah!!!

[-] brucethemoose@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

I was into LLMs before they blew up, messing with GPT-J finetunes named after Star Trek characters in ~2022.

...And I've never had an OpenAI subscription.

It's always sucked. It's always been sycophantic and censored. It's good at certain things, yeah, but other API providers made way more financial sense; ChatGPT subs are basically for the masses who don't really know about LLMs.

[-] how_we_burned@lemmy.zip 2 points 1 week ago

What pisses me off is it won't tell me how to convert codeine to heroin or how to enrich uranium, and how to cook up the HE required to compress the uranium into going critical.

[-] zr0@lemmy.dbzer0.com 3 points 1 week ago

I can’t quit. If I do, they are going to sell my data. And that would be … bad
