103
Red Hat pushing AI (fedoramagazine.org)
submitted 2 days ago* (last edited 2 days ago) by vogi@piefed.social to c/fuck_ai@lemmy.world

Users point out in the comments that the LLM recommends APT on Fedora, which is clearly wrong. I can't tell if the OP is responding with an LLM as well; it would be really embarrassing if so.

PS: Debian is really cool btw :)

all 35 comments
[-] CodenameDarlen@lemmy.world 39 points 2 days ago* (last edited 2 days ago)

That's exactly why I migrated from Fedora to Arch: I want a distro with little to no corporate influence.

To me, the real Linux experience is Arch.

I didn't want to wait for some bad decision to be made. Time is showing I made the right call.

[-] gustofwind@lemmy.world 15 points 2 days ago

Now I'm thinking of doing the same, because the entire reason I chose Fedora was to take advantage of their institutional support.

But if those institutions are going to sell the floor right out from under me to AI, then there's no point 🫩

[-] vogi@piefed.social 4 points 2 days ago* (last edited 2 days ago)

Check out Debian as well! (or LMDE) I recently switched from Arch to Debian (with Sway) for some more stability. Love it so far; I don't have to fuss over configs or broken dependencies anymore and can just focus on actually getting some work done. Arch does have a really active community, though, with a big repository (whose quality varies a lot) and nice documentation.

[-] gustofwind@lemmy.world 4 points 2 days ago

I might try the Debian testing branch first because, frankly, I do not want to deal with Arch Linux... I just want to use my computer.

[-] cecilkorik@piefed.ca 1 points 2 days ago

Check out PikaOS. It's pretty much pure Debian, but geared toward gaming; it makes updating drivers and managing games easy.

[-] SuperUserDO@piefed.ca 8 points 2 days ago

IMO there are two main Linux camps, and most users fall somewhere in between: rolling-release lovers who want to tinker (e.g. Arch), and people who want stability over everything (e.g. Debian).

The only truly wrong answer is paying for RHEL.

[-] myrmidex@belgae.social 3 points 2 days ago

Same. I was on Fedora Atomic, but the stench hardened, so I jumped over to NixOS a few weeks back. Glad I did.

I'm not ready to go to Arch yet since I'm not comfortable enough, but I'm curious: is Arch based in the US or outside? I know Mint is based in the EU (Ireland). I do wish I'd gone with Mint Debian instead of Mint Ubuntu, but next time.

[-] brucethemoose@lemmy.world 4 points 2 days ago

It’s international, but there are a lot of Europeans: https://archlinux.org/people/developers/

CachyOS has its roots in the Polish Arch community if I recall correctly. It’s much less daunting, and I’d highly recommend it.

[-] RipLemmDotEE@lemmy.today 2 points 2 days ago

Check out Garuda. It's a really stable, well-maintained Arch-based distro. I've been running the same install for over 9 months with only one screw-up, which Garuda's own health tool fixed for me.

It's been a great way for me to learn Linux and Arch, and it rivals Bazzite in my own gaming benchmarks.

Thanks! I'll look at this.

It's also the best Final Fantasy eikon.

[-] aesthelete@lemmy.world 20 points 2 days ago* (last edited 2 days ago)

MCP servers are fucking bizarre too. I expected them to be like a normal API server, not realizing how little LLM developers wanted to do anything but make more chatbots. The default implementation reads from stdin and writes to stdout, with the client launching the server as a child process. There's "streamable HTTP" as well. It's all so that it can have a fucking "chat" with the server involved. 🤢
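For the curious, the stdio transport really is about that simple. Here's a hypothetical sketch of a line-delimited JSON-RPC loop (the names `handle_request` and the `ping` method are mine for illustration, not from any real MCP SDK):

```python
import json
import sys

def handle_request(req: dict) -> dict:
    """Dispatch a single JSON-RPC request (only a toy 'ping' method here)."""
    if req.get("method") == "ping":
        return {"jsonrpc": "2.0", "id": req.get("id"), "result": {"ok": True}}
    # Proper JSON-RPC error for anything we don't implement.
    return {"jsonrpc": "2.0", "id": req.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """Read one JSON-RPC request per line from stdin, reply on stdout.
    This is the 'launch a process and talk over pipes' transport."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        reply = handle_request(json.loads(line))
        stdout.write(json.dumps(reply) + "\n")
        stdout.flush()
```

The client spawns this as a subprocess and pipes JSON back and forth, which is why it feels nothing like a normal API server.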

Do not hand these things unsupervised system access, they will do bizarre bullshit and ruin your system.

[-] AA5B@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Seems like everyone is taking a left turn into Crazyville…

This is a good approach.

  1. It's an MCP server, a "bridge": a standard way LLMs can talk to your system. It's not an LLM. It doesn't mandate an LLM. It doesn't tie you to a specific LLM.
  2. It's optional. Don't use it, or don't install it. No harm done. Even if it's installed and running, if you don't use an LLM with local access, no harm done.
  3. Even the increased attack surface is not a big deal, since it is local, optional, and focused on reading statuses rather than executing actions.
  4. It's an open standard. If you decide to use it with an LLM but don't like the results, try a different LLM.
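The read-only property can even be enforced mechanically. A hypothetical sketch (the tool names here are illustrative, not from the actual Red Hat server): every tool call goes through an allowlist, and anything that would execute an action is refused outright.

```python
# Illustrative allowlist: only status-reading tools are reachable.
READ_ONLY_TOOLS = {"get_service_status", "read_journal"}

def dispatch(tool: str, handlers: dict, *args):
    """Run a tool only if it is on the read-only allowlist."""
    if tool not in READ_ONLY_TOOLS:
        raise PermissionError(f"{tool!r} is not read-only; refusing")
    return handlers[tool](*args)
```

With this shape, even a misbehaving LLM can only ask questions; it can't restart services or touch files.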
[-] nupo@quokk.au 3 points 1 day ago

This is a community for criticizing LLMs. There is, therefore, no positive interest in a bridge for LLMs. This community is optional. Don't subscribe, or don't read it. No harm done.

[-] AA5B@lemmy.world 1 points 1 day ago

Yeah, I suppose so. I find plenty to hate about the way companies and too many individual apps talk about LLMs. I really hate that it’s one of the metrics my employer looks at. I always hate how wasteful speculative bubbles like this are.

But maybe this isn’t the place for me since I see some good use cases and appreciate the few times someone does it right.

  • I hate how companies like Google and Microsoft are putting LLMs everywhere, making things worse for everyone and forcing everyone to deal with it
  • I find it strange that there's an LLM in my car: it carries a decent conversation, but it can't actually do anything
  • I like this approach of building the MCP server: keeping it read-only, keeping it optional, and leaving the choice of LLM open. I also like Apple's approach, where they seem to care about privacy, about executing on-device, and about taking their time to put it in useful places
[-] Finalsolo963 1 points 1 day ago

Yeah, most people, myself included, don't like AI on principle, but there are valid use cases for it, and not having the capability of integrating with AI tools is going to be a dealbreaker for someone.

That said, I've heard MCP is a bit of a shitshow of a standard and is woefully inefficient.

[-] AA5B@lemmy.world 1 points 1 day ago

Yeah it is. I have no idea how you more efficiently do that task, but there’s got to be a better way

[-] TheImpressiveX@lemmy.today 16 points 2 days ago

Et tu, Brute?

[-] brucethemoose@lemmy.world 12 points 2 days ago* (last edited 2 days ago)

gpt-oss 20B

See all the errors in that rambling wall of slop (which they posted without even checking, for some reason?)

Trying to use a local LLM… could be worse. But in my experience, small ones are just too dumb for stuff beyond fully automated RAG or other really focused cases. They feel like fragile toys until you get to 32B dense or ~120B MoE.

Doubly so behind buggy, possibly vibe coded abstractions.

The other part is that Goose is probably using a primitive CPU-only llama.cpp quantization. I see they name check “Ryzen AI” a couple of times, but it can’t even use the NPU! There’s nothing “AI” about it, and the author probably has no idea.

I’m an unapologetic local LLM advocate in the same way I’d recommend Lemmy/Piefed over Reddit, but honestly, it’s just not ready. People want these 1 click agents on their laptops and (unless you’re an enthusiast/tinkerer) the software’s simply not there yet, no matter how much AMD and such try to gaslight people into thinking it is.

Maybe if they spent 1/10th of their AI marketing budget on helping open source projects, it would be…

[-] TipsyMcGee@lemmy.dbzer0.com 2 points 2 days ago

I have been using gpt-oss:20b to help me with bash scripts, and so far it's been pretty handy. But I make sure to know what I'm asking for and to understand the output, so basically I might have been better off with 2010-ish Google and non-enshittified community resources.

[-] brucethemoose@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.

It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
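Concretely, if the local model sits behind an OpenAI-compatible endpoint (llama.cpp's server exposes `/v1/chat/completions`, for instance), a low-temperature request for a checkable task might look like this sketch. The URL and model name are placeholders, not a real setup:

```python
import json
import urllib.request

def build_request(prompt: str, temperature: float = 0.1) -> dict:
    """Build a chat-completion payload; low temperature suits checkable
    tasks like bash scripts, where determinism beats 'creativity'."""
    return {
        "model": "gpt-oss-20b",   # placeholder: whatever the server loaded
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_llm(prompt: str,
                  url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """POST to an assumed local OpenAI-compatible endpoint."""
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point is the `temperature` knob: for a bash script you can verify by eye, you want the model repeating its most likely answer, not speculating.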

Contrast that with Red Hat’s examples.

They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.

Its assessment is long and not easily verifiable; note how the blog writer even confessed, "I'll check if it works later." It requires more "world knowledge," and long context is hard for LLMs with few active parameters.

Hence, you really want a model with more active parameters for that… Or, honestly, just reaching out to a free LLM API.

Thing is, that Red Hat blogger could probably run GLM Air on his laptop and get a correct answer spat out, but it would be extremely finicky and time-consuming.

[-] cupcakezealot@piefed.blahaj.zone 13 points 2 days ago

i mean the entire concept of red hat is corporations profiting off of open source

[-] Dogiedog64@lemmy.world 5 points 2 days ago

The comments on the blog post are eviscerating the author. Literally nobody asked for this, and they're making DAMN SURE the author knows it.

[-] redshift@lemmy.zip 2 points 2 days ago

And now the comments are closed. Embarrassing.

[-] _Nico198X_@europe.pub 7 points 2 days ago
this post was submitted on 12 Dec 2025