slop rule (lemmy.blahaj.zone)
submitted 2 weeks ago by als to c/onehundredninetysix
top 37 comments
[-] gegil@sopuli.xyz 61 points 2 weeks ago* (last edited 2 weeks ago)

Windows 11 is not being developed by people. It is entirely undeveloped by AI.

[-] Gork@sopuli.xyz 46 points 2 weeks ago

Sounds like all the hard work they did refactoring Windows 10 is gonna go to waste with the new AI vibe coding in Windows 11.

[-] morto@piefed.social 25 points 2 weeks ago

If they keep on like this, ReactOS might actually one day surpass Windows and become the better Windows-like OS

[-] Cethin@lemmy.zip 6 points 2 weeks ago

Isn't it already?

[-] NONE_dc@lemmy.world 32 points 2 weeks ago

What a punchline, finding out how absurdly long the image is while scrolling lol

[-] Rollade@lemmy.ml 24 points 2 weeks ago

Well, letting us put the taskbar on top or on the side is too hard, but breaking everything else with AI integration is quite easy lmao

[-] TotallynotJessica 16 points 2 weeks ago

I'm so glad to not be using Windows 11. Saving $100 on the license alone is worth it.

[-] Smorty 14 points 2 weeks ago

just 30%?...

that should cause much less harm.

are the devs there that lazy? do they just not review the code?
I thought that these companies especially do code review and such...

[-] Catpuccino@lemmy.world 19 points 2 weeks ago

It's a mix. Some devs are definitely lazy, but from what I have heard there is also a big push for devs to deploy faster and to actively use AI or be punished. So there is an incentive to just get code out to meet deadlines/expectations and move on to the next task. The amount of work put on an individual dev is rising with these accelerated expectations in mind, and getting another dev to review your code takes time from them and their own stack of tasks, so code review quality has fallen greatly. Not to mention the high likelihood that AI is also doing code reviews to make up for this, and we can only guess how many reviews get approved with "you're absolutely right!"

[-] Truscape 14 points 2 weeks ago

They got rid of their entire QA team, so the cancer spreads much more unchecked than usual.

[-] gegil@sopuli.xyz 11 points 2 weeks ago* (last edited 2 weeks ago)

30% is last year's news. Now Windows 11 is entirely developed by AI. How it works:

1: AI hallucinates a new feature.

2: AI agents generate enough code to meet investor demands.

3: AI QA agents are gaslit into approving the code.

4: The result is a feature that does not even work, plus ten breaking changes in core system features that worked completely fine but the AI rewrote anyway.

[-] Smorty 6 points 2 weeks ago* (last edited 2 weeks ago)

.... i assume this is you hallucinating a pipeline?

EDIT: what i meant was: i assume this is your guess?

[-] Filetternavn 6 points 2 weeks ago

OP was making a joke. It was sarcasm.

[-] Smorty 2 points 2 weeks ago
[-] gegil@sopuli.xyz 3 points 2 weeks ago

Yes, this is my guess, but it is indeed accurate.

[-] Bytemeister@lemmy.world 1 points 2 weeks ago

You forgot part 5: AI scrapes MSlop forums for reported issues with the new update, then posts someone's workaround as the official fix, while the bots in the back hallucinate another fix for next week

[-] Ephera@lemmy.ml 10 points 2 weeks ago

I mean, for the bugs in the screenshot, even just 1% of bad code slipping through is more than plenty.

And AI-generated code is extremely time-consuming and tricky to review, because you can't assume there is rhyme and reason to the changes, so I would be surprised if they actually put in the effort to review it properly.

[-] megopie 9 points 2 weeks ago* (last edited 2 weeks ago)

I suspect it’s largely the result of failing internal organization. Like, a detached-from-reality and ideologically motivated faction within corporate leadership has seized control of the company and fired anyone who told them they were being idiots or opposed their initiatives. People are probably getting promoted or hired into management positions based on their ability to tell leadership what they want to hear rather than their ability to actually run things. Everyone lower down has internalized that telling the higher-ups what’s going on will get them fired, and is only telling them what they want to hear. Resources and people got diverted away from projects that management doesn’t care about (i.e. those with no potential to drive growth), and they’re just assuming that the “increase in productivity from AI” will make up the difference. Now everything is melting down and their core product is losing market share while the new products intended to drive growth are failing to see meaningful adoption. Heads will probably roll, but it’s unlikely it will be the people who are causing the problem.

That’s what it looks like to me from the outside.

[-] pastel_de_airfryer@lemmy.eco.br 13 points 2 weeks ago

A few weeks ago, my laptop speakers stopped working out of nowhere on Windows 11. They work perfectly on CachyOS. My Windows 11 partition won't live for much longer.

[-] thethunderwolf@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

Audio issues have now been ported to Windows by AI

[-] ZombiFrancis@sh.itjust.works 11 points 2 weeks ago

I have to use windows 11 and teams for all my work. Teams is being utilized as the central file management system.

The benefit these days is that if something doesn't work I can shrug and say 'must be AI' or 'just Windows 11 things' and generally get tacit agreement.

[-] Passerby6497@lemmy.world 10 points 2 weeks ago

using teams for file management

* Looks inside *

This is just SharePoint 🤮

[-] Zanathos@lemmy.world 5 points 2 weeks ago

This is just SharePoint

* Looks inside *

This is just OneDrive 🤮

[-] Passerby6497@lemmy.world 4 points 2 weeks ago* (last edited 2 weeks ago)

This is just OneDrive 🤮

* Looks inside *

This is still just SharePoint!?!?!

[-] alt_xa_23@lemmy.world 4 points 2 weeks ago

I did an internship at a company that used SharePoint mapped as a network drive for file sharing. It was...not an ideal setup

[-] morto@piefed.social 8 points 2 weeks ago

Life really imitates art these days

[-] ToastedPlanet 5 points 2 weeks ago* (last edited 2 weeks ago)

The last time I touched a Windows device was to make Windows 11 look like Windows 10, so the taskbar could be moved to the top. edit: typo

[-] ToastedPlanet 6 points 2 weeks ago

Actually, more recently I was setting up an older, but still great, Canon printer with a new Windows 11 machine. I had to install a driver for Windows 7/8/8.1/10 because there is no working Windows 11 driver, and the 7/8/8.1/10 driver still works. XD

[-] thethunderwolf@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

then: michaelsoft binbows

now: microslop losedows

next year for sure: year of the linux desktop

[-] Bytemeister@lemmy.world 3 points 2 weeks ago

Current one I'm dealing with...

Word will open (uncommanded) Copilot processes in the background, which immediately request location permissions every few minutes.

Microsoft's workaround: just let Copilot know where you are, bro.

[-] 1rre@discuss.tchncs.de 2 points 2 weeks ago

I've started using AI pretty heavily for writing code in languages I'm not as confident in (especially JS and SQL), after being skeptical for a while, as well as for code that can be described briefly but is tedious to write. I think the problem here is the word "by" - it would be better to say "with".

You don't say that 90% of code was written by code completion plugins, because it takes someone to pick the right thing from the list, check the docs to see that it's right, etc.

It's the same for AI. I check the "thinking"/planning logs to make sure the logic is right - sometimes it is, sometimes it isn't, at which point you can write a brief pseudocode sketch of what you want to do. Sometimes it starts on the right path then goes off, at which point you can say "no, go back to this point", and generally it works well.

I'd say this kind of code is maybe 30-50% of what I write, the other 50-70% being more technically complex and in a language I'm more experienced in. So I can't fully believe the 30% figure, since you're going to have some people wasting time by not using it when they could use it for a speedup, and others using it too much and wasting time trying to implement more complex things than it's capable of. The latter irks me especially after having to spend 3½ hours yesterday reviewing a new hire's MR - time they could've spent actually learning the libraries, or I could've spent implementing the whole ticket with some left over to teach them.

[-] TonyTonyChopper@mander.xyz 10 points 2 weeks ago

Large language models can't think. The "thinking" it spits out to explain the other text it spits out is pure bullshit.

[-] 1rre@discuss.tchncs.de 5 points 2 weeks ago

Why do you think I said "thinking"/planning instead of just calling it thinking...

The "thinking" stage is actually just planning so that it can list out the facts and then try and find inconsistencies, patterns, solutions etc. I think planning is a perfectly reasonable thing to call it, as it matches the distinct between planning and execution in other algorithms like navigation.

[-] AliasAKA@lemmy.world 9 points 2 weeks ago

“Thinking” is just an arbitrary process to generate additional prompt tokens. By now they’ve realized people suck at writing prompts, and that their models clearly lack causal or state models of anything. They’re simply good at word substitution into a context that is similar enough to the prompt they’re given. So a solution to sucky prompt writing, and to selling people on its capacity (think Full Self-Driving: it’s never been full self-driving, but it’s marketed that way to make people think it is super capable), is to simply have the model itself look up better templates within its training data that tend to result in better-looking and better-sounding answers.

The thinking is not thinking. It’s fancier probabilistic lookup.
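That "probabilistic lookup" point can be illustrated with a toy sketch. This is not a real LLM (real models use learned weights over long contexts, not raw counts), and the corpus here is invented for illustration; it is just a bigram table that "generates" text by repeatedly looking up the most frequent successor of the current word:

```python
from collections import Counter, defaultdict

# Toy illustration (invented corpus, not a real model): a bigram table
# that "generates" text purely by lookup -- no world model, no reasoning.
corpus = "the model picks the most likely next word and the model repeats".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, n=5):
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:
            break
        # The entire "thought process": look up the most common successor.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Scaling this up swaps the count table for a trained network conditioned on a much longer context, but the generation step is still the same lookup-and-append loop.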

[-] zurchpet@lemmy.ml 6 points 2 weeks ago

I like this write up.

It reflects my experience with AI assisted code generation.

[-] 1rre@discuss.tchncs.de 2 points 2 weeks ago

That kind of matches my experience, but some of the negatives they bring up can be fixed by monitoring thinking mode. If it starts to make assumptions on your behalf, or goes down the wrong path, you can interrupt it and tell it to pursue the correct line without polluting the context.

this post was submitted on 06 Feb 2026
551 points (100.0% liked)
