[-] ZILtoid1991@lemmy.world 4 points 17 hours ago
> be me
> installed VScode to test whether language server is just unfriendly with KATE
> get bombarded with "try our AI!" type BS
> vomit.jpg
> managed to test it, but the AI turns me off
> immediately uninstalled this piece of glorified webpage from my ThinkPad

It seems I'll have to do more of my work in KATE. (Does the LSP plugin for KATE handle things differently from the standard in some known way?)

[-] Clent@lemmy.dbzer0.com 22 points 1 day ago

What a trash clickbait headline. That's not how the expression "saying the quiet part out loud" works. This isn't a secret, it's not unspoken, and it certainly doesn't reveal some underlying motive.

[-] bytepursuits@programming.dev 9 points 1 day ago

They said they're still adding all of it. They're adding AI, just not talking about it. Which is probably the right call 😂

[-] Blackmist@feddit.uk 9 points 1 day ago

Yeah, I'm not sure what the point of a cheap NPU is.

If you don't like AI, you don't want it.

If you do like AI, you want a big GPU or to run it on somebody else's much bigger hardware via the internet.

[-] rumba@lemmy.zip 3 points 22 hours ago

A cheap NPU could have some uses. If you have a background process that runs continuously, offloading the work to a low-cost NPU can save you both power and processing. Camera-based authorization: if you get up, it locks; if you sit down, it unlocks. No reason to burn a CPU core or the GPU for that. Recognition on security/nanny cameras. Driving systems that notice a driver losing consciousness and pull over. We can accomplish all of this now with CPUs/GPUs, but purpose-built hardware that doesn't drain other resources isn't a bad thing.

Of course, there's always the downside that they use that chip for Recall. Or malware gets hold of it for Recall-style snooping or ID theft. There's a whole lot of bad you can do with a low-cost NPU too :)
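For a sense of what that kind of always-on, low-priority job looks like, here is a toy sketch of the camera lock/unlock idea. It is purely illustrative: it runs the detector on the CPU with OpenCV, where an NPU build would swap in a hardware-accelerated model, and it assumes opencv-python, a webcam at index 0, and a Linux session lockable via `loginctl`:

```python
# Toy sketch of presence-based locking; purely illustrative.
# Assumes opencv-python, a webcam at index 0, and a Linux session
# that can be locked with `loginctl lock-session`. A real NPU build
# would run the detector on the NPU instead of the CPU.
import subprocess
import time

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)

ABSENT_SECONDS = 10  # lock after this long without a face in view
last_seen = time.monotonic()

while True:
    ok, frame = camera.read()
    if not ok:
        break  # camera unplugged or unavailable
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.monotonic()
    elif time.monotonic() - last_seen > ABSENT_SECONDS:
        subprocess.run(["loginctl", "lock-session"])
        last_seen = time.monotonic()  # avoid re-locking every second
    time.sleep(1)  # one check per second is plenty for this job
```

A loop like this runs forever, which is exactly the argument above: even a cheap dedicated chip beats waking a CPU core once a second for the life of the machine.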

[-] EndlessNightmare@reddthat.com 23 points 1 day ago

I actually do care about AI PCs. I care in the sense that it is something I want to actively avoid.

[-] Electricd@lemmybefree.net 14 points 1 day ago* (last edited 1 day ago)

I want to run LLMs locally, or things like TTS or STT, so it's nice, but there's no real support right now

Most people won't care about it or use it

LLMs are best used when it’s a user choice, not a platform obligation
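For what it's worth, "locally" is already doable today, just not on the NPU. A minimal sketch using llama-cpp-python, where the model path is a placeholder and any instruction-tuned GGUF should work; by default this runs on the CPU, which is rather the "no real support" point:

```python
# Minimal local-LLM sketch using llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder;
# any instruction-tuned GGUF should work. By default this runs on
# the CPU - no NPU involved.
from llama_cpp import Llama

llm = Llama(model_path="models/some-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

result = llm(
    "Q: What is an NPU, in one sentence?\nA:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(result["choices"][0]["text"].strip())
```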

[-] UsoSaito@feddit.uk 32 points 1 day ago

It doesn't confuse us... it annoys us with blatantly wrong information, e.g. glue as a pizza ingredient.

[-] Electricd@lemmybefree.net 1 points 1 day ago

That's what happens when you use 3-year-old models

[-] AngryRobot@lemmy.world 13 points 1 day ago

Are you trying to make us believe that AI doesn't hallucinate?

[-] oascany@lemmy.world 2 points 1 day ago

It doesn't; it generates incorrect information. This is because AI doesn't think or dream - it's a generative technology that outputs information based on whatever went in. It can't hallucinate because it can't think or feel.

[-] Blue_Morpho@lemmy.world 9 points 1 day ago

Hallucinate is the word that has been assigned to what you described. When you don't attach additional emotional baggage to the word, hallucinate is a reasonable word to pick to describe when an LLM follows a chain of words that have internal correlation but no basis in external reality.

[-] oascany@lemmy.world 3 points 1 day ago

Trying to isolate "emotional baggage" is not how language works. A term means something and applies somewhere. Generative models do not have the capacity to hallucinate. If you need to apply a human term to a non-human technology that pretends to be human, you might want to use "confabulate": hallucination is a response to a stimulus, while confabulation is, in simple terms, bullshitting.

[-] Blue_Morpho@lemmy.world 2 points 22 hours ago

> A term means something and applies somewhere.

Words are redefined all the time. Kilo should mean 1000. It was the international standard definition for 150 years. But now with computers it means 1024.

Confabulation would have been a better choice. But people have chosen hallucinate.

[-] mushroommunk@lemmy.today 2 points 22 hours ago

Although I agree with you, you chose a poor example.

Kilo doesn't mean 1024, that's kibi. Many of us in tech differentiate because it's important.

[-] Electricd@lemmybefree.net 1 points 1 day ago

No, but I was specifically talking about the glue and pizza example

[-] musubibreakfast@lemmy.world 1 points 1 day ago

I'm readying myself for some new bullshit. I just hope it's not tech-related.

[-] samus12345@sh.itjust.works 4 points 22 hours ago

Does a third world war count as tech related? It certainly uses a lot of tech!

[-] mctoasterson@reddthat.com 91 points 2 days ago

What people don't want is black-box AI agents installed system-wide that use the carrot of "integration and efficiency" to justify bulk data collection - collection the end user implicitly agrees to by logging into the OS.

God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.

[-] Gsus4@mander.xyz 30 points 2 days ago

Unveiling: the APU!!! (ad processing unit)

[-] Aceticon@lemmy.dbzer0.com 12 points 2 days ago* (last edited 2 days ago)

> God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.

I think that way of thinking is still pretty niche.

Hope it's becoming more widespread, but in my experience most people don't actually concern themselves with "my device does some stuff in the background that goes beyond what I want it for" - in their ignorance of technology, they just assume it's something that's necessary.

I think where people have problems is mainly at the level of "this device is slower at doing what I want it to do than the older one" (for example, because AI makes it slower), "this device costs more than the other one without doing what I want it to do any better" (for example, they're unwilling to pay more for the AI functionality) or "this device does what I want it to do worse than before/that-one" (for example, AI is forced on users, actually making the experience of using that device worse, such as with Windows 11).

[-] village604@adultswim.fan 13 points 2 days ago

I think you're making the mistake of thinking the general population is as informed or cares as much about AI as people on Lemmy.

[-] InFerNo@lemmy.ml 26 points 1 day ago

"Recall was met with serious backlash". Meanwhile I'm looking for a simple setting regarding the power button on my wife's phone and stumble upon a setting that is enabled by default that has Gemini scanning the screen and using it for whatever it is that it does, but my wife doesn't use any AI features on her device. Correct me if I'm wrong, but isn't this basically the same as Recall? Google was just smart enough to silently roll this out.

[-] Xanvial@lemmy.world 13 points 1 day ago

Isn't this only triggered when the user invokes Gemini (or Google Assistant before it), to use something like Circle to Search? I'm fairly sure this already existed before the AI craze.

[-] Rooster326@programming.dev 1 points 1 day ago

That is the assumption, but it isn't explicitly spelled out anywhere. I'm not sure you can trust it.

[-] arararagi@ani.social 2 points 1 day ago

Yeah, Google Assistant was able to read your screen and take screenshots when asked, years ago.

[-] scarabic@lemmy.world 29 points 1 day ago* (last edited 1 day ago)

As time goes by I’m finding a place for AI.

  1. I use it for information searches, but only in cases where I know the information exists and there is an actual answer. Like history questions or asking for nuanced definitions of words and concepts.

  2. I use it to manipulate documents. I have a personal pet peeve about the format of most recipes, for example. Recipes always list the ingredient amounts in a table at the top, but then down in the steps they just say “add the salt” or “mix in the flour.” Then I have to look up at the chart and find the amount of salt/flour, and then I lose my place in the steps and have to find it again. I just have AI throw out the chart and integrate the amounts into the steps: “mix in 2 cups of flour”. I can have it shorten the instructions too and break them into easier-to-read bullet points. I also ask it to make ingredient substitutions and other modifications. The other day I gave it a bread recipe and asked it to introduce a cold-proofing step and reformat everything the way I like. It did great. (There's a small scripted sketch of this trick after the list.)

  3. Learning interactively. When I need to absorb a new skill or topic I sometimes do it conversationally with AI. Yes, I can find articles and videos, but then I am stuck with the information they lay out and the pace and order in which they present it. With AI you can stop and ask clarifying questions, or have it skip over the parts you already know. I find this way faster than laborious googling. However, I only trust it for very straightforward topics, like “explain the different kinds of welding and what they are for.” I wouldn’t trust it for more nuanced topics where perspective and opinion come into it. And I’ve learned that it isn’t great at topics where there isn’t enough information out there, like very niche questions about the meta of a certain video game that’s only been out a month.

  4. Speech to text and summarization. AI records all my Zoom meetings for work and gives summaries of what was discussed and next steps. This is always better than nothing. I’m also impressed with how it seems to understand how to discard idle chit chat and only record actual work content. At most it says “the meeting began with coworkers exchanging details from their respective weekends.”
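As an aside, the recipe trick in point 2 is easy to script. A minimal sketch using the openai Python client, where the model name, prompt, and file name are illustrative rather than the commenter's actual workflow:

```python
# Sketch of the recipe-reformatting trick as a script, using the
# openai client library (pip install openai). The model name, prompt,
# and file name are illustrative - not necessarily the commenter's
# actual workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rewrite this recipe so each step includes the exact ingredient "
    "amounts inline (e.g. 'mix in 2 cups of flour'), drop the separate "
    "ingredient table, and use short bullet points:\n\n"
)

with open("recipe.txt") as f:
    recipe = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[{"role": "user", "content": PROMPT + recipe}],
)
print(response.choices[0].message.content)
```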

This kind of hard-and-fast summarization and manipulation of factual text is much easier with AI. Doing my job for me? No. Hovering over my entire computer? No. Writing my emails for me? Fuck off.

The takeaway is that specific tools I can go to when I need them, for point-specific needs, are all I want. I don’t need or want a hovering AI around all the time, and I don’t want whatever tripe Dell can come up with when I can get the best, latest models direct from the leading players.

[-] phil@lymme.dynv6.net 1 points 21 hours ago

Assuming you keep a critical eye on the results, AI can surely be used for some meaningful things like the ways you found - thanks for sharing them. But I would bet that most people will be stuck at the BS-generator level, with its poisonous effects on them and on society at large.

[-] scarabic@lemmy.world 1 points 18 hours ago* (last edited 18 hours ago)

I agree. I share my use cases mostly to put the critical thinking behind them on display. I’m sure the crowd here is very savvy. But among the general public, I agree that many if not most people would be completely seduced by the obsequious & confident tone of the robot. It can do so many things that it becomes tempting to rely on it. You wish it worked better than it does, and if you let yourself get lazy, you can easily slip into trusting it too much.

[-] Lfrith@lemmy.ca 4 points 1 day ago

The extent of my comfort with AI is using it through a website, with interaction limited to copy-paste or uploads - no capabilities running at the system level.

But when it comes to actually running on the hardware and being able to do things by reading what is on the screen or hearing what is said, I don't trust AI to be secure or privacy-respecting. For that type of functionality I'll only trust something I've compiled myself to run locally, as opposed to something provided by corporations that are largely in the business of data collection.

[-] SethTaylor@lemmy.world 28 points 2 days ago* (last edited 2 days ago)

Holy crap that Recall app that "works by taking screenshots" sounds like such a waste of resources. How often would you even need that?

https://www.windowscentral.com/software-apps/windows-11/the-verdict-is-in-windows-recall-is-great-actually

Virtually everything described in this article already exists in some way...

[-] Buddahriffic@lemmy.world 12 points 1 day ago

It's such a stupid approach to the stated problem that I just assumed it was actually meant for something else, and the stated problem was there to justify it. I made the decision to never use Win 11 on a personal machine based on this "feature".

[-] tal@lemmy.today 7 points 1 day ago

So, it's not really a problem I've run into, but I've met a lot of people who have difficulty on Windows understanding where they've saved something, though they do remember having worked on or looked at it at some point in the past.

My own suspicion is that part of this stems from the fact that back in the day, DOS had a filesystem layout not exactly aimed at non-technical users, and Windows tried to avoid this by hiding it and stacking an increasing number of "virtual" interfaces on top that didn't just show the user the filesystem - whether the Start menu, or Windows Explorer and file dialogs offering a variety of things other than just the filesystem to navigate around. The result is that Microsoft has been banging away for much of the lifetime of Windows adding more ways to access files, most of which make it harder to fully understand what is actually going on through the extra layers. But regardless of why, some users do have trouble with it.

So if you can just provide a search that can summon up that document they were working on that had a picture of giraffes, by typing "giraffe" into some search field, maybe that'll do it.
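For contrast, here is roughly what that search looks like without the screenshot approach: index the documents' text directly. A toy sketch in pure standard-library Python; a real tool would add OCR, file-format parsers, and fuzzy or embedding-based matching:

```python
# Toy sketch: find "the document about giraffes" by indexing document
# text directly, no screenshots involved. Pure standard library; a real
# tool would add OCR, file-format parsers, and fuzzy/embedding search.
import os
from collections import defaultdict

index = defaultdict(set)  # word -> set of file paths containing it

def build_index(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".txt", ".md")):
                continue  # toy version: plain-text files only
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for word in f.read().lower().split():
                        index[word.strip(".,!?;:")].add(path)
            except OSError:
                pass  # unreadable file; skip it

def search(word):
    return sorted(index.get(word.lower(), set()))

build_index(os.path.expanduser("~/Documents"))
print(search("giraffe"))  # files mentioning giraffes, wherever they live
```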

[-] manxu@piefed.social 45 points 2 days ago

> Dell is the first Windows OEM to openly admit that the AI PC push has failed. Customers seem uninterested in buying a laptop because of its AI capabilities, likely prioritizing other aspects such as battery life, performance, and display above AI.

Silicon Valley has always had the annoying habit of pushing technology-first products without much consideration of how they would solve real-world problems. It always has, but it's becoming increasingly bad. When Zuck unveiled the Metaverse it was already starting to get ludicrous, but with the AI laptop wave it turned into Onion territory.

[-] Lucidlethargy@sh.itjust.works 1 points 1 day ago

What do you mean? Do you even have ANY foundation to this accusation?

Hold on, I need to turn off my heater. 22211123222234663fffvsnbvcsdfvxdxsssdfgvvgfgg

There it is. The off button. Touch controls are so cool guys.

[-] manxu@piefed.social 1 points 1 day ago

Ha! Enjoy your off button while they still make them. Once our AI Overlords have won the War, you can only politely ask your laptop to please temporarily quiet itself, please and thank you, if it's not too much to ask.

[-] torubrx@piefed.social 26 points 2 days ago

Why not just leave it alone inside a browser tab? If I want AI - and I use it quite a lot - I will go to their website. Don't force it system-wide; it just sucks.

[-] AstralPath@lemmy.ca 17 points 2 days ago

They want their greasy tendrils all up in your PC's guts. Every bit of info flowing in your system can be monetized. All they care about is money and dominance and their "AI" in everyone's devices is their wet dream.

Cancer is preferable to tech bros, as cancer doesn't know it's killing the host. Tech bros know full well their actions are killing the planet and its inhabitants. Their actions are willfully vile and toxic, completely at odds with the needs of humanity.

Don't expect them to ever do the right thing for anyone but themselves.

[-] tal@lemmy.today 45 points 2 days ago* (last edited 2 days ago)

Not the position Dell is taking, but I've been skeptical that building AI hardware directly into laptops specifically is a great idea, unless people have a very concrete goal, like text-to-speech, and existing models to run on it, probably specialized ones. This is not to diminish AI compute elsewhere.

Several reasons.

  • Models for many useful things have been getting larger, and you have a bounded amount of memory in those laptops, which, at the moment, generally can't be upgraded (though maybe CAMM2 will improve the situation and move us back away from soldered memory). Historically, most users did not upgrade memory in their laptop even when they could. Just throwing the compute hardware in there in the expectation that models will come is a bet that the size of the models people might want to use won't get a whole lot larger. This is especially true for the next year or two, since we expect high memory prices, with people probably priced out of sticking very large amounts of memory in laptops.

  • Heat and power. The laptop form factor exists to be portable. They are not great at dissipating heat, and unless they're plugged into wall power, they have sharp constraints on how much power they can usefully use.

  • The parallel compute field is rapidly evolving. People are probably not going to throw out and replace their laptops on a regular basis to keep up with AI stuff (much as laptop vendors might be enthusiastic about this).

I think that a more-likely outcome, if people want local, generalized AI stuff on laptops, is that someone sells an eGPU-like box that plugs into wall power and connects to the laptop via a USB port or some wireless protocol, and the laptop uses it as an AI accelerator. That box can be replaced or upgraded independently of the laptop itself.

When I do generative AI stuff on my laptop, for the applications I use, the bandwidth that I need to the compute box is very low, and latency requirements are very relaxed. I presently remotely use a Framework Desktop as a compute box, and can happily generate images or text or whatever over the cell network without problems. If I really wanted disconnected operation, I'd haul the box along with me.

EDIT: I'd also add that all of this is also true for smartphones, which have the same constraints, and harder limitations on heat, power, and space. You can hook one up to an AI accelerator box via wired or wireless link if you want local compute, but it's going to be much more difficult to deal with the limitations inherent to the phone form factor and do a lot of compute on the phone itself.

EDIT2: If you use a high-bandwidth link to such a local, external box, bonus: you also potentially get substantially-increased and upgradeable graphical capabilities on the laptop or smartphone if you can use such a box as an eGPU, something where having low-latency compute available is actually quite useful.
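To make the compute-box pattern concrete, here is a minimal sketch of the laptop side. It assumes an OpenAI-compatible server (for example llama.cpp's llama-server, or Ollama) running on some other machine; the hostname, port, and model name are placeholders:

```python
# Laptop side of the "compute box" pattern: send prompts to a beefier
# machine over the network. Assumes an OpenAI-compatible server (e.g.
# llama.cpp's llama-server, or Ollama) reachable at a placeholder
# hostname and port; adjust both for a real setup.
import requests

resp = requests.post(
    "http://compute-box:8080/v1/chat/completions",
    json={
        "model": "local",  # single-model servers often accept any name
        "messages": [{"role": "user", "content": "Summarize my notes."}],
    },
    timeout=300,  # latency tolerance is the point: slow links are fine
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

As the comment notes, the bandwidth and latency demands here are so low that even a cell-network link to the box is workable.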

[-] cmnybo@discuss.tchncs.de 16 points 2 days ago

There are a number of NPUs that plug into an m.2 slot. If those aren't powerful enough, you can just use an eGPU.
I would rather not have to pay for an NPU that I'm probably not going to use.

[-] Eternal192@anarchist.nexus 20 points 2 days ago

We should have been given a choice whether we want to use it or not. Them trying to force it on us is why they are getting so much pushback. Let those who want to use it use it, and give those who don't the option to turn it off. It's not rocket science, but they are constantly going:

Tech CEOs: This is our AI, you have to use it!

Consumers: But I don't want to!

Tech CEOs: FUCKING USE IT!!!

and then they whine: "WAAAHHHHH, PEOPLE ARE MEANIES WHO DON'T LIKE OUR AI THAT DOES NOTHING TO IMPROVE THEIR LIVES AND WILL MAKE US MORE MONEY BY LETTING US PUT TARGETED ADS INTO THEIR EYEBALLS, WWWWAAAAAAAAAAHHHHHHHH!!!"

[-] MutantTailThing@lemmy.world 27 points 2 days ago

For me at least, AI reminds me too much of that thrice cursed MS Word paperclip. I did not want it then and I do not want it now.

Also, adding ‘personality’ to pieces of software is cringy at best and downright creepy at worst.

[-] justsomeguy@lemmy.world 20 points 2 days ago

Forget about the personality for a minute. They have a different thing in common. Uselessness. I tried AI for a bunch of general use cases and it almost always fails to satisfy. Either it just can't do the task in the first place or it makes mistakes that then cost too much time to fix.

There are exceptions and specialized models will have their use but it's not the Swiss army knife tool AI companies are promising.

[-] SabinStargem@lemmy.today 12 points 2 days ago

It is going to take at least five years before local AI is user-friendly enough, and performant hardware widespread enough, that ordinary folks would consider buying an AI machine.

I have a top-end DDR4 gaming rig. It takes a long time for a 100b-sized model to produce some roleplaying output - at least forty minutes with my settings, via KoboldCPP with a GGUF. I don't think a typical person would want to wait more than 2 minutes for a good response. So we will need at least DDR6-era devices before it is practical for everyday people.
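That timing is believable from first principles. A back-of-envelope sketch, assuming generation for a dense model is roughly memory-bandwidth bound, since every parameter is read once per token; all numbers are rough assumptions, not benchmarks:

```python
# Back-of-envelope: token generation for a dense model is roughly
# memory-bandwidth bound, since every parameter is read once per token.
# All numbers below are rough assumptions, not benchmarks.
model_params = 100e9      # a "100b" model
bytes_per_param = 0.55    # ~Q4 GGUF quantization, with some overhead
model_bytes = model_params * bytes_per_param  # ~55 GB streamed per token

bandwidths = {
    "dual-channel DDR4": 50e9,   # bytes/sec, typical gaming rig
    "hypothetical DDR6": 200e9,  # speculative future desktop memory
}

for name, bw in bandwidths.items():
    print(f"{name}: ~{bw / model_bytes:.2f} tokens/sec")
# dual-channel DDR4: ~0.91 tokens/sec -> a few hundred tokens takes
# many minutes, which lines up with "a long time" for a response.
```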

[-] lmuel@sopuli.xyz 16 points 2 days ago

A local LLM is still an LLM... I don't think it's gonna be terribly useful no matter how good your hardware is

[-] Taleya@aussie.zone 13 points 2 days ago

You spelled "pisses them off" wrong

[-] WraithGear@lemmy.world 8 points 2 days ago

if “I” were the one confused, then AI would actually be USEFUL.
