This is the technology worth trillions of dollars huh

[-] Djehngo@lemmy.world 60 points 1 month ago

The letters that make up words are a common blind spot for AIs: since they are trained on strings of tokens (roughly words), they don't have a good concept of which letters are inside those words or what order they are in.
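A toy sketch of the idea (the vocabulary and greedy splitting here are made up for illustration, not any real model's tokenizer): the model sees subword pieces, so a question about individual letters asks about information the input representation doesn't directly contain, while counting letters is trivial at the character level.

```python
# Hypothetical subword vocabulary for illustration only.
toy_vocab = ["straw", "berry"]

def toy_tokenize(word, vocab):
    """Greedily split a word into known subword pieces,
    falling back to single characters for unknown spans."""
    tokens = []
    while word:
        for piece in vocab:
            if word.startswith(piece):
                tokens.append(piece)
                word = word[len(piece):]
                break
        else:
            tokens.append(word[0])
            word = word[1:]
    return tokens

print(toy_tokenize("strawberry", toy_vocab))  # ['straw', 'berry']
# The model sees two opaque tokens; the letters inside them are not
# part of its input. Plain code sees characters directly:
print("strawberry".count("r"))  # 3
```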

[-] PixelatedSaturn@lemmy.world 12 points 1 month ago

I find it bizarre that people point to these obvious cases as proof the tech is worthless. Like saying cars are worthless because they can't go under water.

[-] skisnow@lemmy.ca 88 points 1 month ago* (last edited 1 month ago)

Not bizarre at all.

The point isn't "they can't do word games therefore they're useless", it's "if this thing is so easily tripped up on the most trivial shit that a 6-year-old can figure out, don't be going round claiming it has PhD level expertise", or even "don't be feeding its unreliable bullshit to me at the top of every search result".

[-] PixelatedSaturn@lemmy.world 10 points 1 month ago

I don't want to defend AI again, but it's a technology; it can do some things and can't do others. By now this should be obvious to everyone, except the people who believe everything commercials tell them.

[-] kouichi@ani.social 23 points 1 month ago

How many people do you think know that AIs are "trained on tokens", and understand what that means? It's clearly not obvious to those who don't, which are roughly everyone.

[-] PixelatedSaturn@lemmy.world 4 points 1 month ago

You don't have to know about tokens to see what ai can and cannot do.

[-] huppakee@feddit.nl 9 points 1 month ago

Go to an art museum and somebody will say 'my 6 year old can make this too'. In my view this is a similar fallacy.

[-] PixelatedSaturn@lemmy.world 1 points 1 month ago

That makes no sense. That has nothing to do with it. What are you on about.

That's like watching tv and not knowing how it works. You still know what to get out of it.

[-] sqgl@sh.itjust.works 18 points 1 month ago* (last edited 1 month ago)

358 instances (so far) of lawyers in Australia using AI evidence which "hallucinated".

And this week one was finally punished.

[-] PixelatedSaturn@lemmy.world 4 points 1 month ago

Ok? So, what you are saying is that some lawyers are idiots. I could have told you that before ai existed.

[-] Aceticon@lemmy.dbzer0.com 8 points 1 month ago* (last edited 1 month ago)

It's not the AIs that are crap, it's what they've been sold as capable of doing, and the reliability of their results, that's massively disconnected from reality.

The crap is what most of the Tech Investor class has pushed to the public about AI.

It's thus not at all surprising that many who work, or manage work, in areas where precision and correctness are essential have been deceived into thinking AI can do much of the work for them, when it turns out AI can't really do it because of those precision and correctness requirements it simply cannot meet.

This will hit hardest those who are not Tech experts, such as lawyers, but even some supposed Tech experts (such as some programmers) have been swindled in this way.

There are many great uses for AI, especially stuff other than LLMs, in areas where false positives or false negatives are no big deal, but that's not where the Make Money Fast slimy salesmen are pushing it.

[-] PixelatedSaturn@lemmy.world 1 points 1 month ago

I think people today, after a year of experience with AI, know its capabilities reasonably well. My mother is 73 and it's been a while since she stopped joking about the silly or wrong things AI wrote to her, so people using computers at their jobs should be much more aware.

I agree that LLMs are good at some things. They are great tools for what they can do. Let's use them for those things! I mean, even programming has benefitted a lot from this, especially in education, junior level stuff, prototyping, ...

When using any product, a certain responsibility falls on the user. You can't blame technology for what stupid users do.

[-] sqgl@sh.itjust.works 5 points 1 month ago* (last edited 1 month ago)

I recommended to one person (who I didn't know well) that she use ChatGPT to correct her grammar. It is great for that.

However she then paid for a subscription because she likes the "conversations". Am feeling guilty now. Better check on her that she isn't losing the plot.

[-] 1rre@discuss.tchncs.de 7 points 1 month ago

A six year old can read and write Arabic, Chinese, Ge'ez, etc., and yet most people with PhD-level expertise probably can't, and it's probably useless to them. LLMs can do this too. You can count the number of letters in a word, but so can a program written in a few hundred bytes of assembly. It's completely pointless to make LLMs do that, as it'd just make them way less efficient than they need to be while adding nothing useful.

[-] skisnow@lemmy.ca 21 points 1 month ago

LOL, it seems like every time I get into a discussion with an AI evangelical, they invariably end up asking me to accept some really poor analogy that, much like an LLM's output, looks superficially clever at first glance but doesn't stand up to the slightest bit of scrutiny.

[-] 1rre@discuss.tchncs.de 2 points 1 month ago

It's more that the only way to get some anti-AI crusader to accept that there are some uses for it is to put it in an analogy that they have to actually process, rather than spitting out an "ai bad" kneejerk.

I'm probably far more anti AI than average, for 95% of what it's pushed for it's completely useless, but that still leaves 5% that it's genuinely useful for that some people refuse to accept.

[-] abir_v@lemmy.world 6 points 1 month ago

I feel this. In my line of work I really don't like using them for much of anything (programming ofc, like 80% of Lemmy users) because it gets details wrong too often to be useful and I don't like babysitting.

But when I need a logging message, or to return an error, it's genuinely a time saver. It's good at pretty well 5%, as you say.

But using it for art, math, problem solving, any of that kind of stuff that gets touted around by the business people? Useless, just fully fuckin useless.

[-] 1rre@discuss.tchncs.de 1 points 1 month ago* (last edited 1 month ago)

I don't know about "art". One use of AI image generation is replacing stock images and erotic photos, which frankly I don't have a huge issue with, as they're both at least semi-exploitative industries in many ways anyway, and you just need something that's good enough.

Obviously that doesn't extend to things a reasonable person would consider art, but business majors and tech bros keep rebranding something shitty to position it as a competitor to, or in the same class as, something it so obviously isn't.

[-] abir_v@lemmy.world 1 points 1 month ago

Yeah - I have seen firsthand business majors I work with try to pitch an AI-generated song as our new marketing jingle. It was neither good nor catchy for marketing purposes, but business ghouls hear something that sounds close enough to something someone put real effort into and think that's the hard part sorted.

[-] TempermentalAnomaly@lemmy.world 5 points 1 month ago

It's amazing that if you acknowledge that:

  1. AI has some utility and
  2. The (now tiresome and sloppy) tests they're using don't negate 1

You are now an AI evangelist. Just as importantly, #1 doesn't justify the level of investment in AI. And when that realization hits business America, a correction will happen, and the people who will be affected aren't the well off but the average worker. The gains are for the few, the loss for the many.

[-] Jomega@lemmy.world 4 points 1 month ago

it's more that the only way to get some anti AI crusader that there are some uses for it

Name three.

[-] 1rre@discuss.tchncs.de 3 points 1 month ago

I'm going to limit to LLMs as that's the generally accepted term and there's so many uses for AI in other fields that it'd be unfair.

  1. Translation. LLMs are pretty much perfect for this.

  2. Triaging issues for support. They're useless for coming to solutions, but as good as humans, without the wait, at sending people to the correct department to deal with their issues.

  3. Finding and fixing issues with grammar. Spelling is something that can be caught by spell-checkers, but grammar is more context-aware, another thing that LLMs are pretty much designed for, and useful for people writing in a second language.

  4. Finding starting points to research deeper. LLMs have a lot of data about a lot of things, so can be very useful for getting surface level information eg. about areas in a city you're visiting, explaining concepts in simple terms etc.

  5. Recipes. LLMs are great at saying what sounds right, so for cooking (not so much baking, but it may work) they're great at spitting out recipes, including substitutions if needed, that go together without needing to read through how someone's grandmother used to do xyz unrelated nonsense.

There's a bunch more, but these were the first five that sprung to mind.

[-] voronaam@lemmy.world 8 points 1 month ago
  1. Translation. Only works for unified technical texts. The older non-LLM translation is still better for any general text, and human translation for any fiction is a must. Case in point: try to translate the Severance TV show transcript to another language. The show makes heavy use of "Innie/Outie" language that does not exist in modern English. LLMs fail to translate that - a human translator would be able to find a proper pair of words in the target language.

  2. Triaging issues for support. This one is a double-edged sword. Sure, you can triage issues faster with an LLM, but other people can also write issues faster with their LLMs, and they are winning that race. Overall, LLMs are a net negative on your triage cost as a business, because while you can process each issue faster than before, you are also getting a way higher volume of them.

  3. Grammar. It fails at that. I asked an LLM about "fascia treatment", but of course I misspelled "fascia". The "PhD-level" LLM failed to recognize the typo and gave me a long answer about different kinds of "facial treatment", even though the mistake would have been obvious to any human. Meaning, it only corrects grammar properly when the words it is working on are simple and trivial.

  4. Starting points for deeper research. So was the web search. No improvement there. Exactly on-par with the tech from two decades ago.

  5. Recipes. Oh, you stumbled upon one of my pet peeves! Recipes are generally in the gutter on the textual Internet now. Somehow a wrong recipe got into LLM training for a few things, and now those mistakes are multiplied all over the Internet! You would not know the mistakes if you had not cooked/baked the thing previously. The recipe database was one of the early use cases for personal computers back in the 1990s, and it is one of the first to fall prey to "innovation". The recipes online are so bad that you need an LLM to distill them back into manageable instructions. So, LLMs in your example are great at solving the problem they created in the first place! You would not need an LLM to get cooking instructions out of a 1990s database, but early text generation AIs polluted this section of the Internet so much that you need the next generation of AI to unfuck it. Tech being great at solving the problem it created in the first place is not so great if you think about it.

[-] 1rre@discuss.tchncs.de 1 points 1 month ago* (last edited 1 month ago)

You're bringing up edge cases for #1; it should be replacing Google Translate and basic human translation, e.g. allowing people to understand posts online or communicate textually with people with whom they don't share a common language. Using it for anything high stakes or for legal documents is asking for trouble though.

For 2, it's not for AIs finding issues, it's for people wanting to book a flight, or seek compensation for a delayed flight, or find out what meals will be served on their flight. Some people prefer to use text or voice communication over a UI, and this makes it easier to provide.

For 3, grammar and spelling are different. I said it wasn't useful for spellcheck, but even then if you give it the right context it may or may not catch it. I was referring more to word order and punctuation positioning.

For 4, yeah, for me it's on par in terms of results, but much, much faster, especially when asking followup questions or specifying constraints. A lot of people aren't search engine power users though, so they will find it significantly easier, faster and better than conventional search, without having to manage tabs or keep track of what they've seen beyond scrolling back up in the conversation.

For 5, recipes have been in the gutter for a decade or more now; SEO came before LLMs. But yeah, you've actually caught on to an obvious #6 I missed here: text summarisation...

What I'm getting at overall, though, is that you're not considering how tech-savvy the average person is. The more tech-savvy you are, the less useful these tools seem: you're more aware of their weaknesses, and you benefit less from the speedup by simplification they bring. This does make AI's shortcomings more dangerous, but as the tech matures one would hope they become common knowledge.

[-] voronaam@lemmy.world 3 points 1 month ago

I think you are correct at the main point:

you’re not considering how tech-savvy the average person is

I am actually having a hard time understanding where all of this hype is coming from. The first time I saw AI solve a problem better than a human was back in 1996, and I have used various generations of AI tools ever since. LLMs are fun, but it is not like they are that much different from the other AI tools before them. Every time a new AI technology comes around, I find a use case for it in my own flow. LLMs have their uses as well. But I am not trying to solve ALL the problems with the new tech.

I do not understand "the average person". And I guess I never will.

[-] Jomega@lemmy.world 8 points 1 month ago

Right, except they suck at all of those things. Especially the last one. Unless you think glue is an acceptable pizza topping.

[-] 1rre@discuss.tchncs.de 2 points 1 month ago

Nice, here's a gold star for finding one case of it doing something wrong. I'll call the CEO of AI and tell them to call it off, it's a good thing humans have never said anything like that!

[-] Jomega@lemmy.world 6 points 1 month ago

Bruh, you were the one that picked the examples. If you had a better argument you should have used that one instead.

[-] 1rre@discuss.tchncs.de 1 points 1 month ago

And no matter what I picked, you'd reject them because you're not actually considering them, you're just either a troll, a contrarian or a luddite.

[-] Jomega@lemmy.world 4 points 1 month ago

Riiiiight. Everyone who disagrees with you is an evil scary luddite. Sure fam.

[-] 1rre@discuss.tchncs.de 1 points 1 month ago

Who said you were scary?

Frankly I pity you more than anything.

[-] echodot@feddit.uk 10 points 1 month ago

So if the AI can't do it, then that's just proof that the AI is too smart to do it? That's your argument, is it? Nah, it's just crap.

You think just because you attached it to an analogy, that makes it make sense. That's not how it works. Look, I can do it too:

My car is way too technologically sophisticated to be able to fly, therefore AI doesn't need to be able to work out how many Rs are in "strawberry".

See how that made literally no sense whatsoever.

[-] 1rre@discuss.tchncs.de 2 points 1 month ago

Except you're expecting it to do everything. Your car is too "technically advanced" to walk on the sidewalk, but wait, you can do that anyway and don't need to reinvent your legs.

[-] knatschus@discuss.tchncs.de 20 points 1 month ago

Then why is Google using it for question like that?

Surely it should be advanced enough to realise its weakness with this kind of question and just not give an answer.

[-] PixelatedSaturn@lemmy.world 14 points 1 month ago* (last edited 1 month ago)

They are using it for every question. It's pointless. The only reason they are doing it is to blow up their numbers.

... they are trying to stay in front, so that some future AI search doesn't capture their market share. It's a safety play, even if it's not working for all types of questions.

[-] echodot@feddit.uk 12 points 1 month ago

Well it also can't code very well either

[-] figjam@midwest.social 7 points 1 month ago

Understanding the bounds of tech makes it easier for people to gauge its utility. The only people who desire ignorance are those who profit from it.

[-] PixelatedSaturn@lemmy.world 4 points 1 month ago

Sure. But you can literally test almost all frontier models for free. It's not like there is some conspiracy or secret. Even my 73 year old mother uses it and knows it's general limits.

[-] FishFace@lemmy.world 1 points 1 month ago

Saying "it's worth trillions of dollars huh" isn't really promoting that attitude.

[-] EnsignWashout@startrek.website 2 points 1 month ago* (last edited 1 month ago)

I find it bizarre that people find these obvious cases to prove the tech is worthless. Like saying cars are worthless because they can't go under water.

This reaction is because conmen are claiming that current generations of LLM technology are going to remove our need for experts and scientists.

We're not demanding submersible cars; we're just laughing at the people paying top dollar for the latest electric car while planning an ocean cruise.

I'm confident that there's going to be a great deal of broken... everything...built with AI "assistance" during the next decade.

[-] PixelatedSaturn@lemmy.world 1 points 1 month ago

That's not what you are doing at all. You are not laughing. Anti ai people are outraged, full of hatred and ready to pounce on anyone who isn't as anti as they are. It's a super emotional issue, especially on fediverse.

You may be confident, because you probably don't know how software is built. Nobody is going to just abandon all the experience they have, vibe code something and release whatever. That's not how it works.

[-] EnsignWashout@startrek.website 1 points 1 month ago

because you probably don't know how software is built.

Oh shit. Nevermind then.

[-] mrductape@eviltoast.org 1 points 1 month ago

Well, technically cars can go underwater. They just cannot get out, because they stop working.

[-] PixelatedSaturn@lemmy.world 1 points 1 month ago

Intentionally missing the point is not an argument in itself.

[-] azertyfun@sh.itjust.works 3 points 1 month ago

It's very funny that you can get ChatGPT to spell out the word (making each letter an individual token) and still be wrong.

Of course it makes complete sense when you know how LLMs work, but this demo does a very concise job of short-circuiting the cognitive bias that talking machine == thinking machine.

this post was submitted on 11 Sep 2025
876 points (100.0% liked)

Technology
