[-] self@awful.systems 26 points 8 months ago

Speaking at an event in London on Tuesday, Meta’s chief AI scientist Yann LeCun said that current AI systems “produce one word after the other really without thinking and planning”.

Because they struggle to deal with complex questions or retain information for a long period, they still “make stupid mistakes”, he said.

Adding reasoning would mean that an AI model “searches over possible answers”, “plans the sequence of actions” and builds a “mental model of what the effect of [its] actions are going to be”, he said.

wait, you mean the same models that supposed AI researchers were swearing had “glimmerings of intelligent reasoning” and “a complex world model” really were just outputting the most likely next word for a prompt? the current models are just fancy autocomplete but now that there’s a new product to sell, that one will be the real thing? and of course, the new models are getting pre-announced as revolutionary as interest in this horseshit in general takes a nosedive.

LeCun said Meta was working on AI “agents” that could, for instance, plan and book each step of a journey, from someone’s office in Paris to another in New York, including getting to the airport.

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid? like, there’s apps that do this already. this shit was solved already by application of the least-terrible surviving algorithms from the first AI boom. what the fuck is the point of re-solving travel planning, but now incredibly expensive and you can’t trust the results?

[-] froztbyte@awful.systems 17 points 8 months ago* (last edited 8 months ago)

the thing that bothers me about that LeCun statement is that it's another of those not-even-wrong fuckers with an implicit assumption: that the problem is not that it doesn't have intelligence, just that the intelligence isn't very advanced yet - "oh yeah it just didn't think ahead! that's why foot in mouth! it's like your drunk friend at a party!"

which, y'know, is not the case. but they all fucking speak with that implicit foundation, as though the intelligence is proven fact instead of total suggestion (I wanted to say "conjecture", but that isn't the right word either)

these must be the multi-agent models that AI fans won’t shut the fuck up about now that multi-modal LLMs are here and disappointing. is it just me or does the use case for this sound fucking stupid?

it's also the pitch I keep seeing from a number of places, including that rabbit or whatever the fuck thing? and, frankly, can we not? these goddamn things can barely parse sentences and keep context, and someone wants to tell me that a model use is for it to plan my travel? with visas and flight times and transfers? nevermind all the extra implications of accounting for real-world issues (e.g. political sensitivity), preferences in sight-seeing, data privacy considerations (visiting friends)....

like it's just a gigantic fucking katamari ball of nope

[-] carlitoscohones@awful.systems 13 points 8 months ago

someone wants to tell me that a model use is for it to plan my travel?

I don't think any of these people have ever traveled. Honestly, I used to work for a company where the corporate travel people mostly lived in a small village in Germany, and their recommendations could be insane sometimes, but at least they knew what being a human was like.

this post was submitted on 10 Apr 2024
TechTakes