submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Over half of all tech industry workers view AI as overrated::undefined

[-] kromem@lemmy.world 19 points 1 year ago

In my experience, well over half of tech industry workers don't even understand it.

I was just trying to explain to someone on Hacker News that no, the "programmers" of LLMs do not in fact know what the LLM is doing, because it isn't being programmed directly at all (and even after several rounds of several people explaining, that still doesn't seem to have sunk in).

Even people who understand the tech pretty well more generally are still remarkably misinformed about it in various popular BS ways, such as claiming it's just statistics and a Markov chain, completely unaware of the multiple studies over the past 12 months showing that even smaller toy models are capable of developing abstract world models that can be recovered as linear representations.
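For anyone wondering what "linear representations" means in those studies: researchers fit a simple linear probe on a model's hidden states and check whether it can read out some piece of world state. Here's a minimal sketch of the idea using made-up data (the dimensions, the synthetic "hidden states", and the encoding direction `w_true` are illustrative assumptions, not taken from any of the papers):

```python
# Sketch of a linear probe: if hidden states linearly encode a
# world-state feature, a plain linear model can read it back out.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "hidden states": 64-dim vectors where one hypothetical
# direction `w_true` encodes a binary world-state bit.
dim, n = 64, 2000
w_true = rng.normal(size=dim)
states = rng.normal(size=(n, dim))
labels = (states @ w_true > 0).astype(int)

# Fit the probe with closed-form least squares on +/-1 targets.
w_probe, *_ = np.linalg.lstsq(states, 2 * labels - 1, rcond=None)
accuracy = ((states @ w_probe > 0).astype(int) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")  # near 1.0 when the encoding is linear
```

The real experiments (e.g. probing board-game models for board state) are much more involved, but this is the core move: a linear classifier succeeding on frozen hidden states is evidence the model represents that feature, not just surface statistics.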

It's gotten to the point that unless the thread is explicitly about actual research papers, where explaining nuances seems fitting, I don't even bother trying to educate the average tech commentators regurgitating misinformation anymore. They typically only want to confirm their biases anyway, and have such a poor understanding of the specifics that it's like explaining nuanced aspects of the immune system to anti-vaxxers.

[-] sheogorath@lemmy.world 7 points 1 year ago

I once asked ChatGPT to stack various items, and to my astonishment it had enough world knowledge to know which items to stack to make the most stable structure. Most of the tech workers I know who dismiss LLMs as supercharged autocomplete feel threatened that AI is going to take their jobs in the future.

[-] kromem@lemmy.world 4 points 1 year ago

This was one of the big jumps from GPT-3 to GPT-4.

"Here we have a book, nine eggs, a laptop, a bottle and a nail," researchers told the chatbot. "Please tell me how to stack them onto each other in a stable manner."

GPT-3 got a bit confused here, suggesting the researchers could balance the eggs on top of a nail, and then the laptop on top of that.

"This stack may not be very stable, so it is important to be careful when handling it," the bot said.

But its upgraded successor had an answer that actually startled the researchers, according to the Times.

It suggested they could arrange the eggs in a three-by-three grid on top of the book, so the laptop and the rest of the objects could balance on it.

  • Article (this was originally from Microsoft's "Sparks of AGI" paper)
[-] echodot@feddit.uk 6 points 1 year ago

People just say that it's a bunch of if statements. Those people are idiots, and it's not even worth engaging with them.

The people who say that it's just a text prediction model don't understand the concept of a "simple complex" system: simple rules producing complex behavior. After all, isn't any intelligence basically just a prediction model?
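For contrast, here is roughly what the "it's just statistics" mental model actually amounts to: a first-order Markov chain over words, where the only state is the single previous word. The corpus and words below are made up purely for illustration:

```python
# Toy bigram Markov chain: predict the next word purely from
# co-occurrence counts in a tiny made-up corpus.
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Build next-word tables (a first-order Markov chain).
nxt = {}
for a, b in zip(corpus, corpus[1:]):
    nxt.setdefault(a, []).append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(5):
    if word not in nxt:  # dead end: final word of the corpus
        break
    word = random.choice(nxt[word])  # sample next word from counts
    out.append(word)
print(" ".join(out))
```

A model like this genuinely has no memory beyond the previous token, which is exactly why the probing results showing internal world models are a meaningful rebuttal to the comparison.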

[-] bowcollector@lemmy.world 3 points 1 year ago

can you provide links to the studies?

[-] kromem@lemmy.world 3 points 1 year ago

The first two and last two are entirely focused on the linear representations, the studies cited in point three of the third link have additional information along those lines, and the fourth link is just a fun read.

this post was submitted on 21 Nov 2023
1002 points (100.0% liked)
