763
submitted 1 year ago by yesman@lemmy.world to c/memes@lemmy.ml

I think AI is neat.

[-] DrJenkem@lemmy.blugatch.tube 172 points 1 year ago

They're kind of right. LLMs are not general intelligence, and there's not much evidence to suggest that they will lead to general intelligence. A lot of the hype around AI is manufactured by VCs and companies that stand to make a lot of money off the AI branding.

[-] casmael@lemm.ee 38 points 1 year ago

Yeah, this sounds about right. What was OP implying? I'm a bit lost.

[-] ricecake@sh.itjust.works 56 points 1 year ago

I believe they were implying that a lot of the people who say "it's not real AI, it's just an LLM" are simply parroting what they've heard.

Which is a fair point, because AI has never meant "general AI"; it's an umbrella term for a wide variety of intelligence-like tasks performed by computers.
Autocorrect on your phone is a type of AI: it compares the words you type against a database of known words, measures how far what you typed is from those words via a "typo distance", and adds new words to its database when you overrule it so it doesn't make the same mistake twice.
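
To make that concrete, here's a minimal sketch of that kind of autocorrect in Python. The word list is invented and the Levenshtein edit distance stands in for the "typo distance"; no claim this is how any particular phone implements it:

```python
def typo_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions, or substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

known_words = {"motorcycle", "vehicle", "intelligence"}  # toy database

def autocorrect(word: str) -> str:
    """Suggest the known word closest to what was typed."""
    if word in known_words:
        return word
    return min(known_words, key=lambda w: typo_distance(word, w))

def overrule(word: str) -> None:
    """User rejected the suggestion: learn the new word."""
    known_words.add(word)

print(autocorrect("vehicel"))  # -> "vehicle"
```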

It's like saying a motorcycle isn't a real vehicle because a real vehicle has two wings and a roof and flies through the air carrying hundreds of people.

[-] Redacted@lemmy.world 21 points 1 year ago

I believe OP is attempting to take on an army of straw men in the form of a poorly chosen meme template.

[-] Feathercrown@lemmy.world 14 points 1 year ago

No, people say this constantly; it's not just a strawman.

[-] c0mbatbag3l@lemmy.world 13 points 1 year ago

People who don't understand or use AI think it's less capable than it is, claim it's not AGI (which no one was saying anyway), and try to make it seem less valuable because it's "just using datasets to extrapolate; it doesn't actually think."

Guess what you're doing right now when you "think" about something? That's right: you're calling up the thousands of experiences that make up your "training data" and using them to extrapolate what actions you should take based on that data.

You know how to parallel park because you've assimilated road laws, your muscle memory, and the knowledge of your car's wheelbase into a single action. AI just doesn't have sapience and therefore cannot act without input, but the process by which it does things is functionally similar to how we make decisions; the difference is that its training data gets input within seconds as opposed to being built over a lifetime.

[-] DrJenkem@lemmy.blugatch.tube 17 points 1 year ago

People who aren't programmers, haven't studied computer science, and don't understand LLMs are much more impressed by LLMs.

[-] Feathercrown@lemmy.world 16 points 1 year ago* (last edited 1 year ago)

That's true of any technology. As someone who is a programmer, has studied computer science, and does understand LLMs: this represents a massive leap in capability. Is it AGI? No. Is it a potential paradigm shift? Yes. This isn't pure hype like crypto was; there is a core of utility here.

[-] Ragdoll_X@lemmy.world 19 points 1 year ago* (last edited 1 year ago)

Depends on what you mean by general intelligence. I've seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.
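
For illustration of how simple "AI" can be: a from-scratch k-nearest-neighbor classifier is a few lines of Python (the points and labels here are made up):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Label `query` by majority vote among the k closest labeled
    points. `train` is a list of (point, label) pairs."""
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "cat"), ((1, 0), "cat"), ((9, 9), "dog"), ((8, 9), "dog")]
print(knn_classify(train, (1, 1)))  # -> "cat"
```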

Wikipedia gives two definitions of AGI:

> An artificial general intelligence (AGI) is a hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.

If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning. The question then is how general LLMs have to be before we consider them AGIs, and there's no hard metric for that.
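
For what it's worth, "few-shot learning" here usually just means putting a handful of worked examples in the prompt, with no weight updates at all. A hypothetical sketch (the task and examples are invented):

```python
# Hypothetical few-shot prompt: the "training" is just worked
# examples placed in the context window; no weights are updated.
examples = [
    ("I loved this film", "positive"),
    ("Total waste of time", "negative"),
]

def build_prompt(examples, new_input):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt(examples, "Surprisingly good"))
# The finished prompt is sent to the LLM, which (hopefully)
# continues the pattern with "positive".
```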

I can't pass the bar exam like GPT-4 did, and it also has a lot more general knowledge than me. Sure, it gets stuff wrong, but so do humans. We can interact with physical objects in ways that GPT-4 can't, but it is catching up. Plus Stephen Hawking couldn't move the same way that most people can either, and we certainly wouldn't say that he didn't have general intelligence.

I'm rambling but I think you get the point. There's no clear threshold or way to calculate how "general" an AI has to be before we consider it an AGI, which is why some people argue that the best LLMs are already examples of general intelligence.

[-] DrJenkem@lemmy.blugatch.tube 11 points 1 year ago

> Depends on what you mean by general intelligence. I've seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.

Well, I mean the ability to solve problems we don't already have solutions to. Can it cure cancer? Can it solve the P vs NP problem?

And by the way, Wikipedia tags that second definition as dubious, as it's the definition put forth by OpenAI, which, again, has a financial incentive to make us believe LLMs will lead to AGI.

Not only has it not been proven whether LLMs will lead to AGI, it hasn't even been proven that AGIs are possible.

> If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning.

No it can't. If the task requires the LLM to solve a problem that hasn't been solved before, it will fail.

> I can't pass the bar exam like GPT-4 did

Exams are often bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.

Ask an LLM to solve a problem without a known solution and it will fail.

> We can interact with physical objects in ways that GPT-4 can't, but it is catching up. Plus Stephen Hawking couldn't move the same way that most people can either, and we certainly wouldn't say that he didn't have general intelligence.

The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.

load more comments (5 replies)
load more comments (1 replies)
[-] force@lemmy.world 12 points 1 year ago* (last edited 1 year ago)

It depends a lot on how we perceive "intelligence". It's a much vaguer term than most, so people have very different views of it. Some people take it to mean that the response to stimuli and the output (language, art, or any other form) are indistinguishable from a human's. But many people would also agree that whales and dolphins have "intelligence" equal or superior to humans. The term is too vague to pin down with confidence, and more importantly, people use it to mean completely different concepts: "intelligence" as a measurable property of how quickly or efficiently a being can learn or use knowledge, or more vaguely its "capacity to reason"; "intelligence" as the idea of "consciousness" in general; "intelligence" as the amount of knowledge or experience one currently has or can memorize; etc.

In computer science, "artificial intelligence" has always simply referred to a program making decisions based on input. There was never any bar to reach for how "complex" it had to be to be considered AI. That's why Minecraft zombies, shitty FPS bots, and simple algorithms made to beat table games are all "AI", even though they're clearly not all that smart and don't even "learn". The sketch below illustrates the point.
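
A complete "AI" for a table game really can be a couple dozen lines; here's a sketch of minimax for tic-tac-toe, one of those simple algorithms that counts as AI without learning anything:

```python
def winner(board: str):
    """board is 9 chars of 'X', 'O', or ' '; return the winning mark, if any."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board: str, player: str):
    """Negamax search: returns (score, move) from `player`'s view,
    where +1 is a forced win, 0 a draw, -1 a forced loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, c in enumerate(board) if c == " "]
    if not moves:
        return 0, None  # board full: draw
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        score, _ = best_move(board[:m] + player + board[m+1:], opponent)
        if -score > best[0]:  # opponent's loss is our gain
            best = (-score, m)
    return best

print(best_move("XX OO    ", "X"))  # -> (1, 2): X completes the top row
```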

load more comments (1 replies)
load more comments (20 replies)
[-] poke@sh.itjust.works 68 points 1 year ago

Knowing that LLMs are just "parroting" is one of the first steps to implementing them in safe, effective ways where they can actually provide value.

[-] Kushia@lemmy.ml 11 points 1 year ago

LLMs definitely provide value; it's just debatable whether they're real AI or not. I believe they're going to be shoved into a round hole regardless.

[-] antidote101@lemmy.world 61 points 1 year ago

I think LLMs are neat, and Teslas are neat, and HHO generators are neat, and aliens are neat...

...but none of them live up to all of the claims made about them.

[-] WallEx@feddit.de 37 points 1 year ago

They're predicting the next word without any concept of right or wrong; there is no intelligence there. And it shows the second they start hallucinating.
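
A toy version of "predicting the next word" fits in a few lines. This bigram sketch (corpus invented) has no notion of truth at all, only frequency, which is the point:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which -- a crude stand-in for what an
# LLM learns at enormously larger scale.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word: str, n: int = 5) -> str:
    """Greedily emit the most frequent next word, n times.
    Nothing here checks whether the output is true or sensible."""
    out = [word]
    for _ in range(n):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the cat" -- fluent-ish nonsense
```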

[-] LarmyOfLone@lemm.ee 21 points 1 year ago

They're a bit like taking just the creative-writing center of a human brain: one part of a human mind, without sentience, understanding, or long-term memory. Just the creative part, even though they're mediocre at being creative atm. But it's shocking, because we kind of expected that to be the last part of the human mind we'd be able to replicate.

Put enough of these "parts" of a human mind together and you might get a proper sentient mind sooner than later.

[-] WallEx@feddit.de 14 points 1 year ago

Exactly. I'm not saying it's not impressive or even not useful, but one should understand the limitations. For example, you can't reason with an LLM in the sense that you could convince it of your reasoning. It will only respond the way most people in its dataset would have responded (obviously simplified).

EXACTLY. There is no problem solving either (except calculating the most probable text).

Even worse, some of my friends say that Alexa is AI.

[-] Wirlocke 35 points 1 year ago* (last edited 1 year ago)

The way I've come to understand it is that LLMs are intelligent in the same way your subconscious is intelligent.

It works off of kneejerk "this feels right" logic; that's why AI images look like dreams: realistic until you examine them further.

We all have kneejerk responses to situations and questions, but the difference is that we filter them through our conscious mind, applying long-term thinking and our own choices to the mix.

LLMs just keep getting better at the "this feels right" stage, which is why completely novel or niche situations can still trip them up: they haven't developed enough "reflexes" for those problems yet.

[-] fidodo@lemmy.world 17 points 1 year ago

LLMs are intelligent in the same way books are intelligent. What makes LLMs really cool is that instead of searching at the book or page granularity, they search at the word granularity. They're not thinking, but all the thinking was done for them already by humans, who encoded their intelligence into words. It's still incredibly powerful; at its best it could mean no task ever needs to be performed by a human twice, which would have immense efficiency gains for anything information-based.

[-] Starkstruck@lemmy.world 32 points 1 year ago

I feel like our current "AIs" are like the Virtual Intelligences in Mass Effect. They can perform some tasks and hold a conversation, but they aren't actually "aware". We're still far off from a true AI like the Geth or EDI.

[-] kibiz0r@midwest.social 24 points 1 year ago

LLMs are a step towards AI in the same sense that a big ladder is a step towards the moon.

[-] pachrist@lemmy.world 24 points 1 year ago

If an LLM is just regurgitating information in a learned pattern and therefore it isn't real intelligence, I have really bad news for ~80% of people.

[-] Gormadt 24 points 1 year ago

I know a few people who would fit that definition

[-] agitatedpotato@lemmy.world 17 points 1 year ago

Like almost all politicians?

[-] KeenFlame@feddit.nu 24 points 1 year ago

I've been destroyed for this opinion here. Not many practitioners here, just laymen and mostly techbros in this field. But maybe I haven't found the right node?

I'm into local diffusion models and open-source LLMs only, not the megacorp stuff.

[-] webghost0101@sopuli.xyz 15 points 1 year ago* (last edited 1 year ago)

If anything, people really need to start experimenting beyond talking to it like it's human, or in a few years we'll end up with a huge AI-illiterate population.

I've had someone stubbornly fight me, calling local LLMs "an overhyped downloadable chatbot app" and saying the people on fossai are just a bunch of AI-worshipping fools.

I was like: tell me you know absolutely nothing about what you're talking about while pretending to know everything.

[-] KeenFlame@feddit.nu 9 points 1 year ago

But the thing is, it's really fun and exciting to work with, and the open-source community is extremely nice and helpful, one of the most non-toxic fields I've dabbled in! It's very fun to test parameters and tools and write code chains to try different stuff, and it's come a long way. It's rewarding too, because you get really fun responses.

[-] Redacted@lemmy.world 24 points 1 year ago

I fully back your sentiment, OP; you understand as much about the world as any LLM out there, and don't let anyone suggest otherwise.

Signed, a "contrarian".

[-] Adalast@lemmy.world 23 points 1 year ago

Ok, but so do most humans? So few people actually have true understanding of topics. They parrot the parroting that they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary "correct" way to answer that question, and it is still an incorrect parroting of something they have been told.

Ask yourself: what do you actually understand? On how many topics could you be asked "why?" repeatedly and actually answer more than 4 or 5 times? I know I have a few. I also know which topics I couldn't do that with.

[-] Blackmist@feddit.uk 12 points 1 year ago

I feel that knowing what you don't know is the key here.

An LLM doesn't know what it doesn't know, and that's where what it spouts can be dangerous.

Of course, there are a lot of actual people that applies to as well. And sadly, they're often in positions of power.

[-] someacnt_@lemmy.world 20 points 1 year ago

Keep seething, OpenAI's LLMs will never achieve AGI that will replace people

[-] Gabu@lemmy.ml 12 points 1 year ago

That was never the goal... You might as well say that a bowling ball will never be effectively used to play golf.

[-] Rozauhtuno 14 points 1 year ago

> That was never the goal…

Most CEOs seem not to have gotten the memo...

[-] JustJack23@slrpnk.net 9 points 1 year ago

I agree, but it's so annoying when you work in IT and your non-IT boss thinks AI is the solution to every problem.

At my previous job, I had to explain to my boss at least once a month why we couldn't have AI diagnosing patients (at a dental clinic), reading scans, or proposing dental plans... It was maddening.

[-] inb4_FoundTheVegan@lemmy.world 18 points 1 year ago

As someone who loves Asimov and has read nearly all of his work:

I absolutely bloody hate calling LLMs AI. Without a doubt they are neat, but they are absolutely nothing in the ballpark of AI, and that's okay! They weren't trying to make a synthetic brain; it's just the cultural narrative I'm most annoyed at.

[-] Xeroxchasechase@lemmy.world 16 points 1 year ago

You've just described most people...

[-] naevaTheRat@lemmy.dbzer0.com 9 points 1 year ago

So, super-informed OP, tell me how they work. Technically, not in CEO press-release speak. Explain the theory.

[-] PM_ME_VINTAGE_30S@lemmy.sdf.org 10 points 1 year ago

I'm not OP, and frankly I don't really disagree with the characterization of ChatGPT as "fancy autocomplete". But...

I'm still in the process of reading this cover-to-cover, but Chapter 12.2 of Deep Learning: Foundations and Concepts by Bishop and Bishop explains how natural language transformers work, and then has a short section about LLMs. All of this is in the context of a detailed explanation of the fundamentals of deep learning. The book cites the original papers from which it is derived, most of which are on ArXiv. There's a nice copy on Library Genesis. It requires some multi-variable probability and statistics, and an assload of linear algebra, reviews of which are included.
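
To give a taste of what those chapters build up to: the core operation of a transformer is scaled dot-product attention, which is small enough to sketch in NumPy (random toy inputs; no claim this matches the book's notation):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- each output row is a
    weighted mix of the value rows, weighted by query-key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, d_k = 8
print(attention(Q, K, V).shape)  # (4, 8)
```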

So obviously when the CEO explains their product they're going to say anything to make the public accept it. Therefore, their word should not be trusted. However, I think that when AI researchers talk simply about their work, they're trying to shield people from the mathematical details. Fact of the matter is that behind even a basic AI is a shitload of complicated math.

At least from personal experience, people tend to get really aggressive when I try to explain math concepts to them. So they're probably assuming based on their experience that you would be better served by some clumsy heuristic explanation.

IMO it is super important for tech-inclined people interested in making the world a better place to learn the fundamentals and limitations of machine learning (what we typically call "AI") and bring its benefits to the common people. Clearly, these technologies are a boon for the wealthy and powerful, and, like always, have been used to fuck over everyone else.

IMO, as it stands, AI as a technology has inherent patterns that induce centralization of power: the need for massive datasets (particularly for LLMs), and the need for mathematical fundamentals that only those wealthy enough to stay in school long enough can learn. However, I still think that we can leverage AI technologies for the common good, particularly by developing open-source alternatives, encouraging the use of open and ethically sourced datasets, and distributing the computing load so that people who can't afford a fancy TPU can still use AI somehow.

I wrote all this because I think that people dismiss AI because it is "needlessly" complex and therefore bullshit. In my view, it is necessarily complex because of the transformative potential it has. If and only if you can spare the time, then I encourage you to learn about machine learning, particularly deep learning and LLMs.
