submitted 1 week ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] adarza@lemmy.ca 313 points 1 week ago

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

nothing to do with actual capabilities... just the ability to make piles and piles of money.

[-] floofloof@lemmy.ca 98 points 1 week ago

The same way these capitalists evaluate human beings.

[-] LostXOR@fedia.io 48 points 1 week ago

Guess we're never getting AGI then; there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

[-] hemmes@lemmy.world 12 points 1 week ago

AI (LLM software) is not a bubble. It's been effectively implemented as a utility framework across many platforms, and most of those platforms are using OpenAI's models. I don't know when or if that'll make OpenAI $100 billion, but it's not a bubble; this is not the .com situation.

[-] lazynooblet@lazysoci.al 65 points 1 week ago

The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.

LLMs have their uses, but right now companies are pushing them into every area to see what sticks.

[-] Benjaben@lemmy.world 22 points 1 week ago

Not the person you replied to, but I think you're both "right". The ridiculous hype bubble (I'll call it that for sure) put "AI" everywhere, and most of those are useless gimmicks.

But there's also already uses that offer things I'd call novel and useful enough to have some staying power, which also means they'll be iterated on and improved to whatever degree there is useful stuff there.

(And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)

[-] Auli@lemmy.ca 22 points 1 week ago

I think people misunderstand what a bubble is. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn't mean AI isn't useful, just that most of the chaff will disappear.

[-] hemmes@lemmy.world 8 points 1 week ago

To each his own, but I use Copilot and the ChatGPT app to positive effect on a daily basis. The Copilot integration into our SharePoint files is extremely helpful: I'm able to surface data that would not show up in a standard search over file names and indexed content.

[-] NotSteve_@lemmy.ca 27 points 1 week ago

That's an Onion level of capitalism

[-] drmoose@lemmy.world 20 points 1 week ago* (last edited 1 week ago)

The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.

[-] echodot@feddit.uk 14 points 1 week ago

Right, but that's not interesting to anyone but themselves. So why call it AGI? Why not just say that once the company has made over X amount of money, it gets split off into a separate company? Why lie and claim you've developed something you might not actually have developed?

[-] drmoose@lemmy.world 7 points 1 week ago* (last edited 1 week ago)

Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas, it doesn't have anything to do with "AGI". For context, Apple's net income was about $90 billion this year.

I've listened to enough interviews to know that all of the AI leaders want the holy-grail title of "inventor of AGI" more than anything else, so I don't think the definition will ever be settled collectively until something exists that is so mind-blowing it renders the definition moot either way.

[-] Mikina@programming.dev 189 points 1 week ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before, and it can't even get its facts straight without bullshitting.
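
What "guess the next most likely token" means mechanically boils down to a loop like this; a toy bigram model stands in for the real network (illustrative only):

```python
from collections import defaultdict

# Toy "language model": bigram counts standing in for a trillion-parameter
# network. The model quality is a joke; the shape of the loop is the point.
corpus = "the cat sat on the mat the cat ate the rat".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    options = counts[prev]
    # "statistical text prediction": pick the most frequent continuation
    return max(options, key=options.get) if options else "the"

tokens = ["the"]
for _ in range(8):
    tokens.append(next_token(tokens[-1]))
print(" ".join(tokens))  # -> the cat sat on the cat sat on the
```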

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

[-] SlopppyEngineer@lemmy.world 38 points 1 week ago

There are already a few papers about diminishing returns in LLMs.

[-] GamingChairModel@lemmy.world 27 points 1 week ago

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes a proof that producing any black-box algorithm that is trained on a finite universe of human outputs to prompts, and that can take in any finite input and produce an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable; they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
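
Just to get a feel for the scale involved, here's a back-of-envelope calculation of my own (not from the paper; the vocabulary size and prompt length are illustrative assumptions):

```python
import math

# Back-of-envelope only (NOT the paper's proof): how many distinct prompts
# an "any finite input" guarantee has to cover. Both numbers below are
# illustrative assumptions, not figures from the paper.
vocab_size = 50_000   # a typical LLM vocabulary, order of magnitude
prompt_len = 100      # a fairly short prompt, measured in tokens

log10_prompts = prompt_len * math.log10(vocab_size)
print(f"possible prompts: ~10^{log10_prompts:.0f}")  # ~10^470
print("atoms in the observable universe: ~10^80")
```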

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

[-] daniskarma@lemmy.dbzer0.com 23 points 1 week ago* (last edited 1 week ago)

What is your brain doing if not statistical text prediction?

The show Westworld portrayed it pretty well. The idea of jumping from text prediction to consciousness doesn't seem that unlikely. It's basically text prediction on a loop, with some exterior inputs to interact with.
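
A minimal sketch of that loop, with stand-in functions for the predictor and the sensors (illustrative wiring only, not a claim about how any real system works):

```python
# "Text prediction on a loop with exterior inputs": the three functions are
# stand-ins (hypothetical); only the wiring between them is the point.

def predict_next_thought(context: str) -> str:
    return "I should respond to: " + context.splitlines()[-1]  # stand-in predictor

def sense_environment(step: int) -> str:
    return f"\n[input {step}]"  # stand-in for camera/mic/chat input

def act(thought: str) -> None:
    print(thought)  # stand-in for speaking/moving/replying

context = ""
for step in range(3):                        # bounded here; unbounded in spirit
    context += sense_environment(step)       # exterior input
    thought = predict_next_thought(context)  # text prediction
    context += "\n" + thought                # output fed back in: the loop
    act(thought)
```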

[-] barsoap@lemm.ee 27 points 1 week ago

How to tell me you're stuck in your head terminally online without telling me you're stuck in your head terminally online.

But have something more to read.

[-] daniskarma@lemmy.dbzer0.com 15 points 1 week ago* (last edited 1 week ago)

Why are you being so rude?

Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion, to use as a weapon against a person you don't even know?

I will actually read it; I'm probably the only one of the two of us who will.

If it's convincing, I may change my mind. I'm not a radical like many other people are, and my opinions are subject to change.

[-] Ageroth@reddthat.com 19 points 1 week ago* (last edited 1 week ago)

Funny to me how defensive you got so quickly, accusing them of not reading the linked paper before even reading it yourself.

The reason OP was so rude is that your very premise of "what is the brain doing if not statistical text prediction" is completely wrong, and you don't even consider that it could be. You cite a TV show as a source for how it might work. Your concept of what artificial intelligence is comes from media, not science, and is not founded in reality.

The brain uses words to describe thoughts, but the words are not the thoughts themselves.

https://advances.massgeneral.org/neuro/journal.aspx?id=1096

Think about small children who haven't learned language yet: do their brains still do "statistical text prediction" despite not having words to predict?

What about dogs, cats, and other "less intelligent" creatures? They don't use any words, but we can still teach them to understand ideas. You don't need to utter a single word, not even a sound, to train a dog to sit. Are they doing "statistical text prediction"?

[-] barsoap@lemm.ee 11 points 1 week ago

It's a basic argument about generative complexity. I found the article some years ago while trying to find an earlier one (I don't think by the same author) that argued along the same complexity lines, essentially saying that if we worked the way AI folks think we do, we'd need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching while generating (we don't have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the "adjust the learning rate" level, but via mechanisms that change the resulting coding, thereby creating different such contexts, or at least that's where I see the connection between the two. In essence: to get to AGI, we need AIs that can develop their own topology.

As to "rudeness": make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to go on a denial spree... because if they (a) were actually into the topic, not just bystanders, and (b) did not have some psychological need to believe (including "my retirement savings are in AI stock"), they (c) would have come across the general argument themselves during their technical research. Or come up with it themselves; I've seen examples of that too. If you have a good intuition about complexity (and many programmers do), it's not an unlikely shower thought to have. Not as fleshed out as in the article, of course.

[-] SlopppyEngineer@lemmy.world 18 points 1 week ago

Human brains also process audio and video, learn on their own, have feelings, and do much more that is definitely not statistical text prediction. There are even people without an "inner monologue" who function just fine.

Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn't going to work.

[-] aesthelete@lemmy.world 12 points 1 week ago

What is your brain doing if not statistical text prediction?

Um, something wrong with your brain, buddy? Because that's definitely not at all how mine works.

[-] daniskarma@lemmy.dbzer0.com 10 points 1 week ago* (last edited 1 week ago)

Then why did you just express yourself in such a statistically predictable manner?

You've seen other people use that kind of language while being derogatory toward someone they don't like on the internet. You saw yourself in the same context, and your brain statistically chose the set of words it has seen most often in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained in your same echo chamber.

Have you ever debated someone from the polar opposite end of the political spectrum and complained that "they just repeat the same propaganda"? Doesn't that sound like statistical prediction to you? Those are very simple cases, and there are more complex ones, but our simplest behaviors are the ones that define the basics of what we're made of.

If you had at least given me a more complex expression, you might have had an argument (as humans, our processes can be far more complex and hide a little of what we actually seem to be doing). But in instances like this one, when one person (you) responds with such an obvious statistical prediction of what needs to be said in a particular context, you've just made my case. Thanks.

[-] TheFriar@lemm.ee 15 points 1 week ago

The only text predictor I want in my life is T9

[-] 7rokhym@lemmy.ca 12 points 1 week ago

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is never going to create a sentient being. The idea is absurd to me, but the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All of today's AI is the AI of the 1980s, just with more transistors than we could have fathomed back then; the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, most everything since has just been adding much more data, generalizing models, and other tweaks.

What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night and join smart grids to reduce peak air-conditioning load, and to scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits about anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth $1 billion or more, no matter what he has to say or do.

[-] RoidingOldMan@lemmy.world 7 points 1 week ago

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

[-] suy@programming.dev 11 points 1 week ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before, and it can't even get its facts straight without bullshitting.

This is correct, and I don't think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well... it depends. LLMs alone, no, but the researchers working on the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all entries eligible for the prize are, and they need to run on the private data set) and was based on Mixtral. The "trick" is that they do more than that: all the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows them to do. The key to generality is trying to learn after you've been trained, to try to solve something you've not been prepared for.

Even OpenAI's o1 and o3 do that, and so does the model Google released recently. They still rely heavily on an LLM, but they do more.
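
In outline, the test-time part looks something like this; a hand-wavy sketch with hypothetical method names, not any actual entry's code:

```python
# Hand-wavy sketch of test-time training (hypothetical API, not any real
# ARC entry's code). Each ARC puzzle ships a few demonstration input->output
# pairs; the idea is to keep learning on those pairs at test time.

def solve_task(base_model, demonstrations, test_input):
    model = base_model.copy()             # start from the pretrained LLM
    for _ in range(16):                   # small learning budget, spent at test time
        for x, y in demonstrations:
            model.gradient_step(x, y)     # hypothetical fine-tuning call
    # spend extra compute: sample many candidates, keep the most common one
    candidates = [model.generate(test_input) for _ in range(64)]
    return max(set(candidates), key=candidates.count)   # simple majority vote
```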

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

I'm not sure whether it's already proven or provable, but I think this is generally agreed: deep learning alone will fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. But the approaches for generalizing all seem to do more than that, whether it's search, program synthesis, or something else.

[-] bitjunkie@lemmy.world 7 points 1 week ago

I'm not sure that not bullshitting should be a strict criterion of AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.

[-] finitebanjo@lemmy.world 15 points 1 week ago

The LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.
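
Concretely, this is all the network ever receives; the IDs below come from a made-up vocabulary, not a real tokenizer:

```python
# What the model actually "sees": token IDs, not words. The numbers below
# are from a made-up vocabulary, not any real tokenizer.
vocab = {"the": 311, "cat": 9246, "lies": 15041, ".": 13}

sentence = "the cat lies ."
token_ids = [vocab[w] for w in sentence.split()]
print(token_ids)  # -> [311, 9246, 15041, 13]
# Training adjusts weights over sequences of such numbers; nothing in them
# says what 15041 refers to, so "lying" about it isn't even on the table.
```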

[-] 11111one11111@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there; is there a single thing in the universe that can't be broken down into a mathematical equation in physics or chemistry? I'm curious how different the process of a more advanced LLM or AGI model processing data is compared to a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it's a leap and I could be wrong, but I thought I'd heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

Like I said at the beginning, this is straight-up bong-rip philosophy, and I haven't looked up any of the stuff I brought up.

I will say, though, that I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will stay useful within. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they're not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform any more independently than a three-year-old.

[-] frezik@midwest.social 79 points 1 week ago

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

Cars, and snake oil, and propaganda

[-] Free_Opinions@feddit.uk 56 points 1 week ago

We've had a definition of AGI for decades: a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent", so replicate the same thing artificially and you've got AGI.

[-] LifeInMultipleChoice@lemmy.ml 17 points 1 week ago

So if you give a human and a system 10 tasks, and the human completes 3 correctly, 4 incorrectly, and 3 it fails to complete altogether... and then you give those 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I'd say the tasks need to be defined, because right now I can give people very many tasks that language models can solve and they can't, but language models aren't "AGI" in my opinion.

[-] hendrik@palaver.p3x.de 8 points 1 week ago

Agreed. And these tasks can't be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate computer code isn't enough in my eyes, especially since it even struggles to do that. It's the "general" that is missing.

[-] zeca@lemmy.eco.br 7 points 1 week ago* (last edited 1 week ago)

It's a definition, but not an effective one, in the sense that we can't test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand the basic cognitive abilities of humans that compose all the other cognitive abilities we have, if that's even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also the limits of human cognition.
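
For a sense of how short such a "finite list of mechanisms" can be, here's a toy Turing machine that increments a binary number (my own illustration):

```python
from collections import defaultdict

# A complete Turing machine: a finite transition table plus an unbounded
# tape. This one increments a binary number, e.g. 1011 -> 1100.
def run(transitions, tape_str, state="start"):
    tape = defaultdict(lambda: "_", enumerate(tape_str))  # blanks off the edges
    head = len(tape_str) - 1                              # start at the last digit
    while state != "halt":
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# The entire "program": carry a 1 leftward until a 0 or a blank absorbs it.
transitions = {
    ("start", "1"): ("start", "0", -1),  # 1 + carry = 0, keep carrying left
    ("start", "0"): ("halt",  "1",  0),  # 0 + carry = 1, done
    ("start", "_"): ("halt",  "1",  0),  # ran off the left edge: new leading 1
}
print(run(transitions, "1011"))  # -> 1100
```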

[-] FlyingSquid@lemmy.world 55 points 1 week ago* (last edited 1 week ago)

"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

[-] ChowJeeBai@lemmy.world 45 points 1 week ago

This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.

Except that it isn't AGI.

[-] phoneymouse@lemmy.world 21 points 1 week ago* (last edited 1 week ago)

But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

The real motivation is to not be beholden to Microsoft

[-] echodot@feddit.uk 20 points 1 week ago

So they don't actually have a definition of AGI; they just have a point at which they're going to announce it, regardless of whether it actually is AGI or not.

Great.

[-] hendrik@palaver.p3x.de 18 points 1 week ago* (last edited 1 week ago)

Why does OpenAI "have" all this stuff and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there... They have a definition of AGI... Yet they release none of it...

Some people even claim they already have a secret AGI, or at least that ChatGPT 5 will surely be it. I can see how that increases the company's value, and why you'd better not tell the truth. But with all the other things, it's just silly not to share anything.

Either they're even more greedy than the Metas and Googles out there, or all the articles and "leaks" are just unsubstantiated hype.

[-] mint_tamas@lemmy.world 21 points 1 week ago

Because they don't have all the things they claim to have, or they have them only with significant caveats. These things are publicized to fuel the hype, which attracts investor money. That's pretty much the only way they can generate money, since running the business is unsustainable and the next generation of hardware did not magically solve this problem.

[-] HawlSera@lemm.ee 17 points 1 week ago

I'm gonna laugh when Skynet comes online, runs the numbers, and finds that the country's starvation issues can be solved by feeding the rich to the poor.

[-] reksas@sopuli.xyz 9 points 1 week ago

It would be quite the trope inversion if people sided with the AI overlord.

[-] j4k3@lemmy.world 16 points 1 week ago

Does anyone have a real link to the non-stalkerware version of:

https://www.theinformation.com/articles/microsoft-and-openais-secret-agi-definition

and to the only place with the reference this article claims to cite but doesn't quote?

this post was submitted on 27 Dec 2024
382 points (100.0% liked)
