There's magic?
Only if you believe in it. Many CEOs do. They're very good at magical thinking.
I have a counterargument. From an evolutionary standpoint, if you keep doubling computer capacity exponentially, isn't it extraordinarily arrogant of humans to assume that their evolutionarily stagnant brains will remain relevant for much longer?
You can make the same argument about humans that you do about AI, but from a biological and societal standpoint. Barring any jokes about certain political or geographical stereotypes, humans have gotten "smarter" than we used to be. We are very adaptable, and with improvements to diet and education, we have managed to stay ahead of the curve. We didn't peak at hunter-gatherer. We didn't stop at the Renaissance. And we blew right past the industrial revolution. I'm not going to channel my "Humanity, Fuck Yeah" inner wolf howl, but I have to give our biology props. The body is an amazing machine, and even though we can look at things like the current crop of AI and think, "Welp, that's it, humans are done for," I'm sure a lot of people thought the same at other pivotal moments in technological and societal advancement. Here I am, though, farting taco bell into my office chair and typing about it.
You can compare human intelligence to centuries ago on a simple linear scale. Neural density has not increased, by any stretch of the imagination, the way that transistor density has. But I'm not just talking density; I'm talking about scalability that is infinite. Infinite scale of knowledge and data.
Let's face it, people are already not that intelligent; we are just smart enough to use the technology of other, smarter people. And then there are computers: they are growing in intelligence with an artificial evolutionary pressure being exerted on their development, and you're telling me that's not going to continue until they surpass us in every way? There is very little to stop computers from being intelligent on a galactic scale.
Computer power doesn't scale infinitely, unless you mean building a world mind and powering it off the spinning singularity at the center of the galaxy like a type 3 civilization, and that's sci-fi stuff. We still have to worry about bandwidth, power, cooling, coding, and everything else that goes into running a computer. It doesn't just "scale". There is a lot that goes into it, and it does have a ceiling. Quantum computing may alleviate some of that, but I'll hold my applause until we see some useful real-world applications for it.
Furthermore, we still don't understand how the mind works yet. There are still secrets to unlock and ways to potentially augment and improve it. AI is great, and I fully support the advancement in technology, but don't count out humans so quickly. We haven't even gotten close to human-level intelligence with GOFAI, and maybe we never will.
If you keep doubling the number of fruit flies exponentially, isn't it likely that humanity will find itself outsmarted?
The answer is no, it isn't. Quantity does not quality make and all our current AI tech is about ways to breed fruit flies that fly left or right depending on what they see.
As a counterargument against that: companies have been trying to make self-driving cars work for 20 years. Processing power has increased a millionfold and the things still get stuck. Pure processing power isn't everything.
Magic as in street magician, not magic as in wizard. Lots of the things that people claim AI can do are like a magic show, it's amazing if you look at it from the right angle, and with the right skill you can hide the strings holding it up, but if you try to use it in the real world it falls apart.
Everything is magic if you don't understand how the thing works.
I wish. I don't understand why my stomach can't handle corn, but it doesn't lead to magic. It leads to pain.
The masses have been treating it like actual magic since the early stages and are only slowly warming up to the idea that it's calculations. Calculations that are often more than the sum of their parts, as people are starting to realize. Well, some people anyway.
Sam Altman will make a big pile of investor money disappear before your very eyes.
Good. It's dangerous to view AI as magic. I've had to debate way too many people who think LLMs are actually intelligent. It's dangerous to overestimate their capabilities, lest we use them for tasks they can't perform safely. They're very powerful, but the fact that they're nondeterministic and unpredictable means we need to very carefully design systems that rely on LLMs, with heavy guard rails.
Conversely, there are way too many people who think that humans are magic, and that it's therefore impossible for AI to ever do what we do.
I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There's basic inert rocks at one end, and humans at the other, and everything else gets scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and accept the possibility that they're moving in our direction.
It's not linear either. Brains are crazy complex and have sub-cortexes that are specialized for specific tasks. I really don't think that LLMs alone can possibly demonstrate advanced intelligence, but I do think one could be a very important cortex for a system that does. There are also different types of intelligence: LLMs are very knowledgeable and have great recall, but lack reasoning and a worldview.
Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched, with the relevant bits inserted into the LLM's context when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine sorts of computer activities that we've already had a handle on for decades.
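To make the RAG pattern concrete, here's a toy sketch. The keyword-overlap retriever stands in for real embedding search, and the prompt assembly stands in for an actual LLM call; all the names and documents here are made up for illustration.

```python
# Toy RAG sketch: search a "memory" of documents, then splice the best
# matches into the prompt before the question reaches the LLM.

def retrieve(query: str, memory: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the query (stand-in
    for embedding similarity search in a real system)."""
    q = set(query.lower().split())
    ranked = sorted(memory, key=lambda doc: -len(q & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(query: str, memory: list[str]) -> str:
    """Insert retrieved snippets into the context ahead of the question."""
    context = "\n".join(retrieve(query, memory))
    return f"Context:\n{context}\n\nQuestion: {query}"

memory = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "The Atlantic Ocean covers about 20% of Earth's surface.",
]
prompt = build_prompt("How tall is the Eiffel Tower?", memory)
```

The point is the shape of the system, not the retrieval quality: the LLM never has to "know" the facts, it just gets them pasted into its context.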
IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
Lol yup, some people think they're real smart for realizing how limited LLMs are, but they don't recognize that the researchers that actually work on this are years ahead on experimentation and theory already and have already realized all this stuff and more. They're not just making the specific models better, they're also figuring out how to combine them to make something more generally intelligent instead of super specialized.
Yeah, try talking to ChatGPT about things that you know about in real detail. It will fail to show you the hidden, niche things (unless you mention them yourself), it will make lots of stuff up that you would not pick up on otherwise (and once you point it out, the bloody thing will "I knew that" you, sometimes even if you are wrong), and it is very shallow in its details. Sometimes it just repeats your question back to you as a well-written essay. And that's fine... it is still a miracle that it is able to be as reliable and entertaining as some random bullshitter you talk to in a bar, and it's good for brainstorming too.
I really only use it for the "oh damn, I know there's a great one-liner to do that in Python" sort of thing. It's usually right, and if it isn't, it'll be immediately obvious and you can move on with your day. For anything more complex, the gaslighting and subtle errors make it unusable.
ChatGPT is great for helping with specific problems. Google search for example gives fairly general answers, or may have information that doesn't apply to your specific situation. But if you give ChatGPT a very specific description of the issue you're running into it will generally give some very useful recommendations. And it's an iterative process, you just need to treat it like a conversation.
Those recent failures only come across as cracks for people who see AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.
Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.
I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.
Also interesting is that most people don't understand the advances it makes possible, so when they hear people saying it's amazing and then try it, of course they're going to think it hasn't lived up to the hype.
The big things are going to completely change how we use computers, especially being able to describe how you want a UI laid out and create custom tools on the fly.
I hope it collapses in a fire and we can just keep our foss local models with incremental improvements, that way both techbros and artbros eat shit
There are quite a lot of AI-sceptics in this thread. If you compare the situation to 10 years ago, isn't it insane how far we've come since then?
Image generation, video generation, self-driving cars (Level 4 so the driver doesn't need to pay attention at all times), capable text comprehension and generation. Whether it is used for translation, help with writing reports or coding. And to top it all off, we have open source models that are at least in a similar ballpark as the closed ones and those models can be run on consumer hardware.
Obviously AI is not a solved problem yet and there are lots of shortcomings (especially with LLMs and logic where they completely fail for even simple problems) but the progress is astonishing.
I think a big obstacle to meaningfully using AI is going to be public perception. Understanding the difference between ChatGPT and open source models means that people like us will probably continue to find ways of using AI as it continues to improve, but what I keep seeing is botched applications, where neither the consumers nor the investors who are pushing AI really understand what it is or what it's useful for. It's like trying to dig a grave with a fork - people are going to throw away the fork and say it's useless, not realising that that's not how it's meant to be used.
I'm concerned about the way the hype behaves because I wouldn't be surprised if people got so sick of hearing about AI at all, let alone broken AI nonsense, that it hastens the next AI winter. I worry that legitimate development may be held back by all the nonsense.
Lol. It doesn't do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff that is being hyped as "AI" is just spam generators.
That's totally not how it works. Not only does nobody have the need for such tools, but the technology got there well before the current state of AI.
As I often mention when this subject pops up: while the current statistics-based generative models might see some application, I believe that they'll eventually be replaced by better models that are actually aware of what they're generating, instead of simply reproducing patterns. The current models will be remembered as "that cute '20s toy".
In text generation (currently dominated by LLMs), for example, this means that the main "bulk" of the model would do three things:
- convert input tokens into sememes (units of meaning)
- perform logic operations with the sememes
- convert sememes back into tokens for the output
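The three steps above can be sketched as a toy pipeline. The sememe inventory, the "logic" step, and the mappings are all invented for illustration; nothing here reflects how any real model works, it just shows the token → sememe → token shape the comment is proposing.

```python
# Hypothetical token -> sememe -> logic -> token pipeline (illustrative only).

SEMEMES = {"dog": "CANINE", "puppy": "CANINE", "cat": "FELINE"}   # token -> meaning
SURFACE = {"CANINE": "dog", "FELINE": "cat"}                      # meaning -> token

def to_sememes(tokens):
    """Step 1: convert input tokens into units of meaning."""
    return [SEMEMES.get(t, t.upper()) for t in tokens]

def reason(sememes):
    """Step 2: a placeholder 'logic operation' - here, deduplicate
    repeated meanings while preserving order."""
    seen, out = set(), []
    for s in sememes:
        if s not in seen:
            seen.add(s)
            out.append(s)
    return out

def to_tokens(sememes):
    """Step 3: convert sememes back into tokens for the output."""
    return [SURFACE.get(s, s.lower()) for s in sememes]

result = to_tokens(reason(to_sememes(["dog", "puppy", "cat"])))
```

Note that "dog" and "puppy" collapse into one meaning before the logic step, which is exactly the kind of operation a token-chaining model has no explicit handle on.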
Because, as it stands, LLMs are only chaining tokens. They might do this in an incredibly complex way, but that's it. That's obvious when you look at what LLM-fuelled bots output as "hallucinations" - they aren't the result of some internal error, they're simply an undesired product of a model that sometimes outputs desirable stuff too.
Sub "tokens" and "sememes" with "pixels" and "objects" and this probably holds true for image generating models, too. Probably.
Now, am I some sort of genius for noticing this? Probably not; I'm just some nobody with a chimp avatar, rambling in the Fediverse. Odds are that people behind those tech giants already noticed the same ages ago, and at least some of them reached the same conclusion - that better gen models need more awareness. If they are not doing this already, it means that this shit would be painfully expensive to implement, so the "better models" that I mentioned at the start will probably not appear too soon.
Most cracks will stay there; Google will hide them with an obnoxious band-aid, OpenAI will leave them in plain daylight, but the magic trick will still not be perfect, at least in the foreseeable future.
And some might say "use MOAR processing power!", or "input MOAR training data!", in the hopes that the current approach will "magically" fix itself. For those, imagine yourself trying to drain the Atlantic with a bucket: does it really matter if you use more buckets, or larger buckets? Brute-forcing problems only goes so far.
Just my two cents.
I don't know much about LLMs, but latent diffusion models already have "meaning" encoded into the model. The whole concept of the U-Net is that as it reduces the spatial resolution of the image, it increases the semantic resolution by adding extra dimensions of information. It came from medical image analysis, where the idea of labelling something as a tumor would be really useful.
This is why you get anatomically mangled results on earlier (and even current) models. It's identified something as a human limb, but isn't quite sure where the hand is, so it adds one onto what we know is a leg.
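The trade-off described above can be sketched numerically: each U-Net downsampling step halves the spatial resolution while (typically) doubling the channel count, trading "where" for "what". The starting size, channel count, and number of levels below are illustrative, not taken from any specific model.

```python
# Sketch of the U-Net encoder shape progression: spatial resolution
# shrinks while the channel (semantic) dimension grows at each level.

def unet_encoder_shapes(size=64, channels=64, levels=4):
    """Return (height, width, channels) at each encoder level,
    assuming halved resolution and doubled channels per step."""
    shapes = [(size, size, channels)]
    for _ in range(levels):
        size //= 2        # less spatial detail...
        channels *= 2     # ...but more semantic capacity per location
        shapes.append((size, size, channels))
    return shapes

shapes = unet_encoder_shapes()
# e.g. from (64, 64, 64) down to (4, 4, 1024) at the bottleneck
```

At the bottleneck the network "knows" a lot about what is in each region ("limb here") but very little about exactly where its parts go, which is consistent with the hand-on-a-leg failures.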
I agree 100%, and I think Zuckerberg's attempt to build a massive LLM-based AI on 340,000 of Nvidia's H100 GPUs, with the aim of creating a general AI, sounds stupid. Unless there's a lot more to their attempt, it's doomed to fail.
I suppose the idea is something about achieving critical mass, but it's pretty obvious that that is far from the only factor missing to achieve general AI.
I still think it's impressive what they can do with LLMs, and it seems to be a pretty huge step forward. But it's taken about 40 years from when we had decent "pattern recognition" to get here; the next step could be another 40 years.
I think that Zuckerberg's attempt is a mix of publicity stunt and "I want to believe!". Trying to reach AGI through a large enough LLM sounds silly, on the same level as "ants build, right? If we gather enough ants, they'll build a skyscraper! Trust me."
In fact I wonder if the opposite direction wouldn't be a bit more feasible - start with some extremely primitive AGI, then "teach" it Language (as a skill) and a language (like Mandarin or English or whatever).
I'm not sure on how many years it'll take for an AGI to pop up. 100 years perhaps, but I'm just guessing.
Trying to make real, productive use of AI generative models is where the cracks in the magic show.
It's pretty useful if you know exactly what you want and how to work within its limitations.
Coworkers around me already use ChatGPT to generate code snippets for Python, Excel VBA, etc. to good success.
Right, it's a tool with quirks, techniques and skills to use just like any other tool. ChatGPT has definitely saved me time and on at least one occasion, kept me from missing a deadline that I probably would have missed if I went about it "the old way" lmao
I found this graph very clear
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022...
"This post is for paid subscribers"
(Also that page has a script I had to override just to copy and paste that)