What do you think the tech bros will jump on next?
(feddit.org)
AI and NFTs are not even close. Almost every person I know uses AI, and nobody I know has used NFTs even once. NFTs were a marginal thing compared to AI today.
"AI" doesn't exist. Nobody that you know is actually using "AI". It's not even close to being a real thing.
We've been productively using AI for decades now – just not the AI you think of when you hear the term. Fuzzy logic, expert systems, basic automatic translation... Those are all things that were researched as artificial intelligence. We've been using neural nets (aka the current hotness) to recognize hand-written zip codes since the 90s.
Of course that's an expert definition of artificial intelligence. You might expect something different. But saying that AI isn't AI unless it's sentient is like saying that space travel doesn't count if it doesn't go faster than light. It'd be cool if we had that but the steps we're actually taking are significant.
Even if the current wave of AI is massively overhyped, as usual.
The issue is that AI is a buzzword to move product. The ones working on it call it an LLM; the ones seeking buy-in call it AI.
While labels change, it's not great to dilute meaning because a corpo wants to sell something but wants a free ride on the collective zeitgeist. Hoverboards went from a gravity-defying skateboard to a rebranded Segway without the handle that would burst into flames. But Segway 2.0 didn't focus-test well with the kids, and here we are.
The people working on LLMs also call it AI. It's just that LLMs are a small subset of the AI research area. That is, every LLM is AI, but not every AI is an LLM.
Just look at the conference names the research is published in.
Maybe, but that still doesn't mean the label AI was ever warranted, or that the ones who chose it didn't have a product to sell. The point still stands: these systems do not display intelligence any more than a Rube Goldberg machine is a thinking agent.
Well now you need to define "intelligence" and that's wandering into some thick philosophical weeds. The fact is that the term "artificial intelligence" is as old as computing itself. Go read up on Alan Turing's work.
Does “AI” have agency?
It's still an unsettled question whether we even do.
We have functional agency, regardless of your stance on determinism. "AI" does not even reach that bar, and so far has no pathway to reach it in its current direction. Though that might be by design. But whether humanity wants an actual AI is a different discussion entirely. Either way, these large models are not AI; they are just sold as such to make them seem like more than they actually are.
That's just kicking the can down the road, because now you have to define agency. Do you have agency? If you didn't, would you even know? Can you prove it either way? In any case, this is no longer a scientific discussion, but a philosophical one, because whether or not an entity has "intelligence" or "agency" are not testable questions.
We have functional agency regardless of your stance on determinism, in the same way that computers can obtain functional randomness even when they are unable to generate a truly random number. Artificial intelligence requires agency and spontaneity; these are the lowest bars it must pass. These models do not pass them, and the current path of their development cannot pass them, no matter how updated their training set or how bespoke their weights are.
These large models have no "true" concept of what they produce, in the same way a book has no concept of the material it contains, no matter how fancy the index is.
Is this scientifically provable? I don't see how this isn't a subjective statement.
Says who? Hollywood? For almost seventy years the term has been used by computer scientists to describe computers using "fuzzy logic" and "learning programs" to solve problems that are too complicated for traditional data structures and algorithms to reasonably tackle. It's really a very general and fluid field of computer science, as old as computer science itself. See the Wikipedia page.
And finally, there is no special sauce to animal intelligence. There's no such thing as a soul. You yourself are a Rube Goldberg machine of chemistry and electricity, your only "concepts" obtained through your dozens of senses constantly collecting data 24/7 since embryo. Not that the intelligence of today's LLMs is comparable to ours, but there's no magic to us; we're Rube Goldberg machines too.
“Functional” was the conditional that acknowledges the possibility of a totally deterministic existence, but dismisses it in favour of whatever we actually perceive as agency, as arguing one way or the other is a distraction from the topic and wholly unnecessary.
Also: “However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's [not labeled AI anymore]”” – Wikipedia
This should tell you that the term AI is commonly and improperly used to refer to computer behaviour that isn't properly understood. AI was popularized by science fiction to do what science fiction does best: force humanity to question, in this case, what consciousness is. That is to say, a consciousness that was designed, not self-built out of the muck. If you argue that how it's used determines its meaning, then fine: everything from punch-card looms to video game bosses to Excel spreadsheets is or has AI, and the designation becomes worthless. Once the magic fades, these LLMs will be as much an artificial intelligence as Siri.
Hucksters sell magic, scientists and engineers provide solutions.
And finally, I agree there is nothing "special" about us, but there is a difference between large models and consciousness. If you leave an LLM open and alone, how long before it starts to create something, or does anything at all? Leave an animal or a human in a blank room long enough and it will do something not related to direct survival.
It took someone literally creating a picture of a full wine glass before an "art" AI could generate one. This should tell you these systems do not have a functioning concept of the subject matter, but they are good enough at convincing people they do.
Not to go way off topic here, but this reminds me: Palm's "Graffiti" handwriting recognition was a REALLY good input method back when I used it. I bet it did something similar.
AI is a standard term that is used widely in the industry. Get over it.
While I grew up with the original definition as well, the term AI has changed over the years. What we used to call AI is now referred to as AGI. There are several breakthroughs still needed before we get the AI of the past. Here is a statement made by AI about the subject.
The Spectrum Between AI and AGI:
Narrow AI (ANI):
This is the current state of AI, which focuses on specific tasks and applications.
General AI (AGI):
This is the theoretical goal of AI, aiming to create systems with human-level intelligence.
Superintelligence (ASI):
This is a hypothetical level of AI that surpasses human intelligence, capable of tasks beyond human comprehension.
In essence, AGI represents a significant leap forward in AI development, moving from task-specific AI to a system with broad, human-like intelligence. While AI is currently used in various applications, AGI remains a research goal with the potential to revolutionize many aspects of life.
I don't really care what anyone wants to call it anymore, people who make this correction are usually pretty firmly against the idea of it even being a thing, but again, it doesn't matter what anyone thinks about it or what we call it, because the race is still happening whether we like it or not.
If you're annoyed with the sea of LLM content and generated "art" and the tired way people are abusing ChatGPT, welcome to the club. Most of us are.
But that doesn't mean that every major nation and corporation in the world isn't still scrambling to claim the most powerful, most intelligent machines they can produce, because everyone knows that this technology is here to stay and it's only going to keep getting worked on. I have no idea where it's going or what it will become, but the toothpaste is out and there's no putting it back.
Every NFT denial:
"They'll be useful for something soon!"
Every AI denial:
"Well then you must be a bad programmer."
I can’t think of anyone using AI. Many people talk about encouraging their customers/clients to use AI, but no one is using it themselves.
Some of this is cool, lots of it is stupid, and lots of people are using it to scam other people. But it is getting used, and it is getting better.
And yet none of this is actually "AI".
The wide range of these applications is a great example of the "AI" grift.
I looked through your comment history. It's impressive how many times you repeat this mantra: even while people downvote you and call out your bad faith, you keep doing it.
Why? I think you have a hard time realizing that people may have another definition of AI than you do. Even if you don't agree with their version, you should still be open to that possibility. Just spewing out your take doesn't help anyone.
For me, AI is a broad field of maths encompassing ALL of Machine Learning but also other approaches, from simple if/else programming that solves one very specific task, to "smarter" problem-solving algorithms such as pathfinding, to other statistical methods for solving more data-heavy problems.
Machine Learning has become a huge field (again, all of it inside the field of AI). A small but growing part of ML is LLMs, which are what we are talking about in this thread.
All of the above is AI. None of it is AGI - yet.
You could change all of your future comments to "None of this is "AGI"" in order to be more clear. I guess that wouldn't trigger people as much though...
I'm a huge critic of the AI industry and the products they're pushing on us... but even I will push back on this kind of blind, mindless hate from that user without offering any explanation or reasoning. It's literally as bad as the cultists who think their AI Jesus will emerge any day now and literally make them fabulously wealthy.
This is a technology that's not going away, it will only change and evolve and spread throughout the world and all the systems that connect us. For better or worse. If you want to succeed and maybe even survive in the future we're going to have to learn to be a LOT more adaptable than that user above you.
If automatically generated documentation is a grift I need to know what you think isn't a grift.
You can name it whatever you want, and I highly encourage people to be critical of the tech, but this is so we get better products, not to make it "go away."
It's not going away. Nothing you or anyone else, no matter how many people join in the campaign, will put this back in the toothpaste tube. Short of total civilizational collapse, this is here to stay. We need to work to change it to something useful and better. Not just "BLEGH" on it without offering solutions. Or you will get left behind.
Oh, of course; but the question being: are you personally friends with any of these people? Do you know them?
If I learned a friend generated AI trash for their blog, they wouldn’t be my friend much longer.
This makes you a pretty shitty friend.
I mean, I cannot stand AI slop and have no sympathy for people who get ridiculed for using it to produce content... but it's different if it's a friend, jesus christ, what kind of giant dick do you have to be to throw away a friendship because someone wanted to use a shortcut to get results for their own personal project? That's supremely performative. I don't care for the current AI content but I wouldn't say something like this thinking it makes me sound cool.
I miss when adults existed.
edit: I love that there are three people who read this and said "Well I never! I would CERTAINLY sever a friendship because someone used an AI product for their own project!" Meanwhile, we're all wondering why people are so fucking lonely right now.
What?! Is this a guess or actual fact?
I have been using Copilot since around April 2023 for coding. If you don't use it, you are doing yourself a disservice; it's excellent at eliminating chores. Write the first unit test, and it can fill in the rest after you simply name the next unit test.
Want to edit SQL? Ask Copilot.
Want to generate JSON based on SQL with some dummy data? Ask Copilot.
Why do the stupid menial tasks that you sometimes have to do when you can just ask "AI" to do them for you?
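To make that workflow concrete, here's a minimal sketch of the pattern described above. Everything here is made up for illustration: the `to_cents` helper and all the test names are hypothetical, not from any real codebase. You write the first test by hand; the later ones are the kind of bodies an assistant like Copilot will typically propose once you type the method name.

```python
import unittest

def to_cents(amount_str):
    """Parse a money string like '$3.50' into integer cents.
    (Hypothetical helper, just to give the tests something to exercise.)"""
    cleaned = amount_str.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

class TestToCents(unittest.TestCase):
    # You write the first test by hand...
    def test_whole_dollars(self):
        self.assertEqual(to_cents("$3"), 300)

    # ...then just name the next ones; an assistant will usually
    # propose bodies in the same style for you to accept or fix.
    def test_dollars_and_cents(self):
        self.assertEqual(to_cents("$3.50"), 350)

    def test_leading_whitespace(self):
        self.assertEqual(to_cents("  $0.99"), 99)

if __name__ == "__main__":
    unittest.main()
```

The point isn't that the generated bodies are always right; it's that reviewing a proposed three-line test is faster than typing it.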
What?
If you have ever used an online translator like Google Translate or DeepL, you were using AI. Most email providers use AI for spam detection. A lot of cameras use AI to set parameters or improve/denoise images. Cars with certain levels of automation often use AI.
Those are just everyday uses; AI is also used all the time in fields like astronomy and medicine, and even in mathematics for assistance in writing proofs.
None of this stuff is "AI". A translation program is no "AI". Spam detection is not "AI". Image detection is not "AI". Cars are not "AI".
None of this is "AI".
Sure it is. If it's a program that is meant to make decisions in the same way an intelligent actor would, then it's AI. By definition. It may not be AGI, but in the same way that enemies in a video game run on AI, this does too.
They're functionalities that were not made with traditional programming paradigms, but rather by modeling and training the model to fit it to the desired behaviour, making it able to adapt to new situations; the same basic techniques that were used to make LLMs. You can argue that it's not "artificial intelligence" because it's not sentient or whatever, but then AI doesn't exist and people are complaining that something that doesn't exist is useless.
Or you can just throw statements with no arguments under some personal secret definition, but that's not a very constructive contribution to anything.
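The distinction drawn above (fitting a model to examples rather than encoding the steps by hand) can be sketched in a few lines. This is a toy with entirely made-up data: instead of hard-coding a spam cutoff, we pick the one that best fits labeled examples, and the fitted rule then applies to messages it never saw.

```python
# Hypothetical task: flag messages as spam based on how many ALL-CAPS
# words they contain. We learn the cutoff from examples instead of
# hard-coding it.

def caps_words(msg):
    return sum(1 for w in msg.split() if w.isupper() and len(w) > 1)

# Labeled examples: (message, is_spam). All made up for illustration.
training = [
    ("hello how are you", False),
    ("meeting at noon", False),
    ("ONE small NOTE here", False),
    ("FREE MONEY CLICK NOW", True),
    ("WIN A PRIZE TODAY", True),
    ("URGENT ACT NOW LIMITED OFFER", True),
]

def fit_threshold(data):
    """Pick the caps-count cutoff that misclassifies the fewest examples."""
    counts = [caps_words(m) for m, _ in data]
    best_t, best_errors = 0, len(data) + 1
    for t in range(max(counts) + 2):
        errors = sum((caps_words(m) >= t) != label for m, label in data)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = fit_threshold(training)
is_spam = lambda msg: caps_words(msg) >= threshold
```

Real spam filters use far richer features and models, but the shape is the same: the behaviour comes from the data, not from a hand-written rule.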
It's possible translate has gotten better with AI. The old versions, however, were not necessarily using AI principles.
I remember learning about image recognition tools that were simply based around randomized goal-based heuristics. It's tricky programming, but I certainly wouldn't call it AI. Now it's a challenge to define what is and isn't AI, and a lot of the labeling is likely just used to gather VC funding. Much like porn, it becomes a "know it when I see it" moment.
Image recognition depends on the amount of resources you can offer for your system. There are traditional methods of feature extraction like edge detection, histograms of oriented gradients, and Viola-Jones, but the best performers are all convolutional neural networks.
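For contrast with the learned approaches, here is roughly what one of those traditional hand-engineered methods looks like: a minimal Sobel edge detector in plain Python (the tiny test image is made up for illustration). Nothing here is learned from data; the filter weights are fixed by design.

```python
# Sobel kernels: fixed, hand-designed filters that respond to horizontal
# and vertical intensity changes.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude at each interior pixel of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A tiny image with a sharp vertical boundary: dark left half, bright right.
img = [[0, 0, 100, 100]] * 4
edges = sobel_magnitude(img)
# The response is large at the interior columns where the intensity jumps
# and zero in the padded border.
```

A CNN, incidentally, learns kernels of exactly this shape from data instead of having them specified by an engineer, which is why the two families sit in the same field.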
While the term can be up for debate, you cannot separate these cases and things like LLMs and image generators; they are the same field. Generative models try to capture the distribution of the data, whereas discriminative models try to capture the distribution of labels given the data. Unlike traditional programming, you do not directly encode a sequence of steps that manipulate data into what you want as a result, but instead you try to recover the distributions based on the data you have, and then you use the model you have made in new situations.
And generative and discriminative/diagnostic paradigms are not mutually exclusive either, one is often used to improve the other.
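On a toy dataset (made up for illustration), the two paradigms above can be sketched like this: the discriminative route estimates p(label | feature) directly from counts, while the generative route models p(feature | label) and p(label) and recovers the same conditional via Bayes' rule.

```python
from collections import Counter

# Toy (feature, label) pairs with a single categorical feature.
data = [("red", "apple"), ("red", "apple"), ("green", "apple"),
        ("green", "pear"), ("green", "pear"), ("red", "apple")]

# Discriminative: estimate p(label | feature) directly from counts.
def p_label_given_feature(label, feature):
    rows = [l for f, l in data if f == feature]
    return rows.count(label) / len(rows)

# Generative: model p(feature | label) and p(label), then apply
# Bayes' rule: p(label | feature) is proportional to
# p(feature | label) * p(label).
def p_label_given_feature_bayes(label, feature):
    labels = Counter(l for _, l in data)
    def joint(lab):
        feats = [f for f, l in data if l == lab]
        return (feats.count(feature) / len(feats)) * labels[lab] / len(data)
    return joint(label) / sum(joint(lab) for lab in labels)

# With full counts and no smoothing, both routes agree.
```

The generative route costs more modeling effort here, but it also gives you p(feature), which is what lets generative models sample new data, the property that connects image generators and LLMs back to the rest of the field.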
I understand that people are angry with the aggressive marketing and find that LLMs and image generators do not remotely live up to the hype (I myself don't use them), but extending that feeling to the entire field to the point where people say they "loathe machine learning" (which as a sentence makes about as much sense as saying you loathe the Euclidean algorithm) is unjustified, just like limiting the term AI to a single-digit number of use cases out of an entire family of solutions.
They just released AWS Q Developer. It's handy for the things I'm not familiar with but still needs some work
I am one of the biggest critics of AI, but yeah, it's NOT going anywhere.
The toothpaste is out, and every nation on Earth is scrambling to get the best, smartest, most capable systems in their hands. We're in the middle of an actual arms-race here and the general public is too caught up on the question of if a realistic rendering of Lola Bunny in lingerie is considered "real art."
The ChatGPT/LLM shit that we're swimming in is just the surface-level annoying marketing for what may be our last invention as a species.
I have some normie friends who asked me to break down what NFTs were and how they worked. These same people might not understand how "AI" works (they do not), but they understand that it produces pictures and writing.
Generative AI has applications for all the paperwork I have to do. Honestly if they focused on that, they could make my shit more efficient. A lot of the reports I file are very similar month in and month out, with lots of specific, technical language (Patient care). When I was an EMT, many of our reports were for IFTs, and those were literally copy pasted (especially when maybe 90 to 100 percent of a Basic's call volume was taking people to and from dialysis.)
Holy shit, then you definitely can't use an LLM because it will just "hallucinate" medical information.
If you were part of the Starbucks loyalty scheme, then you used NFTs.