'AI isn't reliable, has a ton of bias, tells many lies confidently, can't count or do basic math, just parrots whatever is fed to it from the internet, wastes a lot of energy and resources and is fucking up the planet...' When I see these criticisms of AI, I wonder if it's the critics' first day on the planet and they haven't met humans yet.
... You are deliberately missing the point.
When I ask a question, I don't want to hear what most people think but what people who are knowledgeable about the subject of my question think, and LLMs will fail at that by design.
LLMs don't just waste a lot, they waste at a ridiculous scale. According to Statista (2024), training GPT-3 was responsible for 500 tCO2. All for what? An automatic plagiarism-and-bias machine? And before the litany of "it's just the training cost, after that it's ecologically cheap": tell me how your LLM will remain relevant if it isn't constantly retrained on new data?
LLMs don't bring any value. If I want information, I already have search engines (even if LLMs have degraded the quality of the results); if I want art, I can pay someone to draw it, etc.
500 tons of CO2 is... surprisingly little? Like, rounding error little.
I mean, one human exhales ~400 kg of CO2 per year (according to this). Training GPT-3 produced as much CO2 as 1250 people breathing for a year.
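The comparison is simple enough to verify. A quick sanity check of the arithmetic, using the 500 t training figure and the ~400 kg/person/year exhalation figure cited in the comments above:

```python
# Back-of-the-envelope check of the CO2 comparison in this thread.
# Figures are the ones quoted above, not independently verified here:
#   - 500 tCO2 for training GPT-3 (Statista figure)
#   - ~400 kg CO2 exhaled per person per year (rough estimate)
training_emissions_kg = 500 * 1000    # 500 tonnes expressed in kg
per_person_kg_per_year = 400          # rough human exhalation figure

people_equivalent = training_emissions_kg / per_person_kg_per_year
print(people_equivalent)  # 1250.0
```

So the "1250 people breathing for a year" figure does follow from those two numbers; whether 500 t is the right training estimate is a separate question.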
That seems so little only because it doesn't account for data-center construction costs, hardware production costs, etc. One model costing as much as 1250 people breathing for a year is enormous to me.
I don't know why people downvoted you. It is surprisingly little! I checked the 500 tons number thinking it could be a typo or a mistake but I found the same.
I kind of am tbh
I mostly use it to ask about something I can describe but whose word or name I don't know or can't remember. But I've also asked it more specialized and even pretty niche questions, some simply as a test, and it's done pretty well.
Coming back to the point of the comment: you could argue that people aren't much more than 'automatic plagiarism bias machines' either.
Search engines were already doing great at giving you answers by description. For specialized questions, the wiki or documentation dedicated to your field will be much better. You have no guarantee that LLMs won't generate garbage, so you have to check the sources (if they exist); so just read the source, it'll waste less time and energy.
Humans are much more than 'automatic plagiarism bias machines'... don't you dare equate an autocorrect with life.
I do dare to equate them. Sorry if it offends you, but I'm not religious or spiritual. Thought, reasoning, consciousness... are just products of the computing power of human meatware. There's no reason that computing couldn't be done by electronics instead of chemistry. Are we there yet? I don't think so. Will we get there? Who knows. Equating LLMs to an autocorrect is like equating a lightbulb to a modern computer.
And to answer your first two paragraphs: no, they weren't doing great at that. In fact, search engines have been going to shit in the last few years. And no, it's not AI's fault; I'd say it's SEO's and upper management's.
I'm not arguing on religious or spiritual grounds; I don't follow any religion and I'm not into spiritual stuff.
Reducing people to what they can produce is where my problem lies. LLMs are word-prediction machines by design; that's why I called them glorified autocorrect.
I won't waste both of our time: you have your opinion, I have mine, so let's leave it at that.
I just think that many people are underestimating a very powerful tool. Taking labor from humans is the good part! Oppressing, controlling, spying... those are the dangerous and scary parts. Even the ads! Imagine the level of personalization they can reach.
But anyway, have a good day!
"I don't believe in magical thinking. I just believe that GenAI will magically develop consciousness one day."