
Source (Bluesky)

[-] apfelwoiSchoppen@lemmy.world 99 points 1 month ago* (last edited 1 month ago)

I ordered some well-rated concert ear protection from the maker's website. The order sat for weeks after a shipping label was printed; it was likely forgotten. When I went looking for a way to call or otherwise reach a human there, all they had was a self-described AI chat robot that just talked down to me. It simply would not believe my experience.

I eventually got the ear protection, but I won't be buying from them again. They can't even staff some folks to check email. I did find their PR email address, but even that was outsourced to a PR firm that never got back to me. Utter shit, AI.

[-] zod000@lemmy.ml 36 points 1 month ago

I'm glad you mentioned the company directly as I also want to steer clear of companies like this.

[-] crank0271@lemmy.world 11 points 1 month ago

That's really good to know about these things. They've been on sale through Woot. I guess there's a good reason for that.

[-] Catoblepas 5 points 1 month ago

Wow, that’s extremely disappointing. I had a really positive experience with them a few years ago when I wanted to exchange what I got (it was too quiet for me), and they just sent me a free pair after I talked to an actual person on their chat thing. It’s good to know that’s not how they are anymore if I ever need to replace them.

Never thought about ear protection for concerts; sounds cool. I'll have to look into other options, though. If anyone has any recommendations, let me know.

[-] Andonyx@lemmy.world 6 points 1 month ago

A number of companies make "tuned" ear plugs that let some sound through with a desired frequency curve while reducing SPL to safe levels. I've used Etymotic, which sound great, but I personally like a little more reduction; and Alpine, which I thought had enough reduction but too much coloring. I settled on EarPeace, for about $25 online: silicone, reusable, easy to clean, and they come with three filters to swap in or out depending on your needs/tastes.

[-] lapping6596@lemmy.world 3 points 1 month ago

Oh man, sad that's the customer service, because I deeply love my Loops. I was already carrying them with me everywhere I went, so I grabbed a pill-keychain thing and attached them to my keys so I'd never forget to grab them.

[-] romanticremedy 26 points 1 month ago

I think this problem will get worse: many of the websites people rely on to "do your own research" will lose the human traffic that watches their ads while bots keep scraping their data, reducing the motivation to keep those websites running. Most people take the path of least resistance, so I think AI search will be the default soon.

Yes, I hate this timeline

[-] HalfSalesman@lemm.ee 7 points 1 month ago

Eventually they will pay AI companies to integrate advertisements into the LLMs' outputs.

[-] romanticremedy 5 points 1 month ago

Omg, I can see it happening. Instead of annoying, intrusive ads, this new type will feel so natural, as if a close friend were suggesting it.

More dystopian future. Yes we need it /s

[-] anachrohack@lemmy.world 21 points 1 month ago

I use Claude for coding questions. I don't use it to generate my code; I mostly use it for a kind of automated code review, to look for obvious pitfalls. It's pretty neat for that.

I don't use any other AI-powered products. I don't let it generate emails, and I don't let it analyze data. If your site comes with a built-in LLM-powered feature, I assume:

  1. It sucks
  2. You are a con artist

AI is the new Crypto. If you are vaguely associated with it, I assume there's something criminal going on

[-] IrateAnteater@sh.itjust.works 12 points 1 month ago

The only time I disagree with this is when the business is substituting "AI" in for "machine learning". I've personally seen that work in applications where traditional methods don't work very well (vision guided industrial robot movement in this case).

[-] TheTechnician27@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

Huh? Deep learning is a subset of machine learning is a subset of AI. This is like saying a gardening center is substituting "flowers" in for "chrysanthemums".

[-] IrateAnteater@sh.itjust.works 13 points 1 month ago

I don't control what the vendor marketing guys say.

If you're expecting "technically correct" from them, you'll be doomed to disappointment.

[-] TheTechnician27@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

My point is that the scenario you just described is technically correct (edit: whereas you seem to be saying it isn't; it's also colloquially correct). Referring to "machine learning" as "AI" is correct in the same way referring to "a rectangle" as "a quadrilateral" is correct.


EDIT: I think some people are interpreting my comment as "b-but it's technically correct, the best kind of correct!" pedantry. My point is that the commenter I'm responding to seems to think they got it technically incorrect, but they didn't: what they said is not just "technically correct" but completely, unambiguously correct in every way. They're the ones who said "If you're expecting 'technically correct' from them, you'll be doomed to disappointment," so I pointed out that I'm not doomed to disappointment, because they are correct both colloquially and technically. Please see my comment below, where I explain why what they said about distinguishing AI from machine learning makes literally zero sense.

[-] subignition@fedia.io 11 points 1 month ago

Language is descriptive, not prescriptive. "AI" has come to be a specific colloquialism, and if you refuse to accept that, you're going to cause yourself pain when communicating with people who aren't equally pedantic as you.

[-] TheTechnician27@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Okay, at this point, I'm convinced no one in here has even a bare minimum understanding of machine learning. This isn't a pedantic prescriptivism thing:

  1. "Machine learning" is a major branch of AI. That's just what it is. Literally every paper and every book ever published on the subject will tell you that. Go to the Wikipedia page right now: "Machine learning (ML) is a field of study in artificial intelligence". The other type of AI of course means that the machine can't learn and thus a human has to explicitly program everything; for example, video game AI usually doesn't learn. Being uninformed is fine; being wrong is fine. There's calling out pedantry ("reee you called this non-Hemiptera insect a bug") and then there's rendering your words immune to criticism under a flimsy excuse that language has changed to be exactly what you want it to be.

  2. Transformers, used in things like GPTs, are a type of machine learning. So even if you say that "AI is just generative AI like LLMs", then, uh... Those are still machine learning. The 'P' in GPT literally stands for "pretrained", indicating it's already done the learning part of machine learning. OP's statement literally self-contradicts.

  3. Meanwhile, deep learning (DNNs, CNNs, RNNs, transformers, etc.) is a branch of machine learning (likewise with every paper, every book, Wikipedia ("Deep learning is a subset of machine learning that focuses on [...]"), etc.) wherein the model identifies its own features instead of the human needing to supply them. Notably, the kind of vision detection the original commenter is talking about is deep learning like a transformer model is. So "AI when they mean machine learning" by their own standard that we need to be specific should be "AI when they mean deep learning".

The reason "AI" is used all the time to refer to things like LLMs etc. is because generative AI is a type of AI. Just like "cars" are used all the time to refer to "sedans". To be productive about this: for anyone who wants to delve (heh) further into it, Goodfellow et al. have a great 2016 textbook on deep learning*. In a bit of extremely unfortunate timing, transformer models were described in a 2017 paper, so they aren't included (generative AI still is), but it gives you the framework you need to understand transformers (GPTs, BERTs). After Goodfellow et al., just reading Google's original 2017 paper gives you sufficient context for transformer models.

*Goodfellow et al.'s first five chapters cover traditional ML models so you're not 100% lost, and Sci-Kit Learn in Python can help you use these traditional ML techniques to see what they're like.
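
For a quick taste of what those traditional techniques look like, here's a minimal scikit-learn sketch (my own illustration, not from the book; the dataset and classifier are arbitrary choices):

  # "Traditional" (non-deep) machine learning: small tabular data with
  # hand-chosen features, and a model that does no feature learning.
  from sklearn.datasets import load_iris
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  model = RandomForestClassifier(n_estimators=100, random_state=0)
  model.fit(X_train, y_train)  # the "learning" in machine learning

  print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))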


Edit: TL;DR: You can't just weasel your way into a position where "AI is all the bad stuff and machine learning is all the good stuff" under the guise of linguistic relativism.

[-] petrol_sniff_king 5 points 1 month ago

Edit: TL;DR: You can't just weasel your way into a position where "AI is all the bad stuff and machine learning is all the good stuff" under the guise of linguistic relativism.

You can, actually, because the inverse is exactly what marketers are vying for: AI, a term with immense baggage, is easier for laymen to recognize, and it implies a hell of a lot more than it actually delivers. It intentionally leans on the very cool futurism of AI to sell itself as the next evolutionary stage of human society, and so it has consumed all conversation about AI entirely. It is Hannibal Lecter wearing the skin of decades of sci-fi movies.

"Machine learning" is not a term used by sycophants (as often), and so infers different things about the person saying it. For one, they may have actually seen a college with their eyes.

So, you seem to be implying there isn't a difference, but there is: people who suck say one, and people who don't say the other. No amount of academic rigor can sidestep this problem.

[-] TheTechnician27@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

Quite the opposite: I recognize there's a difference, and it horrifies me that corporations spin AI as something you – "you" meaning the general public who don't understand how to use it – should put your trust in. It similarly horrifies me that in an attempt to push back on this, people will jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed, because like the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it's worse than you think it is. I would actually prefer specificity when we're talking about AI models; that's why instead of "AI slop", I use "LLM slop" for text and, well, unfortunately, literally nobody in casual conversation knows what other foundation models or their acronyms are, so sometimes I just have to call it "AI slop" (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket "AI".

By trying to incorrectly differentiate "AI" from "machine learning", we're giving dishonest corporations more power by implying that only now do we truly have "artificial intelligence" and that everything that came before was merely "machine learning". By muddling what's actually a very straightforward hierarchy of terms (as opposed to a murky, nonsensical dichotomy of "AI is anything I don't like, and ML is anything I do"), we're misinforming the public and making the problem worse. By showing that "AI" is just a very general field that GPTs live inside, we reduce the power of "AI" as a marketing buzzword.

[-] Hotzilla@sopuli.xyz 3 points 1 month ago* (last edited 1 month ago)

These new LLMs and vision models have their place in the software stack. They enable some solutions that were nearly impossible in the past (mandatory xkcd ref: https://xkcd.com/1425/ ; that is now a trivial task).

Traditional ML works very well on large datasets and numbers, but it is poor at handling text. LLMs, on the other hand, are shit with large data and numbers, but they are good at handling small amounts of text. It is a tool, and properly used a very powerful one. And it is not a magic bullet.

One easy example from a real-world requirement: you have five paragraphs of human-written text, and you need to summarize it into a header automatically. Five years ago, if some project owner had requested this feature, I would have said string.substring(100), live with it. Now it is pretty much one line of code.
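
Something like this, say, with the OpenAI Python client (an assumption on my part; any hosted or local LLM with a chat endpoint works the same way):

  # The "one line" is the API call; the rest is plumbing.
  # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
  from openai import OpenAI

  client = OpenAI()

  def headline(text: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": "Summarize the following text as a short header."},
              {"role": "user", "content": text},
          ],
      )
      return resp.choices[0].message.content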

[-] TheTechnician27@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

Even though I understand your sentiment that different types of AI tools have their place, I'm going to try clarifying some points here. LLMs are machine learning models; the 'P' in 'GPT' – "pretrained" – refers to how it's already done some learning. Transformer models (GPTs, BERTs, etc.) are a type of deep learning, which is a branch of machine learning, which is a field of artificial intelligence. (edit: so for a specific example of how this looks nested: AI > ML > DL > Transformer architecture > GPT > ChatGPT > ChatGPT 4.0.) The kind of "vision guided industrial robot movement" the original commenter mentions is a type of deep learning (so they're correct that it's machine learning, but incorrect that it's not AI). At this point, it's downright plausible that the tool they're describing uses a transformer model instead of traditional deep learning like a CNN or RNN.

I don't entirely understand your assertion that "LLMs are shit with large data and numbers", because LLMs work with the largest datasets in human history. If you mean you can't feed a large, structured dataset into ChatGPT and expect it to categorize new information from that dataset, then sure, because: 1) it's pretrained, not a blank slate that specializes on the new data you give it, and 2) it takes the data in as plaintext rather than in a structured format. If you took a transformer model and trained it on the "large data and numbers", it would work better than traditional ML. Non-transformer machine learning models do work with text data; LSTMs (a type of RNN) do exactly this. The problem is that they're just way too inefficient computationally to scale well to training on gargantuan datasets (and consequently don't generate text well, if you want to use them for generation and not just categorization). In general, transformer models do literally everything better than traditional machine learning models (unless you're doing binary classification on data that is always linearly separable, in which case the perceptron reigns supreme /s). Generally, though, yes: if you're using "LLMs" to do things like image recognition or taking in large datasets for classification, what you probably have isn't just an LLM; it's a series of transformer models working in unison, one of which is an LLM.


Edit: When I mentioned LSTMs, I should clarify this isn't just text data: RNNs (which LSTMs are a type of) are designed to work on pieces of data which don't have a definite length, e.g. a text article, an audio clip, and so forth. The description of the transformer architecture in 2017 catalyzed generative AI so rapidly because it could train so efficiently on data not of a fixed size and then spit out data not of a fixed size. That is: like an RNN, the input data is not of a fixed size, and the transformed output data is not of a fixed size. Unlike an RNN, the data processing is vastly more efficient in a transformer because it can make great use of parallelization. RNNs were our main tool for taking in variable-length, unstructured data and categorizing it (or generating something new from it; these processes are more similar than you'd think), and since that describes most data, suddenly all data was trivially up for grabs.
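
To make the variable-length-in, variable-length-out point concrete, a minimal sketch (assuming the Hugging Face transformers library; the model choice is arbitrary):

  # Variable-length text in, variable-length text out: the workflow that
  # transformers made cheap. Assumes the Hugging Face `transformers` library.
  from transformers import pipeline

  summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

  article = open("article.txt").read()  # any length up to the model's context window
  result = summarizer(article, max_length=60, min_length=10)
  print(result[0]["summary_text"])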

[-] sturger@sh.itjust.works 3 points 1 month ago

Now it is pretty much one line of code.

… and 5kW of GPU time. 😆

[-] TheObviousSolution@lemm.ee 11 points 1 month ago

Using AI tells people they shouldn't care about your IP, because you clearly don't care about theirs when it passes through the AI lens.

[-] 2xsaiko@discuss.tchncs.de 4 points 1 month ago

Stop making using AI sound based

[-] gmtom@lemmy.world 9 points 1 month ago* (last edited 1 month ago)

Cool. My work for my company with AI for medical scans has detected thousands upon thousands of tumors and respiratory diseases, long before even the most well-trained doctor could have spotted them, and as a result has saved many of those people's lives. But it's good to know we're all just lazy pieces of shit because we use AI.

[-] jjjalljs@ttrpg.network 17 points 1 month ago

Assuming what you're describing works (and I have no particular reason to doubt it, beyond the generally poor reputation of AI), that's a different beast from "lol, I fired all the copywriters, artists, and support staff so I, the owner, could keep more profits for myself!" Or, "I didn't pay attention in English 101 and don't know how to write, so I'll have expensive autosuggest do it for me."

[-] racketlauncher831@lemmy.ml 9 points 1 month ago

Machine learning is not artificial intelligence.

[-] JandroDelSol@lemmy.world 6 points 1 month ago

When people talk about "AI" nowadays, they're usually talking about LLMs and other generative AI, especially if it's used to replace workers or human effort. Analytical AI is perfectly valid and is a wonderful tool!

[-] zebidiah@lemmy.ca 7 points 1 month ago

I use AI every day in my daily work: it writes my emails, performance reviews, project updates, etc.

.....and yeah, that checks out!

[-] Zacryon@feddit.org 5 points 1 month ago

LLMs != AI.
LLMs are a strict subset of AI.

Please be a bit more specific about what you hate about the wide field of AI. Otherwise it's almost like saying you hate computers because they can run applications you don't like.

[-] skulkbane@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

I used to work in a software architecture team that used AI to write retrospectives and upcoming project plans, and everything needed to have a positive spin that sounded good but meant nothing.

Extra funny when I found out people were using AI to summarize it. So the comical cycle of bullet points to text and back again is real.

I'd had enough of working at the company when my team was cutting corners to reach the deadline on the new "fantastic" platform, something that will not be used by anyone... and it's being built for the explicit purpose of making a better development and working environment.

[-] LMurch@thelemmy.club 4 points 1 month ago

Sounds like Brian can't figure out AI.

[-] Numuruzero@lemmy.dbzer0.com 13 points 1 month ago

Wrong /c/ my guy

[-] skisnow@lemmy.ca 6 points 1 month ago

Did you forget a /s ?

[-] null_dot@lemmy.dbzer0.com 3 points 1 month ago

He's saying that the businesses he's interacting with can't.

[-] GoodOleAmerika@lemmy.world 4 points 1 month ago

I use AI as a tool. AI should be a tool to help with a job, not to take jobs, same as a calculator. Yep, people will be able to code faster with AI's help, so that might mean less demand, at least in IT. But you still gotta know the exact prompt to ask.

[-] felykiosa@sh.itjust.works 3 points 1 month ago

I have to disagree with that one, but not completely. It really depends on what type of company I'm interacting with. Is it an independent small company or a big corp? Also, what type of AI (generating pictures, generating summaries, etc.), and is the application a good fit or not? For example, if a small business generates a logo or a picture: is the style of the picture right, or is it the same as everyone else's? Did anyone check whether the image was correct? But big corps, yeah, they can go fuck themselves; they have the budget to pay artists.

[-] Xatolos@reddthat.com 3 points 1 month ago

So... about his AI-generated picture beside his name...

[-] lorty@lemmy.ml 12 points 1 month ago

Just because it's generic doesn't mean it's AI-generated.

[-] ameancow@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

I've been using a cartoonish profile picture for my work emails, Teams portrait, and other communications for many years. There is almost no way to tell that kind of icon apart from AI-generated icons at that size anyway.

And even if it were, that's not the point of the conversation. Fixating on that is such bad faith that it betrays a defensiveness about AI-generated content, so it's particularly important that someone like you get this message. Let me reiterate clearly:

I have a role of responsibility: I hire people and use company budget to make decisions about the companies and products we'll be paying for. When making these decisions, I don't look at people's email signatures or the icons they use. I look at their presentation materials, and if that shit is AI-generated, I know immediately that it's just a couple of people pretending to be an agency or company, or some company that doesn't quality-control its slides and presentation decks. It shows laziness. I would rather go with a company that has data and specs than one that leans on graphics anyway, so if those graphics are also lazy AF, that's a hard pass. Not my first rodeo; I've learned to listen to experience.

[-] zerofk@lemm.ee 3 points 1 month ago

Ironically, an LLM could’ve made his post grammatically correct and understandable.

[-] ameancow@lemmy.world 8 points 1 month ago* (last edited 1 month ago)

If you had a hard time understanding the point being made in that post, you could probably be replaced by AI and we wouldn't notice the difference.

[-] Strawberry 5 points 1 month ago

His post is fairly grammatically correct and quite understandable.

this post was submitted on 14 May 2025
950 points (100.0% liked)

Fuck AI
