AI is the future (lemmy.world)
[-] kbal@fedia.io 94 points 4 months ago

Looks like it's learned that adding "according to Quora" makes it look more authoritative. Maybe with a few more weeks of training it'll figure out how to make fake citations of sources that are actually trustworthy.

[-] atro_city@fedia.io 46 points 4 months ago

Just wait until it starts taking stuff from 4chan, twitch, and twitter. Things are going to become so much more interesting.

[-] sp3tr4l@lemmy.zip 6 points 4 months ago

Google signing a contract with 4chan for data training is actually so stupid I don't think it'll ever happen.

4chan is almost certainly blacklisted from basically everything AI-related, given the site's content and history of intentionally destroying chatbots/earlier 'AIs'.

[-] Magnetic_dud@discuss.tchncs.de 6 points 4 months ago

But at the same time they paid reddit millions to train on "authoritative" posts like that one from "fuckSmith" that suggested adding glue to pizza.

[-] TheFool@infosec.pub 18 points 4 months ago

As @Karyoplasma@discuss.tchncs.de pointed out, this is an actual answer on Quora so at least it got that right

[-] danc4498@lemmy.world 8 points 4 months ago

I think it’s also a way of shifting the blame.

[-] aido@lemmy.world 90 points 4 months ago* (last edited 4 months ago)

For some reason I don't have AI search on my account, but I still get the same answer:

[-] Fisch@discuss.tchncs.de 85 points 4 months ago

That's probably a real answer from someone on Quora then

[-] bstix@feddit.dk 65 points 4 months ago

What's the point of having an AI run the search and present the answer it found, when you just ran the search yourself and get the AI's finding presented back to you?

At this point AI helpers are just a layer that hides the details from the original search. It's useless for this. AI is wonderful for lots of stuff, but this just isn't it. I used to laugh when people used the Google search box to find Google so they could search in Google, but that is exactly what AI is doing for us now.

[-] nikita@sh.itjust.works 17 points 4 months ago

Plus the insane power consumption for such a marginally useful feature. Especially given that it’s on by default for everyone using google (as I understand)

It’s almost like the feature is not ready but they need to show off to their investors anyway. At the cost of user experience and the environment.

At least with ChatGPT you have to consciously go to their website and use it, rather than it being the first result of a fucking internet search.

[-] RecallMadness@lemmy.nz 3 points 4 months ago

More eyes on your website means fewer on other websites, making your adverts more valuable.

And when it doesn’t work, it doesn’t matter, because you run the advertising on the other websites too. Bonus: you can penalise rankings for websites that don’t use your advertising network.

[-] AFKBRBChocolate@lemmy.world 2 points 4 months ago

Was having a related conversation with an employee this morning (I manage a software engineering organization). He asked an LLM how to separate the parts of a date in Excel, and got a pretty good explanation of how to do it with the text to columns wizard, and also how to use a formula to get each part. He was happy because he felt it would have taken him much longer to figure it out himself.
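
I didn't keep his exact formulas, but presumably the formula route was the standard date functions, something like this with the date in A1:

=YEAR(A1) returns the year, e.g. 2024
=MONTH(A1) returns the month number, e.g. 5
=DAY(A1) returns the day of the month, e.g. 26

Each returns a plain number, so the parts can go straight into further calculations.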

I was saying I thought that was a good use of an LLM - it's going to give a tailored answer - but my worry is that people will do less scrubbing of an answer coming from an AI than one they saw on a forum. I said we should think of it like a tailored Google search.

For comparison, I googled "Excel formula separate parts of a date" and one of the top results was a forum discussion that had the exact solutions the LLM gave, using the same examples. On the one hand, to get it from the forum you had to wade through all the wrong answers and discussions. On the other hand, that discussion puts the answer in the context of a bunch of others that are off the mark, and I think makes people less likely to assume it's correct.

In any case, it's still just synthesizing from or regurgitating training data.

[-] bstix@feddit.dk 2 points 4 months ago

I think LLMs are better for more fluffy stuff, like writing speeches etc.

Excel solutions are often very specific. A vague question like separating a date can be solved in many ways, using a variety of formulas, the text-to-columns wizard, VBA, import queries or even just formatting, all depending on what you really need, what the input is, what locale is used, and other things.

The text-to-columns method is great, because it transforms whatever the input is into a date type, making it possible to treat it as an actual date and do calculations with it. It's not always the right solution though, for instance if the input is ambiguous.
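
For example (making up the input here): if the input is text in a known fixed format, say 26/05/2024 in A1, the specific answer is to build the date explicitly instead of hoping the wizard guesses the locale correctly:

=DATE(RIGHT(A1,4), MID(A1,4,2), LEFT(A1,2))

DATE(year, month, day) doesn't care about locale at all, which is exactly why knowing what the input actually is matters.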

It's fine that he learned to use this method, but I wonder what he'd ask the LLM in a case where it isn't the right solution and what it'll come up with then. He didn't actually learn to separate a date from the input. He learned to use the text import wizard.

In my experience it's preferable to learn these things on a more basic level, if only to be able to search more specifically for the right answer, because there is a specific answer. Having a language model run through a bunch of solutions and present the most popular one might just be a waste of time and lead you on a wild goose chase.

[-] AFKBRBChocolate@lemmy.world 2 points 4 months ago

You might have missed where I said it explained both the text to columns wizard and a formula. He used the formula, which is what he was looking for. He's a top notch software developer, he just doesn't use Excel much.

But I agree with your broader point. I keep having to remind people that the "LM" part is for "language model." It's not figuring anything out, it's distilling what an answer should look like. A great example is to ask one for a mathematical proof that isn't commonly found online - maybe something novel. In all likelihood, it's going to give you one, and it will probably look like the right kind of stuff, but it will also probably be wrong. It doesn't know math (it doesn't know anything), it just has a model of what a response should look like.

That being said, they're pretty good for a number of things. One great example is lesson plans. From what I understand, most teachers now give an LLM the coursework and ask it to generate a lesson plan. Apparently they do an excellent job and save many hours of work. Anything that involves summarizing information is good, especially as that constrains the training data.

[-] NaiveBayesian@programming.dev 16 points 4 months ago

Most likely an answer written by another AI directly on Quora then

[-] moriquende@lemmy.world 8 points 4 months ago

So many fruits in the berrum family, can't believe they even had to google that question...

[-] mojo_raisin@lemmy.world 3 points 4 months ago

I love schnozzberrum

[-] Karyoplasma@discuss.tchncs.de 63 points 4 months ago
[-] SlopppyEngineer@lemmy.world 36 points 4 months ago

"If it's on the internet it must be true" implemented in a billion dollar project.

[-] lugal@sopuli.xyz 5 points 4 months ago

Not sure what would frighten me more: the fact that this is training data, or if it was hallucinated

[-] EpeeGnome@lemm.ee 4 points 4 months ago* (last edited 4 months ago)

Neither, in this case it's an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!

[-] xavier666@lemm.ee 4 points 4 months ago

Pretty sure AI will start telling us "You should not believe everything you see on the internet as told by Abraham Lincoln"

[-] kate@lemmy.uhhoh.com 2 points 4 months ago

Can’t even rly blame the AI at that point

[-] TheFriar@lemm.ee 12 points 4 months ago

Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information…well, seems like the blame falls exactly at the feet of the AI.

[-] kate@lemmy.uhhoh.com 6 points 4 months ago

Should an LLM try to distinguish satire? Half of lemmy users can’t even do that

[-] KevonLooney@lemm.ee 9 points 4 months ago

Do you just take what people say on here as fact? That's the problem, people are taking LLM results as fact.

[-] BakerBagel@midwest.social 4 points 4 months ago

It should if you are gonna feed it satire to learn from

[-] xavier666@lemm.ee 2 points 4 months ago

Sarcasm detection is a very hard problem in NLP to be fair

[-] Mrkawfee@lemmy.world 41 points 4 months ago

There goes my making shit up job.

[-] ekky@sopuli.xyz 15 points 4 months ago

Now LLMs are even taking the jobs of professional trolls! What's gonna be next? The scambots losing their jobs to LLMs?!

[-] ares35@kbin.social 5 points 4 months ago

the scammers are already using 'ai'

[-] captain_aggravated@sh.itjust.works 21 points 4 months ago

I'll allow that one because it said "According to Quora" so you knew to ignore it.

[-] thefrankring@lemmy.world 20 points 4 months ago

AI is just very creative, ok?

[-] Treczoks@lemmy.world 19 points 4 months ago

Looks like AI is lots and lots of "artificial" and close to nothing in the area of "intelligence".

[-] ares35@kbin.social 5 points 4 months ago

as real as artificial cheese.

[-] cmgvd3lw@discuss.tchncs.de 15 points 4 months ago

Coconut um!

[-] yamapikariya@lemmyfi.com 13 points 4 months ago

I for one am enjoying this AI thing at Google. I haven't had that many laughs from just searching for things.

[-] uranibaba@lemmy.world 11 points 4 months ago
[-] Sebbe@lemmy.sebbem.se 10 points 4 months ago

Everything ends with um in Latin!

[-] BakerBagel@midwest.social 3 points 4 months ago

So why does everything end with a vowel in modern Italian?

[-] basxto@discuss.tchncs.de 3 points 4 months ago

Hoc casu non est ("not in this case")

[-] JohnSmith@feddit.uk 2 points 4 months ago

Latinum. Fixed that for you.

[-] puchaczyk 6 points 4 months ago

I love how it just gave up on coconut

[-] maxenmajs@lemmy.world 4 points 4 months ago

Always trust user input. Surely the AI will figure it out.

[-] TrickDacy@lemmy.world 3 points 4 months ago

Wow weird. Found one of these that is not a lie

[-] bruhduh@lemmy.world 2 points 4 months ago
[-] AFC1886VCC@reddthat.com 3 points 4 months ago

C O C A I N U M

this post was submitted on 26 May 2024
839 points (100.0% liked)
