top 50 comments
[-] queermunist@lemmy.ml 83 points 2 weeks ago

AI isn't saving the world lol

[-] spongebue@lemmy.world 21 points 2 weeks ago* (last edited 2 weeks ago)

Machine learning has some pretty cool potential in certain areas, especially in the medical field. Unfortunately the predominant use of it now is slop produced by copyright laundering shoved down our throats by every techbro hoping they'll be the next big thing.

[-] ryathal@sh.itjust.works 35 points 2 weeks ago

Both are happening. Samples of casual writing are more valuable than research papers for generating an article, though.

[-] FaceDeer@fedia.io 10 points 2 weeks ago

Yeah. Scientific papers may teach an AI about science, but Reddit posts teach AI how to interact with people and "talk" to them. Both are valuable.

[-] geekwithsoul@lemm.ee 11 points 2 weeks ago

Hopefully not too pedantic, but no one is “teaching” AI anything. They’re just feeding it data in the hopes that it can learn probabilities for certain types of output. It “understands” neither the Reddit post nor the scientific paper.

[-] hoshikarakitaridia@lemmy.world 4 points 2 weeks ago* (last edited 2 weeks ago)

This might be a wild take but people always make AI out to be way more primitive than it is.

Yes, in its most basic form an LLM can be described as an auto-complete for conversations. But let's be real: the amount of different optimizations and adjustments made before and after the fact is pretty complex, and the way the AI works is already pretty close to a brain. Hell, that's where we started out: emulating a brain. And you can look into this; the basis for AI is usually neural networks, which learn to give specific parts of an input a specific amount of weight when generating the output. And when the output is not what we want, the AI slowly adjusts those weights to get closer.

Our brain works the same way in its most basic form. We use electric signals and we think in associative patterns. When an electric signal enters one node, that node is connected via stronger or weaker bridges to different nodes, forming our associations. Those bridges are exactly what we emulate when we use nodes with weighted connections in artificial neural networks.
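
To make the weighted-connections picture concrete, here is a minimal sketch (plain NumPy, a single toy neuron, not how any production model is actually implemented) of weights being nudged whenever the output misses the target:

```python
import numpy as np

# Toy "neuron": a weighted sum of inputs. Training nudges the weights
# whenever the output misses the target -- the "bridges getting stronger
# or weaker" idea from above, stripped down to one unit.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)            # one weight per incoming connection
inputs = np.array([0.5, -1.0, 2.0])     # an example input signal
target = 1.0                            # the output we want for this input
learning_rate = 0.1

for _ in range(100):
    output = weights @ inputs                   # weighted sum of the inputs
    error = output - target                     # how far off we are
    weights -= learning_rate * error * inputs   # nudge each weight to reduce the error

print(weights, weights @ inputs)                # second value is now ~1.0
```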

Quality-wise, AI output is pretty good right now, but integrity- and security-wise it's pretty bad (hallucinations, not following prompts, etc.). Saying it performs at the level of a three-year-old is simultaneously under-selling and over-selling how AI performs. We should be aware that just because it's AI doesn't mean it's good, but it also doesn't mean it's bad either. It just means there's a feature (which is hopefully optional), and then we can decide if it's helpful or not.

I do music production and I need cover art. As a student, I can't afford to commission good artwork every now and then, so AI is the way to go, and it's been nailing it.

As a software developer, I've come to appreciate that after about two years of bad code-completion AIs, there's finally one that is a net positive for me.

AI is just like anything else: it's a tool that brings change. How that change manifests depends on us as a collective. Let's punish bad or dangerous AI (Copilot, Tesla self-driving, etc.), let's promote good AI (Gmail text completion, ChatGPT, code completion, image generators), and let's also realize that the best things we can get out of AI won't hit the ceiling of human products for a while. But if it costs too much, or you need quick pointers, at least you know where to start.

[-] geekwithsoul@lemm.ee 3 points 2 weeks ago

This shows so many gross misconceptions, stated with such utter conviction, that I'm not even sure where to start. And since you seem to have decided you like getting free stuff that is the result of AI trained on the work of others without them receiving any compensation, nothing I say will likely change your opinion, because you have an emotional stake in not acknowledging the problems of AI.

[-] ImplyingImplications@lemmy.ca 24 points 2 weeks ago

Because AI needs a lot of training data to reliably generate something appropriate. It's easier to get millions of reddit posts than millions of research papers.

Even then, LLMs simply generate text but have no idea what the text means. They just know which words have a high probability of matching the expected response. They don't check that what was generated is factual.
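
As a rough illustration of "high probability of matching the expected response": at generation time a model just samples the next word from a probability distribution over its vocabulary. The numbers below are made up for illustration; nothing in this sketch checks whether the completion is true:

```python
import random

# Hypothetical next-word probabilities for one context. A real model
# computes these from billions of learned weights, but the generation
# step is still "pick a likely next word", not "check the facts".
context = "The capital of France is"
next_word_probs = {"Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03}

words, weights = zip(*next_word_probs.items())
completion = random.choices(words, weights=weights)[0]
print(context, completion)
```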

load more comments (2 replies)
[-] TheOubliette@lemmy.ml 23 points 2 weeks ago

"AI" is a parlor trick. Very impressive at first, then you realize there isn't much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an "AI" that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).

Let's say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, "what is the relationship between sugar and obesity?". What will LLMs do when you ask this question? Well, they will just attempt to make associations and to construct reasonable-sounding sentences based on their set of research articles. They might even just take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism. But they won't be able to actually explain the underlying mechanisms, and they will fall flat on their face when trying to discern nonsense funded by food lobbies from critical research. LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry. But they will be unable to actually criticize the poor work or provide a summary of the relationship between sugar and obesity based on any actual understanding that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). They can only copy and mimic.
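
For reference, here is a minimal sketch of the kind of word-level Markov chain the comment compares LLMs to (toy corpus, purely illustrative): it learns which word follows which and can only recombine phrasing it has already seen.

```python
import random
from collections import defaultdict

# Build a word-level Markov chain from a tiny toy corpus, then generate
# text by walking the learned transitions. It can only recombine
# phrasing it has already seen -- copy and mimic, no understanding.
corpus = (
    "sugar intake is associated with obesity in observational studies "
    "and obesity is associated with many other dietary factors"
).split()

transitions = defaultdict(list)
for current_word, following_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(following_word)

word = "sugar"
generated = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    generated.append(word)

print(" ".join(generated))
```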

[-] Brahvim@lemmy.kde.social 3 points 2 weeks ago

They might even just take an actual sentence from an article and reframe it a little

That's the case for many things that can be answered via Stack Overflow searches. Even the order in which GPT-4o brings up points is exactly the same as in SO answers or comments.

[-] TheOubliette@lemmy.ml 3 points 2 weeks ago

Yeah, it's actually one of the ways I caught a previous manager using AI for their own writing (things that should not have been done with AI). They were supposed to write about something in a hyper-specific field, and an entire paragraph ended up just being a rewording of one of the two (third-party) web pages that discuss this topic directly.

[-] howrar@lemmy.ca 2 points 2 weeks ago* (last edited 2 weeks ago)

Why does everyone keep calling them Markov chains? They're missing ~~all the required properties, including~~ the eponymous Markovian property. Wouldn't it be more correct to call them stochastic processes?

Edit: Correction, turns out the only difference between a stochastic process and a Markov process is the Markovian property. It's literally defined as "stochastic process but Markovian".
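
For reference, the property in question, written out (the standard textbook definition, nothing LLM-specific):

```latex
% Markov property: the next state depends only on the current state,
% not on the rest of the history.
P(X_{t+1} = x \mid X_t = x_t, X_{t-1} = x_{t-1}, \dots, X_0 = x_0)
    = P(X_{t+1} = x \mid X_t = x_t)
```

Whether an LLM satisfies it depends on what you treat as the "state".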

[-] TheOubliette@lemmy.ml 3 points 2 weeks ago

Because it's close enough. Turn off beam search and redefine your state space, and the property holds.

[-] howrar@lemmy.ca 4 points 2 weeks ago

Why settle for good enough when you have a term that is both actually correct and more widely understood?

load more comments (10 replies)
load more comments (6 replies)
[-] howrar@lemmy.ca 23 points 2 weeks ago

I find it amusing that everyone is answering the question with the assumption that the premise of OP's question is correct. You're all hallucinating the same way that an LLM would. 

LLMs are rarely trained on a single source of data exclusively. All the big ones you find will have been trained on a huge dataset including Reddit, research papers, books, letters, government documents, Wikipedia, GitHub, and much more. 

Example datasets:

[-] andrewta@lemmy.world 6 points 2 weeks ago

Rules of lemmy

Ignore facts; don’t do research to see if the comment/post is correct; don’t look at other comments to see if anyone else has corrected the post/comment already; there is only one right side (and that is the side of the loudest group).

[-] Rampsquatch@sh.itjust.works 21 points 2 weeks ago

You could feed all the research papers in the world to an LLM and it will still have zero understanding of what you trained it on. It will still make shit up, it can't save the world.

[-] SteposVenzny@beehaw.org 17 points 2 weeks ago

Training it on research papers wouldn’t make it smarter, it would just make it better at mimicking their writing style.

Don’t fall for the hype.

[-] Trainguyrom@reddthat.com 14 points 2 weeks ago

Short answer: they already are

Slightly longer answer: GPT models like ChatGPT are part of an experiment in "if we train the AI model on shedloads of data, does it make a more powerful AI model?" After OpenAI made such big waves, every company is copying them, including trying to train models similar to ChatGPT rather than trying to innovate and do more.

Even longer answer: There's tons of different AI models out there for doing tons of different things. Just look at the over 1 million models on Hugging Face (a company which operates as a repository for AI models among other services) and look at all of the different types of models you can filter for on the left.

Training an image generation model on research papers probably would make it a lot worse at generating pictures of cats, but training a model that you want to either generate or process research papers on existing research papers would probably make a very high quality model for either goal.

More to your point, there are some neat, very targeted models with smaller training sets out there, like Microsoft's Phi-3 model, which is primarily trained on textbooks.

As for saving the world, I'm curious what you mean by that exactly? These generative text models are great at generating text similar to their training data, and summarization models are great at summarizing text. But ultimately AI isn't going to save the world. Once the current hype cycle dies down, AI will be a better-known and more widely used technology, but it's just a tool in the toolbox.

load more comments (1 replies)
[-] sirico@feddit.uk 12 points 2 weeks ago

Redditors are always right, peer reviewed papers always wrong. Pretty obvious really. :D

load more comments (1 replies)
[-] Tabooki@lemmy.world 12 points 2 weeks ago

They already do that. You're being a troglodyte.

[-] Melatonin@lemmy.dbzer0.com 8 points 2 weeks ago

Hmmm. Not sure if I'm being insulted. Is that one of those fish fossils that looks kind of like a horseshoe crab?

[-] Glytch@lemmy.world 11 points 2 weeks ago

You're thinking of a trilobite

load more comments (2 replies)
[-] macabrett@lemmy.ml 11 points 2 weeks ago

editor's note: it will not save the world

[-] CanadaPlus@lemmy.sdf.org 8 points 1 week ago

They're trained on both, and the kitchen sink.

[-] tiddy@sh.itjust.works 8 points 2 weeks ago

Papers are, most importantly, documentation of exactly what procedure was performed and how; adding a vagueness filter over that is only going to destroy their value.

The real question is why we are using generative AI at all (it gets money out of idiot rich people).

[-] Even_Adder@lemmy.dbzer0.com 7 points 2 weeks ago

They're trained on technical material too.

[-] TheReturnOfPEB@reddthat.com 7 points 2 weeks ago

The Ghost of Aaron Swartz

[-] xmunk@sh.itjust.works 3 points 2 weeks ago

What he was fighting for was an awful lot more important than a tool to write your emails while causing a ginormous tech bubble.

[-] r00ty@kbin.life 6 points 2 weeks ago

Anyone running a webserver and looking at their logs will know AI is being trained on EVERYTHING. There are so many crawlers for AI that are literally ripping the internet wholesale. Reddit just got in on charging the AI companies for access to freely contributed content. For everyone else, they're just outright stealing it.

[-] HobbitFoot@thelemmy.club 6 points 2 weeks ago

Because they are looking for conversations.

[-] cobysev@lemmy.world 5 points 2 weeks ago

We are. I just read an article yesterday about how Microsoft paid research publishers so they could use the papers to train AI, with or without the consent of the papers' authors. The publishers also reduced the peer review window so they could publish papers faster and get more money from Microsoft. So... expect AI to be trained on a lot of sloppy, poorly-reviewed research papers because of corporate greed.

[-] RobotToaster@mander.xyz 5 points 2 weeks ago

Nobody wants an AI that talks like that.

load more comments (5 replies)
[-] originalucifer@moist.catsweat.com 5 points 2 weeks ago

money. there's no money in saving the world. lots of money in not saving the world.

greed will be humanity's downfall

[-] Strayce@lemmy.sdf.org 5 points 2 weeks ago* (last edited 2 weeks ago)

They are. T&F recently cut a deal with Microsoft. Without authors' consent, of course.

I'm fairly sure a few others have too, but that's the only article I could find quickly.

[-] RangerJosie@lemmy.world 4 points 2 weeks ago

Saving the world isn't profitable in the short term.

Vulture capitalists don't care about the future. They care about the immediate. Short term profitability. And nothing else.

[-] SkavarSharraddas@gehirneimer.de 3 points 2 weeks ago

How does that help disempower the fossil fuel mafia?

[-] NuXCOM_90Percent@lemmy.zip 3 points 2 weeks ago* (last edited 2 weeks ago)

Part of it is the same "human speech" aspects that have plagued NLP work over the past few years. Nobody (except the poor postdoctoral bastard who is running the paper farm for their boss) actually speaks in the same way that scholarly articles are written because... that should be obvious.

This combines with the decades of work by right wing fascists to vilify intellectuals and academia. If you have ever seen (or written) a comment that boils down to "This youtuber sounds smug" or "They are presenting their opinion as fact" then you see why people prefer "natural human speech" over actual authoritatively researched and tested statements.

And... while not all pay to publish journals are trash, I feel confident saying that most are. And filtering those can be shockingly hard by design.

But the big one? Most of the owners of the various journals are REALLY fucking litigious and will go scorched earth on anyone who is using their work (because Elsevier et al own your work) to train a model.

[-] peanuts4life 3 points 2 weeks ago

Tons of people already are. The following site is useful for searching papers using AI: https://consensus.app/

load more comments (1 replies)
[-] scottmeme@sh.itjust.works 3 points 2 weeks ago

Brain damage is cheaper than professionals

[-] Ziggurat@sh.itjust.works 2 points 2 weeks ago

Because broken English from research papers and their relatively rigid, structured style would be even worse than Reddit posts.

[-] callouscomic@lemm.ee 2 points 2 weeks ago

Most research papers are likely as valid as an average Reddit post.

Getting published is a circlejerk; papers are rarely properly tested, and hardly anyone actually reads them.

[-] lattrommi@lemmy.ml 2 points 2 weeks ago

I think I read this post wrong.

I was thinking the sentence "We could be saving the world!" meant 'we' as in humans only.

No need to be training AI. No need to do anything with AI at all. Humans simply start saving the world. Our Research Papers can train on Reddit. We cannot be training, we are saving the world. Let the Research Papers run a train on Reddit AI. Humanity Saves World.

No cynical replies please.

[-] slacktoid@lemmy.ml 2 points 2 weeks ago* (last edited 2 weeks ago)

AuroraGPT. They are trying to do it.

It's because the number of people who can read, understand, and then create the necessary dataset to train and test an LLM is very, very small for research papers, whereas pop-culture data is much easier to source.

[-] atimehoodie@lemmy.ml 2 points 2 weeks ago

Who's going to peer review that?

this post was submitted on 01 Oct 2024
103 points (100.0% liked)

Asklemmy

43603 readers
1489 users here now

A loosely moderated place to ask open-ended questions

Search asklemmy 🔍

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_A@discuss.tchncs.de~

founded 5 years ago
MODERATORS