rule (lemmy.dbzer0.com)
[-] DigitalAudio@sopuli.xyz 4 points 15 hours ago

The problem is that you do need to keep training models for this to make sense.

And you always need at least some human curation of what goes into a model; otherwise it will just say whatever, learn from its own output, and degrade over time. That curation can't be done by other AIs, so for now you still need humans to make sure the models are actually being fed useful information.
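To make the "learns from itself and degrades" part concrete, here's a toy sketch (just an illustration I made up with NumPy, nothing to do with real training pipelines): fit a Gaussian to samples drawn from the previous generation's fit, over and over. On average the fitted spread shrinks and the distribution drifts away from the original data, which is the same feedback loop in miniature.

```python
# Toy "model collapse": each generation is trained only on data generated by the
# previous generation's model, never on the original data.
import numpy as np

rng = np.random.default_rng(42)
mean, std = 0.0, 1.0  # generation 0: the "real" data distribution

for gen in range(1, 31):
    samples = rng.normal(mean, std, size=30)   # the model generates its own training data
    mean, std = samples.mean(), samples.std()  # the next model learns only from that data
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mean:+.3f}  std={std:.3f}")
```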

The problem with this, which many have already pointed out, is that it makes AIs just as unreliable as any traditional media. But if you don't oversee their datasets at all and just let them learn from everything, then they're even more useless, basically just replicating social media bullshit, which nowadays is like at least 60% AI-generated anyway.

So yeah, the current model is, not surprisingly, completely unsustainable.

The technology itself is great though. Imagine having an AI that you can easily train at home on 100s of different academic papers, and then run specific analyses or find patterns that would be too big for humans to see at first. Also imagine the impact on the medical field: early cancer detection, virus spreading patterns, or even DNA analysis for certain diseases.
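As a rough sketch of what the home-scale version of that looks like today, here's about the simplest possible setup: embed paper abstracts with a small local model and search them semantically. This is just my illustration, not a recipe from the comment; the sentence-transformers library, the model name, and the toy abstracts are all my own assumptions, and actually fine-tuning a model on your own papers would be a much bigger job.

```python
# Minimal local "ask questions across a pile of papers" sketch using embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs on a laptop

abstracts = [  # stand-ins for the abstracts of papers you have on disk
    "We study early detection of lung cancer from CT scans using deep learning.",
    "A survey of transformer architectures for genomic sequence analysis.",
    "Statistical methods for modelling the spread of respiratory viruses.",
]

query = "DNA analysis for disease risk"
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(abstracts, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_embs)[0]  # cosine similarity to each abstract
for score, abstract in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```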

It's also super good if used for creative purposes (not just for generating pictures or music). For example, AI makes it possible for you to sing a song, then sing the melody for every member of a choir, and fine-tune each voice to make it unique. You can be your own choir, which makes a lot of cool production techniques more accessible.
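You don't even need an AI voice model to get a taste of that stacking idea. Here's a crude, purely conventional approximation (plain pitch shifting with librosa, not the AI voice conversion the comment is about): layer a few slightly detuned copies of one vocal take. The file name and detune amounts are made up.

```python
# Crude "one-person choir": mix slightly detuned copies of a single vocal take.
# Plain DSP, not an AI voice model.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("lead_vocal.wav", sr=None, mono=True)  # hypothetical input file

detunes = [-0.3, -0.15, 0.0, 0.15, 0.3]  # semitone offsets, one per "singer"
voices = [librosa.effects.pitch_shift(y, sr=sr, n_steps=d) for d in detunes]

choir = np.mean(voices, axis=0)  # mix the copies down to one track
sf.write("choir.wav", choir, sr)
```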

I believe once the initial hype dies down, we stop seeing AI used as a cheap marketing tactic, and the bubble bursts, the real benefits of AI will become apparent, and hopefully we will learn to live with it without destroying each other lol.

[-] WoodScientist@lemmy.world 9 points 14 hours ago

The technology itself is great though. Imagine having an AI that you can easily train at home on 100s of different academic papers, and then run specific analyses or find patterns that would be too big for humans to see at first.

Imagine is the key word. I've actually tried to use LLMs to perform literature analyses in my field, and they're total crap. They produce something that sounds true to someone not familiar with the field, but if you actually have some expert knowledge, the LLM just completely falls apart. Imagine is all you can do, because LLMs cannot perform basic literature review and project planning, let alone find patterns in papers that human scientists can't. The emperor has no clothes.

[-] DigitalAudio@sopuli.xyz 2 points 9 hours ago

But I don't think that's necessarily a problem that can't be solved. LLMs and the like are ultimately just statistical analysis, and if you refine and train them enough, they can absolutely summarise at least a single paper right now. Google's Notebook LM is already capable of that; I just don't think it can quite pull off many papers at once yet. But the current state of LLMs is not that far off.
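For what it's worth, single-document summarisation is already pretty routine even with small local models. A minimal sketch (the model choice and the stand-in text are my assumptions, and a real paper would have to be summarised in chunks because this model only accepts roughly 1,024 tokens at a time):

```python
# Minimal single-document summarisation with a small local model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

section = (  # stand-in for one section of a paper
    "Large language models are trained on web-scale text and can be adapted to "
    "downstream tasks with little labelled data. We evaluate their reliability "
    "on domain-specific literature review and find frequent unsupported claims."
)

print(summarizer(section, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```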

I agree that AI is way overhyped, and I also have a general dislike for it because of the way it's being used, the people who gush over it, and the surrounding culture. But I don't think that means we should simply ignore reality altogether. The LLMs from two or even one year ago are not comparable to the ones today, and that trend will probably keep going for a while. The main issue lies with the ethics of training, copyright, and of course the replacement of labor in exchange for what amounts to simply a cool tool.
