[-] Iunnrais@lemm.ee 178 points 3 weeks ago

Just let anyone scrape it all for any reason. It’s science. Let it be free.

[-] chicken@lemmy.dbzer0.com 22 points 3 weeks ago

The OP tweet seems to be leaning pretty hard on the "AI bad" sentiment. If LLMs make academic knowledge more accessible to people, that's a good thing for the same reason that what Aaron Swartz was doing was a good thing.

[-] Ashelyn 29 points 2 weeks ago* (last edited 2 weeks ago)

On the whole, maybe LLMs do make these subjects more accessible in a way that's a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

The problem is that LLMs 'hallucinate' details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it's essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they're intelligent. They're very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.

[-] chicken@lemmy.dbzer0.com 3 points 2 weeks ago* (last edited 2 weeks ago)

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public to gain the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you're wondering about something and don't know what you don't know, or have no idea where to start looking to learn what you want to know, an LLM is an incredible resource even with caveats and limitations.

Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and ever easier to work with, so the legal barriers to such a thing existing might not be able to stop it for long in practice.
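
Here's a minimal sketch of what "answer plus citation" could look like with a fully local corpus. To be clear, everything in it is a hypothetical stand-in (the two-paper corpus, the word-overlap scoring, the citation format); a real system would use embeddings for retrieval and a local LLM to phrase the answer from the retrieved passage instead of quoting it:

```python
import re

corpus = [  # hypothetical locally stored papers
    {"source": "doi:10.0000/example.1",
     "text": "CRISPR-Cas9 cuts DNA at sites matched by its guide RNA."},
    {"source": "doi:10.0000/example.2",
     "text": "mRNA vaccines encode antigens for ribosomal translation."},
]

def tokens(s):
    """Lowercased words only, so punctuation doesn't break the overlap scoring."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def answer_with_citation(question):
    # Crude relevance: the passage sharing the most words with the question wins.
    best = max(corpus, key=lambda doc: len(tokens(question) & tokens(doc["text"])))
    return f'{best["text"]} [source: {best["source"]}]'

print(answer_with_citation("How does CRISPR cut DNA?"))
# -> CRISPR-Cas9 cuts DNA at sites matched by its guide RNA. [source: doi:10.0000/example.1]
```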

[-] Excrubulent@slrpnk.net 10 points 2 weeks ago* (last edited 2 weeks ago)

The phrase "synthesised expert knowledge" is the problem here, because apparently you don't understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity.

You're not exposing people to expert knowledge, you're exposing them to expert-sounding words that cannot be made accurate. Sometimes they're right by accident, but that is not the same thing as accuracy.

You confused what the LLM is doing for synthesis, which is something loads of people will do, and this will just lend more undue credibility to its bullshit.

[-] veniasilente@lemm.ee 5 points 2 weeks ago

Ok, but I would say that these concerns are all small potatoes compared to the potential for the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents.

If any of that was actually true, yeah. But it's not, it can't be, and it won't be.

As with all world-changing technology, "the general public" will never truly obtain its power, not until it has been well squeezed by the elites for gains. Not only that, "the general public" obtaining this power would be devastating on the simple physical principle that this kind of technology depends on ruining the ecology. And this whole "synthesized expert knowledge"... man, that's three words that mean absolutely nothing when chained together, because it's all illusion: it's not actual knowledge, it's not expert, and it's not even synthesized; at best it's emulated. It's all a tangle of lies and make-believe sold in bulk with zero accountability.

But sure, nice dream. I want a Lamborghini, too.

[-] Ashelyn 3 points 2 weeks ago* (last edited 2 weeks ago)

People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.

Don't get me wrong, I think LLMs can be useful in some scenarios, and can be a worthwhile jumping off point for someone who doesn't know where to start. My concern is with the cultural issues and expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear that the end goal is for someone to use the product as a virtual assistant endpoint for as much information (and interaction) as it's possible to shoehorn through.

Addendum: local models can help with this issue, as they're on one's own hardware, but still need to be deployed and used with reasonable expectations: that it is a fallible aggregation tool, not to be taken as an authority in any way, shape, or form.

[-] Auli@lemmy.ca 2 points 2 weeks ago

Man, the amount of rework a bash script from an LLM needs, and that's a pretty basic thing. Did it speed up the process? I think it did, but I'm not actually sure. Did it make things easier? Yes. Did I need some idea of what it was doing? Yes.

[-] umbrella@lemmy.ml 7 points 2 weeks ago

i agree, my problem is that it won't

[-] funkless_eck@sh.itjust.works 5 points 2 weeks ago* (last edited 2 weeks ago)

That would be good if they did that, but that is not the intent of the org, the purpose of the tool, or the expected or even available outcome.

It's important to remember this data is not being scraped to make it available or presentable, but to make a machine that echoes human writing ever more convincingly.

On an extremely simplified level, it doesn't want to answer 1+1=? with "2", it wants to appear like a human confidently answering an arithmetic question, even if the exchange is "1+1=?" "yes, 2+3 does equal 9"

Obviously it can handle simple sums, this is an illustrative example
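
To put that in code: here's a deliberately dumb toy predictor (hand-made hypothetical counts, nothing like a real transformer) whose only objective is to emit whatever reply most often followed the prompt in its "training data". Correctness never enters into the objective, only frequency does:

```python
from collections import Counter

# Hypothetical counts of replies observed after each prompt during "training".
seen_replies = {
    "1+1=?": Counter({"yes, 2+3 does equal 9": 7, "2": 5, "11": 1}),
}

def most_plausible_reply(prompt):
    """Return the most frequent continuation, whether or not it's true."""
    return seen_replies[prompt].most_common(1)[0][0]

print(most_plausible_reply("1+1=?"))  # confidently wrong, because that's what scored best
```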

[-] chicken@lemmy.dbzer0.com 1 points 2 weeks ago

that is not the ... available outcome.

It demonstrably is already though. Paste a document in, then ask questions about its contents; the answer will typically take what's written there into account. Ask about something you know is in a Wikipedia article that would have been part of its training data, same deal. If you think it can't do this sort of thing, you can just try it yourself.

Obviously it can handle simple sums, this is an illustrative example

I am well aware that LLMs can struggle especially with reasoning tasks, and have a bad habit of making up answers in some situations. That's not the same as being unable to correlate and recall information, which is the relevant task here. Search engines also use machine learning technology and have been able to do that to some extent for years. But with a search engine, even if it's smart enough to figure out what you wanted and give you the correct link, that's useless if the content behind the link is only available to institutions that pay thousands a year for the privilege.

Think about these three things in terms of what information they contain and their capacity to convey it:

  • A search engine

  • Dataset of pirated contents from behind academic paywalls

  • An LLM model file that has been trained on said pirated data

The latter two each have their pros and cons and would likely work better in combination with each other, but they both have an advantage over the search engine: they can tell you about the locked up data, and they can be used to combine the locked up data in novel ways.

[-] funkless_eck@sh.itjust.works 2 points 2 weeks ago

the problem is you can't take those weaknesses and call it "academic" - it's a contradiction in terms.

When a real academic makes up answers it's a problem; when ChatGPT does it, it's part of the expectation.

[-] Auli@lemmy.ca 4 points 2 weeks ago

Except it won't. And AI will be pay-to-play.

[-] CosmicTurtle0@lemmy.dbzer0.com 71 points 3 weeks ago

To paraphrase Nixon:

"When you're a company, it's not illegal."

To paraphrase Trump:

"When you're a company, they just let you do it."

[-] PanArab@lemm.ee 39 points 3 weeks ago* (last edited 3 weeks ago)

Who writes the laws? There's your answer.

I'm curious why https://www.falconfinance.ae/ cares about this though.

What the hell are they selling? https://www.falconfinance.ae/falcon-securities/

[-] TheOakTree@lemm.ee 21 points 2 weeks ago

I did some digging. It's a parody finance website that makes it seem like you can invest in falcons and make a blockchain (flockchain) with them. Dig a little further, go to the linked forum, and you'll see it's just a community of people shitposting (mostly).

[-] doctortran@lemm.ee 38 points 3 weeks ago* (last edited 2 weeks ago)

Can we be honest about this, please?

Aaron Swartz went into a secure networking closet and left a computer there to covertly pull data from the server over many days without permission from anyone, which is absolutely not the same thing as scraping public data from the internet.

He was a hero who didn't deserve what happened, but it's patently dishonest to ignore that he was effectively breaking and entering, plus installing a data harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior. Even your local library would call the cops if you tried to do that.

[-] veniasilente@lemm.ee 6 points 2 weeks ago

Why don't you speak what you truly believe instead of copy-pasting the same gaslighting everywhere? We already made you, anyway.

[-] Facebones@reddthat.com 38 points 2 weeks ago

All is legal in the eyes of capital.

[-] DarkDarkHouse@lemmy.sdf.org 11 points 2 weeks ago

The real golden rule

[-] Facebones@reddthat.com 1 points 2 weeks ago

By peons*

Totally fine when they do it.

[-] crmsnbleyd@sopuli.xyz 24 points 3 weeks ago

Anything the rich and powerful do retroactively becomes okay

[-] EmbarrassedDrum@lemmy.dbzer0.com 22 points 3 weeks ago

and in due time, we'll hack OpenAI and get the sources from the chat module..

I've seen a few glitches before that made ChatGPT just drop entire articles in varying languages.

[-] FaceDeer@fedia.io 22 points 3 weeks ago

AI models don't actually contain the text they were trained on, except in very rare circumstances where they've been overfit on a particular text. (That's considered an error in training, and a lot of work has gone into finding ways to prevent it; it usually happens when a great many identical copies of the same data appear in the training set.) An AI model is simply far too small to hold its training data; there's no way the text can be compressed that much.
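
A rough back-of-envelope shows the scale of the mismatch. These are approximate ballpark figures assumed for illustration, not the specs of any particular model:

```python
params = 8e9              # e.g. an 8B-parameter model
model_bytes = params * 2  # fp16 weights, 2 bytes each -> ~16 GB

train_tokens = 15e12      # ~15 trillion training tokens, as reported for some recent models
corpus_bytes = train_tokens * 4  # a token averages ~4 bytes of text -> ~60 TB

print(f"weights: ~{model_bytes / 1e9:.0f} GB, training text: ~{corpus_bytes / 1e12:.0f} TB")
print(f"compression needed for verbatim storage: ~{corpus_bytes / model_bytes:.0f}x")  # ~3750x

# Good lossless text compressors manage roughly 4-10x, so holding the
# corpus verbatim in the weights is off by orders of magnitude.
```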

[-] EmbarrassedDrum@lemmy.dbzer0.com 8 points 3 weeks ago

thanks! that actually makes a lot of sense.

welp, guess I was wrong. so back to .edu scraping!

[-] electricprism@lemmy.ml 21 points 2 weeks ago

Remember what you learned in school: Working as a team to solve a test or problem is unacceptable!!! Unless you are a company.

[-] xiao@sh.itjust.works 18 points 3 weeks ago

I'm still blaming MIT for that!

[-] CHKMRK@programming.dev 13 points 3 weeks ago

Never really was

[-] dan@upvote.au 11 points 3 weeks ago

A recent report estimates that they won't be profitable until 2029: https://www.businessinsider.com/openai-profit-funding-ai-microsoft-chatgpt-revenue-2024-10

A lot can happen between now and then that would cause their expenses to grow even more, for example if they need to start licensing the content they use for training.

[-] Trainguyrom@reddthat.com 3 points 2 weeks ago

On the other hand some breakthrough in either hardware or software could make AI models significantly cheaper to run and/or train. The current cost in silicon is insane and just screams that there's efficiencies to be found. As always, in a gold rush, sell pickaxes

[-] dan@upvote.au 2 points 2 weeks ago

Definitely a possibility! It'll be interesting to see what happens.

[-] ProgrammingSocks@pawb.social 8 points 3 weeks ago

No, and AI almost certainly never will be. However, investor money keeps coming in, so it doesn't matter.

[-] WilfordGrimley@linux.community 4 points 3 weeks ago

Epstein'd his own life
