submitted 3 weeks ago by Zerush@lemmy.ml to c/technology@lemmy.ml

LOL

[-] Lugh@futurology.today 149 points 3 weeks ago

So the same people who have no problem about using other people's copyrighted work, are now crying when the Chinese do the same to them? Find me a nano-scale violin so I can play a really sad song.

[-] iamericandre@lemmy.world 53 points 3 weeks ago
[-] wewbull@feddit.uk 25 points 3 weeks ago

That's obviously a cello.

[-] Viri4thus@feddit.org 6 points 3 weeks ago

Pedant time: That's microscale not nanoscale.

You can shoot me now, it's deserved.

[-] sp3tr4l@lemmy.zip 5 points 3 weeks ago

Can you put a liuqin in there?

[-] j4k3@lemmy.world 22 points 3 weeks ago* (last edited 3 weeks ago)

Planck could not scale small enough.

[-] harsh3466@lemmy.ml 2 points 3 weeks ago

You more elegantly said what I came to say.

[-] Nadru@lemmy.world 98 points 3 weeks ago

Stealing from thieves is not theft

[-] admin@lemmy.my-box.dev 30 points 3 weeks ago* (last edited 3 weeks ago)

Yes it is. Although I personally have far fewer moral objections to it.

To elaborate:
OpenAI scraped data without permission, and now makes money from it.

DeepSeek then used that data (they even paid OpenAI for it), trained a model on it, and then released that model for anyone to use.

While it's still making use of "stolen data" (that's a whole semantics discussion I won't get into right now), I find it far more noble than the former.
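For anyone wondering what "used that data" means in practice: the usual allegation is distillation. You pay for API access, collect the bigger model's answers, and fine-tune your own model on those prompt/answer pairs. Here's only a rough sketch of the data-collection half, assuming an OpenAI-compatible endpoint; the model name, prompts, and file path are placeholders, not anything DeepSeek is known to have used:

```python
# Sketch of the "collect teacher outputs" step of distillation.
# Assumes the official openai Python SDK (v1) and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain the difference between supervised and unsupervised learning.",
    "Write a short proof that the square root of 2 is irrational.",
]

with open("teacher_outputs.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder "teacher" model
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # Each line becomes one training example for the smaller "student" model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

The fine-tuning step then happens entirely on your own hardware, which is why the provider's only real lever against it is the terms of service on the API.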

[-] harsh3466@lemmy.ml 11 points 3 weeks ago

Came to say something similar. Like I give a fuck that OpenAI's model/tech/whatever was "stolen" by Deepseek. Fuck that piece of shit Sam Altman.

[-] wewbull@feddit.uk 5 points 3 weeks ago

"Recieving stolen goods" is prosecutable.

It's a lesser crime than the original theft though.

[-] rtxn@lemmy.world 45 points 3 weeks ago

Cry me a fucking river, David.

[-] notannpc@lemmy.world 41 points 3 weeks ago

Oh are we supposed to care about substantial evidence of theft now? Because there’s a few artists, writers, and other creatives that would like to have a word with you…

[-] Acoustic@lemm.ee 41 points 3 weeks ago

Bruh, these guys trained their own AI on so-called "publicly available" content. Except it was, and still is, used completely without consent from, or compensation to, said artists/bloggers/creators etc. Don't throw rocks when you live in a glass house 🤌

[-] Zerush@lemmy.ml 2 points 3 weeks ago* (last edited 3 weeks ago)

Another reason why I use Andi: it doesn't ingest copyrighted content into its own knowledge base. It's a search assistant, not a chatbot like the others. It searches by concept and gives a direct answer to your question, also listing the links to the sources and pages where it found the answers. Its LLM is only there to "understand" (to call it something) your question, find pages that contain relevant information, and summarize their content. There is no third-party or copyrighted content in the LLM; its knowledge is real-time web content, like any other search engine. Even in its (limited) chat capabilities it always shows the sources for its answers.

Traditional search works with keywords, listing thousands of pages where the keyword appears, which means 99% of the list has nothing to do with what you are looking for. That's why AI search gives better results, but not chatbots, which look for answers in their own knowledge base and invent them when they don't find any.
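To make the distinction concrete, the pattern I mean is "search first, then summarize, then cite". This is only a toy sketch of that pattern, not Andi's actual code; web_search() and summarize() are canned stand-ins for a real search backend and a real LLM call:

```python
# Toy sketch of a "search assistant": the answer is built only from live search
# results and always carries its sources. Both helpers below are placeholders.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    url: str
    snippet: str

def web_search(query: str) -> list[Result]:
    # Stand-in for a real search API call.
    return [Result("Example page", "https://example.com", f"A page about {query}.")]

def summarize(question: str, results: list[Result]) -> str:
    # Stand-in for an LLM prompted to answer ONLY from the given snippets.
    return " ".join(r.snippet for r in results)

def answer(question: str) -> str:
    results = web_search(question)          # knowledge comes from the live web
    summary = summarize(question, results)  # the model only condenses it
    sources = "\n".join(f"- {r.title}: {r.url}" for r in results)
    return f"{summary}\n\nSources:\n{sources}"

print(answer("What did DeepSeek release?"))
```

A chatbot answers from whatever ended up baked into its weights; an assistant like this answers from whatever the search step returned, which is why it can always show you the pages it used.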

[-] extremeboredom@lemmy.world 38 points 3 weeks ago

Womp womp. I'm sure openAI asked for permission from the creators for all its training data, right? Thief complains about someone else stealing their stolen goods, more at 11.

[-] j4k3@lemmy.world 36 points 3 weeks ago

OpenAI's mission statement is also in their name. The fact that they have a proprietary product that is not open source is criminal, and they should be sued out of existence. They are now just like Sun Microsystems after Apache was open-sourced: irrelevant, they just haven't gotten the memo yet. No company can compete against the whole world.

[-] halcyoncmdr@lemmy.world 34 points 3 weeks ago

Your point, OpenAI? Weren't you part of the group saying training AI wasn't copyright infringement? Not so happy when it's your shit being copied? Huh. Weird.

[-] qarbone@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

The only concern is how much the cost of training the model changes if it got a significant kickstart from previous, very expensive training. I was interested because it was said to be comparable for a fraction of the cost. "Open"AI can suck sand.

[-] vfreire85@lemmy.ml 29 points 3 weeks ago

so he's just admitting that deepseek did a better job than openai but for a fraction of the price? it only gets better.

[-] dawnglider@lemmy.ml 17 points 3 weeks ago* (last edited 3 weeks ago)

It's funny that they did all that and open-sourced it too. Like some kid accusing another of copying their homework while the other kid did significantly better and also offered to share.

[-] conicalscientist@lemmy.world 29 points 3 weeks ago

When you can't win, accuse them of cheating.

[-] barnaclebutt@lemmy.world 15 points 3 weeks ago

But, but they committed the copyright infringement first. It's theirs. That's totally unfair. What are tech bros going to do? Admit they are grossly overvalued? They've already spent the billions.

[-] lobut@lemmy.ca 26 points 3 weeks ago* (last edited 3 weeks ago)

Me pretending to care about David Sacks' claim:

Open AI CTO making a stupid face after being asked if they steal

[-] mctoasterson@reddthat.com 25 points 3 weeks ago

Here's the thing... It was a bubble because you can't wall off the entire concept of AI. This revelation just accelerated what should've been obvious.

There are many, many open models available for people to fuck around with. I have, in a homelab setting, just to keep abreast of what is going on and get a general idea of how it works and what it's capable of.

What most normie followers of AI don't seem to understand is that whether you're doing LLM stuff or machine learning object detection or something else, you can get open software that is "good enough" and run it locally. If you have a Raspberry Pi you can run some of this stuff; it will be slow, but acceptable for many use cases.
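If anyone wants to see how low the bar is, here's a minimal sketch of talking to a locally hosted model through Ollama's HTTP API. It assumes you have `ollama serve` running on the default port and have pulled a small model; the model name is just an example, swap in whatever you have:

```python
# Ask a locally hosted model a question via Ollama's HTTP API.
# Assumes Ollama is serving on localhost:11434 and the model has been pulled,
# e.g. with `ollama pull llama3.2`.
import requests

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # slow hardware (a Raspberry Pi, say) needs a generous timeout
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize what an LLM is in two sentences."))
```

No API key, no cloud, and it works offline once the model is downloaded.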

So the idea that only OpenAI would ever hold the keys, and should therefore have a massive valuation in perpetuity, is just laughable. This Chinese company just highlighted that you can brute-force train more optimized models on garbage-tier hardware.

[-] Embargo@lemm.ee 22 points 3 weeks ago

I couldn't give less of a fuck.

[-] thefluffiest@feddit.nl 20 points 3 weeks ago

FUD, just to distract from the crushing multibillion dollar defeat they’ve just been dealt. First stage of grief: denial. Second: anger. Third: bargaining. We’re somewhere between 2 and 3 right now.

[-] KeenFlame@feddit.nu 3 points 3 weeks ago

Nope, it's definitely true, but it's sensationalism. Almost all models are trained using GPT.

[-] ToadOfHypnosis@lemmy.ml 17 points 3 weeks ago

Open AI stole all of our data to train their model. If this is true, no sympathy.

[-] Zerush@lemmy.ml 3 points 3 weeks ago

That is what I mean: there's a difference between an AI with stolen content in its knowledge/language base and an AI assistant which only searches for information on the web to answer, linking to the corresponding pages. A far more intelligent and ethical use of AI.

[-] TomMasz@lemmy.world 16 points 3 weeks ago

Copycat gets copycatted.

[-] some_guy@lemmy.sdf.org 15 points 3 weeks ago

Yes, so what?

https://stratechery.com/2025/deepseek-faq/

Who the fuck cares? They're all doing this.

[-] absquatulate@lemmy.world 13 points 3 weeks ago

It's only ok when we do it, cause we're the good guys!

[-] horse_battery_staple@lemmy.world 10 points 3 weeks ago

They're obviously trying so hard for regulatory capture in the States that it's embarrassing.

[-] yogthos@lemmy.ml 9 points 3 weeks ago

If there's one thing we know about American AI companies it's that they have a spotless record when it comes to data ethics. Never touched unauthorized data. Swear! Not even once. Of course not.

[-] ddash@lemmy.dbzer0.com 8 points 3 weeks ago
[-] veroxii@aussie.zone 8 points 3 weeks ago

Well, you can't run OpenAI's models yourself, so pretty sure DeepSeek would've had to pay for API access. How is that stealing again?

[-] OhStopYellingAtMe@lemmy.world 7 points 3 weeks ago

They’re eating each other.

[-] P00ptart@lemmy.world 3 points 3 weeks ago

Exactly. This is the end. The companies eat each other while we suffer.

[-] zaft@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

Whoop de doo

[-] KeenFlame@feddit.nu 3 points 3 weeks ago

So what? It's absolutely true and makes absolutely no difference to anyone

[-] Zerush@lemmy.ml 2 points 3 weeks ago

I still prefer Andisearch over all the others; it was the first AI search, out long before all the others were released, with its own LLM, not a copy of someone else's.

[-] P00ptart@lemmy.world 2 points 3 weeks ago

It really doesn't matter which AI you prefer to use. You're wrong for using AI period.

[-] merde@sh.itjust.works 3 points 3 weeks ago

you're wrong for using writing. Writing leads to laziness and forgetfulness. Future generations will hear much without being properly taught and will appear wise but not be so.

[-] Baaahb@feddit.nl 3 points 3 weeks ago

This is a garbage-tier take. You use spell correction. That's a form of computerized decision making, which is artificial intelligence. You MAY have a point if you're referring to LLMs, but that's incredibly arguable, and you haven't stated any reasoning behind your opinion.

[-] P00ptart@lemmy.world 2 points 3 weeks ago

Bullshit. You haven't made any argument at all. You just supported a position without any backing behind it. So your take is garbage. I don't use spell correction because I don't need it, as I understand the language.

[-] Baaahb@feddit.nl 2 points 3 weeks ago

Use of AI is not immoral in and of itself. That argument is made in my response. I supported that argument by explaining why AI isn't immoral, in that it's a useful tool.

"I don't use spell correction because I don't need it, as I understand the language."

Run-on sentence. You are using a comma as a separator between two fully separate thoughts. That's the wrong punctuation to accomplish the task you want. You should either use a semicolon or a conjunction, or just make them separate sentences.

Your understanding of the language is clearly perfect and any tool that provides feedback when you make mistakes is clearly unnecessary.

[-] dx1@lemmy.ml 2 points 3 weeks ago

Good luck suing them
