218
submitted 5 days ago by yogthos@lemmy.ml to c/memes@lemmy.ml
[-] ICastFist@programming.dev 14 points 4 days ago

Come on, OP, Altman is still a billionaire. If he got out of the game right now, with OpenAI still unprofitable, he'd still have enough wealth for a dozen generations.

[-] yogthos@lemmy.ml 12 points 4 days ago

He's a billionaire based on the valuation of OpenAI; if the company fizzles, so does his wealth.

[-] Grapho@lemmy.ml 12 points 4 days ago

🙏🏾🙏🏾🙏🏾

[-] NastyNative@mander.xyz 3 points 3 days ago

Free… ain't nothing free in this world!

[-] SacredPony@sh.itjust.works 2 points 3 days ago

All things cost your money, your data, or your soul. And those at the top love nothing more than to trick us into paying all three at once.

[-] Sabre363@sh.itjust.works 10 points 4 days ago

We doing paid promotions or something on Lemmy now? You sure seem to be pushing this DeepSeek thing pretty hard, OP.

[-] yogthos@lemmy.ml 17 points 4 days ago

That's right, I'm a huge open source shill.

[-] Sabre363@sh.itjust.works 8 points 4 days ago

None of this has anything to do with the model being open source or not, plenty of other people have already disputed that claim.

[-] Grapho@lemmy.ml 13 points 4 days ago

It's a model that outperforms the other ones in a bunch of areas with a smaller footprint, was trained for less than a twentieth of the price, and was then released as open source.

If it were European or US-made, nobody would deem it suspicious if somebody talked about it all month, but it's a Chinese breakthrough, and god forbid you talk about it for three days.

[-] yogthos@lemmy.ml 11 points 4 days ago

It has everything to do with the tech being open. You can dispute it all you like, but the fact is that all the code and research behind it is open. Anybody could build a new model from scratch using open data if they wanted to. That's what matters.

[-] Sabre363@sh.itjust.works 5 points 4 days ago

I'm commenting on the odd nature of the post and your behavior in the comments, pointing out that it comes across as more of a shallow advertisement than a sincere endorsement; that is all. I don't know enough about DeepSeek to discuss it meaningfully, nor do I have enough evidence to decide on its open source status.

[-] uberstar@lemmy.ml 7 points 4 days ago

I tried DeepSeek and immediately fell in love. My only nitpick is that images have to have text on them, otherwise it complains, but for the price of free I'm basically just asking for too much. Contemporaries be damned.

[-] SplashJackson@lemmy.ca 17 points 5 days ago

What's a deepseek? Sounds like a search engine?

[-] Karcinogen@discuss.tchncs.de 27 points 5 days ago

DeepSeek is a Chinese AI company that released DeepSeek R1, a direct competitor to ChatGPT.

[-] yogthos@lemmy.ml 29 points 5 days ago

You forgot to mention that it's open source.

[-] trevor 3 points 3 days ago

Is it actually open source, or are we using the fake definition of "open source AI" that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

[-] yogthos@lemmy.ml 5 points 3 days ago

The code is open, weights are published, and so is the paper describing the algorithm. At the end of the day anybody can train their own model from scratch using open data if they don't want to use the official one.
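
For anyone who wants to see what "the weights are published" means in practice, here's a minimal sketch using the open source Hugging Face transformers library. The checkpoint name below is an assumption (one of the smaller distilled releases), so swap in whichever published model you actually want to run.

```python
# Minimal sketch: running a published DeepSeek checkpoint locally with the
# open-source transformers library. The model id is an assumption here;
# point it at whichever released checkpoint you actually want to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "In one sentence, what does 'open weights' mean?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```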

[-] trevor 2 points 3 days ago* (last edited 3 days ago)

The training data is the important piece, and if that's not open, then it's not open source.

I don't want the data to avoid using the official one. I want the data so that I can reproduce the model. Without the training data, you can't reproduce the model, and if you can't do that, it's not open source.

The idea that a normal person can scrape the same amount and quality of data that any company or government can, and tune the weights enough to recreate the model is absurd.

[-] yogthos@lemmy.ml 3 points 3 days ago

What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go very quickly, so any individual model isn't all that valuable. If people are serious about wanting a fully open model, then they can build one. You can use stuff like Petals to distribute the work of training too.
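
To make the Petals point concrete, here's a rough sketch of how distributed use of a big model looks with that library. Petals is aimed at distributed inference and fine-tuning of existing models rather than pretraining from scratch, and the class and model names below are recalled from its README, so treat them as assumptions and check the current docs.

```python
# Rough sketch of the Petals workflow: layers of a large model are served by
# volunteers across the internet, and the client stitches them together.
# Class and model names are assumptions based on the Petals README.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # assumed public swarm model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open models matter because", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```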

[-] trevor 2 points 3 days ago

That's fine if you think the algorithm is the most important thing. I think the training data is equally important, and I'm so frustrated by the bastardization of the meaning of "open source" as it's applied to LLMs.

It's like a normal software product that provides a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance that provides the functionality isn't.

It'd be fine if we could just use more honest language like "open weight", but "open source" means something different.

[-] yogthos@lemmy.ml 3 points 3 days ago

Again, if people feel strongly about this then there's a very clear way to address this problem instead of whinging about it.

[-] trevor 1 points 3 days ago* (last edited 3 days ago)

Yes. That solution would be to not lie about it by calling something that isn't open source "open source".

[-] KeenFlame@feddit.nu 1 points 2 days ago

Sigh, it's because the training data is mostly ChatGPT output itself. Chill.

[-] trevor 1 points 2 days ago

I mean, god bless 'em for stealing already-stolen data from scumfuck tech oligarchs and causing a multi-billion dollar devaluation in the AI bubble. If people could just stop laundering the term "open source", that'd be great.

[-] KeenFlame@feddit.nu 1 points 2 days ago

I don't really think they're stealing, because I don't believe publicly available information can be property. The algorithm is open source, so the labelling is correct.

[-] trevor 1 points 2 days ago

My use of the word "stealing" is not a condemnation, so substitute it with "borrowing" or "using" if you want. It was already stolen by other tech oligarchs.

You can call the algo open source if the code is available under an OSS license. But the larger project still uses proprietary training data, and therefore the whole model, which requires that proprietary training data to function, is not open source.

[-] yogthos@lemmy.ml 2 points 2 days ago

There's plenty of debate on what qualifies as an open source model, last I checked, but I wasn't expecting honesty from you there anyway.

[-] trevor 1 points 2 days ago

You won't see me on the side of the "debate" that launders language in defense of the owning class ¯\_(ツ)_/¯

[-] yogthos@lemmy.ml 2 points 2 days ago

Nobody is doing that, but keep making bad faith arguments if you feel the need to.

this post was submitted on 26 Jan 2025