907 points · submitted 6 months ago (28 Jan 2025) by Prunebutt@slrpnk.net to c/memes@lemmy.world

Office space meme:

"If y'all could stop calling an LLM "open source" just because they published the weights... that would be great."

top 50 comments
[-] Jocker@sh.itjust.works 117 points 6 months ago

Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company "OpenAI"

[-] WraithGear@lemmy.world 74 points 6 months ago* (last edited 6 months ago)

Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn't that just trained from the other AI? It's not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What's the issue here?

[-] Prunebutt@slrpnk.net 47 points 6 months ago

Seems kinda reductive about what makes it different from most other LLMs

The other LLMs aren't open source, either.

isn’t that just trained from the other AI?

Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.

And the way it uses that data, afaik, is open and editable, and the license to use it is open.

From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.

[-] WraithGear@lemmy.world 8 points 6 months ago
  1. Well that’s the argument.

  2. AI condensing AI is what's being talked about here. From my understanding, DeepSeek is two parts: they start with known datasets, and the two parts bounce ideas against each other and calculate fitness. So degrading recursive results is being directly tackled here. But training sets are tokenized gathered data. The gathering of datasets is a rights issue, but that is not part of the conversation here.

  3. It could be I don't have a complete concept of what open source is, but from looking into it, all the boxes are checked. The dataset is not what is different; it's just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.

[-] Prunebutt@slrpnk.net 30 points 6 months ago* (last edited 6 months ago)

The point of open source is access to reproducibility. The weights are the end product (like a binary blob); to be open source, you need to supply the way the end product is created.

[-] WraithGear@lemmy.world 8 points 6 months ago* (last edited 6 months ago)

So it's not how it tokenized the data you are looking for, it's not how the weights are applied you want, and it's not how it functions to structure the output you want, because these are all open… it's the entirety of the bulk unfiltered data you want. Which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn't touch on at all how this LLM is different from other LLMs? This would be, as I understand it… like saying that an open source game emulator can't be open source because Nintendo games are encapsulated? I don't consider the training data to be the LLM. I consider the system that manipulated that data to be the LLM. Is that where the difference in opinion is?

[-] Prunebutt@slrpnk.net 19 points 6 months ago

it’s the entirety of the bulk unfiltered data you want

Or more realistically: a description of how you could source the data.

doesn't touch on at all how this LLM is different from other LLMs?

Correct. Llama isn't open source, either.

like saying that an open source game emulator can’t be open source because Nintendo games are encapsulated

Not at all. It's like claiming an emulator is open source because it has a plugin system, while it needs a closed-source build dependency that the developer doesn't disclose to the public.

[-] whotookkarl@lemmy.world 11 points 6 months ago* (last edited 6 months ago)

A closer analogy would be only providing the binary output of the emulator build and calling it open source. If you can't reproduce building the output from what they provide, in what way is it reproducible? The model is the output; the training data and the algorithm that builds the model from the training data are the input.

Edit: Say I have a Java project I want to open source. Normally (oversimplifying a bit) it goes: .java source files are compiled into intermediate bytecode in .class files, then just-in-time (JIT) compilation creates the machine code as it runs in the JVM. It's not open source if I only share the .class files, even if I can use them to recreate source files that recompile into the same .class files. Starting at an intermediate step of the process isn't the source.

[-] pennomi@lemmy.world 29 points 6 months ago

It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.

The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

[-] Prunebutt@slrpnk.net 29 points 6 months ago* (last edited 6 months ago)

Let's transfer your bullshirt take to the kernel, shall we?

The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.

🤡

Edit: It's more that so-called "AI" stakeholders want to launder its reputation with the "open source" label.

[-] WraithGear@lemmy.world 13 points 6 months ago

Right. You could train it yourself too. Though its scope would be limited based on capability. But that's not necessarily a bad thing. Taking a class? Feed it your textbook, or other available sources, and it can help you on that subject. Just because it's hard doesn't mean it's not open.

[-] Ajen@sh.itjust.works 11 points 6 months ago

The weights aren't the source, they're the output. Modifying the weights is analogous to editing a compiled binary, and the training dataset is analogous to source code.
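
To make the analogy concrete, here is a minimal sketch (all layer names and shapes below are hypothetical, not any real model's format) of what an "open weights" artifact amounts to: arrays of numbers you can patch in place, with nothing that tells you how to regenerate them.

```python
# Hypothetical sketch: released weights are just data, like a compiled binary.
import numpy as np

# Stand-in for a published "open weights" checkpoint: named arrays of floats.
weights = {
    "layer0.attention.w": np.random.randn(8, 8).astype(np.float32),
    "layer0.mlp.w": np.random.randn(8, 32).astype(np.float32),
}

# You *can* edit the artifact directly -- e.g. zero out one column -- much like
# byte-patching a binary, but nothing here explains how these numbers were
# produced or how to rebuild them from scratch (the training code and data).
weights["layer0.attention.w"][:, 0] = 0.0

print({name: w.shape for name, w in weights.items()})
```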

[-] Prunebutt@slrpnk.net 10 points 6 months ago

You could train it yourself too.

How, without information on the dataset and the training code?

[-] KillingTimeItself@lemmy.dbzer0.com 37 points 6 months ago

I mean, if it's not directly factually inaccurate, then it is open source. It's just that the specific block of data they used and operate on isn't published or released, which is pretty common even among open source projects.

AI just happens to be in a fairly unique spot where that thing is actually like, pretty important. Though nothing stops other groups from creating an openly accessible one through something like distributed computing. Which seems to be a fancy new kid on the block moment for AI right now.

[-] fushuan@lemm.ee 13 points 6 months ago* (last edited 6 months ago)

The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.

When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.

As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching branch of LLMs) that I have read in the past. Both code and training data are provided.

Example in the computer vision world, darknet and YOLO: https://github.com/AlexeyAB/darknet

This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called YOLO. They also provide links to the original dataset on which the YOLO models were trained. THIS is open source.
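
For contrast, here is a toy sketch of what a fully reproducible release looks like. Everything below is hypothetical (a synthetic stand-in dataset, an invented seed and hyperparameters), but it shows the point: when the training entry point, the data source, and the settings are all published, anyone can re-run training and regenerate the "weights".

```python
# Hypothetical sketch of a reproducible training release (toy logistic regression).
import numpy as np

SEED = 0              # pinned so runs are repeatable
LEARNING_RATE = 0.1   # published hyperparameters
EPOCHS = 200

def load_dataset():
    # A real release would point at the documented public dataset (as the
    # darknet/YOLO repo does); a synthetic stand-in keeps this sketch runnable.
    rng = np.random.default_rng(SEED)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(np.float64)
    return X, y

def train():
    X, y = load_dataset()
    w = np.zeros(X.shape[1])
    for _ in range(EPOCHS):
        p = 1.0 / (1.0 + np.exp(-X @ w))              # sigmoid predictions
        w -= LEARNING_RATE * X.T @ (p - y) / len(y)   # gradient step
    return w

if __name__ == "__main__":
    print("reproducible weights:", train())
```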

[-] FooBarrington@lemmy.world 8 points 6 months ago* (last edited 6 months ago)

But it is factually inaccurate. We don't call binaries open-source, we don't even call visible-source open-source. An AI model is an artifact just like a binary is.

An "open-source" project that doesn't publish everything needed to rebuild isn't open-source.

[-] Xerxos@lemmy.ml 34 points 6 months ago* (last edited 6 months ago)

The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyright-protected material.

They published the weights AND their training methods, which is about as open as it gets.

[-] Prunebutt@slrpnk.net 24 points 6 months ago

They could disclose how they sourced the training data, what the training data is and how you could source it. Also, did they publish their hyperparameters?

They could just not call it Open Source if you can't open source it.

[-] Naia 13 points 6 months ago

For neural nets the method matters more. Data would be useful, but at the amount these things get trained on the specific data matters little.

They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.

[-] Oisteink@feddit.nl 30 points 6 months ago* (last edited 6 months ago)

Source - it’s about open source, not access to the database

[-] Prunebutt@slrpnk.net 24 points 6 months ago

So, where's the source, then?

[-] Ugurcan@lemmy.world 17 points 6 months ago

There are lots of problems with the new lingo. We need to come up with new words.

How about “Open Weightings”?

[-] verstra@programming.dev 17 points 6 months ago
[-] surph_ninja@lemmy.world 16 points 6 months ago

Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.

[-] SoftestSapphic@lemmy.world 16 points 6 months ago

I like how when America does it we call it AI, and when China does it it's just an LLM!

[-] bleistift2@sopuli.xyz 16 points 6 months ago* (last edited 6 months ago)

Uuuuh… why?

Do you only accept open source code if you can see every key press every developer made?

[-] Prunebutt@slrpnk.net 79 points 6 months ago

Open source means you can recreate the binaries yourself. Neither Facebook nor the devs of DeepSeek published which training data they used, nor their training algorithm.

[-] magic_lobster_party@fedia.io 36 points 6 months ago

They published the source code needed to run the model. It's open source in the way that anyone can download the model, run it locally, and further build on it.

Training from scratch costs millions.

[-] Zikeji@programming.dev 22 points 6 months ago* (last edited 6 months ago)

Open source isn't really applicable to LLMs, IMO.

There are open weights (the model), available training data, and other nuances.

They actually went a step further and provided a very thorough breakdown of the training process, which does mean others could similarly train models from scratch with their own training data. HuggingFace seems to be doing just that as well. https://huggingface.co/blog/open-r1

Edit: see the comment below by BakedCatboy for a more in-depth explanation and correction of a misconception I've made.

[-] BakedCatboy@lemmy.ml 15 points 6 months ago

It's worth noting that OpenR1 have themselves said that DeepSeek didn't release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn't be able to replicate it without re-discovering what they did.

OSI specifically makes a carve-out that allows models to be considered "open source" under their open source AI definition without providing the training data, so when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and details about training data curation so that a comparable dataset can be compiled for replicating the results.
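
To illustrate what that still requires in practice, here is a hedged sketch (every file name and value below is invented, not DeepSeek's or OSI's actual material) of the pieces being described: published hyperparameters, an entry point that kicks off training, and checkpointing so later stages can be re-run without starting over.

```python
# Hypothetical illustration of "code that kicks off training" plus hyperparameters
# and checkpoints -- the parts asked for beyond the weights themselves.
from dataclasses import dataclass, asdict
import json, pathlib

@dataclass
class TrainingConfig:
    learning_rate: float = 3e-4
    batch_size: int = 32
    warmup_steps: int = 100
    total_steps: int = 1000
    dataset_manifest: str = "data/manifest.jsonl"  # describes how the data was curated

def save_checkpoint(step: int, state: dict, out_dir: str = "checkpoints") -> pathlib.Path:
    # Checkpoints let others roll back and redo later training stages,
    # which is part of the "preferred form" when they are actually used.
    path = pathlib.Path(out_dir)
    path.mkdir(exist_ok=True)
    ckpt = path / f"step_{step:06d}.json"
    ckpt.write_text(json.dumps(state))
    return ckpt

if __name__ == "__main__":
    cfg = TrainingConfig()
    print("published hyperparameters:", asdict(cfg))
    print("wrote:", save_checkpoint(0, {"config": asdict(cfg), "weights": []}))
```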

[-] Prunebutt@slrpnk.net 14 points 6 months ago

They published the source code needed to run the model.

Yeah, but not to train it

anyone can download the model, run it locally, and further build on it.

Yeah, it's about as open source as binary blobs.

Training from scratch costs millions.

So what? You can still glean something if you know the dataset on which the model has been trained.

If software is hard to compile, can you keep the source code closed and still call software "open source"?

[-] marcos@lemmy.world 47 points 6 months ago

Do you call binary-only software with EULA "Open Source" too?

[-] pennomi@lemmy.world 8 points 6 months ago

No, but I do call a CC-licensed PNG file open source even if the author didn't share the original layered Photoshop file.

Model weights are data, not code.

[-] breakingcups@lemmy.world 14 points 6 months ago

You'd be wrong. Open source has a commonly accepted definition, and a CC-licensed PNG does not fall under it. It's copyleft, yes, but not open source.

I do agree that model weights are data and can be given a license, including CC0. There might be some argument about how one can assign a license to weights derived from copyrighted works, but I won't get into that right now. I wouldn't call even the most liberally licensed model weights open-source though.

[-] BakedCatboy@lemmy.ml 31 points 6 months ago* (last edited 6 months ago)

It really comes down to this part of the "Open Source" definition:

The source code [released] must be the preferred form in which a programmer would modify the program

A compiled binary is not the format in which a programmer would prefer to modify the program - it's much preferred to have the text file which you can edit in a text editor. Just because it's possible to reverse engineer the binary and make changes by patching bytes doesn't make it count. Any programmer would much rather have the source file instead.

Similarly, the released weights of an AI model are not easy to modify, and are not the "preferred format" that the internal programmers use to make changes to the AI model. They typically are making changes to the code that does the training and making changes to the training dataset. So for the purpose of calling an AI "open source", the training code and data used to produce the weights are considered the "preferred format", and that is what needs to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll back the model and redo some of the later training steps without redoing all training from the beginning - this is also considered part of the preferred format if it's used.

OpenR1, which is attempting to recreate R1, notes: "No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales."

I would call "open weights" models just "self-hostable" models instead of open source.

[-] joyjoy@lemm.ee 27 points 6 months ago

Open Source (generally and for AI) has an established definition.

https://opensource.org/ai/open-source-ai-definition

[-] barkingspiders@infosec.pub 24 points 6 months ago

This is exactly it: open source is not just the availability of the machine instructions, it's also the ability to recreate those machine instructions. Anything less is incomplete.

It strikes me as a variation on the "free as in beer versus free as in speech" line that gets thrown around a lot. These weights allow you to use the model for free, and you are free to modify the existing weights, but being unable to re-create the original means it falls short of being truly open source. It is free as in beer, but that's it.

[-] unknown1234_5@kbin.earth 15 points 6 months ago

it's only open source if the source code is open.

[-] theacharnian@lemmy.ca 15 points 6 months ago

Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.

[-] stinky@redlemmy.com 12 points 6 months ago
[-] surewhynotlem@lemmy.world 10 points 6 months ago

Creative Commons would make more sense
