GPT-4's details are leaked. (threadreaderapp.com)

cross-posted from: https://lemmy.intai.tech/post/72919

Parameters count:

GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers. Mixture of Experts - confirmed.

OpenAI was able to keep costs reasonable by using a mixture-of-experts (MoE) model. They use 16 experts within their model, each with ~111B parameters for the MLP, and 2 of these experts are routed to per forward pass.
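Taking the leaked figures at face value, the back-of-envelope arithmetic for how many parameters are actually active per forward pass looks like this (all numbers are from the rumor above, not confirmed by OpenAI):

```python
# Rough active-parameter count for the rumored GPT-4 MoE configuration.
# These figures come from the leak and are assumptions, not confirmed specs.
EXPERTS_TOTAL = 16
EXPERTS_PER_TOKEN = 2        # top-2 routing: 2 experts used per forward pass
EXPERT_MLP_PARAMS = 111e9    # ~111B MLP parameters per expert
TOTAL_PARAMS = 1.8e12        # ~1.8T total parameters

# Weights shared by every token (attention, embeddings, etc.) are whatever
# is left over after subtracting all 16 expert MLPs from the total.
shared = TOTAL_PARAMS - EXPERTS_TOTAL * EXPERT_MLP_PARAMS

# A forward pass touches the shared weights plus only the 2 routed experts.
active = shared + EXPERTS_PER_TOKEN * EXPERT_MLP_PARAMS

print(f"shared: {shared/1e9:.0f}B, active per forward pass: {active/1e9:.0f}B")
# → shared: 24B, active per forward pass: 246B
```

This is the whole point of MoE here: inference cost scales with the ~246B active parameters, not the full ~1.8T.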

Related Article: https://lemmy.intai.tech/post/72922

[-] Chickenstalker@lemmy.world 19 points 1 year ago

People who shit on AI "hallucinations" are the same kind of people who called the Wright Brothers' Flyer janky.

[-] Nugget_in_biscuit@lemmy.ml 10 points 1 year ago

Yeah, but the original Wright Flyer was extremely janky. It took decades before planes were safe enough for the general public to fly on them. I doubt it's going to take decades for LLMs to get really good, but it's undeniable that the current generation of these systems is somewhat lacking in quality.

[-] Call_Me_Maple@lemmy.world 3 points 1 year ago

Yeah, I agree. Just because it's bad now doesn't mean it won't be good later. The only way to fix the problems something has is by recognizing them and working on them. Imagine if people didn't point out the AI hallucinations? We'd never get anywhere with LLMs. Not shitting on the first guy; I get where he's coming from. It's a damn cool time to be alive, LLMs are incredible and can only get better from here (that is, of course, if they don't keep slapping f*****g censors on them!). But it's important for us to recognise the flaws in the system so it, and we, can grow.

[-] grabyourmotherskeys@lemmy.world 2 points 1 year ago

And to curb enthusiasm for integrating stuff like this into production systems.

There are currently lawsuits about copyright violations in training data. They will either get settled for tons of money or the models will be retrained without those data sources. This could have a significant effect on the business model, the business itself, and the models in use, especially over the long term.

There's a lot to figure out before this is a stable product.

[-] kakes@sh.itjust.works 1 points 1 year ago

I'm just stoked they exist at all, honestly, never minding any degree of quality. Been living my best geek life lately.

[-] Polydextrous@lemmy.world 4 points 1 year ago

I doubt it. They’d be pretty old, if so

[-] Thorny_Thicket@sopuli.xyz 2 points 1 year ago

Yeah as if those hallucinations are somehow unique to AI

[-] damnYouSun@sh.itjust.works 2 points 1 year ago

Weren't there some newspapers that said it would take one million years before humans could fly, about two weeks before the Flyer?

Hell, we have gone from hunter gatherers to a technologically advanced society in less time than that. The moral of the story being journalists are idiots and should be ignored.

[-] BobbyBandwidth@lemmy.world 17 points 1 year ago

What’s over? Except for my head

[-] Call_Me_Maple@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

"Half of those additions are censors and more creative ways to say 'sorry, I can't do that for you Jim.'" Lol, I'm just kidding, 1.8t parameters is incredible.

I just really hope that it's not as censored as it currently is. ;_;

[-] lanolinoil@lemmy.world 8 points 1 year ago

The interesting part to me:

The missing dataset is a custom dataset of college textbooks, collected by hand for as many courses as possible.

This is very easy to convert to a txt file and then, with self-instruct, into instruction form. This creates the "illusion" that GPT-4 "is smart" no matter who uses it.

Computer scientist? Sure! It can help you with your questions about P != NP. Philosophy major? It can totally talk to you about epistemology.

Don't you see? It was trained on the textbooks. It is so obvious.

This could explain some (but not all) of the 'magic' I have seen with GPT4 vs GPT3.

If you put a bunch of textbooks into Google, it still couldn't help me build a video game engine
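The "convert to a txt file, then self-instruct into instruction form" step described above could look very roughly like this. The field names and prompt wording are purely illustrative; nothing about the actual pipeline is known:

```python
# Hypothetical sketch of turning raw textbook passages into self-instruct
# style (instruction, input, output) training examples. The schema and
# template here are assumptions for illustration, not the real pipeline.
def to_instruction_examples(passages):
    examples = []
    for passage in passages:
        examples.append({
            "instruction": "Explain the following concept as a textbook would.",
            "input": passage["topic"],
            "output": passage["text"],
        })
    return examples

chunks = [{"topic": "P vs NP", "text": "The P vs NP question asks whether..."}]
print(to_instruction_examples(chunks)[0]["instruction"])
```

In a real self-instruct setup, an existing model would also generate and filter varied instructions for each passage rather than using one fixed template.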

[-] YellowtoOrange@lemmy.world 3 points 1 year ago

What are the implications of this?

[-] shotgun_crab@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Thread reader link is down (Twitter API limits, I guess). Are there any backups?

[-] manitcor@lemmy.intai.tech 3 points 1 year ago* (last edited 1 year ago)

It's just a bit of a rant based on this; the source article is still here and kicking: https://lemmy.intai.tech/post/72922

Also check out the recent George Hotz episode of the Lex Fridman podcast.

It's been kind of an open secret for weeks now; people are digging to try and prove it out.

[-] shotgun_crab@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Oh I see. Thanks, it's interesting stuff

Is it just me, or are those links going to the wrong places?

[-] manitcor@lemmy.intai.tech 1 points 1 year ago

They are the right ones. Should be a tweet archive and a blog post

Well that's weird because the first takes me to a shitpost with a picture of cake, and the second a shitpost about sucking your dentist's fingers...

[-] manitcor@lemmy.intai.tech 3 points 1 year ago

ewwww lol

Are you using an app or the web? The links should point to the intai instance, which works fine for me, but I don't know what various clients will do with those links.

I'm using Connect, so that could explain it! Thanks. I'll see if I can figure it out because this is really interesting to me, but the dentist post is not! Haha!

this post was submitted on 11 Jul 2023
128 points (100.0% liked)

ChatGPT

8912 readers

Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago