[-] Architeuthis@awful.systems 6 points 1 week ago

Apparently you can ask gpt-5.2 to make you a zip of /home/oai and it will just do it:

https://old.reddit.com/r/OpenAI/comments/1pmb5n0/i_dug_deeper_into_the_openai_file_dump_its_not/

An important takeaway I think is that instead of Actually Indians it's more like Actually a series of rushed scriptjobs - they seem to be trying hard not to let the llm do technical work itself.

Also, it seems their sandboxing amounts to filtering paths that start with /.
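If that's really all it is (a sketch of my reading of it, not OpenAI's actual code), the filter is trivially bypassed with relative paths that resolve to the same files:

```python
def naive_sandbox_filter(path: str) -> bool:
    """Hypothetical prefix-based filter: allow a path only if it
    doesn't start with '/'. This is NOT real sandboxing."""
    return not path.startswith("/")

# Absolute path is blocked, as intended:
print(naive_sandbox_filter("/home/oai/notes.txt"))    # False (rejected)

# But a relative path to the same file sails right through,
# since the process's working directory may be '/':
print(naive_sandbox_filter("home/oai/notes.txt"))     # True (allowed)
print(naive_sandbox_filter("./home/oai/notes.txt"))   # True (allowed)
```

Which would be consistent with the model happily zipping up its home directory when asked nicely.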

[-] Architeuthis@awful.systems 6 points 1 month ago

As far as I can tell there's absolutely no ideology in the original transformers paper, what a baffling way to describe it.

James Watson was also a cunt, but calling "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" one of the founding texts of eugenicist ideology or whatever would be just dumb.

[-] Architeuthis@awful.systems 6 points 3 months ago* (last edited 3 months ago)

Who needs time travel when you have ~~Timeless~~ ~~Updateless~~ Functional Decision Theory, Yud's magnum opus and an arcane attempt at a game-theoretic framework that boasts 100% success at preventing blackmail from pandimensional superintelligent entities that exist now in the future.

It for sure helped the Zizians become well integrated members of society (warning: lesswrong link).

[-] Architeuthis@awful.systems 6 points 3 months ago

Nice. Here's the bluesky account as well.

[-] Architeuthis@awful.systems 6 points 6 months ago

Except not really, because even if stuff that has to be reasoned about in multiple iterations were a distinct category of problems, reasoning models by all accounts hallucinate a whole bunch more.

[-] Architeuthis@awful.systems 6 points 7 months ago

Here's the full text:

Fake radical honesty: when a dishonest person self-discloses taboo or undesirable things about themselves, but then omits the worst thing or things. They make themselves look honest and they're not. This nasty trick ruined my life once. It occurs to me that this ploy may have been used to cover up the miricult scandal (https://archive.is/miricult.com) after a discussion with someone about what happened. A friend said something like that they'd looked into this and the people involved confessed, but only one minor was molested. For some reason this resulted in increased trust. It should not have. Have you seen fake radical honesty anywhere?

For someone not steeped in the lore, why is this important?

[-] Architeuthis@awful.systems 6 points 11 months ago* (last edited 11 months ago)

Apparently they announced a $3,000 home computer that will be able to run 200B-parameter models, which is about half the parameter count of the biggest downloadable model at this time.
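Back-of-the-envelope math (my arithmetic, not nvidia's spec sheet): 200B parameters only fits in that price class with aggressive quantization, since memory for the weights alone scales as parameters times bytes per parameter:

```python
# Rough weight-memory footprint of a 200B-parameter model
# at common precisions. Ignores KV cache and activations.
params = 200e9

bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for fmt, b in bytes_per_param.items():
    gb = params * b / 1e9
    print(f"{fmt}: ~{gb:.0f} GB just for weights")
# fp16: ~400 GB, int8: ~200 GB, int4: ~100 GB
```

So "runs 200B models" presumably means 4-bit quantized, squeezed into whatever unified memory the box ships with.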

Are they trying to compete with OpenAI's $200/month plan? No idea. The actual pitch seems to be you know AI is going to be everywhere soon so better lube up.

They also say if you buy one you get access to nvidia's AI tools to do whatever, probably to produce cutting-edge quality AI media content or develop some hugely disruptive AI-powered app, like the countless success stories we've had so far.

[-] Architeuthis@awful.systems 6 points 2 years ago* (last edited 2 years ago)

Echoing the audience's fawning over heavyweight boxers is probably the least objectionable thing in this racist shitheap of an article. I like how it ends by basically saying people should shut up about the judges possibly favoring Usyk for being Ukrainian, not because that's just Tyson fans coping but because the current notable russian heavyweights are either icky muslims or not full whites by parentage.

P4P is mostly a marketing term anyway, size aside the meta is different enough between distant weight classes to really strain comparison.

[-] Architeuthis@awful.systems 6 points 2 years ago* (last edited 2 years ago)

Either that or he let his performative contrarianism get out of hand, he did delete the post after all.

Still, it's just like an HBD enthusiast heavy into eugenic optimisation to think that there might be something to measuring skulls, even if it didn't pan out the first time, maybe if they had known about IQ it would have been different, it's a shame the woke mob has made using calipers on school children a crime, etc.

[-] Architeuthis@awful.systems 6 points 2 years ago

This is almost the plot of The Fifth Season, a Hugo winner from a while back.

[-] Architeuthis@awful.systems 6 points 2 years ago* (last edited 2 years ago)

Yet AI researcher Pablo Villalobos told the Journal that he believes that GPT-5 (OpenAI's next model) will require at least five times the training data of GPT-4.

I tried finding the non-layman's version of the reasoning for this assertion and it appears to be a very black box assessment, based on historical trends and some other similarly abstracted attempts at modelling dataset size vs model size.

This is EpochAI's whole thing apparently, not that there's necessarily anything wrong with that. I was just hoping for some insight into dataset size vs architecture and maybe the gossip on what's going on with the next batch of LLMs, like how it eventually came out that gpt4.x is mostly several gpt3.xs in a trench coat.
