[-] Amoeba_Girl@awful.systems 30 points 9 months ago* (last edited 9 months ago)

To be honest, as someone who's very interested in computer generated text and poetry and the like, I find generic LLMs far less interesting than more traditional markov chains because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. Probably a homegrown neural network would have better results.
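(For anyone who hasn't played with them: the "traditional markov chains" mentioned here are tiny models that just record which word follows which in a source text and then walk those transitions at random — which is exactly where the surprising juxtapositions come from. A minimal sketch, in Python; the function names and the word-level, fixed-order design are illustrative choices, not any particular library's API.)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain: pick a start state, then repeatedly sample a successor."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(out)):
        successors = chain.get(state)
        if not successors:          # dead end: no word ever followed this state
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)
```

With `order=1` the output is maximally scrambled and whimsical; raising `order` makes it hew closer to the source text — the same smoothness-vs-surprise trade-off the comment is complaining about, in miniature.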

[-] Amoeba_Girl@awful.systems 36 points 9 months ago

Ah man, if there's one thing autistic kids love, it's the sudden and arbitrary removal of an object they depend on!

[-] Amoeba_Girl@awful.systems 33 points 11 months ago

This is cool but will any of it explain the most pressing MrBeast question: why does he smile like that? I'm assuming it's because he's always thinking about how terrible a person he is.

[-] Amoeba_Girl@awful.systems 35 points 1 year ago

Having a conscience? There's no career in that!

[-] Amoeba_Girl@awful.systems 41 points 1 year ago

It's just a tool, like cars! My definition of tools is things that are being forced on us even though they're terrible for the environment and make everyone's life worse!

[-] Amoeba_Girl@awful.systems 37 points 1 year ago

Spam machines are only ever funny or interesting by accident. The more they smooth out the wrinkles the more creatively useless they become. The tension is sort of fascinating.

Like I've always been interested in generative poetry and other manglings of text, and ChatGPT's so fucking dull compared to putting a sentence through babelfish a few times.

[-] Amoeba_Girl@awful.systems 46 points 1 year ago

cool graph what's the x axis

[-] Amoeba_Girl@awful.systems 43 points 1 year ago

Malcolm and Simone Collins with their children – Octavian George, four, Torsten Savage, two, and Titan Invictus, one – at home in Pennsylvania.

bye

[-] Amoeba_Girl@awful.systems 123 points 1 year ago

What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”

And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!

[-] Amoeba_Girl@awful.systems 167 points 1 year ago

From Re-evaluating GPT-4’s bar exam performance (linked in the article):

First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.

Ohhh, that is sneaky!

[-] Amoeba_Girl@awful.systems 52 points 1 year ago* (last edited 1 year ago)

I love the way these idiots keep incrementing the number on their ChatGPT fantasy as if it's a sufficient image of the future and it's going to get everyone on board. Complete failure of imagination, don't try to picture any actual use for it or anything, just make it... more.

[-] Amoeba_Girl@awful.systems 84 points 2 years ago

Oh well done, you added noise to a line going up!
