[-] khalid_salad@awful.systems 11 points 1 week ago* (last edited 1 week ago)

Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?

Edit: maybe I need to take a break from Mystery AI Hype Theater 3000.

[-] khalid_salad@awful.systems 12 points 2 weeks ago

maybe they think the awful systems get designed here

[-] khalid_salad@awful.systems 34 points 3 weeks ago* (last edited 3 weeks ago)

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of "emergent").

So I think this research is useful as a response to these, although I think "fuck off, promptfondler" is pretty good too.

[-] khalid_salad@awful.systems 23 points 1 month ago

Everybody knows that all languages derive from ULTRAFRENCH.

[-] khalid_salad@awful.systems 36 points 1 month ago

So Geoffrey Hinton is a total dork.

> Hopefully, [this Nobel Prize] will make me more credible when I say these things really do understand what they're saying. [There] is a whole school of linguistics that comes from Chomsky that thinks it's nonsense to say these things understand language. That school is wrong. Neural nets are much better at processing language than anything produced by the Chomsky school of linguistics.

[-] khalid_salad@awful.systems 13 points 1 month ago

https://www.bbc.com/news/articles/c62r02z75jyo

> It’s going to be like the Industrial Revolution - but instead of our physical capabilities, it’s going to exceed our intellectual capabilities ... but I worry that the overall consequences of this might be systems that are more intelligent than us that might eventually take control

😩

[-] khalid_salad@awful.systems 12 points 1 month ago

"I only had this problem because I was very reckless," he continued, "partially because I think it's interesting to explore the potential downsides of this type of automation. If I had given better instructions to my agent, e.g. telling it 'when you've finished the task you were assigned, stop taking actions,' I wouldn't have had this problem.

just instruct it "be sentient" and you're good, why don't these tech CEOs understand the full potential of this limitless technology?

[-] khalid_salad@awful.systems 17 points 1 month ago

Every few years there is some new CS fad that people try to trick me into doing research in: "algorithms" (my actual area), then quantum, then blockchain, then AI.

Wish this bubble would just fucking pop already.

[-] khalid_salad@awful.systems 16 points 1 month ago

> go gatekeeper somewhere else

Me, showing up to a chemistry discussion group I wasn't invited to:

Alchemy has valid use cases. If you want to be pedantic about what alchemy means, go gatekeep somewhere else.

[-] khalid_salad@awful.systems 11 points 2 months ago

Could it be because a statistical relation isn't the same as a semantic one? No, I must be prompting it wrong. I'll just add "engineer" to my title and then everyone will take me seriously.

[-] khalid_salad@awful.systems 11 points 2 months ago

If I were unable to write a novel in a month^1^ but really wanted to, and some smug little shit came up to me and offered to ghostwrite it for me, I would not be happy. How is SALAMI-generated text any different?

1: I definitely can't.

[-] khalid_salad@awful.systems 16 points 2 months ago* (last edited 2 months ago)

https://www.404media.co/this-is-doom-running-on-a-diffusion-model/

We can boil the oceans to run a worse version of a game that can run at 60fps on a potato. The really cool part is that we need the better version of the game to exist in the first place, and the new version only runs at 20fps.

