submitted 6 months ago* (last edited 6 months ago) by dgerard@awful.systems to c/techtakes@awful.systems
[-] rektdeckard@lemmy.world 21 points 6 months ago

MBA idiots and Economists consistently overestimate the abilities (and the underlying nature) of AI, because the output of AI is so much like how they speak: eloquent, confidently wrong, unconcerned with ground truth. They see themselves in AI, and they consistently overestimate themselves and their abilities.

[-] sc_griffith@awful.systems 19 points 6 months ago

i can't find a hole in this argument

[-] self@awful.systems 13 points 6 months ago

they phrase it so your brain will think otherwise, but I just want to point out the maybe obvious so nobody else has to do a double-take at the quoted paragraphs:

their example of what an ultra-intelligent AGI could do is of a human speedrunner using a glitch to beat Minecraft in 20 seconds. this asshole is taking something humans are excellent at and saying “but what if AI could do this, and also what if Minecraft was real life?” this is literal baby shit. like, even tool-assisted speedruns are the product of a shitload of human research into the problem space, and the tool’s just executing impossibly precise game inputs programmed, again, by a human. this is another space where AI sucks compared with regular human effort.

and speaking of which, does anyone remember the early OpenAI and Google marketing where they had an LLM play Pac-Man or some shit at supposedly superhuman levels? can anyone dig up an outcome for any of those in the form of a record on any credible speedrunning site or during an event? cause speedrunning has a ton of categories including stuff like dog-assisted runs, where you train your dog to play the game, and it’s all considered valid as different forms of skill applied to the game. the one thing you can’t do is cheat, and they’re very good at verifying runs (i.e., you must be provably using the method you claim, and you can’t splice video together or use emulator cheats to achieve a better run). so where are the verified LLM speedrun records?

[-] cstross@wandering.shop 8 points 6 months ago

@self He hasn't even caught up with what Vernor Vinge was talking about in his 1993 paper on the Singularity, which the current crop of starry-eyed singularitarians seem not to have read: https://accelerating.org/articles/comingtechsingularity

[-] dgerard@awful.systems 11 points 6 months ago

> The error bars here are, of course, extremely large. Still,

[-] aio@awful.systems 6 points 6 months ago

Analyzing our data we conclude with 95% confidence that within a decade the Dyson Sphere Any% TAS time will be reduced below 55 seconds (± 1E10 years).

[-] YourNetworkIsHaunted@awful.systems 2 points 6 months ago

They're catching on that "big if true" is being recognized to mean "this is bullshit," so they're trying to compensate by using more words.

[-] swlabr@awful.systems 11 points 6 months ago

Me, as a jock bully: Well of course these nerds would find finishing in 20 seconds an achievement

[-] zbyte64@awful.systems 19 points 6 months ago

That people expect AGI before self-driving cars is just ridiculous.

[-] swlabr@awful.systems 16 points 6 months ago

Given the all-consuming desire of Silicon Valley to accelerate us into a Mad Max-style dystopia, the most foreshadowing thing about SF has gotta be all the (human) poop on the sidewalk.

[-] gerikson@awful.systems 14 points 6 months ago

Arrives like a wet turd hitting the bottom of the bowl at HN:

https://news.ycombinator.com/item?id=40576324

there are some promptfondlers trying valiantly to defend it, but most commenters correctly identify the author as a kid who doesn't know shit.

[-] gnomicutterance@awful.systems 20 points 6 months ago

> Some scientific breakthroughs were memes at first

name one.

[-] mii@awful.systems 16 points 6 months ago

> I'm reading Feynman's lectures on electromagnetism right now, and GPT-4o can answer questions and help me with the math. I doubt that even a smart high schooler would be able to do it.

Ten bucks this guy hasn’t double-checked anything his chatbot told him but accepted it as truth because it used big words in grammatically coherent ways.

[-] mountainriver@awful.systems 11 points 6 months ago

And here I thought it was easy to find high schoolers that are both wrong and sure of themselves.

[-] blakestacey@awful.systems 11 points 6 months ago

Electromagnetism is a standard subject covered in a bajillion books, so the training set is probably full of repeated explanations of the basic examples. That sounds like an excellent recipe for "AI" bilge-water that is just coherent enough for a student to miss where it goes wrong.

[-] Jayjader@jlai.lu 13 points 6 months ago* (last edited 6 months ago)

> By the end of the decade, American electricity production will have grown tens of percent

..... don't worry guys AI will totally save us from climate change! /s

[-] pikesley@mastodon.me.uk 10 points 6 months ago

@dgerard "From GPT-4 to AGI: Counting the OOMs" wait this dude is wetting his knickers about Out Of Memory errors?

[-] 200fifty@awful.systems 9 points 6 months ago

No no, it's "order of magnitudes". It's like "surgeons general."

[-] froztbyte@awful.systems 8 points 6 months ago

Counterpoint: in Afrikaans (and likely Dutch I guess? Haven’t checked) “oom” is “uncle”

You may now proceed to enjoying this little diversion as much as I do

[-] skillissuer@discuss.tchncs.de 11 points 6 months ago

success has many fathers, but the best chatgpt can get are uncles

[-] Soyweiser@awful.systems 3 points 6 months ago

Dunno about Afrikaans, but you are correct about Dutch.

[-] froztbyte@awful.systems 4 points 6 months ago

you can be sure about afrikaans (since I am)

[-] blakestacey@awful.systems 8 points 6 months ago

Scott Aaronson says that Aschenbrenner displays "unusual clarity, concreteness, and seriousness".

That's it, that's the joke

[-] BigMuffin69@awful.systems 3 points 6 months ago

You know, for a blog that's ostensibly about computational complexity, you'd think Scott would show a little more skepticism toward the tech bro saying "all we need is 14 quintillion x compute to solve the Riemann hypothesis"

[-] dgerard@awful.systems 3 points 2 months ago
[-] Xraygoggles@lemmy.world 3 points 6 months ago

Great job with paean. Thanks for teaching me a new word.

this post was submitted on 07 Jun 2024
42 points (100.0% liked)
