Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting, and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

[-] froztbyte@awful.systems 13 points 5 months ago* (last edited 5 months ago)

you know, it'd be a damn shame if someone made one of those megalists which contained all the various places that had promptboxes that could be used to synthesize bad code without having to pay your own money to openai subscriptions or so

[-] self@awful.systems 11 points 5 months ago

tired: stealing hundreds of dollars of electricity to mine hundreds of pennies in crypto

wired: spiking some project manager’s OpenAI bill to unsustainable levels by having their chatbot generate the worst nonsense ever experienced by a human

[-] froztbyte@awful.systems 8 points 5 months ago

inspired: crowdsourced prompt-based captcha solving

[-] dgerard@awful.systems 8 points 5 months ago

it feels a little mean doing this to a library, even if their use of AI is obviously doomed to failure, so a list of public access GPT prompts would be a service.

[-] froztbyte@awful.systems 6 points 5 months ago

Yeah, hospitals/libraries/schools/etc should not be things on such a list generally

(In two minds about some of the US colleges, but that’s a different kettle of barbs)

[-] Soyweiser@awful.systems 6 points 4 months ago

As soon as some of these LLMs get a math module that does math correctly (and not just via the LLM lookup-table thing), people could write scripts to externalize some of the more intensive calculations needed for crypto mining. Sure, it will be inefficient as fuck, and the chance of getting a coin reward will be low, but it will be free.
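To make the scheme concrete, here is a toy sketch of the kind of calculation that would be offloaded, assuming the chatbot exposes a Python-executing tool. The double SHA-256 proof-of-work check is Bitcoin-flavored; the function name, the header bytes, and the tiny toy difficulty are all made up for illustration and don't correspond to any real mining setup.

```python
import hashlib

def try_nonce(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    """One proof-of-work attempt: double-SHA-256 the header plus nonce
    and check whether the digest falls under the difficulty target."""
    digest = hashlib.sha256(
        hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    ).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Scan a small nonce range. A miner abusing a chatbot's code-execution
# tool would paste loops like this into the promptbox, letting the
# provider's container burn the CPU time.
winners = [n for n in range(50_000) if try_nonce(b"example-header", n, 16)]
```

At a real difficulty (tens of leading zero bits, not 16) the expected hashes per reward are astronomical, which is why the "inefficient as fuck, but free" framing above is about right.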

[-] self@awful.systems 8 points 4 months ago

last week there were a couple of articles about how easy it is to craft an input that makes public chatgpt bots execute scripts (usually as root) on their hosting containers. that’s almost definitely the result of a module like that being implemented for better programming-related results (aka fucking cheating), so this is very likely already happening

[-] Soyweiser@awful.systems 4 points 4 months ago

Happy to at least not be the first to think of that idea, and sad to hear people will wreck the commons more.

[-] dgerard@awful.systems 4 points 4 months ago
[-] self@awful.systems 6 points 4 months ago

found the original post! https://mastodon.social/@kennwhite/112290497758846218 the prompt to make them execute code is incredibly basic. no idea right now if the exploit is in the chatbot framework or the model itself though

[-] self@awful.systems 5 points 4 months ago

oh shit, somehow I figured you knew already! I’ll skim through my browser history and masto boosts and see if I can find one of the articles

[-] skillissuer@discuss.tchncs.de 5 points 5 months ago

counterpoint: it gives openai more money

[-] froztbyte@awful.systems 6 points 5 months ago

not necessarily/could be offset? openai is still in that "we'll set fire to money to make ourselves look good" stage of VC dreamery; find entities operating on credits, slap there

but possibly even in the case where it’s still straight transactional, it might be a net negative for them: revenue and actual usage go up, but still no meaningful shift toward their product becoming good. it’ll just make them look even worse

the bigger problem (to which this suggestion would most certainly contribute things getting worse) is that they're still burning other important resources. I don't really have a good/clever proposal to this which isn't something like "well, burn their DCs to the fucking ground" (or other more creative forms of invasive service interruption)

[-] skillissuer@discuss.tchncs.de 5 points 5 months ago

they will burn through that money pretty quickly and without turning a profit, but this can make one of their ratfucked metrics go up, which could hype up some segment of the stonk market. because they’re now fueled by hype and vc money, any new thing that sustains that hype would be a bad thing (adoption here, kinda, at least as seen through excel)

i agree, a 120mm mortar is much cheaper, faster, and more irreversible, but openai going bankrupt and forced to sell their kit at least would generate less waste

[-] froztbyte@awful.systems 1 points 4 months ago

forced to sell their kit at least would generate less waste

and a net positive in terms of human happiness

this post was submitted on 21 Apr 2024
30 points (100.0% liked)

TechTakes
