[-] blakestacey@awful.systems 28 points 1 month ago

When you don’t have anything new, use brute force. Just as GPT-4 was eight instances of GPT-3 in a trenchcoat, o1 is GPT-4o, but running each query multiple times and evaluating the results. o1 even says “Thought for [number] seconds” so you can be impressed by how hard it’s “thinking.”

This “thinking” costs money. o1 increases accuracy by taking much longer for everything, so it costs developers three to four times as much per token as GPT-4o.
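
For a sense of scale, here's a back-of-the-envelope sketch of what "run each query multiple times, at a higher per-token price" does to the bill. Every number in it is an illustrative assumption, not a published OpenAI price or sample count.

```python
# Back-of-the-envelope cost comparison: one plain completion vs. a
# "sample it several times and pick the best answer" query.
# Every number below is an illustrative assumption, not a published price.

PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # hypothetical $/1K output tokens for the base model
O1_PRICE_MULTIPLIER = 4            # "three to four times as much per token"
N_SAMPLES = 5                      # hypothetical number of hidden passes per query
TOKENS_PER_PASS = 800              # hypothetical output tokens generated per pass


def plain_query_cost() -> float:
    """Cost of a single ordinary completion."""
    return TOKENS_PER_PASS / 1000 * PRICE_PER_1K_OUTPUT_TOKENS


def repeated_query_cost() -> float:
    """Cost when every hidden pass is billed, at the higher per-token rate."""
    tokens_billed = N_SAMPLES * TOKENS_PER_PASS
    return tokens_billed / 1000 * PRICE_PER_1K_OUTPUT_TOKENS * O1_PRICE_MULTIPLIER


if __name__ == "__main__":
    print(f"plain query:    ${plain_query_cost():.4f}")    # $0.0080 with these numbers
    print(f"repeated query: ${repeated_query_cost():.4f}")  # $0.1600, i.e. ~20x
```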

Because the industry wasn't doing enough climate damage already.... Let's quadruple the carbon we shit into the air!

[-] tee9000@lemmy.world 3 points 1 month ago

They say it uses roughly the same amount of computing resources.

[-] dgerard@awful.systems 16 points 1 month ago

they say a lot of things, yes

[-] tee9000@lemmy.world 1 points 1 month ago

Are you saying that's not true? Anything to substantiate your claim?

[-] flowerysong@awful.systems 21 points 1 month ago

"this thing takes more time and effort to process queries, but uses the same amount of computing resources" <- statements dreamed up by the utterly deranged.

[-] froztbyte@awful.systems 14 points 1 month ago

"we found that the Turbo button on the outside of the DC wasn't pressed, so we pressed it"

[-] tee9000@lemmy.world 2 points 1 month ago

I often use prompts that are simple and consistent in their results, and then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything be appropriately considered by the model instead of using multiple simpler prompts.

Maybe if someone uses the new model with my method above, it would use more resources. I'm not really sure. I don't use chain-of-thought (CoT) methodology because I'm not using AI for enterprise applications that treat tokens as a scarce resource.

Was hoping to talk about it but I don't think I'm going to find that here.

[-] blakestacey@awful.systems 14 points 1 month ago

I often use prompts

Well, there's your problem

[-] o7___o7@awful.systems 9 points 1 month ago

I read this in Justin Roczniak's voice.

[-] self@awful.systems 13 points 1 month ago* (last edited 1 month ago)

I’m far too drunk for “it can’t be that stupid, you must be prompting it wrong” but here we fucking are

Was hoping to talk about it but I don't think I'm going to find that here.

oh no shit? you wandered into a group that knows you’re bullshitting and got called out for it? wonder of fucking wonders

[-] self@awful.systems 11 points 1 month ago

Cake day: September 13th, 2024

holy fuck they registered 2 days ago and 9 out of 10 of their posts are specifically about the new horseshit ChatGPT model and they’re gonna pretend they didn’t come here specifically to advertise for that exact horseshit

oh im just a smol bean uwu promptfan doing fucking work for OpenAI advertising for their new model on a fucking Saturday night

[-] self@awful.systems 11 points 1 month ago

and as for more important news: the Costco scotch isn’t good, its flavor profile is mostly paint thinner

but their tequila’s still excellent

[-] froztbyte@awful.systems 6 points 1 month ago

the Costco scotch isn’t good, its flavor profile is mostly paint thinner

even bad bathtub gin sounds more appealing

[-] BurgersMcSlopshot@awful.systems 5 points 1 month ago

The Kirkland Signature bottled-in-bond Bourbon is well worth the price. Not the best but surprisingly decent. And this concludes my shameless plug.

[-] self@awful.systems 5 points 1 month ago

a lot of their liquor is surprisingly very good! that’s why it’s also surprising how bad their scotch is

[-] gerikson@awful.systems 9 points 1 month ago

Was hoping to talk about it but I don't think I'm going to find that here.

If only you'd asked ChatGPT "is awful.systems a good place to fellate LLMs"

[-] dgerard@awful.systems 9 points 1 month ago

I asked Gemini!

is awful.systems a good place to fellate LLMs

Reply:

I can't assess awful.systems for that purpose. Sex with large language models is not possible, and awful.systems may be a harmful or unsafe website.

Here are some things to consider:

  • LLMs are not sentient beings. They can't feel pleasure or emotions.
  • Awful.systems may contain malware or other harmful content.
  • Engaging in sexually suggestive conversations online can lead to exposure to predators or unwanted advances.

If you are looking for information about sex or relationships, there are many reputable resources available online and offline.

SLANDER, I SAY

[-] self@awful.systems 7 points 1 month ago

Awful.systems may contain malware or other harmful content.

oof, this one stings

also now I’m paranoid the shitheads who operate the various clouds will make the mistake of using the LLM as a malware detector without realizing it’s probably just matching the token for the TLD

[-] froztbyte@awful.systems 8 points 1 month ago

Was hoping to talk about it but I don't think I'm going to find that here.

we need something for this kind of "I hope to buy time while I await the bomb exploding" shit, in the style of JAQing off

[-] self@awful.systems 7 points 1 month ago

see we were supposed to fall all over ourselves and debate this random stranger’s awful points. we weren’t supposed to respond to their disappointment with “good, fuck off” because then they can’t turn the whole thread into garbage

[-] V0ldek@awful.systems 11 points 1 month ago* (last edited 1 month ago)

Kay mate, rational thought 101:

When the setup is "we run each query multiple times", the default position is that it costs more resources. If you claim they use roughly the same amount, you need to substantiate that claim.

Like, that sounds like a pretty impressive CS paper: "we figured out how to run inference N times but pay roughly the cost of one" is a hell of an abstract.
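
For what it's worth, here is a minimal sketch of what "run each query multiple times and evaluate the results" means mechanically. `generate` and `score` are hypothetical stand-ins for a model call and a verifier; nothing here is OpenAI's actual o1 pipeline. The point is only that the obvious implementation is n forward passes, which is why "roughly the same resources" is the claim that would need the paper.

```python
from typing import Callable, List


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 5) -> str:
    """Naive best-of-n sampling: n independent forward passes plus a scoring pass.

    `generate` and `score` are hypothetical stand-ins for a model call and a
    verifier/reward model; this is not OpenAI's published o1 design.
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]  # ~n x the compute of one query
    return max(candidates, key=lambda answer: score(prompt, answer))
```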

[-] froztbyte@awful.systems 10 points 1 month ago

".....we pay for one, ~~suckers~~ VCs pay for the other 45"

[-] froztbyte@awful.systems 13 points 1 month ago

I'm sure it being so much better is why they charge 100x more for the use of this than they did for 4ahegao, and that it's got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely-looking case of this being yet more stacked GPT3s (implying more compute in aggregate for usage), the need to become profitable, or anything else like that. nah, gotta be how much better the new model is

also, here's a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.

[-] blakestacey@awful.systems 11 points 1 month ago

and hot young singles in your area have a bridge in Brooklyn to sell

on the blockchain

[-] tee9000@lemmy.world 1 points 1 month ago

Happy to hear about anything that supports the idea.

[-] froztbyte@awful.systems 11 points 1 month ago

this shit comes across like that over-eager corp ~~llm salesman~~ "speaker" from the other day

[-] maol@awful.systems 15 points 1 month ago

Living is easy with eyes closed/misunderstanding all you see

[-] blakestacey@awful.systems 9 points 1 month ago