[-] RagnarokOnline@programming.dev 27 points 2 months ago

I had GPT-3.5 break down six 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names, and it did that too. I did re-read the summaries to make sure no duplicate info or hallucinations existed, and it only needed a couple of corrections.
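Roughly this kind of workflow, as a minimal sketch (the model name, prompt wording, and function here are illustrative, not my exact setup):

```python
# Minimal sketch: summarize and anonymize one interview transcript.
# Assumes the official openai Python client; prompt wording and model
# choice are illustrative, not the commenter's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_interview(transcript: str) -> str:
    # Long transcripts may need to be chunked to fit the context window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following interview transcript as a "
                    "bulleted list of key points. Replace every person's "
                    "name with a neutral label like 'Participant A'. Do "
                    "not add information that is not in the transcript."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
    # The output still needs a manual re-read against the source to
    # catch duplicated info and hallucinations.
```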

Beats manually summarizing that info myself.

Maybe their prompt sucks?

[-] froztbyte@awful.systems 41 points 2 months ago

“Are you sure you’re holding it correctly?”

christ, every damn time

[-] Jakeroxs@sh.itjust.works 4 points 2 months ago

That is how tools tend to work, yes.

[-] dgerard@awful.systems 16 points 2 months ago

we find they tend to post here, though not for long

[-] froztbyte@awful.systems 12 points 2 months ago

it makes me feel fucking ancient to find that this dipshit didn't seem to get the remark, and it wasn't even that long ago

[-] istewart@awful.systems 14 points 2 months ago

Jobs is Tech Jesus, but Antennagate is only recorded in one of the apocryphal books

[-] fasterandworse@awful.systems 16 points 2 months ago

"tools" doesn't mean "good"

good tools are designed well enough that it's clear how they're used, held, or what-fucking-ever.

fuck, these simpleton takes are a pain in the arse. They're always pushed by idiots who have based their whole world view on fortune-cookie aphorisms

[-] V0ldek@awful.systems 8 points 2 months ago

Said like a person who wouldn't be able to hold a hammer correctly on the first try

[-] dgerard@awful.systems 29 points 2 months ago

I got AcausalRobotGPT to summarise your post and it said "I'm not saying it's always programming.dev, but"

[-] pikesley@mastodon.me.uk 24 points 2 months ago

@RagnarokOnline @dgerard "They failed to say the magic spells correctly"

[-] HootinNHollerin@lemmy.world 18 points 2 months ago

Did you conduct or read all the interviews in full in order to verify there were no hallucinations?

[-] sxan@midwest.social 8 points 2 months ago

How did you make sure no hallucinations existed without reading the source material? And if you read the source material, what did using an LLM save you?

[-] TexasDrunk@lemmy.world 2 points 2 months ago

I also use it for that pretty often. I always double-check, and usually it's pretty good. Once in a great while it turns the summary into a complete shitshow, but I always catch it on a reread, ask a second time, and it fixes things up. My biggest problem is that I'm dragged into too many useless meetings every week, and this saves a ton of time over rereading entire transcripts and doing a poor job of summarizing because I have real work to get back to.

I also use it as a rubber duck. It works pretty well if you tell it what it's doing and tell it to ask questions.
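Something like this, as a rough sketch (the prompt wording is illustrative, not what I actually use):

```python
# Rough sketch of the rubber-duck setup described above; the prompt
# wording is illustrative, not the commenter's exact phrasing.
from openai import OpenAI

client = OpenAI()

DUCK_PROMPT = (
    "You are a rubber-duck debugging partner. I will explain what my "
    "code is supposed to do. Don't write code for me; instead, ask "
    "clarifying questions that force me to walk through my own logic."
)

history = [{"role": "system", "content": DUCK_PROMPT}]


def talk_to_duck(message: str) -> str:
    # Keep the whole conversation so the duck's questions stay on topic.
    history.append({"role": "user", "content": message})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```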

[-] YourNetworkIsHaunted@awful.systems 9 points 2 months ago

Isn't the whole point of rubber duck debugging that the method works when talking to a literal rubber duck?

[-] self@awful.systems 8 points 2 months ago

what if your rubber duck just released an entire fuckton of CO2 into the environment constantly, even when you weren’t talking to it? surely that means it’s better
