submitted 4 months ago* (last edited 4 months ago) by dgerard@awful.systems to c/techtakes@awful.systems
[-] steal_your_face@lemmy.ml 14 points 4 months ago* (last edited 4 months ago)

Is there even a way to check whether something was written by an LLM? The only way I can think of is to monitor their computers and also make them turn on their webcams to see if they're using any other devices.
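
For what it's worth, the detectors that do exist mostly lean on statistical tells like perplexity: another language model tends to find machine-written text suspiciously predictable. Here's a minimal sketch of that heuristic, assuming the Hugging Face transformers library; the model choice and threshold are entirely arbitrary, not any real detector's settings:

```python
# Perplexity heuristic sketch: LLM-generated text often scores
# LOW perplexity under another language model, because it is
# statistically "unsurprising". This is a toy, not a real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss; exp(loss) is the perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def looks_generated(text: str, threshold: float = 30.0) -> bool:
    # Hypothetical cutoff chosen for illustration only.
    return perplexity(text) < threshold
```

In practice the false-positive rate on ordinary human writing is high enough that nobody should trust anything like this for grading or moderation, which is rather the point of this thread.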

[-] luciole@beehaw.org 13 points 4 months ago

Oh that's beautiful. This is the crux of generative AI's disruptive potential: you can never tell for sure if it's AI, at least in theory. For most meaningful tasks its output is dubious enough to give it away. But for the mind-rotting stuff done to train the models, there's no way they can tell, unless they monitor their microtaskers. And proctoring is no trivial task. Considering the pittance they pay microtaskers, I doubt any form of effective proctoring would be worth the cost.

In the end Saltman will be the main victim of the disruption he hoped to unleash at large.
