[-] ebu@awful.systems 10 points 5 months ago* (last edited 5 months ago)

you have to scroll through the person's comments to find it, but it does look like they did author the body of the text themselves and uploaded it as a docx into ChatGPT. so points for actually creating something, unlike the AI bros

it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel we've all come to know and despise remains unknown

they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can "work in the background" on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestacey's other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed

[-] ebu@awful.systems 10 points 5 months ago

ah, yes, i'm certain the reason the slop generator is generating slop is because we haven't gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i'm certain this model, unlike literally every other model of the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm

[-] ebu@awful.systems 10 points 8 months ago* (last edited 8 months ago)

claiming to have customers you don't actually have so vocally that they have to sue you to get their names out of your mouth should be a death knell on its own, but the whole "pretending their already-expired three-month trial contract is still in effect for the full year" is a great way to find yourself pulling a Sam Bankman-Fried, except that you don't have a side company to pull $$$ from to cover your tracks

[-] ebu@awful.systems 10 points 9 months ago

it's cool that you discovered a word that lets you call Asahi Lina unnecessarily dramatic and attention-seeking in a way that lets you believe you "never gave an opinion on the matter" and that you're just neutrally observing a scientifically studied phenomenon!

wait no "cool" isn't the right word now is it

[-] ebu@awful.systems 10 points 10 months ago

i wonder which endocrine systems are disrupted by not having your head sufficiently stuffed into a toilet before being old enough to type words into nazitter dot com

[-] ebu@awful.systems 10 points 1 year ago

happy to see the draft get fleshed out. good writeup

[-] ebu@awful.systems 10 points 1 year ago

...gods i miss n-gate

[-] ebu@awful.systems 10 points 2 years ago

long awaited and much needed. i bestow upon you both the highest honor i can give: a place in my bookmarks bar

[-] ebu@awful.systems 10 points 2 years ago* (last edited 2 years ago)

putting my 2¢ forward: this is a forum for making fun of overconfident techbros. i work in tech, and it is maddening to watch a massively overvalued industry buy into yet another hype bubble, kept inflated by seemingly endless amounts of money from investors and VCs. and as a result it's rather cathartic to watch (and sneer at) said industry's golden goose shit itself to death over and over again due to entirely foreseeable consequences of the technology they're blindly putting billions of dollars into. this isn't r/programming, this is Mystery Science Theater 3000.

i do not care if someone does or does not understand the nuances of database administration, schema design, indexing and performance, or the tradeoffs between different types of primary keys. hell, i know just barely enough SQL to shoot myself in the foot, which is why i wouldn't try to write my own databases, even in the hypothetical situation where i'm engineering a startup that "extracts web data at scale with multimodal codegen", whatever that means.

if someone doesn't understand, and they come in expressing confusion or asking for clarification? that's perfectly fine -- hell, if anything, i'd welcome bringing people up to speed so they can join in the laughter.

but do not come in here clueless and confidently (in)correct the people doing the sneering and expect to walk away without a couple rotten tomatoes chucked at you. if you want to do that, reddit and hacker news are thataway.

[-] ebu@awful.systems 10 points 2 years ago

maybe you're referring to when i brought it up in last week's thread? and yeah, this is basically the same

can't wait for AI bros to invent the trolley problem

[-] ebu@awful.systems 10 points 2 years ago

love If Books Could Kill. highly recommend.

i can recognize that getting away with massive amounts of fraud and theft is sometimes as easy as just being the right kind of charming and personable guy, that someone who talks smooth gets the benefit of the doubt. what i don't understand is how SBF's outstandingly bad interpersonal skills didn't immediately disqualify him from the starry-eyed treatment he got (and still gets). is it really just the fact that he's rich?

[-] ebu@awful.systems 10 points 2 years ago

Ultimately, LLMs don’t use words,

LLM responses are basically paths through the token space, they may or may not overuse certain words, but they’ll have a bias towards using certain words together

so they use words but they don't. okay

this is about as convincing a point as "humans don't use words, they use letters!" it's not saying anything, just adding noise

So I don’t think this is impossible… Humans struggle to grasp these kinds of hidden relationships (consciously at least), but neural networks are good at that kind of thing

i can't tell what the "this" is that you think is possible

part of the problem is that a lot of those "hidden relationships" are also noise. knowing that "running" is typically an activity involving your legs doesn't help one parse the sentence "he's running his mouth", and part of participating in communication is being able to throw out these spurious and useless connections when reading and writing, something the machine consistently fails to do.

It’s incredibly useful to generate all sorts of content when paired with a skilled human

so is a rock

It can handle the tedious details while a skilled human drives it and validates the output

validation is the hard step, actually. writing articles is really easy if you don't care about the legibility, truthiness, or quality of the output. i've tried to "co-write" short-format fiction with large language models for fun, and it always devolved into me deleting large chunks of the machine's output -- or all of it -- and rewriting it by hand. i was more "productive" with a blank notepad.exe. i haven't tried it for documentation or persuasive writing, but i'm pretty sure it would be a similar situation there, if not more so, because in nonfiction writing i actually have to conform to reality.

this argument always baffles me whenever it comes up, as if writing is 5% coming up with ideas and the other 95% is boring, tedious, pen-in-hand (or fingers-on-keyboard) execution. i've yet to meet a writer who believes this -- all the writing i've ever done required more-or-less constant editorial decisions, from the macro scale of format and structure down to individual word choices. have i sufficiently introduced this concept? do i like the way this sentence flows, or does it need to go earlier in the paragraph? how does this tie into the feeling i'm trying to convey or the argument i'm trying to put forward?

writing, as a skill, is that editorial process (at least to one degree or another). sure, i can defer all of those choices to the machine and get the statistically-most-expected, confusing, factually dubious, aimless, unchallenging, and uncompelling text out of it. but if i want anything more than that (and i suspect most writers do), then i am doing 100% of that work myself.
