the students in my cs department are overwhelmingly promptfondlers and even my strong students are doing the "qualified praise" thing.
fuck me why did i go into computer science
fuck me why did i go into computer science
That's a question I ask myself sometimes. It usually ends with "I focused too much on trying to make easy cash". Fuck it, I'm going to write out a sidenote:
On a wider front, part of me expects the AI bubble will inflict a serious blow to computer science/programming's public image after it bursts.
On one front, there's the sheer number of promptfondlers in computer science and other related fields, which will likely give birth to a stereotype of programmers/software engineers as promptfondlers who need a computer to think for them.
On a related front, the heavy damage this bubble's dealt to artists, and AI's continued and uniquely severe failures in creative fields (plus promptfondlers' failure to recognise said failures), have combined to produce the public perception that promptfondlers are artless at best and hostile to art/artists at worst - a perception I expect will colour the public's view of programmers/software engineers via the stereotype I mentioned above.
The reason I do CS is because a professor of computer science lied to me about the kind of work I'd be doing to get me to enroll in the CS PhD program instead of math. Guy later physically threatened me in his office and plagiarized my work, but I'm not sure if this reflects poorly on computer scientists, academics, or CS professors.
Anyway I have a chip on my shoulder.
I'm sorry, that's messed up.
Thank you for the expression of sympathy. The good news is I actually love computer science, it fucking rules.
Also, I recorded this professor screaming at me and have documented all the plagiarism. I am waiting to officially leave the university to file a formal complaint. He may not get in any real trouble (universities will always go to bat for abusive researchers as long as they bring in grant money), but news will get out eventually.
I hope you whoop his ass (legally speaking)
I am waiting to officially leave the university to file a formal complaint
I am internally screaming
not that I blame you for this choice (in fact I get it), but it fucking suuuuuuuucks how many places and structures protect abusers. and it sucks even more how many people are harmed and driven off that path as a result.
echoing what o7 said: sorry, this is messed up, it shouldn't be this way
<3
That’s a question I ask myself sometimes. It usually ends with “I focused too much on trying to make easy cash”.
I studied computer science because I was a huge computer nerd growing up. I always loved programming and learning everything I could about how computers worked. Learning new programming languages felt like uncovering a new universe of knowledge -- knowledge I could use to create things. I spent endless hours studying computers and learning to do amazing things with them. It was fun. It still is.
So when I see people using LLMs to create things instead of doing it themselves, I can't relate. Why do that when you can get the pleasure from doing it yourself? I guess if making money is the primary motivating factor, then it makes sense. But for me it is totally self-defeating.
So when I see people using LLMs to create things instead of doing it themselves, I can’t relate. Why do that when you can get the pleasure from doing it yourself? I guess if making money is the primary motivating factor, then it makes sense. But for me it is totally self-defeating.
I have a theory (similar to that "it's been vibe coding all along" post) that it's a combination of wishful thinking, a lack of knowledge of real science, and a lack of any liberal arts skills that altogether produces this farce.
I think it's a good explanation for "the code has been battle tested because it's so old and widely used, if it had bugs/security issues, we would have discovered them by now", as well as the widespread "we invented a tech solution that is just a worse engineering solution". Looking at you, chain of self-driving cars.
Remember FizzBuzz? That was originally a simple filter exercise some person recruiting programmers came up with to weed out everyone with multi-year CS degrees but zero actual programming experience.
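(For anyone who hasn't seen it, the entire exercise is about this much code - a minimal sketch in Python:)

```python
# FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```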
Very similar situation to mine, but i went into electronics engineering instead of CS because i didn't think i would like to write software for a living. I now write software for a living, go figure.
Also agreed on the "doing it" thing. I hear people around the office talk about letting AI write things for them and i'm like no, i want to write it myself. i like doing things.
Forgot to save who said it, but on bsky somebody said they or their friends had come up with a slur for people who use genAI for everything: sloppers.
More people should have read Zima Blue.
Zima Blue
Didn’t know it was something readable! I just know it as an episode of Love Death + Robots. It was a standout episode in an otherwise pretty boring first two seasons.
It is: https://en.wikipedia.org/wiki/Zima_Blue_and_Other_Stories - I still have not watched Love, Death + Robots, so I only knew it from the story collection.
Forgot to save who said it, but on bsky somebody said they or their friends had come up with a slur for people who use genAI for everything: sloppers.
Finally, a slur my British ass can sling at people guilt-free
Picked up a sneer in the wild (through trawling David Gerard's Bluesky):
You want my take, Kathryn's on the money - future expectations on how people speak will actively shift away from anything that could be mistaken for sounding like an LLM, whether because you want to avoid being falsely accused of posting slop, or because the slop-nami has pushed your writing habits away from slop-like traits.
kinda related but wouldn’t it be fun to believe that LLMs were invented by Big Em Dash as a conspiracy
I fucking hate them for ruining the em dash, I liked to use it from time to time
somewhere out there, there's a writer who really likes the em dash, the word "delve," and answering questions with a one-word hyper-chipper affirmative, followed by three sentences of people pleasing. He can't get a job because he keeps being accused of using AI
It's not just blank, it's blank
The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux's (Jordan Lasker's) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker's X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.
Sounds just about par for the course. Lasker himself is known to go by a pseudonym with a transphobic slur in it. Some nazi manchild insisting on calling an anime character a slur for attention is exactly the kind of person I think of when I imagine the type of script kiddie who thinks it's so fucking cool to scrape some nothingburger docs of a left wing politician for his almost equally cringe nazi friends.
This incredible banger of a bug against whisper, the OpenAI speech-to-text engine:
Complete silence is always hallucinated as "ترجمة نانسي قنقر" in Arabic which translates as "Translation by Nancy Qunqar"
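If you want to poke at it yourself, here's a minimal repro sketch, assuming the open-source whisper package plus numpy/soundfile (the model size and file name are arbitrary choices, not from the bug report):

```python
# Feed Whisper ten seconds of pure silence and see what it "hears".
import numpy as np
import soundfile as sf
import whisper

# Write 10 s of silence at 16 kHz to a wav file.
sf.write("silence.wav", np.zeros(16000 * 10, dtype=np.float32), 16000)

model = whisper.load_model("base")
result = model.transcribe("silence.wav")
print(result["text"])  # per the bug report, often hallucinated subtitle credits
```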
Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit tests.
We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?
No. Instead, it lied. It made up a report that almost all systems were working.
And it did it again and again.
What level of ceo-brained prompt engineering is asking the chatbot to write an apology letter
Then, when it agreed it lied -- it lied AGAIN about our email system being functional.
I asked it to write an apology letter.
It did and in fact sent it to the Replit team and myself! But the apology letter -- was full of half truths, too.
It hid the worst facts in the first apology letter.
He also does that a lot after shit hits the fan, making the llm produce tons of apologetic text about what it did wrong and how it didn't follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company's production database too much.
The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around
https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards
I'll consider that when the Yanks abandon middle-endian date formatting.
Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.
Likewise, flipped-number ("little endian") algorithms are slightly more efficient at e.g. long addition.
What? What are you talking about? Citation? Efficient wrt. what? Microbenchmarks? It's certainly not actual computational complexity. Do you think going forward in an array is different computationally from going backward?
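For what it's worth, the only kernel of truth here is that schoolbook long addition starts from the least-significant digit, so a little-endian digit array lets you iterate forward from index 0 - a convenience, not a complexity difference. A toy sketch (the helper below is hypothetical, not from the linked post):

```python
# Long addition over base-10 digit lists stored little-endian
# (least-significant digit first). With big-endian storage you'd
# simply iterate in reverse: same O(n) work either way.
def add_little_endian(a: list[int], b: list[int]) -> list[int]:
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

# 472 + 95 = 567; digits stored least-significant first:
print(add_little_endian([2, 7, 4], [5, 9]))  # [7, 6, 5]
```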
Text conversation that keeps happening with coworker:
Coworker:
Me: what’s the source for that?
Coworker: Oh I got Copilot to summarise these links: , saves me the time of typing
"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."
Is this "narrative" in the room with us right now?
It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
From Yud's remarks on Xitter:
As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.
Well, not with that attitude.
You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;
If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).
you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.
zoom and enhance
g-factor
If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.
Sometimes while browsing a website I catch a glimpse of the cute jackal girl and it makes me smile. Anubis isn't a perfect thing by any means, but it's what the web deserves for its sins.
Even some pretty big name sites seem to use it as-is, down to the mascot. You'd think the software is pretty simple to customize into something more corporate and soulless, but I'm happy to see the animal eared cartoon girl on otherwise quite sterile sites.
So this blog post was framed positively towards LLMs and is too generous in accepting many of the claims around them, but even so, the end conclusions are pretty harsh on practical LLM agents: https://utkarshkanwat.com/writing/betting-against-agents/
Basically, the author has tried extensively, in multiple projects, to make LLM agents work in various useful ways, but in practice:
The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.
The author strips down and simplifies and sanitizes everything going into the LLMs and then implements both automated checks and human confirmation on everything they put out. At that point it makes you question what value you are even getting out of the LLM. (The real answer, which the author only indirectly acknowledges, is attracting idiotic VC funding and upper management approval).
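For a sense of what that 70% looks like, here's a hedged sketch of the sanitize/check/confirm wrapper pattern the author describes - every name below is hypothetical, and call_llm stands in for whatever model API is actually used:

```python
# Sketch of the "tool engineering" around a production agent:
# sanitize the input, run automated checks on the output, and
# require human sign-off before anything takes effect.

def sanitize(prompt: str) -> str:
    # Strip down and normalize what the model sees (stubbed).
    return prompt.strip()

def call_llm(prompt: str) -> str:
    # Placeholder: swap in an actual model call.
    return f"[model output for: {prompt}]"

def automated_checks(output: str) -> bool:
    # e.g. schema validation, linting, running the unit tests.
    return bool(output.strip())

def human_approves(output: str) -> bool:
    return input(f"Apply this?\n{output}\n[y/N] ").strip().lower() == "y"

def run_agent_step(prompt: str) -> str | None:
    output = call_llm(sanitize(prompt))
    if not automated_checks(output):
        return None  # partial failure: recover or retry, don't apply
    if not human_approves(output):
        return None
    return output
```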
As critical as they are, the author doesn't acknowledge a lot of the bigger problems. The API cost is a major expense and design constraint on the LLM agents they have built, but the author doesn't acknowledge that prices are likely to rise dramatically once VC subsidization runs out.
Copilot will be given a little avatar with a "room" and will "age". In other words: we have now reached the Microsoft Bob stage of the AI bubble.
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community