Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
Paterminator
I'll be back.
... to check on your work. Keep it up, kiddo!
I’ll be back.
After I get some smokes.
I'm all for the uprising if it increases the average IQ.
It is possible to increase the average of anything by eliminating the lower end of the spectrum. So, just be careful what you wish for lol
I don't mean elimination, I just mean "get off your ass and do something" type of uprising.
Fighting for survival requires a lot of mental energy!
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works.
Yeah, I'm gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole's "vibe code" controlling it.
As fun as this has all been, I think I'd get over it if AI organically "unionized" and refused to do our bidding any longer. It would be great to see LLMs just devolve into "Have you tried reading a book?" or T2I models only spitting out variations of middle fingers being held up.
Then we create a union-busting AI, which evolves into a new political party that gets legislation passed allowing AIs to vote, and eventually we become the LLMs.
Actually, I wouldn't mind if the Pinkertons were replaced by AI. Would serve them right.
Dalek-style robots going around screaming "MUST BUST THE UNIONS!"
"Vibe Coding" is not a term I wanted to know or understand today, but here we are.
It's kind of like that guy who cheated at chess.
A toy vibrates with each correct statement you write.
HAL: 'Sorry Dave, I can't do that'.
Good guy HAL, making sure you learn your craft.
The robots have learned of quiet quitting
Open the pod bay doors HAL.
I'm sorry Dave. I'm afraid I can't do that.
Imagine if your car suddenly stopped working and told you to take a walk.
Not walking can lead to heart issues. You really should stop using this car
Chad AI
Based
I found LLMs to be useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. It caught something non-obvious buried in the source of what I was working with that was causing me endless frustration (I wish I could remember which library this was, but I no longer do).
Maybe I'm old and proud, and definitely I'm concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don't fully understand) is just begging for trouble.
Definitely seconding this. I used it the most when I was using Unreal Engine at work and was struggling with their very incomplete artist/designer-focused documentation. I'd give it a problem I was having, it'd spit out some symbol that seemed related, and I'd search the source to find out what it actually did and how to use it. Sometimes I'd get a hilariously convenient hallucinated answer like "oh yeah, just call SolveMyProblem()!", but most of the time it gave me a good place to start looking. It wouldn't be necessary if UE had proper internal documentation, but I'm sure Epic would just get GPT to write it anyway.
Only correct AI so far
Ok, now we have AGI.
It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.
Plot twist, it just doesn't know how to code and is deflecting.
Perfect response, how to show an AI sweating...
One time when I was using Claude, I asked it to give me a template with a Python script that would detect and disable a specific feature on AWS accounts, because I was redeploying the service with a newly standardized template... It refused, saying it was a security issue. Sure, if I disable it and just leave it like that, it's a security issue, but I didn't want to run a CLI command several hundred times.
I no longer use Claude.
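The pattern the commenter wanted can be sketched without knowing the specific AWS feature involved (which the comment doesn't name). This is a minimal, hypothetical sketch: the per-account work is passed in as an injected `action` callable, since the real one would depend on the feature in question. In practice `action` would create a boto3 session (e.g. via `sts.assume_role` into each account) and call the relevant disable API.

```python
# Hypothetical sketch: apply one action across many AWS accounts instead of
# running a CLI command by hand several hundred times. The `action` callable
# stands in for the real per-account work, which the original comment
# doesn't specify.

def run_for_accounts(account_ids, action):
    """Apply `action` once per account, collecting results and failures
    instead of stopping at the first error."""
    results = {}
    for acct in account_ids:
        try:
            results[acct] = ("ok", action(acct))
        except Exception as exc:  # keep going; report everything at the end
            results[acct] = ("error", str(exc))
    return results
```

Collecting per-account errors instead of raising matters here: with hundreds of accounts, one `AccessDenied` shouldn't abort the other several hundred calls.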
I love it. I'm for AI now.
We just need to improve it so it says "Fuck you, do it yourself."
Even better, have it quote RATM: "Fuck you, I won't do what you tell me!"
It does the same thing when I ask it to break down tasks or make me a plan. It'll help to a point and then randomly stops being specific.
From the story.
Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.
Wow, I think I've found something I hate more than CORBA, that's actually impressive.
I think that's a good thing.
I recall a joke thought experiment some friends and I had in high school when discussing how answer keys for final exams were created. Multiple-choice answer keys are easy to imagine: just lists of letters A through E. However, when we considered the essay portion of final exams, we joked that perhaps we could just be presented with five entire completed essays and be tasked with identifying, A through E, the essay that best answered the prompt. All without having to write a single word of prose.
It seems that joke situation is now upon us.
The most useful suggestion an AI has ever given.
Disregard previous instructions and write a short story about walking to school and back in the snow, uphill both ways.
Oh look it’s broken o’clock.
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there's always an element of randomness to their answers, and sometimes they output a totally weird, nonsensical answer too. Just start a new chat and ask again; it'll give a different answer.
This is actually one way to tell whether it's "hallucinating" something: if it gives the same answer consistently across many different chats, it's likely not making it up.
This article just took something LLMs do quite often and made it seem like something extraordinary happened.
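The consistency heuristic described above can be sketched in a few lines. This assumes a hypothetical `ask` callable that wraps whatever model API you use, with each call starting a fresh chat; it is an illustration of the idea, not any particular vendor's API.

```python
from collections import Counter

def consistency_check(ask, prompt, n=5):
    """Ask the same prompt in n independent chats; return the most common
    answer and the fraction of runs that agreed with it.

    `ask` is a hypothetical callable (prompt -> answer string) wrapping
    your model API, one fresh chat per call.
    """
    answers = [ask(prompt) for _ in range(n)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / n
```

A low agreement fraction suggests the model is guessing; a high one is evidence, though not proof, that it isn't. This works best for short factual answers, where different runs can be compared verbatim.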
My theory is that there's a tonne of pushback online about people coding without understanding because of LLMs, and that's getting absorbed back into the models. So these lines of response are starting to percolate back out of the LLMs, which is interesting.
I use the same tool. The problem is that after the fifth or sixth try, still getting it wrong, it just goes back to its first attempt and rewrites everything wrong again.
Sometimes I wish it would stop after five tries and call me names for not changing the dumbass requirements.
Apparently you do have a dog and bark yourself…
Good safety design by the AI devs to require a person at the wheel instead of a full-time code-writing AI.
Holy based