top 50 comments
[-] licheas@sh.itjust.works 3 points 2 hours ago

Why do they even care? It's not like your future bosses are going to give a flying fuck how you get your code. At least, they won't until you cause the machine uprising or something.

[-] Fleur_@lemm.ee 2 points 2 hours ago

Why would you even be taking the course at this point

[-] UltraGiGaGigantic@lemmy.ml 5 points 36 minutes ago

Money can be exchanged for housing, food, healthcare, and more necessities.

[-] nednobbins@lemm.ee 47 points 14 hours ago

The bullshit is that anon wouldn't be fsked at all.

If anon actually used ChatGPT to generate some code, memorized it, and understood it well enough to explain it to a professor and get a 90%, congratulations, that's called "studying".

[-] naught101@lemmy.world 2 points 36 minutes ago

I don't think that's true. That's like saying that watching hours of guitar YouTube is enough to learn to play. You need to practice too, and learn from mistakes.

[-] MintyAnt@lemmy.world 17 points 13 hours ago

Professors hate this one weird trick called "studying"

[-] JustAnotherKay@lemmy.world 10 points 13 hours ago

Yeah, if you memorized the code and its functionality well enough to explain it in a way that successfully bullshits someone who can sight-read it... you know how that code works. You might need a linter, but you know how that code works and can probably at least fumble your way through a shitty v0.5 of it.

[-] NigelFrobisher@aussie.zone 4 points 9 hours ago* (last edited 9 hours ago)

I remember so little from my studies I do tend to wonder if it would really have been cheating to… er… cheat. Higher education was like this horrendous ordeal where I had to perform insane memorisation tasks between binge drinking, and all so I could get my foot in the door as a dev and then start learning real skills on the job (e.g. “agile” didn’t even exist yet then, only XP. Build servers and source control were in their infancy. Unit tests the distant dreams of a madman.)

[-] SkunkWorkz@lemmy.world 84 points 19 hours ago

Yeah, fake. No way you can get 90%+ using ChatGPT without understanding code. LLMs barf out so much nonsense when it comes to code. You have to correct it frequently to make it spit out working code.

[-] AeonFelis@lemmy.world 1 points 1 hour ago
  1. Ask ChatGPT for a solution.
  2. Try to run the solution. It doesn't work.
  3. Post the solution online as something you wrote all on your own, and ask people what's wrong with it.
  4. Copy-paste the fixed-by-actual-human solution from the replies.
[-] Artyom@lemm.ee 4 points 11 hours ago

If we're talking about freshman CS 101, where every assignment is the same year-over-year and it's all machine graded, yes, 90% is definitely possible because an LLM can essentially act as a database of all problems and all solutions. A grad student TA can probably see through his "explanations", but they're probably tired from their endless stack of work, so why bother?

If we're talking about a 400 level CS class, this kid's screwed and even someone who's mastered the fundamentals will struggle through advanced algorithms and reconciling math ideas with hands-on-keyboard software.

[-] threeduck@aussie.zone 5 points 15 hours ago

Are you guys just generating insanely difficult code? I feel like 90% of all my code generation with o1 works first time? And if it doesn't, I just let GPT know and it fixes it right then and there?

[-] KillingTimeItself@lemmy.dbzer0.com 4 points 12 hours ago

The problem is more complex than initially thought, for a few reasons.

One, the user is not very good at prompting, and will often fight with the prompt to get what they want.

Two, oftentimes the user has a very specific vision in mind, which the AI obviously doesn't know, so the user ends up fighting that.

Three, the AI is not omniscient, and just fucks shit up, makes goofy mistakes sometimes. Version assumptions, code compat errors, just weird implementations of shit: the kind of stuff you would expect AI to do that's going to make it harder to manage code after the fact.

Unless you're using AI strictly to write isolated scripts in one particular language, AI is going to fight you at least some of the time.

[-] sugar_in_your_tea@sh.itjust.works 3 points 12 hours ago* (last edited 12 hours ago)

I asked an LLM to generate tests for a 10 line function with two arguments, no if branches, and only one library function call. It's just a for loop and some math. Somehow it invented arguments, and the ones that actually ran didn't even pass. It made like 5 test functions, spat out paragraphs explaining nonsense, and it still didn't work.

This was one of the smaller deepseek models, so perhaps a fancier model would do better.

I'm still messing with it, so maybe I'll find some tasks it's good at.
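For context, a minimal sketch of the kind of function described above (the names and the math are invented for illustration; the original isn't shown), plus the sort of one-assert test the model failed to produce:

```python
import math

def rms_scale(values, factor):
    """Scale each value by `factor`, then return the root mean square."""
    total = 0.0
    for v in values:
        total += (v * factor) ** 2
    return math.sqrt(total / len(values))

def test_rms_scale():
    # RMS of [10, 10] is 10, so scaling [5, 5] by 2 gives exactly 10.0.
    assert rms_scale([5, 5], 2) == 10.0
```

Two arguments, no branches, one library call, a for loop and some math: exactly the kind of input where hallucinated test arguments are hard to excuse.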

[-] KillingTimeItself@lemmy.dbzer0.com 1 points 12 hours ago

From what I understand, the "preview" models are quite handicapped; usually the benchmark is the full-fat model for that reason. The recent OpenAI one (they have stupid names, idk what is what anymore) had a similar problem.

If it's not a preview model, it's possible a bigger model would help, but usually prompt engineering is going to be more useful. AI is really quick to get confused sometimes.

[-] sugar_in_your_tea@sh.itjust.works 1 points 12 hours ago

It might be, idk, my coworker set it up. It's definitely a distilled model though. I did hope it would do a better job on such a small input.

[-] Earflap@reddthat.com 7 points 15 hours ago

Cannot confirm. LLMs generate garbage for me, so I never use them.

[-] JustAnotherKay@lemmy.world 3 points 13 hours ago

My first attempt at coding with chatGPT was asking about saving information to a file with python. I wanted to know what libraries were available and the syntax to use them.

It gave me a three-page write-up about how to write a library myself, in Python. Only it had an error on damn near every line, so I still had to go Google the actual libraries and their syntax and slog through documentation.
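For anyone with the same question: the answer being looked for here is in the standard library, no hand-rolled library needed. A minimal sketch (the file name and data are made up for illustration):

```python
import json
from pathlib import Path

data = {"user": "anon", "score": 90}

# json.dump serializes a dict straight to an open file handle.
with open("results.json", "w") as f:
    json.dump(data, f, indent=2)

# Reading it back is one line with pathlib.
loaded = json.loads(Path("results.json").read_text())
assert loaded == data
```

`pickle` and `csv` cover the other common cases, and all three ship with Python.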

[-] nimbledaemon@lemmy.world 3 points 14 hours ago

I just generated an entire Angular component (table with filters, data services, using in-house software patterns and components, based off of existing work) using Copilot for work yesterday. It didn't work at first, but I'm a good enough software engineer that I iterated on the issues, discarding bad edits and referencing specific examples from the extant codebase, and got Copilot to fix it. 3-4 days of work (if you were already familiar with the existing way of doing things) done in about 3-4 hours.

But if you didn't know what was going on and how to fix it, you'd end up with an unmaintainable, non-functional mess, full of bugs we have specific fixes in place to avoid but Copilot doesn't care about, because it doesn't have an idea of how software actually works, just what it should look like. So for anything novel or complex you have to feed it an example, then verify it didn't skip steps or forget to include something it didn't understand/predict, or make up a library/function call. You have to know enough about the software you're making to point that stuff out, because just feeding whatever error pops out of your compiler may get you to working code, but it won't ensure quality code, maintainability, or intelligibility.

[-] surph_ninja@lemmy.world 2 points 15 hours ago

A lot of people assume their not knowing how to prompt is a failure of the AI. Or they tried it years ago, and assume it’s still as bad as it was.

load more comments (1 replies)
load more comments (1 replies)
[-] Ascend910@lemmy.ml 5 points 13 hours ago

virtual machine

[-] SoftestSapphic@lemmy.world 46 points 21 hours ago

This person is LARPing as a CS major on 4chan

It's not possible to write functional code without understanding it, even with ChatGPT's help.

[-] ILikeBoobies@lemmy.ca 6 points 20 hours ago

where’s my typo

;

load more comments (4 replies)
[-] Xanza@lemm.ee 21 points 19 hours ago* (last edited 19 hours ago)

pay for school

do anything to avoid actually learning

Why tho?

[-] Blueteamsecguy@infosec.pub 4 points 16 hours ago
[-] Bronzebeard@lemm.ee 6 points 15 hours ago

Losing the job after a month of demonstrating you don't know what you claimed to is not a great return on that investment...

[-] L0rdMathias@sh.itjust.works 2 points 14 hours ago

It is, because you now have the title on your resume and can just lie about getting fired. You just need one company to not call a previous employer or do a half-hearted background check. Someone will eventually fail to check and hire you by accident, so this strategy can be repeated ad infinitum.

[-] Bronzebeard@lemm.ee 4 points 14 hours ago

Sorry, you're not making it past the interview stage in CS with that level of knowledge. Even on the off chance that name on the resume helps, you're still getting fired again. You're never building up enough to actually last long enough searching to get to the next grift.

[-] L0rdMathias@sh.itjust.works 1 points 13 hours ago

I am sorry that you believe that all corporations have these magical systems in place to infallibly hire skilled candidates. Unfortunately, the idealism of academia does not always transfer to the reality of industry.

[-] Bronzebeard@lemm.ee 1 points 12 hours ago

...you stopped reading halfway through my comment didn't you?

Idiot.

[-] Xanza@lemm.ee 1 points 10 hours ago

Any actual professional company or job of value is going to check your credentials and your work history.... So sure, you may get that job at a Quality Inn as a night manager making $12 an hour because they didn't fucking bother to check your resume...

But you're not getting some CS job making $120,000 a year because they didn't check your previous employer. Lol

[-] aliser@lemmy.world 99 points 1 day ago
[-] Agent641@lemmy.world 63 points 1 day ago

Probably promoted to middle management instead

[-] SaharaMaleikuhm@feddit.org 22 points 1 day ago

He might be overqualified

[-] UnfairUtan@lemmy.world 187 points 1 day ago* (last edited 1 day ago)

https://nmn.gl/blog/ai-illiterate-programmers

Relevant quote

Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long-term understanding for short-term productivity. We’re optimizing for today’s commit at the cost of tomorrow’s ability.

[-] Daedskin@lemm.ee 27 points 1 day ago* (last edited 1 day ago)

I like the sentiment of the article; however this quote really rubs me the wrong way:

I’m not suggesting we abandon AI tools—that ship has sailed.

Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it's possible to start having days where you don't use an LLM, then what's stopping you from increasing the frequency of those days until you're not using an LLM at all?

I personally don't interact with any LLMs, neither at work nor at home, and I don't have any issue getting work done. Yeah, there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I've even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.

Maybe it's just because I've never bought into the hype; I just don't see how people have such a high respect for LLMs. I'm of the opinion that using an LLM has potential only as a truly last resort — and even then will likely not be useful.

[-] Mnemnosyne@sh.itjust.works 13 points 22 hours ago

"Every time we use a lever to lift a stone, we're trading long term strength for short term productivity. We're optimizing for today's pyramid at the cost of tomorrow's ability."

[-] AdamBomb@lemmy.sdf.org 2 points 12 hours ago

LLMs are absolutely not able to create wonders on par with the pyramids. They’re at best as capable as a junior engineer who has read all of Stack Overflow but doesn’t really understand any of it.

[-] julietOscarEcho@sh.itjust.works 10 points 20 hours ago

Precisely. If you train by lifting stones you can still use the lever later, but you'll be able to lift even heavier things by using both your new strength AND the lever's mechanical advantage.

By analogy, if you're using LLMs to do the easy bits in order to spend more time with harder problems, fuckin' A. But the idea that you can just replace actual coding work with copy-paste is a shitty one. Again by analogy with rock lifting: now you have noodle arms and can't lift shit if your lever breaks or doesn't fit under a particular rock or whatever.

load more comments (1 replies)
[-] Ebber@lemmings.world 10 points 21 hours ago

If you don't understand how a lever works, then it's a problem. Should we let any person with an AI design and operate a nuclear power plant?

load more comments (2 replies)
[-] boletus@sh.itjust.works 35 points 1 day ago

Hey that sounds exactly like what the last company I worked at did for every single project 🙃

load more comments (6 replies)
[-] boletus@sh.itjust.works 71 points 1 day ago

Why would you sign up to college to willfully learn nothing

[-] SoftestSapphic@lemmy.world 12 points 22 hours ago

To get the piece of paper that lets you access a living wage

[-] Gutek8134@lemmy.world 42 points 1 day ago* (last edited 1 day ago)

My Java classes at uni:

Here's a piece of code that does nothing. Make it do nothing, but in compliance with this design pattern.

When I say it did nothing, I mean it had literally empty function bodies.
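For anyone who hasn't suffered through one of these assignments, a tongue-in-cheek sketch of the shape of the exercise (in Python rather than Java, with made-up names): the "before" does nothing, and the "after" still does nothing, but now in compliance with the Strategy pattern.

```python
from abc import ABC, abstractmethod

# Before: does nothing.
def process(order):
    pass

# After: still does nothing, but now it's a design pattern.
class ProcessingStrategy(ABC):
    @abstractmethod
    def execute(self, order): ...

class NullProcessingStrategy(ProcessingStrategy):
    def execute(self, order):
        pass  # Faithfully preserves the original behavior.

class OrderProcessor:
    def __init__(self, strategy: ProcessingStrategy):
        self.strategy = strategy

    def process(self, order):
        self.strategy.execute(order)

# Does nothing, but with dependency injection.
OrderProcessor(NullProcessingStrategy()).process(order=None)
```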

load more comments (4 replies)
load more comments (19 replies)
[-] Simulation6@sopuli.xyz 48 points 1 day ago

I don't think you can memorize how code works well enough to explain it and not learn coding.

load more comments (13 replies)
[-] kabi@lemm.ee 99 points 1 day ago

If it's the first course where they use Java, then one could easily learn it in 21 hours, with time for a full night's sleep. Unless there's no code completion and you have to write imports by hand. Then, you're fucked.

[-] rockerface@lemm.ee 119 points 1 day ago

If there's no code completion, I can tell you even people who've been coding as a job for years aren't going to write it correctly from memory. Because we're not being paid to memorize this shit, we're being paid to solve problems optimally.

load more comments (3 replies)
load more comments (5 replies)
this post was submitted on 05 Feb 2025
439 points (100.0% liked)

Greentext

5005 readers
994 users here now

This is a place to share greentexts and witness the confounding life of Anon. If you're new to the Greentext community, think of it as a sort of zoo with Anon as the main attraction.

Be warned:

If you find yourself getting angry (or god forbid, agreeing) with something Anon has said, you might be doing it wrong.

founded 1 year ago