this post was submitted on 31 Aug 2025
216 points (100.0% liked)
Showerthoughts
Ok, I'm a hardware dev. They've tried to make us do software-style project management every time there's a new fad (Agile last time). It usually doesn't fit.
What do you find them useful for in your role? Like a coding partner you can ask questions? Or linting? I'm at a loss in my role. I need to know the proprietary code base to write a single line of value. We aren't allowing anyone to train an AI on our code. That's a huge security problem if anyone does.
So there are a few very specific tasks that LLMs are good at from the perspective of a software developer:
And that's... pretty much it. I've experimented with building applications with "prompt engineering," and to be blunt, I think the concept is fundamentally flawed. The problem is that once the application exceeds the LLM's context window size, which is necessarily small, you're going to see it make a lot more mistakes than it already does, because - just as an example - by the time you're having it write the frontend for a new API endpoint, it's already forgotten how that endpoint works.
As the application approaches production size in features and functions, the number of lines of code becomes an insurmountable bottleneck for Copilot. It simply can't maintain a comprehensive understanding of what's already there.
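To put that bottleneck in rough numbers, here's a back-of-envelope sketch. Every figure in it is an illustrative assumption (file counts, the ~4 characters-per-token ratio, the 128k window), not data from the thread:

```python
# Back-of-envelope: compare a codebase's approximate size in tokens to a
# model's context window. All numbers here are illustrative assumptions.

def approx_tokens(num_chars: int, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for English-like source text (~4 chars/token)."""
    return int(num_chars / chars_per_token)

# A modest production codebase: 2,000 files averaging 3,000 characters each.
codebase_chars = 2_000 * 3_000
codebase_tokens = approx_tokens(codebase_chars)  # 1,500,000 tokens

context_window = 128_000  # illustrative window size
fraction_visible = context_window / codebase_tokens

print(f"codebase ≈ {codebase_tokens:,} tokens")
print(f"model sees at most {fraction_visible:.0%} of it at once")
```

Under these assumptions the model can only "see" well under a tenth of the codebase at a time, which is why the rest has to be forgotten, summarized, or guessed at.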
One other use case where they're helpful is "translation". Like, I have a docker-compose file and want a Helm chart / Kubernetes YAML files for the same thing. It can get you like 80% there, and save you a lot of YAML typing.
It won't work well if it's more than like 5 services, or if you wanted to translate a whole code base from one language to another. But converting one kind of file to another with a different language or technology can work OK. Anything to write less YAML…
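That kind of translation is largely mechanical, which is part of why it suits an LLM. As a sketch of the core mapping, here's a minimal compose-service-to-Deployment conversion in Python; the `web` service, image name, and the stripped-down Deployment fields are all made-up examples, and a real conversion would also need volumes, probes, Services, and so on:

```python
# Sketch of the mechanical "translation" described above: mapping one
# compose-style service definition onto a Kubernetes Deployment skeleton.
# The input service is a made-up example; real conversions handle much more.

def compose_service_to_deployment(name: str, service: dict) -> dict:
    """Translate a minimal docker-compose service into a k8s Deployment dict."""
    env = [{"name": k, "value": v}
           for k, v in service.get("environment", {}).items()]
    # "HOST:CONTAINER" port strings -> containerPort entries
    ports = [{"containerPort": int(p.split(":")[-1])}
             for p in service.get("ports", [])]
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": service["image"],
                        "env": env,
                        "ports": ports,
                    }]
                },
            },
        },
    }

web = {"image": "nginx:1.27", "ports": ["8080:80"], "environment": {"MODE": "prod"}}
deployment = compose_service_to_deployment("web", web)
print(deployment["spec"]["template"]["spec"]["containers"][0]["image"])  # nginx:1.27
```

The "last 20%" the LLM leaves you is exactly the parts this sketch skips: networking, storage, health checks, and resource limits.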
I use it to generate unit tests; it'll get the bulk of the code writing done and does a pretty good job at coverage, usually hitting 100%. All I have to do for the most part is review the tests to make sure they're doing the right thing, and mock out some stuff that it missed.
Legit. Do you need to feed it your code base at all? How does it know what needs to be tested otherwise?
Yeah, it's the Copilot plugin for IntelliJ. Basically right-click and choose "generate tests"; it'll read the file and ... well..
The downside to that approach is that it doesn't know what some function calls do if they're not part of that file, so it tends to miss places that need to be mocked out.
Occasionally it writes a test that's "wrong" and I have to fix the test. Very rarely, the "wrong" test is actually "right" based on, say, a method signature or decision tree, and the method itself needs changing.
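The workflow in this sub-thread can be sketched like this. The `fetch_price` function and its remote `client` are hypothetical stand-ins, not code from the thread: a generated test covers the logic, and the human supplies the mock for the external call defined in another file that the generator couldn't see into:

```python
import unittest
from unittest.mock import Mock

# Hypothetical code under test. `client.get_quote` hits a remote API and is
# defined elsewhere, which a file-local test generator can't inspect.
def fetch_price(client, symbol: str) -> float:
    """Return a quote with a 2% fee applied."""
    raw = client.get_quote(symbol)
    return round(raw * 1.02, 2)

class TestFetchPrice(unittest.TestCase):
    def test_applies_fee(self):
        # The human-added part: mock out the external call the
        # generated test originally missed.
        client = Mock()
        client.get_quote.return_value = 100.0
        self.assertEqual(fetch_price(client, "ACME"), 102.0)
        client.get_quote.assert_called_once_with("ACME")

# Run the suite directly, without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFetchPrice)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Reviewing a test like this takes far less effort than writing it, which matches the "review and patch the mocks" division of labor described above.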
You're right, unit tests are another area where they can be helpful, as long as you're very careful to check them over.
They are getting faster, having larger context windows, and becoming more accurate. It is only a matter of time until AI simply copy-cats 99.9% of the things humans do.
Actually, there's growing evidence that beyond a certain point, more context drastically reduces their performance and accuracy.
I'm of the opinion that LLMs will need a drastic rethink before they can reach the point you describe.
We have 100M-token-context AI; we just need better attention mechanisms.
This sounds to me like saying you have enough feathers in the grocery bag you're holding. All you need now is a beak, and you'll make yourself a duck.
X doubt
Why does everyone believe we are oh-so special? We are just an accident. We just need to recreate that.
We just need to recreate abiogenesis and billions of years of evolution? Um, ok.