Does this work? Given that the LLM doesn't actually know anything or have feelings of uncertainty, surely it just adds a chance that it will say "I don't know" purely at random, without making it any more likely that the answers it does give are correct.
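For context, the kind of clause being discussed usually looks something like the sketch below. This assumes the OpenAI Python client; the model name and prompt wording are illustrative, not taken from any real app's prompt.

```python
# Minimal sketch of an "admit uncertainty" clause in a system prompt.
# Assumes the OpenAI Python client; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. "
                "If you are not sure about an answer, say 'I don't know' "
                "instead of guessing."
            ),
        },
        {"role": "user", "content": "Who won the 1937 Tour de France?"},
    ],
)
print(response.choices[0].message.content)
```

The open question is exactly the one raised above: whether a clause like this actually shifts refusals toward the cases where the model would have been wrong, or just raises the overall rate of "I don't know" responses.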
I find it kind of hilarious how almost every prompt I've seen leaked from various apps has a similar clause, as if it had any real effect on the result.
Seeing engineers resort to what is basically praying and wishful thinking, with no demonstrated effect on the output, is pretty funny.
"Please, don't give me wrong results 0_0"