something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:
- wrong tool for the job
- bad tool
- are you fucking serious?
- environmental impact
- ethics of how the data was gathered/curated to generate^[they call this "training" but i try to avoid anthropomorphising chatbots] the model
- privacy policy of these companies is a nightmare
- seriously what is wrong with you
they continue to do it. the ease of use, together with the syntactically valid, confident-sounding output the llm produces, seems to short-circuit something in the end-user's brain.
anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences
"oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series."
additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?
https://awful.systems/post/5776862/8966942 😭
also this guy is a bit of a doofus, e.g. https://bugs.launchpad.net/calibre/+bug/853934, where he is a dick to someone reporting a bug, and https://bugs.launchpad.net/calibre/+bug/885027, where someone points out a security issue that lets you execute anything as root, and he argues like a total shithead
i would not invite this person to my birthday