

something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:
- wrong tool for the job
- bad tool
- are you fucking serious?
- environmental impact
- ethics of how the data was gathered/curated to generate[1] the model
- privacy policy of these companies is a nightmare
- seriously what is wrong with you
they continue to do it[2]. the ease of use, together with the syntactically valid output of the llm, seems to short-circuit something in the end-user’s brain.
anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences — “oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series.”
additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?
[1] they call this “training” but i try to avoid anthropomorphising chatbots ↩︎
[2] the chair of my cs department at a public university expressed this sentiment: “the cat’s out of the bag, now we must teach students to use it responsibly”. BECAUSE THAT HAS WORKED SO WELL BEFORE. (sorry for shouting) ↩︎