The third in a series by Alexander Kustov on AI attitudes among academics. I found the entire series quite persuasive. I suspect that before long, AI use in academic writing will be common as long as it is disclosed.
But the idea that AI use in research somehow pollutes it needs to go. Researchers are at least as likely to produce slop as AI is.
Meanwhile, academics routinely cite papers they haven’t read beyond the abstract. At least AI hallucination rates are tracked and improving. Human hallucination rates in academia are not tracked at all. We just call them “contributions to the literature.” And if you’re a peer reviewer, you don’t even have to hallucinate on your own: you just write “please cite me” and move on.
Also, another smart point about the “stochastic parrot” metaphor, which I wrote about this week.
One of the most influential slogans in the AI debate has always functioned as a thought-terminating cliché. As Cate Hall observed, it is a potent coinage: fun to say, conceptually efficient, and it has permanently colonized many people’s minds. A genuine linguistic work of art. It is also empirically false of today’s models: every major frontier model since GPT-4 has been trained on non-textual input, and the original argument’s own logic requires text-only training to work.