Okay, you may be thinking:

The sort of person who might have been a Snapewife a decade ago is now an AI whisperer, and maybe, on the margin, some people go crazy who would otherwise have stayed sane. But this has nothing to do with me; I understand that LLMs are just a tool, and I use them appropriately.
In fact, I personally am thinking that, so you'd be in good company! I intend to carefully prompt a few different LLMs with this essay. I expect them to mostly tell me what I want to hear (that the post is insightful and convincing), and beyond that to make up random critiques because they infer I want a critique-shaped thing. Still, I'm hopeful they'll catch me in a few genuine typos, lazy inferences, and inconsistent formatting.
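
If you want to try the same thing, here's a minimal sketch of the workflow I have in mind. It assumes the OpenAI Python SDK with an API key in the environment; the model name, the filename `draft.md`, and the prompt wording are all illustrative, not recommendations. The one deliberate choice is asking for concrete, checkable problems rather than a verdict on quality:

```python
# Sketch: ask a model to review a draft for concrete, verifiable issues
# (typos, formatting, unsupported inferences) rather than overall quality.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; model name and
# prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

with open("draft.md") as f:  # hypothetical filename
    draft = f.read()

prompt = (
    "List specific typos, formatting inconsistencies, and unsupported "
    "inferences in the following essay. Quote each one verbatim. "
    "Do not comment on overall quality.\n\n" + draft
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; I'd run this against several models
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```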

But if you get to the point where your output and an LLM's output are mingling, or LLMs are directly generating most of the text you're passing off as original research or thinking, you're almost certainly creating low-quality work. AIs are fundamentally chameleonic roleplaying machines: if they can tell that what you're going for is "I am a serious researcher trying to solve a fundamental problem," they will respond the way a successful serious researcher's assistant might in a movie about that researcher's great success. And because it's a movie you'd like to be in, it'll be difficult to notice that the AI's enthusiasm is totally uncorrelated with the actual quality of your ideas.

In my experience, you have to repeatedly remind yourself that AI value judgments are pretty much fake, and that anything more coherent than a 3/10 will be flagged as "good" by an LLM evaluator. Unfortunately, that's just how it is, and prompting is unlikely to save you: you can flip an AI to be harshly critical with keywords like "brutally honest," but a critic that roasts everything isn't really any better than a critic that praises everything. What you actually need in a critic or collaborator is sensitivity to the underlying quality of the ideas, and AI is ill-suited to provide it.
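
You can see the flip for yourself. Here's a sketch under the same assumptions as above (OpenAI Python SDK, illustrative model name and prompts): feed the identical draft to the same model under a sycophantic persona and a "brutally honest" one, and notice that the verdict tracks the system prompt, not the draft.

```python
# Sketch: same draft, two personas. The tone of the verdict follows the
# system prompt, not the underlying quality of the ideas.
# Same assumptions as before: OpenAI Python SDK, illustrative model name.
from openai import OpenAI

client = OpenAI()

with open("draft.md") as f:  # hypothetical filename
    draft = f.read()

for persona in (
    "You are a supportive writing assistant.",
    "You are a brutally honest critic. Pull no punches.",
):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": persona},
            {
                "role": "user",
                "content": "Rate this essay out of 10 and justify:\n\n" + draft,
            },
        ],
    )
    print(persona, "->", response.choices[0].message.content[:200])
```

If the score swings from a 9 to a 4 while the draft hasn't changed, the score was never measuring the draft.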