I don’t need systems thinking or feedback loops to confidently predict that “percentage of code generated by an LLM” is a terrible target, guaranteed to incentivize destructive behavior.