Much of the advice around using AI is that you need to verify what it produces. This is presently good advice. But I'm doubtful it will remain good advice in the long run.
Consider how little verification leaders in large institutions do when making decisions. Of course, many bad decisions get made this way, but so do many good ones. The difference lies in the quality of the work put before the decision-makers. Eric Drexler explains it well in this recent article (emphasis mine):
Consider how institutions tackle ambitious undertakings. Planning teams generate alternatives; decision-makers compare and choose; operational units execute bounded tasks with defined scopes and budgets; monitoring surfaces problems; plans revise based on results. No single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon.
AI fits naturally. Generating plans is a task for competing generative models—multiple systems proposing alternatives, competing to develop better options and sharper critiques. Choosing among plans is a task for humans advised by AI systems that identify problems and clarify trade-offs. Execution decomposes into bounded tasks performed by specialized systems with defined authority and resources. Assessment provides feedback for revising both means and ends. And in every role, AI behaviors can be more stable, transparent, bounded, and steerable than those of humans, with their personal agendas and ambitions. More trust is justified, yet less is required.
Framework for a Hypercapable World | Eric Drexler