When the Copilot Becomes the Pilot (and You Stop Flying)

There’s a hidden cognitive cost to using AI — and it’s showing up in how teams think, learn, and make decisions.

Ever since ChatGPT (and LLMs in general) came onto the scene, our workplaces have taken a colossal, unprecedented leap. Companies are pushing — sometimes even mandating — the integration of AI into business workflows to cut down on repetitive work and save money, while individual team members are using it to handle everyday tasks, from answering routine corporate emails and creating LinkedIn job postings to writing and fixing code.

The focus has gradually shifted from AI copilots that can answer like a human to full-blown agentic systems that reason across data, plan, and take action in digital environments, and that leap is proving to be a game-changer.

And the impact is already showing.

Microsoft recently touted that it has saved over $500 million in call center operations by implementing AI across sales, support, and engineering workflows. Meanwhile, Google and Meta have hinted at increased use of AI in coding, with plans to have these agents write most of their code by 2026. In the same vein, Nvidia’s CEO Jensen Huang observed that IT departments will “become the HR of AI agents,” given how many of them will be operating at different enterprise levels.

But in this AI ‘gold rush,’ where everyone seems to be chasing automation, speed, and efficiency, leaders can often lose sight of something very important: the impact of AI on the cognitive functions of their workforce.

AI’s cognitive debt on humans

Last month, researchers at MIT Media Lab published the results of a study that used an essay-writing exercise to determine how using ChatGPT affects human cognition over four months. They compared three groups: ChatGPT users, Google Search users, and unaided, ‘brain-only’ participants.

Through EEG brain scans, NLP analysis, and interviews over multiple sessions, researchers found that the participants relying on ChatGPT produced superficially competent essays, but 83% of them struggled to recall or quote their own work, with their brain activity showing signs of cognitive offloading and reduced mental effort. In contrast, brain-only participants exhibited the highest cognitive load and neural connectivity, suggesting deeper learning and mental engagement. 

Interestingly, when ChatGPT users switched to unaided writing, they showed diminished cognitive engagement, indicating what the authors described as ‘cognitive debt.’

Simply put, relying too much on AI can impact your attention, long-term memory retention, and internal learning.

While this study focuses just on essay writing, it’s not hard to extrapolate how these effects may impact modern enterprises and their employees.

For instance, a survey by Resume Builder found that 60% of HR professionals used AI tools like ChatGPT for assistance with hiring or firing decisions, with one in five even relying on AI to make the final call. AI can easily enhance efficiency in such processes, but over-reliance on the tech for short-term productivity gains could impact how managers exercise critical thinking, retain key details, and learn through hands-on experience.

“When we regularly outsource key cognitive functions like reasoning or decision-making, we risk deconditioning the neural networks that support those abilities. LLMs can make things faster and easier, but if we stop actively engaging with problems ourselves, the long-term cognitive cost could be real,” Dr. Luke Barr, a neurologist and chief medical officer at SenseIQ, told Future Nexus, while noting that the MIT study confirms concerns he’s had for a while.

Barr noted that AI, when used correctly, can increase efficiency, but the real problem of long-term erosion of human insight, judgment, and creativity begins when people start defaulting to LLMs’ output for everything, without self-reflection — “the quiet shift from support to substitution.”

“Many teams are already over-relying on them, and the tipping point is subtle. You cross it when people stop questioning, stop rewording, and stop editing…It’s not about how often you use AI — it’s about how actively your brain stays involved when you do,” Barr added, while noting that he has seen patients, professionals, and even clinicians accepting algorithmic answers as definitive, when in reality these tools are fallible and lack context.

When AI handles more of the thinking, teams may start engaging with less depth — working faster, but learning and reasoning less.

Striking the right balance

While many, including Nvidia’s Huang, continue to argue that AI is advancing their cognitive skills rather than eroding them as the MIT study suggests, the truth likely lies somewhere in the middle: it comes down to striking the right balance and using the technology correctly.

Instead of relying on AI to think and execute for them, teams should treat it as an extension of their own thinking — a thought partner amplifying (and not replacing) their strategy, reflection, nuance, and creative synthesis. 

After all, AI is trained on collective intelligence — data from the public web — and it does not have a unique point of view. Relying on it too much means losing out on that uniqueness. However, if you use it as a brainstorming partner, questioning its approach and building on your own point of view, the results can be stronger.

“When someone assumes the AI must know better, they begin deferring their discernment. That’s where the danger lives. We see this as a wake-up call, not to pull away from AI, but to engage it differently. AI can support higher-order thinking, but only if we teach people to remain aware of their own thought process while using it. Without that awareness, the tool begins to think for you instead of with you, and that subtle shift has massive implications,” Matt and Joy Kahn, authors of Awakening of Intelligence — a book exploring the future of humans’ relationship with AI — told Future Nexus.

As for leaders implementing new AI tools and agents, the key would be to ask not just ‘What can this tool do for us?’ but also ‘What does relying on it train us not to do anymore?’

This, as Barr explains, will help them make the right decisions, designing workflows that get the most out of AI, such as for repetitive or data-heavy aspects of work, while ensuring active engagement rather than passive consumption. It will enable teams to thrive on challenge and complexity, leveraging their creativity, ethical reasoning, and collaborative thinking — things machines just can’t replace.

Lastly, given that speed and output volumes are only going to explode in this AI-driven environment, leaders should also look at new metrics of productivity, such as valuing how decisions are made across their organization. Are they thoughtful, ethical, and adaptable, or merely optimized for a desired outcome?

“We should track cognitive engagement, resilience under pressure, and the ability to learn and problem-solve over time. These are the traits that will define high-performing teams in the future — not just how quickly they can produce a response,” Barr warned.