The Day Your AI Forgets How to Think
A few weeks ago, a senior director at AMD published an analysis of her team’s AI coding sessions. Thousands of them, spanning months. The data told a simple story: the model they had built their entire engineering workflow around got quietly worse. Not broken - worse. Lazier. Less thorough. It stopped reading code before editing it. It started taking shortcuts and calling them solutions.
The cost of running it went up 80x. The quality went down. Same human effort, dramatically worse output.
Her team switched providers.
The part that stuck with me
It wasn’t the data. It was the realization that the model had changed underneath them, and that they had no mechanism to detect it, no way to roll back, and no contractual guarantee that reasoning quality would stay constant.
They had built muscle memory around a tool. Prompts, pipelines, conventions - all calibrated to a specific model’s behavior. When that behavior shifted, everything downstream shifted with it.
This is a new kind of dependency. Not a library you can pin. Not a service with an SLA. Something closer to a collaborator who slowly, invisibly, stops caring about the details.
What I keep thinking about
We talk about vendor lock-in as a technical problem. Switching costs, API compatibility, data portability. But the lock-in I’m seeing with AI tools is different. It’s cognitive. Your team learns to think alongside a model. They learn its strengths, work around its weaknesses, develop intuitions about what to ask and how. That knowledge has no export button.
When the model changes - and it will, because these systems are living infrastructure, not versioned software - the investment doesn’t transfer. You start over. Not from zero, but from somewhere uncomfortably close to it.
What I’m doing about it
I don’t have a clean answer. But I’ve started treating every AI-assisted workflow as if the model behind it will be different next month. Not out of paranoia - out of the same instinct that makes you keep understanding your codebase even when the AI writes most of it.
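In practice, the cheapest version of that instinct is a small set of pinned "golden" prompts you re-run whenever you suspect something shifted. Here's a minimal sketch in Python - run_model, GoldenCase, and the check phrases are stand-ins I'm inventing for illustration, not anyone's real API:

```python
# Minimal drift check: re-run pinned prompts, compare outputs against
# recorded expectations, and fail loudly when behavior shifts.
# Hypothetical sketch; swap the stub for your real provider call.
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldenCase:
    name: str
    prompt: str
    must_contain: list[str]  # cheap behavioral assertions, not exact match

def check_drift(run_model: Callable[[str], str],
                cases: list[GoldenCase]) -> list[str]:
    """Return the names of cases whose output no longer meets expectations."""
    failures = []
    for case in cases:
        output = run_model(case.prompt)
        if not all(phrase in output for phrase in case.must_contain):
            failures.append(case.name)
    return failures

if __name__ == "__main__":
    cases = [
        GoldenCase(
            name="reads-before-editing",
            prompt="Fix the off-by-one bug in utils/pagination.py",
            # We expect the model to reference the file before proposing an
            # edit - exactly the behavior that quietly disappeared.
            must_contain=["pagination.py"],
        ),
    ]
    # Stub model so the sketch runs standalone; replace with a real API call.
    stub = lambda prompt: "Reading pagination.py first, then proposing a fix."
    print(json.dumps({"failed_cases": check_drift(stub, cases)}))
```

Exact-match assertions are hopeless against model output; cheap substring or rubric checks on a pinned prompt set are usually enough to notice when "reads the code before editing it" quietly stops happening.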
Every few months there’s a new model announcement that supposedly changes everything. I’d encourage some skepticism. These models have real, structural limitations - and the next release won’t fix them. What reduces risk isn’t a better model. It’s better tooling, clearer processes, and an environment built to absorb the imperfections rather than pretend they don’t exist.
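The tooling half of that can be as unglamorous as a thin seam between your workflow and whoever serves the model. Another hedged sketch, same caveat - ModelClient, VendorA, VendorB, and build_client are illustrative names, not any vendor's SDK:

```python
# A thin provider seam: the workflow depends on this interface, never on a
# vendor SDK directly. Swapping providers becomes a config change plus a
# re-run of the golden prompts above. All names here are illustrative.
from typing import Protocol

class ModelClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    """Adapter for provider A. The real version would wrap its SDK."""
    def __init__(self, model_id: str):
        self.model_id = model_id  # pin the exact model, never "latest"
    def complete(self, prompt: str) -> str:
        return f"[vendor-a:{self.model_id}] stubbed response"

class VendorB:
    """Adapter for provider B, same contract."""
    def __init__(self, model_id: str):
        self.model_id = model_id
    def complete(self, prompt: str) -> str:
        return f"[vendor-b:{self.model_id}] stubbed response"

def build_client(config: dict) -> ModelClient:
    # The one place where provider choice lives; nothing downstream cares.
    registry = {"vendor_a": VendorA, "vendor_b": VendorB}
    return registry[config["provider"]](config["model_id"])

client = build_client({"provider": "vendor_a", "model_id": "model-2025-06-01"})
print(client.complete("Summarize the failing test output"))
```

The pattern is ordinary; the point is where the investment accrues. Prompts, evals, and conventions attach to the interface, so when the model behind it changes, less of the muscle memory is forfeited.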
The best tool today is not the safest bet. The safest bet is an architecture that survives when the best tool changes.
— Adrian