Five bottlenecks across forty years. Goldratt and the Theory of Constraints. From compilation, to integration, to communication, to deployment. Decisions and context, the constraint nobody has named. The hardest part was never writing the code.
There is a pattern in the way software actually gets built. The bottleneck, the thing that decides how fast a team can really move, never sits still. It shifts every decade or so, the industry chases it for a few years, beats it down to a manageable size, and then quietly discovers it has moved somewhere else. I have watched it move four times. The fifth move is happening now, and most teams have not noticed.
The pattern has had a name for forty-two years. Eliyahu Goldratt published The Goal in 1984 and named what every operations team had already half-noticed: the throughput of any system is set by its single slowest step, and the work of management is to find that step, exploit it, subordinate everything else to it, then invest to remove it. The catch, which gets less attention than it deserves, is the last step. Once you remove the constraint, a new one will appear somewhere else in the system, and the discipline begins again.
The constraint always moves. The only question is whether you notice when it does.
In our discipline the constraint has moved roughly once a decade.
The first four moves
In the 1980s the bottleneck was compilation. Builds took hours. Linking a serious C++ codebase was a coffee break with overflow. The industry attacked the problem with incremental compilers, distributed builds, faster languages and faster machines, and within a decade the build had stopped being the thing that decided how fast you could ship.
In the 1990s the bottleneck was integration. Branches diverged. Big-bang merges took weeks. A team of twenty engineers could spend a quarter trying to reconcile what they had each been doing in parallel. Continuous integration, in its early CruiseControl form and then in the Jenkins generation, dragged that constraint out of the work. Shorter branches, trunk-based development, automated test suites running on every commit. By the end of the decade integration had stopped being where the time went.
In the 2000s the bottleneck was communication. Distributed teams, late discoveries of misalignment, the gap between what the customer asked for and what the engineering team thought it had heard. Kent Beck's 1999 book and the Agile Manifesto that followed were the response. Standups, sprint planning, retrospectives, story walls, pair programming. A whole generation of practices designed to make the conversation cheaper and more frequent. By the late 2000s communication had stopped being the thing that broke teams, at least on the teams that actually used the practices.
In the 2010s the bottleneck was deployment. Code sitting on a branch waiting for a release window. Manual ops, downtime, three-month release cycles. The DevOps movement, infrastructure-as-code, containers, Kubernetes, the whole continuous-delivery stack. Jez Humble and David Farley's 2010 book was the field manual. By 2020 most serious teams could ship multiple times a day, and deployment had stopped being the constraint.
That brings us to now.
The fifth move
In the last eighteen months, the bottleneck has moved again, and almost nothing in the industry's tooling has caught up to it. The constraint is no longer the build, the merge, the conversation, or the deploy. It is decisions and context. What the team has decided, what each agent currently has access to, and whether the two are actually the same thing.
I think most teams have read this move as a code-quality problem and missed the bigger thing.
When AI agents arrived in development workflows, the obvious worry was that the code they wrote would not be good enough. The early hand-wringing was almost entirely about correctness, security, hallucinated APIs, made-up function calls. That worry turned out to be the easy part. Code review catches it. Tests catch it. The agents themselves are, in 2026, quite good at writing code that compiles, passes tests, and does roughly what was asked.
What nobody had a good answer to was a different question. Did the agent know what the team had already decided? Was it building on top of last week's architectural choice or quietly reinventing it? Was it consistent with the agent two engineers over, working on a related part of the same system? The agent could write the code very well. It just had no way of knowing what kind of code the team had already committed to writing.
Where the decisions live now
This shows up in three specific ways, and any team running agents in earnest will recognise all three.
The first is decisions locked inside AI conversations nobody else can see. An engineer sits down on a Tuesday morning, has a forty-minute conversation with their agent, settles a meaningful architectural choice, and ships the work. The decision was made. It is now in the code. But the only record of how the team got to it lives inside one private chat thread. By Thursday a colleague is having a different conversation with a different agent and reaching a different conclusion. Both are sensible. Both are inside the same codebase. Neither knows the other exists.
The second is institutional knowledge decaying inside documents written for an older version of the system. Every team has a wiki, an architecture doc, a README. Six months ago they were correct. Now the system has migrated from Postgres to DynamoDB, the auth flow has been rewritten, two services have been merged into one, and the wiki still confidently describes the previous state. When an agent reads it, it does its best with what it has been given, and it is now confidently wrong.
The third is the absence of any coordination between agents working on related parts of the same system. Three engineers, three branches, three Claude or Cursor sessions running independently. Each one solves its piece coherently. None of them is aware of what the others are doing. Where a human team would have walked past each other in the kitchen and noticed the collision, the agents have no kitchen.
The shape of the new bottleneck is therefore not exactly invisible decisions, although that is the most visible symptom. It is bidirectional. The decisions a human now makes are increasingly invisible to the rest of the human team because they happen inside a private agent conversation. And the decisions the team has made historically are increasingly invisible to the agent because the documents the team relied on to record them were not built to be read by a machine that takes them entirely literally.
The team can no longer see what the team has decided. The agent cannot see it either.
The same mechanism that lets a senior engineer with a competent agent now ship what used to take a team of five also makes the team's collective decision-making harder for anyone else to see. Compression of the team and compression of its visible thinking are the same force, and that force is the same one closing the bottom of the hiring market. The series comes back to that in later chapters.
The new discipline
What this all means is that the work of the next few years in this discipline is not really about better agents or better tests. The agents are good and the models will keep improving. The tests are tractable. The work is in giving the team and its agents a shared, queryable, current account of what the team has actually decided, so that the next conversation on the next Tuesday morning starts from where the team really is rather than from where one document says it used to be.
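To make "shared, queryable, current" concrete, here is a minimal sketch of what such a decision record could look like. Everything in it is invented for illustration: the `Decision` shape, the `DecisionLog` class, and the idea of an append-only log where newer entries supersede older ones are assumptions about one possible design, not a description of any existing product or of the approach the series goes on to propose.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    id: str
    topic: str                        # e.g. "persistence", "auth"
    summary: str                      # the decision itself, one sentence
    supersedes: Optional[str] = None  # id of the decision this replaces

class DecisionLog:
    """Append-only log; the latest non-superseded entry per topic wins."""

    def __init__(self) -> None:
        self._entries: list[Decision] = []

    def record(self, d: Decision) -> None:
        # Nothing is ever edited or deleted; history stays intact.
        self._entries.append(d)

    def current(self, topic: str) -> Optional[Decision]:
        # A decision is live unless a later entry explicitly supersedes it.
        superseded = {d.supersedes for d in self._entries if d.supersedes}
        live = [d for d in self._entries
                if d.topic == topic and d.id not in superseded]
        return live[-1] if live else None

log = DecisionLog()
log.record(Decision("d1", "persistence", "Use Postgres for all services"))
log.record(Decision("d2", "persistence",
                    "Migrate the event store to DynamoDB", supersedes="d1"))

# An agent querying the log gets the current decision, not the stale one
# a six-month-old wiki page would have given it.
print(log.current("persistence").summary)
```

The point of the sketch is the query, not the storage: an agent that asks `current("persistence")` starts the Tuesday-morning conversation from where the team actually is, rather than from the Postgres-era document.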
That is the new constraint. Goldratt's discipline applies. We have to find it, name it, exploit it, subordinate everything else to it for a while, and invest to remove it. None of those steps has happened seriously yet at industry scale, which is why we are in the moment we are in.
The hardest part of building software was never writing the code.
— Barrie
I am co-founder and CEO of Mindset AI, where we are building Memex AI, a decision and knowledge layer for AI-native engineering teams. This series is the thinking that shapes our product. I will flag it explicitly when an article touches something we build. Most of it is simply where the industry is going, with or without us.