// Extreme AI Programming · #01

Why Agile Doesn't Fit Anymore

By Barrie Hadfield · Tuesday 5 May 2026 · 6 min read

Kent Beck published Extreme Programming Explained in 1999. I read it as a young engineer running a small team that kept getting lost in its own commitments, and the book shaped how I thought about the job for the next twenty-five years. Pair programming, TDD, short iterations. Along with the Agile movement it helped seed, XP became the way most serious software got built, and the way most serious engineering teams got run.

That run is ending, for reasons that have very little to do with Agile itself and everything to do with who is actually writing the code. In any serious AI-native team, the developer is increasingly not a human being. The developer is an agent. Often it is several of them, each running its own conversation with a different member of the team, each producing work on its own cadence. The coordination problem has changed shape entirely. Agile assumed developers on one side of the table and customers on the other. That is no longer what the table looks like.

I am starting a weekly series, and eventually a book, called Extreme AI Programming. It runs here and on Substack, roughly once a week, and it is my attempt to describe what a serious, professional discipline for building software with AI agents actually looks like. The title is a homage to Beck, and not a subtle one. I am well aware of the debt. Without the work Beck, Fowler and the others did in the late nineties, there is nothing to build on here.

A short note on where this is coming from. I have been building software companies for over thirty-five years. I co-founded Workshare in 1999 and spent nearly a decade there as CTO and board director, running worldwide product marketing and R&D while the company grew to market leadership across sixty-four countries. I returned as CTO in 2012, and we sold Workshare to HG Capital in 2019 as part of their thesis to consolidate Litera, Microsystems and Workshare into a single drafting platform. In the gap between the two Workshare stints I founded Tyger Consulting and co-founded SkyDox as CEO. Today I am co-founder and CEO of Mindset AI. Across those years I have run engineering teams of hundreds, raised money across multiple cycles, and shipped software that has generated hundreds of millions of pounds in revenue. None of this is observation from a distance. I have been running an AI-native company for over a year, using the practices I plan to write about here.

Here is the thesis of the series, compressed into a paragraph. Agile is, at heart, a coordination protocol between humans. When most of the implementation work is being done by agents that never forget, never tire and never need to be motivated on a Monday morning, the shape of the coordination problem is different. The hard part is no longer keeping a team of engineers aligned with each other. It is keeping humans, agents and the codebase itself aligned with the decisions the team has actually made, and making sure each agent acts on the current set of decisions rather than an old one, a half-remembered one, or someone else's private one.

This is why articulate humans matter more than ever, and not because of any mystical human edge. Agents are excellent at writing code when they know what they are meant to be writing. They are also quite capable of producing fluent, confident code that does precisely the wrong thing, and doing so very quickly. The difference between those two outcomes is almost entirely the precision of the brief they were given. An agent is only as good as its specification. The better the agent gets, the more unforgiving the gap becomes between what you said and what you meant.

The old distinction between the developer and the customer, the one Agile spent twenty years closing, has quietly dissolved. In its place there is a newer distinction: between people who treat AI as a genuine instrument to be mastered and people who treat it as a slot machine.

The slot-machine version has picked up a name. People call it vibe coding. Prompt, accept, ship. If the code runs, move on. If it doesn't, prompt again. It feels productive in the moment, and for a weekend project it is perfectly fine. As a professional practice it is quietly catastrophic. You end up with codebases the team cannot evolve, decisions the team does not remember making, and a continuous accumulation of incoherence nobody can see until it breaks something expensive.

The alternative, and the subject of most of what I plan to write, is a more disciplined way of working. Intent set clearly up front. Decisions captured as they are made. Rules for how software gets built, written down somewhere every agent and every engineer on the team will actually read. Plans reviewed before code is written, rather than code reviewed after the fact. None of this is new in spirit, and a reasonable reader could point out that it is roughly what good engineering has always looked like. The cast has changed, though, and the practices have to change with it.
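To make "decisions captured as they are made" concrete, here is a minimal sketch of what one such captured decision might look like, loosely in the style of an architecture decision record. The file name, numbering, and field names are illustrative assumptions of mine, not a prescribed format; the point is only that the decision lives in the repository, where every agent and every engineer reads from the same source rather than from memory.

```markdown
<!-- decisions/0042-queue-over-webhooks.md — illustrative sketch, not a prescribed format -->
# Decision 0042: Use the internal queue, not webhooks, for sync events

Status: accepted · Date: 2026-05-04 · Supersedes: 0031

## Context
Webhook delivery to third parties was silently dropping sync events under load.

## Decision
All sync events go through the internal message queue; consumers poll, nothing pushes.

## Consequences
- Agents must not generate webhook-based integrations for sync events.
- Plan reviews check any new event producer against this decision before code is written.
```

A file like this is cheap to write at the moment the decision is made, and it is exactly the kind of artefact an agent can be pointed at before it starts work, which is the whole game.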

Which brings me to the part of this I care about most: the question of what technologists actually do now. There is a great deal of talk at the moment about whether our profession is ending. I don't believe a word of it. What is ending is a particular mode of the work, specifically the part where a human types most of the code by hand. Something new is beginning in its place, and I find it genuinely exciting. The work has moved up the stack, away from the mechanics of production and toward the articulation of intent, the setting of constraints, and the judgment about what is worth building and what is not. Those are the genuinely interesting parts of the work, and the parts the job was always really about. We have simply been handed a set of tools that take care of the mechanical middle, and the honest response is to learn how to use them well, rather than either fearing them or worshipping them.

For anyone who has been in software a long time, I think this is the most interesting moment in the discipline since the arrival of the web. The barrier of programming language, the particular question of which one you happen to know, has quietly become less important than at any point in my career. What matters now is clarity of intent, and the ability to articulate what you want in enough detail that a competent instrument can produce it. That specific skill has always been scarce. For the first time it is directly economically productive.

Over the next few months the series will cover what I think the new discipline actually contains. The roles and how they have recomposed, including which Agile ceremonies still make sense and which do not. The new first-class artefacts, which in my view are decisions, blueprints and execution plans. How to review work produced by an agent without becoming its babysitter. The economics of running a team where one engineer and three agents produce what five engineers used to, and the things that get harder rather than easier. And occasionally, because there is no point pretending otherwise, where the commercial work I am doing with Mindset AI sits inside the argument.

One article a week. If the argument resonates, I would be glad of the company. If it doesn't, I would be glad to hear why. I would rather be argued with now than wrong in print later.

— Barrie

I am co-founder and CEO of Mindset AI, where we are building Memex AI, a decision and knowledge layer for AI-native engineering teams. This series is the thinking that shapes our product. I will flag it explicitly when an article touches something we build. Most of it is simply where the industry is going, with or without us.