I’ve been reflecting on a recent experiment building a multi-agent system that generates software from a product specification. What stood out was not whether it could produce code; it obviously can. What stood out was how quickly weak product definition showed up in the output. In the first version, the system produced something functional, but it was clearly built on incomplete assumptions.
What Building a Multi-Agent System Taught Me About Autonomous Software Development
In August 2022, before I was using AI this way, I wrote: “Computers do what we tell them to. We’re just not always clear with what we’re asking for.”
Building with AI has only made that more obvious.
In fact, it made something else clear too: even when we think we are being clear, there is still plenty of room for a system to interpret a request in ways that are technically valid but wrong for the product.
From Documents to Defensible Claims: Rethinking Document Systems
Most document systems are built for storage and extraction.
That is the wrong goal.
In high-trust environments, the real job is not to collect documents or parse fields. It is to determine whether the available evidence actually supports a claim.
That is a different system.
Organizations generate endless documentation: invoices, contracts, certifications, logs, reports, statements, attestations.
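The distinction between extracting fields and judging whether evidence supports a claim can be made concrete. Below is a minimal, hypothetical sketch: all names (`Evidence`, `Claim`, `verdict`, the evidence kinds) are illustrative assumptions, not part of any system described in the post. The point is that a document can be stored and parsed perfectly and the claim it relates to can still be undefended.

```python
from dataclasses import dataclass

# Hypothetical sketch: storage/extraction vs. claim support.
# All class and field names here are illustrative assumptions.

@dataclass(frozen=True)
class Evidence:
    source: str   # which document the fact came from
    kind: str     # e.g. "invoice", "certification", "audit_log"
    fact: str     # the extracted statement

@dataclass
class Claim:
    text: str
    required_kinds: frozenset  # evidence kinds needed for the claim to be defensible

def verdict(claim: Claim, evidence: list) -> dict:
    """Report which required evidence kinds are present and which are missing."""
    present = {e.kind for e in evidence}
    missing = claim.required_kinds - present
    return {
        "claim": claim.text,
        "supported": not missing,
        "missing": sorted(missing),
        "cited": [e.source for e in evidence if e.kind in claim.required_kinds],
    }

claim = Claim("Vendor X is compliant for FY2024",
              frozenset({"certification", "audit_log"}))
evidence = [Evidence("cert-042.pdf", "certification",
                     "ISO 27001 valid through 2025")]

result = verdict(claim, evidence)
# Extraction succeeded, yet the claim is unsupported: no audit log was provided.
```

An extraction-centric system would call this document fully processed; a claim-centric one reports the claim as unsupported until the missing evidence kind arrives.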
Your Team Isn’t Blocked. It’s Waiting for Perfect.
As systems scale, so do dependencies. The question is not whether they exist. It is whether teams know how to work through them.
I’ve seen teams say they’re blocked by dependencies when they’re actually blocked by something else: nobody wants to move until the boundary is finished.
That is a costly mistake.
