“Time Between Disengagements” is a concept I came across in a recent article from Gitpod, and it offers an interesting new way to think about AI’s role in software development.

The article compares the evolution of AI in engineering to the progression of self-driving cars, where the key metric is how long an autonomous system can operate before a human needs to step in. That simple but powerful analogy clicked with me: it reframes how we should think about the future of AI-assisted development, not just in terms of raw capability, but in terms of how independently and safely these systems can work.

This new agentic environment isn’t just about pushing the boundaries of what AI can do—it comes with real challenges. For AI agents to work autonomously in meaningful ways, they need secure, sandboxed environments; full context that goes beyond code files to include specs, tickets, and architecture; and new ways for us to interact with them. It’s not just about smarter tools, but about reshaping the very foundations of how software gets built.

That last point really stuck with me—this idea that developers of the future might act more like “conductors” than traditional coders. Instead of manually writing every line, we’ll be orchestrating intelligent agents, stepping in only when something complex or high-risk comes up. It’s a shift in mindset and skillset that feels both exciting and a bit daunting.

Gitpod’s vision of standardized, ephemeral dev environments that give agents the security and context they need is an ambitious one—but it also feels like a logical next step. If we can increase the time between AI disengagements from seconds to hours, we’re not just making developers more efficient—we’re redefining what it means to build software in the first place.

You can read Christian Weichel’s article here: “Time between disengagements: the rise of the software conductor.”