Monolithic vs Microservices Architecture

When is monolithic architecture better than microservices, and vice versa? What are the trade-offs between the two? What makes it worth converting from one to the other? How does AI adoption change these considerations?

Monoliths, Microservices, and the Agent Harness

The April session opened with a lightning talk revisiting a question most engineering leaders thought was settled: what actually pushes teams toward microservices? The talk reframed the familiar pressures — Conway’s law shaping code to match org structure, the headwind of accumulated technical debt making a rewrite more appealing than a refactor, and the greenfield temptation to extract value and build anew. The core provocation was that these pressures conflate two things: the need for a boundary and the need for a network call. Package structure, import constraints, pre-commit hooks, and code review can all enforce boundaries without the operational weight of a distributed system. Every microservice, the argument went, is technical debt — even when it feels like it’s making things easier. Start boring, and add complexity only when you need it.
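The talk's point that package structure and pre-commit hooks can enforce a boundary without a network call is easy to automate. A minimal sketch of the idea, using a hypothetical `billing` package whose only sanctioned entry point is a `billing.api` facade (the package names and rule here are illustrative, not from the talk):

```python
import ast

# Hypothetical rule: code outside the `billing` package may only import
# its public facade, `billing.api` -- never its internals. A check like
# this can run as a pre-commit hook to enforce the boundary in-process.
FORBIDDEN_PREFIX = "billing."
ALLOWED = {"billing.api"}

def boundary_violations(source: str, module_name: str) -> list[str]:
    """Return imports in `source` that reach past the billing facade."""
    if module_name.startswith("billing"):
        return []  # code inside the package may import its own internals
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for name in targets:
            if name.startswith(FORBIDDEN_PREFIX) and name not in ALLOWED:
                violations.append(name)
    return violations

print(boundary_violations("from billing.ledger import post_entry", "checkout.cart"))
# -> ['billing.ledger']
```

Tools like import-linter package up the same idea, but the point stands either way: the boundary is real, and no distributed system was required to get it.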

The Lean Coffee that followed carried the question into four topics.

What is the ROI of complexity?

The group pushed back on the negative connotation of the word itself, distinguishing essential complexity from overhead. The real cost is how much of the system a developer has to hold in their head to ship a change. That led into the session’s central tradeoff: when do you pay the complexity tax? Upfront, by designing clean service contracts on day one and accepting distributed-system overhead before you know where the real domain seams are — or later, by starting monolithic, moving fast, and paying at the split? The case for paying later is that you often don’t know where the natural boundaries are until the system teaches you, and drawing them prematurely risks a double whammy when you inevitably redraw them. The case for paying earlier is that every month a monolith grows is another month of dependency leakage accumulating in places the original design never intended. By the time you go to split, the hard part isn’t the split itself — it’s finding and untangling the hidden coupling that a clean boundary would have prevented from the start.

Change impact with monoliths

SLAs emerged as the decisive factor on change impact. Tight SLAs favor microservices because blast radius stays containable — a failure in one service doesn’t cascade across the whole system. Monoliths repay you with simpler observability and reporting since everything lives in one place, but they’re slower and more expensive to bring back up when things do fail — “bringing the whole elephant back up,” as one attendee put it.

Microservices in the AI harness

The conversation then took a turn that would have been hard to imagine a year ago: the same debate is now replaying one layer up, inside coding agent harnesses. Teams are split between building a single large harness that tries to do everything versus orchestrating an ensemble of smaller, purpose-built ones. The parallels are nearly one-to-one, down to the same question of when to pay the complexity tax. Foundational models are such generalists that the harness itself becomes the thing injecting intent and specialization, which means tuning the harness may matter more than which model sits behind it.

What are the boundaries for a service, and when do you make a new one?

Several attendees shared the moments their organizations felt the weight of the monolith. One team shipped a statically linked binary to end users and watched it balloon until it hit OS-level limits on standard consumer machine configurations — a hard physical ceiling that forced the break. Others described softer but equally telling signals. Onboarding time for new engineers climbing steadily, because ramping into a single small corner of the product had come to require loading the context of the whole system. Diff hot spots in unexpected places — a shared subsystem (an event notification layer was the example) showing up in nearly every unrelated PR, which usually means a dependency isn’t properly abstracted from the consumers using it. And deployment friction: when rolling out a small quality-of-life fix becomes a big operation because the whole monolith has to go out, devops has become the bottleneck for changes that shouldn’t have one. Underneath all of these, one heuristic kept surfacing — follow the data. If the data stores naturally separate, the services probably should too.
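The diff hot-spot signal is cheap to measure from version-control history: count how often each file shows up across commits. A minimal sketch over illustrative commit data (the file names are invented; in practice you would feed in the output of `git log --name-only`):

```python
from collections import Counter

# Stand-in for parsed `git log --name-only` output: the set of files
# touched by each commit. Sample data only, not from a real repo.
commits = [
    {"notify/events.py", "cart/checkout.py"},
    {"notify/events.py", "search/index.py"},
    {"notify/events.py", "auth/login.py"},
    {"cart/checkout.py", "cart/models.py"},
]

def hot_spots(commits, threshold=0.5):
    """Files touched in more than `threshold` of all commits."""
    counts = Counter(f for files in commits for f in files)
    return [f for f, n in counts.most_common() if n / len(commits) > threshold]

print(hot_spots(commits))
# -> ['notify/events.py']  (touched in 3 of 4 otherwise-unrelated commits)
```

A file like the event notification layer above, appearing in commits that have nothing else in common, is exactly the poorly abstracted dependency the group described.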

The throughline across all four topics was unmistakable: whether you’re drawing a boundary in a codebase, a distributed system, or an agent harness, the fundamentals haven’t changed — only the layer you’re drawing them at has.


Jack Moffatt is the CTO and Co-founder of Linkt.

