Wabi sabi carries the spirit of embracing inherent brokenness.
The wabi sabi limit is the point where the cost of maintenance exceeds the risk of rebuilding your software from the ground up. A point where technical debt has grown so high that bankruptcy is the only choice.
This is an exploration of the nature of the wabi sabi limit.
The law of program evolution dictates that we're going to run into the wabi sabi limit if our software survives long enough.
For most teams, rewriting production software from scratch is to be avoided. The reasons are best summarized by "remember Netscape 5", and better elaborated by many other people before.
But re-writes are not avoided at all costs. They do happen; decisions to do so get reached by reasonably smart people, presumably some of them with really good reasons, even taking into account what we know today. Clearly some of them think the cost and risk are worth paying.
What are the costs? Too many to count, but one interesting pre-requisite for engaging in a re-write is 100% automated test coverage.
No, it's not true; that would be ridiculous. That would rule out 99% of software in production.
Which sounds right. I'm willing to bet the true pre-requisite is probably close to 100% coverage.
Facebook Messenger proudly re-wrote their iOS app with acceptable results (I assume, I'm not a user). I also want to guess they had the luxury of freezing features while the new build caught up. Most software that matters can't afford a feature freeze. It'll be interesting to see how FB came to make this bet.
How did FB know they'd hit the wabi sabi limit? How conscious were they of this limit? How much of the decision was quantified? And of the quantification, how much was full of shit?
How are we supposed to find our wabi sabi limit? It's a tempting question, but I highly suspect it's the wrong thing to ask.
It's tempting because we want to imagine a two-dimensional graph where the cost of maintenance squares off against the risk of a re-write. As soon as we get hold of a model, we can point it out to the suits, and they'll have to take it as data.
Nope, I'm more interested in the truth. But you go ahead anyway, the suits will love it.
The cost of maintenance can be calculated if we squint hard enough, but I can't take a model seriously once risks are quantified.
There's no amount of value-at-risk equations you can throw at it to get close to modelling reality. Every risk variable we have would be built out of layers of assumptions; it's assumption-turtles all the way down.
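To make the assumption-turtles concrete, here's a toy sketch of the kind of two-dimensional model the suits would love. Everything in it is made up for illustration: the function names, the headcount, the drag factor, the failure probability, the dollar figures. Each input is itself a turtle standing on further assumptions.

```python
# A toy cost-vs-risk model for the wabi sabi limit.
# Every number below is an assumption stacked on other assumptions.

def annual_maintenance_cost(engineers: int, loaded_cost: float, debt_drag: float) -> float:
    """Cost of keeping the old system alive, inflated by tech-debt drag."""
    return engineers * loaded_cost * (1 + debt_drag)

def rewrite_value_at_risk(rewrite_cost: float, p_failure: float, revenue_at_stake: float) -> float:
    """A value-at-risk-flavored guess: build cost plus expected revenue loss."""
    return rewrite_cost + p_failure * revenue_at_stake

# Hypothetical inputs -- where did debt_drag=0.6 come from? Another turtle.
maintenance = annual_maintenance_cost(engineers=12, loaded_cost=200_000, debt_drag=0.6)
risk = rewrite_value_at_risk(rewrite_cost=3_000_000, p_failure=0.4, revenue_at_stake=10_000_000)

# "The wabi sabi limit": the point where maintenance exceeds the risk of rebuilding.
past_the_limit = maintenance > risk
print(maintenance, risk, past_the_limit)
```

The model happily spits out a verdict, and that's exactly the problem: every variable on the risk side is a quantified guess dressed up as data.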
If we can't quantify the wabi sabi limit, it sounds like it isn't much good at helping us make decisions. But maybe that's not the point.
The wabi sabi limit has more power as a mythical figure. The point is for it to be the creature behind the curtain, a cautionary tale.
It's not meant to be figured out, but to be actively avoided. Because by the time you run into it, it's too late.
One approach is to treat the wabi sabi limit as an inevitability. If we see impermanence in everything, every piece is susceptible to being thrown away and replaced.
Then it helps to decouple, to micro-service, to isolate code rot, to minimize the surface area of re-writes. Basically everything they've been telling you is a good idea.
The wabi sabi limit is not static. Even if you hold the re-write risk constant, the cost of maintenance stands a chance of going down given the will.
The wabi sabi limit is not solely an engineering concern. The suits arguably care more about the risk equation. In fact, engineers enjoy re-writes regardless of the business consequences.
Acknowledging this limit allows both camps to point to a bogeyman. Not in a way where measuring it turns it into a gamed metric, but in a way that allows the team to develop a negative capability for dodging the risky bullets of re-writes.