Test Driven Transformation

The introduction of agile methods has brought a wave of innovation in the business world that some might argue has revolutionized thinking about how organizations should be structured and how people work together. However, as it stands today, much of the promise of agile methods is wrapped up in preconfigured frameworks that offer a one-size-fits-all solution for every business challenge that a company may face. This is despite the fact that the modern organization is a highly complex structure, bordering on chaotic, that is often not best served by the application of frameworks. We see this manifested most commonly today in the failures to scale agile methods within large organizations.
The conversation about failure rates in the world of transformation is similar to prior discussions about the failure rates of projects and programs: both are notoriously vague and poorly defined. Almost all of the surveys that you find (PMI, etc.) use an embarrassing amount of anecdotal evidence to back up their assertions. The very definition of failure is usually so broad as to be completely meaningless. So, with that said, I think it’s important that we are careful with any assertions that transformations are failing or succeeding. In fact, my experience is that when we are talking about transformations within organizations, we are working at such a high level that it is never clear what counts as success or failure. After all, in a good transformation, there is a lot of failure. You experiment, try things out, and find out that they don’t work. I’m not sure I trust anyone who tells me that 100% of their efforts are always successful. That tells me that they aren’t really changing much.
When I speak of frameworks, what exactly do I mean? Well, I’m thinking globally. I’m not just talking about the large scaling frameworks like SAFe and LeSS (that’s easy); I’m also pointing the finger at small-scale, team-level frameworks like Scrum and XP. And it’s not that these frameworks can’t work or can’t be useful. In fact, I’ve seen them applied and applied well. However, more often than not, they aren’t applied well. I know there is bitter and acrimonious debate on this subject. I’ll leave that battle for others and simply say, “We can do better.”
We need to step back and reassess how we engage with organizations from the very earliest stages of the engagement. It’s no longer sufficient to make prescriptive, framework-oriented recommendations and have any reasonable expectation that those proposals will succeed. In fact, I think we may well find they are often more harmful than helpful. Framework-oriented approaches give the false promise that their solutions will solve every problem, and when they fail, they leave the customer having wasted tremendous time and energy, without anything to show for it. To make matters worse, consultants implementing such transformations will simply say that the organization didn’t have the right “mindset”, effectively blaming the customer for the failure of the transformation. This allows the consultant to wash their own hands of any responsibility for the failure as they move on to the next engagement with yet another set of pre-packaged proposals.
It’s time that we brought an end to such thinking and begin to focus on how we can properly understand the problems in the organization before we even begin to make recommendations. Then, like with any prescription for a complex system, we need to apply trial experiments, not broad frameworks, to address the specific problems that we find. Of course, in order to do this well, we need to have reliable means of assessing the health of the system. We need to treat the system like what it truly is: a complex organic structure that lives and breathes, composed of living elements interacting with each other and participating in flows of ingestion, respiration, and value production for customers. This requires a first principles approach to understanding organizations. We need to understand exactly what organizational health looks like before we can make any kind of decent assessment of the system. To make any recommendations without that sort of understanding is irresponsible.
So what’s our target? Achieving some hypothetical state of agility is not a meaningful or useful target for a transformation. Agility has no objective meaning that a business person finds useful. Instead it is an end state in search of a meaning. In short, it has none.
Alternatively, there are those who propose that we should start from a place of experimentation. That also is an insufficient starting point for working with organizations. A company is not a consultant’s toy to be experimented with. And no one wants to be the subject of experiments. The experimental approach, while well-meaning, signals rather strongly that you not only don’t understand the problem but also have no idea what the real solution is. This experimental approach should be considered by any business owner of integrity as completely useless.
What organizations need is a clear-eyed and objective assessment of what the problem is. It should be the sort of analysis that allows us to measure our effectiveness against that of our competition and our customer market in some meaningful fashion. Furthermore, based on that data, we should know what the prescription for change should be with a very high degree of confidence. Organizations are not looking for your best guess. They want to have confidence that any change or transformation effort has some reasonably provable outcome.
Another way of putting this is to think of it as test driven transformation. We must have some idea of a reasonable set of tests for assessing the relative health of a system. The results of those tests should give us some clue to the different kinds of problems that may afflict the system. They must be quantifiable, and like a doctor, we must have some notion of what the results of the tests imply. It doesn’t mean that we know for sure what the outcome will be, but it also doesn’t mean that we are taking a random shot in the dark. A good doctor will use multiple diagnostic tests to build a picture of the problems with the patient. Based on the results of those tests, the doctor is able to narrow down the treatment to a subset of commonly recommended approaches. Nothing about this is random experimentation, but rather it is a systematic, data-driven approach to understanding the nature of the problem.
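To make the analogy a little more concrete, here is a minimal sketch of what a diagnostic battery might look like in code. Everything in it is an assumption for illustration only: the metric names, thresholds, and candidate interventions are hypothetical, not a recommended or validated set of tests.

```python
from dataclasses import dataclass

# Purely illustrative: hypothetical diagnostic "tests" for organizational health.
# The metrics, thresholds, and interventions below are assumptions made for the
# sake of the sketch, not a prescribed battery.

@dataclass
class Diagnostic:
    name: str                 # what we measure
    value: float              # observed result
    healthy_max: float        # threshold above which we flag a problem
    interventions: list[str]  # candidate treatments to investigate if flagged

    def flagged(self) -> bool:
        return self.value > self.healthy_max


def narrow_treatments(diagnostics: list[Diagnostic]) -> list[str]:
    """Like a doctor reading lab results: collect only the interventions
    suggested by flagged diagnostics, so the change effort targets observed
    problems rather than applying a framework wholesale."""
    candidates: list[str] = []
    for d in diagnostics:
        if d.flagged():
            candidates.extend(d.interventions)
    return candidates


# Example battery (all numbers hypothetical).
battery = [
    Diagnostic("lead time, idea to production (days)", 90, 30,
               ["map the value stream", "reduce handoffs between teams"]),
    Diagnostic("defect escape rate (% found in production)", 12, 5,
               ["strengthen automated testing", "shorten feedback loops"]),
    Diagnostic("employee turnover (% per year)", 8, 15,
               ["investigate engagement and workload"]),
]

print(narrow_treatments(battery))
```

The point of the sketch is the shape of the reasoning, not the particular numbers: the tests are quantifiable, the results point to a subset of likely problems, and the prescription follows from the data rather than from a pre-packaged framework.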