Many EA programs we examine suffer from one problem: they are mired in the details of a few narrow problem spaces and, as a result, are not as broadly focused as they should be. A simple narrative-based analysis can help EA teams find the right level of detail for their work. The narrative, while deceptively simple, reveals a lot about how effective an EA program is, or how effective a new one can be.
The narrative leads to a simple test, one that is not objective and would not stand scrutiny as a well-constructed metric; it takes the form of a subjective analysis. It is meant to focus EA leaders on one aspect of EA: driving consistent, enterprise-wide convergence toward a desired future-state enterprise architecture. The premise is simple: no matter how “perfect” and detailed the architecture, it will only make a difference to the organization if it is relevant and well understood, informs decision-making, and is uniformly applied to the project and asset portfolios of the enterprise.
The test begins by “imagining” the behavior of three different project teams, all assigned exactly the same project (drawn from the set of projects that are typical for the organization). The teams are not permitted to talk to each other. Ask yourself: how likely is it that all three will independently produce essentially the same fundamental design? We are not looking for identical results, e.g. the same lines of code, but whether they would choose the same primary infrastructure components, arrange them in acceptable ways, manage data and information in the same logical and physical chunks, integrate with other applications using the same style and approach, address security and reliability in the same way, etc. In the absence of enterprise architecture, the answer is usually “not very likely”. Designs will be based on each team member’s world view, their experience base, what they are comfortable with, what new things they are interested in pursuing, and how they are influenced by others, including colleagues, business-side leaders, and vendors.
The next part of this “thought experiment” is to ask yourself the following questions: In your organization, are you providing enough information so that each team will, through the normal course of project execution, get “the most important things” right? Do you know what “the most important things” are? Are you effectively communicating them and educating/guiding the organization?
The final step is to use your findings to:
- influence content creation to make sure you address “the most important things” (because EA teams often spend disproportionate time deep-diving into narrow topics that have limited impact on the enterprise; we have always said EA should be “breadth before depth”)
- communicate to and educate the people who will have the most impact across the enterprise (because EA teams often fight fires and work exclusively on the hot projects and thus only influence narrow stakeholder communities)
- create usable, easily consumable deliverables that can be quickly located (because EA teams often create excessively detailed and lengthy deliverables instead of actionable and prescriptive content, and bury them in multiple levels of hierarchy instead of exposing them with links and/or tags in easily searchable repositories)
EA leaders who regularly contemplate the implications of questions like these tend to be more systematic in defining the work they need to do, and more successful in completing that work in the way that has maximum impact across the enterprise. They also tend to suffer less churn and are not in “react mode” as often. Those who are fortunate enough to engage their leadership in analyses of scenarios like these tend to get maximum support for their ongoing EA programs.
I believe another high-level test makes sense here as well. Many of the minor, more evolutionary changes happen as a result of the day-to-day activities of the operational support groups (moving servers around, installing new versions of software, rebalancing workloads, and so on).
It is quite possible, through a combination of governance and leadership, to direct a defined set of projects toward consistent implementation and use of the primary infrastructure components. But it is also necessary to ensure that, in the day-to-day battle, the infrastructure components themselves evolve in a consistent manner.
So another question would be: if you gave support of the same infrastructure components (say, data services or application hosting services) to three different operational groups, would these components evolve in a consistent manner over some period of time, or would the accumulation of tactical decisions result in three entirely different components?