Businesses can remain dependable only if they get a full grip on risk and complexity
This latest BriefingsDirect discussion, recorded at the recent Open Group Conference in Philadelphia, explores the essential role of standards in an increasingly complex and unpredictable world.
From cybersecurity risks to supply chain concerns to fast-moving trends in cloud computing, the pace of change and the pressure on businesses to adapt have never been greater. To gain a fuller grip on such risk and complexity, The Open Group is shepherding a series of standards and initiatives to provide better tools for understanding and managing true operational dependability.
BriefingsDirect sat down with the President and CEO of The Open Group, Allen Brown, at the July conference to gather an update on the efforts. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: What are the environmental variables that many companies are facing now as they try to improve their businesses and assess the level of risk and difficulty?
Brown: There are a lot of moving targets. We're looking at a situation where organizations are having to put in increasingly complex systems. They're expected to make them highly available, highly safe, highly secure, and to do so faster and cheaper. That’s kind of tough.
Gardner: One of the ways that organizations have been working toward a solution is a standardized approach, perhaps with shared methodologies, because if the different elements of a business each tackle this in their own way, progress comes slowly and can actually be more expensive.
Perhaps you could paint for us the vision of an organization like The Open Group in helping organizations standardize and be a little more thoughtful and proactive toward these changing elements?
Brown: The headline of The Open Group's vision is "Boundaryless Information Flow." That was established back in 2002, at a time when organizations were breaking down the stovepipes or the silos within and between organizations and getting people to work together across functions. Having done that, or having made some progress toward it, they found that the applications and systems had been built for those silos. So how can we provide integrated information for all those people?
As we have moved forward, those boundaryless systems have become bigger and much more complex. Now, boundarylessness and complexity are giving everyone different types of challenges. Many of the forums or consortia that make up The Open Group are all tackling it from their own perspective, and it’s all coming together very well.
We have got something like the Future Airborne Capability Environment (FACE) Consortium, which is a managed consortium of The Open Group focused on federal aviation. In the federal aviation world they're dealing with issues like weapons systems.
Over time, building similar weapons is going to become more expensive; inflation happens. But the changing nature of warfare means you've got to produce new weapons, and you have to produce them quickly and inexpensively.
So how can we have standards that make for more plug and play? How can the avionics within the cockpit of whatever airborne vehicle be made more interchangeable, so that they can be adapted more quickly and at lower cost? After all, cost is a major pressure on government departments right now.
We've also got the challenges of the supply chain. Because of the pressure on costs, it's critical that large, complex systems are developed using a global supply chain; it's impossible to do it all domestically at an acceptable cost. Given that, countries around the world, including the US and China, are all concerned that what goes into their complex systems may include tainted or malicious code or counterfeit products.
The Open Group Trusted Technology Forum (OTTF) provides a standard that ensures that, at each stage along the supply chain, we know that what’s going into the products is clean, the process is clean, and what goes to the next link in the chain is clean. And we're working on an accreditation program all along the way.
We're also in a world where, when we mention security, everyone is concerned about being attacked, whether through cybersecurity or other areas of security, and we've got to concern ourselves with all of those as we go along.
Our Security Forum is looking at how we build those things out. The big thing about large, complex systems is that they're large and complex. If something goes wrong, how can you fix it in a prescribed time scale? How can you establish what went wrong quickly and how can you address it quickly?
If large, complex systems fail, it can mean loss of human life, as it did with the BP Deepwater Horizon oil disaster or with the Space Shuttle Challenger. Or the cost could be financial. In many organizations, when something goes wrong, you end up giving away service.
An example that we might use is at a railway station where, if the barriers don’t work, the only solution may be to open them up and give free access. That could be expensive. And you can use that analogy for many other industries, but how can we avoid that human or financial cost in any of those things?
A couple of years after the Space Shuttle Challenger disaster, a number of criteria were laid down for making sure you had dependable systems, that you could assess risk, and that you knew how to mitigate it.
What The Open Group members are doing is looking at how you can get dependability and assuredness through different systems. Our Security Forum has done a couple of standards that have got a real bearing on this. One is called Dependency Modeling, and you can model out all of the dependencies that you have in any system.
A very simple analogy is that if you are going on a road trip in a car, you’ve got to have a competent driver, have enough gas in the tank, know where you're going, have a map, all of those things.
What can go wrong? You can assess the risks. You may run out of gas or you may not know where you're going, but you can mitigate those risks, and you can also assign accountability. If the gas gauge is going down, it's the driver's accountability to check the gauge and make sure that more gas is put in.
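Brown's road-trip analogy maps naturally onto a small dependency model. The sketch below is illustrative only; the class and field names are assumptions for this example, not part of The Open Group's Dependency Modeling standard. The point it demonstrates is the one Brown makes: each dependency carries a risk, a mitigation, and a named accountable party.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    """One thing the goal depends on, with its risk and owner."""
    name: str
    risk: str          # what can go wrong
    mitigation: str    # how the risk is reduced
    accountable: str   # who must act

# Toy model of the road-trip example: the trip succeeds only if
# every dependency holds, and each risk has a named owner.
road_trip = [
    Dependency("competent driver", "driver is unfit", "check license and rest", "driver"),
    Dependency("fuel in tank", "running out of gas", "watch gauge, refuel", "driver"),
    Dependency("route known", "getting lost", "carry a map", "navigator"),
]

def unmitigated(deps, mitigated_names):
    """Return the dependencies whose risks are not yet covered."""
    return [d for d in deps if d.name not in mitigated_names]

# If only the fuel risk has been handled, two risks remain open.
print([d.risk for d in unmitigated(road_trip, {"fuel in tank"})])
```

A real dependency model would of course be far larger, but the same shape, dependencies with risks and accountable owners, scales up to the complex systems Brown describes.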
We're trying to get that same sort of thinking through to these large, complex systems. As you develop or evolve large, complex systems, you build in this accountability, build in an understanding of the dependencies and of the assurance cases you need, and have ways of identifying anomalies early to prevent failure. If a failure does occur, you want to minimize the stoppage and, at the same time, minimize the cost and the impact, and, more importantly, make sure that failure never happens again in that system.
The Security Forum has done the Dependency Modeling standard. They have also provided us with the Risk Taxonomy. That's a separate standard that helps us analyze risk and go through all of the different areas of risk.
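The Risk Taxonomy standard Brown mentions decomposes risk into factors such as how often a loss event occurs and how much each event costs. As a rough illustration only (the function and variable names here are assumptions, not the standard's full taxonomy), the basic roll-up can be sketched as:

```python
def annualized_loss_exposure(loss_event_frequency, loss_magnitude):
    """Toy roll-up: expected loss per year is how often a loss event
    occurs (events per year) times how much each event costs."""
    return loss_event_frequency * loss_magnitude

# e.g. a failure expected twice a year, costing $50,000 each time
print(annualized_loss_exposure(2.0, 50_000))  # 100000.0
```

The standard itself goes much further, breaking frequency and magnitude into finer-grained factors, but even this simple product shows why a shared taxonomy matters: everyone analyzing the risk is multiplying the same quantities.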
Now, the Real-time and Embedded Systems Forum has produced Dependability through Assuredness, an Open Group standard that brings all of these things together. We've had a wonderful international endeavor on this, bringing in a lot of work from Japan and working with folks in the US and other parts of the world. It's been a unique activity.
Dependability through Assuredness depends upon having two interlocked cycles. The first is a Change Accommodation Cycle, which says that, as you look at requirements, you build out the dependencies, you build out the assurance cases for those dependencies, and you update the architecture. Everything has to start with architecture now.
You build in accountability, and accountability, importantly, has to be accepted. You can't just dictate that someone is accountable; you have to have a negotiation. Then, through ordinary operation, you assess whether anomalies can be detected and fix those anomalies through new requirements that lead to new dependencies, new assurance cases, new architecture, and so on.
The other cycle that's critical in this, though, is the Failure Response Cycle. If there is a perceived failure or an actual failure, there is understanding of the cause, prevention of it ever happening again, and repair. That goes through the Change Accommodation Cycle as well, to make sure that we update the requirements, the assurance cases, the dependencies, the architecture, and the accountability.
So the plan is that with a dependable system through that assuredness, we can manage these large, complex systems much more easily.
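The two interlocked cycles Brown describes can be sketched as a pair of functions that feed each other. This is a toy model, not the standard's actual process definition; the function names, artifact names, and dictionary structure are all assumptions made for illustration. What it shows is the key interlock: a failure response always feeds a prevention requirement back into the change cycle, so the same failure never recurs unaddressed.

```python
def change_accommodation(state, new_requirements):
    """One pass of the change cycle: new requirements drive updated
    dependencies, assurance cases, architecture, and accountability."""
    state["requirements"] += new_requirements
    for artifact in ("dependencies", "assurance_cases",
                     "architecture", "accountability"):
        state[artifact] = f"updated for {len(state['requirements'])} requirements"
    return state

def failure_response(state, failure):
    """On a perceived or actual failure: understand the cause, then
    feed a prevention requirement back into the change cycle (repair)."""
    cause = f"cause of {failure}"
    prevention = f"prevent {failure} recurring"
    return change_accommodation(state, [prevention]), cause

# Normal operation: a new requirement flows through the change cycle.
state = {"requirements": ["baseline"], "dependencies": "",
         "assurance_cases": "", "architecture": "", "accountability": ""}
state = change_accommodation(state, ["new feature"])

# A failure (say, the railway barrier fault from Brown's example)
# triggers the failure response, which re-enters the change cycle.
state, cause = failure_response(state, "barrier fault")
print(len(state["requirements"]))  # 3: baseline, new feature, prevention
```

The design point the sketch captures is that there is no separate "fix it" path: repair is itself a change, so it updates the same requirements, assurance cases, and architecture as any other change.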
Gardner: Many of The Open Group activities have been focused at the enterprise architect or business architect level. On these risk and security issues, you're also focusing on chief information security officers and governance, risk, and compliance (GRC) officials or administrators. It sounds as if the Dependability through Assuredness standard aims a little higher. Is this something a board-level mentality or leadership should be thinking about, and is this something that reports to them?
Brown: In an organization, risk is a board-level issue, security has become a board-level issue, and so has organization design and architecture. They're all up at that level. It's a matter of the fiscal responsibility of the board to make sure that the organization is sustainable, and to make sure that they've taken the right actions to protect their organization in the future, in the event of an attack or a failure in their activities.
The risks to an organization are financial and reputational, and those risks can be very real. So, yes, they should be up there. Interestingly, when we're looking at areas like business architecture, sometimes that might be part of the IT function, but very often now we're seeing it report through the business lines. Even in governments around the world, the business architects are very often reporting up to business heads.
Gardner: Here in Philadelphia, you're focused on some industry verticals, finance, government, health. We had a very interesting presentation this morning by Dr. David Nash, who is the Dean of the Jefferson School of Population Health, and he had some very interesting insights about what's going on in the United States vis-à-vis public policy and healthcare.
One of the things that jumped out at me was, at the end of his presentation, he was saying how important it was to have behavior modification as an element of not only individuals taking better care of themselves, but also how hospitals, providers, and even payers relate across those boundaries of their organization.
That brings me back to this notion that these standards are very powerful and useful, but without getting people to change, they don't have the impact that they should. So is there an element that you've learned and that perhaps we can borrow from Dr. Nash in terms of applying methods that actually provoke change, rather than react to change?
Brown: Yes, change is a challenge for many people. Getting people to change is like leading a horse to water: will it drink? We've got to find methods of doing that.
One of the things about The Open Group standards is that they're pragmatic and practical standards. We've seen in many of our standards that, where they apply to a product or service, there is a procurement pull-through. With the FACE Consortium, for example, a $30 billion procurement means that this is real and true.
In the case of healthcare, Dr. Nash was talking about the need for boundaryless information sharing across the organizations. This is a major change and it's a change to the culture of the organizations that are involved. It's also a change to the consumer, the patient, and the patient advocates.
All of those will change over time. Some of that will be social change, where the change becomes expected and a social norm. Some of it will come as generations develop. Younger generations are more comfortable with the authority they perceive in healthcare professionals, and also with modifying the behavior of those professionals.
The great thing about the healthcare service very often is that we have professionals who want to do a number of things. They want to improve the lives of their patients, and they also want to be able to do more with less.
There's already a need. If you want to make any change, you have to create a need, but in healthcare there is already a pent-up need that people see and want to act on. We can provide them with the tools and the standards that enable them to do that, and standards are critically important, because everyone is using the same language.
It's much easier for people to apply the same standards if they are using the same language, and you get a multiplier effect on the rate of change that you can achieve by using those standards. But I believe that there is this pent-up demand. The need for change is there. If we can provide them with the appropriate usable standards, they will benefit more rapidly.
The focus of The Open Group for the last couple of decades or so has always been on horizontal standards, standards that are applicable to any industry. Our focus is always about pragmatic standards that can be implemented and touched and felt by end-user consumer organizations.
Now, we're seeing how we can make those even more pragmatic and relevant by addressing the verticals, but we're not going to lose the horizontal focus. We'll be looking at what lessons can be learned and what we can build on. Big data is a great example of the fact that the same kind of approach of gathering the data from different sources, whatever that is, and for mixing it up and being able to analyze it, can be applied anywhere.
The challenge with that, of course, is being able to capture it, store it, analyze it, and make some sense of it. You need the resources, the storage, and the capability of actually doing that. It's not just a case of, "I'll go and get some big data today."
I do believe that there are lessons learned that we can move from one industry to another. I also believe that, since some geographic areas and some countries are ahead of others, there's also a cascading of knowledge and capability around the world in a given time scale as well.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.