Many of the latest technologies -- such as Internet of Things (IoT) platforms, big data analytics, and cloud computing -- are making data-driven and efficiency-focused digital transformation more powerful. But exploiting these advances to improve municipal services poses unique obstacles for cities and urban government agencies. Challenges range from a lack of common data-sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public-sector budgets.
The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.
BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.
The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?
Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens -- and money is just a means to achieving that end.
This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.
Bottom-up blueprint approach
Brancato: Within big companies we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. This CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to affect strategy. And what an organization will do is then top-down.
In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.
There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.
Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.
This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help.
Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.
These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.
Sunderland: I agree with everything that’s been said so far. In the particular case of New York City -- and with a lot of cities in the US -- cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set.
The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive.
Create demand to drive demand
Greater continuity can nonetheless come from creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy which was, you should implement change that can’t be undone.
If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need.
In like fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract -- without somehow having demand developed by the user base -- is pretty difficult.
By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector.
The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand.
Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications?
Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.
This is a challenge for any Enterprise Architecture (EA) because you can’t work from the top down; you can’t simply specify your architecture roadmap. You have to pick projects that are convenient to do and that fit into your larger picture, and so on.
It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.
Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives?
Framework for a common foundation
Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously -- and I suspect that this is actually a problem that cities share with corporate enterprises.
Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains -- and they need to share data. What’s needed is some kind of framework to allow data sharing to happen.
This framework has to be at two levels. It has to be at a policy level -- and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.
Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things?
Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Anders mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.
One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.
Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.
The fundamental principle here for the citizens of the city is that there is no wrong door: a citizen can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority -- not a specific agency or department of the city.
Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services.
Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?
Public- and private-sector parity
Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?
This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to exercise their displeasure by their vote, or by moving to some other place, and is then no longer working or living there.
Traceability is also important. If I use some service, it’s traceable to some city strategy and to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data, it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show its value.
So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.
Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public -- and if we know a common data framework is important -- why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?
Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.
In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn’t exist [in the public sector]. In New York City, you have 100 different entities; 50 to 60 of them are agencies providing services. They have built vast legacy IT systems that don’t interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.
The idea of adopting standards and frameworks is one approach. The idea is you will then grow from there. The idea of creating a law that tries to implement uniformity -- like an Amazon or Facebook can -- would be doomed to failure, because nobody could actually afford to implement it.
Since you can’t do top-down solutions -- even if you pass a law -- the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up -- as opposed to the top-down.
Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?
Not something to legislate
Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful.
The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”
That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management.
It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.
Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?
Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.
Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure (OPEX) budget and also make capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that can be an added challenge?
How to pay for IT?
Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us -- and the vendors -- into a world of interoperability and a common data approach.
Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.
Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector.
Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.
What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud approach because capital expenditures disappear when you buy services, instead of licenses, on the data center servers that you procure and own.
This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.
The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.
There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.
So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.
Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?
Cities in the cloud, but services grounded
Saha: Across the world, many governments -- not just local governments but even state and central governments -- are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.
It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.
Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.
Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?
Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.
One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.
There is still a perception that the cloud is something dangerous or not safe. Another view is that the cloud is actually safer, offering resilient solutions and keeping data secure.
This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.
Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.
Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or looking at DoDAF, in the defense sector, when they had similar problems, are there solutions there to learn from?
Smart city success strategies
Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities.
One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling a change management across the whole ecosystem.
At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.
Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.
Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?
Collective, crystal-clear standards
Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.
Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.
Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?
Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DoDAF, MODAF, and TOGAF.
We have a really nice collection of patterns that recognize that the data is the mechanism that ties it together. I would have a look at the open platform and the work they are doing to tie-in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.
The notion of data catalogs, which are the children of these service catalogs, provides the proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and then are traceable up to the strategy.
I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.
Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?
Use-case consumer value
Sunderland: I like the idea of reference architectures attached to use cases because -- for better or worse -- when folks engage around these issues -- even in large entities like New York City -- they are going to be engaging for specific needs.
Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.
Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.
When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?
Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad hoc basis. One important way to get standardization enforced -- or at least considered for new implementations -- is to integrate the effort as a necessary step in the established procurement and security governance processes.
Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.
This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.
Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?
Prioritize policy and people
Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.
Policy development around data sharing -- and specifically around personal data sharing -- should come first, because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something that cities are particularly concerned with because they hold a lot of data about their citizens.
Gardner: Dr. Saha, same question to you.
Saha: I look at it in two ways. One is for cities to adopt smart city approaches by identifying very-high-demand use cases that pertain to the environment, mobility, the economy, or health -- or whatever the priority is for that city.
Identifying such high-demand use cases is important because the impact is directly seen by the people. The benefits of having a smarter city need to be visible to the people using those services, number one.
The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.
It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.
Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?
City smarts and safety first
Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.
In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.
In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.
Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such a traceability model links those initiatives and strategies through to the service -- all the way down to the data, so that eventually data can be tied back to the roles.
For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. This extends to the notion that capabilities in the city are carried out by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traceable to these sensors.
Gardner: Don Sunderland, what have you seen that works, and what should we doing more of?
Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it. It’s there on many levels, but a kind of guerrilla tactic would be to tap into that demand to create location-aware applications, mobile apps, that are freely available to citizens.
The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are -- whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.
When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach -- a location-aware app made available to the citizens -- is a way to drive even more demand from even more people.
The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.
We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks -- from use of illegal and unethical labor practices to hidden environmental malpractices.
Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation, are our panelists, Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft, and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?
Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.
First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good.
Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society -- and not just generating dividends and capital growth for shareholders.
Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent -- particularly Millennials and Generation Z -- and they are more likely to get better investment rates as well.
Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?
Better business decisions with AI
Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.
We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings.
These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.
Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba.
Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.
McVeigh: There's been a lot of shift in the market in the last five to 10 years. I think it predominantly shifted with environmental regulatory compliance. Companies were being forced to look at issues they never really had to dig beneath and understand -- not just their own footprint, but their supply chain's footprint. And then 10 years ago, of course, we had the California Transparency in Supply Chains Act, and then the UK Modern Slavery Act, and we keep seeing more governance compliance requirements.
But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.
We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.
At Verisk Maplecroft, we came into business about 12 years ago, which was really interesting because the company came out of a number of individuals who were getting their master's degrees in supply-chain risk. They began to look at how to quantify risk issues that are difficult and complex to understand, and how to make them simple, easy, and intuitive.
They began with a subset of risk indices. I think initially we looked at 20 risks across the board. Now we're up to more than 200 risk issues across four thematic categories. We begin at the highest pillar of thinking about risks -- politics, economics, environmental, and social. Under each of those themes are specific issues; if we're talking about social risk, we're looking at diversity and labor. And under each of those issues we go a step further, to the indicators -- it's that whole data matrix coming together that tells the actionable story.
Some companies still just want to check a [compliance] box. Other companies want to dig deeper -- but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips.
Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.
Rakowski: EcoVadis conducts business sustainability ratings, and the way we are used in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”
They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business.
What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out what we call a right-sized questionnaire to the supplier, and the supplier responds to material questions based on what kind of goods or services they provide, what geography they operate in, and what size of business they are.
Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score.
And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices.
Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own -- and SAP Ariba is now aggregating the aggregators.
How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from?
Timely insights, best business decisions
Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the aspects of the procurement process that support what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.
The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there’s risk, there is opportunity.
So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.
Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?
Harris: Some risk aspects, such as natural disasters, demand the most timely reaction of all. With the alerts from our data sources on earthquakes, for example, we are able to very quickly ascertain who the affected suppliers are and where their distribution centers and factories are.
When you can understand very quickly what the impacts are going to be, and how to respond, your mitigation plan can prevent the supply chain from coming to a complete halt.
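Harris's earthquake example can be made concrete with a small sketch. This is not SAP Ariba's implementation; the supplier names and coordinates below are invented, and a real system would use far richer geospatial data, but the core step of matching an event location against known facility locations is essentially a distance check:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def affected_suppliers(facilities, epicenter, radius_km):
    """Return supplier names with a facility within radius_km of the epicenter."""
    lat0, lon0 = epicenter
    return sorted({name for name, lat, lon in facilities
                   if haversine_km(lat0, lon0, lat, lon) <= radius_km})

# Hypothetical facility data: (supplier, latitude, longitude)
facilities = [
    ("Acme Plastics", 35.7, 139.7),  # Tokyo area
    ("Beta Metals", 34.7, 135.5),    # Osaka area
    ("Gamma Chips", 37.6, 127.0),    # Seoul area
]

print(affected_suppliers(facilities, epicenter=(35.6, 139.8), radius_km=100))
# → ['Acme Plastics']
```

Feeding such a flagged list into an alerting workflow is what turns raw event data into the rapid mitigation Harris describes.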
Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them?
AI risk assessment reaps rewards
Harris: If you look at AI, this is a great technology, and what we're trying to do is really simplify the process for our customers of figuring out how to take action on the information we're providing. Rather than them having to be experts in risk analysis and doing all of this analysis themselves, AI allows us to surface those risks through the technology -- through our procurement suite, for example -- to inform the decisions they're making.
For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time.
A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas.
Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?
Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.
We collect a great amount of documentation from the suppliers we're evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information our analysts are looking for and compress the evaluation time for each supplier from about six or seven hours down to three or four. So that's essentially allowing us to double our workforce of analysts in a heartbeat.
The other thing it's doing is helping scan through material news feeds. We're collecting more than 2,500 news sources from around the world, including reports from China Labor Watch or OSHA. These technologies help us scan those reports for material information and then put it in front of our analysts. It helps them surface, in real time, the news we are sure is material.
And that way we're combining AI with real human analysis and validation to make sure that what we're serving is accurate and relevant.
Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.
We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article.
By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contract reduction; maybe the company just lost a piece of business and had to downsize, and so that presents a potential risk to our business, because maybe this supplier is on their way out of business.”
And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly.
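The pipeline Harris outlines -- matching an article to a known supplier and deriving a risk score from its content -- can be illustrated with a deliberately crude sketch. A production system would use trained NLP models for entity resolution and classification; the keyword weights and supplier names below are invented purely for illustration:

```python
# Toy sketch of news-based supplier risk flagging. A production system uses
# trained NLP models for entity resolution and classification; the keyword
# weights and supplier names here are invented purely for illustration.
RISK_SIGNALS = {
    "bankruptcy": 0.9, "recall": 0.7, "lawsuit": 0.6,
    "downsize": 0.5, "strike": 0.4, "contract": 0.2,
}

def score_article(text, known_suppliers):
    """Return (matched_supplier, risk_score) for one news snippet."""
    lowered = text.lower()
    supplier = next((s for s in known_suppliers if s.lower() in lowered), None)
    score = min(1.0, sum(w for kw, w in RISK_SIGNALS.items() if kw in lowered))
    return supplier, round(score, 2)

article = "Acme Plastics to downsize after losing a major contract."
print(score_article(article, ["Acme Plastics", "Beta Metals"]))
# → ('Acme Plastics', 0.7)
```

Even this toy version shows the shape of the output: an entity match plus a score that a category manager can triage, which is what the real ML delivers at the scale of hundreds of thousands of sources.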
Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem?
The AI-aggregation advantage
McVeigh: As an aggregator of data, this is basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.
We look at many indices. We actually just revamped our social indices a couple of years ago.
Before that, you had a human sitting there; maybe they were having a bad day and they just sort of checked the box. But now we have the capability to validate that data against true sources.
Just as Emily mentioned, we were able to significantly reduce the size of our human-rights analyst team and the number of individuals it took to create an index, freeing them to work on additional types of projects for our customers. This helped our customers utilize the data being automated and generated for them.
We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it.
Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?
Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.
We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. A number of common customers are working with us as what we call design partners; there is ultimately no better partner than a customer already using solutions from all of our companies. We anticipate making this available in the Q3 2018 time frame.
And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly.
What it means is that users can get that aggregated, high-level view across all of these different risk types and metrics in one place. However, if they ultimately want to get to the nth degree of detail, they will have the ability to click through into the solutions from our partners and drill right down to that level. The aim here is to give them that high-level view to help with their overall assessments of these suppliers.
Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?
Harris: Yes, and that's a great question. We have already addressed that in our solutions today. We cover more than 200 risk types, categorized into four primary risk categories. The way the risk exposure score works is that the customer gets to decide how to weight each of the contributing attributes that go into the calculation.
If I have more of a bias toward financial risk aspects, or more of a bias toward ESG metrics, for example, then I can weight that part of the score, the algorithm, appropriately.
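The configurable weighting Harris describes amounts to a normalized weighted average. The category names, scores, and weights below are hypothetical, not SAP Ariba's actual algorithm, but they show how biasing one category shifts the overall exposure score:

```python
def risk_exposure(category_scores, weights):
    """Weighted average of per-category risk scores (0-100 scale).

    Weights need not sum to 1; they are normalized, so a customer can
    simply dial individual categories up or down.
    """
    total = sum(weights.values())
    return round(sum(category_scores[c] * w for c, w in weights.items()) / total, 1)

# Hypothetical supplier scores per risk category (higher = riskier)
scores = {"financial": 70, "environmental": 40, "social": 55, "operational": 30}

# A customer biased toward financial risk weights that category heaviest
print(risk_exposure(scores, {"financial": 3, "environmental": 1,
                             "social": 1, "operational": 1}))  # → 55.8
```

Because the supplier's financial score is the highest and carries triple weight here, the overall exposure lands well above what an equal weighting would report, which is exactly the per-customer tuning described above.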
Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well -- or not so well.
Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency -- how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?
McVeigh: One of the things that we're still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn't happen.
How do you put a tangible dollar value on something that didn't occur? What we can look at is data acquired over the past few years and understand that, as we see risk reduction over time, we can begin to source more suppliers, add diversity to the supply chain, or even trim the supply chain, depending on how a company wants to move forward with its risk landscape and supplier diversification program. It gives companies the power to make those decisions faster and to make them actionable.
And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”
And other companies are beginning to see that it's not being pushed down on them anymore from these large retailers and large organizations. It's a choice they have to make to do better business. They are also realizing that there's a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process.
We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it -- but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.
Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen?
Pay attention, be prepared
Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that.
From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great.
They can use their company's resources to bet on supply-chain actors that are doing the right thing: paying living wages, not overworking their employees, not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are really a must, regardless of regulations.
And so, it's really those individuals who are willing to stand up, take a stand, and think about how to put in place a program that will drive this culture into the business, and educate the business. Even if you're starting from a very small group that's dedicated to it, you can find a way to make it grow within a culture. I think it's critical.
Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?
Synergistic risk management
Harris: Organizationally, there tend to be a couple of different teams inside of business that manage risks. So, on the one hand there can be the kind of governance risk and compliance team. On the other hand, they can be the corporate social responsibility team.
I think, first of all, bringing those two teams together in some capacity makes complete sense, because there are synergies across them. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, but also ensure that the business is doing the right thing in the first place, which in itself helps guard against those risks.
I think getting the organizational model right, and also thinking about how best to begin mapping out their supply chains, are key. One of the big challenges here, which we haven't quite solved yet, is figuring out who the players, or supply-chain actors, in that supply chain are. It's pretty easy now to determine who the tier-one suppliers are, but who are the suppliers to the suppliers -- and who are the suppliers to those suppliers?
We've yet to build technology that can figure that out easily. We're working on it; stay posted. But trying to compile that information upfront is great, because once that mapping is done, our software and our partners' software from EcoVadis and Verisk Maplecroft can surface those kinds of risks inside and across the entire supply chain.
The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.
We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.
Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.
To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, and he is based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, and he is based in Grenoble, France, and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Let's begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing -- and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.
Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.
They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.
Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.
In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing -- but without commensurate hybrid management and monitoring.
They can’t assure security or compliance and they cannot determine true and comparative costs -- never mind gain optimization for efficiency across the cloud computing spectrum.
Old management into the cloud
But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations’ divisions and adjustments must be handled in a much different way.
Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.
Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.
Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured -- and at compelling prices.
Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.
Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.
IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.
Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information.
Ian, please tell us more about the survey findings.
IT leaders respond to their needs
Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.
More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.
The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two to five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].
What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments -- whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.
What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.
The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.
Some of the business impacts range from resources tied up by collaborating with the cloud vendor to trying to solve these issues, and the extra time required to resolve issues impacting service level agreements (SLAs) and contractual agreements, and prolonged down time.
What we regularly see is that the adoption of cloud often translates into a loss of transparency into what’s deployed, the health of what’s deployed, and how that can impact the business. This insight strongly shapes our investments and some of the solutions we will talk about. The primary concern is visibility into what’s being deployed -- and what depends on internal, on-premises resources as well as private and public cloud instances.
People need to see what is impacting the delivery of services as a provider, and whether that’s due to issues with local or remote resources, or the connectivity between them. It’s compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere; it could be any one of those links. The ability for people to know where those issues are is not happening fast enough for many, with some 96 percent indicating that the issues are being resolved too slowly.
How to gain better visibility?
What are the key challenges that need to be addressed when monitoring hybrid IT environments? People have challenges with discovery, understanding, and visualizing what has actually been deployed, and how it is impacting the end-to-end business.
They have limited access to the cloud infrastructure, and they face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what’s happening.
It shows some of the real challenges that people are facing. And as the world shifts to being more dependent on the services it consumes, traditional methods are not going to be properly adapted to the new environment. Newer solutions are needed -- new ways of gaining visibility, and of measuring availability and performance.
I think what’s interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to properly understand how service delivery might be impacting their own businesses. In effect, IT is flying blind in the clouds, as it were.
The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts.
So, one of my next questions was: Across the different monitoring types, what’s needed for the hybrid IT environment? What should people be focusing on? Security and infrastructure monitoring, end-user experience monitoring, service delivery monitoring, and cloud costs all ranked highly among what people believe they need to be able to monitor. Whether you are a provider or a consumer -- and most people end up being both -- monitoring is really key.
People say they really need to span infrastructure monitoring, metrics monitoring, and end-user, security, and compliance monitoring. But even that’s not enough, because to properly govern the service delivery you also have to keep an eye on the costs -- the cost of what’s being deployed -- and on how you can optimize resources against those costs. You need that analysis whether you are the consumer or the provider.
The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data in order to find root causes, even from a predictive perspective. Clearly, people here expect scalability; they expect to be able to use a big data platform.
Consumers of cloud services should be measuring what they receive and be capable of seeing what’s impacting the service delivery. No one is really so naive as to say that the infrastructure is somebody else’s problem. When it’s part of the service -- equally impacting the service that you are paying for and delivering to your business users -- then you had better have the means to see where the weak links are. That should be the minimum to seek; but you still need the means to prove to your providers when they’re underperforming, and to renegotiate what you pay for.
Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. We should be able to govern the aspects of detecting when the service is degrading.
So when the service performs poorly, workers’ productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.
So that, Dana, is the set of the different results that we got out of this survey.
A new need for analytics
Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.
Gary Brandt: Thanks, Dana. As the survey results were outlined and as Ian described, there are many challenges and numerous types of monitoring in enterprise hybrid IT environments. With the variety and volume of data generated across these complex hybrid environments, humans simply can’t look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner, given the volume and complexity of these environments.
So how do we deal with all of this? It’s where analytics, advanced analytics via ML, really brings in value. What’s needed is a set of automated capabilities such as those described in Gartner’s definition of AIOps and these include traditional and streaming data management, log and wire metrics, and document ingestion from many different types of sources in these complex hybrid environments.
Dealing with all of this -- when you are not quite sure where to look and have all this information coming in -- requires advanced analytics and clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is what Gartner is really trying to guide the market toward, and it shows where the industry is moving. The key capabilities they speak about are analytics that allow for prediction, the capability to find anomalies in vast amounts of data, and then the means to pinpoint the root cause -- or at least eliminate the noise and focus attention on the right areas.
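As one illustrative sketch of the anomaly detection described here -- not Micro Focus’ actual algorithm, and with a window size and threshold chosen purely for demonstration -- a rolling-window z-score can flag a metric sample that deviates sharply from recent behavior:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# A steady CPU-utilization metric with one injected spike at index 30
metrics = [50.0 + (i % 3) for i in range(40)]
metrics[30] = 95.0
print(find_anomalies(metrics))  # → [30]
```

Real AIOps platforms use far richer models (seasonality, multivariate correlation), but the principle is the same: learn normal behavior, then surface deviations.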
We are making this Gartner report available for a limited time. What we have also found is that people often don’t have the time or the skill set to deal with these activities; they need to focus on the business user and on the different issues that come up in these hybrid environments. The AIOps capabilities that Gartner speaks about are great.
But without the automation to drive the activities or the responses that need to occur, a piece is missing. When we look at our survey results, it was clear that the vast majority of respondents -- well into the 90-percent range -- consider automation highly critical. You need to see how an event or metric trend impacts a business service, and whether that service pertains to a local, on-premises solution or to a remote solution in a cloud somewhere.
Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated -- and, more importantly, kept up to date, because in these complex environments things are changing so rapidly.
Sense and significance of all that data?
Micro Focus’ approach uses analytics to make sense of the vast amount of data coming in from these hybrid environments and to drive automation. Automated discovery, monitoring, and service analytics are really critical -- and they must be applied across hybrid IT, against your resources, and mapped to the services that you define.
Those are the vast amounts of data that we just described. They come in the form of logs and events and metrics, generated from lots of different sources in a hybrid environment across cloud and on-prem. You have to begin to use analytics as Gartner describes to make sense of that, and we do that in a variety of ways, where we use ML to learn behavior, basically of your environment, in this hybrid world.
And we need to be able to surface the most significant data -- the significant information in your messages -- to really help find the needle in the haystack. When you are trying to solve problems, we have analytics capabilities that provide predictive learning to operators, giving them the chance to anticipate and remediate issues before they disrupt the services in a company’s environment.
When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt.
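A minimal sketch of how “significant” messages might be surfaced from a log flood -- here by masking variable parts into templates and reporting the rare templates. The masking rules and rarity threshold are illustrative assumptions, not the product’s actual method:

```python
import re
from collections import Counter

def significant_messages(log_lines, rare_ratio=0.05):
    """Group log lines into templates by masking variable parts
    (hex IDs, numbers), then return the rare templates -- often
    the most significant signals amid routine noise."""
    def template(line):
        line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
        return re.sub(r"\d+", "<NUM>", line)

    counts = Counter(template(line) for line in log_lines)
    total = sum(counts.values())
    return [t for t, c in counts.items() if c / total <= rare_ratio]

logs = ["request 42 served in 13 ms"] * 95 + [
    "disk /dev/sda1 write error at 0xFF3A"] * 5
print(significant_messages(logs))
# → ['disk /dev/sda<NUM> write error at <HEX>']
```

The routine request messages collapse into one dominant template, leaving the rare disk-error template as the candidate needle in the haystack.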
And then we take this further because we have the analytics capability that’s described by Gartner and others. We couple that with the ability to execute different types of automation as a means to let the operator, the operations team, have more time to spend on what’s really impacting the business and getting to the issues quicker than trying to spend time searching and sorting through that vast amount of data.
And we built this on different platforms. One of the key things that’s critical when you have this hybrid environment is to have a common way, or an efficient way, to collect information and to store information, and then use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.
We like to refer to this as autonomous operations, and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.
Operations Bridge in more detail
Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.
We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.
Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.
All core use cases for Operations Bridge are tied to the underpinning Vertica big data analytics platform. We consolidate all the different types of data we collect -- whether business transactions, IT infrastructure, application infrastructure, or business services data. All of that is moved into a single data repository and then reduced in order to understand the original root cause.
And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.
This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.
As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We provide the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnected -- independent of the technology, whether deployed in a public cloud, a private cloud, or even a classical, non-cloud infrastructure. Customers can then understand how everything connects, and they can use the toolset to navigate through it all via a modern HTML5-based interface and look at all the data in one place.
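The dynamically updated service model described above can be illustrated with a toy dependency graph. The service names here are hypothetical, and this is only a sketch of how modeled interconnections let a tool trace which services a failing component impacts:

```python
# Hypothetical service-dependency model: each service lists what it
# depends on. A real tool would discover this topology automatically.
DEPENDS_ON = {
    "online-banking": ["web-tier"],
    "web-tier": ["app-tier"],
    "app-tier": ["db-cluster", "cache"],
}

def impacted_services(failed, graph=DEPENDS_ON):
    """Return every service whose dependency chain reaches `failed`."""
    def reaches(node):
        if node == failed:
            return True
        return any(reaches(dep) for dep in graph.get(node, []))

    return {s for s in graph if s != failed and reaches(s)}

print(sorted(impacted_services("db-cluster")))
# → ['app-tier', 'online-banking', 'web-tier']
```

With such a model kept up to date, an infrastructure alert can be translated immediately into the business services at risk.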
They are able to consolidate information from more than 250 different technologies into a single place: log files, events, metrics, topology -- everything together, to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.
Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether private cloud, your own stack implementations, a hyperconverged (HCI) stack, like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.
We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.
Now, once we actually integrate those toolsets, we integrate them tightly with the native tools on Amazon or with Docker tools, for example. You can include these tools so that you can then automate processes from within our console.
Customers vote a top choice
And, best of all, we have been getting positive feedback from customers in the cloud monitoring community. That feedback helped earn us a Readers’ Choice Award from Cloud Computing Insider in 2017, ahead of the competition.
This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on the activity.
Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks -- or potentially kick off provisioning tasks to solve whatever problems you can imagine. You can not only identify the root cause; you can automatically kick off a workflow to address the specific problem.
If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.
Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.
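In the same spirit as the automated workflows described here (though not the OpsBridge API itself -- the rule names, actions, and event fields below are purely illustrative), event-to-runbook dispatch can be sketched as:

```python
# Hypothetical remediation actions; a real workflow engine would call
# out to orchestration tooling rather than return strings.
def restart_service(event):
    return f"restarted {event['service']}"

def scale_out(event):
    return f"added capacity for {event['service']}"

# Map a diagnosed root cause to its automated runbook
RUNBOOKS = {
    "service_down": restart_service,
    "high_latency": scale_out,
}

def remediate(event):
    """Dispatch an event to its remediation workflow, or flag it
    for manual handling when no automated workflow matches."""
    action = RUNBOOKS.get(event["root_cause"])
    return action(event) if action else f"manual triage: {event}"

print(remediate({"root_cause": "service_down", "service": "billing-api"}))
# → restarted billing-api
```

The key design point is the fallback: anything without a matching workflow is routed to a human, mirroring the manual tools mentioned above.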
Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data -- not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?
And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone -- wherever you’d like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, building graphical, very appealing dashboards for your different stakeholders.
Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.
Multicloud monitoring in action
David Herrera: Banco Sabadell is the fourth-largest Spanish banking group. We had a big project to migrate several systems to the cloud, and we realized that we didn’t have any visibility into what was happening in the cloud.
We are working with private and public clouds, and it’s quite difficult to correlate the information across events and incidents. We need to aggregate this information into just one dashboard. And for that, OpsBridge is a perfect solution for us.
We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.
The main benefit is that we have a detailed view of what is happening in the cloud. In the dashboard we are able to show availability and the number of resources we are using, almost in real time. We are also able to show the cost of every resource in real time, and we can even project the cost of those items.
The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.
[And that’s for] every single item that we have in the cloud now, across both the private and public clouds. The bank has invested a lot of money in this solution, and we need to show them that migrating several systems to the cloud was really a good choice in economic terms; this tool will help us with that.
Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing, because they will be able to see all the information, and also to select information that we haven’t yet gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.
We were able to build a new monitoring system from scratch that didn’t exist on the market. Now, we are able to aggregate a lot of detailed information from different clouds.
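The real-time cost view and month-end projection Herrera mentions can be approximated with a simple linear extrapolation. This is only a sketch under stated assumptions (uniform daily spend, hypothetical figures), not Banco Sabadell’s actual implementation:

```python
def project_month_cost(daily_costs, days_in_month=30):
    """Project end-of-month spend for a cloud resource by
    extrapolating the average daily cost observed so far."""
    observed = sum(daily_costs)
    avg_per_day = observed / len(daily_costs)
    remaining_days = days_in_month - len(daily_costs)
    return observed + avg_per_day * remaining_days

# 10 days of spend on a hypothetical VM at $12/day
print(project_month_cost([12.0] * 10))  # → 360.0
```

A production tool would weight recent days, account for reserved-versus-on-demand pricing, and aggregate per cloud provider, but the dashboard figure starts from this kind of projection.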