

The Open Group panel explores ways to help smart cities initiatives overcome public sector obstacles

Credit: Wikimedia Commons

The next BriefingsDirect thought leadership panel discussion focuses on how The Open Group is spearheading ways to make smart cities initiatives more effective.

Many of the latest technologies -- such as Internet of Things (IoT) platforms, big data analytics, and cloud computing -- are making data-driven and efficiency-focused digital transformation more powerful. But exploiting these advances to improve municipal services faces unique obstacles for cities and urban government agencies. Challenges range from a lack of common data-sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public sector budgets.

The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.

The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?

Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens -- and money is just a means to achieving that end.

This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.

Bottom-up blueprint approach

Brancato: Within big companies, we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. The CEO can do cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to shape strategy. What the organization does then flows top-down.

In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.

There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.

Saha: Two things make smart city projects more complex. First, large countries typically have multilevel government: one at the federal level, another at the provincial or state level, and then city-level government, too.

This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help. 

Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state, and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.

These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.

Sunderland: I agree with everything that’s been said so far. In the particular case of New York City -- and with a lot of cities in the US -- cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set. 

The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive. 

Create demand to drive demand

Credit: Wikimedia Commons

Greater continuity can nonetheless come from creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy, which was: you should implement change that can't be undone.

If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need. 

In like-fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract -- without somehow having demand developed by the user base -- is pretty difficult.

By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector. 

The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand. 

Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications? 

Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.

This is a challenge for any Enterprise Architecture (EA) effort because you can't work from the top-down; you can't simply specify your architecture roadmap. You have to pick the projects that are convenient to do and that fit into your larger picture, and so on.

It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.

Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives? 

Framework for a common foundation 

Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously -- and I suspect that this is actually a problem that cities share with corporate enterprises. 

Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains -- and they need to share data. What’s needed is some kind of framework to allow data sharing to happen. 

This framework has to be at two levels. It has to be at a policy level -- and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.

Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things? 

Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Anders mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.

One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.

Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.

The fundamental principle here for the citizens of the city is that there is no wrong door; they can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority -- not a specific agency or department of the city.

Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services. 

Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?

Public- and private-sector parity

Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?

This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to exercise their displeasure by their vote, or by moving to some other place, and is then no longer working or living there. 

Traceability is also important. If I use some service, it's traceable to some city strategy and to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.

So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.
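As a rough illustration of the traceability idea Brancato describes -- metrics computed from data, data tracing to a service, and the service tracing to a city strategy -- here is a minimal Python sketch. The class names, fields, and the pothole example are hypothetical, not part of any Open Group standard or city system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    value: float                        # e.g., average days to close a request

@dataclass
class DataSet:
    name: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class CityService:
    name: str
    strategy: str                       # the city strategy this service traces to
    datasets: List[DataSet] = field(default_factory=list)

    def evidence(self) -> List[str]:
        """Trace each metric back through its dataset to the service and strategy."""
        return [
            f"{m.name}={m.value} <- {d.name} <- {self.name} <- strategy '{self.strategy}'"
            for d in self.datasets
            for m in d.metrics
        ]

# Hypothetical example: a 311-style pothole repair service line.
pothole = CityService(
    name="Pothole repair requests",
    strategy="Safe and well-maintained streets",
    datasets=[DataSet("service_requests", [Metric("avg_days_to_close", 6.2)])],
)
for line in pothole.evidence():
    print(line)
```

The point of the sketch is only that every metric carries an unbroken chain back to a strategy, which is what lets the city show, continuously, that a service is delivering value.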

Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public -- and if we know a common data framework is important -- why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?

Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.

In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn't exist [in the public sector]. In New York City, you have 100 different entities; 50 to 60 of them are agencies providing services. They have built vast legacy IT systems that don't interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.

Adopting standards and frameworks is one approach; the idea is that you then grow from there. Creating a law that tries to implement uniformity -- like an Amazon or Facebook can -- would be doomed to failure, because nobody could actually afford to implement it.

Since you can’t do top-down solutions -- even if you pass a law -- the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up -- as opposed to the top-down.

Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?

Not something to legislate

Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful. 

The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”

That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management. 

It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.

Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?

Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.

Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure budget (OPEX) and also gain capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that can be an added challenge?

How to pay for IT?

Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us -- and the vendors -- into a world of interoperability and a common data approach.

Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.

Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector. 

Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.

What's happened is that the CAPEX model has become less usable as we've moved to the cloud approach, because capital expenditures disappear when you buy services instead of licenses and the data center servers that you procure and own.

This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.

The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.

There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.

So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.
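To make the funding tension Sunderland describes concrete, here is a deliberately simplified sketch with purely hypothetical numbers: a bond-funded (CAPEX) data-center build with a mid-life refresh versus a cloud subscription paid as an ongoing operating expense. It illustrates only why the spend shifts categories, not any real city's budget.

```python
# Hypothetical figures, for illustration only.
YEARS = 10
capex_build = 5_000_000          # bond-funded, up-front hardware and licenses
capex_refresh = 2_000_000        # hardware refresh in year 5, also bond-eligible
opex_cloud_per_year = 900_000    # cloud subscription, paid from the operating (tax-funded) budget

capex_total = capex_build + capex_refresh        # raised through bond sales
opex_total = opex_cloud_per_year * YEARS         # comes out of annual operating budgets

print(f"Bond-funded (CAPEX) path over {YEARS} years: ${capex_total:,}")
print(f"Cloud subscription (OPEX) path over {YEARS} years: ${opex_total:,}")
# The totals may be comparable, but only the first path fits the traditional
# bond-funding rules -- the second must compete for tax dollars every year.
```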

Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?

Cities in the cloud, but services grounded

Saha: Across the world, many governments -- not just local governments but even state and central governments -- are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.

It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.

Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.

Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?

Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.

One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.

There is still a perception that the cloud is something dangerous or not safe. Another view is that the cloud is a lot safer in terms of having resilient solutions and keeping the data safe.

This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.

Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.

Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or looking at DoDAF, in the defense sector, when they had similar problems, are there solutions there to learn from?

Smart city success strategies

Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities. 

One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling a change management across the whole ecosystem.

At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.

Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.

Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?

Collective, crystal-clear standards 

Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.

Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.

Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?

Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DoDAF, MODAF, and TOGAF.

We have a really nice collection of patterns that recognize that the data is the mechanism that ties it together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.

The notion of data catalogs, which are the children of these service catalogs, provides proof of the activities of human systems, machines, and sensors in fulfilling their capabilities, and those catalogs are then traceable up to the strategy.

I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.

Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?

Use-case consumer value

Sunderland: I like the idea of reference architectures attached to use cases because -- for better or worse -- when folks engage around these issues -- even in large entities like New York City -- they are going to be engaging for specific needs.

Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.

Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.

When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?

Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad-hoc basis. One important way to get standardization enforced -- or at least considered for new implementations -- is to integrate the effort as a necessary step in the established procurement and security governance processes.

Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.

This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.
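A minimal sketch of the governance gate Lisdorf describes: before a smart-city purchase is funded or security-cleared, the vendor's declared standards are checked against a required list. The standard names and the check itself are illustrative assumptions, not an actual New York City process.

```python
from typing import Dict, List

# Hypothetical standards a new smart-city solution must declare support for.
REQUIRED_STANDARDS = {"open-data-api", "tls-1.2+", "common-data-model"}

def procurement_gate(vendor: str, declared_standards: List[str]) -> Dict[str, object]:
    """Return whether a proposed solution passes the standards check, and what is missing."""
    missing = sorted(REQUIRED_STANDARDS - set(declared_standards))
    return {
        "vendor": vendor,
        "approved_for_funding": not missing,
        "missing_standards": missing,
    }

print(procurement_gate("Acme Sensors", ["open-data-api", "tls-1.2+"]))
# {'vendor': 'Acme Sensors', 'approved_for_funding': False, 'missing_standards': ['common-data-model']}
```

Tying a check like this to funding and security clearance is what pushes the standardization requirement back onto vendors, as Lisdorf notes above.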

Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?

Prioritize policy and people 

Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.

The development of policy around data sharing -- specifically around personal data sharing -- should come first, because this is a hot topic. Everyone is concerned with what happens to their personal data. It's something that cities are particularly concerned with because they hold a lot of data about their citizens.

Gardner: Dr. Saha, same question to you. 

Saha: I look at it in two ways. One is for cities to adopt smart city approaches. Identify very-high-demand use cases that pertain to the environment, mobility, the economy, or health -- or whatever the priority is for that city.

Identifying such high-demand use cases is important because the impact is directly seen by the people; the benefits of having a smarter city need to be visible to the people using those services. That's number one.

The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.

It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.

Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?

City smarts and safety first 

Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.

In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.

In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.

Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such traceability and a model link those initiatives and strategies through to the service -- all the way down to the data, so that eventually data can be tied back to the roles.

For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. This extends to the notion that capabilities in the city are delivered by humans, systems, machines, and sensors that are getting increasingly smart. So all of the data can be traceable to these sensors.

Gardner: Don Sunderland, what have you seen that works, and what should we be doing more of?

Mobile-app appeal

Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it. It’s there on many levels, but a kind of guerrilla tactic would be to tap into that demand to create location-aware applications, mobile apps, that are freely available to citizens.

The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are -- whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.

When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach to a location-aware app made available to the citizens is a way to drive more demand for even more people.
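As a sketch of the location-aware, existing-data approach Sunderland describes, the snippet below queries a city open-data portal through the Socrata SODA API for records near the user's position. The dataset ID and column names are placeholders to be looked up in the portal's catalog, and the example assumes the third-party `requests` package is installed; it is an illustration, not a City of New York application.

```python
import requests

PORTAL = "https://data.cityofnewyork.us"
DATASET_ID = "xxxx-xxxx"          # placeholder: e.g., a 311 service-request dataset
LOCATION_COLUMN = "location"      # placeholder: the dataset's point/geo column

def nearby_records(lat: float, lon: float, radius_m: int = 500, limit: int = 10):
    """Fetch open records within radius_m meters of the user's position."""
    params = {
        "$where": f"within_circle({LOCATION_COLUMN}, {lat}, {lon}, {radius_m})",
        "$limit": limit,
    }
    resp = requests.get(f"{PORTAL}/resource/{DATASET_ID}.json", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example: what is reported near City Hall? (field names depend on the dataset)
# for record in nearby_records(40.7128, -74.0060):
#     print(record.get("complaint_type"), record.get("status"))
```

The guerrilla point stands: the app adds value on top of data the city already publishes, and additional layers of information can be added as they become available.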

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

How Norway’s Fatland beat back ransomware thanks to a rapid backup and recovery data protection stack approach

Learn how an integrated backup and recovery capability allowed production processing systems to be snapped back into use in only a few hours.

Ryder Cup provides extreme use case for managing the digital edge for 250K mobile golf fans

A discussion on how the 2018 Ryder Cup golf match between European and US players places unique technical and campus requirements on its operators.

How new tools help any business build ethical and sustainable supply chains

The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.

We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks -- from use of illegal and unethical labor practices to hidden environmental malpractices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation are our panelists: Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft; and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?

Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.

First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good. 

Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society -- and not just generating dividends and capital growth for shareholders. 

Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent -- particularly Millennials and Generation Z -- and they are more likely to get better investment rates as well. 

Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?

Better business decisions with AI

Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.

We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings. 

These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.

Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba. 
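Here is a sketch of the aggregation idea Harris describes -- pulling scores from multiple data providers into a single supplier risk view. The provider labels are only labels; the 0-100 scales, the averaging, and the "weakest area" logic are illustrative assumptions, not SAP Ariba's actual algorithm.

```python
from statistics import mean
from typing import Dict

def aggregate_supplier_view(provider_scores: Dict[str, float]) -> Dict[str, object]:
    """Combine per-provider risk scores (0-100, higher = lower risk) into one view."""
    overall = mean(provider_scores.values())
    weakest = min(provider_scores, key=provider_scores.get)
    return {"overall_score": round(overall, 1), "weakest_area": weakest, "detail": provider_scores}

# Hypothetical supplier with scores from a CSR rating, an ESG index, and a financial rating.
supplier = aggregate_supplier_view({
    "csr_scorecard": 72.0,      # e.g., an EcoVadis-style 0-100 rating
    "esg_index": 64.0,          # e.g., a Verisk Maplecroft-style index, rescaled to 0-100
    "financial_rating": 81.0,
})
print(supplier)
```

The value is the single view: a buyer sees one number and one weakest area at decision time, and can drill into each provider's detail from there.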

Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.

McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand -- not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements. 

But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.

We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.

At Verisk Maplecroft, we came to business about 12 years ago, which was really interesting because it came out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand and to make it simple, easy, and intuitive. 

They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we're up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks -- like politics, economics, environmental, and social risks. But under each of those risk themes are specific issues that we look at. So, if we're talking about social risk, we're looking at diversity and labor, and then under each of those risk issues we go a step further, and it's the indicators -- it's all that data matrix coming together -- that tell the actionable story.

Some companies still just want to check a [compliance] box. Other companies want to dig deeper -- but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips. 

Global compliance

Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.

Rakowski: EcoVadis conducts business sustainability ratings, and the way we're used in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”

They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business. 

What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out a questionnaire to the supplier, what we call a right-sized questionnaire, the supplier responds to material questions based on what kind of goods or services they provide, what geography they are in, and what size of business they are in. 

Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score. 

And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices. 

Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own -- and SAP Ariba is now aggregating the aggregators.

How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from? 

Timely insights, best business decisions

Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the aspects of the procurement process that support what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.

The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there’s risk, there is opportunity.

So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.

Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?

Harris: If we look at some risk aspects, such as natural disasters, nothing demands a more timely reaction. When our data sources alert us to an earthquake, for example, we're able to very quickly ascertain who the suppliers are and where their distribution centers and factories are.

When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt. 
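A minimal sketch of that kind of alert matching: given an earthquake epicenter, find supplier facilities within a certain distance using the haversine great-circle formula. The facility names, coordinates, and radius are hypothetical; this is an illustration of the matching logic, not the vendors' products.

```python
from math import radians, sin, cos, asin, sqrt
from typing import Dict, List, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def affected_facilities(epicenter: Tuple[float, float],
                        facilities: Dict[str, Tuple[float, float]],
                        radius_km: float = 200) -> List[str]:
    """Return supplier facilities within radius_km of the alert's epicenter."""
    return [name for name, loc in facilities.items()
            if haversine_km(epicenter, loc) <= radius_km]

# Hypothetical facilities and a hypothetical earthquake alert.
facilities = {
    "Supplier A - factory": (34.05, -118.25),
    "Supplier B - distribution center": (37.77, -122.42),
}
print(affected_facilities(epicenter=(34.20, -118.50), facilities=facilities))
# ['Supplier A - factory']
```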

Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them? 

AI risk assessment reaps rewards

Harris: If you look at AI, this is a great technology, and what we're trying to do is really simplify that process for our customers to figure out how they can take action on the information we're providing. So rather than them having to be experts in risk analysis and doing all this analysis themselves, AI allows us to surface those risks through the technology -- through our procurement suite, for example -- to impact the decisions they're making.

For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time. 

A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas. 

Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?

Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.

We collect a great amount of documentation from the suppliers we're evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information our analysts are looking for and compress the evaluation time from about six or seven hours for each supplier down to three or four hours. So that's essentially allowing us to double our workforce of analysts in a heartbeat.

The other thing it's doing is helping scan through material news feeds. We're collecting more than 2,500 news sources, covering all kinds of reports from organizations such as China Labor Watch or OSHA. These technologies help us scan through those reports for material information and then put it in front of our analysts. That helps them surface the real-time news that we are sure at that point is material.

And that way we're combining AI with real human analysis and validation to make sure that what we're serving is accurate and relevant.

Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.

We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article. 

By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”

And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly. 
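To make the ML idea Harris describes more tangible -- software "reading" news text and flagging risk-relevant items -- here is a toy sketch. It is not SAP Ariba's model; it is a minimal supervised text-classification example that assumes scikit-learn is installed and uses made-up training snippets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets; a real system would train on far more data.
train_texts = [
    "Supplier announces layoffs after losing a major contract",
    "Regulator fines factory for safety violations",
    "Company opens new distribution center and hires staff",
    "Firm reports record quarterly revenue growth",
]
train_labels = ["risk", "risk", "no_risk", "no_risk"]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

article = "Plant shut down after workers strike over unpaid wages"
print(model.predict([article])[0])          # the classifier's label for the new snippet
print(model.predict_proba([article]))       # confidence per class, usable as a risk score
```

A production pipeline would add entity resolution (is this article really about the tracked supplier?) and map the classifier's output onto a risk rating, which is the behavior Harris describes above.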

Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem? 

The AI-aggregation advantage

McVeigh: As an aggregator of data, it's basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.

We look at many indices. We actually just revamped our social indices a couple of years ago.

Before that you had a human who was sitting there, maybe they were having a bad day and they just sort of checked the box. But now we have the capabilities to validate that data against true sources. 

Just as Emily mentioned, we were able to significantly reduce the number of human-rights analysts it took to create an index and allow them to go out and begin to work on additional types of projects for our customers. This helped our customers utilize the data that's being automated and generated for them.

We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it. 

Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?

Partnership prowess

Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.

We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. We have a number of common customers working with us as what we call design partners; there's ultimately no better design partner than a customer already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame.

And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly. 

What it means is that users can get that aggregated view, that high-level view, across all of these different risk types and metrics in one place. However, if they ultimately want to get to the nth degree of detail, they will have the ability to click through into the solutions from our partners as well, to drill right down to that level of detail. The aim here is to give them that high-level view to help them with their overall assessments of these suppliers.

Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?

Harris: Yes, and that's a great question. We already addressed that in our solutions today. We cover more than 200 risk types, and we have categorized those into four primary risk categories. The way the risk exposure score works is that the customer gets to decide how they want to weight each of the attributes that feed into that calculation.

If I have more of a bias toward financial risk aspects, or more of a bias toward ESG metrics, for example, then I can weight that part of the score, the algorithm, appropriately.
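A sketch of that customer-weighted scoring: category scores are combined using weights the buyer chooses. The category names, 0-100 scale, and weighted-average formula are illustrative assumptions, not the actual SAP Ariba algorithm.

```python
from typing import Dict

def risk_exposure(category_scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of category risk scores (0-100, higher = more risk)."""
    total_weight = sum(weights.values())
    return sum(category_scores[c] * weights[c] for c in category_scores) / total_weight

# Hypothetical category scores for one supplier.
scores = {"financial": 40.0, "esg": 70.0, "operational": 55.0, "regulatory": 30.0}

# One customer biases toward financial risk, another toward ESG -- same data, different exposure.
print(round(risk_exposure(scores, {"financial": 3, "esg": 1, "operational": 1, "regulatory": 1}), 1))  # ~45.8
print(round(risk_exposure(scores, {"financial": 1, "esg": 3, "operational": 1, "regulatory": 1}), 1))  # ~55.8
```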

Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well -- or not so well.

Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency -- how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?

Nice-to-have ROI

Rakowski: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.

How do you put a tangible dollar value on something that didn't occur? What we can look at is data acquired over the past few years and understand that, as we begin to see risk reduction over time, we begin to source more suppliers, add diversity to our supply chain, or even minimize our supply chain, depending on how you want to move forward in your risk landscape and your supply diversification program. It's giving them that power to really make those decisions faster and more actionable.

And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”

And, it’s how other companies are beginning to see that it’s not being pushed down on them anymore from these large retailers, these large organizations. It’s a choice they have to make to do better business. They are also realizing that there’s a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process. 

We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it -- but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.

Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen? 

Pay attention, be prepared

Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that. 

From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great. 

They can use their company’s resources to bet on supply-chain actors that are doing the right thing -- that are paying living wages, that are not overworking their employees, and that are not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are a must, regardless of regulations.

Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.

And so, it’s really those individuals who are willing to stand up, take a stand, and think about how they are going to put in place a program that will really drive this culture into the business and educate the business. Even if you’re starting from a very small group that’s dedicated to it, you can find a way to make it grow within the culture. I think that’s critical.

Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?

Synergistic risk management

Harris: Organizationally, there tend to be a couple of different teams inside a business that manage risk. On the one hand, there can be a governance, risk, and compliance team. On the other hand, there can be a corporate social responsibility team.

I think, first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across them. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, and ensure that the business is doing the right thing in the first place -- which is itself a safeguard against those risks.

I think getting the organizational model right, and also thinking about how they can best begin to map out their supply chains are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who are the players or supply-chain actors in that supply chain? It’s pretty easy to determine now who are the tier-one suppliers, but who are the suppliers to the suppliers -- and who are the suppliers to the suppliers to the suppliers?

We have yet to build technology that can figure that out easily. We’re working on it; stay posted. But I think trying to compile that information upfront is valuable, because once you get that mapping done, our software -- and our partner software from EcoVadis and Verisk Maplecroft -- can surface those kinds of risks inside and across that entire supply chain.
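
Conceptually, once the tier-one relationships are known, mapping the deeper tiers is a graph-traversal problem. A minimal sketch, assuming a relationship list is available; the data here is invented:

from collections import deque

# Hypothetical supplier relationships: buyer -> direct suppliers.
relationships = {
    "acme_corp": ["supplier_a", "supplier_b"],
    "supplier_a": ["sub_1", "sub_2"],
    "supplier_b": ["sub_2", "sub_3"],
    "sub_2": ["raw_materials_co"],
}

def map_supply_chain(buyer, max_tier=3):
    """Breadth-first walk from a buyer down to its nth-tier suppliers."""
    tiers, seen = {}, {buyer}
    queue = deque([(buyer, 0)])
    while queue:
        node, tier = queue.popleft()
        for nxt in relationships.get(node, []):
            if nxt not in seen and tier < max_tier:
                seen.add(nxt)
                tiers.setdefault(tier + 1, []).append(nxt)
                queue.append((nxt, tier + 1))
    return tiers

print(map_supply_chain("acme_corp"))
# {1: ['supplier_a', 'supplier_b'], 2: ['sub_1', 'sub_2', 'sub_3'], 3: ['raw_materials_co']}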

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

Panel explores new ways to solve the complexity of hybrid cloud monitoring

The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.

We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.

Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.

We will also hear about new solutions and explore examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, based in Grenoble, France; and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing -- and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.

Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.

They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.

Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.

The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure.

In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing -- but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs -- never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations decisions and adjustments must be handled in a much different way.

Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.

Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.

Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured -- and at compelling prices.

Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.

Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.

IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.

We are on the cusp of being able to take advantage of ML to tackle the complexity of multi-cloud deployments and keep business services safe.  

Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information. 

Ian, please tell us more about the survey findings.

IT leaders respond to their needs 

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.

Bromehead

More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.

The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two to five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].

What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments -- whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.

What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.

The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.

The business impacts range from resources being tied up collaborating with the cloud vendor to try to solve these issues, to the extra time required to resolve issues that impact service level agreements (SLAs) and contractual agreements, to prolonged downtime.

What we regularly see is that the adoption of cloud often translates into a loss of transparency into what’s deployed, the health of what’s deployed, and how that can impact the business. This insight strongly shaped our investment and some of the solutions we will talk about. The primary concern is visibility into what’s deployed -- and what depends on internal, on-premises systems as well as private and public cloud instances.

People need to see what is impacting the delivery of services as a provider, and whether that’s due to issues with local or remote resources, or the connectivity between them. It’s compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere -- it could be any one of those links. The ability for people to know where those issues are is not happening fast enough for many of them, with some 96 percent indicating that the issues are being resolved too slowly.

How to gain better visibility?

What are the key challenges that need to be addressed when monitoring hybrid IT environments? People have challenges with discovery, and with understanding and visualizing what has actually been deployed and how it impacts the end-to-end business.

They have limited access to the cloud infrastructure, and they face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what’s happening.

It shows some of the real challenges that people are facing. As the world shifts to being more dependent on the services people consume, traditional methods are not well adapted to the new environment. Newer solutions are needed -- new ways of gaining visibility, and of measuring availability and performance.

I think what’s interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to properly understand how service delivery might be impacting their own businesses. You might say that IT is actually flying blind in the clouds, as it were.

The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts. 

So, one of the next questions was: Across the different monitoring types, what’s needed for the hybrid IT environment? What should people be focusing on? Security and infrastructure monitoring, better visibility, end-user experience monitoring, service delivery monitoring, and cloud costs all ranked highly among the things people believe they need to monitor. Whether you are a provider or a consumer -- and most people end up being both -- monitoring is really key.

People say they really need to span infrastructure monitoring and metrics monitoring, and to gain end-user, security, and compliance monitoring. But even that’s not enough, because to properly govern service delivery you have to keep an eye on costs -- the cost of what’s being deployed -- and on how you can optimize resources against those costs. You need that analysis whether you are the consumer or the provider.

The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data to find root causes -- even from a predictive perspective. Clearly, people here expect scalability, and they expect to be able to use a big data platform.

Consumers of cloud services should be measuring what they are receiving and be capable of seeing what’s impacting the service delivery. No one is really so naive as to say that infrastructure is somebody else’s problem. When it’s part of the service -- equally impacting the service that you are paying for and that you are delivering to your business users -- then you had better have the means to see where the weak links are. That should be the minimum to seek, but you still need the means to prove to your providers that they’re underperforming and to renegotiate what you pay for.

Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. We should be able to govern the aspects of detecting when the service is degrading. 

So when a service degrades, workers’ productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.

So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics 

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.

Gary Brandt: Thanks, Dana. As the survey results outlined, and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can’t look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner, given the volume and the complexity of these environments.

Brandt

So how do we deal with all of this? It’s where analytics -- advanced analytics via ML -- really brings value. What’s needed is a set of automated capabilities such as those described in Gartner’s definition of AIOps. These include traditional and streaming data management, and the ingestion of log, wire, metric, and document data from many different types of sources in these complex hybrid environments.

Dealing with all of this -- when you are not quite sure where to look and you have all this information coming in -- requires advanced analytics and some clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is where Gartner is trying to guide the market and show where the industry is moving. The key capabilities they describe are analytics that allow for prediction, the ability to find anomalies in vast amounts of data, and then pinpointing the root cause -- or at least eliminating the noise so you can focus on the right areas.
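
As a toy illustration of the anomaly-detection piece (not Micro Focus code; the data and threshold are invented), flagging metric samples that deviate sharply from recent behavior might look like this:

from statistics import mean, stdev

def anomalies(samples, window=20, z_threshold=3.0):
    """Flag samples that sit more than z_threshold standard deviations
    from the mean of the preceding window."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) / sigma > z_threshold:
            flagged.append((i, samples[i]))
    return flagged

# Invented response-time series (ms) with a spike at the end.
series = [102, 99, 101, 98, 100, 103, 97, 101, 100, 99,
          102, 100, 98, 101, 99, 100, 103, 98, 100, 101, 450]
print(anomalies(series))  # [(20, 450)]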

We are making this Gartner report available for a limited time. What we have also found is that people often don’t have the time or the skill set to deal with these activities. They need to focus on the business user and on the different issues that come up in these hybrid environments, and the AIOps capabilities that Gartner speaks about are great for that.

But without the automation to drive the activities or the response that needs to occur, there is a missing piece. When we look at our survey results and what respondents said, it is clear that well over 90 percent are telling us that automation is considered highly critical. You need to see which event or metric trend impacts a business service, whether that service runs on a local, on-premises solution or on a remote solution in a cloud somewhere.

Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated -- and, more importantly, kept up to date. In these complex environments, things are changing rapidly and quickly.

Sense and significance of all that data? 

Micro Focus’ approach uses analytics to make sense of the vast amount of data coming in from these hybrid environments and to drive automation. Automated discovery, monitoring, and service analytics are really critical -- and they must be applied across hybrid IT, against your resources, and mapped to the services you define.

Those are the vast amounts of data that we just described. They come in the form of logs and events and metrics, generated from lots of different sources in a hybrid environment across cloud and on-prem. You have to begin to use analytics as Gartner describes to make sense of that, and we do that in a variety of ways, where we use ML to learn behavior, basically of your environment, in this hybrid world.

And we need to be able to suggest what the most significant data is -- what the significant information in your messages is -- to help find the needle in the haystack. When you are trying to solve problems, our analytics provide predictive learning to operators, giving them the chance to anticipate and remediate issues before they disrupt the services in a company’s environment.
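
One simple way to surface the most significant messages out of a flood of log lines is to rank them by how rare their template is -- a rough sketch of the idea, not the product's actual approach:

import re
from collections import Counter

def template(line):
    """Collapse numbers and hex-like IDs so similar messages share a template."""
    return re.sub(r"\b[0-9a-fA-F:\-\.]{4,}\b|\d+", "<*>", line)

def rank_by_rarity(lines, top=3):
    counts = Counter(template(l) for l in lines)
    total = len(lines)
    # Rare templates score higher; frequent, chatty ones sink to the bottom.
    scored = [(1 - counts[template(l)] / total, l) for l in set(lines)]
    return sorted(scored, reverse=True)[:top]

logs = [
    "GET /health 200 in 3ms", "GET /health 200 in 4ms", "GET /health 200 in 2ms",
    "GET /health 200 in 5ms", "disk /dev/sda1 92% full", "GET /health 200 in 3ms",
]
for score, line in rank_by_rarity(logs):
    print(round(score, 2), line)   # the disk warning outranks the health-check chatter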

When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt. 

And then we take this further because we have the analytics capability that’s described by Gartner and others. We couple that with the ability to execute different types of automation as a means to let the operator, the operations team, have more time to spend on what’s really impacting the business and getting to the issues quicker than trying to spend time searching and sorting through that vast amount of data.

And we built this on different platforms. One of the key things that’s critical when you have this hybrid environment is to have a common way, or an efficient way, to collect information and to store information, and then use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.

We like to refer to this as autonomous operations, and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail  

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.

We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.

Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.

Burose

All core use cases for Operations Bridge tie it to the underpinning of the Vertica big data analytics platform. We’re consolidating all the different types of data that we are getting; whether business transactions, IT infrastructure, application infrastructure, or business services data -- all of that is actually moved into a single data repository and then reduced in order to basically understand what the original root cause is.

And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.

This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.

As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We are providing the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnecting -- independent of the technology -- whether deployed in the public cloud, a private cloud, or even in a classical, non-cloud infrastructure. They can then understand how they are connecting, and they can use the toolset to navigate through it all, a modern HTML5-based interface, to look at all the data in one place.

They are able to consolidate more than 250 different technologies and information into a single place: their log files, the events, metrics, topology -- everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.
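
Conceptually, that consolidation is a normalization step: each source's events are mapped into one common shape before they are correlated. A minimal sketch in Python, with invented source payloads (these are not the actual Operations Bridge or cloud-provider schemas):

# Hypothetical adapters: each maps a source-specific payload into one common event shape.
def from_cloud_metrics(raw):
    return {"ts": raw["timestamp"], "node": raw["instance_id"],
            "severity": "warning" if raw["value"] > raw["threshold"] else "info",
            "message": raw["metric_name"], "source": "public_cloud"}

def from_syslog(raw):
    return {"ts": raw["time"], "node": raw["host"], "severity": raw["level"],
            "message": raw["msg"], "source": "on_prem"}

ADAPTERS = {"cloud_metrics": from_cloud_metrics, "syslog": from_syslog}

def consolidate(batches):
    """Flatten events from many sources into one normalized, time-ordered stream."""
    events = [ADAPTERS[src](item) for src, items in batches.items() for item in items]
    return sorted(events, key=lambda e: e["ts"])

# Example: one batch per source, however it was collected.
stream = consolidate({"syslog": [{"time": 1, "host": "db01", "level": "error", "msg": "disk full"}],
                      "cloud_metrics": [{"timestamp": 2, "instance_id": "i-123", "value": 91,
                                         "threshold": 80, "metric_name": "cpu_utilization"}]})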

Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether private cloud, your own stack implementations, a hyperconverged (HCI) stack, like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.

We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.

Now, once we actually integrate those toolsets, we tightly integrate them for native tools on Amazon or for Docker tools, for example. You can include these tools, so you can then automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community, by the customers. And the feedback has helped earn us a Readers’ Choice Award by the Cloud Computing Insider in 2017, by being ahead of the competition.

This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on the activity.

Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks -- or even kick off provisioning tasks -- to solve whatever problems you can imagine. You can use this not only to identify the root cause, but also to automatically kick off a workflow that addresses the specific problem.

If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.
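
In spirit, the "identify the root cause, then kick off a workflow" step is a lookup from a diagnosed cause to a runbook, with a manual fallback. A rough sketch in Python -- the cause names and workflow names are invented, not the OpsBridge workflow catalog:

# Hypothetical mapping of diagnosed root causes to automated remediation workflows.
WORKFLOWS = {
    "disk_full": "cleanup_and_expand_volume",
    "service_down": "restart_service",
    "cert_expired": "renew_certificate",
}

def remediate(root_cause, run_workflow, notify_operator):
    """Kick off an automated workflow when one exists, otherwise hand off to a person."""
    workflow = WORKFLOWS.get(root_cause)
    if workflow:
        return run_workflow(workflow)
    return notify_operator(f"No automated workflow for '{root_cause}', manual action needed")

# Example wiring with stand-in callables:
print(remediate("disk_full", run_workflow=lambda w: f"started {w}", notify_operator=print))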

Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.

Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data -- not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?

And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone -- wherever you’d like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, and build graphical, very appealing dashboards for your different stakeholders.

Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.

David Herrera: Banco Sabadell is the fourth largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn’t have any visibility into what was happening in the cloud.

Herrera

We are working with private and public clouds, and it’s quite difficult to correlate the information in events and incidents. We need to aggregate this information in just one dashboard. And for that, OpsBridge is a perfect solution for us.

We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.

The main benefit is that we have a detailed view about what is happening in the cloud. In the dashboard we are able to show availability and the number of resources that we are using -- almost in real time. We are also able to show the real-time cost of every resource, and we can even project the cost of those items.

The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.

[And that’s for] every single item that we have in the cloud now, across both the private and public clouds. The bank has invested a lot of money in this solution, and we need to show them that migrating several systems to the cloud was really a good choice in economic terms; this tool will help us with that.
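
The cost projection Herrera mentions can be as simple as extrapolating month-to-date spend per resource to month end. A minimal sketch with invented figures:

import calendar
from datetime import date

def project_month_end(spend_to_date, today=None):
    """Linear projection of month-end cost from spend so far this month."""
    today = today or date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

# Invented per-resource month-to-date spend (EUR), as of the 10th of a 30-day month.
resources = {"vm-app-01": 120.0, "sql-prod": 310.0, "object-storage": 45.0}
as_of = date(2018, 4, 10)
for name, spent in resources.items():
    print(name, round(project_month_end(spent, as_of), 2))  # 360.0, 930.0, 135.0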

Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing because they will be able to see all the information, and also select information that we haven’t gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.

We were able to build a new monitoring system from scratch that doesn’t exist on the market. Now, we are able to aggregate a lot of detailed information from different clouds.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Micro Focus.

You may also be interested in:

How HudsonAlpha transforms hybrid cloud complexity into an IT force multiplier

The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments.

We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?

Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.

Mullican

We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.

Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?

Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT is a great way to develop new approaches and attain breakthroughs.

Gardner: You want to give people enough leash where they can go and roam and experiment, but perhaps not so much that you don’t know where they are, what they are doing.

Software-defined everything 

Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.

Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize -- yet allow for experimentation, too?

Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.


Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.

Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.

Gardner: What about the IT requirements? How do you fulfill that mandate using technology?

Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers. 

Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?

Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.

Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?

Agnostic arrangements of clouds

Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.

There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind -- so you don’t lock into any one solution necessarily -- then you can optimize and put everything together.
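
A minimal sketch of the "code against an interface, not a provider" idea Mullican describes; the class and method names here are hypothetical, not any particular vendor's SDK:

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The code base talks to this interface; providers are swappable behind it."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(ObjectStore):
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

# A CloudStore implementing the same interface could wrap any provider's API;
# the pipeline code below never needs to know which one is in use.
def archive_results(store: ObjectStore, run_id: str, payload: bytes):
    store.put(f"runs/{run_id}/results", payload)

archive_results(LocalStore(), "genome-42", b"...")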

Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.

Cloud cost control, granted

Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud -- from storage to compute -- and that cost is covered by the grant.

So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.


Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?

Variety is the spice of IT

Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.

The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access -- through a common API -- that infrastructure. So it’s very exciting.

Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?

Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully -- and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.

Gardner: What might this capability bring to you and your organization?

Cross-training the clouds

Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds -- AWSGoogleMicrosoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.

For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.

But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all the specifics of provisioning in each of those environments. That is not a level of expertise in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline -- and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.
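
In spirit, that single-pane provisioning flow looks like the sketch below -- one request shape dispatched to many targets. The function and parameter names are hypothetical placeholders, not the actual HPE OneSphere API:

# Hypothetical wrapper illustrating one request shape for many deployment targets.
# None of these names are the actual HPE OneSphere API.
def provision(environment, request):
    """Dispatch one normalized provisioning request to the chosen environment."""
    backends = {
        "on_prem_vmware": deploy_to_vmware,
        "aws": deploy_to_aws,
    }
    return backends[environment](request)

def deploy_to_vmware(request):   # stand-in for a datacenter-specific driver
    return f"VM '{request['name']}' scheduled on local cluster"

def deploy_to_aws(request):      # stand-in for a public-cloud-specific driver
    return f"Instance '{request['name']}' scheduled in AWS"

# A researcher-facing catalog entry would only fill in the request, not the plumbing.
print(provision("aws", {"name": "variant-calling-pipeline", "cpus": 16, "memory_gb": 64}))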

Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.


Mullican: Right, and that will be an ongoing effort to always optimize cost -- but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.

Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that we have the flexibility to choose an algorithmic approach to business, if you will.

Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance to be able to conduct that research at the fastest speed.

Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.

The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct the research at the fastest speed.

As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE SynergyHPE SimpliVity HCI, and HPE 3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a "whole greater than the sum of the parts" effect?

Mullican: HPE Synergy and composable infrastructure is something that is very near and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer and customizing open-source applications on Image Streamer -– open-source operating systems and applications.

The ability to utilize that in the mix that I have architected natively with OneSphere -- in addition to the public clouds -- is very powerful, and I am excited to see where that goes.

Gardner: Any words of wisdom to others who may be have not yet gone down this road? What do you advise others to consider as they are seeking to better compose, automate, and optimize their infrastructure? 

Get adept at DevOps

Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.

As far as putting an emphasis on automation -- and being able to provision infrastructure in the datacenter and the cloud through automated APIs -- a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. Once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.

It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources -- that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.

That’s the partnership that companies really need to foster -- and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.

On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.

Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want, to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure, they are just choosing the proper workflow and pipeline for future projects.

We want to provide a simple interface to the researchers. They don't have to think about all the work that went into the infrastructure.

Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?

HCI advances across the globe

Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can be still quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinguished department, but in some companies that’s not the case.

And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.

Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.

Is there something about having that visibility into hybrid IT that extends into hybrid data environments?

Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.

So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for example, in the cloud.


Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.

Hybrid hard work pays off

Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.

HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by  Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions -- literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?


Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is that you don't solve only one problem -- you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time, Storage Savings 

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.

Steyn

At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.
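
To see why deduplication matters at that scale, here is a quick back-of-the-envelope calculation (all figures invented for illustration):

# Invented figures: 10 near-identical copies of a 4 TB production database.
copies, db_size_tb = 10, 4.0
raw_footprint = copies * db_size_tb              # 40 TB without deduplication

dedupe_ratio = 8.0                               # assume 8:1 across the copies
effective_footprint = raw_footprint / dedupe_ratio

print(f"Raw: {raw_footprint} TB, deduplicated: {effective_footprint} TB")
# Raw: 40.0 TB, deduplicated: 5.0 TB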

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers, because the developers needed resources and we just couldn’t provide them. And it was not because we didn’t have resources at hand; it was just the time needed to spin them up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

As your vehicle depreciates, so does your monthly insurance premium. That has been our biggest selling point.  

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.


Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing? 

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

The gains we saw were just so big ... Now we have the entire environment running on SimpliVity cubes.  

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep Storage Simple

Steyn: We are currently using the HPE 3PAR storage, and it’s working quite well. We have some production environments running there; a lot of archiving uses for that. It’s still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that was going to be it. We very soon adopted HCI into the production environments. And it was at that point that we dedicated an entire cube to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance -- it’s scary.


Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Containers, microservices, and HCI help governments in Norway provide safer public data sharing

The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.

We’ll now learn how Norway’s 18 counties are gaining a common shared pool for data on young people’s health and other sensitive information thanks to streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for the FINT Project in Norway. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving interest in having a common platform for public information in your country?

Sjovatsen: We need interactions between the government and the community to be more efficient. So we needed to build the infrastructure that supports automatic solutions for citizens. That’s the main driver.

Gardner: What problems do you need to overcome in order to create a more common approach?

Common API at the core

Sjovatsen: One of the biggest issues is that [our users] buy business applications -- such as human resources systems for school administrators -- and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes -- that’s where the problem is. And that’s what the FINT project is all about.

Sjovatsen

[Due to application heterogeneity] we then need to have developers create application programming interfaces (APIs), which costs a lot of money and yields variable quality. What we're doing now is creating a common API that's horizontal -- spanning all of those business applications. It gives us the ability to use our data much more efficiently.
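
A rough illustration of the idea may help here. The sketch below (Python with Flask) is hypothetical -- it is not FINT's actual information model or API -- but it shows the shape of a horizontal layer: each vendor application gets a small adapter, and one common endpoint serves the normalized records no matter which system holds them.

```python
# Minimal sketch of a horizontal, vendor-neutral API (illustrative only).
from abc import ABC, abstractmethod
from flask import Flask, jsonify

app = Flask(__name__)

class SourceAdapter(ABC):
    """One adapter per business application, mapping native records
    into a shared, vendor-neutral schema."""
    @abstractmethod
    def students(self) -> list[dict]:
        ...

class SchoolAdminAdapter(SourceAdapter):
    """Stands in for a vendor system; a real adapter would call the
    vendor's own API or database."""
    def students(self) -> list[dict]:
        return [{"id": "s-001", "name": "Kari Nordmann", "school": "Example VGS"}]

ADAPTERS: list[SourceAdapter] = [SchoolAdminAdapter()]

@app.route("/common/students")
def list_students():
    # One horizontal endpoint, regardless of which application owns the data.
    merged = [record for adapter in ADAPTERS for record in adapter.students()]
    return jsonify(merged)

if __name__ == "__main__":
    app.run(port=8080)
```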

Gardner: Please describe for us what the FINT project is and why this is so important for public health.

Sjovatsen: It's all about taking back control of the information we've handed to the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get hold of all the information, describe it, define it, and then make it available via APIs -- both for public use and also for internal use.

Gardner: What sort of information are we dealing with here? Why is it important for the general public health? 

Sjovatsen: It's all kinds of information. For example, it's school information, such as how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that businesses that want to create new and better solutions for us can build new innovations on it.

Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?

Agility, scalability, and speed

Sjovatsen: We needed something agile, so that we can roll out updates continuously. We also needed a way to roll back quickly if something fails.

The reason we are running this in one of the county councils' data centers is that we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.

Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach? 

Sjovatsen: We are in the late stages of testing, and we're going into production in early 2018. At the moment, we're looking into using HPE SimpliVity.

Container comfort

Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?

Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application, and we are able to test the code before it goes into production. That's one of the main drivers.

The second reason is that it's easy to roll out and easy to roll back. We also have developers coming in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It's not much work if they need to set up a working environment on another computer.
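
The transcript does not say which container tooling FINT uses for this, so the following is only a generic sketch of the roll-out and roll-back pattern: deploying is running a container from a known image tag, and rolling back is re-running the previous tag. The image name, service name, and port are invented.

```python
# Generic roll-out / roll-back sketch via the Docker CLI (names are hypothetical).
import subprocess

IMAGE = "registry.example.org/fint-api"   # hypothetical image
SERVICE = "fint-api"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(tag: str) -> None:
    """Replace whatever is running with the requested image tag."""
    subprocess.run(["docker", "rm", "-f", SERVICE], check=False)  # ignore if not running
    run(["docker", "run", "-d", "--name", SERVICE, "-p", "8080:8080", f"{IMAGE}:{tag}"])

def rollback(previous_tag: str) -> None:
    """Rolling back is just deploying the last known-good tag again."""
    deploy(previous_tag)

if __name__ == "__main__":
    deploy("1.4.0")
    # If health checks fail, recovery is one call:
    # rollback("1.3.2")
```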

Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?

Sjovatsen: The container approach is very close to the DevOps way of working, so the time from code to production is very short compared to what we did before, when we had operations guys installing the software on servers. Now we have a very rapid way to go from code to production.

Gardner: With the success of the FINT Project, would you consider extending this to other types of data and applications in other public sector activities or processes? If your success here continues, is this a model that you think has extensibility into other public sector applications?

Unlocking the potential

Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that's the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway, and we will add value to those using our approach in the next year or so.

Gardner: That could have some very beneficial impacts, very good payoffs.

Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have made an API that extends across the locks on many doors, so we now have one API to open multiple locking systems. That's another way to use this approach.

Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?

Sjovatsen: I definitely recommend it -- it's simple and agile. The main thing with containers is to separate the storage from the applications. That's probably what we worked on the most to make it scalable: we wrote the application so that it scales, and we separated the data from the presentation layer.
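
To make that last point concrete, here is a minimal, hypothetical sketch of the separation being described: the presentation code depends only on a storage interface, so any number of container replicas can stay stateless while the data lives behind a swappable backend. All names are illustrative.

```python
# Sketch of separating the data layer from the presentation layer (illustrative).
from abc import ABC, abstractmethod

class ScheduleStore(ABC):
    """Data layer: implementations (in-memory, database, object store) can be
    swapped without touching presentation code."""
    @abstractmethod
    def schedules_for(self, student_id: str) -> list[dict]:
        ...

class InMemoryScheduleStore(ScheduleStore):
    def __init__(self) -> None:
        self._data = {"s-001": [{"course": "Math", "room": "B12"}]}

    def schedules_for(self, student_id: str) -> list[dict]:
        return self._data.get(student_id, [])

def render_schedule(store: ScheduleStore, student_id: str) -> str:
    """Presentation layer: stateless, so it scales out as container replicas."""
    rows = store.schedules_for(student_id)
    return "\n".join(f"{r['course']} in {r['room']}" for r in rows) or "No schedule"

if __name__ == "__main__":
    print(render_schedule(InMemoryScheduleStore(), "s-001"))
```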

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Retailers get a makeover thanks to data-driven insights, edge computing, and revamped user experiences

The Connected Consumer for Retail offering takes the cross-channel experience and enhances it for the brick-and-mortar environment. 

Inside story on HPC's role in the Bridges Research Project at Pittsburgh Supercomputing Center

The next BriefingsDirect Voice of the Customer high-performance computing (HPC) success story interview examines how Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies.

We'll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

How UBC gained TCO advantage via flash for its EduCloud cloud storage service

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost benefits by a move to all-flash storage.

We’ll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list? 

Dunington: A university setting isn't much different from any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids; they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer.

We have to be able to offer those services, because as most people know, there are choices -- and they can go somewhere else and choose those other products.

Ours is a rather small team -- 15 members -- so we have to be agile, we have to be able to automate things, and we need tools that work and fulfill those needs. So it's just like any other business, even though it's a university setting.

Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, and we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance.

Because we are not just delivering to UBC, we have to be up and running and always able to deliver, and each school has different requirements. At different times of the year -- registration, exam times -- these things have to be up. You can't be down during an exam and leave 600 students unable to take the tests they have been studying for. It impacts their lives, and we want to make sure that we are there and can provide the services they need.

Gardner: In order to maintain your service levels within those peak times, do you in your IaaS and storage services employ hybrid-cloud capabilities so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It's a data-sovereignty law; the data can't leave the country's borders.

That's why EduCloud has been so successful, in my opinion -- because of that option. Customers can just go and put their workloads in the private cloud.

The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure cloud are putting data centers in Canada, which is good and it gives people an option. Our team’s goal is to provide the services, whether it's a hybrid model or all on-campus. We just want to be able to fulfill those needs.
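
As a small illustration of how a data-residency rule like that can be enforced in code, here is a hedged sketch: a placement guard that only allows regions mapped to Canada. The region names and the mapping are invented for the example.

```python
# Illustrative data-residency guard (region names and mapping are invented).
ALLOWED_COUNTRY = "CA"

REGION_COUNTRY = {
    "educloud-interior-bc": "CA",   # hypothetical in-province private cloud site
    "aws-ca-central-1": "CA",
    "azure-canada-central": "CA",
    "aws-us-east-1": "US",
}

def placement_allowed(region: str) -> bool:
    """Reject any placement whose region is outside the allowed country."""
    return REGION_COUNTRY.get(region) == ALLOWED_COUNTRY

if __name__ == "__main__":
    for region in REGION_COUNTRY:
        print(f"{region}: {'allowed' if placement_allowed(region) else 'blocked'}")
```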

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model -- but with the security, control and manageability of the private clouds.

What decisions have you made about your storage underpinnings, the infrastructure that supports your SaaS cloud?

Dunington: We have a large storage footprint. For our site, it’s about 12 petabytes of storage. We realized that we weren’t meeting the needs with spinning disks. One of the problems was that we had runaway virtual workloads that would cause problems, and they would impact other services. We needed some mechanism to fix that.

We went through the whole request for proposal (RFP) process, and all the IT infrastructure vendors responded, but we did have some guidelines that we wanted to follow. One of the things we did was present our problems and make sure that the vendors understood what the problems were and what we were trying to solve.

And there were some minimum requirements. We do have a backup vendor of choice that they needed to integrate with. And quality of service is a big thing. We wanted to make sure that we had the ability to attain quality-of-service levels and control those runaway virtual machines in our footprint.
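
As an illustration of that quality-of-service idea -- a generic sketch, not the HPE 3PAR QoS interface -- per-tier IOPS and bandwidth ceilings turn "runaway" into a measurable condition. The tier names echo the bronze/silver/gold tiers mentioned later; the limits themselves are invented.

```python
# Generic per-tier QoS sketch (limits are invented, not 3PAR settings).
from dataclasses import dataclass

@dataclass(frozen=True)
class QosPolicy:
    name: str
    max_iops: int
    max_mb_per_s: int

TIERS = {
    "bronze": QosPolicy("bronze", max_iops=2_000, max_mb_per_s=100),
    "silver": QosPolicy("silver", max_iops=8_000, max_mb_per_s=400),
    "gold":   QosPolicy("gold",   max_iops=30_000, max_mb_per_s=1_500),
}

def is_runaway(tier: str, observed_iops: int, observed_mb_per_s: int) -> bool:
    """A runaway workload is one whose demand exceeds its tier's ceiling."""
    policy = TIERS[tier]
    return observed_iops > policy.max_iops or observed_mb_per_s > policy.max_mb_per_s

if __name__ == "__main__":
    # A bronze VM pushing 12,000 IOPS would be throttled rather than allowed
    # to impact neighboring workloads.
    print(is_runaway("bronze", observed_iops=12_000, observed_mb_per_s=80))  # True
```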

Gardner: You gained more than just flash benefits when you got to flash storage, right?

Streamlined, safe, flash storage

Dunington: Yes, for sure. With an entire data center full of spinning disks, it gets to the point where the disks start to manage you; you are no longer managing the disks. And with teams out there changing drives and moving volumes around, it becomes unwieldy. I mean, the power, the footprint -- all of that starts to grow.

Also, Vancouver is in a seismic zone; we are right up against the Pacific plate, and it's a very active seismic area. Heaven forbid anything happens, but one of the requirements we had was to move the data center into the interior of the province. So that's what we did.

When we brought this new data center online, one of the decisions the team made was to move to an all-flash storage environment. We wanted to be sure that it made financial sense because it's publicly funded, and also improved the user experience, across the province.

Gardner: As you were going about your decision-making process, you had choices, what made you choose what you did? What were the deciding factors?

Dunington: There were a lot of deciding factors. There’s the technology, of being able to meet the performance and to manage the performance. One of the things was to lock down runaway virtual machines and to put performance tiers on others.

But it’s not just the technology; it's also the business part, too. The financial part had to make sense. When you are buying any storage platform, you are also buying the support team and the sales team that come with it.

Our team believes that technology is a certain piece of the pie, and the rest of it is relationship. If that relationship part doesn't work, it doesn’t matter how well the technology part works -- the whole thing is going to break down.

Because software is software, hardware is hardware -- it breaks, it has problems, there are limitations. And when you have to call someone, you have to depend on him or her. Even though you bought the best technology and got the best price -- if it doesn't work, it doesn’t work, and you need someone to call.

So those service and support issues were all wrapped up into the decision.

We chose the Hewlett Packard Enterprise (HPE) 3PAR all-flash storage platform. We have been very happy with it. We knew the HPE team well. They came and worked with us on the server blade infrastructure, so we knew the team. The team knew how to support all of it. 

We also use the HPE OneView product for provisioning, and it integrates into all of that. It also supports the performance optimization tool (IT Operations Management for HPE OneView) that lets us set those values, because one of the things in EduCloud is that customers choose their own storage tier, and we mark the price on it. So basically all we would do is present that new tier as a new datastore within VMware, and then they would just move their workloads across non-disruptively. It has worked really well.

The 3PAR storage piece also integrates with VMware vRealize Operations Manager. We offer that to all our clients as a portal so they can see how everything is working and do their own diagnostics. That's the one goal we have with EduCloud: it has to be self-service. We let the customers do it themselves -- that's what they want.
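
A simplified, hypothetical sketch of that self-service tier model: each tier carries a base cost and a markup, and the customer's choice maps to a datastore label surfaced through the virtualization layer. The prices, markups, and datastore names below are invented.

```python
# Illustrative self-service tier catalog (all figures and names invented).
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageTier:
    name: str
    base_cost_per_gb_month: float   # what it costs the service to provide
    markup: float                   # margin applied to the base cost
    datastore: str                  # label the customer sees in the portal

CATALOG = [
    StorageTier("bronze", 0.02, 1.10, "educloud-bronze-ds"),
    StorageTier("silver", 0.05, 1.15, "educloud-silver-ds"),
    StorageTier("gold",   0.10, 1.20, "educloud-gold-ds"),
]

def monthly_price(tier: StorageTier, gb: int) -> float:
    return round(tier.base_cost_per_gb_month * tier.markup * gb, 2)

if __name__ == "__main__":
    for tier in CATALOG:
        print(f"{tier.name}: 1 TB costs ${monthly_price(tier, 1024)}/month "
              f"via datastore {tier.datastore}")
```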

Gardner: Not that long ago people had the idea that flash was always more expensive and that they would use it for just certain use-cases rather than pervasively. You have been talking in terms of a total cost of ownership reduction. So how does that work? How does the economics of this over a period of time, taking everything into consideration, benefit you all?

Economic sense at scale

Dunington: Our IT team and our management team are really good with that part. They were able to break it all down, and they found that this model would work at scale. I don’t know the numbers per se, but it made economic sense.

Spinning disks will still have a place in the data center. I don't know if an all-flash data center will make sense a year from now, because there are some records that people will put in and never touch. But right now, with the numbers as we worked them out, it makes sense, because we are using the standard bronze, silver, and gold tiers.

The 3PAR solution also has dedupe functionality and the compression that they just released. We are hoping to see how well that trends. Compression has only been around for a short period of time, so I can’t really say, but the dedupe has done really well for us.
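
As a back-of-the-envelope illustration of how dedupe and compression ratios translate into effective capacity -- the raw capacity and ratios below are placeholders, not UBC's figures:

```python
# Effective capacity from data reduction (placeholder numbers, not UBC's).
def effective_capacity_tb(raw_tb: float, dedupe_ratio: float, compression_ratio: float) -> float:
    """Effective capacity = raw capacity x combined reduction ratio."""
    return raw_tb * dedupe_ratio * compression_ratio

if __name__ == "__main__":
    # e.g. 500 TB of raw flash with 2:1 dedupe and 1.5:1 compression
    print(f"{effective_capacity_tb(500, 2.0, 1.5):.0f} TB effective")  # 1500 TB
```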

Gardner: The technology overcomes some of the other baseline economic costs and issues, for sure.

We have talked about the technology and performance requirements. Have you been able to qualify how, from a user experience, this has been a benefit?

Dunington: The best benchmark is the adoption rate. People are using it, and there are no help desk tickets, so no one is complaining. People are using it, and we can see that everything is ramping up, and we are not getting tickets. No one is complaining about the price, the availability. Our operational team isn't complaining about it being harder to manage or that the backups aren’t working. That makes me happy.

The big picture

Gardner: Brent, maybe a word of advice to other organizations that are thinking about a similar move to private cloud SaaS. Now that you have done this, what might you advise them to do as they prepare for or evaluate a similar activity?

Dunington: Look at the full picture, look at the total cost of ownership. There's buying the hardware, and there's also supporting it. Make sure that you understand your requirements and what your customers are looking for before you go out and buy. Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there. We will see in a couple of years how it went.

Look at the big picture, step back. It’s just not the new shiny toy, and you might have to take a stepped approach into buying, but for us it worked. I mean, it’s a solid platform, our team sleeps well at night, and I think our customers are really happy with it.

Gardner: This might be a little bit of a pun in the education field, but do your homework and you will benefit.

Dunington: Yes, for sure.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·      How IoT capabilities open new doors for Miami telecoms platform provider Identidad

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

 

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.

It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We sometimes hear that, at levels as high as the board of directors, they are telling people to go cloud-first, or to just dump IT altogether. That strikes me as an overreaction. If we're looking at tools and at what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Peters: I often talk about what I call the assumption gap. The assumption gap is just that moment where we move from one side, where it's okay to have lots of questions about something -- in this case, in IT -- to the other side of that gap, or chasm, to use a well-worn phrase, where it's not okay to ask anything because it will look like you don't know what you're talking about. And that assumption gap seems to happen imperceptibly, and very fast, at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo's IM client, to be precise -- they used to do maintenance between 9 am and 5 pm, because it was just a tool for chatting with your friends at night. And now you have businesses that rely on it. So, clearly, the ability to instant message and text each other is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, x application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you'll want the feeling of security -- I'm picking my words very carefully -- of having it … onsite.

But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 
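
To make that "situational and dynamic" point concrete, here is a toy sketch: the same placement rubric gives a different answer for the same workload once a company re-weights cost pressure against compliance. The weights, scores, and threshold logic are all invented.

```python
# Toy workload-placement rubric (weights and scores are invented).
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitivity: int   # 1 (tolerant) .. 5 (real-time)
    data_sensitivity: int      # 1 (public) .. 5 (regulated)
    cost_pressure: int         # 1 (budget-rich) .. 5 (cost-driven)

def placement(w: Workload, weights: dict[str, float]) -> str:
    onprem_score = (weights["latency"] * w.latency_sensitivity
                    + weights["compliance"] * w.data_sensitivity)
    cloud_score = weights["cost"] * w.cost_pressure
    return "on-premises" if onprem_score >= cloud_score else "public cloud"

if __name__ == "__main__":
    email = Workload("email", latency_sensitivity=2, data_sensitivity=2, cost_pressure=4)
    # Company A is cost-driven; Company B is compliance-driven. Same workload, different answer.
    print(placement(email, {"latency": 1.0, "compliance": 1.0, "cost": 2.0}))  # public cloud
    print(placement(email, {"latency": 1.0, "compliance": 3.0, "cost": 1.0}))  # on-premises
```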

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?

Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.

Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects. Like they are the answer to everything. What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then you figure out how to get it. You should not be sitting down IT managers and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I've driven home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you even know what you've got -- because it's become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of different ways of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how do you gain control? Control -- not for control’s sake -- only by knowing all the things you were trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.

Visibility is not the same as solving. The point is not just finding out what you have -- but actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, the extremely high expectations for speed, flexibility, and performance, and so on, mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I've been in and around the storage business now for more than three decades. The joke has always been: I remember when a megabyte was a lot -- let alone a gigabyte, a terabyte, or an exabyte.

And I'd go to a whole-day class, when I was on the sales side of the business, just to learn something like dual parity or caching. It was so exciting 30 years ago. And yet, these days, it's a bit like cars. I mean, you and I used to use a choke, or we'd have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it had better work in any temperature and at any speed. Now, we just demand so much from cars.

To stretch that analogy, I’m mixing cars and storage -- and we’ll make it all come together with hybrid IT in that it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economic and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That's just become old hat now, because you don't talk about it in those terms. Storage has moved on to: How do we serve applications? How do we serve up the right data in the right place at the right time, and get it to the right person at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.

We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.

Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It's all very well talking about the future; it may not sound great and perfect, but you have to get there. We do quite a lot of research at ESG, and I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have -- the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.

People want to enhance what they currently have so that it can embrace the cloud. It's enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use it, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You're not just going to pick one cloud model and hope that it will reach backward and make everything else work. You start with what you have and you gradually embrace these alternative tools.

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.

Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I've heard of one from HPE; it's called Project New Hybrid IT Stack. I'm thinking that HPE can't be the only company, and we can't be the only analysts, seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are focused either on the cloud side of the business -- and there are some very big names -- or on the on-premises side of the business. Embracing both is not so much difficult for them to do as it is simply not at the top of their want-to-do list until they're absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it's really tough to imagine that particularly large enterprises are going to risk [standardizing on them]. They may even start to try to write it themselves, which is possible -- unlikely, but possible.

Where I think we will get the other side of the list is from some of the big organizations -- Oracle and IBM spring to mind -- in terms of being able to embrace both on-premises and off-premises. The commonality among those we've mentioned is that they are systems companies: at the end of the day, they win by delivering the best overall solution and package to their clients, not individual components within it.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We're not going to be seeking suppliers for hybrid IT management by sending requests for proposals (RFPs) to 50 or 60 different companies to find solutions.

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT but the on-premise and the off-premise and how you manage these things. Those are all parts of the solution, or the answer. The real thing, and it’s absolutely crucial, is that you don’t start with those bright shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional factors into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it's in the data that's the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural, implementation, and legal complexities that can't be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?

Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and about ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
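
A rough, illustrative calculation of the physics and economics being described; the fiber propagation factor and the per-gigabyte egress price are assumptions for the example, not any provider's actual figures.

```python
# Back-of-the-envelope latency and egress-cost estimates (assumed inputs).
SPEED_OF_LIGHT_KM_S = 299_792   # in vacuum
FIBER_FACTOR = 0.67             # light in fiber travels at roughly 2/3 of c

def min_round_trip_ms(distance_km: float) -> float:
    """Hard physical floor on round-trip time over fiber, ignoring routing and queuing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

def monthly_egress_cost_usd(gb_per_day: float, price_per_gb: float = 0.09) -> float:
    """price_per_gb is a placeholder, not any provider's rate card."""
    return gb_per_day * 30 * price_per_gb

if __name__ == "__main__":
    # ~12,000 miles is roughly 19,300 km
    print(f"~{min_round_trip_ms(19_300):.0f} ms minimum RTT")            # ~192 ms
    print(f"${monthly_egress_cost_usd(500):,.0f}/month for 500 GB/day")  # $1,350
```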

Gardner: As you are planning your architecture, you need to take these local considerations into account, particularly when you are factoring in costs. If you have to do some heavy lifting to make your bandwidth capable, it might be better to have a local, closet-sized data center -- because they are small and efficient these days -- and stick with a private cloud or on-premises approach. At the least, you should factor in the economic basis for comparison, along with all these other variables you brought up.

Edge centers

Burris: That's correct. In fact, we call them "edge centers." For example, if the application has any Internet of Things (IoT) component, then there will likely be some latency considerations involved, and the cost of doing a round-trip message over a few thousand miles can be pretty significant when we consider how fast computing can be done these days.

The first consideration is: What are the impacts of latency for an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, "I would like to enter a new market or expand my presence in a market in a cost-effective way. And to do that, I want to have the system be more fully automated as it serves that particular market or that particular group of customers. And perhaps it's something that looks more process-manufacturing-oriented, or something along those lines that has IoT capabilities."

The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor costs associated with the implementation.

The only way you are going to do that is if you introduce a fair amount of automation and if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round trip of moving the data from a remote global location back to somewhere in North America -- independent of whether it's legal or not -- comes at a cost that exceeds the automation moment, then you just flat out can't do it. Now, that is the most obvious and stringent consideration.

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, moving data out of a small wind farm can be 10 times as expensive as processing it locally. It can cost hundreds of thousands of dollars a year to do relatively simple and straightforward types of data analysis on the performance of that wind farm.

Process locally, act globally

It's a lot better to have a local presence that can handle local processing requirements against models that operate on locally derived or locally generated data, and to let that work be automated, with only periodic visibility into how the overall system is working. And that's where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work be pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very closely in some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.
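
A toy sketch of that local Sense, Infer, and Act loop may help: read locally, decide locally, act immediately, and ship only a compact summary upstream. The sensor values, threshold, and action here are invented, and a real deployment would run a locally trained model rather than a fixed cutoff.

```python
# Toy edge Sense-Infer-Act loop (all values and names invented).
import random
import time

VIBRATION_LIMIT = 0.8                       # hypothetical locally derived threshold
summary = {"samples": 0, "alerts": 0}

def sense() -> float:
    return random.random()                  # stand-in for a real sensor reading

def infer(reading: float) -> bool:
    return reading > VIBRATION_LIMIT        # stand-in for local model inference

def act(alert: bool) -> None:
    if alert:
        print("local action: feathering turbine blades")  # no round trip required

def report() -> None:
    print("periodic upstream summary:", summary)           # only aggregates leave the site

if __name__ == "__main__":
    for _ in range(50):
        reading = sense()
        alert = infer(reading)
        act(alert)
        summary["samples"] += 1
        summary["alerts"] += int(alert)
        time.sleep(0.01)
    report()
```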

Gardner: It’s a fascinating subject as to what to push down the edge given that the storage cost and processing costs are down and footprint is down and what to then use the public cloud environment or Infrastructure-as-a-Service (IaaS) environment for.

But before we go into any further, Peter, tell us about yourself, and your organization, Wikibon.

Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It's not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are about. And then, finally, how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy -- what we call "systems of agency." And then, ultimately, how cloud and historical infrastructure are going to come together and be optimized to support all of those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as it becomes an increasing feature of any company’s value proposition. That's what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics, and perception. Because in recent years, with offshoring and intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of other second-tier providers, but that is certainly true of the top tier.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that's US-based?

Burris: There are two perspectives on that, but let me just add one check on this first. Alibaba clearly is one of the top tier, and they are not based in the US, and that may be one of the advantages that they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let's just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.

Learn More About

Hybrid IT Management

Solutions From HPE

From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity. 

Money, you can put it here or you can put it there. Time and people, you can put here or you can put there. That machine can be dedicated to this kind of work or that kind of work.

Data is weird, because data can be copied, data can be shared. The value of data appreciates as we use it more successfully, as we integrate it and share it across multiple applications.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means that, for example, my customer data in Germany may not appreciate, or may not be able to generate, the same types of returns as my customer data in the US.

Now, that sets aside any moral question of whether or not Germany or the US has better privacy laws and protects the consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right. On the one hand, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there's some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren't necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated.  

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise -- their legal, marketing, security, risk avoidance and governance units -- right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern -- or that type of concern -- is minuscule relative to the broader questions of cloud computing, of the grabbing of your data and holding it a hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.

Learn More About

Hybrid IT Management

Solutions From HPE

Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud -- think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but driven by executive compensation schemes, literally. So, where executives were told that they would be paid on the basis of return in net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

Think about how to bring the cloud to your business, and to better manage your data assets, and don't automatically default to the notion that you're going to take your business to the cloud.

The same type of thinking pertains here -- the goal is not to get rid of IT assets since those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands?” The cloud experience, which is a very, very powerful concept, ultimately means getting access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack that provides a consistent cloud experience wherever the data has to run -- whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.
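
As a rough, hypothetical sketch of that placement logic -- the workload attributes and thresholds below are illustrative assumptions, not a description of any vendor's product -- a "cloud experience where the data requires it" decision might look something like this:

```python
# Hypothetical placement rule in the spirit of "true private cloud": bring the
# cloud experience to the data rather than moving the data.
from dataclasses import dataclass

@dataclass
class Workload:
    data_location: str          # e.g., "factory-edge", "eu-datacenter"
    subject_to_residency: bool  # privacy / sovereignty constraint
    dataset_tb: float           # size of data the job must touch
    latency_budget_ms: float    # how quickly results are needed

def place(w: Workload) -> str:
    """Return where to provide the cloud experience for this workload."""
    if w.subject_to_residency:
        return f"private cloud at {w.data_location}"   # data cannot leave
    if w.latency_budget_ms < 50 or w.dataset_tb > 10:
        return f"private cloud at {w.data_location}"   # physics or egress cost wins
    return "public cloud region nearest to " + w.data_location

print(place(Workload("factory-edge", False, 25.0, 500.0)))
print(place(Workload("eu-datacenter", True, 0.5, 2000.0)))
```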

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all of those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It must involve a degree of distribution that completely breaks most models. We’re thinking about infrastructure and architecture, but also data management, system management, security management, and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop -- and they will continue to drop -- it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.
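
A back-of-the-envelope calculation shows why time becomes the cost determinant; all of the numbers below are illustrative assumptions, not figures from the discussion:

```python
# Compare completion time if data must move to a remote cloud versus running
# the work next to the event. Numbers are made up for illustration.
def completion_time_s(data_gb, bandwidth_gbps, rtt_ms, compute_s):
    transfer_s = (data_gb * 8) / bandwidth_gbps   # time to move the data
    return transfer_s + (rtt_ms / 1000.0) + compute_s

# Remote: fast compute, but 50 GB must cross a 1 Gbps link first.
remote = completion_time_s(data_gb=50, bandwidth_gbps=1.0, rtt_ms=80, compute_s=30)
# Local: slower compute next to the event, but the data barely moves.
local = completion_time_s(data_gb=50, bandwidth_gbps=40.0, rtt_ms=1, compute_s=45)

print(f"remote: {remote:.0f} s, local: {local:.0f} s")   # roughly 430 s vs 55 s
```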

The fifth issue, which really places an enormous strain on IT, is how we think about backing up and restoring data. Backup and restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services -- and as these applications increasingly are the basis for the business and the end-value that we’re creating -- we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up, even more importantly, applications and even businesses. The issue becomes associated more with restoring. How do we restore applications and businesses across this incredibly complex arrangement of services and data locations and sources?

There's a new data regime that's emerging to support application development. How's that going to work -- the role the data scientists and analytics are going to play in working with application developers?

I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work. The role the data scientists and analytics are going to play in working with application developers – again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in and bring new types of management with maturity and visibility. Who are some of the players that might fill that role? One that I am familiar with, and I think I have seen them on theCUBE, is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come into this problem set and start to solve it? What do you think, from what you’ve heard so far, about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was an unsatisfyingly bad answer for the users. So along came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

We are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized.

The second thing is that to build the next generations of more complex applications -- and especially applications that involve capabilities like deep learning or machine learning with increased automation -- we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications -- inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, the companies are going to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are few out there, and certainly HPE with its New Stack initiative is one of them, and we at Wikibon are especially excited about it. It’s new, it’s immature, but HPE has a lot of piece parts that will be required to make a go of this technology. It’s going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It’s crucial to solve the next generation of problems that the world faces; we can’t move forward without some of these new classes of hybrid technologies that weave together fabrics that are capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

As enterprises face hybrid IT complexity, new management solutions beckon

The next BriefingsDirect Voice of the Analyst interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges.

We'll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.


Here are some excerpts:

Gardner: Paul, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?

 Teich

Teich

Teich: So far, not very much, mostly because of the early state of multi-cloud and the hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute technology and storage. So I think we are seeing that management is the tail of the dog, it’s getting wagged by the rest of it, and it just hasn’t caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what’s actually going on around them -- never mind properly react to it. We really are playing catch up.

Teich: As we look at the options available, the cloud giants -- the public cloud services -- don’t have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there’s a lag time between having those services available and having the cloud services available and then seeing the third-party management solution step in.

Gardner: It’s natural to see that a specific cloud environment, whether it’s purely public like AWS or a hybrid like Microsoft Azure and Azure Stack, wants to help its customers -- but first and foremost it wants to help those customers get to its own solutions. It’s a natural thing. We have seen this before in technology.

There are not that many organizations willing to step into the neutral position of being ecumenical, of saying they want to help the customer first and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management -- agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes -- have been brought to bear on this. But we might be in a different ball game with hybrid IT, Paul, in that there are just too many moving parts and too much complexity, and we might need to look at data-driven approaches. What is your take on that?

Learn More About

Hybrid IT Management

Solutions From HPE

Teich: I think that’s exactly correct. One of the jokes in the industry right now is if you want to find your stranded instances in the cloud, cancel your credit card and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It’s hard to keep track of this, because we don’t have adequate tools yet.

When you are an IT manager and you have a lot of folks on public cloud services, you don't have a full picture.

That single pane of glass, looking at a lot of data and information, is soon overloaded. When you are an IT manager at a mid-sized or large corporation, you have a lot of folks paying out-of-pocket right now, slapping a credit card down on public cloud services, so you don’t have a full picture. And where you do have a picture, there are so many moving parts.

I think we have to get past having a screen full of data, a screen full of information, and to a point where we have insight. And that is going to require a new generation of tools, probably borrowing from some of the machine learning evolution that’s happening now in pattern analytics.
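
As a minimal, hypothetical example of that kind of pattern analytics over operational data -- assuming scikit-learn is available and using synthetic metrics rather than any real telemetry -- an anomaly detector over host metrics might be sketched like this:

```python
# Sketch: flag unusual combinations of infrastructure metrics instead of
# eyeballing a single pane of glass. The telemetry here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: cpu_pct, mem_pct, net_mbps, iops -- one row per host per interval
normal = rng.normal(loc=[40, 55, 200, 800], scale=[8, 10, 40, 150], size=(5000, 4))
spikes = rng.normal(loc=[95, 90, 5, 50], scale=[3, 5, 2, 20], size=(10, 4))
telemetry = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.005, random_state=0).fit(telemetry)
flags = model.predict(telemetry)            # -1 marks suspected anomalies

print("anomalous samples:", int((flags == -1).sum()))
```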

Gardner: The timing in some respects couldn’t be better, right? Just as we are facing this massive problem of complexity of volume and velocity in managing IT across a hybrid environment, we have some of the most powerful and cost-effective means to deal with big data problems just like that.

Life in the infrastructure

Paul, before we go further let’s hear about you and your organization, and tell us, if you would, what a typical day is like in the life of Paul Teich?

Teich: At TIRIAS Research we are boutique industry analysts. By boutique we mean there are three of us -- three principal analysts; we have just added a few senior analysts. We are close to the metal. We live in the infrastructure. We are all former engineers and/or product managers. We are very familiar with deep technology.

My day tends to be first, a lot of reading. We look at a lot of chips, we look at a lot of service-level information, and our job is to, at a very fundamental level, take very complex products and technologies and surface them to business decision-makers, IT decision-makers, folks who are trying to run lines of business (LOB) and make a profit. So we do the heavy lifting on why new technology is important, disruptive, and transformative.

Gardner: Thanks. Let’s go back to this idea of data-driven and analytical values as applied to hybrid IT management and complexity. If we can apply AI and machine learning to solve business problems outside of IT -- in such verticals as retail, pharmaceutical, transportation -- with the same characteristics of data volume, velocity, and variety, why not apply that to IT? Is this a case of the cobbler’s kids having no shoes? You would think that IT would be among the first to do this.

Dig deep, gain insight

Teich: The cloud giants have already implemented systems like this because of necessity. So they have been at the front-end of that big data mantra of volume, velocity -- and all of that.

To successfully train for the new pattern recognition analytics, especially the deep learning stuff, you need a lot of data. You can’t actually train a system usefully without presenting it with a lot of use cases.

The public clouds have this data. They are operating social media services, large retail storefronts, and e-tail, for example. As the public clouds became available to enterprises, the IT management problem ballooned into a big data problem. I don’t think it was a big data problem five or 10 years ago, but it is now.

That’s a big transformation. We haven’t actually internalized what that means operationally when your internal IT department no longer runs all of your IT jobs anymore.

We are generating big data and that means we need big data tools to go analyze it and to get that relevant insight.

That’s the biggest sea change -- we are generating big data in the course of managing our IT infrastructure now, and that means we need big data tools to go analyze it, and to get that relevant insight. It’s too much data flowing by for humans to comprehend in real time.

Gardner: And, of course, we are also talking about islands of such operational data. You might have a lot of data in your legacy operations. You might have tier 1 apps that you are running on older infrastructure, and you are probably happy to do that. It might be very difficult to transition those specific apps into newer operating environments.

You also have multiple SaaS and cloud data repositories and logs. There’s also not only the data within those apps, but there’s the metadata as to how those apps are running in clusters and what they are doing as a whole. It seems to me that not only would you benefit from having a comprehensive data and analytics approach for your IT operations, but you might also have a workflow and process business benefit by being an uber analyst, by being on top of all of these islands of operational data. 

Learn More About

Hybrid IT Management

Solutions From HPE

To me, moving toward a comprehensive intelligence and data analysis capability for IT is the gift that keeps giving. You would then be able to also provide insight for an uber approach to processes across your entire organization -- across the supply chains, across partner networks, and back to your customers. Paul, do you also see that there’s an ancillary business benefit to having that data analysis capability, and not ceding it to your cloud providers?

Manage data, improve workflow

Teich: I do. At one end of the spectrum it’s simply what do you need to do to keep the lights on, where is your data, all of it, in the various islands and collections and the data you are sharing with your supply chain as well. Where is the processing that you can apply to that data? Increasingly, I think, we are looking at a world in which the location of the stored data is more important than the processing power.

The management of all the data you have needs to segue into visible workflows.

We have processing power pretty much everywhere now. What’s key is moving data from place to place and setting up the connections to acquire it. It means that the management of all the data you have needs to segue into visible workflows.

Once I know what I have, and I am managing it at a baseline effectively, then I can start to improve my processes. Then I can start to get better workflows, internally as well as across my supply chain. But I think at first it’s simply, “What do I have going on right now?”

As an IT manager, how can I rein in some of these credit card instances, credit card storage on the public clouds, and put that all into the right mix. I have to know what I know first -- then I can start to streamline. Then I can start to control my costs. Does that make sense?

Gardner: Yes, absolutely. And how can you know which people you want to give even more credit to on their credit cards – and let them do more of what they are doing? It might be very innovative, and it might be very cost-effective. There might also be those wasting money, spinning their wheels, repaving cow paths, over and over again.

If you don’t have the ability to make those decisions with insight and visibility, and then further analyze how best to go about it -- it seems to me a no-brainer.

It also comes at an auspicious time as IT is trying to re-factor its value to the organization. If in fact they are no longer running servers and networks and keeping the trains running on time, they have to start being more in the business of defining what trains should be running and then how to make them the best business engines, if you will.

If IT departments need to rethink their role and step up their game, then they need to use technologies like advanced hybrid IT management from vendors with a neutral perspective. Then they become the overseers of operations at a fundamentally different level.

Data revelation, not revolution

Teich: I think that’s right. It’s evolutionary stuff. I don’t think it’s revolutionary. I think that in the same way you add servers to a virtual machine farm, as your demand increases, as your baseline demand increases, IT needs to keep a handle on costs -- so you can understand which jobs are running where and how much more capacity you need.

One of the things they are missing with random access to the cloud is bulk purchasing. So, at a very fundamental level, IT can help the organization manage which clouds it is spending on by aggregating the purchase of storage and the purchase of compute instances to get better buying power, and by doing price arbitrage when it can. To me, those are fundamental qualities of IT going forward in a multi-cloud environment.

They are extensions of where we are today; it just doesn’t seem like it yet. IT has always added new servers to increase internal capacity, and this is just the next evolutionary step.
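
A toy sketch of that aggregation-and-arbitrage idea -- with entirely made-up providers and prices -- might look like this:

```python
# Hypothetical sketch: aggregate demand gathered from individual teams'
# "credit card" usage and buy each resource class from the cheapest provider.
PRICE_PER_UNIT_HOUR = {
    "providerA": {"vcpu": 0.045, "gb_storage": 0.00010},
    "providerB": {"vcpu": 0.052, "gb_storage": 0.00007},
}

def cheapest(resource, hours, units):
    quotes = {provider: prices[resource] * hours * units
              for provider, prices in PRICE_PER_UNIT_HOUR.items()}
    best = min(quotes, key=quotes.get)
    return best, quotes[best]

demand = {"vcpu": 2000, "gb_storage": 500_000}   # aggregated across teams
for resource, units in demand.items():
    provider, cost = cheapest(resource, hours=730, units=units)
    print(f"{resource}: buy from {provider} for about ${cost:,.0f} per month")
```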

Gardner: It certainly makes sense that you would move as maturity occurs in any business function toward that orchestration, automation and optimization – rather than simply getting the parts in place. What you are describing is that IT is becoming more like a procurement function and less like a building, architecture, or construction function, which is just as powerful.

Not many people can make those hybrid IT procurement decisions without knowing a lot about the technology. Someone with just business acumen can’t walk in and make these decisions. I think this is an opportunity for IT to elevate itself and become even more essential to the businesses.

Teich: The opportunity is a lot like the Sabre airline scheduling system that nearly every airline uses now. That’s a fundamental capability for doing business, and it’s separate from the technology of Sabre. It’s the ability to schedule -- people and airplanes – and it’s a lot like scheduling storage and jobs on compute instances. So I think there will be this step.

But to go back to the technology versus procurement, I think some element of that has always existed in IT in terms of dealing with vendors and doing the volume purchases on one side, but also having some architect know how to compose the hardware and the software infrastructure to serve those applications.

Connect the clouds

We’re simply translating that now into a multi-cloud architecture. How do I connect those pieces? What network capacity do I need to buy? What kind of storage architectures do I need? I don’t think that all goes away. It becomes far more important as you look at, for example, AWS as a very large bag of services. It’s very powerful. You can assemble it any way you want, but in some respects that’s like programming in C. You have all the power of assembly language and all the danger of assembly language, because you can walk all over memory and delete stuff. So you have to have architects who know how to build a service that’s robust, that won’t go down, and that serves your application most efficiently -- and all of those things are still hard to do.

So, architecture and purchasing are both still necessary. They don’t go away. I think the important part is that the orchestration now becomes as important as deploying a service on a single set of infrastructure, because you’ve got multiple sets of infrastructure.

Learn More About

Hybrid IT Management

Solutions From HPE

Gardner: For hybrid IT, it really has to be an enlightened procurement, not just blind procurement. And the people in the trenches that are just buying these services -- whether the developers or operations folks -- they don’t have that oversight, that view of the big picture to make those larger decisions about optimization of purchasing and business processes.

That gets us back to some of our earlier points of, what are the tools, what are the management insights that these individuals need in order to make those decisions? Like with Sabre, where they are optimizing to fill every hotel room or every airplane seat, we’re going to want in hybrid IT to fill every socket, right? We’re going to want all that bare metal and all those virtualization instances to be fully optimized -- whether it’s your cloud or somebody else’s.

It seems to me that there is an algorithmic approach eventually, right? Somebody is going to need to be the keeper of that algorithm as to how this all operates -- but you can’t program that algorithm if you don’t have the uber insights into what’s going on, and what works and what doesn’t.

What’s the next step, Paul, in terms of the technology catching up to the management requirements in this new hybrid IT complex environment?

Teich: People can develop some of that experience on a small scale, but there are so many dimensions to managing a multi-cloud, hybrid IT infrastructure business model. It’s throwing off all of this metadata for performance and efficiency. It’s ripe for machine learning.

We're moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale.

In a strong sense, we’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale. It’s just going to be looking at a bigger picture, it’s going to be managing more variables, and learning across a lot more data points than a human can possibly comprehend.

We are at this really interesting point in the industry where we are getting deep-learning approaches that are coming online cost effectively; they can help us do that. They have a little while to go before they are fully mature. But IT organizations that learn to take advantage of these systems now are going to have a head start, and they are going to be more efficient than their competitors.

Gardner: At the end of the day, if you’re all using similar cloud services then that differentiation between your company and your competitor is in how well you utilize and optimize those services. If the baseline technologies are becoming commoditized, then optimization -- that algorithm-like approach to smartly moving workloads and data, and providing consumption models that are efficiency-driven -- that’s going to be the difference between a 1 percent margin and a 5 percent margin over time.

The deep-learning difference

Teich: The important part to remember is that these machine-training algorithms are somewhat new, so there are several challenges with deploying them. First is the transparency issue. We don’t quite yet know how a deep-learning model makes specific decisions. We can’t point to one aspect and say that aspect is managing the quality of our AWS services, for example. It’s a black box model.

We can’t yet verify the results of these models. We know they are being efficient and fast but we can’t verify that the model is as efficient as it could possibly be. There is room for improvement over the next few years. As the models get better, they’ll leave less money on the table.

We also need to validate that when you build a machine-learning model, it covers all the situations you want it to cover. You need an audit trail for specific sets of decisions, especially with data that is subject to regulatory constraints. You need to know why you made decisions.

So the net is, once you are training a machine-learning model, you have to keep retraining it over time. Your model is not going to do the same thing as your competitor's model. There is a lot of room for differentiation, a lot of room for learning. You just have to go into it with your eyes open that, yeah, occasionally things will go sideways. Your model might do something unexpected, and you just have to be prepared for that. We’re still in the early days of machine learning.
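
A minimal, hypothetical sketch of that operational discipline -- periodic retraining plus an append-only audit trail of each automated decision -- could look like this; the model and decision rule are placeholders:

```python
# Hypothetical sketch: retrain on a schedule and log every automated decision
# so it can be explained later.
import json
import time

def retrain(history):
    """Stand-in for refitting a model on recent operational data."""
    return {"version": int(time.time()), "trained_on": len(history)}

def decide(model, workload):
    placement = "private" if workload.get("regulated") else "public"
    record = {
        "model_version": model["version"],
        "input": workload,
        "decision": placement,
        "timestamp": time.time(),
    }
    with open("decision_audit.log", "a") as log:   # append-only audit trail
        log.write(json.dumps(record) + "\n")
    return placement

model = retrain(history=[{}] * 10_000)
print(decide(model, {"app": "billing", "regulated": True}))
```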

Gardner: You raise an interesting point, Paul, because even as the baseline technology services in the multi-cloud era become commoditized, you’re going to have specific, unique, and custom approaches to your own business’ management.

Your hybrid IT optimization is not going to be like that of any other company. I think getting that machine-learning capability attuned to your specific hybrid IT panoply of resources and assets is going to be a gift that keeps giving. Not only will you run your IT better, you will run your business better. You’ll be fleet and agile.

If some risk arises -- whether it’s a cyber security risk, a natural disaster risk, a business risk of unintended or unexpected changes in your supply chain or in your business environment -- you’re going to be in a better position to react. You’re going to have your eyes to the ground, you’re going to be well tuned to your specific global infrastructure, and you’ll be able to make good choices. So I am with you. I think machine learning is essential, and the sooner you get involved with it, the better.

Before we sign off, who are the vendors and some of the technologies that we will look to in order to fill this apparent vacuum on advanced hybrid IT management? It seems to me that traditional IT management vendors would be a likely place to start.

Who’s in?

Teich: They are a likely place to start. All of them are starting to say something about being in a multi-cloud environment, about being in a multi-cloud-vendor environment. They are already finding themselves there with virtualization, and the key is they have recognized that they are in a multi-vendor world.

There are some start-ups, and I can’t name them specifically right now. But a lot of folks are working on this problem of how do I manage hybrid IT: In-house IT, and multi-cloud orchestration, a lot of work going on there. We haven’t seen a lot of it publicly yet, but there is a lot of venture capital being placed.

I think this is the next step, just like PCs came in the office, smartphones came in the office as we move from server farms to the clouds, going from cloud to multi-cloud, it’s attracting a lot of attention. The hard part right now is nailing whom to place your faith in. The name brands that people are buying their internal IT from right now are probably good near-term bets. As the industry gets more mature, we’ll have to see what happens.

Learn More About

Hybrid IT Management

Solutions From HPE

Gardner: We did hear a vision described on this from Hewlett Packard Enterprise (HPE) back in June at their Discover event in Las Vegas. I’m expecting to hear quite a bit more on something they’ve been calling New Hybrid IT Stack that seems to possess some of the characteristics we’ve been describing, such as broad visibility and management.

So at least one of the long-term IT management vendors is looking in this direction. That’s a place I’m going to be focusing on, wondering what the competitive landscape is going to be, and if HPE is going to be in the leadership position on hybrid IT management.

Teich: Actually, I think HPE is the only company I’ve heard from so far talking at that level. Everybody is voicing some opinion about it, but from what I’ve heard, it does sound like a very interesting approach to the problem.

Microsoft has actually constrained its view of Azure Stack to a very small set of problems, and is actively saying, “No, we don’t do that.” If you’re looking at doing virtual machine migration and taking advantage of multi-cloud for general-purpose solutions, it’s probably not something that you want to do there yet. It was very interesting for me, then, to hear about the HPE Project New Hybrid IT Stack and what HPE is planning to do there.

Gardner: For Microsoft, the more automated and constrained they can make it, the more likely you’d be susceptible or tempted to want to just stay within an Azure and/or Azure Stack environment. So I can appreciate why they would do that.

Before we sign off, one other area I’m going to be keeping my eyes on is around orchestration of containers, Kubernetes, in particular. If you follow orchestration of containers and container usage in multi-cloud environments, that’s going to be a harbinger of how the larger hybrid IT management demands are going to go as well. So a canary in the coal mine, if you will, as to where things could get very interesting very quickly.

The place to be

Teich: Absolutely. And I point out that the Linux Foundation’s CloudNativeCon in early December 2017 looks like the place to be -- with nearly everyone in the server infrastructure community and cloud infrastructure communities signing on. Part of the interest is in basically interchangeable container services. We’ll see that become much more important. So that sleepy little technical show is going to be invaded by “suits,” this year, and we’re paying a lot of attention to it.

Gardner: Yes, I agree. I’m afraid we’ll have to leave it there. Paul, how can our listeners and readers best follow you to gain more of your excellent insights?

Teich: You can follow us at www.tiriasresearch.com, and also we have a page on Forbes Tech, and you can find us there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU

The next BriefingsDirect high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information.

We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intense systems architectures.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

 Sandholm

Sandholm

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don’t know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig in to how is this being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Heads-Up No-Limit Texas Hold'em has become the leading benchmark in the AI community.

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms really have nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the 161 different situations that a player can face. That is one followed by 161 zeros. And if you think about that, it’s not only more than the number of atoms in the universe, but even if, for every atom in the universe, you have a whole other universe and count all those atoms in those universes -- it will still be more than that.
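
A quick arithmetic check of that comparison, taking the commonly cited order-of-magnitude estimate of roughly 10^80 atoms in the observable universe:

```python
# One whole universe of atoms for every atom is (10^80)^2 = 10^160,
# which is still smaller than the 10^161 situations in the game.
atoms = 10 ** 80          # commonly cited order-of-magnitude estimate
situations = 10 ** 161
print(situations > atoms * atoms)   # True
```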

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it's not really the only part here -- or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don't have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn't want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

All three of these modules run on the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC), for which the hardware was built by Hewlett Packard Enterprise (HPE).
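
To give a flavor of what equilibrium finding means in a far simpler setting -- this toy is not the Libratus algorithm, just standard regret matching in self-play on rock-paper-scissors -- the average strategy below converges toward that small game's Nash equilibrium:

```python
# Toy regret matching in self-play; the average strategy approaches uniform play.
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, 0 on a tie, -1 on a loss."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from_regret(regret):
    positives = [max(r, 0.0) for r in regret]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def sample(strategy):
    return random.choices(range(ACTIONS), weights=strategy)[0]

regret = [[0.0] * ACTIONS, [0.0] * ACTIONS]
strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(100_000):
    strategies = [strategy_from_regret(r) for r in regret]
    moves = [sample(s) for s in strategies]
    for p in range(2):
        opponent_move = moves[1 - p]
        for a in range(ACTIONS):  # regret: what action a would have earned instead
            regret[p][a] += payoff(a, opponent_move) - payoff(moves[p], opponent_move)
        strategy_sum[p] = [s + x for s, x in zip(strategy_sum[p], strategies[p])]

average = [x / sum(strategy_sum[0]) for x in strategy_sum[0]]
print("average strategy:", [round(x, 2) for x in average])  # roughly [0.33, 0.33, 0.33]
```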

HPC from HPE

Overcomes Barriers

To Supercomputing and Deep Learning

Gardner: Is this being used in any business settings? It certainly seems like there's potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you're describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don't want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can't have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don't, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we're seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That's the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world -- but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark -- in this case the poker game -- are essential. But with that large a potential data set -- the probability space you mentioned -- the underlying computer systems need to keep up. Where are you in terms of the threshold that holds you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors, to continuing?

Sandholm: It's all of the above, and we are very fortunate that we had access to Bridges; otherwise this wouldn’t have been possible at all.  We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field -- but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let's examine the commercialization potential of this. You're not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.

HPC from HPE

Overcomes Barriers 

To Supercomputing and Deep Learning

Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how do you price strategically where you are taking into account the opponent’s strategic response in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let's say you're a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess -- it's often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future.

Sandholm: Exactly! If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, cyber security has several applications, such as zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and what algorithms you should run depends on what the other opposing governments run -- so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that's also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: a few players, very big, very strategic.

Gaming your own immune system

Sandholm: The most radical application is something that we are working on currently in the lab, where we are doing medical treatment planning using these types of sequential planning techniques. We're actually testing how well one can steer a patient's T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan -- but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent -- but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we've heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you're keeping a keen eye on?

Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I'm looking at it from the outside. I'm trusting that they will continue to build the best hardware and maintain it in the best way -- so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well.

Sandholm: Actually here in the live game in Las Vegas they don't allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don't put our AI in there; it’s against their site rules. Also, I think it's unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcomes

The next BriefingsDirect healthcare transformation use-case discussion focuses on how an ecosystem approach to big data solutions brings about improved healthcare informatics-driven outcomes.

We'll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.


Here are some excerpts:

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution -- or the total value proposition -- for our customers.

Gardner: I'm sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.

Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.

Big data value add

Heemskerk: What is very interesting for big data is what happens if you combine it with value-based care. It's a very interesting topic. For example, nowadays, a hospital is not reimbursed for every procedure that it does in the hospital – the value is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement takes place. In that sense, analytics become very important for hospitals in measuring how efficiently things are being done and in determining whether the costs are acceptable.

Gardner: The same data that can be used to become more efficient can also be used for better healthcare outcomes -- for understanding the path of a disease, or the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kind of different things.

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?

Healthy hardware relationships 

Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced by HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how can we jointly go to market.

As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.

Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?

Heemskerk: Yes, that is exactly the case, indeed. We recognized that we should not focus only on problems related to the clinical implications, and not only on the problems that HPE addresses -- the IT infrastructure and the connectivity side of the value chain. Instead, we are really looking at the problems that C-suite-level healthcare executives are facing.

You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network and that creates all kinds of challenges, both on the clinical application layer, but also on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?

By addressing these kinds of questions and jointly going to our customers with HPE, we can improve user experiences for the customers, we can create better services, we have optimized these solutions, and then we can deliver a lot of time savings for the hospitals as well.

Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization -- the people, the process, and the culture -- the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.

Who else makes up the ecosystem? It takes more than two players to make an ecosystem.

Heemskerk: Yes, that's very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.

Of course, we think that the Philips healthcare solutions are quite often the best, jointly focused with the solutions from HPE, but from time to time you can be partnering with different vendors.

Besides that, we don't have all of the clinical applications ourselves. By partnering with other vendors in the ecosystem, we can sometimes enhance the solutions we offer; think of 3D solutions and 3D printing solutions, for example.

Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?

Saving time, money, and people

Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.

First of all, addressing standardization across the clinical application layer means that a customer doesn't have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that's already a big savings.

Secondly, by harmonizing and making better effective use of the clinical applications, you can drive the total cost of ownership down.

Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.

Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better -- where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?

Heemskerk: What would be very interesting for me to see is whether we can create some kind of patient-centric data file for each patient. Consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, and Apple Watch coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file, such as chronic disease information, now that people are being monitored much more, and much more often.

If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using etc., and you have that all in this patient-centric file -- it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data is growing, you will have smarter algorithms that can also interpret that data – and so artificial intelligence (AI) will become much more important.
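
As a rough sketch of what such a patient-centric file could look like -- hypothetical field names only, not a Philips or HPE data model -- the idea is a single record that merges device readings, hospital touch points, and medications into one chronological timeline.

```python
# Hypothetical sketch of a patient-centric data file: one chronological timeline
# that merges consumer-device readings, hospital touch points, and medications.
# Field names are illustrative only, not a Philips or HPE schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    timestamp: datetime
    source: str        # e.g. "wearable", "hospital", "pharmacy"
    kind: str          # e.g. "heart_rate", "radiology_visit", "prescription"
    detail: str

@dataclass
class PatientRecord:
    patient_id: str
    events: List[Event] = field(default_factory=list)

    def add(self, event: Event) -> None:
        self.events.append(event)

    def timeline(self) -> List[Event]:
        # Chronological view across all touch points, as described above.
        return sorted(self.events, key=lambda e: e.timestamp)

record = PatientRecord("patient-001")
record.add(Event(datetime(2017, 9, 1, 7, 30), "wearable", "heart_rate", "resting 62 bpm"))
record.add(Event(datetime(2017, 9, 1, 10, 0), "hospital", "radiology_visit", "chest X-ray"))
record.add(Event(datetime(2017, 9, 1, 11, 0), "pharmacy", "prescription", "drug A, 10 mg daily"))
for e in record.timeline():
    print(e.timestamp, e.source, e.kind, e.detail)
```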

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       How IoT capabilities open new doors for Miami Telecoms Platform Provider Identidad

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

How IoT and OT collaborate to usher in the data-driven factory of the future

The next BriefingsDirect Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains.

We’ll now learn how a leading-edge manufacturer, Hirotec, in the global automotive industry, takes advantage of IoT and Operational Technology (OT) combined to deliver dependable, managed, and continuous operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us to find the best factory of the future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's happening in the market with business and technology trends that’s driving this need for more modern factories and more responsive supply chains?

Hester: Our customers are demanding shorter lead times. There is a drive for even higher quality, especially in automotive manufacturing. We’re also seeing a much higher level of customization requests coming from our customers. So how can we create products that better match the unique needs of each customer?

As we look at how we can continue to compete in an ever-competitive environment, we are starting to see how the solutions from IoT can help us.

Gardner: What is it about IoT and Industrial IoT (IIoT) that allows you to do things that you could not have done before?

Hester: Within the manufacturing space, a lot of data has been there for years; for decades. Manufacturing has been very good at collecting data. The challenges we've had, though, is bringing in that data in real-time, because the amount of data is so large. How can we act on that data quicker, not on a day-by-day basis or week-by-week basis, but actually on a minute-by-minute basis, or a second-by-second basis? And how do we take that data and contextualize it?

It's one thing in a manufacturing environment to say, “Okay, this machine is having a challenge.” But it’s another thing if I can say, “This machine is having a challenge, and in the context of the factory, here's how it's affecting downstream processes, and here's what we can do to mitigate those downstream challenges that we’re going to have.” That’s where IoT starts bringing us a lot of value.

The analytics, the real-time contextualization of that data that we’ve already had in the manufacturing area, is very helpful.
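
A minimal sketch of that kind of contextualization -- illustrative only, not Hirotec's actual system -- is to map a raw machine alert onto an assumed factory layout so the same event also reports its downstream impact.

```python
# Illustrative sketch (not Hirotec's actual system): turning a raw machine alert
# into contextualized information by mapping it onto downstream processes.

DOWNSTREAM = {                     # assumed factory layout: station -> stations it feeds
    "press_01": ["weld_02", "weld_03"],
    "weld_02": ["assembly_01"],
    "weld_03": ["assembly_01"],
    "assembly_01": [],
}

def affected_downstream(station, layout):
    """Walk the layout graph to list every process affected by a fault."""
    seen, stack = [], [station]
    while stack:
        current = stack.pop()
        for nxt in layout.get(current, []):
            if nxt not in seen:
                seen.append(nxt)
                stack.append(nxt)
    return seen

alert = {"station": "press_01", "status": "fault", "minute": "2017-10-03T08:41"}
impact = affected_downstream(alert["station"], DOWNSTREAM)
print(f"{alert['station']} fault at {alert['minute']}; downstream impact: {impact}")
```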

Gardner: So moving from what may have been a gather, batch, analyze, report process -- we’re now taking more discrete analysis opportunities and injecting that into a wider context of efficiency and productivity. So this is a fairly big change. This is not incremental; this is a step-change advancement, right?

A huge step-change 

Hester: It’s a huge change for the market. It's a huge change for us at Hirotec. One of the things we like to talk about is what we jokingly call the Tuesday Morning Meeting. We talk about this idea that in the morning at a manufacturing facility, everyone gets together and talks about what happened yesterday, and what we can do today to make up for what happened yesterday.

Instead, now we’re making that huge step-change to say,  “Why don't we get the data to the right people with the right context and let them make a decision so they can affect what's going on, instead of waiting until tomorrow to react to what's going on?” It’s a huge step-change. We’re really looking at it as how can we take small steps right away to get to that larger goal.

In manufacturing areas, there's been a lot of delay, confusion, and hesitancy to move forward because everyone sees the value, but it's this huge change, this huge project. At Hirotec, we’re taking more of a scaled approach, and saying let's start small, let’s scale up, let’s learn along the way, let's bring value back to the organization -- and that's helped us move very quickly.

Gardner: We’d like to hear more about that success story but in the meantime, tell us about Hirotec for those who don't know of it. What role do you play in the automotive industry, and how are you succeeding in your markets?

Hester: Hirotec is a large, tier-1 automotive supplier. What that means is we supply parts and systems directly to the automotive original equipment manufacturers (OEMs), like Mazda, General Motors, FCA, Ford, and we specialize in door manufacturing, as well as exhaust system manufacturing. So every year we make about 8 million doors, 1.8 million exhaust systems, and we provide those systems mainly to Mazda and General Motors, but also we provide that expertise through tooling.

For example, if an automotive OEM would like Hirotec’s expertise in producing these parts, but they would like to produce them in-house, Hirotec has a tooling arm where we can provide that tooling for automotive manufacturing. It's an interesting strategy that allows us to take advantage of data both in our facilities, but then also work with our customers on the tooling side to provide those lessons learned and bring them value there as well.

Gardner: How big of a distribution are we talking about? How many factories, how many countries; what’s the scale here?

Hester: We are based in Hiroshima, Japan, but we’re actually in nine countries around the world, currently with 27 facilities. We have reached into all the major continents with automotive manufacturing: we’re in North America, we’re in Europe, we’re all throughout Asia, in China and India. We have a large global presence. Anywhere you find automotive manufacturing, we’re there supporting it.

Gardner: With that massive scale, very small improvements can turn into very big benefits. Tell us why the opportunity in a manufacturing environment to eke out efficiency and productivity has such big payoffs.

Hester: So especially in manufacturing, what we find when we get to those large scales like you're alluding to is that a 1 percent or 2 percent improvement has huge financial benefits. And so the other thing is in manufacturing, especially automotive manufacturing, we tend to standardize our processes, and within Hirotec, we’ve done a great job of standardizing that world-class leadership in door manufacturing.

And so what we find is when we get improvements not only in IoT but anywhere in manufacturing, if we can get 1 percent or 2 percent, not only is that a huge financial benefit but because we standardized globally, we can move that to our other facilities very quickly, doubling down on that benefit.

Gardner: Well, clearly Hirotec sees this as something to really invest in, they’ve created the IoT Lab. Tell me a little bit about that and how that fits into this?

The IoT Lab works

Hester: The IoT Lab is a very exciting new group, it's part of our Advanced Engineering Center (AEC). The AEC is a group out of our global headquarters and this group is tasked with the five- to 10-year horizon. So they're able to work across all of our global organizations with tooling, with engineering, with production, with sales, and even our global operations groups. Our IoT group goes and finds solutions that can bring value anywhere in the organization through bringing in new technologies, new ideas, and new solutions.

And so we formed the IoT Lab to find how can we bring IoT-based solutions into the manufacturing space, into the tooling space, and how actually can those solutions not only help our manufacturing and tooling teams but also help our IT teams, our finance teams, and our sales teams.

Gardner: Let's dig back down a little bit into why IT, IoT and Operational Technology (OT) are into this step-change opportunity, looking for some significant benefits but being careful in how to institute that. What is required when you move to a more an IT-focused, a standard-platform approach -- across all the different systems -- that allows you to eke these great benefits?

Tell us about how IoT as a concept is working its way into the very edge of the factory floor.

Hester: One of the things we’re seeing is that IT is beginning to meld, like you alluded to, with OT -- and there really isn't a distinction between OT and IT anymore. What we're finding is that we’re starting to get to these solution levels by working with partners such as PTC and Hewlett Packard Enterprise (HPE) to bring our IT group and our OT group all together within Hirotec and bring value to the organization.

What we find is that it is no longer a case of OT having a need and making a request for IT to support it, or of IT having a need and going to OT for support. Instead, we have organizational needs, and we come to the table together to make these changes. That in itself is bringing even more value to the organization.

Instead of coming last-minute to the IT group and saying, “Hey, we need your support for all these different solutions, we’ve already got everything set, and you are just here to put it in,” what we are seeing is that they bring their expertise in and help us out upfront, and we’re finding better solutions because we are getting experts from both OT and IT together.

We are seeing this convergence of these two teams working on solutions to bring value. And they're really moving everything to the edge. So where everyone talks about cloud-based computing -- or maybe it’s in their data center -- where we are finding value is in bringing all of these solutions right out to the production line.

We are doing data collection right there, but we are also starting to do data analytics right at the production line level, where it can bring the best value in the fastest way.

Gardner: So it’s an auspicious time because just as you are seeking to do this, the providers of technology are creating micro data centers, and they are creating Edgeline converged systems, and they are looking at energy conservation so that they can do this in an affordable way -- and with storage models that can support this at a competitive price.

What is it about the way that IT is evolving and providing platforms and systems that has gotten you and The IoT Lab so excited?

Excitement at the edge  

Hester: With IoT and IT platforms, originally to do the analytics, we had to go up to the cloud -- that was the only place where the compute power existed. Solution providers now are bringing that level of intelligence down to the edge. We’re hearing some exciting things from HPE on memory-driven computing, and that's huge for us because as we start doing these very complex analytics at the edge, we need that power, that horsepower, to run different applications at the same time at the production line. And something like memory-driven solutions helps us accomplish that.

It's one thing to have higher-performance computing, but another thing to gain edge computing that's proper for the factory environment. A manufacturing environment is not conducive to standard servers in a standard rack, which need dust protection and heat protection -- conditions that don't exist on a factory floor.

The other thing we're beginning to see with edge computing, that HPE provides with Edgeline products, is that we have computers that have high power, high ability to perform the analytics and data collection capabilities -- but they're also proper for the environment.

I don't need to build out a special protection unit with special temperature control, humidity control – all of which drives up energy costs, which drives up total costs. Instead, we’re able to run edge computing in the environment as it should be on its own, protected from what comes in a manufacturing environment -- and that's huge for us.

Gardner: They are engineering these systems now with such ruggedized micro facilities in mind. It's quite impressive that the very best of what a data center can do, can now be brought to the very worst types of environments. I'm sure we'll see more of that, and I am sure we'll see it get even smaller and more powerful.

Do you have any examples of where you have already been able to take IoT in the confluence of OT and IT to a point where you can demonstrate entirely new types of benefits? I know this is still early in the game, but it helps to demonstrate what you can do in terms of efficiency, productivity, and analytics. What are you getting when you do this well?

IoT insights save time and money

Hester: Taking the stepped strategy that we have, we actually started at Hirotec very small with only eight machines in North America and we were just looking to see if the machines are on, are they running, and even from there, we saw a value because all of a sudden we were getting that real-time contextualized insight into the whole facility. We then quickly moved over to one of our production facilities in Japan, where we have a brand-new robotic inspection system, and this system uses vision sensors, laser sensors, force sensors -- and it's actually inspecting exhaust systems before they leave the facility.

We very quickly implemented an IoT solution in that area, and all we did was we said, “Hey, we just want to get insight into the data, so we want to be able to see all these data points. Over 400 data points are created every inspection. We want to be able to see this data, compared in historical ways -- so let’s bring context to that data, and we want to provide it in real-time.”
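
As a rough picture of what comparing those inspection data points "in historical ways" might involve -- an illustrative sketch, not the deployed Hirotec solution -- each new reading can be checked against the historical distribution for that same data point and flagged when it drifts.

```python
# Minimal sketch (not the actual Hirotec/PTC/HPE solution): compare the latest
# inspection's data points against historical readings for the same points and
# flag any that drift beyond an assumed threshold.
import statistics

history = {                             # per-data-point historical readings (illustrative)
    "laser_gap_mm": [1.02, 0.98, 1.01, 1.00, 0.99],
    "weld_force_n": [512, 508, 515, 511, 509],
}
latest = {"laser_gap_mm": 1.15, "weld_force_n": 510}

def flag_drift(history, latest, threshold=3.0):
    flagged = []
    for point, value in latest.items():
        mean = statistics.mean(history[point])
        stdev = statistics.stdev(history[point]) or 1e-9   # avoid divide-by-zero
        if abs(value - mean) / stdev > threshold:
            flagged.append((point, value, round(mean, 3)))
    return flagged

for point, value, mean in flag_drift(history, latest):
    print(f"{point}: latest {value} deviates from historical mean {mean}")
```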

What we found from just those two projects very quickly is that we're bringing value to the organization because now our teams can go in and say, “Okay, the system is doing its job, it's inspecting things before they leave our facility to make sure our customers always get a high-quality product.” But now, we’re able to dive in and find different trends that we weren't able to see before because all we were doing is saying, “Okay, this system leaves the facility or this system doesn't.”

And so already just from that application, we’ve been able to find ways that our engineers can even increase the throughput and the reliability of the system because now they have these historical trends. They were able to do a root-cause analysis on some improvements that would have taken months of investigation; it was completed in less than a week for us.

And so that's a huge value -- not only in that my project costs go down but now I am able to impact the organization quicker, and that's the big thing that Hirotec is seeing. It’s one thing to talk about the financial cost of a project, or I can say, “Okay, here is the financial impact,” but what we are seeing is that we’re moving quicker.

And so, we're having long-term financial benefits because we’re able to react to things much faster. In this case, we’re able to reduce months of investigation down to a week. That means that when I implement my solution quicker, I'm now bringing that impact to the organization even faster, which has long-term benefits. We are already seeing those benefits today.

Gardner: You’ll obviously be able to improve quality, you’ll be able to reduce the time to improving that quality, gain predictive analytics in your operations, but also it sounds like you are going to gain metadata insights that you can take back into design for the next iteration of not only the design for the parts but the design for the tooling as well and even the operations around that. So that intelligence at the edge can be something that is a full lifecycle process, it goes right back to the very initiation of both the design and the tooling.

Data-driven design, decisions 

Hester: Absolutely, and so, these solutions, they can't live in a silo. We're really starting to look at these ideas of what some people call the Digital Thread, the Digital Twin. We’re starting to understand what does that mean as you loop this data back to our engineering teams -- what kind of benefits can we see, how can we improve our processes, how can we drive out into the organization?

And one of the biggest things with IoT-based solutions is that they can't stay inside this box, where we talked about OT to IT, we are talking about manufacturing, engineering, these IoT solutions at their best, all they really do is bring these groups together and bring a whole organization together with more contextualized data to make better decisions faster.

And so, exactly to your point, as we are looping back, we’re able to start understanding the benefit we’re going to be seeing from bringing these teams together.

Gardner: One last point before we close out. It seems to me as well that at a macro level, this type of data insight and efficiency can be brought into the entire supply chain. As you're providing certain elements of an automobile, other suppliers are providing what they specialize in, too, and having that quality control and integration and reduced time-to-value or mean-time-to-resolution of the production issues, and so forth, can be applied at a macro level.

So how does the automotive supplier itself look at this when it can take into consideration all of its suppliers like Hirotec are doing?

Start small 

Hester: It's a very early phase, so a lot of the suppliers are starting to understand what this means for them. There is definitely a macro benefit that the industry is going to see in five to 10 years. Suppliers now need to start small. One of my favorite pictures is a picture of the ocean and a guy holding a lighter. It [boiling the ocean] is not going to happen. So we see these huge macro benefits of where we’re going, but we have to start out somewhere.

A lot of suppliers, what we’re recommending to them, is to do the same thing we did, just start small with a couple of machines, start getting that data visualized, start pulling that data into the organization. Once you do that, you start benefiting from the data, and then start finding new use-cases.

As these suppliers all start doing their own small projects and working together, I think that's when we are going to start to see the macro benefits but in about five to 10 years out in the industry.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       Veikkaus digitally transforms as it emerges as new combined Finnish national gaming company

 ·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

DreamWorks Animation crafts its next era of dynamic IT infrastructure

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability.

Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, the intelligence of the infrastructure are coming. In order for any business to stay ahead of the curve -- to really drive value into the business – it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it’s because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor, basically building digital assets that we put through simulations and rendering so that in the end it all comes together to produce a movie.

That's all about manufacturing, and we actually have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, we’re adding a hair to a character, we’re adding clothes, we’re adding an environment, and we’re putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.

Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want to have an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters, but you don't have to change the assembly line, you have to adjust and you have to utilize it properly.

So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.

Greater creativity, increased efficiency

Wike: That's true. If you think about this animation process or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we’re very involved in projects related to computer science at the studio.

We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”

And one of the things we’ve done over the past couple of years is start to re-architect all of our software tools in order to take advantage of massive multi-core processing to try to give artists interactivity into their creative process. It’s about iterations. How many things can I show a director, how quickly can I create the scene to get it approved so that I can hand it off to the next person, because there's two things that you get out of that.

One, you can explore more and you can add more creativity. Two, you can drive efficiency, because it's all about how much time, how many people are working on a particular project and how long does it take, all of which drives up the costs. So you now have these choices where you can add more creativity or -- because of the compute infrastructure -- you can drive efficiency into the operation.

So where does the infrastructure fit into that, because we talk about tools and the ability to make those tools quicker, faster, more real-time? We conducted a project where we tried to create a middleware layer between running applications and the hardware, so that we can start to do data abstraction. We can get more mobile as to where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren’t really able to do anything down at the core.

Core flexibility, fast

Now that we have done that, we are attacking the core. When we look at our ability to replace that with new compute, and add the new templates with all the security in it -- we want that in our infrastructure. We want to be able to change how we are using that infrastructure -- examine usage patterns, the workflows -- and be able to optimize.

Before, if we wanted to do a new project, we’d say, “Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x,” and that makes a lot of sense. So we would build to peak. If at some point in the last six months of a show, we are going to need 30,000 cores to be able to finish it in six months, we say, “Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores.” So we were buying to peak, and that’s wasteful.

What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity -- the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn't have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.

By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure -- parts of it or all of it -- and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure -- and do it quickly.
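
A back-of-the-envelope comparison shows why this matters. Using the 30,000-core peak and 12,000-core trough figures mentioned above (the six-month split is an assumption for the sketch), a fleet sized to peak sits well below full utilization, and the idle capacity is what a composable infrastructure can hand to other projects.

```python
# Back-of-the-envelope illustration of "buying to peak" versus composing idle
# capacity, using the 30,000-core peak and 12,000-core trough figures mentioned
# above (the six-month split and other numbers are assumptions for the sketch).

peak_cores = 30_000
trough_cores = 12_000
months_at_peak = 6
months_at_trough = 6

# Fixed fleet sized to peak: utilization over the year.
core_months_owned = peak_cores * (months_at_peak + months_at_trough)
core_months_used = peak_cores * months_at_peak + trough_cores * months_at_trough
print(f"Fixed fleet utilization: {core_months_used / core_months_owned:.0%}")

# Composable fleet: the idle cores in the trough months can be recomposed
# for other projects instead of sitting unused.
idle_core_months = (peak_cores - trough_cores) * months_at_trough
print(f"Core-months reclaimable by recomposing: {idle_core_months:,}")
```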

Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.

Does having that agility not only give you the ability to make more and better movies with higher utilization, but also gives perhaps more wings to your leaders to go and find the right business models for the future?

Wike: That’s absolutely true. We certainly don't want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.

In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn't do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.

The world is great for us today, because people want content -- they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.

Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.

Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn't work on the factory floor, then what's the point of the design? Are we doing that with digital manufacturing now?

Artifact analytics improve usage, rendering

Wike: It’s interesting that you mention that. We always look at budgets -- money budgets, rendering budgets, storage budgets, networking budgets -- and all of those are commodities that are required to create a project.

Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it's too much.”

One of the initiatives that we have incorporated in recent years is in the area of data analytics. We re-architected our software and we decided we would re-instrument everything. So we started collecting artifacts about rendering and usage. Every night we ran every digital asset that had been created through our rendering, and we also collected analytics about it. We now collect 1.2 billion artifacts a night.

And we correlate that information to a specific asset, such as a character, basket, or chair -- whatever it is that I am rendering -- as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.

One of the things that’s wasteful of digital resources is either having a job that doesn't fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours -- it could take 10 hours to run.

At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don't really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?

Just the facts

Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it's going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.

So now we don't have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines requirements. Even though a certain shot has never been run before, it is very similar to another previous shot, and so we can predict what it is going to need to run.

Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.

I recently listened to a gentleman talk about what a difference of 1 percent improvement would be. So 15 percent is huge, that's 15 percent less money you have to spend. It's 15 percent faster time for a director to be able to see something. It's 15 percent more iterations. So that was really huge for us.
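
As an illustrative sketch of that prediction step -- not DreamWorks' production system; the features and numbers are assumptions -- a new shot's memory and runtime can be estimated from its most similar historical shots, and a job can be declared stuck once it far exceeds that estimate.

```python
# Illustrative sketch (not DreamWorks' production system): predict a new shot's
# render needs from its most similar historical shots, then use that prediction
# to decide when a running job is stuck. Features and numbers are assumptions.
import math

# Historical shots: (feature vector, peak memory in GB, render hours).
# Features might encode asset count, geometry complexity, lighting passes, etc.
HISTORY = [
    ((120, 3.5, 8),  64, 4.0),
    ((200, 5.0, 12), 96, 7.5),
    ((80,  2.0, 6),  32, 2.5),
    ((150, 4.0, 10), 72, 5.0),
]

def predict(features, k=2):
    """Average the k nearest historical shots to estimate memory and runtime."""
    ranked = sorted(HISTORY, key=lambda h: math.dist(features, h[0]))[:k]
    mem = sum(h[1] for h in ranked) / k
    hours = sum(h[2] for h in ranked) / k
    return mem, hours

def is_stuck(elapsed_hours, predicted_hours, margin=2.0):
    """Kill with confidence once a job far exceeds its predicted runtime."""
    return elapsed_hours > margin * predicted_hours

mem_gb, hours = predict((140, 3.8, 9))
print(f"Predicted: {mem_gb:.0f} GB, {hours:.1f} h")
print("Stuck?", is_stuck(elapsed_hours=12.0, predicted_hours=hours))
```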

Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.

Creative intelligence at the edge

Wike: It's interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data -- that intelligence -- to an artist, then they are actually really, really good at managing their own work.

It's only a problem when they don't have any idea that six months from now it's going to cause a huge increase in memory usage or render time. When they don't know that, it's hard for them to be able to self-manage. But now we have artists who can access Tableau reports everyday and see exactly what the memory usage was or the compute usage of any of the assets they’ve created, and they can correct it immediately.

Megamind, a film DreamWorks Animation released several years ago, was made prior to having the data analytics in place, and the studio encountered massive rendering spikes on certain shots. We really didn't understand why.

After the movie was complete, when we could go back and analyze printouts of the logs, we determined that these peaks in rendering resources were caused by the main character’s watch. Whenever the watch was in a frame, the render times went up. We looked at the models and found that well-intended artists had modeled every gear inside the watch, making it a huge, heavy asset to render.

It was too late to do anything about it then. But now, if an artist were to create that watch today, they would quickly find out that they had really over-modeled it. We would then go in and reduce that asset down, because it's really not a key element of the story. And they can do that today, which is really great.

Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll -- hundreds and hundreds of people at work putting this together.

So you are dealing with not just one artist making a decision, you have an army of people. It's astounding that you can bring this level of data-driven efficiency to it.

Movie-making’s mobile workforce

Wike: It becomes so much more important, too, as we become a more mobile workforce. 

Now it becomes imperative to obtain information about what those artists are doing so that they can collaborate. We know what value we are really getting from that, and so much information is available now. If you capture it, you can understand so much more about the creative process and use that to drive efficiency and value into the entire business.

Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it's now all these great screens in people's hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?

Wike: There's an explosion in content and explosion in delivery platforms. We are exploring all kinds of different mediums. I mean, there’s really no limit to where and how one can create great imagery. The ability to do that, the ability to not say “No” to any project that comes along is going to be a great asset.

We always say that we don't know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.

Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.

Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software without having to have people go in and rewire or re-stack, but instead change on-demand. And with machine learning, we’re able to learn what those demands are.

I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How a Florida school district tames the wild west of education security at scale and on budget

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West.

For the Clay County School District in Florida, a team of IT administrators is grabbing the bull by the horns nonetheless to create a new culture of computing safety -- without breaking the bank.

The next BriefingsDirect security insight’s discussion examines how Clay County is building a secure posture for their edge, network, and data centers while allowing the right mix and access for exploration necessary in an educational environment. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn how to ensure that schools are technically advanced and secure at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District, and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It’s the culture of change -- of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don’t think it’s going to end anytime soon. But we are starting to get our key players on board with understanding that you can't clear-text Social Security numbers and credit card numbers and personally identifiable information (PII). It has been an interesting ride for us, let’s put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to help give reinforcement for people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been?

Skipper: Education is one of those weird areas where software development has always been lacking on the security side of the house. Security has never even been inside the room. So one of the things that we have tried to do in education, at least within the Clay County School District, is to modify that view through change management. We are trying to introduce a security focus. We try to interject ourselves and highlight areas that might be a bad practice.

One of our vendors uses plain text for passwords, and so we went through with them and showed them how that’s a bad practice, and we made a little bit of improvement with that.

I evaluate our policies and how we manage the domains, finding practices that came from a long time ago and are no longer needed. For example, where Social Security numbers had been put into a document that was no longer needed, we can pull that information out. We have been trying really hard to find that kind of exposure and then knock it down as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people's perceptions, behaviors, culture, it’s useful to have both the carrot and a stick approach.

So to you Rich, what's been working in terms of a carrot? How do you incentivize people? What works in practice there?

Perkins: That's a tough one. We don't really have a carrot that we use. We basically say, “If you are doing the wrong things, you are not going to be able to use our network.”  So we focus more on negatives.

The positives would be you get to do your job. You get to use the Internet. We don't really give them something more. We see security as directly intertwined with our customer service. Every person we have is our customer and our job is to protect them -- and sometimes that's from themselves.

So we don't really have a carrot-type of system. We don't allow students to play games if they have no problems. We give everybody the same access and treat everybody the same. Either you are a student and you get this level of access, or you are a staff member, you get this level of access, or you don't get access.

Gardner: Let’s get background on the Clay County School District. Tell us how many students you have, how many staff administrators, the size and scope of your school district?

Bunkley: Our school district is the 22nd largest in Florida; we are right on the edge of small and medium for Florida, which in most other places would still be a very large school district. We run about 38,500 students.

And as far as our IT team, which is our student information system, our Enterprise Resource Planning (ERP) system, security, down to desktop support, network infrastructure support, our web services, we have about 48 people total in our department.

Our scope is literally everything. For some reason IT means that if it plugs into a wall, we are responsible for it. That's generally a true statement in education across the board, where the IT staff tends to be a Jack-of-all-trades, and we fix everything.

Practical IT

Gardner: Where are you headed in terms of technology? Is there a one-to-one student-to-device ratio in the works? What sort of technology do you enable for them?

Bunkley: I am extremely passionate about this, because the one-to-one scenario seems to be the buzzword, and we generally despise buzzwords in this office and we prefer a more practical approach.

The idea of one-to-one is itself to me flawed, because if I just throw a device in a student's hand, what am I actually doing besides throwing a device in a student's hand? We haven't trained them. We haven’t given them the proper platform. All we have done is thrown technology.

And when I hear the claim that kids inherently know how to use technology today, it bothers me, because kids inherently know how to use social media, not technology. They are not production-driven, they are socially driven, and that is a sticking point with me.

We are in fact moving to one-to-one, but in a nontraditional sense. We have established a one-to-one platform first, so we can introduce a unified portal for all students and employees; we happen to use ClassLink, though there are various other vendors out there -- that's just the one we happen to use.

We have integrated that in moving to Google Apps for Education and we have a very close relationship with Google. It’s pretty awesome, to be quite honest with you.

So we are moving in the direction of Chromebooks, because it’s just a fiscally more responsible move for us.

I know Microsoft is coming out with Windows 10 S, and it's a strong move on their part. But because we have the expertise in Google Apps for Education, or G Suite, it just made a lot of sense for us to go that direction.

So we are moving in one-to-one now with the devices, but the device is literally the least important -- and the last -- step in our project.

Non-stop security, no shenanigans

Gardner: Tell us about the requirements now for securing the current level of devices, and then for the new one. It seems like you are going to have to keep the airplane flying while changing the wings, right? So what is the security approach that works for you that allows for that?

Skipper: Clay County School District has always followed trends as far as devices go. So we actually have a good mixture of devices in our network, which means that no one solution is ever the right solution.

So, for example, we still have some iPads out in our networks, we still have some older Apple products, and then we have a mixture of Chromebooks and also Windows devices. We really need to make sure that we are running the right security platform for the full environment.

We are transitioning more and more to a take-home philosophy -- that's where we as an IT department see this going -- so that if the decision is made to send the entire student population home with devices, we are going to be ready to go.

We have coordinated with our content filter company, and they have extensions we can deploy that lock the Chromebooks into a filtered state regardless of which network they are on. That's been really successful in identifying, and blocking, students from those late-night searches. We have also been able to identify some shenanigans that might be taking place, thanks to some interesting web searches they might do over YouTube, for example. That's worked really well.

Our next objective is to figure out how to secure our Windows devices and possibly even the Mac devices. While our content filter does a good job as far as securing the content on the Internet, it’s a little bit more difficult to deploy into a Windows device, because users have the option of downloading different Internet browsers. So, content filtering doesn’t really work as well on those.

I have deployed Bitdefender to my laptops, and also to take-home Apple products. That allows me to put in more content filtering, and use that to block people from malicious websites that maybe the content filter didn’t see or was unable to see due to a different browser being used.

In those aspects we definitely are securing our network down further than it ever has been before.

Block and Lock

Perkins: With Bitdefender, one of the things we like is that if we have those devices go off network, we can actually have it turn on the Bitdefender Firewall that allows us to further lock down those machines or protect them if they are in an open environment, like at a hotel or whatever, from possible malicious activity.

And it allows us to block executables. So we can go in and say, "No, I don't want you to be able to run this browser, because I can't do anything to protect you, I can't watch what you do, and I can't keep you from doing things you shouldn't do." Those are all very useful tools in a single pane of glass, where we can see all of those devices at one time and monitor and manage them. It saves us a lot of time.

Bunkley: I would follow up on that with a base concept, Dana, and our base concept is that of an external network. We think of ourselves as an everywhere network. We are not only aiming to defend our internal network while you are here, with maybe a little coverage while you are at home; we are literally an externally built network, one that extends directly down into the student's and teacher's home.

We have gone as far as moving everything we physically can out of this network, right down to our firewall. We are moving our domain controllers external to the network to create, literally, an everywhere network. So our security focus is not just internal; it is external first, then internal.

Gardner: With security products, what have you been using, what wasn't working, and where do you expect to go next given those constraints?

No free lunch

Perkins: Well, we can tell you that “free” is not always the best option; as a matter of fact, it’s almost never a good option, but we have had to deal with it.

We were previously using an antivirus called Avast, and it’s a great home product. We found out that it has not been the best business-level product. It’s very much marketed to education, and there are some really good things about it. Transferring away from it hasn’t been the easiest because it’s next to impossible to uninstall. So we have been having some problems with that.

We have also tested some other security measures and programs along the way that haven't been so successful. And we are always in the process of evaluating where we are. We are never okay with the status quo. Even if we achieve where we want to be, I don't think any of us will be satisfied, and that's actually something a lot of this is built on -- we always want to go that step further. I know that's cliché, but for an institution of this size, the reason we are able to do some of this is that the staff assembled here is second to none for an educational institution.

So even in the processes that we have identified, which were helter-skelter before we got here, we have some more issues to continue working out, but we won’t be satisfied with where we are even if we achieve the task.

Skipper: One of the things our office actually hates is just checking the box on a security audit. I mean, we are very vocal with the auditors when they come in. We don't do things just to satisfy their audit. We look at the audit and at the intent of the question, and if we find merit in it, we are going to meet that expectation and then make it better. Audits are general; we want to exceed them and end up with a better-functioning process, not just say, "Yes, I have purchased an antivirus product," or "I have purchased x." To us that's unacceptable.

Bunkley: Audits are a good thing, and nobody likes to do them because they are time-consuming. But you do them because they are required by law, at least for our institution. So instead of treating it as a generic audit that we ignore, we have adopted the audit as a very useful self-reflection tool. It's nice to not have the same set of eyes on your work all the time. And instead of taking offense when someone comes in and says, "You are not doing this well enough," we have literally changed our internal culture here: audits are not a bad thing; audits are a desired thing.

Gardner: Let’s go around the table and hear how you began your journey into IT and security, and how the transition to an educational environment went.

IT’s the curriculum

Bunkley: I started in the banking industry. Those hours were crazy and the pressure was pretty high. So as soon as I left that after a year, I entered education, and honestly, I entered education because I thought the schedule was really easy and I kind of copped out on that. Come to find out, I am working almost as many hours, but that’s because I have come to love it.

This is my 17th year in education, so I have been in a few districts now. Wholesale change is what I have been hired to do before, and that's also what I was hired to do here in Clay. We want to change the culture and make IT part of instruction instead of a separate segment of education.

We have to be interwoven into everything; otherwise we are going to be on an island, and the last time I heard, the definition of education is to educate children. IT can never by itself be a high-functioning department in education. So we have decided instead to go to instruction, to professional development, and to administration, and interject ourselves.

Gardner: Jon, tell us about your background and how the transition has been for you.

Skipper: I was active-duty Air Force until 2014, when I retired after 20 years. Then I came into education on the side. I didn't really expect this job and wasn't mentally searching for it. I tried it out, and that was three years ago.

It’s been an interesting environment. Education, and especially a small IT department like this one, is one of those interesting places where you can come and really expand on your weak areas. So that’s what I actually like about this. If I need to practice on my group policy knowledge, I can dive in there and I can affect that change. Overall this has been an effective change, totally different from the military, a lot looser as far as a lot of things go, but really interesting.

Gardner: Rich, same question to you: your background, and how did the transition go?

Perkins: I spent 21 years in the military; I was Navy. When I retired in 2010, I went to work for a smaller district in education, mainly because they were the first to offer me a job. Unlike here, where we have eight people doing operations and a big department, in that smaller district it was pretty much me doing every aspect of it -- Jeremy understands that from where he came from. You do a little security, you do a little bit of everything, which I enjoyed, because you are your own boss -- but you are not your own boss.

You still have people presiding over you and dictating how you are going to work, but I really enjoyed the challenge. Coming from IT security in the military and then into education, it's almost a role reversal: we came in and found next to no policies.

I am used to a black-and-white world. So we are trying to interject some of that and some of the security best practices into education. You have to be flexible because education is not the military, so you can’t be that stringent. So that’s a challenge.

Gardner: What are you using to put policies in place and enforce them? How does that work?

Policy plans

Perkins: From a [Microsoft] Active Directory side, we use group policy like most people do, and we try to automate it as much as we can. On the student side, we are switching over very heavily to Google, which effectively has its own version of Active Directory with group policy. I will let Jon speak more to the security side, though we have used various programs, such as PDQ for our patch management, which allows us to push out updates, and some logging systems from ManageEngine. And then, as we said before, we use Bitdefender to push out a lot of policy and security as well, and we have been reevaluating some other tools.

We also use SolarWinds to monitor our network and we actually manage changes to our network and switching using SolarWinds, but on the actual security side, I will let Jon get more specific for you.

Skipper: When we came in … there was a fear that having too much in policy equated to too much auditing overhead. One of the first things we did was identify what we could lock down, and the easiest one was the content filter.

The content filter met stipulations such as making sure adult material is not accessible on the network. We had that down. But it didn't really take into account the dynamics of the Internet, where sites pop up every minute or second, and the question of how you maintain coverage for unclassified and uncategorized sites.

So one of the things we did was look at a vendor and ask, okay, does this vendor have a better product for that aspect of it? We got that working, and I think it's been working a lot better. Then we moved on: okay, cool, now we have content filtering down, let's move on to the rest of the network. A lot of that is really about finding someone else who is already doing it well, borrowing their work, and making it our own.

We looked at some of the bigger school districts to see how they are doing it -- Chicago and Los Angeles, I think. We both looked at their policies where we could find them. I also found a lot from higher education; the universities' policies are much more along the lines of where we want to be. I think they have it better than some of the K-12 districts do.

So we have been going through those, and we are going to have to rewrite policy -- we are in an active rewrite of our policies right now. We are taking all of those in, looking at them, figuring out which ones work in our environment, and then making sure we do a really good search and replace.

Gardner: We have talked about people, process and technology. We have heard that you are on a security journey and that it’s long-term and culturally oriented.

Let's look at what you get when you do it right, particularly vis-à-vis education. Do you have any examples of where you have been able to put in the right technology, add some policy and process improvements, and then culturally attune the people? What does that get for you? How do you turn a problem student into a computer scientist at some point? Tell us some examples of when it works and what it gets you.

Positive results

Skipper: When we first got in here, we were a Microsoft district. We had some policies in place to help prevent data loss, and stuff like that.

One of the first things we did was review those policies and activate them, and we started getting some hits. We were surprised at some of the hits we saw, and at what we saw going out. We already knew we were moving to the Google environment, so we continued that process.

We researched a lot, and one of the things we discovered is that with just a minor tweak in a user's procedures, we could introduce that user to email encryption and get them used to using it. With the Gmail solution, we are able to add an extension that looks at email as it goes out, finds keywords -- or what may be PII -- and automatically encrypts the email, preventing those kinds of breaches from going out there. So that's really been helpful.
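To make that idea concrete, here is a minimal sketch of the kind of outbound-mail rule such an extension might apply. It is not the district's actual Gmail extension; the patterns, keywords, and function names are hypothetical placeholders.

```python
# Illustrative sketch only: NOT the actual Gmail extension described above.
# It just shows the kind of outbound-mail rule such a tool might apply.
import re

# Hypothetical patterns: US Social Security numbers, credit-card-like digit runs,
# and a few sensitive keywords. A real DLP product would use far richer rules.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
KEYWORDS = {"ssn", "social security", "date of birth", "student id"}

def should_encrypt(subject: str, body: str) -> bool:
    """Return True if the outgoing message appears to contain PII."""
    text = f"{subject}\n{body}".lower()
    if SSN_RE.search(text) or CARD_RE.search(text):
        return True
    return any(keyword in text for keyword in KEYWORDS)

if __name__ == "__main__":
    msg_subject = "Re: enrollment paperwork"
    msg_body = "Her SSN is 123-45-6789, please update the record."
    if should_encrypt(msg_subject, msg_body):
        print("PII detected: message would be routed through encryption.")
    else:
        print("No PII detected: message sent normally.")
```

In a real deployment the detection rules would come from the filtering vendor, and the "encrypt" step would hand the message to an encrypted-mail gateway rather than print a line.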

As far as taking a student who may be on the wrong path and reeducating them and bringing them back into the fold, Bitdefender has actually helped out on that one.

We had a student a while back who went out to YouTube and did a simple search on how to crash the school network, and he found about five links. He researched those links and found that a batch file of a certain type would crash a school server.

He implemented it and started trying to launch that attack, and Bitdefender was able to see the batch file, see what it did, and prevent it. Because the file was quarantined, it was reported to me very quickly, from the moment he introduced the attack, and it identified the student. We were able to sit down with the administrators and the student, talk through what happened, and educate him on the dangers of attacking a school network and the possible repercussions.

Gardner: It certainly helps when you can let them know that you are able to track and identify those issues, and then trace them back to an individual. Any other anecdotes about where the technology process and people have come together for a positive result?

Applied IT knowledge for the next generation

Skipper: One of the things that’s really worked well for the school district is what we call Network Academy. It’s taught by one of our local retired master chiefs, and he is actually going in there and teaching students at the high school level how to go as far as earning a Cisco Certified Network Associate (CCNA)-level IT certificate.

If a student comes in and they try hard enough, they will actually figure it out and they can leave when they graduate with a CCNA, which is pretty awesome. A high school student can walk away with a pretty major industry certification.

We like to try and grab these kids as soon as they leave high school, or even before they leave high school, and start introducing them to our network. They may have a different viewpoint on how to do something that’s revolutionary to us.

But we like having that aspect of it: we can educate those kids who are coming in and getting their industry certifications, and we are able to utilize them before they move on to college or another job that pays more than we do.

Bunkley: Charlie Thompson leads this program that Jon is speaking of, and actually over half of our team has been through the program. We didn’t create it, we have just taken advantage of the opportunity. We even tailor the classes to some of the specific things that we need. We have effectively created our own IT hiring pipeline out of this program.

Gardner: Next let’s take a look to the future. Where do you see things going, such as more use of cloud services, interest in unified consoles and controls from the cloud as APIs come into play more for your overall IT management? Encryption? Where do you take it from here?

Holistic solutions in the cloud

Bunkley: Those are some of the areas we are focusing on heavily as we build out that "anywhere network." The unified platform for management is going to be a big deal to us; it is a big deal to us already. Encryption is something we take very seriously, because we have a team of eight protecting the data of about 42,000 users.

Consider the perfect cyber crime: reaching down into a 7th or 8th grader's records, stealing all of their personal information, and using that kid's identity. That kid won't even know that their identity has been stolen.

We consider that a very serious charge of ours to take on. So we will continue to improve our protection of the students’ and teachers’ PII -- even if it sometimes means protecting them from themselves. We take it very seriously.

As we move to the cloud, that unified management platform leads to a more unified security platform. As the operating systems continue to mature, they seem to be going different ways, and what's good for Mac is not always good for Chrome, and not always good for Windows. But as we move forward with our projects, we bring everything back to that central point: can the three be operated from a single point of connection, so that we can save money moving forward? Just because it's a cool technology and we want to do it doesn't mean it's the right thing for us.

Sometimes we have to choose an option that we don’t necessarily like as much, but pick it because it is better for the whole. As we continue to move forward, everything will be focused on that centralization. We can remain a small and flexible department to continue making sure that we are able to provide the services needed internally as well as protect our users.

Skipper: I think Jeremy hit it pretty solidly. As we integrate more with cloud services -- Google, etc. -- we are utilizing those APIs, and we are leading the vendors we use into new areas. Lightspeed, for instance, is integrating more and more with Google and utilizing their API to ensure content filtering, even to the point of mobile device management (MDM) that is more integrated into the Google and Apple platforms, to make sure students are well protected and we have all the tools we need available at any given time.

We are really leaning heavily on more cloud services, and also the interoperability between APIs and vendors.

Perkins: Public education is changing more to the realm of college education where the classroom is not a classroom -- a classroom is anywhere in the world. We are tasked with supporting them and protecting them no matter where they are located. We have to take care of our customers either way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

You may also be interested in:

How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge -- and closer to consumers -- to support live, multi-screen Internet Protocol (IP) entertainment delivery. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected -- with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.


And guess what? Now they are watching it on the go -- on their telephones, on their iPads, on their laptops, anywhere. So they have to find a way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it. So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change -- when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now -- with the new technology on the back-end -- to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That's another big upside of the fragmentation, and of the advent of IP technology -- all the way from content creation to making a program and distributing it. It gives the content creators access to unique viewers, the ability to really engage with them -- knowing what they like -- and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the unknown willingness of several generations of viewers to accept good advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to -- at the very edge of the signal distribution, the video signal distribution -- insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
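As an illustration of that decision logic -- not Imagine Communications' product code, just a hedged sketch with a hypothetical viewer profile and ad inventory -- an edge-side ad selector can be reduced to a few lines:

```python
# Minimal sketch of per-viewer ad selection at the edge. The classes, fields,
# and CPM figures are invented for illustration; a real DAI system works from
# standardized cue points and a full ad-decision service.
from dataclasses import dataclass, field

@dataclass
class Ad:
    ad_id: str
    target_interests: set
    cpm: float  # price the advertiser pays per thousand impressions

@dataclass
class ViewerProfile:
    viewer_id: str
    interests: set = field(default_factory=set)

def pick_ad(viewer: ViewerProfile, inventory: list) -> Ad:
    """Choose the highest-value ad whose targeting overlaps the viewer's
    interests; fall back to the cheapest untargeted ad if nothing matches."""
    matches = [ad for ad in inventory if ad.target_interests & viewer.interests]
    if matches:
        return max(matches, key=lambda ad: ad.cpm)
    return min(inventory, key=lambda ad: ad.cpm)

if __name__ == "__main__":
    inventory = [
        Ad("sports-car", {"sports", "autos"}, cpm=40.0),
        Ad("streaming-service", {"movies"}, cpm=25.0),
        Ad("generic-soda", set(), cpm=10.0),
    ]
    viewer = ViewerProfile("viewer-123", {"sports", "news"})
    print(pick_ad(viewer, inventory).ad_id)  # sports-car
```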

But that means media and entertainment customers have to revisit the whole infrastructure. It doesn't necessarily mean rebuilding; they can put in add-ons. They don't have to throw away what they had; they can maintain the legacy infrastructure and add the IP-enabled infrastructure on top of it to take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don’t know yet if I want to say that model doesn’t work anymore. Let’s let the CDN providers enhance their technology. But for sure, the volume of videos that we are consuming everyday is exponentially growing. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way so that it doesn’t put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the types of workflows we implement at our customers' sites today: to make them efficient, to put storage at the edge rather than centrally, and to do transcoding just-in-time. These are the things being worked on. It's a balance between available capacity, the number of programs you want to send across to your viewers, and how big your target market is.

The task for us on the back-end is to rethink the workflows in a much more efficient way. So, for example, this is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes the traditional way and then adding another infrastructure for multi-screen, on all those different platforms and then cable, and satellite, and IPTV, etc. -- why not design the whole workflow digital-first. This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge. And it’s there that the end-user requirements determine how they get the signal.
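A minimal sketch of that "decide at the edge" idea follows. The platform names, containers, and bitrate ceilings are hypothetical placeholders, not a real Imagine or HPE API; the point is only that the rendition is chosen where the end-user requirements are known.

```python
# Hedged sketch of the digital-first, decide-at-the-edge idea described above.
LADDERS = {
    # delivery platform -> (container, max resolution, max bitrate in kbps)
    "web": ("HLS", "1080p", 6000),
    "mobile": ("HLS", "720p", 3000),
    "settopbox": ("MPEG-TS", "1080p", 8000),
}

def edge_rendition(platform: str, measured_bandwidth_kbps: int) -> dict:
    """Pick container, resolution, and bitrate at the edge, just-in-time,
    instead of pre-producing one fixed output per distribution channel."""
    container, resolution, ceiling = LADDERS.get(platform, LADDERS["web"])
    bitrate = min(ceiling, measured_bandwidth_kbps)
    return {"container": container, "resolution": resolution, "bitrate_kbps": bitrate}

if __name__ == "__main__":
    print(edge_rendition("mobile", measured_bandwidth_kbps=2200))
    # {'container': 'HLS', 'resolution': '720p', 'bitrate_kbps': 2200}
```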

This is where we are going -- to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.


Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP -- so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It's not only the "how"; they also need examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology -- which is another way of saying IP-delivered -- and on what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there's a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interfaces (SDIs) in the broadcast industry to IP. It’s basically saying analog video signals can be moved into digital. Then not only is there a digitally sharp signal, it’s an IP stream. The whole knowledge about how to handle IP is new to our own industry, to our own engineers, to our own customers. We also have to educate on what it takes to do this properly.

One of the key things in the media and entertainment industry is that there’s a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry – real-time sports and news -- this is where the money comes from. That’s why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP – it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion and billing, understanding more about the end-user, and enabling that data to flow from the edge back to the core, and then back out to the edge.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera -- from content creation all the way to making a program and distributing it. And also, to your point, on monetizing all that with a management system. We have a long history of powering all the key customers in the world for their advertising system. It’s basically an automated system that allows the selling of advertising spots, and then to bill them -- and this is the engine of where our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion. In other words, how you target ads to the single viewer. And the challenge for them is now that they have a campaign, how do they design it to cater both to the linear traditional advertising system as well as the multi-screen or web mobile application? That's what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.

Gardner: The technology is there, you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners and what are the some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.


We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That's very important because having high-performance compute (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it's 99.9999 percent reliable, and that’s what we want, right?
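For a sense of what that availability figure implies, here is a quick arithmetic aside (mine, not from the interview) converting "nines" of availability into allowed downtime per year; the quoted 99.9999 percent works out to roughly half a minute a year.

```python
# Pure arithmetic, not tied to any HPE or Imagine specification.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(availability: float) -> float:
    """Seconds of permitted downtime per year for a given availability."""
    return (1.0 - availability) * SECONDS_PER_YEAR

for nines in (0.999, 0.9999, 0.99999, 0.999999):
    print(f"{nines:.6f} -> {downtime_per_year(nines):10.1f} s/year")
# 0.999999 ("99.9999 percent" above) allows roughly 32 seconds a year.
```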

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That's also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it's an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It's a big deal because we are going to be in a hybrid world. When most of our customers hear about cloud, we have to explain it to them. We explain that they can have a private cloud where they run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. Big names such as Disney and Turner, for example, have operations everywhere. For them, being able to optimize at the edge means creating an architecture that is geographically distributed, yet highly efficient where they have those operations. This type of technology helps us deliver more value to those key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they like to try and run their disaster recovery (DR) site in a more cost-effective manner. So the flexibility that an edge architecture provides to them is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don't have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it's distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy -- and DR is a key driver for this type of application.

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations -- from content creation, planning, and distribution -- to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV, all from their cloud. They needed strong partners in data centers, so obviously we work with them closely. The reason they do this is simply to take advantage of the flexibility. They don't want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. The Oscars, for example, is one night. Are you going to recreate the whole infrastructure when you can just switch it on and off, if you will, out of your data center capacity? So that's the key application: pop-up channels and the ability to easily try new programs.

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that's an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side -- these were two groups that didn't speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn't say yet that they are IT companies. The core strength is content, that is their brand, that's what they are good at -- creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can't lose concentration on their core business. I think the IT providers still have a very strong play there. It's always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.


Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that's dynamic, flexible, agile and cost-efficient. What does that get for them? Is it too soon, or do we have some metrics of success for companies that make this move toward a full digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time -- the total cost of ownership for the next 5 to 10 years, because that's typically the life cycle of those infrastructures -- then they definitely do save money. On the operational expenditure (OPEX) side [of private cloud economics] it's much more efficient, and they also have upside in additional revenue. So net-net, the return on investment (ROI) is much better. It's hard to quantify yet because we are still in the early days, but it's bound to be a much greater ROI.

Another specific DR example is in the Middle East. We have a customer there who decided to operate the DR and IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That's one of the intrinsic values of a hybrid IT approach -- you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. What comes next shows up in the number-one question I get asked.

Three years ago customers would tell us, "Hey, IP is not going to work for live TV." We convinced them otherwise, and now they know it's working; it's happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, as you put stuff on your cloud, it’s more secure.

And we are working on this with our technology providers. As I said earlier, the COTS providers are not all equal. We take it seriously. Cyber attacks on content and media are critical, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network -- email, VPNs, and so on -- from your broadcast operations network. Okay, that's easy to explain and can be implemented, and that's where most of the attacks over the last five years have happened. This is solved.


However, the cyber attackers are becoming more clever, so they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with it over there. So I think it's super important to be able to say, "Not only at the software level, but at the hardware and firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important."

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it's disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.


Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: