
How hybrid cloud deployments gain traction via Equinix datacenter adjacency coupled with the Cloud28+ ecosystem

Learn how Equinix, Microsoft Azure Stack, and HPE’s Cloud28+ help MSPs and businesses alike obtain world-class hybrid cloud implementations.

Ryder Cup provides extreme use case for managing the digital edge for 250K mobile golf fans

A discussion on how the 2018 Ryder Cup golf match between European and US players places unique technical and campus requirements on its operators.

HPE and Citrix team up to make hybrid cloud-enabled workspaces simpler to deploy

A discussion on how hyperconverged infrastructure and virtual desktop infrastructure are combining to make one of the more traditionally challenging workloads far easier to deploy, optimize, and operate.

Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture

A discussion on how Citrix and Hewlett Packard Enterprise are aligned to bring new capabilities to the coalescing architectures around data center core, hybrid cloud, and edge computing.

New strategies emerge to stem the costly downside of complex cloud choices

A discussion on what causes haphazard cloud use, and how new tools, processes, and methods are bringing actionable analysis to regain control over hybrid IT sprawl.

Huge waste in public cloud spend sets stage for next wave of total cloud governance solutions, says 451's Fellows

A discussion on how IT leaders face an increasingly complex mix of identifying and automating for both best performance and best price points across all of their cloud options.

How HudsonAlpha transforms hybrid cloud complexity into an IT force multiplier

The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments.

We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?

Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.

We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.

Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?

Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT is a great way to develop new approaches and attain breakthroughs.

Gardner: You want to give people enough leash where they can go and roam and experiment, but perhaps not so much that you don’t know where they are or what they are doing.

Software-defined everything 

Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.

Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize -- yet allow for experimentation, too?

Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.

Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.

Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.

Gardner: What about the IT requirements? How do you fulfill that mandate using technology?

Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers. 

Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?

Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.

Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?

Agnostic arrangements of clouds

Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.

There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind -- so you don’t lock into any one solution necessarily -- then you can optimize and put everything together.
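To make that provider-agnostic, API-driven approach concrete, here is a minimal sketch using the open-source Apache Libcloud library, which puts many compute providers behind one driver interface. It is an illustration only, not HudsonAlpha’s actual tooling; the credentials, region, and provider choice are placeholders.

# Minimal sketch of provider-agnostic provisioning with Apache Libcloud.
# Illustration only; credentials, region, and provider are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def smallest_sizes(provider, *args, **kwargs):
    """Connect to any supported cloud and return its three smallest instance sizes."""
    driver = get_driver(provider)(*args, **kwargs)
    return sorted(driver.list_sizes(), key=lambda size: size.ram)[:3]

# The same call shape works against AWS EC2, Google Compute Engine, Azure,
# or an on-premises OpenStack endpoint, which keeps the calling code agnostic.
for size in smallest_sizes(Provider.EC2, "ACCESS_KEY", "SECRET_KEY", region="us-east-1"):
    print(size.id, size.name, size.ram, "MB RAM")

Because every provider is reached through the same driver interface, swapping clouds becomes a configuration change rather than a rewrite, which is the point Mullican makes about avoiding lock-in.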

Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.

Cloud cost control, granted

Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud -- from storage to compute -- and that cost is covered by the grant.

So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.

Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?

Variety is the spice of IT

Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.

The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access -- through a common API -- that infrastructure. So it’s very exciting.
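The “common API” pattern Mullican describes can be sketched as a single REST call that a researcher-facing portal or pipeline would make, regardless of where the workload lands. The host name, endpoint path, payload fields, and token below are hypothetical placeholders for illustration; they are not the documented HPE OneSphere API.

# Hypothetical sketch of provisioning through one common REST endpoint.
# Host, path, payload fields, and token are placeholders, not the
# documented HPE OneSphere API.
import requests

BASE_URL = "https://manage.example.org/rest"   # placeholder management host
TOKEN = "REPLACE_WITH_SESSION_TOKEN"           # placeholder credential

def deploy(project, template, zone):
    """Request a deployment into a chosen zone (on-premises VMware, AWS, and so on)
    through the same interface, whatever infrastructure sits behind it."""
    response = requests.post(
        f"{BASE_URL}/deployments",              # hypothetical endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"project": project, "template": template, "zone": zone},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# A bioinformatics pipeline could call deploy() the same way whether the
# target is local composable infrastructure or a public cloud region.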

Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?

Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully -- and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.

Gardner: What might this capability bring to you and your organization?

Cross-training the clouds

Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds -- AWS, Google, and Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.

For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.

But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That is not a level of expertise in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline -- and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.

Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.

Mullican: Right, and that will be an ongoing effort to always optimize cost -- but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.

Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that we have the flexibility to choose an algorithmic approach to business, if you will.

Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance that we be able to conduct that research at the fastest possible speed.

Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.

The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct the research at the fastest speed.

As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a "whole greater than the sum of the parts" effect?

Mullican: HPE Synergy and composable infrastructure is something that is very near and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer and customizing open-source applications on Image Streamer -- open-source operating systems and applications.

The ability to utilize that in the mix that I have architected natively with OneSphere -- in addition to the public clouds -- is very powerful, and I am excited to see where that goes.

Gardner: Any words of wisdom to others who have not yet gone down this road? What do you advise others to consider as they are seeking to better compose, automate, and optimize their infrastructure?

Get adept at DevOps

Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.

As far as putting an emphasis on automation -- and being able to provision infrastructure in the datacenter and the cloud through automated APIs -- a lot of companies probably are still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. Then, once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.

It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources -- that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.

That’s the partnership that companies really need to foster -- and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.

On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.

Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want: to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure; they are just choosing the proper workflow and pipeline for future projects.

We want to provide a simple interface to the researchers. They don't have to think about all the work that went into the infrastructure.

Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?

HCI advances across the globe

Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a separate department, but in some companies that’s not the case.

And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.

Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.

Is there something about having that visibility into hybrid IT that extends into hybrid data environments?

Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.

So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for example, in the cloud.
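One way to picture “using the appropriate data connection” is that the same object-storage client can be pointed either at AWS S3 or at an S3-compatible store in the local datacenter, so the data path becomes a configuration choice. The endpoint, credentials, bucket, and file names below are placeholders for illustration.

# Sketch: one S3-style client, two possible data paths.
# Endpoint, credentials, bucket, and file names are placeholders.
import boto3

def object_client(on_premises: bool):
    if on_premises:
        # Many on-premises object stores expose an S3-compatible endpoint.
        return boto3.client(
            "s3",
            endpoint_url="https://objects.lab.example.org",  # placeholder endpoint
            aws_access_key_id="LOCAL_KEY",
            aws_secret_access_key="LOCAL_SECRET",
        )
    # Default endpoint: AWS Simple Storage Service (S3).
    return boto3.client("s3")

client = object_client(on_premises=True)
client.upload_file("run42_variants.vcf.gz", "genomics-data", "run42/variants.vcf.gz")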

Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.

Hybrid hard work pays off

Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.

HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks

The next BriefingsDirect agile data center architecture interview explores how an Ericsson and Hewlett Packard Enterprise (HPE) partnership establishes a mobile telecommunications stack that accelerates data services adoption in rapidly advancing economies. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll now learn how this mobile business support infrastructure possesses a low-maintenance common core -- yet remains easily customizable for regional deployments just about anywhere. 

Here to help us define the unique challenges of enabling mobile telecommunications operators in countries such as Bangladesh and Uzbekistan, we are joined by Mario Agati, Program Director at Ericsson, based in Amsterdam, and Chris James-Killer, Sales Director for HPE. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the unique challenges that mobile telecommunications operators face when they go to countries like Bangladesh?

Agati: First of all, these are countries with a very low level of revenue per user (RPU). That means for them cost efficiency is a must. All of the solutions that are going to be implemented in those countries should be, as much as possible, focused on cost efficiency, reusability, and industrialization. That’s one of the main reasons for this program. We are addressing those types of needs -- of high-level industrialization and reusability across countries where cost-efficiency is king.

Gardner: In such markets, the technology needs to be as integrated as possible because some skill sets can be hard to come by. What are some of the stack requirements from the infrastructure side to make it less complex?

James-Killer: These can be very challenging countries, and it’s key to do the pre-work as systematically as you can. So, we work very closely with the architects at Ericsson to ensure that we have something that’s repeatable, that’s standardized and delivers a platform that can be rolled out readily in these locations. 

Even countries such as Algeria are very difficult to get goods into, and so we have to work with customs, we have to work with goods transfer people; we have to work on local currency issues. It’s a big deal.

Gardner: In a partnership like this between such major organizations as Ericsson and HPE, how do you fit together? Who does what in this partnership?

Agati: At Ericsson, we are the prime integrator responsible for running the overall digital transformation. This is for a global operator that is presently in multiple countries. It shows the complexity of such deals.

We are responsible for delivering a new, fully digital business support system (BSS). This is core for all of the telco services. It includes all of the business management solutions -- from the customer-facing front end, to billing, to charging, and the services provisioning.

In order to cope with this level of complexity, we at Ericsson rely on a number of partners that are helping us where we don’t have our own solutions. And, in this case, HPE is our selected partner for all of the infrastructure components. That’s how the partnership was born.

Gardner: From the HPE side, what are the challenges in bringing a data center environment to far-flung parts of the world? Is this something that you can do on a regional basis, with a single data center architecture, or do you have to be discrete to each market?

Your country, your data center

James-Killer: It is more bespoke than we would like. It’s not as easy as just sending one standard shipping container to each country. Each country has its own dynamic, its own specific users. 

The other item worth mentioning is that each country needs its own data center environment. We can’t share them across countries, even if the countries are right next to each other, because there are laws that dictate this separation in the telecommunications world. 

So there are unique attributes for each country. We work with Ericsson very closely to make sure that we remove as many itemized things as we can. Obviously, we have the technology platform standardized. And then we work out what’s additionally required in each country. Some countries require more of something and some countries require less. We make sure it’s all done ahead of time. Then it comes down to efficient and timely shipping, and working with local partners for installation.

Gardner: What is the actual architecture in terms of products? Is this heavily hyper-converged infrastructure (HCI)-oriented, and software-defined? What are the key ingredients that allow you to meet your requirements?

James-Killer: The next iterations of this will become a lot more advanced. It will leverage a composable infrastructure approach to standardize resources and ensure they are available to support required workloads. This will reduce overall cost, reduce complexity, and make the infrastructure more adaptable to the end customers’ business needs and how they change over time. Our HPE Synergy solution is a critical component of this infrastructure foundation. 

At the moment we have to rely on what’s been standardized as a platform for supporting this BSS portfolio.

This platform has been established for years and years. So it is not necessarily on the latest technology ... but it's a good, standardized, virtualized environment to run this all in a failsafe way.

We have worked with Ericsson for a long time on this. This platform has been established for years and years. So it is not necessarily on the latest technology; the latest is being tested right now. For example, the Ericsson Karlskrona BSS team in Sweden is currently testing HPE Synergy. But, as we speak, the current platform is HPE Gen9, so it’s ProLiant servers. HPE Aruba is involved; a lot of heavy-duty storage is involved as well.

But it’s a good, standardized, virtualized environment to run this all in a failsafe way. That’s really the most critical thing. Instead of being the most advanced, we just know that it will work. And Ericsson needs to know that it will work because this platform is critical to the end-users and how they operate within each country.

Gardner: These so-called IT frontier countries -- in such areas as Southeast Asia, Oceania, the Middle East, Eastern Europe, and the Indian subcontinent -- have a high stake in the success of mobile telecommunications. They want their economies to grow. Having a strong mobile communications and data communications infrastructure is essential to that. How do we ensure the agility and speed? How are you working together to make this happen fast?

Architect globally, customize locally

Agati: This comes back to the industrialization aspect. By being able to define a group-wide solution that is replicable in each of these countries, you are automatically providing a de facto solution in countries where it would be very difficult to develop locally. They obtain a complex, state-of-the-art core telco BSS solution. Thanks to this group initiative, we are able to define a strong set of capabilities and functions, an architecture that is common to all of the countries. 

That becomes a big accelerator because the solution comes pre-integrated, pre-defined, and is just ready to be customized for whatever remains to be done locally. There are always aspects of the regulations that need to be taken care of locally. But you can start from a predefined asset that is already covering some 80 percent of your needs.

In a relatively short time, in those countries, they obtain a state-of-the-art, brand-new, digital BSS solution that otherwise would have required a local and heavy transformation program -- with all of the complexity and disadvantages of that.

Gardner: And there’s a strong economic incentive to keep the total cost of IT for these BSS deployments at a low percentage of the carriers’ revenue.

Shared risk, shared reward

Agati: Yes. The whole idea of the digital transformation is to address different types of needs from the operator’s perspective. Cost efficiency is probably the biggest driver because it’s the one where the shareholders immediately recognize the value. There are other rationales for digital transformation, such as relating to the flexibility in the offering of new services and of embracing new business models related to improved customer experiences. 

On the topic of cost efficiency, we have created with a global operator an innovative revenue-share deal. From our side, we commit to providing them a solution that enables a certain level of operational cost reduction.

The current industry average cost of IT is 5 to 6 percent of total mobile carrier revenue. Now, thanks to the efficiency that we are creating from the industrialization and re-use across the entire operator’s group, we are committed to bringing the operational cost down to the level of around 2 percent. In exchange, we will receive a certain percentage of the operator’s revenue back. 
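To put rough numbers on that commitment, here is a small illustrative calculation. Only the 5-to-6 percent industry average and the roughly 2 percent target come from the discussion; the revenue figure and the revenue-share percentage are invented for the example.

# Illustrative arithmetic only. The 5.5 percent and 2 percent cost levels
# reflect the figures cited above; revenue and the revenue-share rate are
# invented placeholders.
annual_revenue = 1_000_000_000          # hypothetical carrier revenue, in dollars

it_cost_today = 0.055 * annual_revenue  # industry average: 5 to 6 percent of revenue
it_cost_target = 0.02 * annual_revenue  # committed target: about 2 percent
gross_saving = it_cost_today - it_cost_target

revenue_share = 0.01 * annual_revenue   # hypothetical 1 percent share paid back
net_saving = gross_saving - revenue_share

print(f"Gross IT saving:  ${gross_saving:,.0f}")
print(f"Revenue share:    ${revenue_share:,.0f}")
print(f"Net saving:       ${net_saving:,.0f}")

Under these invented numbers the operator keeps about $25 million a year of a $35 million gross saving, which is the shared-risk, shared-reward structure Agati goes on to describe.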

That is for us, of course, a bold move. I need to say this clearly, because we are betting on our capability of not only providing a simple solution, but also of providing actual shareholder value, because that’s the game we are actually playing in now.

It's a real quality of life issue ... These people need to be connected and haven't been connected before.

We are risking our own money on it at the end of the game. So that's what makes the big difference in this deal against any other deal that I have seen in my career -- and in any other deal that I have seen in this industry. There is probably no one that is really taking on such a huge challenge.

Gardner: It's very interesting that we are seeing shared risks, but then also shared rewards. It's a whole different way of being in an ecosystem, being in a partnership, and investing in big-stakes infrastructure projects.

Agati: Yes. 

Gardner: There has been recent activity for your solutions in Bangladesh. Can you describe what's been happening there, and why that is illustrative of the value from this approach?

Bangladesh blueprint

Agati: Bangladesh is one of the countries in the pipeline, but it is not yet one of the most active. We are still working on the first implementation of this new stack. That will be the one that will set the parameters and become the template for all the others to come.

The logic of the transformation program is to identify a good market where we can challenge ourselves and deliver the first complete solution, and then reuse that solution for all of the others. This is what is happening now; we’re in the advanced stages of this pilot project.

Gardner: Yes, thank you. I was more referring to Bangladesh as an example of how unique and different each market can be. In this case, people often don't have personal identification; therefore, one needs to use a fingerprint biometric approach in the street to sell a SIM to get them up and running, for example. Any insight on that, Chris?

James-Killer: It speaks to the importance of the work that Ericsson is doing in these countries. We have seen in Africa and in parts of the Middle East how important telecommunications is to an individual. It's a real quality of life issue. We take it for granted in Sweden; we certainly take advantage of it in my home country of Australia. But in some of these countries you are actually making a genuine difference.

These people need to be connected and haven’t been connected before. And you can see what has happened politically when the people have been exposed to this kind of technology. So it's admirable, I believe, what Ericsson is doing, particularly commercially, and the way that they are doing it. 

It also speaks to Ericsson's success and the continued excitement around LTE and 4G in these markets; not actually 5G yet. When you visit Ericsson's website or go to Ericsson’s shows, there's a lot of talk about autonomous vehicles and working with Volvo and working with Scania, and the potential of 5G for smart cities initiatives. But some of the best work that Ericsson does is in building out the 4G networks in some of these frontier countries.

Agati: If I can add one thing. You mentioned how specific requirements are coming from such countries as Bangladesh, where we have the specific issue related to identity management. This is one of the big challenges we are now facing, of gaining the proper balance between coping with different local needs, such as different regulations, different habits, different cultures -- but at the same time also industrializing the means, making them repeatable and making that as simple as possible and as consistent as possible across all of these countries. 

There is a continuous battle between the attempts to simplify and the reality check on what does not always allow simplification and industrialization. That is the daily battle that we are waging: What do you need, and what don’t you need? Asking, “What is the business value behind a specific capability? What is the reasoning behind why you really need this instead of that?”

We at Ericsson want to be the champion of simplicity and this project is the cornerstone of going in that direction.

At the end of the game, this is the bet that we are making together with our customers -- that there is a path to where you can actually find the right way to simplification. Ericsson has recently been launching our new brand and it is about this quest for making it easier. That's exactly our challenge. We want to be the champion of simplicity and this project is the cornerstone of going in that direction.

Gardner: And only a global integrator with many years of experience in many markets can attain that proper combination of simplicity and customization.

Agati: Yes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

A tale of two hospitals—How healthcare economics in Belgium hastens need for new IT buying schemes

The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.

We will now explore how a regional hospital seeking efficiency -- and a teaching hospital seeking performance -- are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software-defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

How VMware, HPE, and Telefonica together bring managed cloud services to a global audience

The next BriefingsDirect Voice of the Customer optimized cloud design interview explores how a triumvirate of VMware, Hewlett Packard Enterprise (HPE), and Telefonica together bring managed cloud services to global audiences. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Learn how Telefonica’s vision for delivering flexible cloud services capabilities to Latin American and European markets has proven so successful. Here to explain how they developed the right recipe for rapid delivery of agile Infrastructure-as-a-Services (IaaS) deployments is Joe Baguley, Vice President and CTO of VMware EMEA, and Antonio Oriol Barat, Head of Cloud IT Infrastructure Services at Telefonica. The interview is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What challenges are mobile and telecom operators now facing as they transition to becoming managed service providers?

Oriol Barat: The main challenge we face at this moment is to help customers navigate in a multi-cloud environment. We now have local platforms, some legacy, some virtualized platforms, hyperscale public cloud providers, and data communications networks. We want to help our customers manage these in a secure way.

Gardner: How have your cloud services evolved? How have partnerships allowed you to enter new markets to quickly provide services?

Oriol Barat: We have had to transition from being a hosting provider with data centers in many countries. Our movement to cloud was a natural evolution of those hosting services. As a telecommunications company (telco), our main business is shared networks, and the network is a shared asset between many customers. So when we thought about the hosting business, we similarly wanted to be able to have shared assets. VMware, with its virtualization technology, came as a natural partner to help us evolve our hosting services.

Gardner: Joe, it’s as if you designed the VMware stack with customers such as Telefonica in mind.

Baguley: You could say that, yes. The vision has always been for us at VMware to develop what was originally called the software-defined data center (SDDC). Now, with multi-cloud, for me, it’s an operating system (OS) for clouds.

We’re bringing together storage, networking and compute into one OS that can run both on-premises and off-premises. You could be running on-premises the same OS as someone like Telefonica is running for their public cloud -- meaning that you have a common operating environment, a common infrastructure.

So, yes, entirely, it was built as part of this vision that everyone runs this OS to build his or her clouds.

Gardner: To have a core, common infrastructure -- yet have the ability to adapt on top of that for localized markets -- is the best of all worlds.

Baguley: That’s entirely it. Like someone said, “If all of the clouds are running the same OS, what’s the differentiation?” Well, the differentiation is, you want to go with the biggest player in Latin America. You want to go with the player that has the best direct connections: the guys who can give you service levels that maybe the cloud providers can’t give. They can give you over-the-top services that other cloud providers don’t provide. They can give you an integrated solution for your business that includes the cloud -- and other enterprise services.

It’s about providing the tools for cloud providers to build differentiated powerful clouds for their customers.

Gardner: Antonio, please, for those of our listeners and readers that aren’t that familiar with Telefonica, tell us about the breadth and depth of your company.

Oriol Barat: Telefonica is one of the top 10 global telco providers in the world. We are in 21 countries. We have fixed and mobile data services, and now we are in the process of digital transformation, where we have our focus in four areas: cloud, security, Internet of Things (IoT), and big data.

We used to think that our core business was in communications. Now we see what we call a new core of our business at the intersection of data communications, cloud, and security. We think this is really the foundation, the platform, of all the services that come on top.

Gardner: And, of course, we would all like to start with brand-new infrastructure when we enter markets. But as you know, we have to deal with what is already in place, too. When it came time for you to come up with the right combination of vendors, the right combination of technologies, to produce your new managed services capabilities, why did you choose HPE and VMware to create this full solution?

Sharing requires trust

Oriol Barat: VMware was our natural choice with its virtualization technologies to start providing shared IT platforms -- even before cloud, as a word, was invented. We launched “virtual hosting” in 2007. That was 10 years ago, and since then we have been evolving from this virtual hosting that had no portal but was a shared platform for customers, to the cloud services that we have today.

The hardware part is important; we have to have reliable and powerful technology. For us, it’s very important to provide trust to the customers. Trust, because what they are running in their data centers is similar to what we have in our data centers. Having VMware and HPE as partners provides this trust to the customers so that they will move the applications, and they know it will work fine.

Gardner: HPE is very fond of its Synergy platform, with composable infrastructure. How did that help you and VMware pull together the full solution for Telefonica, Joe?

Baguley: We have been on this journey together, as Antonio mentioned, since 2007 -- since before cloud was a thing. We don’t have a test environment that’s as big as Telefonica’s production environment -- and neither does HPE. What we have been doing is working together -- and like any of these journeys, there have been missteps along the way. We stumbled occasionally, but it’s been good to work together as a partnership.

As we have grown, we have also both understood how the requirements of the market are changing and evolving. Ten years ago providing a combined cloud platform on a composable infrastructure was unheard of -- and people wouldn’t believe you could do it. But that’s what we have evolved together, with the work that we have done with companies such as Telefonica.

The need for something like HPE Synergy and the Gen10 stack -- where there are these very configurable stacks that you can put together -- has literally grown out of the work that we have done together, along with what we have done in our management stack, with the networking, compute, and storage.

Gardner: The combination of composable infrastructure and SDDC makes for a pretty strong tag team.

Baguley: Yes, definitely. It gives you that flexibility and the agility that a cloud provider needs to then meet the agility requirements of their customers, definitely.

Gardner: When it comes to bringing more end users into the clouds for your managed services providers, one of the important things is for end users to move into that cloud with as much ease as possible. Because VMware is a de facto standard in many markets with its vSphere Hypervisor, how does that help you, being a VMware stack, create that ease of joining these clouds?

Seamless migrations

Oriol Barat: Having the same technology in the customer data center and in our cloud makes things a lot easier. In the first place, in terms of confidence, the customer can be confident that it’s going to work well when it is in place. The other thing is that VMware is providing us with the tools that make these migrations easier.

Baguley: At VMworld 2017, we announced VMware Hybrid Cloud Extension (HCX), which is our hybrid cloud connector. It allows customers to locally install software that connects at a Layer 2 [network] level, as well as right back to vSphere 5.0 in clouds. Those clouds now are IBM and VMware cloud native, but we are extending it to other service providers like Telefonica in 2018.

The important thing here is by going down this road, people can take some of the fear out of going to the cloud.

So a customer can truly feel that their connecting and migrations will be seamless. Things like vSphere vMotion across that gap are going to be possible, too. I think the important thing here is by going down this road, people can take some of the fear out of going to the cloud, because some of the fear is about getting locked in: “I am going to make decisions that I will regret in two years by converting my virtual machines (VMs) to run on another platform.” Right here, there isn’t that fear, there is just more choice, and Telefonica is very much part of that story of choice.

Gardner: It sounds like you have made things attractive for managed service providers in many markets. For example, they gain ease of migration from enterprises into the provider’s cloud. In the case of Telefonica, users gain support, services and integration, knowing that the venerable vendors like VMware and HPE are behind the underlying services.

Do you have any examples where you have been able to bring this total solution to a typical managed service provider account? How has it worked out for them?

Everyone’s doing it

Oriol Barat: We have use cases in all the vertical industries. Because cloud is a horizontal technology, it’s the foundation of everything. I would say that all companies of all verticals are in this process of transformation.

We have a lot of customers in retail that are moving their platforms to cloud. We have had, for example, US companies coming to Europe and deploying their SAP systems on top of our platforms.

For example in Spain, we have a very strong tourism industry with a lot of hotel chains that are also using our cloud services for their reservation systems and for more of their IT.

We have use cases in healthcare, of companies moving their medical systems to our clouds.

We have use cases of software vendors that are growing software-as-a-service (SaaS) businesses and they need a flexible platform that can grow as their businesses grow.

A lot of people are using these platforms as disaster recovery (DR) for the platforms that they have on-premises.

I would say that all verticals are into this transformation.

Gardner: It’s interesting, you mentioned being able to gain global reach from a specific home economy by putting data centers in place with a managed service provider model.

It’s also important for data sovereignty and compliance and General Data Protection Regulation (GDPR) and other issues for that to happen. It sounds like a very good market opportunity.

And that brings us to the last part of our discussion. What happens next? When we have proven technology in place, and we have cloud adoption, where would you like to be in 12 months?

Gaining the edge

Baguley: There has been a lot of talk at recent events, like HPE Discover, about intelligent edge developments. We are doing a lot at the edge, too. When you look at telcos, the edge is going to become something quite interesting.

What we are talking about is taking that same blend of storage, networking and compute, and running it on as small a device as possible. So think micro data centers, nano data centers. How far out can we push this cloud? How much can we distribute this cloud? How close to the point of need can we get our customers to execute their workloads, to do their artificial intelligence (AI), to do their data gathering, et cetera?

And working in partnership with someone who has a fantastic cloud and a fantastic network means that helping a customer build some kind of distributed edge-to-cloud core capability is something that Telefonica and VMware could probably do over the next 12 months. That could be really, really strong.

Gardner: Antonio?

Oriol Barat: In this transformation that all the enterprises are in, maybe we are in the 20 percent of execution range. So we still have 80 percent of the transformation ahead of us. The potential is huge.

Looking ahead with our services, for example, it’s very important that the network is also in transformation, leveraging the software-defined networking (SDN) technologies. These networks are going to be more flexible. We think that we are in a good position to put together cloud services with such network services -- with security, also with more software-defined capabilities, and create really flexible solutions for our customers.

Baguley: One example that I would like to add is if you can imagine that maybe Real Madrid C.F. are playing at home next weekend ... It’s theoretical that Telefonica could have the bottom of those network base stations -- because of VMware Network Functions Virtualization (NFV), it’s no longer specific base station hardware, it’s x86 HPE servers in there. They can maybe turn around to a betting company and say, “Would you like to move your front-end web servers with running containers to run in the base station, in Real Madrid’s stadium, for the four hours in the afternoon of that match?” And suddenly they are the best performing website.

Those are the kind of out-there, transformative ideas that are now possible due to new application infrastructures, new cloud infrastructures, edge, and technologies like the network all coming together. So those are the kinds of things you are going to see from this kind of solutions approach going forward.
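As a toy sketch of the “containerized front end at the edge for a few hours” idea, the snippet below starts and later removes a container using the Docker SDK for Python. It assumes a Docker engine is reachable on the edge host; the image, port, and four-hour window are placeholders rather than part of any real Telefonica or VMware deployment.

# Toy sketch: run a containerized front end for a fixed match-day window,
# then tear it down. Image, port, and duration are placeholders.
import time
import docker

edge = docker.from_env()   # in practice this would target the edge host's engine

frontend = edge.containers.run(
    "nginx:alpine",                  # stand-in for the web front end
    detach=True,
    ports={"80/tcp": 8080},
    name="matchday-frontend",
)

try:
    time.sleep(4 * 60 * 60)          # serve traffic for the four-hour window
finally:
    frontend.stop()                  # tear the workload back down afterwards
    frontend.remove()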

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Inside story on HPC's role in the Bridges Research Project at Pittsburgh Supercomputing Center

The next BriefingsDirect Voice of the Customer high-performance computing (HPC) success story interview examines how Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies.

We'll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.

It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We hear sometimes that at as high as board of director levels, they are telling people to go cloud-first, or just dump IT all together. That strikes me as an overreaction. If we’re looking at tools and to what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

All of the research we do shows that the world is hybrid for as far ahead as we can see.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Peters: I often talk about what I call the assumption gap. And the assumption gap is just that moment where we move from one side, where it’s okay to have lots of questions about something -- in this case, in IT. And then on the other side of this gap or chasm, to use a well-worn phrase, is where it’s not okay to ask anything because you’ll look like you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly and very fast at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo IM, to be precise -- they used to do the maintenance between 9 am and 5 pm because it was just a tool to chat with your friends at night. And now you have businesses that rely on that. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, x application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.

You have to figure out what you're trying to get done before you figure out what you're going to do with it.

But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?


Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.

What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes.

Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects. Like they are the answer to everything. What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then figure out how to get it. You should not be sitting IT managers down and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve hammered home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got -- because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of different ways of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how do you gain control? Control -- not for control’s sake -- comes only from knowing all the things you’re trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.

You can't manage what you can't measure. You also can't improve things that can't be managed or measured.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.


Visibility is not the same as solving. The point is not just finding out what you have -- but actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 
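
To make that visibility step concrete, here is a minimal sketch of the kind of programmatic inventory that replaces the whiteboard: it lists cloud virtual machines that carry no owner tag, the sort of untracked resource shadow IT tends to leave behind. It assumes an AWS environment reachable through the boto3 SDK, and the "Owner" tag is a hypothetical convention rather than a feature of any particular management product.

```python
# Minimal visibility sketch: list EC2 instances that carry no "Owner" tag,
# a first step toward finding workloads running in the shadows.
# Assumes AWS credentials are already configured; the "Owner" tag is a
# hypothetical convention, not a requirement of any particular tool.
import boto3

def untagged_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    flagged.append(instance["InstanceId"])
    return flagged

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"No owner recorded for {instance_id}")
```

In a multi-cloud shop the same check would be repeated per provider, with the results fed into whatever management layer sits on top.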

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go to a whole-day class, when I was on the sales side of the business, just to learn something like dual parity or about cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it better work in any temperature and at any speed. Now, we just demand so much from cars.

To stretch that analogy, I’m mixing cars and storage -- and we’ll make it all come together with hybrid IT -- in that it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economical and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now because you don’t talk about it in those terms. Storage has moved to be -- how do we serve applications? How do we serve up the right data in the right place, get it to the right person at the right time at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.

It will be self-managing and self-optimizing. It will not be a “recommending tool,” it will be an “executing tool.”

We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.
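
Peters’ point about setting the parameters and letting the machine operate can be sketched in a few lines. The tier names and thresholds below are hypothetical illustrations; a real system would learn and adjust them continuously rather than read them from a static table.

```python
# Minimal sketch of policy-driven data placement: the administrator sets the
# parameters once, and the system applies them automatically to every object.
# Tier names and thresholds are hypothetical illustrations.
TIER_POLICY = [
    # (minimum reads per day, tier)
    (100, "flash"),
    (10, "disk"),
    (0, "archive"),
]

def choose_tier(reads_per_day: float) -> str:
    for threshold, tier in TIER_POLICY:
        if reads_per_day >= threshold:
            return tier
    return "archive"

# Example: a dataset read 250 times a day lands on flash without anyone
# deciding its placement by hand; a rarely touched dataset drifts to archive.
assert choose_tier(250) == "flash"
assert choose_tier(3) == "archive"
```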


Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future -- it sounds great and perfect -- but you have to get there. We do quite a lot of research at ESG. I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software and have everything be okay, because it might not work with what you already have -- the legacy systems and applications. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.

People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change.

People want to enhance what they currently have so that it can embrace the cloud. It's enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use that, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You're not just going to pick one cloud model and hope that it  will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools. 

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.


Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are focused either on the cloud side of the business -- and there are some very big names there -- or on the on-premises side of the business. Embracing both is not that difficult for them to do, but it’s really not at the top of their want-to-do list until they’re absolutely forced into it.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough to imagine that large enterprises in particular are going to risk [standardizing on them]. Some will probably even start to try to write it themselves, which is possible -- unlikely, but possible.

Where I think we will find the other side of the list is among some of the other big organizations -- Oracle and IBM spring to mind -- in terms of being able to embrace both on-premises and off-premises. But at the end of the day, the commonality among those that we’ve mentioned is that they are systems companies. They win by delivering the best overall solution and package to their clients, not individual components within it.

If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management by sending requests for proposals (RFPs) to 50 or 60 different companies to find solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT, but the on-premises and off-premises environments and how you manage these things -- those are all parts of the solution, or the answer. The real thing, and it’s absolutely crucial, is that you don’t start with those bright shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional factors into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.

Burris

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?


Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
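
The physics argument is easy to check with back-of-the-envelope numbers. The sketch below computes the best-case speed-of-light round trip for a given distance and the time needed to move a dataset over a given link; the example distance and link speed are illustrative assumptions, not measurements of any particular provider.

```python
# Back-of-the-envelope check on the physics of distance:
# best-case round-trip latency and bulk-transfer time.
SPEED_OF_LIGHT_KM_PER_MS = 300_000 / 1000  # ~300 km per millisecond in a vacuum

def round_trip_ms(distance_km: float) -> float:
    # Real networks are slower (fiber, routing, queuing), so this is a hard lower bound.
    return 2 * distance_km / SPEED_OF_LIGHT_KM_PER_MS

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    bits = dataset_tb * 8e12
    return bits / (link_gbps * 1e9) / 3600

# Roughly 10,000 km between a remote site and a distant cloud region:
print(f"{round_trip_ms(10_000):.0f} ms round trip, at best")   # ~67 ms
# Moving 1 PB over a dedicated 10 Gbps link:
print(f"{transfer_hours(1000, 10):.0f} hours to move 1 PB")    # ~222 hours
```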

Gardner: As you are factoring your architecture, you need to take these local considerations into account, particularly when you are factoring costs. If you have to do some heavy lifting and make your bandwidth capable, it might be better to have a local closet-sized data center, because they are small and efficient these days, and you can stick with a private cloud or on-premises approach. At the least, you should factor the economic basis for comparison, with all these other variables you brought up.

Edge centers

Burris: That’s correct. In fact, we call them “edge centers.” For example, if the application involves any element of the Internet of Things (IoT), then latency considerations will likely come into play, and the cost of doing a round-trip message over a few thousand miles can be pretty significant when you consider how fast computing can be done these days.

The first consideration is, what are the impacts of latency for an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, “I would like to enter a new market or expand my presence in an existing market in a cost-effective way. And to do that, I want to have the system be more fully automated as it serves that particular market or that particular group of customers.” And perhaps it’s something that looks more process manufacturing-oriented, or something along those lines, that has IoT capabilities.

The goal is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The only way you are going to do that is if you introduce a fair amount of automation and if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round-trip cost of moving the data from a remote global location back to somewhere in North America -- independent of whether it’s legal or not – comes at a cost that exceeds the automation moment, then you just flat out can’t do it. Now, that is the most obvious and stringent consideration.
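
One way to read that constraint is as a simple comparison: if the round trip to a remote region, plus processing, exceeds the time budget of the automated action, the work has to run at the edge. A minimal sketch, with illustrative numbers rather than measured ones:

```python
# Minimal sketch of the "automation moment" test: can a control decision
# tolerate the round trip to a remote region, or must it run at the edge?
# All numbers below are illustrative assumptions.
def must_run_at_edge(decision_budget_ms: float,
                     round_trip_ms: float,
                     processing_ms: float) -> bool:
    return round_trip_ms + processing_ms > decision_budget_ms

# A valve that must react within 50 ms cannot wait on a 140 ms round trip
# to a distant cloud region, no matter how cheap the compute there is.
print(must_run_at_edge(decision_budget_ms=50, round_trip_ms=140, processing_ms=5))      # True
# A daily performance report tolerates seconds of latency just fine.
print(must_run_at_edge(decision_budget_ms=5_000, round_trip_ms=140, processing_ms=200)) # False
```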

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, the cost of moving data out of a small wind farm can be 10 times as expensive. It can cost hundreds of thousands of dollars a year to do relatively simple and straightforward types of data analysis on the performance of that wind farm.

Process locally, act globally

It’s a lot better to have a local presence that can handle local processing requirements against models that operate on locally derived or locally generated data, and to let that work be automated, with only periodic visibility into how the overall system is working. And that’s where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work be pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very closely in some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.

Gardner: It’s a fascinating subject as to what to push down to the edge, given that storage and processing costs are down and footprints are down, and what to then use the public cloud or Infrastructure-as-a-Service (IaaS) environment for.

But before we go any further, Peter, tell us about yourself and your organization, Wikibon.


Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It's not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset. 

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently, and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are all about. And then finally, how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy -- which is what we call “systems of agency.” And then ultimately, how cloud and historical infrastructure are going to come together and be optimized to support all those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as it becomes an increasing feature of any company’s value proposition. That's what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics and perception. Because in recent years, with offshoring and looking at intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of other second tier providers, but certainly in the top tier.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that's US-based?

Burris: There are two perspectives on that, but let me first add one check on this. Alibaba clearly is one of the top tier, and they are not based in the US, and that may be one of the advantages they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let's just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.


From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity. 

Money, you can put it here or you can put it there. Time, people, you can put here or you can put there. That machine can be dedicated to this kind of wire or that kind of wire.

Data is weird, because data can be copied, data can be shared. The value of data appreciates as we use it more successfully, as we integrate it and share it across multiple applications.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means, for example, that my customer data in Germany may not appreciate, or may not be able to generate the same types of returns, as my customer data in the US.

Now, that sets aside any moral question of whether Germany or the US has better privacy laws and protects consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right: number one, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there's some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren't necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated.  

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise -- their legal, marketing, security, risk avoidance and governance units -- right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern -- or that type of concern -- is minuscule relative to the broader questions of cloud computing, of the grabbing of your data and holding it a hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.


Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud -- think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but by executive compensation schemes, literally. Where executives were told that they would be paid on the basis of return on net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

Think about how to bring the cloud to your business, and to better manage your data assets, and don't automatically default to the notion that you're going to take your business to the cloud.

The same type of thinking pertains here -- the goal is not to get rid of IT assets since those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands?” The goal of the cloud experience, which is a very, very powerful concept, ultimately needs to be able to get access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack that provides a consistent cloud experience wherever the data has to run -- whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all of those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It will involve a degree of distribution that completely breaks most models. We’re thinking about infrastructure and architecture, but also data management, system management, security management and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop -- and they will continue to drop -- it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.

The fifth issue, that really places an enormous strain on IT is how we think about backing up and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services -- and as these applications increasingly are the basis for the business and the end-value that we’re creating -- we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up -- and, even more importantly, restore -- applications and even entire businesses. The issue becomes associated more with restoring. How do we restore applications and businesses across this incredibly complex arrangement of services and data locations and sources?

There's a new data regime that's emerging to support application development. How's that going to work -- the role the data scientists and analytics are going to play in working with application developers?

I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work. The role the data scientists and analytics are going to play in working with application developers – again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in to bring in new types of management with maturity and with visibility. Who are some of the players that might fill that role? One that I am familiar with, and I think I have seen them on theCUBE is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come in to this problem set and start to solve it? What do you think from what you’ve heard so far about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was an unsatisfyingly bad answer for the users. So along came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

We are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized.

The second thing is that to build the next generations of more complex applications -- and especially applications that involve capabilities like deep learning or machine learning with increased automation -- we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications -- inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, the companies are going to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are few out there, and certainly HPE with its New Stack initiative is one of them, and we at Wikibon are especially excited about it. It’s new, it’s immature, but HPE has a lot of piece parts that will be required to make a go of this technology. It’s going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It’s crucial to solve the next generation of problems that the world faces; we can’t move forward without some of these new classes of hybrid technologies that weave together fabrics that are capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How modern architects transform the messy mix of hybrid cloud into a force multiplier

The next BriefingsDirect cloud strategies insights interview focuses on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios.

We’ll now learn how composable infrastructure and auto-scaling help improve client services, operations, and business goals attainment for a New York cloud services and architecture support provider.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us learn what's needed to reach the potential of multiple -- and often overlapping -- cloud models is Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) Ltd. in New York.

Here are some excerpts:

Gardner: How are IT architecture and new breeds of service providers coming together? What’s different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services? 

Reyenger

Reyenger: Like the technology trends themselves, everything is accelerating. Before, you would have three-year or even five-year plans developed by the business. They were designed to reach certain business outcomes; you would design the technology to support that, and then it was heads-down to build the rocket ship.

It’s changed now to where it’s a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and be re-architected -- almost as if it were made of Lego blocks.

Gardner: More moving parts, less time.

Reyenger: Absolutely.

Gardner: How do you accomplish that? 

Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It’s becoming a way to achieve those business initiatives at that higher speed.

Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting? 

Reyenger: Every organization, including ours, has its own business transformation that they have to undergo. We think that we are extremely proactive. I see some companies that are developing in-house skill sets, and trying to add additional departments that would be more cloud-aware in order to meet those demands.

On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.

Gardner: Tell us about your company. What have you done to transform?


Reyenger: IIS has spent 26 years building out an amazing book of business with amazing relationships with a lot of enterprise customers. But as times change, you need to be able to add additional practices like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and then added in our internal developers and delivery consultants. They are very well-versed and aware of the new architecture. So we can marry the two together and help organizations reach that new end-state.

It's very easy for startups to go 100 percent to the cloud and just run with it. It’s different when you have 2,000 existing applications and you want to move to the future as well. It’s nice to have someone who understands both of those worlds -- and the appropriate way to integrate them. 

Gardner: I suppose there is no typical cloud engagement, but what is a common hurdle that organizations are facing as they go from that traditional IT mindset to the more cloud-centric thinking and hybrid deployment models? 

The cloud answer

Reyenger: The concept of auto-scaling or bursting has become very, very prevalent. You see that within different lines of business. Ultimately, they are all asking for essentially the same thing -- and the cloud is a pretty good answer.

At the same time, you really need to understand your business and the triggers. You need to be able to put the necessary intelligence together around those capabilities in order to make it really beneficial and align to the ebbs and flows of your business. So that's been one of the very, very common requests across the board.
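
As a concrete example of tying scaling to a business trigger rather than a raw infrastructure metric, here is a minimal sketch that sizes a worker pool from order backlog. The metric, target throughput, and bounds are hypothetical placeholders, not IIS intellectual property or any vendor's API.

```python
# Minimal auto-scaling sketch driven by a business metric (order backlog)
# rather than CPU alone. The throughput target and bounds are hypothetical.
import math

def desired_workers(orders_waiting: int,
                    orders_per_worker_per_min: int = 20,
                    min_workers: int = 2,
                    max_workers: int = 50) -> int:
    needed = math.ceil(orders_waiting / orders_per_worker_per_min)
    return max(min_workers, min(max_workers, needed))

# A holiday spike of 800 queued orders scales the pool to 40 workers;
# a quiet overnight period falls back to the floor of 2.
print(desired_workers(800))  # 40
print(desired_workers(5))    # 2
```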

We've built out solutions that include intellectual property from IIS and our developers, as well as cloud management tools built around backup to the cloud to eliminate tape and modernize backup for customers. This builds out a dedicated object store that customers can own that also tiers to the different public cloud providers out there.

And we’ve done this in a repeatable fashion so that our customers get the cloud consumption look and feel, and we’ve leveraged innovative contractual arrangements to allow customers to consume against the scope of work rather than on lease. We’ve been able to marry that with the different standardized offerings out there to give someone the head start that they need in order to achieve their objectives. 

Gardner: You brought up the cloud consumption model. Organizations want the benefit of a public cloud environment and user experience for bursting, auto-scaling, and price efficiency. They might want to have workloads on-premises, to use a managed service, or take advantage of public clouds under certain circumstances.

How are you working with companies like Hewlett Packard Enterprise (HPE), for example, to provide composable auto-scaling capabilities with the look and feel of public cloud on their private cloud?


Reyenger: Now it’s becoming a multi-cloud strategy. It’s one thing to say only on-premises and using one cloud. But using just one cloud has risk, and this is a problem.

We try to standardize everything through a single cloud management stack for our customers. We’re agnostic to a whole slew of toolsets around both orchestration and automation. We want to help them achieve that.

Intelligent platform performance

We looked at some of the very unique things that HPE has done, specifically around their Synergy platform, to allow for cloud management and cloud automation to deliver true composable infrastructure. That has huge value around energizing a company’s goals, strengthening their profitability, boosting productivity, and enhancing innovation. We've been able to extend that into the public cloud. So now we have customers that truly are getting the best of both worlds.

Composable infrastructure is having true infrastructure that you can deploy as code. It’s being able to standardize on a single RESTful API set. 

Gardner: How do you define composable infrastructure? 

Reyenger: It’s having true infrastructure that you can deploy as code. You’ll hear a lot of folks say that and what it really means is being able to standardize on a single RESTful API set.

That allows your platform to have intelligence when you look at infrastructure as a service (IaaS), and then to deliver things as either platform as a service (PaaS) or software as a service (SaaS) -- from either a DevOps approach, or from the lines of business directly to consumers. So it’s the ability to bridge those two worlds.

Traditionally, you may have underlying infrastructure that doesn't have the intelligence or doesn't have the visibility into the cloud automation. So I may be scaling, but I can't scale into infinity. I really need an underlying infrastructure to be able to mold and adapt in order to meet those needs.

We’re finally reaching the point where we have that visibility and we have that capability, thanks to software-defined data center (SDDC) and a platform to ultimately be able to execute on. 
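
To illustrate "infrastructure deployed as code" against a single RESTful API, here is a minimal sketch that posts a server-profile description to a composable-infrastructure endpoint. The URL, payload fields, and token handling are hypothetical placeholders for illustration, not the actual HPE Synergy or OneView API.

```python
# Minimal "infrastructure as code" sketch: describe the hardware you want and
# POST it to a composable-infrastructure REST API. The endpoint and payload
# schema below are hypothetical placeholders, not a specific vendor's API.
import requests

API = "https://composer.example.internal/rest/server-profiles"  # hypothetical
TOKEN = "REPLACE_WITH_SESSION_TOKEN"                             # hypothetical

profile = {
    "name": "web-tier-template",
    "cpuCores": 16,
    "memoryGb": 128,
    "storage": [{"sizeGb": 500, "tier": "flash"}],
    "networks": ["prod-frontend", "prod-backend"],
}

response = requests.post(
    API,
    json=profile,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Profile accepted:", response.json().get("uri", "<no uri returned>"))
```

The same profile, checked into version control, can be reviewed, diffed, and replayed like any other code change, which is what makes the infrastructure "composable" rather than hand-built.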

Gardner: When I think about composable infrastructure, I often wonder, “Who is the composer?” I know who composes the apps, that’s the developer -- but who composes the infrastructure?  

Reyenger: This gets to a lot of the digital transformation that we talked about in seeking different resources, or cultivating your existing resources to gain more of a developer’s view.

But now you have IT operations and DevOps both able to come under a single management console. They are able to communicate effectively and then script on either side in order to compose based on the code requirements. Or they can put guardrails on different segments of their workloads in order to dictate importance or assign guidelines. The developers can ultimately make those requests or modify the environment. 

Gardner: When you get to composable infrastructure in a data center or private cloud, that’s fine. But that’s sort of like 2D Chess. When I think about multi-cloud or hybrid cloud -- it’s more like 3D Chess. So how do I compose infrastructure, and who is the composer, when it comes to deciding where to support a workload in a certain way, and at what cost?

Consult before composing

Reyenger: We offer a series of consulting services around delivering managed services and the actual development work to take an existing cloud management stack -- whether that is Red Hat CloudForms, vRealize from VMware, or Terraform; it really doesn't matter.

We are ultimately allowing that to be the single pane of glass, the single console. And then because it’s RESTful API integrations into those public cloud providers, we’re able to provide that transparency from that management interface, which mitigates risk and gives you control.

Then we deploy things like Puppet, Chef, and Ansible within those different virtual private clouds and within those public cloud fabrics. Then, using that cloud management stack, you can have uniformity and you can take that composition and that intelligence and bring it wherever you like -- whether that's based on geography or a particular cloud service provider preference.

There are many different ways to ultimately achieve that end-state. We just want to make sure that that standardization, to your point, doesn’t get lost the second you leave that firewall.
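To make that "single pane of glass, many destinations" idea concrete, here is a minimal Python sketch of a placement policy that routes one composition request to whichever environment a simple rule selects. The provider functions and policy are illustrative stand-ins, not any particular cloud management stack; a real stack would call Terraform, CloudForms, vRealize, or a provider SDK behind these functions.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    region: str            # e.g. "eu-west", "us-east"
    data_sensitivity: str  # "public" or "restricted"

# Illustrative provisioners; in practice these would invoke the chosen
# orchestration tooling through its RESTful API.
def provision_on_private_cloud(w: Workload) -> str:
    return f"{w.name}: composed on private cloud in {w.region}"

def provision_on_public_cloud(w: Workload) -> str:
    return f"{w.name}: launched on public cloud in {w.region}"

def place(w: Workload) -> str:
    """Toy placement policy: restricted data stays on-premises,
    everything else goes to the preferred public cloud for its geography."""
    if w.data_sensitivity == "restricted":
        return provision_on_private_cloud(w)
    return provision_on_public_cloud(w)

if __name__ == "__main__":
    for w in [Workload("billing-db", "us-east", "restricted"),
              Workload("web-frontend", "eu-west", "public")]:
        print(place(w))
```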


Gardner: We are in the early days of composability of infrastructure in a multi-cloud world. But as the complexity and scale increases, it seems likely to me that we are going to need to bring things like machine learning and artificial intelligence (AI) because humans doing this manually will run out of runway.

Projecting into the future, do you see a role for an algorithmic, programmatic approach putting in certain variables, certain thresholds, and contextual learning to then make this composable infrastructure capability part of a machine process? 

Reyenger: Companies like HPE -- along with its new acquisition, Nimble -- as well as Red Hat and several others in the industry are leveraging the intelligence they gather from all of their different support calls and from lifecycle management across applications, and that allows them to provide feedback to the customer.

And in some cases, if you tie that back to an automation engine, it will actually give you the information on how to solve your problem. A lot of the precursors to what you are talking about are already in the works, and everyone is trying to be that data-and-cloud management company.

We will see more of that single pane of glass that they will leverage across multiple cloud providers. 

It's really too early to pick favorites, but you are going to see more standardization. Rather than 50 different RESTful APIs that everyone is standardizing on and that are constantly changing -- so that I have to provide custom integrations -- what we will see is more of that single pane of glass that customers will leverage across multiple cloud providers. That will leverage a lot of the same automation and orchestration toolsets that we talked about.

Gardner: And HPE has their sights set on this with Project New Hybrid IT Stack? 

Reyenger: 100 percent. 

Gardner: Looking at composable infrastructure, auto-scaling, using things like HPE Synergy, if you’re an enterprise and you do this right, how do you take this up to the C-Suite and say, “Aha, we told you so. Now give us more so we can do more”? In other words, how does this improve business outcomes? 

Fulfilling the promise

Reyenger: Every organization is different. I’ve spent a good chunk of my career being tactically deployed within very large organizations that are trying to achieve certain goals.

For me, I like to go to a customer's 10-K SEC filing and look at the promises they've made to their investors. We want to be able to marry this IT investment back to the short-term goals they are being judged against, both from a key performance indicator (KPI) standpoint and from the standpoint of the health of the company.

It means meeting DevOps challenges and timelines, working out issues with new greenfield workloads, and taking data that sits within traditional business intelligence (BI) relational databases and giving different departments access to some of that data, so they can run big data analytics against it in real time.

These are the types of testing methodologies that we like to set up so that we can help a customer actually rationalize what this means today in terms of dollars and cents and what it could mean in terms of that perceived value. 

Gardner: When you do this well, you get agility, and you get to choose your deployment models. It seems to me that there's going to be a concept that arises of minimal viable cloud, or hybrid cloud.

Are we going to see IT costs at an operating level adjusted favorably? Is this something that ultimately will be so optimized -- with higher utilization, leveraging the competitive market for cloud services -- that meaningful decreases will occur in the total operating costs of IT in an organization?

An uphill road to lower IT costs

Reyenger: I definitely think that it’s quite possible. The way that most organizations are set up today, IT operations rolls back into finance. So if you sit underneath the CFO, like most organizations do, and a request gets made by marketing or sales or another line of business -- it has to go up the chain, get translated, and then come back down.

A lot of times it's difficult to push a rock up a hill. You don’t have all the visibility unless you can get back up to finance or back over to that line of business. If you are able to break down those silos, then I believe that your statement is 100 percent true.

But changing all of those internal controls for a lot of these organizations is very difficult, which is why some are deploying net-new teams to be ultimately the future of their internal IT service provider operations.


Gardner: Arthur, I have been in this business long enough to know that every time we get to the point where we think we are going to meaningfully decrease IT costs, some other new paradigm of IT comes up that requires a whole new round of investment. But it seems to me that this could be different this time -- that we actually are getting to a standardized approach for supporting workloads, and that the traditional economics that affect any procured service will come into effect here, too.

Mining to minimize risk

Reyenger: Absolutely. One of our big pushes has been around object storage. This still allows for traditional file- and block-level support. We are trying to help customers achieve that new economic view -- of which cloud approach ultimately provides them that best price point, but still gives them low risk, visibility, and control over their data.

I will give you an example. There is a very large financial exchange that had a lot of intellectual property (IP) data that they traditionally mined internally and then provided back to different, smaller financial institutions as a service, as financial reports. A few years back, they came to us and said, “I really want to leverage the agility of Amazon Web Services (AWS) in terms of being able to spin up a huge Hadoop farm and mine this data very, very quickly -- and leverage that without having to increase my overall cost. But I don’t feel comfortable putting that data into S3 within AWS, where now they have two extra copies of my data as part of the service level agreement. So what do I do?”

And we ultimately stood up the same object storage service next to AWS, so you wouldn’t have to pay any data egress fees, and you could mine everything right there, leveraging AWS Redshift or Hadoop as a service.

Then once these artifacts, or these reports, were created, they no longer contained the IP. The reports came from the IP, but they are all roll-ups and comparisons, and so they are no longer sensitive to the company. We went ahead and put those into S3 and allowed Amazon to manage all of their customers’ identity and access management for getting at that content -- and that all minimized risk for this exchange. We are able to prevent anyone outside of the organization from getting behind the firewall to get at their data. You don’t have to worry about the SLAs associated with keeping this stuff up and available, and it became a really nice hybrid story.

We help customers gain all the benefits associated with cloud – without taking on any of the additional risk.

These are the types of projects that we really like to work on with customers, to be able to help them gain all the benefits associated with cloud – without taking on any of the additional risk, or the negatives, associated with jumping into cloud with both feet. 
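For readers who want to picture the publishing half of that pattern, here is a minimal Python sketch using boto3: only the derived, non-sensitive report is pushed to S3, while the raw IP stays on the adjacent object store. The bucket name, file path, and the "Owner"-style governance details are assumptions for illustration, not the exchange's actual setup.

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the environment or an IAM role

# Hypothetical bucket and report produced by the adjacent Hadoop/Redshift run.
BUCKET = "exchange-derived-reports"                      # placeholder name
report_path = "/mnt/objectstore/reports/q1-summary.parquet"

# Only the roll-up report leaves the adjacent object store; the raw IP data
# never lands in S3, which is the hybrid pattern described above.
s3.upload_file(report_path, BUCKET, "published/q1-summary.parquet")

# Downstream customers then read the published object through their own
# IAM-governed access, for example via a presigned URL that expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "published/q1-summary.parquet"},
    ExpiresIn=3600,
)
print(url)
```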

Gardner: You heard your customers, you saw a niche opportunity for object storage as a service, and you put that together. I assume that you want a composable infrastructure to do that. So is HPE Synergy a foundation for that in the future?

Reyenger: HPE Synergy doesn’t really have the disk density to get to the public cloud price point, but it does support object storage natively. So it's great from a DevOps standpoint for object storage. We definitely think that as time progresses and HPE continues down the Synergy roadmap, that cloud role will eventually fix itself.

A lot of the cloud role is centered on hyper-converged infrastructure. And in this kind of mantra, I don’t see compute and storage growing at the same rates. I see storage growing considerably faster than the need for compute. So this is a way for us to be able to help supplement a Synergy deployment, or we can help our customers get the true ROI/TCO they are looking for out of the hyper-converged. 

Gardner: So maybe the question I should ask is what storage providers are you using in order to make this economically viable?


Reyenger:  We are absolutely using the HPE Apollo storage line, and the different flavors of solid-state disks (SSD) down to SATA physical drives. And we are leveraging best-in-breed object storage software from Red Hat. We also have an OpenStack flavor as well.

We leverage things like automation and orchestration technologies, and our ServiceNow capabilities -- all married with our IP -- to give customers the choice of buying this and deploying it, with us layering services on top if they want, or of consuming a fully managed service for something that’s on-premises, with a per-GB price and the same SLAs as those public cloud providers. So all of it’s coming together to allow customers to really have the true choice and flexibility that everyone claimed you could have years ago.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

As enterprises face hybrid IT complexity, new management solutions beckon

The next BriefingsDirect Voice of the Analyst interview examines how new machine learning and artificial intelligence (AI) capabilities are being applied to hybrid IT complexity challenges.

We'll explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to report on how companies and IT leaders are seeking new means to manage an increasingly complex transition to sustainable hybrid IT is Paul Teich, Principal Analyst at TIRIAS Research in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.


Here are some excerpts:

Gardner: Paul, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. There is also lingering concern about the complexity of managing so many fast-moving parts. We have legacy IT, private cloud, public cloud, software as a service (SaaS) and, of course, multi-cloud. So as someone who tracks technology and its consumption, how much has technology itself been tapped to manage this sprawl, if you will, across hybrid IT?

Teich

Teich: So far, not very much, mostly because of the early state of multi-cloud and the hybrid cloud business model. As you know, it takes a while for management technology to catch up with the actual compute technology and storage. So I think we are seeing that management is the tail of the dog, it’s getting wagged by the rest of it, and it just hasn’t caught up yet.

Gardner: Things have been moving so quickly with cloud computing that few organizations have had an opportunity to step back and examine what’s actually going on around them -- never mind properly react to it. We really are playing catch up.

Teich: As we look at the options available, the cloud giants -- the public cloud services -- don’t have much incentive to work together. So you are looking at a market where there will be third parties stepping in to help manage multi-cloud environments, and there’s a lag time between having those services available and having the cloud services available and then seeing the third-party management solution step in.

Gardner: It’s natural that a specific cloud provider, whether it’s purely public like AWS or hybrid like Microsoft Azure and Azure Stack, wants to help its customers -- but first and foremost it wants to get them onto its own solutions. It’s a natural thing. We have seen this before in technology.

There are not that many organizations willing to step into the neutral position of being ecumenical, of saying they want to help the customer first and manage it all from the start.

As we look to how this might unfold, it seems to me that the previous models of IT management -- agent-based, single-pane-of-glass, and unfortunately still in some cases spreadsheets and Post-It notes -- have been brought to bear on this. But we might be in a different ball game, Paul, with hybrid IT. There are just too many moving parts and too much complexity, and we might need to look at data-driven approaches. What is your take on that?


Teich: I think that’s exactly correct. One of the jokes in the industry right now is if you want to find your stranded instances in the cloud, cancel your credit card and AWS or Microsoft will be happy to notify you of all of the instances that you are no longer paying for because your credit card expired. It’s hard to keep track of this, because we don’t have adequate tools yet.

When you are an IT manager and you have a lot of folks on public cloud services, you don't have a full picture.

That single pane of glass, looking at a lot of data and information, is soon overloaded. When you are an IT manager at a mid-sized or large corporation, you have a lot of folks paying out of pocket right now, slapping a credit card down for public cloud services, so you don’t have a full picture. And where you do have a picture, there are so many moving parts.

I think we have to get past having a screen full of data, a screen full of information, and to a point where we have insight. And that is going to require a new generation of tools, probably borrowing from some of the machine learning evolution that’s happening now in pattern analytics.
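A first, unglamorous step toward that insight is simply an inventory. Here is a minimal Python sketch with boto3 that lists running EC2 instances across every region visible to one set of credentials and flags any without an "Owner" tag. The tag name is an assumption for illustration; a real multi-account or multi-cloud view would also need AWS Organizations or equivalent APIs from the other providers.

```python
import boto3

def running_instances():
    """Yield (region, instance_id, owner_tag) for every running EC2 instance
    visible to the current credentials."""
    regions = [r["RegionName"]
               for r in boto3.client("ec2").describe_regions()["Regions"]]
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        pages = ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
        for page in pages:
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                    yield region, inst["InstanceId"], tags.get("Owner")

if __name__ == "__main__":
    for region, iid, owner in running_instances():
        flag = "" if owner else "  <-- no Owner tag: possible stranded spend"
        print(f"{region} {iid} owner={owner}{flag}")
```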

Gardner: The timing in some respects couldn’t be better, right? Just as we are facing this massive problem of complexity of volume and velocity in managing IT across a hybrid environment, we have some of the most powerful and cost-effective means to deal with big data problems just like that.

Life in the infrastructure

Paul, before we go further let’s hear about you and your organization, and tell us, if you would, what a typical day is like in the life of Paul Teich?

Teich: At TIRIAS Research we are boutique industry analysts. By boutique we mean there are three of us -- three principal analysts; we have just added a few senior analysts. We are close to the metal. We live in the infrastructure. We are all former engineers and/or product managers. We are very familiar with deep technology.

My day tends to be first, a lot of reading. We look at a lot of chips, we look at a lot of service-level information, and our job is to, at a very fundamental level, take very complex products and technologies and surface them to business decision-makers, IT decision-makers, folks who are trying to run lines of business (LOB) and make a profit. So we do the heavy lifting on why new technology is important, disruptive, and transformative.

Gardner: Thanks. Let’s go back to this idea of data-driven and analytical values as applied to hybrid IT management and complexity. If we can apply AI and machine learning to solve business problems outside of IT -- in such verticals as retail, pharmaceutical, transportation -- with the same characteristics of data volume, velocity, and variety, why not apply that to IT? Is this a case of the cobbler’s kids having no shoes? You would think that IT would be among the first to do this.

Dig deep, gain insight

Teich: The cloud giants have already implemented systems like this because of necessity. So they have been at the front-end of that big data mantra of volume, velocity -- and all of that.

To successfully train for the new pattern recognition analytics, especially the deep learning stuff, you need a lot of data. You can’t actually train a system usefully without presenting it with a lot of use cases.

The public clouds have this data. They are operating social media services, large retail storefronts, and e-tail, for example. As the public clouds became available to enterprises, the IT management problem ballooned into a big data problem. I don’t think it was a big data problem five or 10 years ago, but it is now.

That’s a big transformation. We haven’t actually internalized what that means operationally when your internal IT department no longer runs all of your IT jobs anymore.

We are generating big data and that means we need big data tools to go analyze it and to get that relevant insight.

That’s the biggest sea change -- we are generating big data in the course of managing our IT infrastructure now, and that means we need big data tools to go analyze it, and to get that relevant insight. It’s too much data flowing by for humans to comprehend in real time.

Gardner: And, of course, we are also talking about islands of such operational data. You might have a lot of data in your legacy operations. You might have tier 1 apps that you are running on older infrastructure, and you are probably happy to do that. It might be very difficult to transition those specific apps into newer operating environments.

You also have multiple SaaS and cloud data repositories and logs. There’s also not only the data within those apps, but there’s the metadata as to how those apps are running in clusters and what they are doing as a whole. It seems to me that not only would you benefit from having a comprehensive data and analytics approach for your IT operations, but you might also have a workflow and process business benefit by being an uber analyst, by being on top of all of these islands of operational data. 


To me, moving toward a comprehensive intelligence and data analysis capability for IT is the gift that keeps giving. You would then be able to also provide insight for an uber approach to processes across your entire organization -- across the supply chains, across partner networks, and back to your customers. Paul, do you also see that there’s an ancillary business benefit to having that data analysis capability, and not ceding it to your cloud providers?

Manage data, improve workflow

Teich: I do. At one end of the spectrum it’s simply what do you need to do to keep the lights on, where is your data, all of it, in the various islands and collections and the data you are sharing with your supply chain as well. Where is the processing that you can apply to that data? Increasingly, I think, we are looking at a world in which the location of the stored data is more important than the processing power.

The management of all the data you have needs to segue into visible workflows.

We have processing power pretty much everywhere now. What’s key is moving data from place to place and setting up the connections to acquire it. It means that the management of all the data you have needs to segue into visible workflows.

Once I know what I have, and I am managing it at a baseline effectively, then I can start to improve my processes. Then I can start to get better workflows, internally as well as across my supply chain. But I think at first it’s simply, “What do I have going on right now?”

As an IT manager, how can I rein in some of these credit card instances and credit card storage on the public clouds, and put that all into the right mix? I have to know what I know first -- then I can start to streamline. Then I can start to control my costs. Does that make sense?

Gardner: Yes, absolutely. And how can you know which people you want to give even more credit to on their credit cards – and let them do more of what they are doing? It might be very innovative, and it might be very cost-effective. There might also be those wasting money, spinning their wheels, repaving cow paths, over and over again.

Being able to make those decisions with insight and visibility, and to then further analyze how best to go about it -- that seems to me a no-brainer.

It also comes at an auspicious time as IT is trying to re-factor its value to the organization. If in fact they are no longer running servers and networks and keeping the trains running on time, they have to start being more in the business of defining what trains should be running and then how to make them the best business engines, if you will.

If IT departments need to rethink their role and step up their game, then they need to use technologies like advanced hybrid IT management from vendors with a neutral perspective. Then they become the overseers of operations at a fundamentally different level. 

Data revelation, not revolution

Teich: I think that’s right. It’s evolutionary stuff. I don’t think it’s revolutionary. I think that in the same way you add servers to a virtual machine farm, as your demand increases, as your baseline demand increases, IT needs to keep a handle on costs -- so you can understand which jobs are running where and how much more capacity you need.

One of the things they are missing with this random access to the cloud is bulk purchasing. So at a very fundamental level, IT can help the organization manage which clouds it is spending on -- aggregating the purchase of storage and of compute instances to get better buying power, and doing price arbitrage when you can. To me, those are fundamental qualities of IT going forward in a multi-cloud environment.

They are extensions of where we are today; it just doesn’t seem like it yet. IT has always added new servers to increase internal capacity, and this is just the next evolutionary step.
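As a toy illustration of the price-arbitrage idea, the Python sketch below picks the cheapest provider for a comparable instance shape. Every price is a made-up placeholder, and the provider names are hypothetical; a real decision would also weigh egress charges, SLAs, and data gravity, not just the hourly rate.

```python
# Hypothetical hourly prices for a comparable 8-vCPU / 32 GiB instance.
# All numbers are placeholders, not real list prices.
PRICE_PER_HOUR = {
    "provider-a": 0.38,
    "provider-b": 0.42,
    "provider-c-reserved": 0.27,  # assumes an aggregated/committed purchase
}

def cheapest(hours_per_month=730):
    """Return the provider with the lowest monthly cost for this shape."""
    best = min(PRICE_PER_HOUR, key=PRICE_PER_HOUR.get)
    return best, PRICE_PER_HOUR[best] * hours_per_month

if __name__ == "__main__":
    provider, monthly = cheapest()
    print(f"Cheapest placement: {provider} at about ${monthly:,.0f}/month")
```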

Gardner: It certainly makes sense that, as maturity occurs in any business function, you would move toward that orchestration, automation, and optimization -- rather than simply getting the parts in place. What you are describing is that IT is becoming more like a procurement function and less like a building, architecture, or construction function, which is just as powerful.

Not many people can make those hybrid IT procurement decisions without knowing a lot about the technology. Someone with just business acumen can’t walk in and make these decisions. I think this is an opportunity for IT to elevate itself and become even more essential to the businesses.

Teich: The opportunity is a lot like the Sabre airline scheduling system that nearly every airline uses now. That’s a fundamental capability for doing business, and it’s separate from the technology of Sabre. It’s the ability to schedule -- people and airplanes – and it’s a lot like scheduling storage and jobs on compute instances. So I think there will be this step.

But to go back to the technology versus procurement, I think some element of that has always existed in IT in terms of dealing with vendors and doing the volume purchases on one side, but also having some architect know how to compose the hardware and the software infrastructure to serve those applications.

Connect the clouds

We’re simply translating that now into a multi-cloud architecture. How do I connect those pieces? What network capacity do I need to buy? What kind of storage architectures do I need? I don’t think that all goes away. It becomes far more important when you look at, for example, AWS as a very large bag of services. It’s very powerful. You can assemble it in any way you want, but in some respects that’s like programming in C. You have all the power of assembly language and all the danger of assembly language, because you can walk all over memory and delete stuff. So you have to have architects who know how to build a service that’s robust, that won’t go down, and that serves your application most efficiently -- and all of those things are still hard to do.

So, architecture and purchasing are both still necessary. They don’t go away. I think the important part is that the orchestration part now becomes as important as deploying a service on the side of infrastructure because you’ve got multiple sets of infrastructure.


Gardner: For hybrid IT, it really has to be an enlightened procurement, not just blind procurement. And the people in the trenches that are just buying these services -- whether the developers or operations folks -- they don’t have that oversight, that view of the big picture to make those larger decisions about optimization of purchasing and business processes.

That gets us back to some of our earlier points of, what are the tools, what are the management insights that these individuals need in order to make those decisions? Like with Sabre, where they are optimizing to fill every hotel room or every airplane seat, we’re going to want in hybrid IT to fill every socket, right? We’re going to want all that bare metal and all those virtualization instances to be fully optimized -- whether it’s your cloud or somebody else’s.

It seems to me that there is an algorithmic approach eventually, right? Somebody is going to need to be the keeper of that algorithm as to how this all operates -- but you can’t program that algorithm if you don’t have the uber insights into what’s going on, and what works and what doesn’t.

What’s the next step, Paul, in terms of the technology catching up to the management requirements in this new hybrid IT complex environment?

Teich: People can develop some of that experience on a small scale, but there are so many dimensions to managing a multi-cloud, hybrid IT infrastructure business model. It’s throwing off all of this metadata for performance and efficiency. It’s ripe for machine learning.

We're moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale.

In a strong sense, we’re moving so fast right now that if you are an organization of any size, machine learning has to come into play to help you get better economies of scale. It’s just going to be looking at a bigger picture, it’s going to be managing more variables, and learning across a lot more data points than a human can possibly comprehend.

We are at this really interesting point in the industry where we are getting deep-learning approaches that are coming online cost effectively; they can help us do that. They have a little while to go before they are fully mature. But IT organizations that learn to take advantage of these systems now are going to have a head start, and they are going to be more efficient than their competitors.
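To show the flavor of what "machine learning over usage metadata" can mean at a small scale, here is a short Python sketch using scikit-learn's IsolationForest to flag anomalous daily spend and utilization records. The data is synthetic and the feature set (cost, CPU utilization, egress) is an assumption chosen for illustration, not a recommended model for any particular environment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic daily records: [cost_usd, cpu_utilization, gb_egress].
normal = rng.normal(loc=[500, 0.55, 40], scale=[60, 0.08, 8], size=(300, 3))
spikes = np.array([[2400, 0.05, 10],    # big bill on mostly idle instances
                   [900, 0.95, 400]])   # unusually heavy egress day
records = np.vstack([normal, spikes])

# Unsupervised anomaly detector over the usage metadata.
model = IsolationForest(contamination=0.01, random_state=0).fit(records)
flags = model.predict(records)          # -1 marks an anomaly

for row, flag in zip(records, flags):
    if flag == -1:
        cost, cpu, egress = row
        print(f"Review: ${cost:,.0f}, cpu={cpu:.0%}, egress={egress:.0f} GB")
```

The same approach scales up in the obvious way: more features, more history, and retraining on a schedule, which is exactly the retraining caveat Teich raises below.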

Gardner: At the end of the day, if you’re all using similar cloud services then that differentiation between your company and your competitor is in how well you utilize and optimize those services. If the baseline technologies are becoming commoditized, then optimization -- that algorithm-like approach to smartly moving workloads and data, and providing consumption models that are efficiency-driven -- that’s going to be the difference between a 1 percent margin and a 5 percent margin over time.

The deep-learning difference

Teich: The important part to remember is that these machine-training algorithms are somewhat new, so there are several challenges with deploying them. First is the transparency issue. We don’t quite yet know how a deep-learning model makes specific decisions. We can’t point to one aspect and say that aspect is managing the quality of our AWS services, for example. It’s a black box model.

We can’t yet verify the results of these models. We know they are being efficient and fast but we can’t verify that the model is as efficient as it could possibly be. There is room for improvement over the next few years. As the models get better, they’ll leave less money on the table.

We also have to validate, when you build a machine-learning model, that it covers all the situations you want it to cover. You need an audit trail for specific sets of decisions, especially with data that is subject to regulatory constraints. You need to know why you made decisions.

So the net is, once you are training a machine-learning model, you have to keep retraining it over time. Your model is not going to do the same thing as your competitor's model. There is a lot of room for differentiation, a lot of room for learning. You just have to go into it with your eyes open that, yeah, occasionally things will go sideways. Your model might do something unexpected, and you just have to be prepared for that. We’re still in the early days of machine learning.

Gardner: You raise an interesting point, Paul, because even as the baseline technology services in the multi-cloud era become commoditized, you’re going to have specific, unique, and custom approaches to your own business’ management.

Your hybrid IT optimization is not going to be like that of any other company. I think getting that machine-learning capability attuned to your specific hybrid IT panoply of resources and assets is going to be a gift that keeps giving. Not only will you run your IT better, you will run your business better. You’ll be fleet and agile.

If some risk arises -- whether it’s a cyber security risk, a natural disaster risk, a business risk of unintended or unexpected changes in your supply chain or in your business environment -- you’re going to be in a better position to react. You’re going to have your eyes to the ground, you’re going to be well tuned to your specific global infrastructure, and you’ll be able to make good choices. So I am with you. I think machine learning is essential, and the sooner you get involved with it, the better.

Before we sign off, who are the vendors and some of the technologies that we will look to in order to fill this apparent vacuum on advanced hybrid IT management? It seems to me that traditional IT management vendors would be a likely place to start.

Who’s in?

Teich: They are a likely place to start. All of them are starting to say something about being in a multi-cloud environment, about being in a multi-cloud-vendor environment. They are already finding themselves there with virtualization, and the key is they have recognized that they are in a multi-vendor world.

There are some start-ups, and I can’t name them specifically right now. But a lot of folks are working on this problem of how to manage hybrid IT -- in-house IT plus multi-cloud orchestration -- and there is a lot of work going on there. We haven’t seen a lot of it publicly yet, but there is a lot of venture capital being placed.

I think this is the next step. Just as PCs came into the office and then smartphones came into the office, we are moving from server farms to the cloud, and from cloud to multi-cloud, and it’s attracting a lot of attention. The hard part right now is nailing down whom to place your faith in. The name brands that people are buying their internal IT from right now are probably good near-term bets. As the industry gets more mature, we’ll have to see what happens.


Gardner: We did hear a vision described on this from Hewlett Packard Enterprise (HPE) back in June at their Discover event in Las Vegas. I’m expecting to hear quite a bit more on something they’ve been calling New Hybrid IT Stack that seems to possess some of the characteristics we’ve been describing, such as broad visibility and management.

So at least one of the long-term IT management vendors is looking in this direction. That’s a place I’m going to be focusing on, wondering what the competitive landscape is going to be, and if HPE is going to be in the leadership position on hybrid IT management.

Teich: Actually, I think HPE is the only company I’ve heard from so far talking at that level. Everybody is voicing some opinion about it, but from what I’ve heard, it does sound like a very interesting approach to the problem.

Microsoft has actually constrained its view of Azure Stack to a very small set of problems, and is actively saying no to the rest. If you’re looking at doing virtual machine migration and taking advantage of multi-cloud for general-purpose solutions, it’s probably not something that you want to do yet. It was very interesting for me, then, to hear about the HPE Project New Hybrid IT Stack and what HPE is planning to do there.

Gardner: For Microsoft, the more automated and constrained they can make it, the more likely you’d be susceptible or tempted to want to just stay within an Azure and/or Azure Stack environment. So I can appreciate why they would do that.

Before we sign off, one other area I’m going to be keeping my eyes on is around orchestration of containers, Kubernetes, in particular. If you follow orchestration of containers and container usage in multi-cloud environments, that’s going to be a harbinger of how the larger hybrid IT management demands are going to go as well. So a canary in the coal mine, if you will, as to where things could get very interesting very quickly.

The place to be

Teich: Absolutely. And I point out that the Linux Foundation’s CloudNativeCon in early December 2017 looks like the place to be -- with nearly everyone in the server infrastructure community and cloud infrastructure communities signing on. Part of the interest is in basically interchangeable container services. We’ll see that become much more important. So that sleepy little technical show is going to be invaded by “suits,” this year, and we’re paying a lot of attention to it.

Gardner: Yes, I agree. I’m afraid we’ll have to leave it there. Paul, how can our listeners and readers best follow you to gain more of your excellent insights?

Teich: You can follow us at www.tiriasresearch.com, and also we have a page on Forbes Tech, and you can find us there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to grow and thrive

The next BriefingsDirect Voice of the Analyst interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice.

We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy

Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles joins us to report on how companies are managing an increasingly complex transition to sustainable hybrid IT. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who or which group is surfacing that is tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

Crawford

Crawford: For the short-term, I would say everyone. It’s not as simple as it has been in the past where we look to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models -- and cloud is a relatively new consumption model for technology -- it changes the dynamics of it. It’s the combination of changing that consumption model -- but then there’s another factor that comes into this. There is also the consumerization of technology, right? We are “democratizing” technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It’s not as it used to be, where we would say, “Okay, I'm not sure how to turn on a computer.” Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full-circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It’s consumed by different lines of business, too. It’s consumed even by end-consumers of the products and services. I would say it’s all of the above.


Gardner: The good news is that more and more people are able to -- on their own – innovate, to acquire cloud services, and they can factor those into how they obtain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?

Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this -- and it’s something that just kind of raises the hair on the back of my neck -- is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”

The board should be very business-focused and instead they're dictating specific technology -- whether it’s the right technology or not. That’s really what this comes down to. 

What’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? 

Another example is folks that try and go all-in on cloud but aren’t necessarily thinking about what’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer.

We in the enterprise IT space haven't really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our heads around them and get into the right frame of mind and thought process about how to take advantage of them in the best way possible.

Another example that I've worked through from an economic standpoint -- and I have done this math a number of times with clients -- is to figure out the comparison between the IT you're doing on-premises in your corporate data center for any given application versus doing it in a public cloud.

Think differently

If you do the math, taking an application from a corporate data center and moving it to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?

When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we’re using the same traditional mindset where we think about cloud as a solution, the delivery mechanism, and a tool. The reality is it’s a different delivery mechanism, and it’s a different kind of tool.

When used appropriately, in some cases, yes, it can be less expensive. The challenge is you have to get yourself out of your traditional thinking and think differently about the how and why of leveraging cloud. And when you do that, then things begin to fall into place and make a lot more sense both organizationally -- from a process standpoint, and from a delivery standpoint -- and also economically.
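As a back-of-the-envelope illustration of the comparison Crawford describes, the Python sketch below contrasts a lift-and-shift move (peak-sized instances running around the clock) with a re-architected, autoscaled deployment. Every figure is a placeholder invented for illustration, not a benchmark; the point is only that the mindset, not the cloud, drives the multiple.

```python
# Placeholder monthly figures for a single application; none are real prices.
HOURS_PER_MONTH = 730

on_prem = 4_000                                    # amortized $/month on-premises

# Lift-and-shift: peak-sized instances left running around the clock.
lift_and_shift = 10 * 2.20 * HOURS_PER_MONTH       # 10 large VMs at $2.20/hr

# Re-architected: fewer instances, autoscaled to roughly a 40% duty cycle.
rearchitected = 3 * 2.20 * HOURS_PER_MONTH * 0.40

print(f"on-prem         ${on_prem:>8,.0f}")
print(f"lift-and-shift  ${lift_and_shift:>8,.0f}  "
      f"(~{lift_and_shift / on_prem:.1f}x on-prem)")
print(f"re-architected  ${rearchitected:>8,.0f}  "
      f"(~{rearchitected / on_prem:.1f}x on-prem)")
```

With these made-up numbers the lift-and-shift path comes out around four times the on-premises cost while the re-architected path comes in below it, which mirrors the dynamic described above.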

Gardner: That “appropriate use of cloud” is the key. Of course, that could be a moving target. What’s appropriate today might not be appropriate in a month or a quarter. But before we delve into more … Tim, tell us about your organization. What’s a typical day in the life for Tim Crawford like?

It’s not tech for tech’s sake, rather it’s best to say, “How do we use technology for business advantage?” 

Crawford: I love that question. AVOA stands for that position in which we sit between business and technology. If you think about the intersection of business and technology, of using technology for business advantage, that’s the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It’s not tech for tech’s sake, and I want to be really clear about that. But rather it’s best to say, “How do we use technology for business advantage?”

We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.

A day in a life of me could range from, if I'm lucky, being able to stay in my office and be on the phone with clients, working with folks and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, understanding what people really are contending with.

I spent well over 20 years of my career before I began doing this within the IT organization, inside leading IT organizations. It’s incredibly important for me to stay relevant by being out with these folks and understanding what they're challenged by -- and then, of course, helping them through their challenges.

Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.

It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I'm grateful that the opportunities that I've had to work with have been just fabulous, and the same goes for the people.


Gardner: I've always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently -- and I find that very valuable.

Okay, let’s get back to this idea of “appropriate use of cloud.” I wonder if we should also expand that to be “appropriate use of IT and cloud.” So including that notion of hybrid IT, which includes cloud and hybrid cloud and even multi-cloud. And let’s not forget about the legacy IT services.

How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT -- to therefore know if we’re doing it appropriately?

The monkey-bread model

Crawford: The first thing we have to do is take a step back to provide the context of that visibility -- or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.

If we look back for a minute, and look at how IT operates -- traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.

Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.

Part of the methodology I use is the monkey bread example. If you're not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you're essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”

And then, what you're left with is this really squishy -- but much smaller -- footprint that you have to contend with. That problem in the center is much more specific -- and arguably that’s what differentiates your company from your competition.

Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.

This example helps you work through determining what’s critical, and -- more importantly -- what’s strategic and differentiating to my business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.

Identify opportunities where cloud makes sense – and where it doesn’t. It definitely is one of the most significant opportunities for most IT organizations today. 

That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense -- and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.

So it’s important: Understand what is appropriate, how you leverage the right solutions for the right application or service.
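For readers who want to try the monkey-bread triage on their own portfolio, here is a toy Python sketch. The service names, attributes, and placement rules are illustrative assumptions, not Crawford's methodology in code; the exercise is the classification itself, which any spreadsheet could also capture.

```python
# Toy portfolio triage in the spirit of the "monkey bread" exercise.
# Service names and placements are illustrative assumptions only.
portfolio = {
    "email":               {"differentiating": False, "mature_saas": True},
    "file-sharing":        {"differentiating": False, "mature_saas": True},
    "order-pricing-engine": {"differentiating": True,  "mature_saas": False},
    "customer-analytics":  {"differentiating": True,  "mature_saas": False},
}

def placement(attrs):
    if attrs["differentiating"]:
        return "gooey center: invest here, run where you control it"
    if attrs["mature_saas"]:
        return "peel off to SaaS / public cloud"
    return "candidate for managed service or private cloud"

for name, attrs in portfolio.items():
    print(f"{name:<22} -> {placement(attrs)}")
```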

Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.

So is IT itself going to rise to this challenge, of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group? Maybe enterprise architects, business architects, business process management (BPM) analysts? Do you think it’s important for IT to both stay in and elevate to the bigger game?

Changing IT roles and responsibilities

Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.

The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is the business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow the same vein.

For example, if you were to go into a traditional IT organization today and ask them about the nature of their business -- ask an administrator or a developer to tell you what they do and how it impacts the company and the business -- unfortunately, most of them would have a really hard time doing that.

The IT organization of the future will clearly articulate the work they’re doing, how that impacts their customers and their business, and how making different changes and tweaks will impact their business. They will have an intimate knowledge of how their business functions, much more than how the technology functions. That’s a very different mindset, and that’s the place we have to get to for IT on the whole. IT can’t just be this technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral, to the business.

Gardner: If we recognize that cloud is here to stay -- but that the consumption of it needs to be appropriate, and if we’re at some sort of inflection point, we’re also at the risk of consuming cloud inappropriately. If IT and leadership within IT are elevating themselves, and upping their game to be that strategic player, isn’t IT then in the best position to be managing cloud, hybrid cloud and hybrid IT? What tools and what mechanisms will they need in order to make that possible?


Crawford: Theoretically, the answer is that they really need to get to that level. We’re not there, on the whole, yet. Many organizations are not prepared to adopt cloud. I don’t want to be a naysayer of IT, but in terms of where IT needs to go on the whole, in sum, we need to move into a position where we can manage the different types of delivery mechanisms -- whether public cloud, SaaS, private cloud, or appropriate data centers. Those are all just different levers we can pull depending on the business type.

Businesses change, customers change, demand changes and revenue comes from different places. IT needs to be able to shift gears just as fast and in anticipation of where the company goes. 

As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.

The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.

Gardner: Some boards of directors, as you mentioned, are saying, “Go cloud,” or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. People are doing microservices, and developers are spinning up cloud instances and object storage instances. Sometimes they’ll keep those running into production; sometimes they’ll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, and perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn’t really know what the other is doing.

Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?

Why, oh why, oh why, cloud?

Crawford: It depends on how you approach it. If you’re doing it from an IT command-and-control perspective, where you want to control everything in cloud -- full stop, that’s failure right out of the gate. But if you’re doing it from a position of -- I’m trying to use it as an opportunity to understand why are these folks leveraging cloud, and why are they not coming to IT, and how can I as CIO be better positioned to be able to support them, then great! Go forth and conquer.

The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets -- especially as you start thinking about Internet of things (IoT) and start thinking about edge computing -- you know that public cloud becomes problematic for some of those things.

The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?

Once you begin to ask yourself those questions, and you keep asking them, it’s like that old adage: ask yourself why three times and you get to the core of the true reason. You’ll bring greater clarity to the reasons, typically the business reasons, why you’re actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions -- and which decisions you might want to think about doing differently.


Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it -- but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.

So even though you may have an organization where you want to foster innovation, you want people to spread their wings, try out proofs of concept, be agile and democratic in terms of their ability to use myriad IT services, at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run.” How are the economic factors driven into cloud decision-making after you’ve done it for a period of time?

Cloud’s good, but is it good for business?

Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.

For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.
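To make that trade-off concrete, here is a minimal sketch -- with entirely made-up figures and option names -- of the kind of value-to-cost comparison Crawford describes, expressed in Python:

```python
# Illustrative only: a value-to-cost check of the kind Crawford describes.
# Every figure and option name below is a placeholder, not real data.
def value_to_cost(options):
    """options maps an option name to a (cost, business_value) pair in dollars."""
    return {name: value / cost for name, (cost, value) in options.items()}

ratios = value_to_cost({
    "specialty-workload-in-public-cloud": (5.0, 6.0),      # spend $5 to make $6
    "differentiating-workload-on-premises": (5.0, 500.0),  # spend $5 to make $500
})
print(ratios)  # the second option clearly justifies its spend; the first barely does
```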

If you begin to understand the business value of the actions you take -- how you leverage public cloud versus private cloud versus your corporate data center assets -- and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.

Economics plays a very significant role -- but let’s not kid ourselves. IT organizations haven’t exactly been the best at economics in the past. We need to be moving forward. And so it’s just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for that.

Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on -- perhaps at any price – and we can then mature that process over time by looking at price, by finding the best options.

But the other item that we need to consider at all times is risk. Sometimes we need to consider whether getting so far into a model such as public cloud that we can’t get back out of it is part of that risk. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain, for example, has inherent cyber security risks. Isn’t it up to IT also to help organizations factor in some of these risks -- along with compliance, regulation, and data sovereignty issues? It’s a big barrel of monkeys.

Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.

Safety in numbers

Crawford: You bring up a great point, Dana. Risk -- whether it is risk from a cyber security standpoint, data sovereignty issues, or regulatory compliance -- the reality is that nobody across the organization truly understands all of these pieces together.

It really is a team effort to bring it all together -- where you have the privacy folks, the information security folks, and the compliance folks -- that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.

It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.

Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they’ll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is -- or isn’t -- doing appropriately with hybrid cloud and hybrid IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Case study: How HCI-powered private clouds accelerate efficient digital transformation

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn't necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost, data access, and the ease of data egress, because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds -- and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively -- or even more cost effectively -- than you can in the public clouds. There are certainly places for both in the market.

Gardner: It's going to vary, of course, from company to company -- and even department to department within a company -- but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what's really coming down the pipe --  we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can't mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but it’s in production, it’s live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure -- but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, “Do it again for the whole world, but save us a bunch of money.” The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them “cool stuff.”

They asked, “What's new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.

Learn How to Transform to a Hybrid IT Environment

McKittrick: Absolutely, yes. We also gained failover capabilities and disaster recovery within these regional data centers. We ended up going from -- depending on whom you spoke to -- somewhere between seven and 19 data centers globally, down to three. The data center footprint shrank massively. Just in the US, we went to one data center; we got rid of the other data center completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key, hyper-convergence and virtualization.

The other key enabling technology was data de-duplication, so the ability to shrink the data and then be able to move it from place to place without crushing bandwidth requirements, because you were only moving the changes, the change blocks.
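As a rough illustration of that change-block idea (not SimpliVity’s actual implementation -- the block size and hashing scheme here are assumptions), a file can be split into fixed-size blocks, each block hashed, and only the blocks whose hashes differ from the previous run shipped over the WAN:

```python
# Rough illustration of change-block replication -- not SimpliVity's actual
# implementation. Block size and hashing scheme are assumptions for the example.
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size in bytes

def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(previous_hashes, path):
    """Compare the file's current block hashes against a previous run and
    return the indices of blocks that would need to be re-sent over the WAN."""
    current = block_hashes(path)
    changed = [i for i, h in enumerate(current)
               if i >= len(previous_hashes) or previous_hashes[i] != h]
    return changed, current
```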

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built into the solution itself. We also deployed a separate data archive, but that’s different from backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: Now, there is more than just the physical approach to IT, as you described it; there is also the budgetary and financial approach. So how do they get the benefit of the OpEx approach that people are fond of with public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn't really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn't work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That's what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That's when hyper-convergence came through.

The cost analysis was already done, because I was already costing it with a refresh, including compute and traditional SAN storage. The numbers I had -- just what we would have spent on hardware and infrastructure costs, not including network and bandwidth -- would have been $55 million over five years, and we ended up doing it for $15 million.

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE's portfolio.

Learn How to Transform to a Hybrid IT Environment

McKittrick: This thing is now built and running, and it's been two years since inception. So that's a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it -- and he is a very forward-thinking technologist -- he always liked to say, “It’s just VMware.” So the beauty of it, from their perspective, was that they could just deploy on VMware virtualization. Everyone in our organization knows how to work with VMware; we just deploy that, and they move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have -- from the server on down -- folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors. That was really it, and with VMware.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture -- yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds -- with composability, automated bursting, and scale-out -- and soon combining that with multi-cloud options via HPE New Stack, this gives them the best of all worlds.

Learn How to Transform to a Hybrid IT Environment

McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let's revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process 

McKittrick: Great question. The technology was the easy part. This was my first global HCI roll out, and I have been in the business well over 20 years. The differences come when you are messing with people -- moving their cheese, and messing with their rice bowl. It’s profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on-board. They must understand what's happening, why it's happening, and then you try to get all those people pulling in the same direction. Otherwise, you end up in a massive morass and things don't get done, or they become almost unmanageable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU

The next BriefingsDirect high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information.

We’ll now learn how Carnegie Mellon University and a team of researchers there are producing amazing results with strategic reasoning thanks in part to powerful new memory-intense systems architectures.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about strategic reasoning. Why is imperfect information often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

Sandholm

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don’t know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig into how this is being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms have really nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the power of 161 different situations that a player can face. That is a one followed by 161 zeros. And if you think about that, it’s not only more than the number of atoms in the universe, but even if, for every atom in the universe, you had a whole other universe and counted all the atoms in those universes -- it would still be more than that.

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.
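To put Sandholm’s comparison in symbols, using the commonly cited estimate of roughly 10^80 atoms in the observable universe, even one extra universe of atoms for every atom still falls short of the game’s size:

```latex
10^{161} \;>\; \underbrace{10^{80}}_{\text{atoms in the universe}}
\times \underbrace{10^{80}}_{\text{atoms in one extra universe per atom}}
\;=\; 10^{160}
```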

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it's not really the only part here -- or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don't have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn't want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.

All three of these modules run on the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC), for which the hardware was built by Hewlett Packard Enterprise (HPE).
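For a sense of what the first module -- approximating a Nash equilibrium from the rules alone -- means in practice, here is a toy sketch in Python. It uses simple regret matching in self-play on rock-paper-scissors; Libratus relies on far more sophisticated, large-scale algorithms, so treat this only as an illustration of the idea:

```python
# Illustrative only: regret matching in self-play on rock-paper-scissors.
# Libratus uses far larger-scale algorithms; this just shows the idea of
# approximating a Nash equilibrium using nothing but the rules of the game.
import random

ACTIONS = 3  # rock, paper, scissors
# Payoff to player 0 for (player 0's action, player 1's action); zero-sum game.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=200000):
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        for p in (0, 1):
            strategy_sums[p] = [s + x for s, x in zip(strategy_sums[p], strategies[p])]
        a0 = random.choices(range(ACTIONS), weights=strategies[0])[0]
        a1 = random.choices(range(ACTIONS), weights=strategies[1])[0]
        # Regret: how much better each alternative action would have done.
        for a in range(ACTIONS):
            regrets[0][a] += PAYOFF[a][a1] - PAYOFF[a0][a1]
            regrets[1][a] += -PAYOFF[a0][a] + PAYOFF[a0][a1]
    return [[s / sum(sums) for s in sums] for sums in strategy_sums]

print(train())  # both average strategies approach [1/3, 1/3, 1/3]
```

Over many iterations the average strategies approach the equilibrium of one-third on each action, which is the sense in which the game’s rules alone are enough to derive a strategy.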

HPC from HPE Overcomes Barriers to Supercomputing and Deep Learning

Gardner: Is this being used in any business settings? It certainly seems like there's potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you're describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don't want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can't have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don't, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we're seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That's the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world -- but that is far in the future.

Gardner: Looking back at what is enabling this, clearly the software, the algorithms, and finding the right benchmark -- in this case the poker game -- are essential. But with that large a potential data set -- the probability space you mentioned -- the underlying computer systems need to keep up. Where are you in terms of the threshold that holds you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: It's all of the above, and we are very fortunate that we had access to Bridges; otherwise this wouldn’t have been possible at all.  We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field -- but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let's examine the commercialization potential of this. You're not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.

HPC from HPE Overcomes Barriers to Supercomputing and Deep Learning

Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how you price strategically, taking into account the opponent’s strategic response in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let's say you're a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.
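A toy model shows the difference between reactive and strategic pricing that Sandholm describes. All the demand parameters below are invented; the point is only that each firm’s best price depends on the rival’s, and iterating the best responses converges to the equilibrium prices that “pricing into the future” would anticipate:

```python
# Toy differentiated-duopoly pricing model. All parameters are invented; the point
# is only that each firm's best price depends on the rival's, and iterating the
# best responses converges to the equilibrium prices a strategic pricer anticipates.
A, B, C, COST = 100.0, 2.0, 1.0, 10.0  # demand intercept, own- and cross-price sensitivity, unit cost

def best_response(rival_price):
    """Profit-maximizing price when demand is q = A - B*p + C*rival_price
    and profit is (p - COST) * q."""
    return (A + C * rival_price + B * COST) / (2 * B)

def equilibrium_prices(iterations=100):
    p1 = p2 = COST  # any starting point works; the iteration is a contraction here
    for _ in range(iterations):
        p1, p2 = best_response(p2), best_response(p1)
    return p1, p2

print(equilibrium_prices())  # the fixed point is the Nash equilibrium in prices
```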

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess -- it's often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

Sandholm: Exactly! If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, cyber security has several applications, such as zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and what algorithms you should run depends on what the other opposing governments run -- so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that's also a very strategic application.

And in upstairs block trading, in finance, it’s the same thing: a few players, very big, very strategic.

Gaming your own immune system

The most radical application is something that we are working on currently in the lab where we are doing medical treatment planning using these types of sequential planning techniques. We're actually testing how well one can steer a patient's T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan -- but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent -- but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we've heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you're keeping a keen eye on?

HPC from HPE Overcomes Barriers to Supercomputing and Deep Learning

Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I'm looking at it from the outside. I'm trusting that they will continue to build the best hardware and maintain it in the best way -- so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well.

Sandholm: Actually here in the live game in Las Vegas they don't allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don't put our AI in there; it’s against their site rules. Also, I think it's unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe

The next BriefingsDirect cloud ecosystem strategies interview explores how a Canadian software provider delivers a hybrid cloud platform for enterprises and service providers alike.

We'll now learn how Ormuco has identified underserved regions and has crafted a standards-based hybrid cloud platform to allow its users to attain world-class cloud services just about anywhere.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore how new breeds of hybrid cloud are coming to more providers around the globe thanks to the Cloud28+ consortium is Orlando Bayter, CEO and Founder of Ormuco in Montréal, and Xavier Poisson Gouyou Beachamps, Vice President of Worldwide Indirect Digital Services at Hewlett Packard Enterprise (HPE), based in Paris. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with this notion of underserved regions. Orlando, why is it that many people think that public cloud is everywhere for everyone when there are many places around the world where it is still immature? What is the opportunity to serve those markets?

Bayter: There are many countries underserved by the hyperscale cloud providers. If you look at Russia, the United Arab Emirates (UAE), and other countries around the world, they want to comply with regulations on security and data sovereignty, and they need to have the clouds locally to comply.

Bayter

Ormuco targets those countries that are underserved by the hyperscale providers and enables service providers and enterprises to consume cloud locally, in ways they can’t do today.

Gardner: Are you allowing them to have a private cloud on-premises as an enterprise? Or do local cloud providers offer a common platform, like yours, so that they get the best of both the private and public hybrid environment?

Bayter: That is an excellent question. There are many workloads that cannot leave the firewall of an enterprise. With that, you now need to deliver the economies, ease of use, flexibility, and orchestration of a public cloud experience in the enterprise. At Ormuco, we deliver a platform that provides the best of the two worlds. You are not leaving your data center, and you don’t need to worry whether it’s on-premises or off-premises.

It’s a single pane of glass. You can move the workloads in that global network via established providers throughout the ecosystem of cloud services.

Gardner: What are the attributes of this platform that both your enterprise and service provider customers are looking for? What’s most important to them in this hybrid cloud platform?

Bayter: As I said, there are some workloads that cannot leave the data center. In the past, you couldn’t get the public cloud inside your data center. You could have built a private cloud, but you couldn’t get an Amazon Web Services (AWS)-like solution or a Microsoft Azure-like solution on-premises.

We have been running this now for two years and what we have noticed is that enterprises want to have the ease of use, self-service, and orchestration on-premises. Now, they can connect to a public cloud based on the same platform and they don’t have to worry about how to connect it or how it will work. They just decide where to place this.

They have security, can comply with regulations, and gain control -- plus 40 percent savings compared with VMware, and up to 50 percent to 60 percent compared with AWS.

Gardner: I’m also interested in the openness of the platform. Do they have certain requirements as to the cloud model, such as OpenStack?  What is it that enables this to be classified as a standard cloud?

Bayter: At Ormuco, we went out and checked what are the best solutions and the best platform that we can bring together to build this experience on-premises and off-premises.

We saw OpenStack, we saw Docker, and then we saw how to take, for example, OpenStack and make it like a public cloud solution. So if you look at OpenStack, the way I see it is as concrete, or a foundation. If you want to build a house or a condo on that, you also need the attic. Ormuco builds that software to be able to deliver that cloud look and feel, that self-service, all in open tools, with the same APIs both on private and public clouds.
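Bayter doesn’t spell out Ormuco’s own APIs, but since the foundation is OpenStack, the “same API on private and public” idea can be sketched with the standard openstacksdk client. The cloud names below are hypothetical entries in a clouds.yaml file; only the entry changes between on-premises and off-premises:

```python
# Sketch of "same API, private or public" using the standard openstacksdk client.
# "ormuco-private" and "ormuco-public" are hypothetical clouds.yaml entries, and
# Ormuco's own layer on top of OpenStack is not shown here.
import openstack

def launch(cloud_name, image_id, flavor_id, network_id):
    conn = openstack.connect(cloud=cloud_name)  # credentials come from clouds.yaml
    server = conn.compute.create_server(
        name="demo-app",
        image_id=image_id,
        flavor_id=flavor_id,
        networks=[{"uuid": network_id}],
    )
    return conn.compute.wait_for_server(server)

# The call is identical on-premises and off-premises; only the cloud entry changes:
# launch("ormuco-private", image_id, flavor_id, network_id)
# launch("ormuco-public", image_id, flavor_id, network_id)
```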

Learn How Cloud 28+ Provides an Open Community of Cloud Service Providers

Gardner: What is it about the HPE platform beneath that that supports you? How has HPE been instrumental in allowing that platform to be built?

Community collaboration

Bayter: HPE has been a great partner. Through Cloud28+ we are able to go to markets in places that HPE has a presence. They basically generate that through marketing, through sales. They were able to bring deals to us and help us grow our business.

From a technology perspective, we are using HPE Synergy. With Synergy, we can provide composability, and we can combine storage and compute into a single platform. Now we go together into a market, we win deals, and we solve the enterprise challenges around security and data sovereignty.

Gardner: Xavier, how is Cloud28+ coming to market, for those who are not familiar with it? Tell us a bit about Cloud28+ and how an organization like Ormuco is a good example of how it works.

Poisson: Cloud28+ is a community of IT players -- service providers, technology partners, independent software vendors (ISVs), value added resellers, and universities -- that have decided to join forces to enable digital transformation through cloud computing. To do that, we pull our resources together to have a single platform. We are allowing the enterprise to discover and consume cloud services from the different members of Cloud28+.

We launched Cloud28+ officially to the market on December 15, 2016. Today, we have more than 570 members from across the world inside Cloud28+. Roughly 18,000 distributed services may be consumed and we also have system integrators that support the platform. We cover more than 300 data centers from our partners, so we can provide choice.

In fact, we believe our customers need to have that choice. They need to know what is available for them. As an analogy, if you have your smartphone, you can have an app store and do what you want as a consumer. We wanted to do the same and provide the same ease for an enterprise globally anywhere on the planet. We respect diversity and what is happening in every single region.

Ormuco has been one of the first technology partners. Docker is another one. And Intel is another. They have been working together with HPE to really understand the needs of the customer and how we can deliver very quickly a cloud infrastructure to a service provider and to an enterprise in record time. At the same time, they can leverage all the partners from the catalog of content and services, propelled by Cloud28+, from the ISVs.

Global ecosystem, by choice 

Because we are bringing together a global ecosystem, including the resellers, if a service provider builds a project through Cloud28+, with a technology partner like Ormuco, then all the ISVs are included. They can push their services onto the platform, and all the resellers that are part of the ecosystem can convey onto the market what the service providers have been building.

We have a lot of collaboration with Ormuco to help them to design their solutions. Ormuco has been helping us to design what Cloud28+ should be, because it's a continuous improvement approach on Cloud28+ and it’s via collaboration.

As I like to say, “If you want to join Cloud28+ to take, don't come. If you want to give, and take a lot afterward, yes, please come, because we all receive a lot.”

Gardner: Orlando, when this all works well, what do your end-users gain in terms of business benefits? You mentioned reduction in costs; that’s very important, of course. But is there more about your platform from a development perspective and an operational perspective that we can share to encourage people to explore it?

Bayter: So imagine yourself with an ecosystem like Cloud28+. They have 500 members. They have multiple countries, many data centers.

Now imagine that you can have the Ormuco solution on-premises in an enterprise and then be able to burst to a global network of service providers, across all those regions. You get the same performance, you get the same security, and you get the same compliance across all of that.

For an end-customer, you no longer need to think about where you’re going to put your applications. They can go to the public cloud or to the private cloud. It is agnostic. You basically place it where you want it to go and decide the economies you want to get. You can compare with the hyperscale providers.

That is the key, you get one platform throughout our ecosystem of partners that can deliver to you that same functionality and experience locally. With a community such as Cloud28+, we can accomplish something that was not possible before.

Gardner: So, just hoping to delineate between the development and then the operations in production. Are you offering the developer an opportunity to develop there and seamlessly deploy, or are you more focused on the deployment after the applications are developed, or both?

Development to deployment 

Bayter: With our solution, same as AWS or Azure allows, a developer can develop their app via APIs in an automated way, use a database of choice (it could be MySQL or Oracle), and use the load balancing and the different features we have in the cloud, whether it’s Kubernetes or Docker -- build all that -- and then, when the application is ready, you can decide in which region you want to deploy the application.

So you go from development, to deployment technology of your choice, whether it’s Docker or Kubernetes, and then you can deploy to the global network that we’re building on Cloud28+. You can go to any region, and you don’t have to worry about how to get a service provider contract in Russia, or how to get a contract in Brazil, or who is going to provide you with the service. Now you can get that service locally through a reseller or a distributor, or have an ISV deploy the software worldwide.
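As a hedged sketch of that develop-then-choose-a-region flow (not Cloud28+’s actual tooling), the official Kubernetes Python client can push the same Deployment to whichever provider cluster your kubeconfig context points at. The context names, image, and registry below are made up:

```python
# Hedged sketch (not Cloud28+'s actual tooling): push the same Deployment to
# whichever provider cluster a kubeconfig context points at. The context names,
# image, and registry below are invented for the example.
from kubernetes import client, config

def deploy(context, image="registry.example.com/demo-app:1.0", replicas=2):
    config.load_kube_config(context=context)  # pick the target cluster/region
    container = client.V1Container(
        name="demo-app",
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# deploy("provider-brazil")  # hypothetical context for a partner cluster in Brazil
# deploy("provider-russia")  # hypothetical context for a partner cluster in Russia
```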

Gardner: Xavier, what other sorts of organizations should be aware of the Cloud28+ network?

Learn How Cloud 28+ Provides an Open Community of Cloud Service Providers

Poisson: We have the technology partners like Ormuco, and we are thankful for what they have brought to the community. We have service providers, of course, software vendors, because you can publish your software in Cloud28+ and provision it on-premises or off-premises. We accelerate go-to-market for startups, they gain immediate global reach with Cloud28+. So to all the ISVs, I say, “Come on, come on guys, we will help you reach out to the market.”

System integrators also, because we see this as an opportunity for the large enterprises and governments with a lot of multi-cloud projects taking shape and having requirements for security. And you know what is happening with security today -- it’s a hot topic. So people are thinking about how they can have a multi-cloud strategy. System integrators are now turning to Cloud28+ because they find here a reservoir of all the capabilities to find the right solution to answer the right question.

Universities are another kind of member we are working with. Just to explain, we know that all the technologies are created first at the university and then they evolve. All the startups are starting at the university level. So we have some very good partnerships with some universities in several regions in Portugal, Germany, France, and the United States. These universities are designing new projects with members of Cloud28+, to answer questions of the governments, for example, or they are using Cloud28+ to propel the startups into the market.

Ormuco is also helping to change the business model of distribution. So distributors now also are joining Cloud28+. Why? Because a distributor has to make a choice for its consumers. In the past, a distributor had software inventory that they were pushing to the resellers. Now they need to have an inventory of cloud services.

There is more choice. They can purchase hyperscale services, resell, or maybe source to the different members of Cloud28+, according to the country they want to deliver to. Or they can own the platform using the technology of Ormuco, for example, and put that in a white-label model for the reseller to propel it into the market. This is what Azure is doing in Europe, typically. So new kinds of members and models are coming in.

Digital transformation

Lastly, an enterprise can use Cloud28+ to make their digital transformation. If they have services and software, they can become a supplier inside of Cloud28+. They source cloud services inside a platform, do digital transformation, and find a new go-to-market through the ecosystem to propel their offerings onto the global market.

Gardner: Orlando, do you have any examples that you could share with us of a service provider, ISV or enterprise that has white-labeled your software and your capabilities as Xavier has alluded to? That’s a really interesting model.

Bayter: We have been able to go-to-market to countries where Cloud28+ was a tremendous help. If you look at Western Europe, Xavier was just speaking about Microsoft Azure. They chose our platform and we are deploying it in Europe, making it available to the resellers to help them transform their consumption models.

If you look at the Europe, Middle East and Africa (EMEA) region, we have one of the largest managed service providers. They provide public cloud and they serve many markets. They provide a community cloud for governments and they provide private clouds for enterprises -- all from a single platform.

We also have several of the largest telecoms in Latin America (LATAM) and EMEA. We have a US presence, where we have Managed.com as a provider. So things are going very well and it is largely thanks to what Cloud28+ has done for us.

Gardner: While this consortium is already very powerful, we are also seeing new technologies coming to the market that should further support the model. Such things as HPE New Stack, which is still in the works, HPE Synergy’s composability and auto-bursting, along with security now driven into the firmware and the silicon -- it’s almost as if HPE’s technology roadmap is designed for this very model, or very much in alignment. Tell us how new technology and the Cloud28+ model come together.

Bayter: So HPE New Stack is becoming the control point of multi-cloud. Now what happens when you want to have that same experience off-premises and on-premises? New Stack could connect to Ormuco as a resource provider, even as it connects to other multi-clouds.

With an ecosystem like Cloud28+ all working together, we can connect those hybrid models with service providers to deliver that experience to enterprises across the world.

Learn How Cloud 28+ Provides an Open Community of Cloud Service Providers

Gardner: Xavier, anything more in terms of how HPE New Stack and Cloud28+ fit? 

Partnership is top priority

Poisson: It’s a real collaboration. I am very happy with that because I have been working a long time at HPE, and New Stack is a project that has been driven by thinking about the go-to-market at the same time as the technology. It’s a big reward to all the Cloud28+ partners because they are now de facto considered as resource providers for our end-user customers – same as the hyperscale providers, maybe.

At HPE, we say we are in partnership first -- with our partners, or ecosystem, or channel. I believe that what we are doing with Cloud28+, New Stack, and all the other projects that we are describing – this will be the reality around the world. We deliver on-premises for the channel partners.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       How IoT capabilities open new doors for Miami Telecoms Platform Provider Identidad

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

How Nokia refactors the video delivery business with new time-managed IT financing models

The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary for rethinking architectures? How are pricing models and business models changing, too? 

Larbey: We sit here in 2017, but let’s look back 10 years to 2007. There were a couple key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.

Larbey

It’s been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device types, moving eventually into the iPad. Not only that, underneath it Apple developed a lot of technology for how you stream video and how you protect video over IP, which we still use today. Not only did they create a new device type and avenue for us to watch video, they also created new underlying protocols.

It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.

If we fast-forward and look to where that goes in the future, there are two trends we see today that will create challenges tomorrow. Video has become truly mobile. When we talk about mobile video, we mean watching films on our iPad or on our iPhone -- not on a big TV screen. That is what most people mean by mobile video today.

The future is personalized 

When you can take your video with you, you want to take all your content with you. You can’t do that today. That has to happen in the future. When you are on an airplane, you can’t take your content with you. You need connectivity to extend so that you can take your content with you no matter where you are.

Take the simple example of a driverless car. Now, you are driving along and you are watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what you are going to be doing? You are still going to be keeping the kids quiet, but there is a void, a space that needs to be filled with activity, and clearly extending the content into the car is the natural next step.

And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same -- it’s the same color, it’s the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn’t have a different user interface.

The user interface presented to me in the morning may be different than the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put those all together and extrapolate those 10 years into the future, then 2027 will be a very different place for video.

Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?

Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.

But if you step back and look at the fundamental changes -- all video will be streamed. Today, the majority of what we view is via broadcasting, from cable TV, or from a satellite. It’s a signal that’s going to everybody at the same time.

If you think about the mobile video concept, if you think about personalization, that is not going be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.

And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.

Gardner: You also mentioned in the case of an airplane, where it's not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?

IT’s all about intelligence

Larbey: It’s all about intelligence. First, connectivity has to extend and become really ubiquitous via technologies such as 5G. The continued increase in fiber will dramatically enable truly ubiquitous connectivity, which we don’t really have today. That will resolve some of the problems, but not all.

And because television will be personalized, the network will know what’s in my schedule. If I have an upcoming flight, machine learning can automatically predict what I’m going to do and make sure the right content is suggested in context. It may even download the content because it knows I am going to be sitting on a flight for the next 12 hours.
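To make that idea concrete, here is a minimal sketch of schedule-aware prefetching, assuming access to a calendar feed and a ranked recommendation list. The event format, function names, and thresholds are hypothetical illustrations, not Nokia's implementation.

```python
from datetime import datetime, timedelta

def plan_prefetch(events, recommendations, now, lookahead_hours=24):
    """Return titles to download before the next predicted offline period."""
    to_download = []
    horizon = now + timedelta(hours=lookahead_hours)
    for event in events:
        offline = event["type"] == "flight"      # treat flights as offline time
        if offline and now <= event["start"] <= horizon:
            # Fill roughly the offline duration with top-ranked recommendations.
            budget = event["end"] - event["start"]
            for title in recommendations:
                if budget <= timedelta(0):
                    break
                to_download.append(title["name"])
                budget -= title["duration"]
    return to_download

events = [{"type": "flight",
           "start": datetime(2017, 11, 2, 9, 0),
           "end": datetime(2017, 11, 2, 21, 0)}]
recs = [{"name": "Episode 1", "duration": timedelta(minutes=55)},
        {"name": "Episode 2", "duration": timedelta(minutes=55)}]
print(plan_prefetch(events, recs, datetime(2017, 11, 1, 20, 0)))
```

In practice the "upcoming flight" signal would come from whatever schedule data the personalized service is allowed to see, and the ranking from the recommendation engine; the point is simply that the decision to download rather than stream can be automated.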

Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization -- minimal viable streaming, if you will.

How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them in utility as well as in the user experience.

Larbey: Absolutely. We think everything moves into the network, and the network becomes the intelligence. So what does that do immediately? It means operators don’t have to buy set-top boxes, which are expensive, costly to maintain, and stay in the field a long time. They can instead use a much lighter client capability, which basically just renders the user interface.

The first obvious example of this, and one we are heavily focused on, is storage -- taking the hard drive out of the set-top box and putting that data back into the network. Some huge deployments are going on at the moment, in collaboration with Hewlett Packard Enterprise (HPE), using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.

HPE Rethinks How to Acquire, Pay For, and Use IT

Now, what are the advantages of that? Everybody thinks first of cost: you’ve taken the hard drive out and moved the storage into the network, and that’s clearly one element. But actually, if you talk to any operator, their biggest cause of subscriber churn is when somebody’s set-top box fails and they lose their personalized recordings.

The personal connection you had with your service isn’t there any longer, and it’s a lot easier then to look at competing services. So if that content is in the network, you clearly don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.

Taking the CDN private

Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that's extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it? Is it going to evolve? How does that pan out over the next decade?

Larbey: The CDN will still exist. It remains the key way of optimizing video delivery -- but it changes. If you go back 10 years, the only CDNs available were shared CDNs on the Internet; you bought capacity on a shared service.

Even today that's how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private networks -- with both telcos and cable operators -- so they can have their own private CDN, and there are a lot of advantages to having your own private CDN.
You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don't if you outsource to a shared service.

What we’re seeing now is the programmers and broadcasters also taking an interest in the private CDN, because they want that control. Video is their business, so the quality they deliver is even more important to them, and many of them are starting to look at adopting the private CDN model as well.

The challenge is how you build that. You have to build for peak, and peak is generally driven by live sporting events and one-off news events, which leaves a lot of capacity sitting idle much of the time. With cloud and orchestration, we have solved that technically -- we can add servers very quickly, take them out very quickly, react to traffic demands, and move things around.
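As a rough illustration of that elasticity, the sketch below shows the kind of capacity-reconciliation loop an orchestrator might run for a private CDN. The per-server session figure, headroom factor, and function names are assumptions made for the example, not Nokia product behavior.

```python
import math

SESSIONS_PER_SERVER = 2000      # assumed streaming capacity of one edge server
HEADROOM = 1.25                 # keep 25% spare for sudden spikes
MIN_SERVERS, MAX_SERVERS = 2, 200

def desired_server_count(concurrent_sessions):
    needed = math.ceil(concurrent_sessions * HEADROOM / SESSIONS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

def reconcile(current_servers, concurrent_sessions):
    target = desired_server_count(concurrent_sessions)
    if target > current_servers:
        return f"scale out: add {target - current_servers} servers"
    if target < current_servers:
        return f"scale in: remove {current_servers - target} servers"
    return "steady state"

# A quiet evening versus a live sporting event:
print(reconcile(current_servers=5, concurrent_sessions=6000))    # scale in
print(reconcile(current_servers=5, concurrent_sessions=180000))  # scale out
```

The technical part -- deciding how many servers to run and reconciling toward that number -- is straightforward; as Larbey notes next, the harder problem has been matching the commercial model to that same elasticity.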

But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility -- not just from an IT perspective, but also from a commercial perspective.

Gardner: Tell me about private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.

Larbey: As a business unit, we basically help anyone who has content -- be that broadcasters, programmers, or pay-TV operators -- to stream that content over IP and to launch new services. We have a product focused on video networking: how video is optimized, delivered, streamed, and personalized.

That includes a private CDN product, which we have deployed for the last seven years, and a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration business, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.

Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?

Flexible financial formats

Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.

We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time -- but these don’t necessarily work for my applications from a financial perspective.

In the private CDN and video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. My application fundamentally breaks the traditional IT payment model for storage. So having a partner like HPE that was flexible and could understand the application was really important.

We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.

And third, we have been trying to enter a new market focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but the programmer and broadcaster segment is used to buying a service from the Internet; they work in a different way and have different requirements.

So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important. 

Gardner: These video delivery organizations are increasingly operating on a subscription basis, so they would like their costs to be incurred on a similar basis -- then it all makes sense across the services ecosystem.

Larbey: Yes, absolutely. That is becoming more and more important. Go back to the very first Internet video you watched -- a cat falling off a chair on YouTube. It didn’t matter if it buffered; that wasn’t relevant. Now our tolerance for buffering just doesn’t exist anymore, and we demand and expect the highest-quality video.

If TV in 2027 is going to be purely IP, then clearly it has to deliver exactly the same quality of experience as the broadcast technologies, and that creates challenges. The most obvious example: go to any IPTV operator and compare a live streamed channel with the same channel on broadcast, and there is a big delay.

So there is a lag between the live event and what you are seeing on your IP stream, which is 30 to 40 seconds. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovations we’re now doing with streaming technologies are to deliver that same broadcast experience.

Gardner: We now also have to think about 4K resolution, the intelligent edge, and low latency, all with managed costs. Fortunately, HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and there is a lot more technology being driven to the edge for storage and large-memory processing. How are these advances affecting your organization?

Optimal edge: functionality and storage

Larbey: There are two elements. Compute at the edge is absolutely critical. We are going to move all the intelligence into the network, and clearly you need to reduce the latency and be able to scale that functionality. This functionality used to be scaled across millions of households; now it has to be done in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.

The HPE platforms allow you to deploy that compute and storage deep into the network, and they are absolutely critical to our success. We will run our CDN, our ad insertion, and all of that capability as deep into the network as an operator wants to go -- and certainly the deeper, the better.

The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording -- especially in the US, due to content-use regulations -- is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That’s clearly very inefficient.

The question is how you optimize that, and how you support the just-in-time transcoding techniques that have been talked about for some time. Just-in-time transcoding creates the right bitrate on the fly, so you don’t have to store all the different formats, which would dramatically reduce storage costs.
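A minimal sketch of that idea follows, assuming one high-quality mezzanine copy per recording and a placeholder transcode() call standing in for a real encoder; the identifiers and the simple rendition cache are illustrative only, not how any particular cloud DVR is built.

```python
rendition_cache = {}   # (recording_id, bitrate_kbps) -> rendition bytes

def transcode(mezzanine_bytes, bitrate_kbps):
    # Placeholder for an actual encode; returns a tagged string for illustration.
    return f"<{len(mezzanine_bytes)}-byte source encoded at {bitrate_kbps} kbps>"

def get_rendition(recording_id, mezzanine_bytes, bitrate_kbps):
    key = (recording_id, bitrate_kbps)
    if key not in rendition_cache:
        # Only encode when a viewer actually asks for this bitrate,
        # instead of storing every format up front.
        rendition_cache[key] = transcode(mezzanine_bytes, bitrate_kbps)
    return rendition_cache[key]

source = b"\x00" * 1_000_000
print(get_rendition("rec-42", source, 3500))   # encoded on first request
print(get_rendition("rec-42", source, 3500))   # served from the rendition cache
```

The storage saving comes from keeping only the mezzanine copy; the cost shifts to compute, which is exactly the CPU-density problem Larbey turns to next.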

The challenge has always been the central processing unit (CPU) capacity needed to do that, and that’s where HPE and the Moonshot platform, which has great compute density, come in. We use the Intel media library for the transcoding, and it’s a really nice platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.

That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it -- and that’s a big change for the business case.

Gardner: It’s astonishing to think that that much encoding would need to happen on the fly for a mass market; that’s a tremendous and intense compute requirement.

Content popularity

Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. With most programs that people record, if they have not watched the recording within the first seven days, they are probably never going to watch it. That content in particular can then be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.

Gardner: So the more intelligent you can be about users’ behavior and usage patterns, the more efficient you can be. Intelligence seems to be the real key here.

Larbey: Yes, even within the CDN itself today we have a number of algorithms that predict content popularity. We want to maximize disk usage and keep the popular content on disk -- what’s the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We put a lot of work into algorithms that predict content popularity so that we can make sure we are optimizing the hardware platform accordingly.
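As an illustration of that principle, here is a toy popularity-aware cache admission and eviction policy in which a single long-tail request cannot displace well-watched content. The scoring scheme, half-life, and class names are invented for the example and are not Nokia's algorithms.

```python
import time

class PopularityCache:
    def __init__(self, capacity, half_life_s=3600.0):
        self.capacity = capacity
        self.half_life_s = half_life_s
        self.items = {}   # content_id -> (score, last_seen_timestamp)

    def _decayed(self, score, last_seen, now):
        # Exponentially decay request counts so popularity reflects recency.
        return score * 0.5 ** ((now - last_seen) / self.half_life_s)

    def request(self, content_id):
        now = time.time()
        if content_id in self.items:
            score, last = self.items[content_id]
            self.items[content_id] = (self._decayed(score, last, now) + 1.0, now)
            return "hit"
        # Cache miss: admit if there is room, or if the coldest cached item is
        # no more popular than a brand-new single request.
        if len(self.items) < self.capacity:
            self.items[content_id] = (1.0, now)
            return "miss (admitted)"
        coldest = min(self.items, key=lambda c: self._decayed(*self.items[c], now))
        if self._decayed(*self.items[coldest], now) <= 1.0:
            del self.items[coldest]
            self.items[content_id] = (1.0, now)
            return f"miss (evicted {coldest})"
        return "miss (served without caching)"

cache = PopularityCache(capacity=2)
for _ in range(5):
    cache.request("popular-show")
print(cache.request("long-tail-title"))    # admitted while capacity remains
print(cache.request("another-long-tail"))  # displaces the colder long-tail entry,
                                           # never the frequently requested show
```

A production CDN would predict popularity from far richer signals, but the trade-off is the same: keep the disk full of what viewers are most likely to request next.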

Gardner: Perhaps we can deepen our knowledge about all of this through some examples. Do you have examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions -- ones that help their cost structure but also deliver a far better user experience?

In-house control

Larbey: One of our largest customers is Liberty Global, with a large number of cable operators in a variety of countries across Europe. They were enhancing an IP service. They started with an Internet-based CDN, and that's how they were delivering their service. But recognizing the importance of gaining more control over costs and the quality of experience, they wanted to take that in-house and put the content on a private CDN.

We worked with them to deliver that technology. One of the things they noticed very quickly, which I don’t think they were expecting, was a dramatic reduction in the number of people calling in to complain because the stream had stopped or buffered. They enjoyed a big decrease in call-center calls as soon as they switched on our new CDN technology, which is quite an interesting use-case benefit.

We do a lot with Sky in the UK, which was also looking to migrate away from an Internet-based CDN service into something in-house so they could take more control over it and improve the users’ quality of experience. 

One of our customers in Canada, TELUS, reached cost payback in less than 12 months of deploying a private CDN, in terms of both the network savings and the Internet CDN cost savings.

Gardner: Before we close out, perhaps a look to the future and thinking about some of the requirements on business models as we leverage edge intelligence. What about personalization services, or even inserting ads in different ways? Can there be more of a two-way relationship, or a one-to-one interaction with the end consumers? What are the increased benefits from that high-performing, high-efficiency edge architecture? 

VR vision and beyond

Larbey: All of that generates more traffic. Moving from standard definition to high definition to 4K, and beyond 4K, generates more network traffic. Then take into account 360-degree video capability and virtual reality (VR) services, which are a focus for Nokia with our Ozo camera, and it’s clear that the data is just going to explode.

So being able to optimize, and keep optimizing, with new codec technology and new streaming technologies -- to constrain the growth of video demands on the network -- is essential; otherwise the traffic would simply overwhelm it.

There is a lot of innovation going on to optimize the content experience. People may not want to watch all their TV through VR headsets; that may not become the way you want to watch the latest episode of Game of Thrones. However, maybe there will be a uniquely created piece of content that’s a 360-degree add-on, and the real serious fans can go and look for it. I think we will see new types of content being created to address these different use cases.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       How IoT capabilities open new doors for Miami Telecoms Platform Provider Identidad

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything