Managing the next wave of IT disruption


“A world with millions of clouds distributed everywhere - that's the future as we see it.” – HPE CEO Antonio Neri

When cloud computing first began disrupting traditional IT over 10 years ago, who would have imagined millions of clouds would soon follow? According to industry experts, that is exactly where the industry is heading. The next wave of digital disruption will store and analyze data at the edge and in the cloud instantly, courtesy of millions of clouds distributed everywhere.

To cope with this tsunami of widely distributed data, businesses will need to go beyond on-premises environments and multi-cloud deployments. They must connect a hybrid system that stretches from the edge to the cloud and everywhere in-between. A recent report from 451 Research, From Edge to Cloud, Managing the Next Wave of IT Disruption, explains this new reality.


8 Essential Steps for Managing Edge-to-Cloud

The report details 8 essentials businesses need to consider as they enter the next wave of IT disruption.

1. Proactive cloud strategy

Organizations everywhere are pursuing a proactive hybrid cloud and multi-cloud strategy, balancing performance, cost, and compliance. At the same time, they are meeting specific needs of applications and workloads. All of this takes planning, along with time and skills – which are in short supply in today’s fast-paced, competitive environment. Organizations must seek ways to unify access to multiple clouds and simplify management.

2. Modernize and automate

Traditional, manual IT processes will become outdated as orchestration and automation tools transform the data center. Hyperconvergence and composability are providing the agility of public cloud through software-defined strategies, which increase automation and save time.

3. Take out the complexity

An ideal hybrid IT environment must be simple and quick to deploy and manage -- and capable of seamlessly bridging multiple workloads across traditional, private, and public cloud infrastructure. A hybrid cloud management platform must allow IT administrators or business managers to view all available infrastructure resources without requiring detailed knowledge of the underlying hardware.

4. Future-proof for emerging technologies

Hybrid IT must support not only OS, virtualization, and popular cloud options that businesses are using, but also fast-growing new alternatives. These include bare-metal and container platforms, along with extensions to the architecture, such as the distributed edge. Unified APIs will help with the integration of existing apps, making everything easier to manage.  

5. Deliver everything as a service

Enterprises that want to optimize resources are moving toward deploying everything as a service. Software-defined and hybrid cloud management help to integrate off-premises services with workloads that need to stay on-premises.

6. Deal with the data and gain insights faster

As data explodes from the edge to the cloud, software-defined services and hybrid cloud data management will become vital. Organizations will need to decide where to generate data, how to analyze it quickly, and what actions to take based on their analysis.

7. Control spending and utilization

Public cloud providers are expanding their portfolios to provide more options, which include more pricing models, increased instance sizes, smaller time increments, better reporting, and competitive pricing. Because the price of cloud is falling only marginally, providers differentiate themselves by offering flexibility in procurement and products. Yet, as more choice is offered, complexity also increases, driving the need for hybrid cloud management solutions. 
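The effect of smaller billing increments on spending is easy to quantify. Here is a minimal sketch, using purely illustrative rates and a 60-second minimum charge as assumptions, not any provider's actual pricing:

```python
import math

# Compare a short-lived workload's cost under hourly vs. per-second billing.
# The $0.40/hour rate and 60-second minimum are illustrative assumptions.

def cost_hourly(runtime_seconds, rate_per_hour):
    """Hourly billing rounds runtime up to whole hours."""
    return math.ceil(runtime_seconds / 3600) * rate_per_hour

def cost_per_second(runtime_seconds, rate_per_hour, minimum_seconds=60):
    """Per-second billing charges actual runtime, subject to a minimum charge."""
    return max(runtime_seconds, minimum_seconds) * rate_per_hour / 3600

runtime = 600  # a 10-minute batch job
print(f"hourly billing:     ${cost_hourly(runtime, 0.40):.3f}")
print(f"per-second billing: ${cost_per_second(runtime, 0.40):.3f}")
```

At a $0.40/hour rate, the hourly model bills a full hour ($0.400) while per-second billing charges only the 10 minutes actually used (about $0.067), which is why finer time increments matter for bursty workloads.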

8. Extend to the edge

Edge computing marks the beginning of a massive increase in a vast infrastructure of endpoints that will be part of tomorrow's IT. Mobile data centers such as cars, airplanes, trains, robots, and drones will multiply rapidly. Enterprise customers need to invest now by integrating their private and public cloud resources with an eye toward expanding to a highly distributed infrastructure in the future.

A world with millions of clouds distributed everywhere will soon become commonplace. While the rest of the world is moving toward the cloud, multitudes of smart endpoints are starting to force computing closer to the edge. Analytics, edge processing, artificial intelligence, and machine learning are also on the rise. Combining cloud and hybrid IT models with edge computing—all tied together with a multi-cloud management platform—is an important milestone in managing the next wave of IT disruption.


Read the full report from 451 Research, From Edge to Cloud, Managing the Next Wave of IT Disruption. Learn more about hybrid cloud management here.


Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged datacenter products and technologies which include HPE OneSphere – multi-cloud management, HPE SimpliVity – Hyperconverged Infrastructure, HPE Synergy – Composable Infrastructure and HPE OneView – Integrated Management.

To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.

Meet the InfoSight data scientists

Get ready to meet the team of data scientists behind HPE InfoSight. And learn how we’re leveraging this AI-driven analytics platform to improve the customer, support, and sales experience.

In 2012, InfoSight was launched with the intention of leveraging telemetry data to identify, predict, and solve customer issues with storage arrays. This in turn would allow our support team to consist primarily of Level 3 engineers.


The bold vision was put in place by our Chief Data Scientist at the time. In the early days of Nimble Storage, the data science team knew data would be valuable and had the foresight to begin collecting DNA about the hardware being deployed in the field. InfoSight enabled the team to leverage multiple years of detailed performance data pertaining to hard drives, solid state drives, fans, CPUs, power supplies, and network cards. This trove of data became the backbone of InfoSight and the data science team.

In the years since the launch of InfoSight, the data science team has played an integral role in differentiating our storage arrays from our competitors'. Leveraging installed base data allowed the team to programmatically open, remediate, and close cases on our customers' behalf. As the scope of data increased, the data science team has been able to create more sophisticated models and tools that enhance not only the customer experience but also the support and sales experiences. Additionally, the data used by the data science team has expanded up the stack, leveraging sensor data and configuration data from virtual machines connected to the various HPE platforms.

The lessons learned from our multi-petabyte analytics platform have directly improved the following experiences:

Customer experience

InfoSight data science has been providing customers with the ability to view capacity and performance predictions based on specific workloads, allowing them to avoid troublesome situations that could lead to poor performance. Various resource models have been created to help a customer identify periods of saturation, cache latency, and CPU usage.

Configuration data also plays an important role in the analytics provided to customers. Individual volumes can also become problematic, so we closely monitor overprovisioning on the customer array. When analyzing all the data for a particular customer, we are able to confidently use our models, based on real-world data, to make recommendations on upgrade needs and ensure customers continue receiving the performance they expect from our hardware.
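As a rough conceptual illustration (not InfoSight's actual models), capacity forecasting can start with something as simple as fitting a linear trend to historical utilization and projecting when it crosses a saturation threshold:

```python
# Fit a linear trend to weekly utilization samples and estimate when an
# array crosses a saturation threshold. Deliberately simplistic: real
# models also weigh workloads, sensor data, and seasonality.

def fit_linear_trend(samples):
    """Ordinary least-squares fit; samples are (week, percent_used) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def weeks_until(threshold, slope, intercept):
    """Weeks until utilization reaches the threshold (None if not growing)."""
    if slope <= 0:
        return None
    return (threshold - intercept) / slope

history = [(0, 52.0), (1, 54.1), (2, 55.9), (3, 58.2), (4, 60.1)]
slope, intercept = fit_linear_trend(history)
print(f"growing ~{slope:.1f} points/week; "
      f"~{weeks_until(90, slope, intercept):.0f} weeks until 90% full")
```

The value of a trained model over this naive fit is in recognizing which workload patterns make the trend accelerate or plateau before the simple projection would.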

Support experience

The InfoSight data science team has worked closely with our support organization since the beginning to provide insights into the install base performance of similar customer configurations.  The team partners with support to help create signatures that are used for proactive case monitoring.  As we feed our support data into our data lake we are able to look for similar patterns across various models of hardware or software releases to refine rules that are currently in place. 

InfoSight has also been a key tool in the support investigation process.  It allows a customer to gain insights into the operation of their HPE equipment while viewing the same information a technical support engineer will see.

Sales experience

The team has created tools which are used in the sales cycle to help right-size a customer environment, leveraging our real-world installed base knowledge. These sizing models are sophisticated in nature and take into account known performance metrics, sensor data, and latency. They allow users to input various criteria that matter to the customer and weigh cost versus performance, while providing recommendations that are based not just on marketing data but on known performance for particular workloads, array models, or disk types.


Historically, we have also been able to leverage our recommendation engine to generate leads and opportunities that allow the sales team to better serve our customers.

Get ready to get to know today’s InfoSight data science team

The HPE InfoSight data science team has a wide breadth of experience, ranging from recent graduates to veterans with more than 20 years in the field. The team has received multiple patents for its work in the data space. The data scientists work closely with our dedicated data engineers, support team, and product specialists. In the coming weeks, check this space for blogs written by the individual data scientists discussing the technology stack, research approach, real-world findings, and formula usage.




I am Marketing Manager for HPE InfoSight. Before that, I led the marketing for the deep learning startup, Nervana Systems, up until its acquisition by Intel in 2016.

Discover the data storage solution from HPE and Cohesity that collapses secondary storage silos

Discover how a joint data storage solution from HPE and Cohesity eliminates secondary storage silos and restores sanity to application chaos.


Without question, secondary storage has become a critical issue for IT decision makers. A previous blog discusses how disjointed secondary storage can cause mass data fragmentation, which can slow down the business and potentially raise compliance risk.

A poor secondary data storage solution can certainly create inefficient workflows, but that's just the beginning. Unmanaged secondary storage sprawl also results in:

  • Increased costs due to loss of economies of scale

  • Increased security risk, since you may not have a clear picture of what data assets you have and who has access to them

  • Increased compliance risk if you're holding regulated data without a crystal-clear understanding of what you have

  • Increased risk of poor decision making, as copies of data can float around with no clear indication of which one is the most current or accurate

  • Increased storage capacity consumption, as various secondary storage silos lose the capability to bring aggregated data reduction technologies to bear on the ongoing capacity challenge

It's clear that secondary storage has become a primary problem. What's needed is a solution that can eliminate the need for so many of these silos.

Now, a partnership between two enterprise IT firms, HPE and Cohesity, brings iPhone-like simplicity to the secondary storage dilemma, along with some incredible outcomes.

A data storage solution for secondary storage

The solution: HPE Solutions for Cohesity DataPlatform. Cohesity, a market-leading purveyor of a hyperconverged secondary storage software stack, provides software that runs atop HPE Apollo and ProLiant servers. This combination handily solves the mass data fragmentation problem. With Cohesity, your data is no longer massively siloed, copies of data are reduced or eliminated, and you reduce or halt the spread of data to every corner of your organization.

Why is this so important? In research performed by Vanson Bourne, 98 percent of respondents indicated that their secondary storage needs will grow in the next 18 months, with more than half saying that their storage needs will grow between 25 and 75 percent per year.

Fixing this without the right tools isn't easy. In fact, 26 percent of respondents said that they would rather quit their jobs than be tasked with fixing their company's secondary storage problems without the right tools in place.
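Those growth rates compound quickly, which is worth working through. A quick back-of-the-envelope calculation, assuming steady year-over-year growth:

```python
# How fast secondary storage compounds at the growth rates cited in the survey.

def capacity_multiple(annual_growth_pct, years):
    """Multiple of today's capacity after the given number of years."""
    return (1 + annual_growth_pct / 100) ** years

for rate in (25, 50, 75):
    print(f"{rate}%/year -> {capacity_multiple(rate, 3):.1f}x capacity in 3 years")
```

Even at the low end of the survey's range, capacity roughly doubles in three years; at the high end it grows more than fivefold, which is what makes aggregated data reduction so valuable.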

Hyperconverged secondary storage

You may be wondering how this HPE/Cohesity mashup can help you solve the storage growth issue, since you'll still have a lot of data to contend with. What Cohesity brings to the picture is a software solution that conglomerates all of an organization's secondary storage assets under one managed umbrella, instantly imbuing visibility into what used to be something of a black box. Where the real magic comes in, however, is through Cohesity's global deduplication, compression, and erasure coding features. These capabilities allow organizations to grow in a far more sustainable way than they can with a smattering of point solutions.
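Global deduplication is conceptually straightforward: store each unique chunk of data once, keyed by a hash of its content, and let every copy reference the same stored chunk. A toy sketch of the idea (in no way Cohesity's actual implementation, which uses variable-length chunking and operates cluster-wide):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk bytes

    def write(self, data, chunk_size=4):
        """Split data into fixed-size chunks; return the chunk keys in order."""
        keys = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(key, chunk)  # store only if unseen
            keys.append(key)
        return keys

    def read(self, keys):
        """Reassemble data from its chunk keys."""
        return b"".join(self.chunks[k] for k in keys)

store = DedupStore()
keys = store.write(b"AAAABBBBAAAA")  # three logical chunks, two unique
print(len(keys), "logical chunks,", len(store.chunks), "stored")
assert store.read(keys) == b"AAAABBBBAAAA"  # data round-trips intact
```

The payoff of doing this globally, rather than per silo, is that identical chunks held by different applications or sites collapse into one physical copy.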

This transition to a converged secondary storage architecture can't happen soon enough. Today, 48 percent of survey respondents say that their secondary storage sprawl means that their IT team spends 30 to 100 percent of their time cleaning up. Almost all respondents—98 percent—say that it's getting worse.

The HPE Solutions for Cohesity DataPlatform are data storage solutions that provide a broad set of capabilities, including:

  • The previously mentioned data reduction services, which slow your organization's need to constantly procure storage for secondary needs. (Bear in mind that the HPE Cohesity System isn't designed for primary workloads. For those, you should turn your attention to HPE Nimble Storage and HPE 3PAR solutions, which bring all-flash power to the storage equation.)

  • Data protection capabilities for workloads that operate in the secondary arena.

  • Full support for object file types. No longer do you need a separate solution to manage object-centric application needs. Cohesity can handle that for you as a part of its integrated software.

  • Test/dev support. For many, test and development environments were just fragments sitting on their own. With Cohesity, all of your test and dev activities can be managed via the integrated platform and then easily promoted into production once you're ready. This provides a seamless DevOps experience for your organization.

  • Data indexing so that you always know what you have and where you have it.

  • Encryption of your data both at rest and in flight.

If you're looking for a solution that's hyperconverged onto a single platform, the joint HPE and Cohesity solution is exactly what the CIO ordered. You can collapse a number of previously disparate secondary-centric services into one. And it's not limited to an on-premises approach.

Instant hybrid cloud

The HPE Solutions for Cohesity DataPlatform also help you accelerate your hybrid cloud journey. The Cohesity software optionally leverages the public cloud for myriad purposes, including as a backup target, a target for long-term data retention and archive, a storage tier, disaster recovery, and test/dev purposes.

All of the features described earlier, including data reduction, global indexing, and security, apply to the public cloud side of the Cohesity equation, too. The result is a seamless experience between your HPE-driven on-premises Cohesity solution and your favorite public cloud, whether that's AWS, Azure, Google, or another Cohesity cloud partner.

Ensuring a solid hardware foundation

With great software comes a need for great hardware from a trusted partner. Cohesity is a hyperconverged solution, meaning that the underlying hardware needs to work in concert with the solution—not as a hindrance. Even with a conglomeration of secondary data sources, secondary storage needs will still grow, and these data sources have and will continue to have importance to the organization. The only way that such a solution works is if it's running on hardware that is stable, fast, easily managed, and scalable. HPE brings that in spades with HPE Solutions for Cohesity DataPlatform.

A solution is only as good as the support behind it. The HPE and Cohesity solution is a full member of HPE's global supply chain, helping customers around the world reduce deployment risk and complexity and unlocking a fully optimized deployment and support experience. For those that are uncomfortable taking the plunge on their own, the HPE and Cohesity solution can be configured and deployed by HPE Pointnext consulting services.

As you consider your secondary application needs, look no further than the hyperconverged secondary storage solution from HPE and Cohesity to allow you to simplify your environment by conglomerating workloads into a single, scalable hybrid environment.


Meet Around the Storage Block blogger Scott D. Lowe, CEO and Lead Analyst for ActualTech Media.

Since 1994, Scott has helped organizations of all stripes solve critical technology challenges. He has served in a variety of technical roles, spent ten years as a CIO, and has spent another ten as a strategic IT consultant in higher education. Today, his company helps educate IT pros and decision makers and brings IT consumers together with the right enterprise IT solutions to help them propel their businesses forward.

Two groundbreaking partnerships help simplify the pathway to hybrid cloud




HPE partners with Google Cloud and Nutanix to provide customers greater choice and agility

Organizations everywhere are on a hybrid cloud expedition, one that can be complex but one that we at HPE aim to simplify, no matter which path our customers choose.

In recent years, we’ve taken many steps to accelerate customers’ hybrid cloud journeys. We’ve simplified our own organizational structure, creating a Hybrid IT business group that is integrated and easier for customers to navigate – because all infrastructure, software, and services capabilities are under one roof.

Plus, we have made extensive additions to our suite of products and services in order to give customers the choice and flexibility they crave for a consistent and optimal experience across public and private clouds. One of those is HPE SimpliVity, a key component of our Composable Cloud portfolio which enables customers to streamline IT operations with a fast, uncomplicated and efficient hyperconverged infrastructure (HCI) platform – and at a fraction of the cost.

And today, we’re further expanding our commitment to deliver the options and the experiences our customers desire for hybrid cloud by strategically aligning with two powerful industry players. Each is designed to extend our fast-growing and ever-evolving HPE GreenLake ecosystem.


To extend the HPE Composable Cloud portfolio, HPE and Google Cloud have entered into a strategic partnership to deliver hybrid cloud solutions that accelerate innovation and expand choice and agility for customers.  The partnership will provide customers with a consistent experience across public cloud and on premises environments.

As an initial part of this strategic agreement, HPE will offer two validated designs for Google Kubernetes Engine (GKE): one based on the HPE SimpliVity hyperconverged offering and one based on HPE Nimble Storage with HPE ProLiant. In addition, HPE will offer these solutions as a service through HPE GreenLake, HPE's fully managed consumption offering. Customers that choose this offering can run applications as a service in the Google Kubernetes Engine environment on premises and benefit from the same container-based design across their hybrid cloud.

In addition, we are partnering with Nutanix to deliver an integrated hybrid cloud as-a-service solution. The offer, which leverages Nutanix's Enterprise Cloud OS delivered through the HPE GreenLake as-a-service solution, will provide customers with a fully HPE-managed hybrid cloud.

As part of the agreement, Nutanix is also expanding platform choice to its customers and will enable its channel partners to directly sell HPE ProLiant DX and HPE Apollo DX servers combined with Nutanix’s Enterprise Cloud OS software, so that customers can purchase an integrated, turnkey appliance with built-in intelligence and security.

We believe our offering with Nutanix is an attractive choice that will reduce cost and complexity by offering a fully managed hybrid cloud infrastructure delivered as-a-service, to be deployed in customer data centers or in a customer’s co-location facility.

With tools like these, customers have the wherewithal to build hybrid and private clouds and transform their existing applications. They can provision workloads of all types, across virtualization, containers and bare metal, in minutes. And as a result, they will spend less time managing infrastructure and more time creating value-added services at a fraction of past operational costs.

Some might ask why we are pursuing such unique and collaborative offerings with other providers. But to me, the reason is obvious: For certain customers, in certain situations, we can offer stronger composable solutions, together. It’s all about delivering what the customer wants and needs, and we don’t mind sharing – while still continuing to compete vigorously across our entire portfolio of solutions.

The transformation to hybrid cloud can be difficult and confusing. With bold, customer-centric, and simplified initiatives – such as today’s two groundbreaking announcements with Google and Nutanix – HPE further strengthens our position as the strategic hybrid cloud transformation partner for any enterprise.

And in doing so, the pathway to hybrid cloud just became a lot easier to travel.

How HPC supports 'continuous integration of new ideas' for optimizing Formula 1 car design

Learn how Alfa Romeo Racing in Switzerland leverages the latest in IT to bring hard-to-find but momentous design improvements -- from simulation to victory. 

Data-driven and intelligent healthcare processes improve patient outcomes while making the IT increasingly invisible

A discussion on how healthcare providers employ new breeds of intelligent digital workspace technologies to improve doctor and patient experiences, make technology easier to use, and assist in bringing actionable knowledge resources to the integrated healthcare environment. 

Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman

Learn ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations. 

A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud

Learn from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure in the heavily-regulated financial services industry.

Where the rubber meets the road: How users see the IT4IT standard building competitive business advantage

A discussion on how the IT4IT Reference Architecture for IT management works in many ways for many types of organizations and the demonstrated business benefits that are being realized as a result.

IT kit sustainability: A business advantage and balm for the planet

Learn how a circular economy mindset both improves sustainability as a benefit to individual companies as well as the overall environment. 

Industrial-strength wearables combine with collaboration cloud to bring anywhere expertise to intelligent-edge work

Listen to this podcast discussion on how workers in harsh conditions are gaining ease in accessing and interacting with the best intelligence thanks to a cloud-enabled, hands-free, voice-activated, and multimedia wearable computer from HPE MyRoom and RealWear.

Why enterprises should approach procurement of hybrid IT in entirely new ways

Learn why changes in cloud deployment models are forcing a rethinking of IT economics, and maybe even the very nature of acquiring and cost-optimizing digital business services.

Manufacturer gains advantage by expanding IoT footprint from many machines to many insights

A discussion on how a Canadian maker of containers leverages the Internet of Things to create a positive cycle of insights and applied learning. 

Why enterprises struggle with adopting public cloud as a culture

Learn why a cultural solution to adoption may be more important than any other aspect of digital business transformation.

Who, if anyone, is in charge of multi-cloud business optimization?

Learn from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services and why the model around procurement must be accompanied by an updated organizational approach. 

A discussion with IT analyst Martin Hingley on the culmination of 30 years of IT management maturity

A discussion on how new maturity in management over all facets of IT amounts to a culmination of 30 years of IT operations improvement and ushers in an era of comprehensive automation, orchestration, and AIOps.

How global HCM provider ADP mines an ocean of employee data for improved talent management

Read how digital transformation for HCM provider ADP unlocks new business insights from vast data resources using big data analytics and artificial intelligence strategies. 

New Podcast: Fairygodboss Radio

In this episode of Fairygodboss Radio, Romy sits down with Jill to talk about career growth and the importance of getting outside of your comfort zone.

Jill Sweeney leads technical Knowledge Management for volume servers, high performance computing and artificial intelligence, and composable systems at Hewlett-Packard Enterprise (HPE). She and her team are transforming the technical experiences customers and partners have with HPE's products, solutions, and support information to foster positive customer business outcomes. 

Inside story: How HP Inc. moved from a rigid legacy to data center transformation

A discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time.

Dark side of cloud—How people and organizations are unable to adapt to improve the business


The next BriefingsDirect cloud deployment strategies interview explores how public cloud adoption is not reaching its potential due to outdated behaviors and persistent dissonance between what businesses can do and will do with cloud strengths.

Many of our ongoing hybrid IT and cloud computing discussions focus on infrastructure trends that support the evolving hybrid IT continuum. Today’s focus shifts to behavior -- how individuals and groups, both large and small, benefit from cloud adoption. 

It turns out that a dark side to cloud points to a lackluster business outcome trend. A large part of the disappointment has to do with outdated behaviors and persistent dissonance between what line of business (LOB) practitioners can do and will do with their newfound cloud strengths. 

We’ll now hear from an observer of worldwide cloud adoption patterns on why making cloud models a meaningful business benefit rests more with adjusting the wetware than any other variable.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore why cloud failures and cost overruns are dogging many enterprises is Robert Christiansen, Vice President, Global Delivery, Cloud Professional Services and Innovation at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening now with the adoption of cloud that makes the issue of how people react such a pressing concern? What’s bringing this to a head now?



Christiansen: Enterprises are on a cloud journey. They have begun their investment, they recognize that agility is a mandate for them, and they want to get those teams rolling. They have already done that to some degree. They may be moving a few applications, or they may be doing wholesale shutdowns of data centers. They are at many different stages of adoption.

What we are seeing is a lack of progress with regard to the speed and momentum of the adoption of applications into public clouds. It’s going a little slower than they’d like.

Gardner: We have been through many evolutions, generations, and even step-changes in technology. Most of them have been in a progressive direction. Why are we catching our heels now?

Christiansen: Cloud is a completely different modality, Dana. One of the things that we have learned here is that adoption of infrastructure that can be built from the ground-up using software is a whole other way of thinking that has never really been the core bread-and-butter of an infrastructure or a central IT team. So, the thinking and the process -- the ability to change things on the fly from an infrastructure point of view -- is just a brand new way of doing things. 

And we have had various fits and starts around technology adoption throughout history, but nothing at this level. The tool kits available today have completely changed and redefined how we go about doing this stuff.

Gardner: We are not just changing a deployment pattern, we are reinventing the concept of an application. Instead of monolithic applications and systems of record that people get trained on and line up around, we are decomposing processes into services that require working across organizational boundaries. The users can also access data and insights in ways they never had before. So that really is something quite different. Even the concept of an application is up for grabs.

Christiansen: Well, think about this. Historically, an application team or a business unit, let’s say in a bank, said, “Hey, I see an opportunity to reinvent how we do funding for auto loans.”

We worked with a company that did this. And historically, they would have had to jump through a bunch of hoops. They would justify the investment of buying new infrastructure, set up the various components necessary, maybe landing new hardware in the organization, and going into the procurement process for all of that. Typically, in the financial world, it takes months to make that happen.

Today, that same team using a very small investment can stand up a highly available redundant data center in less than a day on a public cloud. In less than a day, using a software-defined framework. And now they can go iterate and test and have very low risk to see if the marketplace is willing to accept the kind of solution they want to offer.

And that just blows apart the procedural-based thinking that we have had up to this point; it just blows it apart. And that thinking, that way of looking at stuff is foreign to most central IT people. Because of that emotion, going to the cloud has come in fits and starts. Some people are doing it really well, but a majority of them are struggling because of the people issue.

Gardner: It seems ironic, Robert, because typically when you run into too much of a good thing, you slap on governance and put in central command and control, and you throttle it back. But that approach subverts the benefits, too.

How do you find a happy medium? Or is there such a thing as a happy medium when it comes to moderating and governing cloud adoption?

Control issues

Christiansen: That’s where the real rub is, Dana. Let’s give it an analogy. At Cloud Technology Partners (CTP), we do cloud adoption workshops where we bring in all the various teams and try to knock down the silos. They get into these conversations to address exactly what you just said. “How do we put governance in place without getting in the way of innovation?”

It’s a huge, huge problem, because the central IT team’s whole job is to protect the brand of the company and keep the client data safe. They provide the infrastructure necessary for the teams to go out and do what they need to do.

When you have a structure like that but supplied by the public clouds -- Amazon Web Services (AWS), Google, and Microsoft Azure -- you still have the ability to put a lot of those controls in the software. Before, it was done either manually or at least semi-manually.


The challenge is that the central IT teams are not necessarily set up with the skills to make that happen. They are not by nature software development people. They are hardware people. They are rack and stack people. They are people who understand how to stitch this stuff together -- and they may use some automation. But as a whole it’s never been their core competency. So therein lies the rub: How do you convert these teams over to think in that new way?

At the same time, you have the pressing issue of, “Am I going to automate myself right out of a job?” That’s the other part, right? That’s the big, 800-pound gorilla sitting in the corner that no one wants to talk about. How do you deal with that?

Gardner: Are we talking about private cloud, public cloud, hybrid cloud, hybrid IT -- all the above when it comes to these trends?

Public perceptions 

Christiansen: It’s mostly public cloud that you see the perceived threats. The public cloud is perceived as a threat to the current way of doing IT today, if you are an internal IT person. 

Let’s say that you are a classic compute and management person. You actually split across both storage and compute, and you are able to manage and handle a lot of those infrastructure servers and storage solutions for your organization. You may be part of a team of 50 in a data center or for a couple of data centers. Many of those classic roles literally go away with a public cloud implementation. You just don’t need them. So these folks need to pivot or change into new roles or reinvent themselves.

Let’s say you’re the director of that group and you happen to be five years away from retirement. This actually happened to me, by the way. There is no way these folks want to give up the reins right before their retirement. They don’t want to reinvent their roles just before they’re going to go into their last years. 

They literally said to me, “I am not changing my career this far into it for the sake of a public cloud reinvention.” They are hunkering down, building up the walls, and slowing the process. This seems to be an undercurrent in a number of areas where people just don’t want to change. They don’t want any differences.

Gardner: Just to play the devil’s advocate, when you hear things around serverless, when we see more operations automation, when we see AIOps bring artificial intelligence (AI) and machine learning (ML) to operations -- it does get sort of scary. 

You’re handing over big decisions within an IT environment on whether to use public or private, some combination, or multicloud in some combination. These capabilities are coming into fruition.

Maybe we do need to step back and ask, “Just because you can do something, should you?” Isn’t that more than just protecting my career? Isn’t there a need for careful consideration before we leap into some of these major new trends?

Transform fear into function 

Christiansen: Of course, yeah. It’s a hybrid world. There are applications where it may not make sense to be in the public cloud. There are legacy applications. There are what I call centers of gravity that are database-centric; the business runs on them. Moving them and doing a big lift over to a public cloud platform may not make financial sense. There is no real benefit to it to make that happen. We are going to be living between an on-premises and a public cloud environment for quite some time. 

The challenge is that people want to create a holistic view of all of that. How do I govern it in one view and under one strategy? And that requires a lot of what you are talking about, being more cautious going forward.

And that’s a big part of what we have done at CTP. We help people establish that governance framework, of how to put automation in place to pull these two worlds together, and to make it more seamless. How do you network between the two environments? How do you create low-latency communications between your sources of data and your sources of truth? Making that happen is what we have been doing for the last five or six years.


The challenge we have, Dana, is that once we have established that -- we call that methodology the Minimum Viable Cloud (MVC) -- and put all of that structure, rigor, and security in place, those needed governance frameworks are well-established. Yet we still run into the problems of motion and momentum.

Gardner: Before we dig into why the cloud adoption inertia still exists, let’s hear more about CTP. You were acquired by HPE not that long ago. Tell us about your role and how that fits into HPE.

CTP: A cloud pioneer

Christiansen: CTP was established in 2010. Originally, we were doing mostly private cloud, OpenStack stuff, and we did that for about two to three years, up to 2013.


I was one of the first 20 employees. It’s a Boston-based company, and I came over with the intent to bring more public cloud into the practice. We were seeing a lot of uptick at the time. I had just come out of another company, called Cloud Nation, that I owned. I sold that company; it was an Amazon-based, Citrix-for-rent company. So imagine, if you would: you swipe a credit card and you get NetScaler, XenApp, and XenDesktop running on top of AWS, way back in 2012 and 2013. 

I sold that company, and I joined CTP. We grew the practice of public cloud on Google, Azure, and AWS over those years and we became the leading cloud-enabled professional services organization in the world.

We were purchased by HPE in October 2017, and my role since that time is to educate, evangelize, and press deeply into the methodologies for adopting public cloud in a holistic way so it works well with what people have on-premises. That includes the technologies, economics, strategies, organizational change, people, security, and establishing a DevOps practice in the organization. These are all within our world.

We do consultancy and professional services advisory types of things, but on the same coin, we flip it over, and we have a very large group of engineers and architects who are excellent on keyboards. These are the people who actually write software code to help make a lot of this stuff automated to move people to the public clouds. That’s what we are doing to this day.

Gardner: We recognize that cloud adoption is a step-change, not an iteration in the evolution of computing. This is not going from client/server to web apps and then to N-Tier architectures. We are bringing services and processes into a company in a whole new way and refactoring that company. If you don’t, the competition or a new upstart unicorn company is going to eat your lunch. We certainly have seen plenty of examples of that. 

So what prevents organizations from both seeing and realizing the cloud potential? Is this a matter of skills? Is it because everyone is on the cusp of retirement and politically holding back? What can we identify as the obstacles to overcome to break that inertia?

A whole new ball game

Christiansen: From my perspective, we are right in the thick of it. CTP has been involved with many Fortune 500 companies through this process.

The technology is ubiquitous, meaning that everybody in the marketplace now can own pretty much the same technology. Dana, this is a really interesting thought. If a team of 10 Stanford graduates can start up a company to disrupt the rental car industry, which somebody has done, by the way, and they have access to technologies that were only once reserved for those with hundreds of millions of dollars in IT budgets, you have all sorts of other issues to deal with, right?

So what’s your competitive advantage? It’s not access to the technologies. The true competitive advantage now for any company is the people and how they consume and use the technology to solve a problem. Before [the IT advantage] was reserved for those who had access to the technology. That’s gone away. We now have a level playing field. Anybody with a credit card can spin up a big data solution today – anybody. And that’s amazing, that’s truly amazing.

For an organization that had always fallen back on their big iron or infrastructure -- those processes they had as their competitive advantage -- that now has become a detriment. That’s now the thing that’s slowing them down. It’s the anchor holding them back, and the processes around it. That rigidity of people and process locks them into doing the same thing over and over again. It is a serious obstacle. 

Untangle spaghetti systems 

Christiansen: Another major issue came very much as a surprise, Dana. We observed it over the last couple of years of doing application inventory assessments for people considering shutting down data centers. They had come to see the applications held in those data centers as no longer being competitive assets. And they asked, “Hey, can we shut down a data center and move a lot of it to the public cloud?”

We at CTP were hired to do what are called application assessments, economic evaluations. We determine if there is a cost validation for doing a lift-and-shift [to the public cloud]. And the number-one obstacle was inventory. The configuration management databases (CMDBs), which hold the inventory of where all the servers are and what’s running on them for these organizations, were wholly out of date. Many of the CMDBs just didn’t give us an accurate view of it all. 

When it came time to understand what applications were actually running inside the four walls of the data centers -- nobody really knew. As a matter of fact, nobody really knew what applications were talking to what applications, or how much data was being moved back and forth. They were so complex; we would be talking about hundreds, if not thousands, of applications intertwined with themselves, sharing data back and forth. And nobody inside organizations understood which applications were connected to which, how many there were, which ones were important, and how they worked.


Years of managing that world have created such a spaghetti mess behind those walls that it’s been exceptionally difficult for organizations to get their hands around what can be moved and what can’t. There is just so much integration within the systems.

The third part of this trifecta of obstacles to moving to the cloud is, as we mentioned, people not wanting to change their behaviors. They are locked in to the day-to-day motion of maintaining those systems and are not really motivated to go beyond that.

Gardner: I can see why they would find lots of reasons to push off to another day, rather than get into solving that spaghetti maze of existing data centers. That’s hard work, it’s very difficult to synthesize that all into new apps and services.

Christiansen: It was hard enough just virtualizing these systems, never mind trying to pull it all apart.

Gardner: Virtualizing didn’t solve the larger problem, it just paved the cow paths, gained some efficiency, reduced poor server utilization -- but you still have that spaghetti, you still have those processes that can’t be lifted out. And if you can’t do that, then you are stuck.

Christiansen: Exactly right.

Gardner: Companies for many years have faced other issues of entrenchment and incumbency, which can have many downsides. Many of them have said, “Okay, we are going to create a Skunk Works, a new division within the company, and create a seed organization to reinvent ourselves.” And maybe they begin subsuming other elements of the older company along the way.

Is that what the cloud and public cloud utilization within IT is doing? Why wouldn’t that proof of concept (POC) and Skunk Works approach eventually overcome the digital transformation inertia?

Clandestine cloud strategists

Christiansen: That’s a great question, and I immediately thought of a client who we helped. They have a separate team that re-wrote or rebuilt an application using serverless on Amazon. It’s now a fairly significant revenue generator for them, and they did it almost two and-a-half years ago.

It uses a few cloud servers, but mostly they rely on the messaging backbones and serverless platform-as-a-service (PaaS) layers of AWS to solve their problem. They are a consumer credit company and have a lot of customer-facing applications that they generate revenue from on this new platform.

The team behind the solution educated themselves. They were forward-thinkers and saw the changes in public cloud. They received permission from the business unit to break away from the central IT team’s standard processes, and they completely redefined the whole thing.

The team really knocked it out of the park. So, high success. They were able to hold it up and tried to extend that success back into the broader IT group. The IT group, on the other hand, felt that they wanted more of a multicloud strategy. They weren’t going to have all their eggs in Amazon. They wanted to give the business units options, of either going to Amazon, Azure, or Google. They wanted to still have a uniform plane of compute for on-premises deployments. So they brought in Red Hat’s OpenShift, and they overlaid that, and built out a [hybrid cloud] platform.

Now, I personally had no direct experience with the Red Hat platform, but I had heard good things about it. I had heard of people who adopted it and saw benefits. In this particular environment, though, Dana, the business units themselves rejected it.

The core Amazon team said, “We are not doing that because we’re skilled in Amazon. We understand it, we’re using AWS CloudFormation. We are going to write code to the applications, we are going to use Lambda whenever we can.” They said, “No, we are not doing that [hybrid and multicloud platform approach].”

Other groups then said, “Hey, we’re an Azure shop, and we’re not going to be tied up around Amazon because we don’t like the Amazon brand.” All that political stuff arose; they just used Azure and went shooting off on their own, and did not use the OpenShift platform because, at the time, the tool stacks were not quite what they needed to solve their problems.

The company ended up getting a fractured view. We recommended that they go on an education path, to bring the people up to speed on what OpenShift could do for them. Unfortunately, they opted not to do that -- and they are still wrestling with this problem.

CTP and I personally believe that this was an issue of education, not technology, and not opportunity. They needed to lean in, sponsor, and train their business units. They needed to teach the app builders and the app owners on why this was good, the advantages of doing it, but they never invested the time. They built it and hoped that the users would come. And now they are dealing with the challenges of the blowback from that.

Gardner: What you’re describing, Robert, sounds an awful lot like basic human nature, particularly with people in different or large groups. So, politics, right? The conundrum is that when you have a small group of people, you can often get them on board. But there is a certain cut-off point where the groups are too large, and you lose control, you lose synergy, and there is no common philosophy. It’s Balkanization; it’s Europe in 1916.

Christiansen: Yeah, that is exactly it.

Gardner: Very difficult hurdles. These are problems that humankind has been dealing with for tens of thousands of years, if not longer. So, tribalism, politics. How does a fleet organization learn from what software development has come up with to combat some of these political issues? I’m thinking of Agile methodologies, scrums, and having short bursts, lots of communication, and horizontal rather than command-and-control structures. Those sorts of things.

Find common ground first

Christiansen: Well, you nailed it. How you get this done is the question. How do you get some kind of agility throughout the organization to make this happen? And there are successes out there, whole organizations, 4,000 or 5,000 or 6,000 people, have been able to move. And we’ve been involved with them. The best practices that we see today, Dana, are around allowing the businesses themselves to select the platforms to go deep on, to get good at.

Let’s say you have a business unit generating $300 million a year with some service. They have money, they are paying the IT bill. But they want more control; they want more of the “dev” from the DevOps process.


They are going to provide much of that on their own, but they still need core common services from the central IT team. This is the most important part. They need the core services, such as identity and access management, key management, logging and monitoring, and networking. There is a set of core functions that the central team must provide.

And we help those central teams define and govern those services. Then, the business units [have cloud model choice and freedom as long as they] consume those core services -- the access and identity process, the key management services; they encrypt what they are supposed to, and they use the networking functions. They set up separation of the services appropriately, based on standards. And they use automation to keep them safe. Automation prevents them from doing silly things, like leaving unencrypted AWS S3 buckets open to the public Internet.
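Guardrails like the S3 example lend themselves to small, automated policy checks. A minimal sketch in Python -- `bucket_is_public` is a hypothetical helper, not part of any specific CTP or HPE tool; the input simply mirrors the shape of an S3 `GetBucketAcl` response:

```python
# Grantee groups that expose a bucket publicly, as they appear in S3 ACLs.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_is_public(acl):
    """Return True if any ACL grant opens the bucket to a public group.

    `acl` follows the shape of an S3 GetBucketAcl response:
    {"Grants": [{"Grantee": {"Type": "Group", "URI": ...}, "Permission": ...}]}
    """
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            return True
    return False
```

A check like this can run continuously against every account, flagging or reverting public grants before they become an incident.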

You now have software that does all of that automation. You can turn those tools on and then it’s like a playground, a protected playground. You say, “Hey, you can come out into this playground and do whatever you want, whether it’s on Azure or Google, or on Amazon or on-premises.”

“Here are the services, and if you adopt them in this way, then you, as the team, can go deep. You can use application programming interface (API) calls, you can use CloudFormation or Python or whatever happens to be the scripting language you want to build your infrastructure with.”
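That software-defined approach can be as simple as generating infrastructure templates from code. A hedged sketch, assuming a team that has standardized on CloudFormation; `encrypted_bucket_template` is an illustrative helper, not a product feature:

```python
import json

def encrypted_bucket_template(bucket_name):
    """Render a minimal CloudFormation template for an encrypted S3 bucket."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    # Default server-side encryption, so nothing lands unencrypted.
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                        ]
                    },
                },
            }
        },
    }

print(json.dumps(encrypted_bucket_template("team-data-bucket"), indent=2))
```

Because the template is just data produced by code, the central team's standards (encryption, tagging, naming) can be baked in once and reused by every business unit.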

Then you have the ability to let those teams do what they want. If you notice, what it doesn’t do is overlay a common PaaS layer, which isolates the hyperscale public cloud provider from your work. That’s a whole other food fight, religious battle, Dana, around lock-in and that kind of conversation.

Gardner: Imposing your will on everyone else doesn’t seem to go over very well.

So what you’re describing, Robert, is a right-sizing for agility, and fostering a separate-but-equal approach. As long as you can abstract to the services level, and as long as you conform to a certain level of compliance for security and governance -- let’s see who can do it better. And let the best approach to cloud computing win, as long as your processes end up in the right governance mix.

Development power surges

Christiansen: People have preferences, right? Come on! There’s been a Linux and .NET battle since I have been in business. We all have preferences, right? So, how you go about coding your applications is really about what you like and what you don’t like. Developers are quirky people. I was a C programmer for 14 years, I get it.

The last thing you want to do is completely blow up your routines by taking development back and starting over with a whole bunch of new languages and tools. Then they’re trying to figure out how to release code, test code, and build up a continuous integration/continuous delivery pipeline that is familiar and fast.

These are really powerful personal stories that have to be addressed. You have to understand that. You have to understand that the development community now has the power -- they have the power, not the central IT teams. That shift has occurred. That power shift is monumental across the ecosystem. You have to pay attention to that.

If the people don’t feel like they have a choice, they will go around you, which is where the problems are happening.

Gardner: I think the power has always been there with the developers inside of their organizations. But now it’s blown out of the development organization and has seeped up right into the line of business units.

Christiansen: Oh, that’s a good point.

Gardner: Your business strategy needs to consider all the software development issues, and not just leave them under the covers. We’re probably saying the same thing. I just see the power of development choice expanding, but I think it’s always been there.

But that leads to the question, Robert, of what kind of leadership person can be mindful of a development culture in an organization, and also understand the line of business concerns. They must appreciate the C-suite strategies. If you are a public company, that means keeping Wall Street happy, and keeping customer expectations met -- because those are always going up nowadays.

It seems to me we are asking an awful lot of a person or small team that sits at the middle of all of this. It seems to me that there’s an organizational and a talent management deficit, or at least something that’s unprecedented.

Tech-business cross-pollination

Christiansen: It is. It really is. And this brings us to a key piece of our conversation, and that is talent enablement. It is now well beyond how we’ve classically looked at it.

Some really good friends of mine run learning and development organizations and they have consulting companies that do talent and organizational change, et cetera. And they are literally baffled right now at the dramatic shift in what it takes to get teams to work together.

In the more flexible-thinking communities of up-and-coming business, a lot of the folks that start businesses today are technology people. They may end up in the coffee industry or in the restaurant industry, but these folks know technology. They are not unaware of what they need to do to use technology.

So, business knowledge and technology knowledge are mixing together. They are good when they get swirled together. You can’t live with one and not have the other.

For example, a developer needs to understand the implications of economics when they write something for cloud deployment. If they build an application that does not economically work inside the constructs of the new world, that’s a bad business decision, but it’s in the hands of the developer.

It’s an interesting thing. We’ve had that need for developer-empowerment before, but then you had a whole other IT group put restrictions on them, right? They’d say, “Hey, there’s only so much hardware you get. That’s it. Make it work.” That’s not the case anymore, right?


At the same time, you now have an operations person involved with figuring out how to architect for the cloud, and they may think that the developers do not understand what has to come together.

As a result, we have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud.

We have found that much of an organization’s delay in rolling this out is because the people who are consuming the cloud are not ready or knowledgeable enough on how to maximize their investment in cloud. This is not for the people building up those core services that I talked about, but for the consumers of the services, the business units.

We are rolling that out later this year, a full Talent Enablement track around those new roles.

Gardner: This targets the people in that line of business, decision-making, planning, and execution role. It brings them up to speed on what cloud really means, how to consume it. They can then be in a position of bringing teams together in ways that hadn’t been possible before. Is that what you are getting at?

Teamwork wins 

Christiansen: That’s exactly right. Let me give you an example. We did this for a telecommunications company about a year ago. They recognized that they were not going to be able to roll out their common core services.

The central team had built out about 12 common core services, and they knew almost immediately that the rest of the organization, the 11 other lines of business, were not ready to consume them.

They had been asking for it, but they weren’t ready to actually drive this new Ferrari that they had asked for. There were more than 5,000 people who needed to be up-skilled on how to consume the services that a team of about 100 people had put together.

Now, these are not classic technical services like AWS architecture, security frameworks, or access control lists (ACLs) and network ACLs (NACLs) for networking traffic, or how you connect back and backhaul, that kind of stuff. None of that.

I’m talking about how to make sure you don’t get a cloud bill that’s out of whack. How do I make sure that my team is actually developing in the right way, in a safe way? How do I make sure my team understands the services we want them to consume so that we can support it?
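The runaway-bill concern in particular lends itself to a programmatic guardrail. A minimal sketch under simplifying assumptions (a flat 30-day month and a single monthly budget); `flag_overspend` is hypothetical, not any vendor's feature:

```python
def flag_overspend(daily_costs, monthly_budget):
    """Flag the first day on which cumulative spend exceeds the pro-rated budget.

    daily_costs: per-day spend figures for the month so far.
    Returns the 1-based day number of the breach, or None if spend is on track.
    """
    days_in_month = 30  # simplifying assumption for this sketch
    running = 0.0
    for day, cost in enumerate(daily_costs, start=1):
        running += cost
        # Compare cumulative spend against the budget pro-rated to this day.
        if running > monthly_budget * day / days_in_month:
            return day
    return None
```

Wired to daily billing exports, a check like this alerts a team days into an overrun rather than at the end of the month.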

It was probably 10 or 12 basic use domains. The teams simply didn’t understand how to consume the services. So we helped this organization build a training program to bring up the skills of these 4,000 to 5,000 people.

Now think about that. That has to happen in every global Fortune 2000 company, where you may only have a central team of 100, and maybe 50 cloud people. But they may need to turn over the services to 1,000 people.

We have a massive, massive, training, up-skilling, and enablement process that has to happen over the next several years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
