BRIEFINGS DIRECT PODCASTS with Dana Gardner
Since 1999, Dana Gardner has emerged as a leading identifier of software productivity trends and new IT business value opportunities. He is frequently quoted as a thought leader in top news and IT industry publications such as The New York Times, The Wall Street Journal, The Boston Globe, The Washington Post, Business Week, San Francisco, Reuters, Associated Press, MSNBC.com, CNN.com and more.
Gardner is well known as a creative thought leader on enterprise software solutions, strategies, partnerships, and markets. As a skilled multi-media communicator and evangelist, he has written dozens of industry reports on the business benefits of IT and Internet innovation for advancing general productivity, improving employee efficiency, and reducing total IT costs.
Gardner tracks and analyzes a critical set of enterprise software technologies and business transformation issues: Cloud computing, data center modernization, software-defined data centers, virtualization, big data analysis platforms, business intelligence, application development tools and application delivery optimization techniques, as well as mobile and virtual desktop strategies. His specific interests include enterprise architecture, IT as a service, open source strategies, social media, and mobile-first DevOps initiatives.
As founder and president of Interarbor Solutions, Gardner has taken a strong record in consulting services for IT vendors, carriers, and enterprises to yet another level: The exciting new communications capabilities around Internet social media. Businesses of all kinds are quickly exploiting blogs, podcasts and video-podcasts for education, communications and viral outreach. Gardner practices what he preaches, as a frequent blogger on ZDNet and his personal blog, as well as a podcaster. He began podcasting as a founding member of the Gillmor Gang in 2005.
Stay with us to learn about unlocking new choices and innovation for the next generations of supercomputing with Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at Hewlett Packard Enterprise (HPE), and Professor Mark Parsons, Director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
The next BriefingsDirect enterprise storage partnership innovation discussion explores how the best of startup culture and innovation can be married to the global reach, maturity, and solutions breadth of a major IT provider.
To learn more about the latest in total storage efficiency strategies and HPE’s Pathfinder program we welcome Rob Salmon, President and Chief Operating Officer at Cohesity in San Jose, California, and Paul Glaser, Vice President and Head of the Pathfinder Program at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Texmark has been combining the best of operational technology (OT) with IT and now Internet of Things (IoT) to deliver data-driven insights that promote safety, efficiency, and unparalleled sustained operations.
Stay with us now as we hear how a team approach -- including the plant operators, consulting experts, and the latest in hybrid IT systems -- joins forces for rapid process and productivity optimization results.
To learn how, we are joined by our panel: Linda Salinas, Vice President of Operations at Texmark Chemicals, Inc. in Galena Park, Texas; Stan Galanski, Senior Vice President of Customer Success at CB Technologies (CBT) in Houston; and Peter Moser, IoT and Artificial Intelligence (AI) Strategist at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
How HPC Supports the 'Continuous Integration of New Ideas’ to Optimize Formula 1 Car Designs
Wed, 27 March 2019
The next BriefingsDirect extreme use-case for high-performance computing (HPC) examines how the strictly governed redesign of Formula 1 race cars relies on data center innovation to coax out the best in fluid dynamics analysis and refinement.
We’ll now hear how Alfa Romeo Racing (formerly Alfa Romeo Sauber F1 Team) in Hinwil, Switzerland leverages the latest in IT to bring hard-to-find but momentous design improvements -- from simulation, to wind tunnel, to test track, and ultimately, to victory. The goal: To produce cars that are glued to the asphalt and best slice through the air.
Here to describe the challenges and solutions from the compute-intensive design of Formula 1 cars is Francesco Del Citto, Head of Computational Fluid Dynamics Methodology for Alfa Romeo Racing, and Peter Widmer, Worldwide Category Manager for Moonshot/Edgeline and Internet of Things (IoT) at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Podcast: The costly downside of complex cloud environments
Get insights on the spiraling complexity and costs of unwieldy hybrid IT environments and how new tools and approaches can help you regain control.
[Editor's note: This podcast was recorded on Aug. 17, 2018.]
With the promise of greater agility, cloud—in all its iterations—is a fact of life in business today. But many organizations have put the cart before the horse, with the horse being hybrid cloud management.
“I think the rush to adoption of public cloud—and the focus on agility over cost efficiency—has driven a predominance of the culture of, ‘We are going to provide visibility and report and guide, but we are not going to control because of the business value of that agility,’” says Rhett Dillingham, vice president and senior analyst at Moor Insights & Strategy.
In this Voice of the Analyst podcast, Dillingham discusses how scattershot approaches to cloud threaten to wipe out its promised benefits and the management solutions that can help organizations gain control of their hybrid cloud environments.
Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise's Voice of the Analyst podcast series. I’m Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into successful digital transformation.
This hybrid IT management strategies interview explores how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity amid spiraling—and even unknown—costs.
We’ll hear now from an IT industry analyst about what causes unwieldy cloud use, and how new tools, processes, and methods are bringing insights and actionable analysis to regain control over hybrid IT sprawl.
Please join me in welcoming our guest, Rhett Dillingham, vice president and senior analyst at Moor Insights & Strategy. Welcome, Rhett.
Rhett Dillingham: Thank you. Glad to be with you.
Gardner: Rhett, what are some of the drivers making hybrid and multicloud adoption so complex?
Dillingham: Regardless of how an enterprise has invested in public and private cloud use for the last decade, a lot of them ended up in a similar situation. They have a footprint on at least one or multiple public clouds. This is in addition to their private infrastructure, in whatever degree that private infrastructure has been cloud-enabled and turned into a cloud API-available infrastructure to their developers.
They have this footprint then across the hybrid infrastructure and multiple public clouds. Therefore, they need to decide how they are going to orchestrate on those various infrastructures—and how they are going to manage in terms of control costs, security, and compliance. They are operating cloud by cloud, versus operating as a consolidated group of infrastructures that use common tooling. This is the real wrestling point for a lot of them, regardless of how they got here.
Gardner: Where are we in this as an evolution? Are things going to get worse before they get better in terms of these levels of complexity and heterogeneity?
Dillingham: We’re now at the point where this is so commonly recognized that we are well into the late majority of adopters of public cloud. The vast majority of the market is in this situation. From an enterprise market perspective, it’s going to get worse before it gets better.
We are also at the inflection point of requiring orchestration tooling, particularly with the advent of containers. Container orchestration is getting more mature in a way that is ready for broad adoption and trust by enterprises, so they can make bets on that technology and the platforms based on them.
On the control side, we’re still in the process of sorting out the tooling. You have a number of vendors innovating in the space, and there have been a number of startup efforts. Now, we’re seeing more of the historical infrastructure providers invest in the software capabilities and turning those into services—whether it’s Hewlett Packard Enterprise, VMware, or Cisco, they are all making serious investments into the control aspect of hybrid IT. That’s because their value is private cloud but extends to public cloud with the same need for control.
Gardner: You mentioned containers, and they provide a common denominator approach so that you can apply them across different clouds, with less arduous and specific work than deploying without containerization. The attractiveness of containers comes because the private cloud people aren’t going to help you deal with your public cloud deployment issues. And the public clouds aren’t necessarily going to help you deal with other public clouds or private clouds. Is that why containers are so popular?
Dillingham: If you go back to the fundamental basis of adoption of cloud and the value proposition, it was first and foremost about agility—more so than cost efficiency. Containers are a way of extending that value, and getting much deeper into speed of development, time to market, and for innovation and experimentation.
Containerization is an improvement geared around that agility value that furthers cloud adoption. It is not a stark difference from virtual machines, in the sense of how the vendors support and view it.
So, I think a different angle on that would be that the use of VMs in public cloud was step one, containers was a significant step two that comes with an improved path to the agility and speed value. The value the vendor ecosystem is bringing with the platforms—and how that works in a portable way across hybrid infrastructures and multicloud—is more easily delivered with containers.
There’s going to be an enterprise world where orchestration runs specific to cloud infrastructure, public versus private, but different on various public clouds. And then there is going to be more commonality with containers by virtue of the Kubernetes project and Cloud Native Computing Foundation (CNCF) portfolio.
That’s going to deliver for new applications—and those lifted and shifted into containers—much more seamless use across these hybrid infrastructures, at least from the control perspective.
Gardner: We seem to be at a point where the number of cloud options has outstripped the ability to manage them. In a sense, the cart is in front of the horse; the horse being hybrid cloud management. But we are beginning to see more such management come to the fore. What does this mean in terms of previous approaches to management?
In other words, a lot of organizations already have management for solving a variety of systems heterogeneity issues. How should the new forms of management for cloud have a relationship with these older management tools for legacy IT?
Dillingham: That is a big question for enterprises. How much can they extend their existing toolsets to public cloud?
A lot of the vendors from the private [infrastructure] sector invested in delivering new management capabilities, but that isn’t where many started. I think the rush to adoption of public cloud—and the focus on agility over cost efficiency—has driven a predominance of the culture of, “We are going to provide visibility and report and guide, but we are not going to control because of the business value of that agility.”
And the tools have grown up as a delivery on that visibility, versus the control of the typical enterprise private infrastructure approach, which is oriented toward continuity rather than disruption of the software. That is an advantage to vendors in those different spheres, and I see that continuing.
Gardner: You mentioned both agility and cost as motivators for going to hybrid cloud, but do we get to the point where the complexity and heterogeneity spawn a lack of insight and control? Do we get to the point where we are no longer increasing agility? And that means we are probably not getting our best costs either.
Are we at a point where the complexity is subverting our agility and our ability to have predictable total costs?
Growing up in the cloud
Dillingham: We are still a long way from maturity in effective use of cloud infrastructure. We are still at a point where just understanding what is optimal is pretty difficult across the various purchase and consumption options of public cloud by provider, and in comparing that to an accurate cost model for private infrastructure. So, the tooling needs to be in place to support this.
There has been a lot of discussion recently about HPE OneSphere from HPE, where they have invested in delivering some of this comparability and the analytics to enable better decision-making. I see a lot of innovation in that space—and that’s just the tooling.
There is also the management of the services, where the cloud managed service provider market is continuing to develop beyond just a brokering orientation. There is more value now in optimizing an enterprise’s footprint across various cloud infrastructures on the basis of optimal agility. And also creating value from services that can differentiate among different infrastructures—be it Amazon Web Services versus Azure, and Google, and so forth—and provide the cost comparisons.
Gardner: Given that it’s important to show automation and ongoing IT productivity, are these new management tools including new levels of analytics, maybe even predictive insights, into how workloads and data can best become fungible—and moved across different clouds—based on the right performance and/or cost metrics?
Is that part of the attractiveness to a multi- and cross-cloud management capability? Does hybrid cloud management become a slippery slope toward impressive analytics and/or performance-oriented automation?
Dillingham: We’ve had investment in the tooling from the cloud providers, the software providers, and the infrastructure providers. Yet, the insights have come more from the professional services realm than they have from the tooling realm. That’s provided a feedback loop that can now be applied across hybrid and multicloud in a way that hasn’t come from the public cloud provider tools themselves.
So, where I see the most innovation is from the providers that are trying to address multicloud environments and best feed innovation from their customer engagements from professional services. I like the opportunity HPE has to benefit from their acquisitions of Cloud Technology Partners and RedPixie, and then feeding those insights back into [product development]. I’ve seen a lot of examples about the work they’re doing in HPE OneSphere in moving those insights into action for customers through analytics.
Gardner: I was also thinking about the Nimble acquisition, and with InfoSight, and the opportunity for that intellectual property to come to bear on this, too.
Dillingham: Yes, which is really harvesting the value of the control and insights of the private infrastructure and the software-defined orientation of private infrastructure in comparison to the public cloud options.
Gardner: Tell us about Rhett Dillingham. You haven’t been an IT industry analyst forever. Please tell us a bit about your background.
Dillingham: I’ve been a longtime product management leader. I started in hardware, at AMD, and moved into software. Before the cloud days, I was at Microsoft. Next, I was building out the early capabilities at AWS, such as Elastic Compute Cloud (EC2) and Elastic Block Store (EBS). Then I went into a portfolio of services at Rackspace, building those out at the platform level and the overall Rackspace public cloud. As the value of OpenStack matured into private use, I worked with a number of enterprises on private OpenStack cloud deployments.
As an analyst, I support product management-oriented, consultative, and go-to-market positioning work for our clients.
Gardner: Let’s dwell on the product management side for a bit. Given that the market is still immature, given what you know customers are seeking for a hybrid IT end state, what should vendors such as HPE be doing in order to put together the right set of functions, processes, and simplicity—and ultimately, analytics and automation—to solve the mess among cloud adoption patterns and sprawl?
Clean up the cloud mess
Dillingham: We talked about automation and orchestration, talked about control of cost, security, and compliance. I think that there is a tooling and services spectrum to be delivered on those. The third element that needs to be brought into the process is the control structure of each enterprise, of what their strategy is across the different infrastructures.
Where are they optimizing on cost based on what they can do in private infrastructure? Where are they setting up decision processes? What incremental services should be adopted? What incremental clouds should be adopted, such as what an Oracle and an IBM are positioning their cloud offerings to be for adoption beyond what’s already been adopted by a client in AWS, Google, and Azure?
I think there’s a synergy to be had across those needs. This spans from the software and services tooling into the services and managed services, and in some cases when the enterprise is looking for an operational partner.
Gardner: One of the things that I struggle with, Rhett, is not just the process, the technology, and the opportunity, but the people. Who in a typical enterprise IT organization should be tasked with such hybrid IT oversight and management? It involves more than just IT.
To me, it’s economics, it’s procurement, it’s contracts. It involves a bit more than red light, green light…on speed. Tell me about who or how organizations need to change to get the right people in charge of these new tools.
Who’s in charge?
Dillingham: More than the individuals, I think this is about the recognition of the need for partnerships between the business units, the development organizations, and the operational IT organization’s arm of the enterprise.
The focus on agility for business value had a lot of the cloud adoption led by the business units and the application development organizations. As the focus on maturity mixes in the control across security and compliance, those are traditional realms of the IT operational organization.
Now there’s the need for decision structure around sourcing—where how they value incremental capabilities from more clouds and cloud providers is a decision of trade-offs and complexity. As you were mentioning, of weighing between the incremental value of an additional provider and an incremental service, and portability across those.
What I am seeing in the most mature setups are partnerships across the orientations of those organizations. That includes the acknowledgment and reconciliation of those trade-offs in long-term portability of applications across infrastructures—against the value of adoption of proprietary capabilities, such as deeper cognitive machine learning (ML) automation and Internet of Things capabilities, which are some of the drivers of the more specific public cloud platform uses.
Gardner: So with adopting cloud, you need to think about the organizational implications and refactor how your business operates. This is not just bolting on a cloud capability. You have to rethink how you are doing business across the board in order to take full advantage.
Dillingham: There is wide recognition of that theme. It gets into the nuts and bolts as you adopt a platform and determine exactly how the operations function and roles are going to be defined. It means determining who is going to handle what, such as how much you are going to empower developers to do things themselves, along with the accountability and trade-offs that come with those roles. But there is almost an over-rotation toward that operational focus, at the expense of the more senior-level decision-making about what the cloud strategy is.
I hear a lot of cloud strategies that are as simple as, “Yes, we are allowing and empowering adoption of cloud by our development teams,” without the second-level recognition of the need to have a strategy for what the guidelines are for that adoption—not in the sense of just controlling costs, but in the sense of how do you view the value of long-term portability? How do you value strategic sourcing and the ability to negotiate across these providers long term with evidence and demonstrable portability of your application portfolio?
Gardner: In order to make those proper calls on where you want to go with cloud and to what degree, across which provider, organizations like HPE are coming up with new tools.
So we have heard about HPE OneSphere. We are now seeing HPE’s GreenLake Hybrid Cloud, which is a use of HPE OneSphere management as a service. Is that the way to go? Should we think of cloud management oversight and optimization as a set of services rather than a product or a tool? It seems to me that a set of services, with an ecosystem behind them, is pretty powerful.
A three-layer cloud
Dillingham: I think there are three layers to that. One is the tool, whether that is consumed as software or as a service.
Second is the professional consultative services around that, to the degree that you as an enterprise need help getting up to speed in how your organization needs to adjust to benefit from the tools and the capabilities the tools are wrangling.
And then third is a decision on whether you need an operational partner from a managed service provider perspective, and that's where HPE is stepping up and saying, "We will handle all three of these. We will deliver your tools in various consumption models on through to a software-as-a-service delivery model, for example, with HPE OneSphere. And we will operate the services for you beyond that SaaS control portal into your infrastructure management, across a hybrid footprint, with the HPE GreenLake Hybrid Cloud offering."
Gardner: With so many moving parts, it seems that we need certain things to converge, which is always tricky. So to use the analogy of properly intercepting a hockey puck, the skater is the vendor trying to provide these services, the hockey puck is the end-user organization that has complexity problems, and the ice is a wide-open market. We would like to have them all come together productively at some point in the future.
We have talked about the vendors; we understand the market pretty well. But what should the end-user organizations be starting to do and think in order for them to be prepared to take advantage of these tools? What should be happening inside your development, your DevOps, and that larger overview of process and organization in order to say, “OK, we’re going to take advantage of that hockey player when they are ready, so that we can really come together and be proficient as a cloud-first organization?”
Commit to an action plan
Dillingham: You need to have a plan in place for each element we have talked about. There needs to be a plan in place for how you are maturing your toolset in cloud-native development…how you are supporting that on the development side from a continuous integration (CI) and continuous delivery (CD) perspective, how you are reconciling that with the operational toolset and the culture of operating in a DevOps model with whatever degree of iterative development you want to enable.
Is the tooling in place from an orchestration and development capability and operations perspective, which can be containers or not? And that gets into container orchestration and the cloud management platforms. There is the control aspect—what tooling you are going to apply there, how you are going to consume that, and how much you want to provide it as a consultative offer. And then how much do you want those options managed for you by an operational partner? And then how are you are going to set up your decision-making structure internally?
Every element of that is where you need to be maturing your capabilities. A lot of the starting baseline for the consultative value of a professional services partner is walking you through the decision-making that is common to every organization on each of those fronts, and then enabling a deep discussion of where you want to be in three, five, or 10 years, and deciding proactively.
More important than anything, what is the goal? There is a lot of oversimplification of what the goal is—such as adoption of cloud and picking of best-of-breed tools—without a vision yet for where you want the organization to be and how much it benefits from the agility and speed value, and the cost efficiency opportunity.
Gardner: It’s clear that those organizations that can take that holistic view, that have the long-term picture in mind and can actually execute on it, have a significant advantage in whatever market they are in. Is that fair?
Dillingham: It is. And one thing that I think we tend to gloss over—but does exist—is a dynamic where some of the decision-makers are not necessarily incentivized to think and consider these options on a long-term basis.
The folks who are in role, often for one to three years before moving to a different role or a different enterprise, are going to consider these options differently than someone who has been in role for five or 10 years and intends to be there through this full cycle and outcome. I see those decisions made differently, and I think sometimes the executives watching this transpire are missing that dynamic and allowing some decisions to be made that are more short-term oriented than long term.
Gardner: Maybe people at the board of directors' level should familiarize themselves more with cloud management capabilities as we go forward.
I’m afraid we’re going to have to leave it there. We have been exploring how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity and spiraling costs. And we have also learned about new breeds of hybrid and multicloud management solutions that are bringing insights and even actionable analysis to help regain control over hybrid IT sprawl.
So please join me in thanking our guest, Rhett Dillingham, vice president and senior analyst at Moor Insights & Strategy. Thank you so much, Rhett.
Dillingham: It’s been a pleasure, Dana.
Gardner: And a big thank you to our audience as well for joining this Hewlett Packard Enterprise's Voice of the Analyst hybrid IT management strategies interview.
I’m Dana Gardner, principal analyst at Interarbor Solutions, your host on this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening. Please pass this along to your IT community, and do come back next time.
LATEST PODCASTS FOR JULY
How HPE and Docker Together Accelerate and Automate Hybrid Cloud Adoption
The next BriefingsDirect hybrid cloud strategies discussion examines how the use of containers has moved from developer infatuation to mainstream enterprise adoption.
As part of the wave of interest in containerization technology, Docker, Inc. has emerged as a leader in the field and has greased the skids for management and ease of use.
Meanwhile, Hewlett Packard Enterprise (HPE) has embraced containers as a way to move beyond legacy virtualization and to provide both developers and IT operators more choice and efficiency as they seek to embrace hybrid cloud deployment scenarios.
Like the proverbial chocolate and peanut butter coming together -- or as I like to say, with Docker and HPE, fish and chips-- the two make a highly productive alliance and cloud ecosystem tag team.
Here to describe exactly how the Docker and HPE alliance accelerates modern and agile hybrid architectures, we are joined by two executives, Betty Junod, Senior Director of Product and Partner Marketing at Docker, and Jeff Carlat, Senior Director of Global Alliances at HPE. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Direct download: BriefingsDirect-How_HPE_and_Docker_Together_Accelerate_and_Automate_Hybrid_Cloud_Adoption.mp3
Category:technology -- posted at: 4:03pm EDT
LATEST PODCASTS FOR FEBRUARY
HPE-Deloitte retail trends
How VMware, HPE and Telefonica Together Bring Managed Cloud Services to a Global Audience
Transcript of a discussion on why Telefonica’s vision for delivering flexible cloud services solution capabilities to many Latin American and European markets has proven successful.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.
Our next optimized cloud design interview explores how a triumvirate made up of VMware, Hewlett Packard Enterprise (HPE) and Telefonica together brought managed cloud services to a global audience.
We’ll now learn how Telefonica’s vision for delivering flexible cloud services capabilities to Latin American and European markets has proven so successful. Here to explain how they developed the right recipe for rapid delivery of agile Infrastructure-as-a-Service (IaaS) deployments is Joe Baguley, Vice President and CTO of VMware EMEA. Welcome, Joe.
Joe Baguley: Hi. Nice to meet you.
Gardner: We’re also here with Antonio Oriol Barat, Head of Cloud IT Infrastructure Services at Telefonica. Welcome, Antonio.
Antonio Oriol Barat: Hello. Nice to meet you.
Gardner: Antonio, please describe the unique challenges now facing mobile and telecom operators as they transition to being managed service providers.
Oriol Barat: The main challenge we face at this moment is to help customers navigate in a multi-cloud environment. We now have local platforms, some legacy, some virtualized platforms, hyperscale public cloud providers, and data communications networks. We want to help our customers manage these in a secure way.
Gardner: How has your cloud services vision evolved? How have partnerships allowed you to enter new markets to quickly provide services?
Oriol Barat: We have had to transition from being a hosting provider with data centers in many countries. Our movement to cloud was a natural evolution of those hosting services. As a telecommunications company (telco), our main business is shared networks, and the network is a shared asset between many customers. So when we thought about the hosting business, we similarly wanted to be able to have shared assets. VMware, with its virtualization technology, came as a natural partner to help us evolve our hosting services.
Gardner: Joe, it’s almost as if you designed the VMware stack with customers such as Telefonica in mind.
An OS for clouds
Baguley: You could say that, yes. The vision has always been for us at VMware to develop what was originally called the software-defined data center (SDDC). Now, with multi-cloud, for me, it’s an operating system (OS) for clouds.
We’re bringing together storage, networking and compute into one OS that can run both on-premises and off-premises. You could be running on-premises the same OS as someone like Telefonica is running for their public cloud -- meaning that you have a common operating environment, a common infrastructure.
So, yes, entirely, it was built as part of this vision that everyone runs this OS to build his or her clouds.
Gardner: To have a core, common infrastructure -- yet have the ability to adapt on top of that for localized markets -- is the best of all worlds.
Baguley: That’s entirely it. As someone said, “If all of the clouds are running the same OS, what’s the differentiation?” Well, the differentiation is that you want to go with the biggest player in Latin America. You want to go with the player that has the best direct connections; the one that can give you service levels the hyperscale cloud providers maybe can’t. They can give you over-the-top services that other cloud providers don’t provide. They can give you an integrated solution for your business that includes the cloud -- and other enterprise services.
It’s about providing the tools for cloud providers to build differentiated powerful clouds for their customers.
Gardner: Antonio, please, for those of our listeners and readers that aren’t that familiar with Telefonica, tell us about the breadth and depth of your company.
Oriol Barat: Telefonica is one of the top 10 global telco providers in the world. We are in 21 countries. We have fixed and mobile data services, and now we are in the process of digital transformation, where we have our focus in four areas: cloud, security, Internet of Things (IoT), and big data.
We used to think that our core business was in communications. Now we see what we call a new core of our business at the intersection of data communications, cloud, and security. We think this is really the foundation, the platform, of all the services that come on top.
Gardner: And, of course, we would all like to start with brand-new infrastructure when we enter markets. But as you know, we have to deal with what is already in place, too. When it came time for you to come up with the right combination of vendors, the right combination of technologies, to produce your new managed services capabilities, why did you choose HPE and VMware to create this full solution?
Sharing requires trust
Oriol Barat: VMware was our natural choice with its virtualization technologies to start providing shared IT platforms -- even before cloud, as a word, was invented. We launched “virtual hosting” in 2007. That was 10 years ago, and since then we have been evolving from this virtual hosting that had no portal but was a shared platform for customers, to the cloud services that we have today.
The hardware part is important; we have to have reliable and powerful technology. For us, it’s very important to provide trust to the customers. Trust, because what they are running in their data centers is similar to what we have in our data centers. Having VMware and HPE as partners provides this trust to the customers so that they will move the applications, and they know it will work fine.
Gardner: HPE is very fond of its Synergy platform, with composable infrastructure. How did that help you and VMware pull together the full solution for Telefonica, Joe?
Baguley: We have been on this journey together, as Antonio mentioned, since 2007 -- since before cloud was a thing. We don’t have a test environment that’s as big as Telefonica’s production environment -- and neither does HPE. What we have been doing is working together -- and like any of these journeys, there have been missteps along the way. We stumbled occasionally, but it’s been good to work together as a partnership.
As we have grown, we have also both understood how the requirements of the market are changing and evolving. Ten years ago providing a combined cloud platform on a composable infrastructure was unheard of -- and people wouldn’t believe you could do it. But that’s what we have evolved together, with the work that we have done with companies such as Telefonica.
The need for something like HPE Synergy and the Gen10 stack -- where there are these very configurable stacks that you can put together -- has literally grown out of the work that we have done together, along with what we have done in our management stack, with the networking, compute, and storage.
Gardner: The combination of composable infrastructure and SDDC makes for a pretty strong tag team.
Baguley: Yes, definitely. It gives you that flexibility and the agility that a cloud provider needs to then meet the agility requirements of their customers, definitely.
Gardner: When it comes to bringing more end users into the clouds for your managed services providers, one of the important things is for end users to move into that cloud with as much ease as possible. Because VMware is a de facto standard in many markets with its vSphere Hypervisor, how does that help you, being a VMware stack, create that ease of joining these clouds?
Oriol Barat: Having the same technology in the customer data center and in our cloud makes things a lot easier. In the first place, in terms of confidence, the customer can be confident that it’s going to work well when it is in place. The other thing is that VMware is providing us with the tools that make these migrations easier.
Baguley: At VMworld 2017, we announced VMware Hybrid Cloud Extension (HCX), which is our hybrid cloud connector. It allows customers to install software locally that connects at a Layer 2 [network] level, and supports environments as far back as vSphere 5.0, into clouds. Those clouds today are IBM’s and VMware’s own, but we are extending it to other service providers like Telefonica in 2018.
So a customer can truly feel that their connecting and migrations will be seamless. Things like vSphere vMotion across that gap are going to be possible, too. I think the important thing here is by going down this road, people can take some of the fear out of going to the cloud, because some of the fear is about getting locked in: “I am going to make decisions that I will regret in two years by converting my virtual machines (VMs) to run on another platform.” Right here, there isn’t that fear, there is just more choice, and Telefonica is very much part of that story of choice.
Gardner: It sounds like you have made things attractive for managed service providers in many markets. For example, they gain ease of migration from enterprises into the provider’s cloud. In the case of Telefonica, users gain support, services and integration, knowing that the venerable vendors like VMware and HPE are behind the underlying services.
Do you have any examples where you have been able to bring this total solution to a typical managed service provider account? How has it worked out for them?
Everyone’s doing it
Oriol Barat: We have use cases in all the vertical industries. Because cloud is a horizontal technology, it’s the foundation of everything. I would say that all companies of all verticals are in this process of transformation.
We have a lot of customers in retail that are moving their platforms to cloud. We have had, for example, US companies coming to Europe and deploying their SAP systems on top of our platforms.
For example in Spain, we have a very strong tourism industry with a lot of hotel chains that are also using our cloud services for their reservation systems and for more of their IT.
We have use cases in healthcare, of companies moving their medical systems to our clouds.
We have use cases of software vendors that are growing software-as-a-service (SaaS) businesses and they need a flexible platform that can grow as their businesses grow.
A lot of people are using these platforms as disaster recovery (DR) for the platforms that they have on-premises.
I would say that all verticals are into this transformation.
Gardner: It’s interesting, you mentioned being able to gain global reach from a specific home economy by putting data centers in place with a managed service provider model.
It’s also important for data sovereignty and compliance and General Data Protection Regulation (GDPR) and other issues for that to happen. It sounds like a very good market opportunity.
And that brings us to the last part of our discussion. What happens next? When we have proven technology in place, and we have cloud adoption, where would you like to be in 12 months?
Gaining the edge
Baguley: There has been a lot of talk at recent events, like HPE Discover, about intelligent edge developments. We are doing a lot at the edge, too. When you look at telcos, the edge is going to become something quite interesting.
What we are talking about is taking that same blend of storage, networking and compute, and running it on as small a device as possible. So think micro data centers, nano data centers. How far out can we push this cloud? How much can we distribute this cloud? How close to the point of need can we get our customers to execute their workloads, to do their artificial intelligence (AI), to do their data gathering, et cetera?
And working in partnership with someone who has a fantastic cloud and a fantastic network means that a distributed edge-to-core-to-cloud capability is something Telefonica and VMware could probably build for customers over the next 12 months. That could be really, really strong.
Oriol Barat: In this transformation that all the enterprises are in, we are at maybe 20 percent of execution. So we still have 80 percent of the transformation ahead of us. The potential is huge.
Looking ahead with our services, for example, it’s very important that the network is also in transformation, leveraging the software-defined networking (SDN) technologies. These networks are going to be more flexible. We think that we are in a good position to put together cloud services with such network services -- with security, also with more software-defined capabilities, and create really flexible solutions for our customers.
Baguley: One example I would like to add: Imagine that Real Madrid C.F. are playing at home next weekend. It’s theoretically possible that Telefonica could open up the compute at the bottom of those network base stations -- because with VMware Network Functions Virtualization (NFV), it’s no longer specialized base-station hardware, it’s x86 HPE servers in there. They could turn around to a betting company and say, “Would you like to move your front-end web servers, running in containers, to the base station in Real Madrid’s stadium for the four hours of that match?” And suddenly they are the best-performing website.
That’s the kind of out-there transformative ideas that are now possible due to new application infrastructures, new cloud infrastructures, edge, and technologies like the network all coming together. So those are the kind of things you are going to see from this kind of solutions approach going forward.
Gardner: Truly dynamic and responsive architecture, it’s very interesting.
Gardner: I’m afraid we’ll have to leave it there. We have been exploring how a triumvirate made up of VMware, Hewlett Packard Enterprise and Telefonica has brought managed cloud services to a global audience.
We have learned why Telefonica’s vision for delivering flexible cloud services solution capabilities to such markets as Latin America and Europe has proven so successful.
So please join me in thanking our guests, Joe Baguley, Vice President and CTO of VMware EMEA. Thanks, Joe.
Baguley: Thanks. It’s been great.
Gardner: And we have been here also with Antonio Oriol Barat, Head of Cloud IT Infrastructure Services at Telefonica. Thank you.
Oriol Barat: Thank you.
Gardner: And a big thank you to our audience as well for joining us for this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening. Please pass this content along to your IT community, and do come back next time.
Transcript of a discussion on why Telefonica’s vision for delivering flexible cloud services solution capabilities to many Latin American and European markets has proven successful. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.
How Mounting Complexity, Multi-Cloud Sprawl, and Need for Maturity Confront Hybrid IT’s Ability to Grow and Thrive
Transcript of a discussion on how companies and IT leaders are seeking to manage an increasingly complex transition to sustainable hybrid IT.
Join us now as we hear from leading IT industry analysts and consultants on how to make the hybrid IT journey to successful digital business transformation.
Our next interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice. We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.
To report on how companies and IT leaders are managing an increasingly complex transition to sustainable hybrid IT, we are joined by Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles. Welcome, Tim.
Tim Crawford: Thanks, Dana. Thanks for having me on the program; I’m looking forward to our conversation.
Gardner: You and I have appeared on a number of panels and videos over the years, but it’s great to have you on BriefingsDirect. I appreciate your time.
Crawford: It’s always a pleasure to get an opportunity to chat with you, and now actually getting a chance to talk to your audience as well. I’m happy to share what I can.
Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.
As someone who regularly advises chief information officers (CIOs), who, or which group, is emerging as the one tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?
To IT and beyond
Crawford: For the short-term, I would say everyone. It’s not as simple as it has been in the past where we look to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models -- and cloud is a relatively new consumption model for technology -- it changes the dynamics of it. It’s the combination of changing that consumption model -- but then there’s another factor that comes into this. There is also the consumerization of technology, right? We are “democratizing” technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.
It’s not as it used to be, where we would say, “Okay, I'm not sure how to turn on a computer.” Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full-circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It’s consumed by different lines of business, too. It’s consumed even by end-consumers of the products and services. I would say it’s all of the above.
Gardner: The good news is that more and more people are able to -- on their own -- innovate, to acquire cloud services, and they can factor those into how they obtain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?
Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this -- and it’s something that just kind of raises the hair on the back of my neck -- is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”
The board should be very business-focused and instead they're dictating specific technology -- whether it’s the right technology or not. That’s really what this comes down to.
Another example is folks that try and go all-in on cloud but aren’t necessarily thinking about what’s the right use of cloud -- in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer.
We in the enterprise IT space haven't really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our head around it but also get in the right frame of mind and thought process around how to take advantage of them in the best way possible.
Another example that I've worked through from an economic standpoint is if you were to do the math, which I have done a number of times with clients -- you do the math to figure out what’s the comparative between the IT you're doing on-premises in your corporate data center with any given application -- versus doing it in a public cloud.
If you do the math, taking an application from a corporate data center and moving it to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?
When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we’re using the same traditional mindset where we think about cloud as a solution, the delivery mechanism, and a tool. The reality is it’s a different delivery mechanism, and it’s a different kind of tool.
When used appropriately, in some cases, yes, it can be less expensive. The challenge is you have to get yourself out of your traditional thinking and think differently about the how and why of leveraging cloud. And when you do that, then things begin to fall into place and make a lot more sense both organizationally -- from a process standpoint, and from a delivery standpoint -- and also economically.
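Crawford’s four-times figure is easy to sanity-check with a back-of-the-envelope model. The sketch below is illustrative only: every figure in it (server price, amortization period, VM density, cloud hourly rate) is a hypothetical assumption, not real vendor pricing.

```python
# Back-of-the-envelope "lift-and-shift" comparison: amortized on-premises
# cost per VM versus the same VM left running 24x7 on on-demand public
# cloud pricing. All figures are hypothetical illustrations.

HOURS_PER_MONTH = 730

def on_prem_monthly(server_capex=6000, amort_months=48,
                    vms_per_server=20, power_cooling=150, admin=300):
    """Amortized monthly cost per VM in a corporate data center."""
    hardware = server_capex / amort_months  # $125/month per server
    return (hardware + power_cooling + admin) / vms_per_server

def cloud_monthly(hourly_rate=0.17):
    """On-demand cost for a VM that is never switched off."""
    return hourly_rate * HOURS_PER_MONTH

dc = on_prem_monthly()   # $28.75/month per VM
cloud = cloud_monthly()  # about $124/month for the comparable VM
print(f"on-prem ${dc:.2f} vs cloud ${cloud:.2f} -> {cloud / dc:.1f}x")
```

With these invented inputs, the traditional lift-and-shift lands at roughly 4x, which is the dynamic Crawford describes. Re-architecting so the workload runs only when needed changes the utilization term, and can invert the comparison.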
Gardner: That “appropriate use of cloud” is the key. Of course, that could be a moving target. What’s appropriate today might not be appropriate in a month or a quarter. But before we delve into more … Tim, tell us about your organization. What’s a typical day in the life for Tim Crawford like?
Crawford: I love that question. AVOA stands for that position in which we sit between business and technology. If you think about the intersection of business and technology, of using technology for business advantage, that’s the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It’s not tech for tech’s sake, and I want to be really clear about that. But rather it’s best to say, “How do we use technology for business advantage?”
We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.
A day in a life of me could range from, if I'm lucky, being able to stay in my office and be on the phone with clients, working with folks and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, understanding what people really are contending with.
I spent well over 20 years of my career before I began doing this within the IT organization, inside leading IT organizations. It’s incredibly important for me to stay relevant by being out with these folks and understanding what they're challenged by -- and then, of course, helping them through their challenges.
Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.
It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I'm grateful that the opportunities that I've had to work with have been just fabulous, and the same goes for the people.
Gardner: I've always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently -- and I find that very valuable.
Okay, let’s get back to this idea of “appropriate use of cloud.” I wonder if we should also expand that to be “appropriate use of IT and cloud.” So including that notion of hybrid IT, which includes cloud and hybrid cloud and even multi-cloud. And let’s not forget about the legacy IT services.
How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT -- to therefore know if we’re doing it appropriately?
The monkey-bread model
Crawford: The first thing we have to do is take a step back to provide the context of that visibility -- or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.
If we look back for a minute, and look at how IT operates -- traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.
Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.
Part of the methodology I use is the monkey bread example. If you're not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you're essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”
And then, what you're left with is this really squishy -- but much smaller -- footprint that you have to contend with. That problem in the center is much more specific -- and arguably that’s what differentiates your company from your competition.
Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.
This example helps you work through determining what’s critical, and -- more importantly -- what’s strategic and differentiating to my business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.
That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense -- and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.
So it’s important: Understand what is appropriate, how you leverage the right solutions for the right application or service.
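Crawford’s monkey-bread triage can be sketched as a simple decision rule. Everything here -- the service names, their attributes, and the placements -- is invented for illustration; the point is the order of the questions, not the specific answers.

```python
# Hypothetical portfolio triage in the spirit of the "monkey bread" model:
# peel off commodity services to mature external platforms and keep only
# the differentiating core in-house. All entries are illustrative.

portfolio = {
    "email":               {"differentiating": False, "mature_saas": True},
    "file_storage":        {"differentiating": False, "mature_saas": True},
    "crm":                 {"differentiating": False, "mature_saas": True},
    "pricing_engine":      {"differentiating": True,  "mature_saas": False},
    "logistics_optimizer": {"differentiating": True,  "mature_saas": False},
}

def placement(attrs):
    # Differentiating services form the "gooey center": keep and invest.
    if attrs["differentiating"]:
        return "core: keep in-house and invest"
    # Important-but-commodity services go to a mature shared platform.
    if attrs["mature_saas"]:
        return "peel off: move to SaaS or public cloud"
    return "evaluate: IaaS or managed hosting"

core = sorted(n for n, a in portfolio.items() if a["differentiating"])
for name, attrs in sorted(portfolio.items()):
    print(f"{name:20s} -> {placement(attrs)}")
```

The "core" list that remains after peeling is the much smaller, squishier center Crawford describes -- the part worth your own resources.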
Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.
So is IT itself going to rise to this challenge, of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group? Maybe enterprise architects, business architects, business process management (BPM) analysts? Do you think it’s important for IT to both stay in and elevate to the bigger game?
Changing IT roles and responsibilities
Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.
The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is the business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow the same vein.
For example, if you were to go into a traditional IT organization today and ask them what’s the nature of their business, ask them to tell you what they do as an administrator, as a developer, to help you understand how that’s going to impact the company and the business -- unfortunately, most of them would have a really hard time doing that.
The IT organization of the future will articulate clearly the work they’re doing, how that impacts their customers and their business, and how making different changes and tweaks will impact their business. They will have an intimate knowledge of how their business functions, much more than how the technology functions. That’s a very different mindset, and that’s the place we have to get to for IT on the whole. IT can’t just be this technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral, to the business.
Gardner: If we recognize that cloud is here to stay -- but that the consumption of it needs to be appropriate, and if we’re at some sort of inflection point, we’re also at the risk of consuming cloud inappropriately. If IT and leadership within IT are elevating themselves, and upping their game to be that strategic player, isn’t IT then in the best position to be managing cloud, hybrid cloud and hybrid IT? What tools and what mechanisms will they need in order to make that possible?
Crawford: Theoretically, the answer is that they really need to get to that level. We’re not there, on the whole, yet. Many organizations are not prepared to adopt cloud. I don’t want to be a naysayer of IT, but I think in terms of where IT needs to go on the whole, on the sum, we need to move into that position where we can manage the different types of delivery mechanisms -- whether it’s public cloud, SaaS, private cloud, appropriate data centers -- those are all just different levers we can pull depending on the business type.
As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.
The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.
Gardner: Some boards of directors, as you mentioned, are saying, “Go cloud,” or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. Developers are doing microservices and spinning up cloud instances and object storage instances. Sometimes they’ll keep those running into production; sometimes they’ll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn’t really know what the other is doing.
Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?
Why, oh why, oh why, cloud?
Crawford: It depends on how you approach it. If you’re doing it from an IT command-and-control perspective, where you want to control everything in cloud -- full stop, that’s failure right out of the gate. But if you’re doing it from a position of -- I’m trying to use it as an opportunity to understand why are these folks leveraging cloud, and why are they not coming to IT, and how can I as CIO be better positioned to be able to support them, then great! Go forth and conquer.
The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets -- especially as you start thinking about Internet of things (IoT) and start thinking about edge computing -- you know that public cloud becomes problematic for some of those things.
The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?
Once you begin to ask yourself those questions, and you keep asking them … it’s like that old adage: ask yourself why three times and you get to the core of the true reason. You’ll bring greater clarity to the reasons -- typically the business reasons -- why you’re actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions -- and which decisions you might want to think about doing differently.
Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it -- but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.
So even though you may have an organization where you want to foster innovation -- you want people to spread their wings, try out proofs of concept, and be agile and democratic in their ability to use myriad IT services -- at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run”? How are economic factors driven into cloud decision-making after you’ve done it for a period of time?
Cloud’s good, but is it good for business?
Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.
For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.
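Crawford’s value-per-dollar test can be sketched in a few lines of Python; the figures are the hypothetical ones from his example, not real sourcing data:

```python
def value_ratio(spend: float, business_value: float) -> float:
    """Return dollars of business value generated per dollar spent."""
    return business_value / spend

# Spending $5 to make $6 barely clears break-even...
assert round(value_ratio(5, 6), 2) == 1.2
# ...while spending $5 to make $500 is clearly worthwhile.
assert value_ratio(5, 500) == 100.0
```

The point is that the comparison is made in dollars of business outcome, not in technological merit.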
If you begin to understand the business value of the actions you take -- how you leverage public cloud versus private cloud versus your corporate data center assets -- and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.
Economics plays a very significant role -- but let’s not kid ourselves. IT organizations haven’t exactly been the best at economics in the past. We need to be moving forward. And so it’s just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for that.
Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on -- perhaps at any price – and we can then mature that process over time by looking at price, by finding the best options.
But the other item that we need to consider at all times is risk. Sometimes we need to consider the risk of getting so far into a model -- a public cloud, for example -- that we can’t get back out of it. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain, for example, carries inherent cyber security risks. Isn’t it up to IT also to help organizations factor in some of these risks -- along with compliance, regulation, and data sovereignty issues? It’s a big barrel of monkeys.
Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.
Safety in numbers
Crawford: You bring up a great point, Dana. Risk -- whether from a cyber security standpoint, data sovereignty issues, or regulatory compliance -- the reality is that nobody across the organization truly understands all of these pieces together.
It really is a team effort to bring it all together -- where you have the privacy folks, the information security folks, and the compliance folks -- that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.
It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.
Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they’ll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is -- or isn’t -- doing appropriately with hybrid cloud and hybrid IT.
I’m afraid we’ll have to leave it there. We’ve been exploring how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice. And we’ve learned how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to continue to grow -- and for IT organizations to continue to fulfill what could very well be their new charter.
So please join me now in thanking our guest, Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles. Thank you, Tim.
Crawford: Thanks for having me on the program.
Gardner: Tim, how can our listeners and readers best follow you to gain more of your excellent insights?
Crawford: There are two great ways to do that. One is on Twitter, @tcrawford, and the other is my blog at www.avoa.com.
Gardner: Thanks again, that was really great. A big thank you as well to our audience for joining us for this BriefingsDirect Voice of the Analyst discussion on how to best manage the hybrid IT journey to digital business transformation.
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Follow me on Twitter at @Dana_Gardner and find more hybrid IT-focused podcasts at www.briefingsdirect.com. Lastly, please pass this content on to your IT community, and do come back next time.
Transcript of a discussion on how companies and IT leaders are seeking to manage an increasingly complex transition to sustainable hybrid IT. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
You may also be interested in:
LATEST PODCASTS FOR SEPTEMBER
‘India Smart Cities Mission Shows IoT Potential for Improving Quality of Life at Vast Scale’ with VS Shridhar, Sr. VP and Head of the IoT Business Unit at Tata Communications, and Nigel Upton, GM of IoT/GCP at Hewlett Packard Enterprise.
How Nokia Refactors the Video Delivery Business With New Time-Managed IT Financing Models
Transcript of a discussion on new video delivery architectures and the creative ways that media companies are paying for the technology that supports IP video streaming.
Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.
Our next thought leader interview examines how Nokia is refactoring the video delivery business. We will now learn about new video delivery architectures and the creative ways media companies are paying for the technology that supports them. Here to share the story is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. Welcome, Paul.
Paul Larbey: Hey, Dana, it’s great to be here.
Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary for rethinking architectures? How are pricing models and business models changing, too?
The mobile video decade
Larbey: We sit here in 2017, but let’s look back 10 years to 2007. There were a couple of key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.
It’s been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device types, eventually leading to the iPad. Not only that, but underneath, Apple developed a lot of technology for how you stream and protect video over IP -- technology we still use today. So Apple not only created a new device type and avenue for watching video; it also created new underlying protocols.
It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.
If we fast-forward and look at where that goes in the future, there are two trends we see today that will create challenges tomorrow. The first is that video is becoming truly mobile. When we talk about mobile video today, we mean watching films on an iPad or an iPhone rather than on a big TV screen -- that is what most people mean by mobile video.
The future is personalized
When you can take your video with you, you want to take all your content with you. You can’t do that today. That has to happen in the future. When you are on an airplane, you can’t take your content with you. You need connectivity to extend so that you can take your content with you no matter where you are.
Take the simple example of a driverless car. Today, you are driving along, watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You are still going to be keeping the kids quiet, but there is a void -- a space that needs to be filled with activity -- and clearly extending the content into the car is the natural next step.
And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same -- it’s the same color, it’s the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn’t have a different user interface.
The user interface presented to me in the morning may be different from the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels to go through to find something I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put all of those together and extrapolate 10 years into the future, then 2027 will be a very different place for video.
Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?
Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.
But if you step back and look at the fundamental changes -- all video will be streamed. Today, the majority of what we view comes via broadcast, from cable TV or from a satellite. It’s a signal that goes to everybody at the same time.
If you think about the mobile video concept, if you think about personalization, that is not going be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.
And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.
Gardner: You also mentioned in the case of an airplane, where it's not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?
IT’s all about intelligence
Larbey: It’s all about intelligence. Firstly, connectivity has to extend and become really ubiquitous via technology such as 5G. The increase in fiber technology will dramatically enable truly ubiquitous connectivity, which we don’t really have today. That will resolve some of the problems, but not all.
And because television will be personalized, the network will know what’s in my schedule. If I have an upcoming flight, machine learning can automatically predict what I’m going to do and suggest the right content in context. It may download the content because it knows I am going to be sitting on a flight for the next 12 hours.
Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization -- minimal viable streaming, if you will.
How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them on utility as well as in the user experience?
Larbey: Absolutely. We think everything moves into the network, and the intelligence becomes the network. So what does that do immediately? It means the operators don’t have to buy set-top boxes, which are expensive, costly to maintain, and stay in the network a long time. Instead, they can have a much lighter client capability, which basically just renders the user interface.
The first obvious example of all this, that we are heavily focused on, is the storage. So taking the hard drive out of the set-top box and putting that data back into the network. Some huge deployments are going on at the moment in collaboration with Hewlett Packard Enterprise (HPE) using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.
Now, what are the advantages of that? Everybody thinks of cost first: you’ve taken the hard drive out and moved the storage into the network, and that’s clearly one element. But actually, if you talk to any operator, their biggest cause of subscriber churn is when somebody’s set-top box fails and they lose their personalized recordings.
The personal connection you had with your service isn’t there any longer. It’s a lot easier to then look at competing services. So if that content is in the network, then clearly you don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.
Taking the CDN private
Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that's extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it, is it going to evolve? How does that pan out over the next decade?
Larbey: The CDN will still exist. It still becomes the key way of optimizing video delivery -- but it changes. If you go back 10 years, the only CDNs available were shared CDNs on the Internet; you bought capacity on a shared service.
Even today, that’s how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private networks -- with both telcos and cable operators -- so they can have their own private CDN, and there are a lot of advantages to having your own private CDN.
You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don’t if you outsource to a shared service.
What we’re seeing now is both the programmers and broadcasters taking an interest in that private CDN because they want the control. Video is their business, so the quality they deliver is even more important to them. We’re seeing a lot of the programmers and broadcasters starting to look at adopting the private CDN model as well.
The challenge is how do you build that? You have to build for peak. Peak is generally driven by live sporting events and one-off news events. So that leaves you with a lot of capacity that’s sitting idle a lot of the time. With cloud and orchestration, we have solved that technically -- we can add servers in very quickly, we can take them out very quickly, react to the traffic demands and we can technically move things around.
But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility -- not just from an IT perspective, but also from a commercial perspective.
Gardner: Tell me about Private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.
Larbey: As a business unit, we basically help anyone who has content -- be that broadcasters or programmers -- and the operators they pay, to stream that content over IP and to launch new services. We have a product focused on video networking: how to optimize video, how it’s delivered, how it’s streamed, and how it’s personalized.
It can be a private CDN product, which we have deployed for the last seven years, and we have a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration part, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.
Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?
Flexible financial formats
Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.
We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time -- but these don’t necessarily work for my applications from a financial perspective.
In the private CDN and the video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. My application fundamentally breaks the traditional IT payment model for storage. So having a partner like HPE that was flexible and could understand the application was really important.
We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.
And thirdly, we have been trying to enter a new market, focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but the programmer and broadcaster segment is used to buying a service from the Internet; they work in a different way and they have different requirements.
So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important.
Gardner: These video delivery organizations are increasingly operating on subscription basis, so they would like to have their costs be incurred on a similar basis, so it all makes sense across the services ecosystem.
Larbey: Yes, absolutely. That is becoming more and more important. If you go back to the very first Internet video you watched -- a cat falling off a chair on YouTube -- it didn’t matter if it was buffering; that wasn’t relevant. Now, our tolerance for buffering just doesn’t exist anymore, and we demand and expect the highest-quality video.
If TV in 2027 is going to be purely IP, then clearly that has to deliver exactly the same quality of experience as the broadcasting technologies. And that creates challenges. The biggest obvious example is if you go to any IP TV operator and look at their streamed video channel that is live versus the one on broadcast, there is a big delay.
So there is a lag between the live event and what you are seeing on your IP stream, which is 30 to 40 seconds. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovations we’re now doing with streaming technologies are to deliver that same broadcast experience.
Gardner: We now also have to think about 4K, the intelligent edge, no latency, and all with managed costs. Fortunately at this time HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and so forth. There’s a lot more technology being driven to the edge for storage, for large memory processing, and so forth. How are these advances affecting your organization?
Larbey: There are two elements. Compute at the edge is absolutely critical. We are going to move all the intelligence into the network, and clearly you need to reduce the latency and be able to scale that functionality. That functionality used to be scaled across millions of households; now it has to be done in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.
The HPE platforms will allow you to deploy that computer storage deep into the network, and they are absolutely critical for our success. We will run our CDN, our ad insertion, and all that capability as deeply into the network as an operator wants to go -- and certainly the deeper, the better.
The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording -- especially in the US, due to compliance with content-use regulations -- is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That’s clearly very inefficient.
The question is how do you optimize that, and also support just-in-time transcoding techniques that have been talked about for some time. That would create the right quality of bitrate on the fly, so you don’t have to store all the different formats. It would dramatically reduce storage costs.
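The storage math behind just-in-time transcoding can be sketched quickly. All of the sizes and rendition counts below are illustrative assumptions, not Nokia figures:

```python
GB = 1  # work in gigabytes

users_recording = 1_000_000         # users who recorded the same program
mezzanine_size = 4 * GB             # single high-quality copy per user
renditions = [2.0, 1.0, 0.5, 0.25]  # sizes (GB) of the lower-bitrate formats

# Pre-transcoded: each user's mezzanine plus every rendition, stored up front.
pre_transcoded = users_recording * (mezzanine_size + sum(renditions))

# Just-in-time: each user's mezzanine only; renditions are created on demand.
just_in_time = users_recording * mezzanine_size

savings = 1 - just_in_time / pre_transcoded
print(f"storage saved: {savings:.0%}")  # → storage saved: 48%
```

Under these assumed numbers, nearly half the storage disappears, which is why the CPU cost of on-the-fly transcoding is worth engineering around.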
The challenge has always been the amount of central processing unit (CPU) capacity needed to do that, and that’s where HPE and the Moonshot platform, which has great compute density, come in. We use the Intel media library for the transcoding. It’s a really nice storage platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.
That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it -- and that’s a big change for the business case.
Gardner: It’s astonishing to think that that much encoding would need to happen on the fly for a mass market; that’s a tremendous and intense compute requirement.
Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. If you look at most programs that people record, if they do not watch within the first seven days, they are probably not going to watch that recording. That content in particular then can be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.
Gardner: So the more intelligent you can be about what the users’ behavior and/or their use patterns, the more efficient you can be. Intelligence seems to be the real key here.
Larbey: Yes, we have a number of algorithms, even within the CDN itself today, that predict content popularity. We want to maximize the disk usage. We want the popular content on the disk, so what’s the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We run a lot of algorithms to predict content popularity so that we can make sure we are optimizing the hardware platform accordingly.
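The popularity-aware caching Larbey alludes to can be sketched as a toy admission policy; the class below is an illustrative assumption, not Nokia’s actual algorithm. A full cache only admits a newly requested title if its observed popularity beats the least-popular title already on disk, so a single long-tail request never evicts a popular one:

```python
from collections import Counter

class PopularityCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.requests = Counter()  # popularity estimate: request counts
        self.cached = set()

    def request(self, title: str) -> bool:
        """Record a request; return True if served from cache."""
        self.requests[title] += 1
        hit = title in self.cached
        if not hit:
            self._maybe_admit(title)
        return hit

    def _maybe_admit(self, title: str) -> None:
        if len(self.cached) < self.capacity:
            self.cached.add(title)
            return
        # Evict only if the newcomer is more popular than the
        # least-popular cached title.
        victim = min(self.cached, key=lambda t: self.requests[t])
        if self.requests[title] > self.requests[victim]:
            self.cached.discard(victim)
            self.cached.add(title)

cache = PopularityCache(capacity=2)
for _ in range(5):
    cache.request("popular-show")     # builds up popularity
cache.request("other-show")
cache.request("long-tail-movie")      # cache full: not popular enough
assert "popular-show" in cache.cached
assert "long-tail-movie" not in cache.cached
```

Production CDNs use far more sophisticated prediction, but the design choice is the same: eviction decisions are driven by popularity estimates, not recency alone.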
Gardner: Perhaps we can deepen our knowledge of all this through some examples. Do you have examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions -- ones that help their cost structure but also deliver a far better user experience?
Larbey: One of our largest customers is Liberty Global, with a large number of cable operators in a variety of countries across Europe. They were enhancing an IP service. They started with an Internet-based CDN and that’s how they were delivering their service. But recognizing the importance of gaining more control over costs and the quality experience, they wanted to take that in-house and put the content on a private CDN.
We worked with them to deliver that technology. One of the things they noticed very quickly, which I don’t think they were expecting, was a dramatic reduction in the number of people calling in to complain because the stream had stopped or buffered. They enjoyed a big decrease in call-center calls as soon as they switched on our new CDN technology, which is quite an interesting use-case benefit.
We do a lot with Sky in the UK, which was also looking to migrate away from an Internet-based CDN service into something in-house so they could take more control over it and improve the users’ quality of experience.
One of our customers in Canada, TELUS, reached cost payback on its private CDN deployment in less than 12 months, in terms of both the network savings and the Internet CDN cost savings.
Gardner: Before we close out, perhaps a look to the future and thinking about some of the requirements on business models as we leverage edge intelligence. What about personalization services, or even inserting ads in different ways? Can there be more of a two-way relationship, or a one-to-one interaction with the end consumers? What are the increased benefits from that high-performing, high-efficiency edge architecture?
VR vision and beyond
Larbey: All of that generates more traffic. Moving from standard definition to high definition to 4K and beyond generates more network traffic. Then take into account 360-degree video and virtual reality (VR) services -- a focus for Nokia with our Ozo camera -- and it’s clear that the data is just going to explode.
So being able to optimize, and continue to optimize, that -- in terms of new codec technology and new streaming technologies -- to constrain the growth of video demands on the network is essential; otherwise the traffic would just explode.
There is lot of innovation going on to optimize the content experience. People may not want to watch all their TV through VR headsets. That may not become the way you want to watch the latest episode of Game of Thrones. However, maybe there will be a uniquely created piece of content that’s an add-on in 360, and the real serious fans can go and look for it. I think we will see new types of content being created to address these different use-cases.
Gardner: I look forward to that. I’m afraid we will have to leave it there. We have been examining how Nokia is refactoring the video delivery business. And we have heard about new video-delivery architectures and creative ways that media companies are paying for them. It all adds up to a very auspicious future for providers of content and consumers as well.
So please join me in thanking our guest, Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. Thank you, Paul.
Larbey: Thanks, it was great to chat.
Gardner: And thanks as well to our audience for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Thanks again for listening, and please come back next time.
Transcript of a discussion on new video delivery architectures and the creative ways that media companies are paying for the technology that supports IP video streaming. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
You may also be interested in:
LATEST PODCASTS FOR AUGUST
Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories.
Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation. Our next Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains. We’ll now learn how Hirotec, a leading-edge manufacturer in the automotive industry, takes advantage of IoT and IT combined to deliver dependable, managed, and continuous operations. Here to help us find the best factory-of-the-future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. Welcome, Justin.