
A 5-minute Intro to AI: What it Can (and Can’t) Do

Artificial intelligence (AI), particularly the area of AI known as machine learning, is currently the darling of the IT world. Many businesses are touting these technologies as differentiators for their products; be it a mobile device, a search engine, or a photo management site, nothing is seen as complete unless it leverages AI. The close attention that AI is receiving inevitably raises the danger that companies could perceive it as the cure for every ill and the solution to every challenge. That’s far from an accurate perception, however, and to understand why, it’s helpful to have a clear picture of what exactly AI is and what it can – and can’t – do. Armed with that deeper insight, it’s easier to pick out some of the truly spectacular business opportunities that the technology can help you to seize.

Why Gen10 is great for Skype for Business

Sports drinks, or Skype for Business?

In a relay race, everyone works together to win. In an ideal world, your workforce would, too. But if your unified communications and collaboration (UCC) platform can’t keep up, it’s like your team is running into a brick wall. HPE solutions for collaboration modernization using Microsoft® Skype® for Business are a portfolio of reference architectures and Flex Solutions that leverage HPE Gen10 innovations to speed you to the front of the pack.

A tale of two hospitals—How healthcare economics in Belgium hastens the need for new IT buying schemes

The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.

We will now explore how a regional hospital seeking efficiency -- and a teaching hospital seeking performance -- are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software-defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Retailers get a makeover thanks to data-driven insights, edge computing, and revamped user experiences

The Connected Consumer for Retail offering takes the cross-channel experience and enhances it for the brick-and-mortar environment. 

How manufacturers better predict and respond to business demands with in-memory computing

Discover how manufacturers can predict and respond to business demands with HPE Superdome Flex large-scale in-memory computing solutions for SAP HANA for manufacturing.

As you well know, for manufacturing to stay competitive globally and execute within budgets, it is all about efficiency. It is also about asset management and cost control, together with optimized processes—so you can provide on-time delivery.

How VMware, HPE, and Telefonica together bring managed cloud services to a global audience

The next BriefingsDirect Voice of the Customer optimized cloud design interview explores how a triumvirate of VMware, Hewlett Packard Enterprise (HPE), and Telefonica together bring managed cloud services to global audiences. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Learn how Telefonica’s vision for delivering flexible cloud services capabilities to Latin American and European markets has proven so successful. Here to explain how they developed the right recipe for rapid delivery of agile Infrastructure-as-a-Service (IaaS) deployments are Joe Baguley, Vice President and CTO of VMware EMEA, and Antonio Oriol Barat, Head of Cloud IT Infrastructure Services at Telefonica. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What challenges are mobile and telecom operators now facing as they transition to becoming managed service providers?

Oriol Barat: The main challenge we face at this moment is to help customers navigate in a multi-cloud environment. We now have local platforms, some legacy, some virtualized platforms, hyperscale public cloud providers, and data communications networks. We want to help our customers manage these in a secure way.

Gardner: How have your cloud services evolved? How have partnerships allowed you to enter new markets to quickly provide services?


Oriol Barat: We have had to transition from being a hosting provider with data centers in many countries. Our movement to cloud was a natural evolution of those hosting services. As a telecommunications company (telco), our main business is shared networks, and the network is a shared asset between many customers. So when we thought about the hosting business, we similarly wanted to be able to have shared assets. VMware, with its virtualization technology, came as a natural partner to help us evolve our hosting services.

Gardner: Joe, it’s as if you designed the VMware stack with customers such as Telefonica in mind.

Baguley: You could say that, yes. The vision has always been for us at VMware to develop what was originally called the software-defined data center (SDDC). Now, with multi-cloud, for me, it’s an operating system (OS) for clouds.


We’re bringing together storage, networking and compute into one OS that can run both on-premises and off-premises. You could be running on-premises the same OS as someone like Telefonica is running for their public cloud -- meaning that you have a common operating environment, a common infrastructure.

So, yes, entirely, it was built as part of this vision that everyone runs this OS to build his or her clouds.

Gardner: To have a core, common infrastructure -- yet have the ability to adapt on top of that for localized markets -- is the best of all worlds.

Baguley: That’s entirely it. As someone said, “If all of the clouds are running the same OS, what’s the differentiation?” Well, the differentiation is that you want to go with the biggest player in Latin America. You want to go with the player that has the best direct connections: the one that can give you service levels that maybe the other cloud providers can’t give. They can give you over-the-top services that other cloud providers don’t provide. They can give you an integrated solution for your business that includes the cloud -- and other enterprise services.

It’s about providing the tools for cloud providers to build differentiated powerful clouds for their customers.


Gardner: Antonio, please, for those of our listeners and readers that aren’t that familiar with Telefonica, tell us about the breadth and depth of your company.

Oriol Barat: Telefonica is one of the top 10 global telco providers in the world. We are in 21 countries. We have fixed and mobile data services, and now we are in the process of digital transformation, where we have our focus in four areas: cloud, security, Internet of Things (IoT), and big data.

We used to think that our core business was in communications. Now we see what we call a new core of our business at the intersection of data communications, cloud, and security. We think this is really the foundation, the platform, of all the services that come on top.

Gardner: And, of course, we would all like to start with brand-new infrastructure when we enter markets. But as you know, we have to deal with what is already in place, too. When it came time for you to come up with the right combination of vendors, the right combination of technologies, to produce your new managed services capabilities, why did you choose HPE and VMware to create this full solution?

Sharing requires trust

Oriol Barat: VMware was our natural choice with its virtualization technologies to start providing shared IT platforms -- even before cloud, as a word, was invented. We launched “virtual hosting” in 2007. That was 10 years ago, and since then we have been evolving from this virtual hosting that had no portal but was a shared platform for customers, to the cloud services that we have today.

The hardware part is important; we have to have reliable and powerful technology. For us, it’s very important to provide trust to the customers. Trust, because what they are running in their data centers is similar to what we have in our data centers. Having VMware and HPE as partners provides this trust to the customers so that they will move the applications, and they know it will work fine.

Gardner: HPE is very fond of its Synergy platform, with composable infrastructure. How did that help you and VMware pull together the full solution for Telefonica, Joe?


Baguley: We have been on this journey together, as Antonio mentioned, since 2007 -- since before cloud was a thing. We don’t have a test environment that’s as big as Telefonica’s production environment -- and neither does HPE. What we have been doing is working together -- and like any of these journeys, there have been missteps along the way. We stumbled occasionally, but it’s been good to work together as a partnership.

As we have grown, we have also both understood how the requirements of the market are changing and evolving. Ten years ago providing a combined cloud platform on a composable infrastructure was unheard of -- and people wouldn’t believe you could do it. But that’s what we have evolved together, with the work that we have done with companies such as Telefonica.

The need for something like HPE Synergy and the Gen10 stack -- where there are these very configurable stacks that you can put together -- has literally grown out of the work that we have done together, along with what we have done in our management stack, with the networking, compute, and storage.

Gardner: The combination of composable infrastructure and SDDC makes for a pretty strong tag team.

Baguley: Yes, definitely. It gives you that flexibility and the agility that a cloud provider needs to then meet the agility requirements of their customers, definitely.

Gardner: When it comes to bringing more end users into the clouds for your managed services providers, one of the important things is for end users to move into that cloud with as much ease as possible. Because VMware is a de facto standard in many markets with its vSphere Hypervisor, how does that help you, being a VMware stack, create that ease of joining these clouds?

Seamless migrations

Oriol Barat: Having the same technology in the customer data center and in our cloud makes things a lot easier. In the first place, in terms of confidence, the customer can be confident that it’s going to work well when it is in place. The other thing is that VMware is providing us with the tools that make these migrations easier.

Baguley: At VMworld 2017, we announced VMware Hybrid Cloud Extension (HCX), which is our hybrid cloud connector. It allows customers to locally install software that connects at a Layer 2 [network] level, as well as right back to vSphere 5.0 in clouds. Those clouds now are IBM and VMware cloud native, but we are extending it to other service providers like Telefonica in 2018.


So a customer can truly feel that their connecting and migrations will be seamless. Things like vSphere vMotion across that gap are going to be possible, too. I think the important thing here is by going down this road, people can take some of the fear out of going to the cloud, because some of the fear is about getting locked in: “I am going to make decisions that I will regret in two years by converting my virtual machines (VMs) to run on another platform.” Right here, there isn’t that fear, there is just more choice, and Telefonica is very much part of that story of choice.

Gardner: It sounds like you have made things attractive for managed service providers in many markets. For example, they gain ease of migration from enterprises into the provider’s cloud. In the case of Telefonica, users gain support, services and integration, knowing that the venerable vendors like VMware and HPE are behind the underlying services.

Do you have any examples where you have been able to bring this total solution to a typical managed service provider account? How has it worked out for them?

Everyone’s doing it

Oriol Barat: We have use cases in all the vertical industries. Because cloud is a horizontal technology, it’s the foundation of everything. I would say that all companies of all verticals are in this process of transformation.

We have a lot of customers in retail that are moving their platforms to cloud. We have had, for example, US companies coming to Europe and deploying their SAP systems on top of our platforms.

For example in Spain, we have a very strong tourism industry with a lot of hotel chains that are also using our cloud services for their reservation systems and for more of their IT.

We have use cases in healthcare, of companies moving their medical systems to our clouds.

We have use cases of software vendors that are growing software-as-a-service (SaaS) businesses and they need a flexible platform that can grow as their businesses grow.

A lot of people are using these platforms as disaster recovery (DR) for the platforms that they have on-premises.

I would say that all verticals are into this transformation.


Gardner: It’s interesting, you mentioned being able to gain global reach from a specific home economy by putting data centers in place with a managed service provider model.

That’s also important for data sovereignty, compliance, General Data Protection Regulation (GDPR) requirements, and other issues. It sounds like a very good market opportunity.

And that brings us to the last part of our discussion. What happens next? When we have proven technology in place, and we have cloud adoption, where would you like to be in 12 months?

Gaining the edge

Baguley: There has been a lot of talk at recent events, like HPE Discover, about intelligent edge developments. We are doing a lot at the edge, too. When you look at telcos, the edge is going to become something quite interesting.

What we are talking about is taking that same blend of storage, networking and compute, and running it on as small a device as possible. So think micro data centers, nano data centers. How far out can we push this cloud? How much can we distribute this cloud? How close to the point of need can we get our customers to execute their workloads, to do their artificial intelligence (AI), to do their data gathering, et cetera?

And working in partnership with someone who has a fantastic cloud and a fantastic network means that building some kind of distributed edge-to-cloud core capability for a customer is something that Telefonica and VMware could probably do over the next 12 months. That could be really, really strong.

Gardner: Antonio?

Oriol Barat: In this transformation that all the enterprises are in, maybe we are in the 20 percent of execution range. So we still have 80 percent of the transformation ahead of us. The potential is huge.

Looking ahead with our services, for example, it’s very important that the network is also in transformation, leveraging the software-defined networking (SDN) technologies. These networks are going to be more flexible. We think that we are in a good position to put together cloud services with such network services -- with security, also with more software-defined capabilities, and create really flexible solutions for our customers.


Baguley: One example that I would like to add: imagine that Real Madrid C.F. are playing at home next weekend ... Hypothetically, Telefonica could use the compute at the bottom of those network base stations -- because with VMware Network Functions Virtualization (NFV), it’s no longer specific base station hardware, it’s x86 HPE servers in there. They could turn to a betting company and say, “Would you like to move your containerized front-end web servers to run in the base station, in Real Madrid’s stadium, for the four hours of that afternoon’s match?” And suddenly they are the best-performing website.

Those are the kinds of out-there, transformative ideas that are now possible because new application infrastructures, new cloud infrastructures, the edge, and technologies like the network are all coming together. Those are the kinds of things you are going to see from this solutions approach going forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

What would Old MacDonald say about your SharePoint farm?

On Old MacDonald’s farm, everything’s in order: pigs here, cows there, sheep over there. A Microsoft SharePoint® farm can quickly get out of control, because images, multimedia, and other types of large, unstructured content slow SharePoint performance and overwhelm storage capacity.

HPE solutions for collaboration modernization using SharePoint Server include Reference Architectures (RAs) that leverage HPE Gen10 innovations to help you take back control.

Infatuation leads to love—How container orchestration and federation enables multi-cloud competition

The use of containers by developers -- and now increasingly IT operators -- has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together ... and maybe even getting some relationship help along the way.

And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts.

This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use serverless computing models, and employ inclusive management to keep the container love alive.

Enter the age of digital transformation with in-memory computing

Digitally transforming your business requires data-driven insights—and that's where in-memory computing can help your enterprise. HPE's SAP HANA solutions are game changers for digital transformation.

Succeeding in today's business world means meeting customers, suppliers, and employees where they live—in the digital universe. For your enterprise, this requires changing the way you communicate, operate, and innovate.

Partnerships performing in 2017 – driving success for NonStop!

“‘Specialization,’ Robert Heinlein once wrote, ‘is for insects.’ If that is truly the case, then the garages of many wealthy automotive enthusiasts are veritable master classes in entomology, bursting with cars that are expected to do just one thing.” So began a recent article in a popular car magazine, with the writer then adding, “Combine this with the relaxed attitude that the owner of a vehicle fleet can enjoy, regarding the reliability of any particular automobile in that collection, and what results is the proverbial soft bigotry of low expectations.” When I first read this, I couldn’t help but reflect on what has brought us to where we are today as far as IT is concerned. The general-purpose computer is no longer a major force in the enterprise, and as we cater to enhanced user experiences, expectations among IT professionals about reliability have become far more casual. However, finely chiseled niche offerings remain important for the enterprise, and their presence within the data center is more often than not the result of partnerships between systems and solutions vendors.

Inside story on HPC's role in the Bridges Research Project at Pittsburgh Supercomputing Center

The next BriefingsDirect Voice of the Customer high-performance computing (HPC) success story interview examines how Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies.

We'll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

How UBC gained TCO advantage via flash for its EduCloud cloud storage service

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost benefits by a move to all-flash storage.

We’ll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list? 

Dunington: A university setting isn't much different than any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids, they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer.


We have to be able to offer those services, because as most people know, there are choices -- and they can go somewhere else and choose those other products.

Our team is a rather small team. There are 15 members in our team, so we have to be agile, we have to be able to automate things, and we need tools that can work and fulfill those needs. So it's just like any other business, even though it’s a university setting.


Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, but we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance.

Because we are not just delivering to UBC, we have to be up and running and always able to deliver, because each school has different requirements. At different times of the year -- because there is registration, there are exam times -- these things have to be up. You can’t not be functioning during an exam and have 600 students unable to take the tests that they have been studying for. It impacts their lives, and we want to make sure that we are there and can provide the services they need.

Gardner: In order to maintain your service levels within those peak times, do you in your IaaS and storage services employ hybrid-cloud capabilities so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It’s a data-sovereignty law, the data can't leave the borders.

That's why EduCloud has been so successful, in my opinion, because of that option. They can just go and throw things out in the private cloud.

The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure cloud are putting data centers in Canada, which is good and it gives people an option. Our team’s goal is to provide the services, whether it's a hybrid model or all on-campus. We just want to be able to fulfill those needs.
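To make that data-residency constraint concrete, here is a minimal Python sketch, for illustration only, of the kind of guard a provisioning workflow might apply before placing data. The region names, country codes, and helper are invented for the example; they are not EduCloud's actual catalog.

# Hypothetical data-residency guard: reject placements outside Canada.
ALLOWED_COUNTRY = "CA"

# Invented example catalog; real region metadata would come from each provider.
REGIONS = {
    "educloud-bc": {"country": "CA", "type": "private"},
    "public-cloud-ca-central": {"country": "CA", "type": "public"},
    "public-cloud-us-east": {"country": "US", "type": "public"},
}

def placement_allowed(region):
    # True only if the region keeps data inside the allowed country.
    meta = REGIONS.get(region)
    return meta is not None and meta["country"] == ALLOWED_COUNTRY

for region in REGIONS:
    verdict = "OK" if placement_allowed(region) else "blocked by residency policy"
    print(region + ": " + verdict)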

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model -- but with the security, control and manageability of the private clouds.

What decisions have you made about your storage underpinnings, the infrastructure that supports your SaaS cloud?

Dunington: We have a large storage footprint. For our site, it’s about 12 petabytes of storage. We realized that we weren’t meeting the needs with spinning disks. One of the problems was that we had runaway virtual workloads that would cause problems, and they would impact other services. We needed some mechanism to fix that.


We went through the whole request for proposal (RFP) process, and all the IT infrastructure vendors responded, but we did have some guidelines that we wanted to go through. One of the things we did is present our problems and make sure that they understood what the problems were and what they were trying to solve.

And there were some minimum requirements. We do have a backup vendor of choice that they needed to merge with. And quality of service is a big thing. We wanted to make sure that we had the ability to attain quality of service levels and control those runaway virtual machines in our footprint.
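As a rough illustration of that quality-of-service requirement -- not the array's actual QoS feature -- here is a small Python sketch that flags virtual machines whose measured I/O exceeds the cap of their assigned tier. The tier names, caps, and VM records are invented.

# Illustrative only: flag "runaway" VMs whose observed IOPS exceed their tier's cap.
TIER_IOPS_CAP = {"bronze": 500, "silver": 2000, "gold": 8000}  # invented values

observed = [
    {"vm": "research-db-01", "tier": "silver", "iops": 6500},
    {"vm": "web-frontend-02", "tier": "gold", "iops": 1200},
    {"vm": "batch-node-07", "tier": "bronze", "iops": 450},
]

for rec in observed:
    cap = TIER_IOPS_CAP[rec["tier"]]
    if rec["iops"] > cap:
        print(rec["vm"], "exceeds its", rec["tier"], "cap of", cap, "IOPS -- throttle or re-tier")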

Gardner: You gained more than just flash benefits when you got to flash storage, right?

Streamlined, safe, flash storage

Dunington: Yes, for sure. With an entire data center full of spinning disks, it gets to the point where the disks start to manage you; you are no longer managing the disks. And with teams out there changing drives and moving volumes around, it becomes unwieldy. I mean, the power, the footprint, and all of that starts to grow.

Also, Vancouver is in a seismic zone, we are right up against the Pacific plate and it's a very active seismic area. Heaven forbid anything happens, but one of the requirements we had was to move the data center into the interior of the province. So that was what we did.

When we brought this new data center online, one of the decisions the team made was to move to an all-flash storage environment. We wanted to be sure that it made financial sense because it's publicly funded, and also improved the user experience, across the province.

Gardner: As you were going about your decision-making process, you had choices, what made you choose what you did? What were the deciding factors?

Dunington: There were a lot of deciding factors. There’s the technology, of being able to meet the performance and to manage the performance. One of the things was to lock down runaway virtual machines and to put performance tiers on others.

But it’s not just the technology; it's also the business part, too. The financial part had to make sense. When you are buying any storage platform, you are also buying the support team and the sales team that come with it.

Our team believes that technology is a certain piece of the pie, and the rest of it is relationship. If that relationship part doesn't work, it doesn’t matter how well the technology part works -- the whole thing is going to break down.

Because software is software, hardware is hardware -- it breaks, it has problems, there are limitations. And when you have to call someone, you have to depend on him or her. Even though you bought the best technology and got the best price -- if it doesn't work, it doesn’t work, and you need someone to call.

So those service and support issues were all wrapped up into the decision.


We chose the Hewlett Packard Enterprise (HPE) 3PAR all-flash storage platform. We have been very happy with it. We knew the HPE team well. They came and worked with us on the server blade infrastructure, so we knew the team. The team knew how to support all of it. 

We also use the HPE OneView product for provisioning, and it integrated with all of that. It also supported the performance optimization tool (IT Operations Management for HPE OneView) to let us set those values, because one of the things in EduCloud is that customers choose their own storage tier, and we mark the price on it. So basically all we would do is present that new tier as new data storage within VMware, and then they would just move their workloads across non-disruptively. It has worked really well.

The 3PAR storage piece also integrates with VMware vRealize Operations Manager. We offer that to all our clients as a portal so they can see how everything is working and they can do their own diagnostics. Because that’s the one goal we have with EduCloud, it has to be self-service. We can let the customers do it, that's what they want.
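To make the self-service tier idea above a little more concrete, here is a tiny, purely hypothetical Python pricing sketch; the tier names, per-GB rates, and tenants are invented and imply nothing about EduCloud's actual pricing.

# Hypothetical self-service tier pricing: charge tenants by the tier they choose.
PRICE_PER_GB_MONTH = {"bronze": 0.05, "silver": 0.10, "gold": 0.20}  # invented rates

tenant_volumes = [
    {"tenant": "school-a", "tier": "silver", "gb": 4096},
    {"tenant": "school-b", "tier": "gold", "gb": 10240},
]

for vol in tenant_volumes:
    cost = vol["gb"] * PRICE_PER_GB_MONTH[vol["tier"]]
    print("{}: {} GB on {} -> ${:.2f}/month".format(vol["tenant"], vol["gb"], vol["tier"], cost))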

Gardner: Not that long ago people had the idea that flash was always more expensive and that they would use it for just certain use-cases rather than pervasively. You have been talking in terms of a total cost of ownership reduction. So how does that work? How does the economics of this over a period of time, taking everything into consideration, benefit you all?

Economic sense at scale

Dunington: Our IT team and our management team are really good with that part. They were able to break it all down, and they found that this model would work at scale. I don’t know the numbers per se, but it made economic sense.

Spinning disks will still have a place in the data center. I don't know a year from now if an all-flash data center will make sense, because there are some records that people will throw in and never touch. But right now with the numbers on how we worked it out, it makes sense, because we are using the standard bronze, the gold, the silver tiers, and with the tiers it makes sense.

The 3PAR solution also has dedupe functionality and the compression that they just released. We are hoping to see how well that trends. Compression has only been around for a short period of time, so I can’t really say, but the dedupe has done really well for us.
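For readers unfamiliar with deduplication, here is a conceptual Python sketch of the idea: identical blocks are stored once and referenced by a content hash. It illustrates the principle only and is not how the array implements it.

import hashlib

BLOCK_SIZE = 4096

def dedupe(data):
    store = {}   # hash -> unique block actually stored
    refs = []    # ordered hashes that reconstruct the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

payload = (b"A" * BLOCK_SIZE) * 3 + (b"B" * BLOCK_SIZE)  # three identical blocks plus one unique
store, refs = dedupe(payload)
print("logical blocks:", len(refs), "unique blocks stored:", len(store))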

Gardner: The technology overcomes some of the other baseline economic costs and issues, for sure.

We have talked about the technology and performance requirements. Have you been able to qualify how, from a user experience, this has been a benefit?

Dunington: The best benchmark is the adoption rate. People are using it, and there are no help desk tickets, so no one is complaining. People are using it, and we can see that everything is ramping up, and we are not getting tickets. No one is complaining about the price, the availability. Our operational team isn't complaining about it being harder to manage or that the backups aren’t working. That makes me happy.

The big picture

Gardner: Brent, maybe a word of advice to other organizations that are thinking about a similar move to private cloud SaaS. Now that you have done this, what might you advise them to do as they prepare for or evaluate a similar activity?


Dunington: Look at the full picture, look at the total cost of ownership. There’s the buying of the hardware, and there's also supporting the hardware, too. Make sure that you understand your requirements and what your customers are looking for first before you go out and buy it. Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there. We will see in a couple of years how it went.

Look at the big picture, step back. It’s not just the new shiny toy, and you might have to take a stepped approach to buying, but for us it worked. I mean, it’s a solid platform, our team sleeps well at night, and I think our customers are really happy with it.
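To ground that total-cost-of-ownership advice, here is a back-of-the-envelope Python sketch. Every number in it is hypothetical; the point is simply that support, power, and administration time belong in the comparison alongside the purchase price.

def five_year_tco(purchase, annual_support, annual_power, annual_admin_hours, hourly_rate):
    # Purchase price plus five years of support, power, and admin labor.
    return purchase + 5 * (annual_support + annual_power + annual_admin_hours * hourly_rate)

spinning_disk = five_year_tco(400000, 40000, 30000, 800, 60)   # invented figures
all_flash = five_year_tco(550000, 45000, 12000, 300, 60)       # invented figures
print("5-year TCO -- spinning disk:", spinning_disk, "all-flash:", all_flash)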

Gardner: This might be a little bit of a pun in the education field, but do your homework and you will benefit.


Dunington: Yes, for sure.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

· How IoT capabilities open new doors for Miami telecoms platform provider Identidad
· DreamWorks Animation crafts its next era of dynamic IT infrastructure
· How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps
· Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack
· Converged IoT systems: Bringing the data center to the edge of everything
· IDOL-powered appliance delivers better decisions via comprehensive business information searches
· OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients
· Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot
· How lastminute.com uses machine learning to improve travel bookings user experience
· HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.


It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We sometimes hear that even at the board of directors level, they are telling people to go cloud-first, or just dump IT altogether. That strikes me as an overreaction. If we’re looking at tools and what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.


Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.


Peters: I often talk about what I call the assumption gap. The assumption gap is that moment where we move from one side, where it’s okay to have lots of questions about something -- in this case, in IT -- to the other side of that gap or chasm, to use a well-worn phrase, where it’s not okay to ask anything because it will look like you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly, and very fast, at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo’s IM, to be precise -- they used to do maintenance between 9 am and 5 pm, because it was just a tool for chatting with your friends at night. Now you have businesses that rely on it. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, x application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.


But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 
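One way to picture that situational judgment is a simple scoring sketch. The Python below is conceptual only; the factors, weights, and numbers are invented to show how criticality, data sovereignty, cost, and performance might be weighed together, not to recommend any particular placement.

def placement_score(candidate, workload):
    if workload["sovereignty_required"] and not candidate["in_country"]:
        return None  # hard fail: the data cannot leave the country
    score = workload["criticality"] * candidate["control"]  # critical apps favor control
    score += 10 - candidate["unit_cost"]                    # cheaper is better
    score += candidate["performance"]
    return score

workload = {"name": "factory-scheduling", "criticality": 5, "sovereignty_required": True}
candidates = [
    {"name": "on-prem-private-cloud", "in_country": True, "control": 3, "unit_cost": 6, "performance": 8},
    {"name": "public-cloud-region-x", "in_country": False, "control": 1, "unit_cost": 3, "performance": 7},
]
for c in candidates:
    print(c["name"], placement_score(c, workload))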

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?


Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.


Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects. Like they are the answer to everything. What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then you figure out how to get it. You should not be sitting down IT managers and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve beaten home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got -- because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of a different way of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how do you gain control? Control -- not for control’s sake -- only by knowing all the things you were trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.


Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.


Visibility is not the same as solving. The point is not just finding out what you have – but of actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, the extremely high levels of expectations both in the speed and flexibility and performance, and so on, mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 
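As a toy illustration of that visibility-then-control sequence, the Python sketch below merges invented inventories from two environments and flags resources with no recorded owner -- the kind of shadow-IT signal you would want surfaced before trying to optimize anything. Real tooling would do this continuously and at far greater scale.

inventories = {
    "on-prem-virtualization": [
        {"id": "vm-101", "owner": "finance"},
        {"id": "vm-102", "owner": None},
    ],
    "public-cloud-a": [
        {"id": "i-abc123", "owner": "web-team"},
        {"id": "i-def456", "owner": None},
    ],
}

total = sum(len(items) for items in inventories.values())
print("resources discovered:", total)
for source, items in inventories.items():
    for res in items:
        if not res["owner"]:
            print("no owner recorded for", res["id"], "in", source, "-- investigate before optimizing")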

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And when I was on the sales side of the business, I’d go to a whole-day class just to learn about something like dual parity or cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it had better work in any temperature and at any speed. We just demand so much from cars.

To stretch that analogy, I’m mixing cars and storage -- and we’ll make it all come together with hybrid IT in that it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economic and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now because you don’t talk about it in those terms. Storage has moved to be -- how do we serve applications? How do we serve up the right place in the right time, get the data to the right person at the right time at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.


We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.


Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future; the path there may not sound great and perfect, but you have to get there. We do quite a lot of research at ESG, and I will throw out just a couple of numbers that I think help explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have -- the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.


People want to enhance what they currently have so that it can embrace the cloud. It's enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use that, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You're not just going to pick one cloud model and hope that it  will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools. 

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.

Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I've heard of one from HPE; it's called Project New Hybrid IT Stack. I'm thinking that HPE can't be the only company. We can't be the only analysts that are seeing what to me is a market opportunity you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are either focused on the cloud side of the business -- and there are some very big names -- or on the on-premises side of the business. Embracing both is not so much difficult for them to do as it is really not at the top of their want-to-do list until they're absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough imagining that particularly large enterprises are going to risk [standardizing on them]. They probably even will start to try and write it themselves, which is possible – unlikely, but possible.

Where I think we will get the list for the other side is some of the other big organizations -- Oracle and IBM spring to mind in terms of being able to embrace both on-premises and off-premises. But, at the end of the day, the commonality among those that we've mentioned is that they are systems companies. They win by delivering the best overall solution and package to their clients, not individual components within it.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management from request for proposals (RFPs) from 50 or 60 different companies to find some solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT, but into the on-premises and off-premises environments and how you manage them -- those are all parts of the solution, or the answer. The real thing, and it's absolutely crucial, is that you don't start with those bright, shiny objects. You don't start with, "How can I deploy more cloud? How can I do hybrid IT?" Those are not good questions to ask. Good questions to ask are, "What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?"

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

NonStop Under 40 - Operation Support - Part 1

NonStop Under 40 - Operation Support - Part 1

There is good news and bad news in the world of NonStop operation support.

First the good news: NonStop operation support teams have always been known to be one of the most technical and versatile groups in any enterprise environment. The team embodies a wealth of technical knowledge that is just as critical in ensuring the fault tolerance of the applications as the underlying NonStop hardware and Guardian operating system. They can diagnose and fix issues spanning hardware, system software, application scheduling, databases, networks, and more.

Kansas Development Finance Authority gains peace of mind, end-points virtual shield using hypervisor-level security

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart invasions have now been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What’s more, the jack-of-all-trades IT leaders of the mid-market concerns are striving to protect more data types on and off premises, their workload servers and expanded networks, as well as the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs -- while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As a director of all of IT at KDFA, security must be a big concern, but it can’t devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That’s a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced the trends that demanded very basic anti-spam solutions and the very basic virus threats that came via the web and e-mail.

What we’ve seen more recently is that the growing trend of enhanced security attacks coming through malware and different exploits -- attacks that were once thought impossible -- is now the reality.

Therefore in recent times, my percentage of time dedicated to security had grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload during each given week.

Gardner: Before we get to how you’ve been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization -- but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us that operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler -- almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You’ve had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?

Kater: We went with virtualization via Citrix. It became our solution of choice because we weren’t willing to pay the extra tax, if you will, for other solutions on the market. We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

When we embraced virtualization, the security approaches were very traditional in nature. The old way of doing things worked fantastically for a physical endpoint.

The traditional approaches to security had been on our physical PCs for years. But when that security came over to the virtual realm, it bogged down our systems. It still required updates to be done manually, and it just wasn’t innovating at the same speed as the virtualization, which was allowing us to create new endpoints.

And so, the maintenance, the updating, the growing threats were no longer being seen by the traditional approaches of security. We had endpoint security in place on our physical stations, but when we went virtual we no longer had endpoint security. We then had to focus on antivirus and anti-spam at the server level.

What we found out very quickly was that this was not going to solve our security issues. We then faced a lot of growing threats again via e-mail, via web, that were coming in through malware, spyware, other activities that were embedding themselves on our file servers – and then trickling down and moving laterally across our network to our endpoints.

Gardner: Just as your organization went virtual and adjusted to those benefits, the malware and the bad guys, so to speak, adjusted as well -- and started taking advantage of what they saw as perhaps vulnerabilities as organizations transitioned to higher virtualization.

Security for all, by all

Kater: They did. One thing that a lot of security analysts, experts, and end-users forget in the grand scheme of things is that this virtual world we live in has grown so rapidly -- and innovated so quickly -- that the same stuff we use to grow our businesses is also being used by the bad actors. So while we are learning what it can do, they are learning how to exploit it at the same speed -- if not a little faster.

Gardner: You recognized that you had to change; you had to think more about your virtualization environment. What prompted you to increase the capability to focus on the hypervisor for security and prevent issues from trickling across your systems and down to your endpoints?

Kater: Security has always been a concern here at KDFA. And there has been more of a security focus recently, with the latest news and trends. We honestly struggled with CryptoLocker, and we struggled with ransomware.

While we never had to pay out any ransom or anything -- and they were stopped in place before data could be exfiltrated outside of KDFA’s network -- we still had two or three days of either data loss or data interruption. We had to pull back data from an archive; we had to restore some of our endpoints and some of our computers.

As we battled these things over a very short period of time, they were progressively getting worse and worse. We decided that we needed a solution for our virtual environment -- one that would not only be easy to deploy and easy to manage, but would be centrally managed as well, enabling me to have more time to focus back on my workload -- and not have to worry so much about the security thresholds that had to be updated and maintained via the traditional model.

So we went out to the market. We ran very extensive proof of concepts (POCs), and those POCs very quickly illustrated that the underlying architecture was only going to be enterprise-ready via two or three vendors. Once we started running those through the paces, Bitdefender emerged for us.

I had actually been watching the Hypervisor Introspection (HVI) product development for the past four years, since its inception came with a partnership between Citrix, Intel, the Linux community and, of course, Bitdefender. One thing that was continuous throughout all of that was that in order to deploy that solution you would need GravityZone in-house to be able to run the HVI workloads.

And so we became early adopters of Bitdefender GravityZone, and we were able to see what it could do for our endpoints, our servers, and our Microsoft Exchange servers. Then, Hypervisor Introspection became another security layer that we were able to build on top of the security solution we had already adopted from Bitdefender.

Gardner: And how long have you had these solutions in place?

Kater: We are going on one and a half to two years with GravityZone. And when HVI went to general availability earlier this year, in 2017, we were one of the first adopters to deploy it across our production environment.

Gardner: If you had a “security is easy” button that you could pound on your desk, what are the sorts of things that you look for in a simpler security solution approach?

IT needs brains to battle breaches

Kater: The “security is easy” button would operate much like the human brain. It would need that level of intuitive instinct, that predictive insight ability. The button would generally be easily managed, automated; it would evolve and learn with artificial intelligence (AI) and machine learning what’s out there. It would dynamically operate with peaks and valleys depending on the current status of the environment, and provide the security that’s needed for that particular environment.

Gardner: Jeff, you really are an early adopter, and I commend you on that. A lot of organizations are not quite as bold. They want to make sure that everything has been in the market for a long time. They are a little hesitant.

But being an early adopter sounds like you have made yourselves ready to adopt more AI and machine learning capabilities. Again, I think that’s very forward-looking of you.

But tell us, in real terms, what has being an early adopter gotten for you? We’ve had some pretty scary incidents just in the recent past, with WannaCry, for example. What has being an early adopter done for you in terms of these contemporary threats?

Kater: The new threats, including the EternalBlue exploit that happened here recently, are very advanced in nature. Oftentimes when these breaches occur, it takes several months before they have even become apparent. And oftentimes they move laterally within our network without us knowing, no matter what you do.

Some of the more advanced and persistent threats don’t even have to infect the local host with any type of software. They work in the virtual memory space. It’s much different than the older threats, where you could simply reboot or clear your browser cache to resolve them and get back to your normal operations.

Earlier, when KDFA still made use of non-persistent desktops, if the user got any type of corruption on their virtual desktop, they were able to reboot, and get back to a master image and move on. However, with these advanced threats, when they get into your network, and they move laterally -- even if you reboot your non-persistent desktop, the threat will come back up and it still infects your network. So with the growing ransomware techniques out there, we can no longer rely on those definition-based approaches. We have to look at the newer techniques.

As far as why we are early adopters, and why I have chosen some of the principles that I have, I feel strongly that you are really only as strong as your weakest link. I strive to provide my users with the most advanced, nimble, and agnostic solutions possible.

We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations. It allows us to have discussions about increasing productivity at that point, and to maximize the potential of our smaller number of users -- versus having to worry about the latest news of security breaches that are happening all around us.

Gardner: You’re able to have a more proactive posture, rather than doing the fire drill when things go amiss and you’re always reacting to things.

Kater: Absolutely.

Gardner: Going back to making sure that you’re getting a fresh image and versions of your tools …  We have heard some recent issues around the web browser not always being safe. What is it about being able to get a clean version of that browser that can be very important when you are dealing with cloud services and extensive virtualization?

Virtual awareness, secure browsing

Kater: Virtualization in and of itself has allowed us to remove the physical element of our workstations when desirable and operate truly in that virtual or memory space. And so when you are talking about browsers, you can have a very isolated, a very clean browser. But that browser is still going to hit a website that can exploit your system. It can run in that memory space for exploitation. And, again, it doesn't rely on plug-ins to be downloaded or anything like that anymore, so we really have to look at the techniques that these browsers are using.

What we are able to do with the secure browsing technique is publish, in our case, via XenApp, any browser flavor with isolation out there on the server. We make it available to the users that have access for that particular browser and for that particular need. We are then able to secure it via Bitdefender HVI, making sure that no matter where that browser goes, no matter what interface it’s trying to align with, it’s secure across the board.

Gardner: In addition to secure browsing, what do you look for in terms of being able to keep all of your endpoints the way you want them? Is there a management approach of being able to verify what works and what doesn’t work? How do you try to guarantee 100 percent security on those many and varied endpoints?

Kater: I am a realist, and I realize that nothing will ever be 100 percent secure, but I really strive for that 99.9 percent security and availability for my users. In doing so -- being that we are so small in staff, and being that I am the one that should manage all of the security, architecture, layers, networking and so forth -- I really look for that centralized model. I want one pane of glass to look at for managing, for reporting.

I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, and what did it do to me and how was I protected. I need that so that I can report to my management staff and say, “Hey, honestly, this is what happened, this is what was happening behind the scenes. This is how we remediated and we are okay. We are protected. We are safe.”

And so I really look for that centralized management. Automation is key. I want something that will automatically update, with the latest virus and malware definitions, but also download the latest techniques that are seen out there via those innovative labs from our security vendors to fully patch our systems behind the scenes. So it takes that piece of management away from me and automates it to make my job more efficient and more effective.

Gardner: And how has Bitdefender HVI, in association with Bitdefender GravityZone, accomplished that? How big of a role does it play in your overall solution?

Kater: It has been a very easy deployment and management, to be honest. Again, entities large and small, we are all facing the same threats. When we looked at ways to attain the best solution for us, we wanted to make sure that all of the main vendors that we make use of here at KDFA were on board.

And it just so happened this was a perfect partnership, again, between Citrix, Bitdefender, Intel, and the Linux community. That close partnership, it really developed into HVI, and it is not an evolutionary product. It did not grow from anything else. It really is a revolutionary approach. It’s a different way of looking at security models. It’s a different way of protecting.

HVI allows for security to be seen outside of the endpoint, and outside of the guest agent. It’s kind of an inside-looking-outward approach. It really provides high levels of visibility, detection and, again, it prevents the attacks of today, with those advanced persistent threats or APTs.

With that said, since the partnership between GravityZone and HVI is so easy to deploy, so easy to manage, it really allows our systems to grow and scale when the need is there. And we just know that with those systems in place, when I populate my network with new VMs, they are automatically protected via the policies from HVI.

Given that security has to be protected from the ground all the way up, we rest assured that the security moves with the workload. As the workload moves across my network and is spawned off onto new VMs, the same set of security policies follows it. That really takes out any human missteps, if you will, along the way, because it’s all automated and it all works hand-in-hand together.
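To make the idea of policy following the workload concrete, here is a minimal conceptual sketch in Python. It is not the Bitdefender GravityZone API -- the class and method names are invented for illustration -- but it shows the pattern Kater describes: a VM spawned into a protected group inherits that group's security policy with no manual step.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SecurityPolicy:
    name: str
    rules: List[str] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    policy: Optional[SecurityPolicy] = None

class ProtectedGroup:
    """Conceptual model: every VM spawned into the group inherits its policy."""

    def __init__(self, name: str, policy: SecurityPolicy):
        self.name = name
        self.policy = policy
        self.vms: List[VirtualMachine] = []

    def spawn_vm(self, vm_name: str) -> VirtualMachine:
        # The policy follows the workload automatically -- no manual step.
        vm = VirtualMachine(name=vm_name, policy=self.policy)
        self.vms.append(vm)
        return vm

hvi_policy = SecurityPolicy("hvi-baseline", rules=["memory introspection", "anti-ransomware"])
production = ProtectedGroup("production", hvi_policy)

for i in range(3):
    vm = production.spawn_vm(f"app-server-{i}")
    print(f"{vm.name} is protected by policy '{vm.policy.name}'")
```

The point of the sketch is simply that protection is a property of where the workload lands, not a separate administrative task performed afterward.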

Behind the screens

Gardner: It sounds like you have gained increased peace of mind. That’s always a good thing in IT; certainly a good thing for security-oriented IT folks. What about your end-users? Has the ability to have these defenses in place allowed you to give people a bit more latitude with what they can do? Is there a productivity, end-user or user experience benefit to this?

Kater: When it comes to security agents and endpoint security as a whole, I think a lot of people would agree with me that the biggest drawback when implementing those into your work environment is loss of productivity. It’s really not the end-user’s fault. It’s not a limitation of what they can and can't do, but it’s what happens when security puts an extra load on your CPU, it puts extra load on your RAM; therefore, it bogs down your systems. Your systems don’t operate as efficiently or effectively and that decreases your productivity.

With Bitdefender, and the approaches that we adopted, we have seen very, very limited -- almost immeasurable -- impacts on our network and on our endpoints. So user adoption has been greater than it ever has been for a security solution.

I’m also able to manipulate our policies within that Central Command Center or Central Command Console within Bitdefender GravityZone to allow my users, at will, if they would like, to see what they are being blocked against, and which websites they are trying to run in the background. I am able to pass that through to the endpoint for them to see firsthand. That has been a really eye-opening experience.

We used to compute daily, thinking we were protected, and that nothing was running in the background. We were visiting the pages, and those pages were acting as though we thought that they should. What we have quickly found out is that any given page can launch several hundred, if not thousands, of links in the background, which can then become an exploit mechanism, if not properly secured.

Gardner: I would like to address some of the qualitative metrics of success when you have experienced the transition to more automated security. Let’s begin with your time. You said you went from five or 10 percent of time spent on security to 50 or 60 percent. Have you been able to ratchet that back? What would you estimate is the amount of time you spend on security issues now, given that you are one and a half years in?

Kater: Dating back 5 to 10 years ago with the inception of VDI, my security footprint as far as my daily workload was probably around that 10 percent. And then, with the growing threats in the last two to three years, that ratcheted it up to about 50 percent, at minimum, maybe even 60 percent. By adopting GravityZone and HVI, I have been able to pull that back down to only consume about 10 percent of my workload, as most of it is automated for me behind the scenes.

Gardner: How about ransomware infections? Have you had any of those? Or lost documents, any other sort of qualitative metrics of how to measure efficiency and efficacy here?

Kater: I am happy to report that since the adoption of GravityZone, and now with HVI as an extra security layer on top of Bitdefender GravityZone, that we have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Gardner: Well, that speaks for itself. Let’s look to the future, now that you have obtained this. You mentioned earlier your interest in AI, machine learning, automating, of being proactive. Tell us about what you expect to do in the future in terms of an even better security posture.

Safety layers everywhere, all the time

Kater: In my opinion, again, security layers are vital. They are key to any successful deployment, whether you are large or small. It’s important to have all of your traditional security hardware and software in place working alongside this new interwoven fabric, if you will, of software -- and now at the hypervisor level. This is a new threshold. This is a new undiscovered territory that we are moving into with virtual technologies.

As that technology advances, and more complex deployments are made, it’s important to protect that computing ability every step of the way; again, from that base and core, all the way into the future.

More and more of my users are computing remotely, and they need to have the same security measures in place for all of their computing sessions. What HVI has been able to do for me here in the current time, and in moving to the future, is I am now able to provide secure working environments anywhere -- whether that’s their desktop, whether that’s their secure browser. I am able to leverage that HVI technology once they are logged into our network to make their computing from remote areas safe and effective.

Gardner: For those listening who may not have yet moved toward a hypervisor-level security – or who have maybe even just more recently become involved with pervasive virtualization and VDI -- what advice could you give them, Jeff, on how to get started? What would you suggest others do that would even improve on the way you have done it? And, of course, you have had some pretty good results.

Kater: It’s important to understand that everybody’s situation is very different, so identifying the best solutions for everybody is very much on an individual corporation basis. Each company has its own requirements, its own compliance to follow, of course.

The best advice that I can give is pick two or three vendors, at the least, and run very stringent POCs; no matter what they may be, make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network. Then, when you have two or three that come out of that and that you feel strongly about, continue to break them down.

I cannot stress the importance of POCs enough. It’s very important to identify that one or two that you really feel strongly about. Once you identify those, then talk to the industry experts that support those technologies, talk to the engineers, really get the insight from the inside out on how they are innovating and what their plan is for the future of their products to make sure that you are on a solid footprint.

Most success stories involve a leap of faith. With machine learning and AI, we are now taking a leap that is backed by factual knowledge and analyzing techniques to stay ahead of threats. No longer are we relying on those virus definitions and those virus updates that can be lagging sometimes.

Gardner: Before we sign off, where do you go to get your information? Where would you recommend other people go to find out more?

Kater: Honestly, I was very fortunate that HVI at its inception fell into my lap. When I was looking around at different products, we just hit the market at the right time. But to be honest with you, I cannot stress enough, again, run those POCs.

If you are interested in finding out more about Bitdefender and its product line up, Bitdefender has an excellent set of engineers on staff; they are very knowledgeable, they are very well-rounded in all of their individual disciplines. The Bitdefender website is very comprehensive. It contains many outside resources, along with inside labs reporting, showcasing just what their capabilities are, with a lot of unbiased opinions.

They have several video demos and technical white papers listed out there; you can find them all across the web, and you can request a full product demo when you are ready for it and run that POC of Bitdefender products in-house on your network. They also have presales support that will help you all along the way.

Bitdefender HVI will revolutionize your data center security capacity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Bitdefender.

You may also be interested in:

Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional factors into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?

Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
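To put rough numbers on the physics Burris describes, here is a minimal back-of-the-envelope sketch in Python. The distances, the fiber slowdown factor, and the routing overhead are illustrative assumptions, not measurements cited in the discussion.

```python
# Back-of-the-envelope round-trip-time estimate for a request served far from its data.
# Every figure here is an illustrative assumption.

SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_SLOWDOWN = 1.5            # light in optical fiber travels roughly 1.5x slower
ROUTING_OVERHEAD = 1.3          # real network paths are rarely straight lines

def one_way_latency_ms(distance_km: float) -> float:
    """Estimate one-way propagation delay in milliseconds."""
    effective_speed = SPEED_OF_LIGHT_KM_S / FIBER_SLOWDOWN
    return (distance_km * ROUTING_OVERHEAD / effective_speed) * 1000

for label, km in [("Same metro", 50),
                  ("Cross-continent", 4_000),
                  ("Halfway around the globe", 19_000)]:
    rtt = 2 * one_way_latency_ms(km)
    print(f"{label:25s} ~{rtt:6.1f} ms round trip (propagation only)")
```

Even before bandwidth and processing are counted, a chatty application that makes dozens of such round trips per transaction can accumulate seconds of delay.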

Gardner: As you are factoring your architecture, you need to take these local considerations into account, particularly when you are factoring costs. If you have to do some heavy lifting and make your bandwidth capable, it might be better to have a local closet-sized data center, because they are small and efficient these days, and you can stick with a private cloud or on-premises approach. At the least, you should factor the economic basis for comparison, with all these other variables you brought up.

Edge centers

Burris: That’s correct. In fact, we call them “edge centers.” For example, if the application involves the Internet of Things (IoT), then there will likely be latency considerations, and the cost of a round-trip message over a few thousand miles can be pretty significant when we consider how fast computing can be done these days.

The first consideration is: What are the impacts of latency for an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, “I would like to enter a new market or expand my presence in an existing market in a cost-effective way. And to do that, I want the system to be more fully automated as it serves that particular market or that particular group of customers. And perhaps it’s something that looks more process manufacturing-oriented, or something along those lines, that has IoT capabilities.”

The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor cost associated with the implementation.

The only way you are going to do that is if you do introduce a fair amount of automation and if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round-trip cost of moving the data from a remote global location back to somewhere in North America -- independent of whether it’s legal or not – comes at a cost that exceeds the automation moment, then you just flat out can’t do it. Now, that is the most obvious and stringent consideration.

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, moving data out of a small wind farm can be 10 times as expensive as processing it locally. It can cost hundreds of thousands of dollars a year just to do relatively simple and straightforward types of data analysis on the performance of that wind farm.
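As a rough illustration of that kind of model study -- every figure below is an assumption chosen only to show the arithmetic, not a number from Wikibon's research -- compare shipping all of the raw telemetry over an expensive remote-site link with processing it locally and shipping only summaries.

```python
# Compare shipping all raw telemetry from a remote site with shipping only local summaries.
# Every number is an assumption for illustration; real costs depend on the site's link.

TURBINES = 40
SAMPLES_PER_SEC_PER_TURBINE = 200      # assumed sensor readings per second
BYTES_PER_SAMPLE = 64
REMOTE_LINK_PRICE_PER_GB = 7.00        # assumed $/GB for a leased or satellite backhaul
SECONDS_PER_YEAR = 365 * 24 * 3600

raw_gb_per_year = (TURBINES * SAMPLES_PER_SEC_PER_TURBINE * BYTES_PER_SAMPLE
                   * SECONDS_PER_YEAR) / 1e9
summary_gb_per_year = raw_gb_per_year / 500   # assume local models keep ~0.2% as summaries

print(f"Ship raw telemetry out:     {raw_gb_per_year:10,.0f} GB/yr "
      f"-> ${raw_gb_per_year * REMOTE_LINK_PRICE_PER_GB:11,.0f}/yr")
print(f"Process locally, ship sums: {summary_gb_per_year:10,.0f} GB/yr "
      f"-> ${summary_gb_per_year * REMOTE_LINK_PRICE_PER_GB:11,.0f}/yr")
```

Under these assumed figures the raw-shipment approach runs to six figures per year, while keeping the heavy processing local reduces the transfer bill to a rounding error -- the same order-of-magnitude gap the discussion points to.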

Process locally, act globally

It’s a lot better to have a local presence that can handle local processing requirements against models that are operating on locally derived or locally generated data, and to let that work be automated, with only periodic visibility into how the overall system is working. And that’s where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work be pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very closely in some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.
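A minimal sketch of that Sense, Infer, and Act loop, with only small periodic summaries sent upstream, might look like the following. The sensor, model, and actuation functions are hypothetical stand-ins for whatever the local system actually runs.

```python
import random

def sense() -> float:
    """Hypothetical stand-in for reading a local sensor."""
    return random.gauss(50.0, 5.0)

def infer(reading: float, threshold: float = 60.0) -> bool:
    """Hypothetical stand-in for a locally deployed model deciding whether to act."""
    return reading > threshold

def act() -> None:
    """Hypothetical stand-in for local actuation (throttle a turbine, open a valve)."""
    print("local corrective action taken")

def sense_infer_act_loop(cycles: int = 300, report_every: int = 100) -> None:
    """Keep the fast loop entirely local; ship only periodic summaries upstream."""
    summary = {"readings": 0, "actions": 0}
    for i in range(cycles):
        reading = sense()
        summary["readings"] += 1
        if infer(reading):
            act()                      # must complete within the local automation moment
            summary["actions"] += 1
        if (i + 1) % report_every == 0:
            # Periodic visibility: send an aggregate, not the raw data stream.
            print(f"upload summary to central site: {summary}")
            summary = {"readings": 0, "actions": 0}

if __name__ == "__main__":
    sense_infer_act_loop()
```

The fast path never leaves the device; only the occasional aggregate crosses the wide-area link for central visibility and model retraining.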

Gardner: It’s a fascinating subject: what to push down to the edge -- given that storage costs, processing costs, and footprint are all coming down -- and what to then use the public cloud or Infrastructure-as-a-Service (IaaS) environment for.

But before we go into any further, Peter, tell us about yourself, and your organization, Wikibon.

Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It's not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently, and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are all about. And then finally, we look at how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy -- which is what we call “systems of agency.” And then ultimately, we look at how cloud and historical infrastructure are going to come together and be optimized to support all of those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as they become an increasing feature of any company’s value proposition. That’s what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment in terms of speeds, feeds, and technology. I would like to elevate it to the issues of culture, politics, and perception. Because in recent years, with offshoring and intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of second-tier providers, but the top tier is certainly US-based.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that's US-based?

Burris: There are two perspectives on that, but let me add one more check on this first. Alibaba clearly is one of the top tier, and they are not based in the US, and that may be one of the advantages that they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let's just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.

From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity. 

Money, you can put it here or you can put it there. Time and people, you can put here or you can put there. That machine can be dedicated to this kind of work or that kind of work.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means that, for example, my customer data in Germany may not appreciate, or may not be able to generate, the same types of returns as my customer data in the US.

Now, that sets aside any moral question of whether or not Germany or the US has better privacy laws and protects the consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right. On the one hand, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there's some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren't necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise -- their legal, marketing, security, risk avoidance and governance units -- right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern -- or that type of concern -- is minuscule relative to the broader questions of cloud computing, of the grabbing of your data and holding it a hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.

Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud -- think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but driven by executive compensation schemes, literally. So, where executives were told that they would be paid on the basis of return in net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

The same type of thinking pertains here -- the goal is not to get rid of IT assets since those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands it?” The cloud experience, which is a very, very powerful concept, ultimately means getting access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of the cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is having a technology stack that provides a consistent cloud experience wherever the data has to run -- whether that’s because of IoT, privacy issues, or intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires it, so that you don’t have to move the data just to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all of those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It must involve a degree of distribution that completely breaks most models. We’re thinking about infrastructure and architecture, but also data management, system management, security management, and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop -- and they will continue to drop -- it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.
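A toy model of that time-based cost metric might look like this; the per-step processing times and one-way latencies are invented purely to show how placement changes end-to-end completion time.

```python
# Toy comparison of end-to-end completion time for one piece of work, depending on
# where each step runs. All processing times and latencies are invented for illustration.

STEP_PROCESSING_MS = {"ingest": 5, "score": 20, "persist": 10}

# Assumed one-way network latency, in milliseconds, from the event source to each location.
LATENCY_TO_MS = {"edge": 2, "regional_cloud": 25, "distant_cloud": 120}

def completion_time_ms(placement: dict) -> float:
    """Sum each step's processing time plus a round trip to wherever it is placed."""
    return sum(STEP_PROCESSING_MS[step] + 2 * LATENCY_TO_MS[location]
               for step, location in placement.items())

all_remote = {"ingest": "distant_cloud", "score": "distant_cloud", "persist": "distant_cloud"}
hybrid     = {"ingest": "edge", "score": "edge", "persist": "regional_cloud"}

print(f"All steps in a distant cloud: {completion_time_ms(all_remote):6.0f} ms")
print(f"Hybrid edge placement:        {completion_time_ms(hybrid):6.0f} ms")
```

Under these assumed numbers, moving the time-sensitive steps closer to the event cuts completion time by an order of magnitude, which is the sense in which time becomes the deciding cost metric.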

The fifth issue, which really places an enormous strain on IT, is how we think about backing up and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services -- and as these applications increasingly are the basis for the business and the end-value that we’re creating -- we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up -- and, even more importantly, restore -- applications and even businesses. The issue becomes associated more with restoring: How do we restore applications and businesses across this incredibly complex arrangement of services, data locations, and sources?

I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work. The role the data scientists and analytics are going to play in working with application developers – again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in and bring new types of management, with maturity and visibility. Who are some of the players that might fill that role? One that I am familiar with -- and I think I have seen them on theCUBE -- is Hewlett Packard Enterprise (HPE), with what they call Project New Hybrid IT Stack. We still don't know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about relying on a single hyperscaler or a handful of hyperscale providers. What's the opportunity for a business to come into this problem set and start to solve it? What do you think, from what you've heard so far, about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That's a great question, and I'm going to answer it in three parts. Part number one: if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computer. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When customers wanted to tie those networks together to build more distributed applications, the mini-computer companies said, "Yeah, just bridge our network." That was an unsatisfying answer for users. So along came Cisco and TCP/IP, which flattened out all those mini-computer networks -- and, in the process, flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

The second thing is that to build the next generations of more complex applications -- especially applications that involve capabilities like deep learning or machine learning with increased automation -- we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology to determine how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications unless we also take full advantage of those same technologies inside how we run our infrastructure and subsystems, and in everything else we need to do from a hybrid cloud standpoint.
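
One minimal sketch of what "the infrastructure using these technologies on itself" can mean in practice: flag telemetry readings that drift far from recent behavior and hand them to automation. The metric, window size, and threshold below are illustrative assumptions, not any vendor's API; a production system would feed such signals into automated remediation rather than a print statement.

# A minimal sketch of infrastructure watching itself: flag telemetry readings
# that drift far from recent behavior using a rolling mean and standard
# deviation. Thresholds and metric values are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

class TelemetryWatcher:
    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings for one metric
        self.threshold = threshold          # deviations that count as anomalous

    def observe(self, value):
        """Return True if this reading looks anomalous against recent history."""
        anomalous = False
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

if __name__ == "__main__":
    watcher = TelemetryWatcher()
    readings = [52, 55, 51, 54, 53, 50, 56, 52, 54, 51, 53, 95]  # utilization %
    for i, r in enumerate(readings):
        if watcher.observe(r):
            print(f"reading {i}: {r}% flagged for automated remediation")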

And the third thing: ultimately, companies are going to have to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure and really understand the problem. They need experience with many different environments, not just one operating system or one cloud platform. They will need experience with these advanced applications, and both the brainpower and the inclination to invest appropriately in those capabilities so they can build the type of platforms we are talking about. There are not a lot of companies out there that can.

There are a few out there, and certainly HPE with its New Stack initiative is one of them; we at Wikibon are especially excited about it. It's new and it's immature, but HPE has a lot of the pieces that will be required to make a go of this technology. It's going to be one of the most exciting areas of invention over the next few years, and we look forward to working with our user clients to introduce some of these technologies and innovate with them. That's crucial for solving the next generation of problems the world faces; we can't move forward without these new classes of hybrid technologies that weave together fabrics capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: