

How Norway’s Fatland beat back ransomware thanks to a rapid backup and recovery data protection stack approach

Learn how an integrated backup and recovery capability allowed production processing systems to be snapped back into use in only a few hours.

How hybrid cloud deployments gain traction via Equinix datacenter adjacency coupled with the Cloud28+ ecosystem

Learn how Equinix, Microsoft Azure Stack, and HPE’s Cloud28+ help MSPs and businesses alike obtain world-class hybrid cloud implementations.

Ryder Cup provides extreme use case for managing the digital edge for 250K mobile golf fans

A discussion on how the 2018 Ryder Cup golf match between European and US players places unique technical and campus requirements on its operators.

How the Internet of Things Is Cultivating a New Vision for Agriculture


ABOUT THE AUTHOR

IsaacRo

Technologist in the making and proud geek. I crave chaos from disruptive tech trends: #IoT #BigData #AI. Currently leading Digital Marketing and Events @HPE_IoT

To head off the threat of food shortages for a global population estimated to top 9 billion by 2050, the world’s agricultural output must double. That mandates innovation to improve monitoring of conditions in the field in order to reduce inputs while maximizing yield and nutritional value. It also means processing data from agricultural land, machines and facilities more efficiently to accelerate research.

These are ideal applications for IoT technologies and edge computing, which is why Hewlett Packard Enterprise is partnering with Purdue University, one of the world’s leading agricultural colleges, to create a new vision for farming and agricultural research in the 21st century. The partnership’s efforts attracted a lot of attention at HPE Discover Las Vegas in June. HPE’s Janice Zdankus, VP for Quality and Purdue University executive sponsor, joined Patrick Smoker, Director and Department Head of Agriculture IT at Purdue, to talk about the massive innovation driving a smarter, more connected, more sustainable agriculture.

Watch the video to learn:

  • How edge computing powered by HPE Edgeline and connectivity tech from Aruba, an HPE company, capture terabytes of data from every inch of Purdue’s 1,400-plus-acre field research station
  • How intelligent edge technologies accelerate time-to-discovery for research teams
  • How the partners’ innovations will support economic development in Purdue’s home state of Indiana and around the world.

Patrick expanded on these comments in an interview with tech blogger Jake Ludington. How will IoT technologies – including wearables – improve the health and living conditions of livestock? How does the university’s research translate into entrepreneurial opportunities? Watch the video to find out.

Janice also talked with Jake in the interview below. Learn how the partnership with Purdue fits into the broader framework of HPE’s philanthropic efforts, and what comes next for the partners’ digital agriculture initiative.

The Intelligent Edge was one of the main themes at HPE Discover 2018. We announced new edge-to-cloud solutions that enable organizations to run unmodified enterprise-class applications and management software at the edge. Learn more in this post: Unleash the power of the cloud, right at your edge. The latest HPE Edgeline Systems capabilities.

Learn more about HPE Edgeline Converged Edge Systems here.

Featured Articles:

Intelligent IoT Powers Purdue’s Digital Agriculture Initiative for Food Security Worldwide

Purdue University partners with HPE and Aruba in digital-agriculture initiative to fight world hunger

New strategies emerge to stem the costly downside of complex cloud choices

A discussion on what causes haphazard cloud use, and how new tools, processes, and methods are bringing actionable analysis to regain control over hybrid IT sprawl.

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by  Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions -- literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?

How to solve key challenges

With HPE SimpliVity HCI

Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.
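
As an illustration of how such a refresh shrinks from days to minutes once the platform does the heavy lifting: with an HCI system that exposes a clone operation through a REST API, spinning up a developer copy of production becomes a short script. The sketch below is only that -- a sketch; the host name, endpoints, and field names are hypothetical placeholders, not the documented API of HPE SimpliVity or any other product.

```python
# A minimal sketch of scripting a dev-environment refresh against an HCI
# platform's REST API. The host, endpoints, and JSON fields below are
# hypothetical placeholders, not a documented product API.
import requests

HCI_API = "https://hci.example.internal/api"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # token obtained out of band

def clone_vm(source_vm_id: str, clone_name: str) -> str:
    """Request a space-efficient, application-consistent clone of a VM."""
    resp = requests.post(
        f"{HCI_API}/virtual_machines/{source_vm_id}/clone",
        headers=HEADERS,
        json={"name": clone_name, "app_consistent": True},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

# Refresh one clone of the production database VM per development team.
for team in ["claims", "policy", "billing", "analytics"]:
    task = clone_vm("prod-db-vm", f"dev-{team}-db")
    print(f"Clone for {team} submitted, task {task}")
```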

Gardner: One of the benefits of going at the infrastructure level for such a solution is not only do you solve one problem -- but you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time, Storage Savings 

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.


At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.
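
To see why ten copies of the same production environment need not mean ten times the storage, here is a small, purely illustrative sketch of content-addressed deduplication -- the general technique HCI platforms apply at the block level. The block size and hash choice are assumptions made for the example, not details of any particular product.

```python
import hashlib
import os

BLOCK_SIZE = 8192  # assumed block size, for illustration only

class DedupStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self):
        self.blocks = {}    # sha256 digest -> block bytes, stored once
        self.volumes = {}   # volume name -> ordered list of digests

    def write_volume(self, name, data):
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicate blocks cost nothing extra
            digests.append(digest)
        self.volumes[name] = digests

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

    def logical_bytes(self):
        return sum(sum(len(self.blocks[d]) for d in v) for v in self.volumes.values())

# One "production" image cloned ten times, as in the example above:
production = os.urandom(BLOCK_SIZE * 1024)           # ~8 MB of unique data
store = DedupStore()
for n in range(10):
    store.write_volume(f"prod-copy-{n}", production)

print(store.logical_bytes() // 2**20, "MB logical")   # ~80 MB presented to hosts
print(store.physical_bytes() // 2**20, "MB physical") # ~8 MB actually stored
```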

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers because they needed resources and we just couldn’t provide them with those resources. And it was not because we didn’t have resources at hand, but it was just the time to spin it up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

As your vehicle depreciates, so does your monthly insurance premium. That has been our biggest selling point.  

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.

How to solve key challenges

With HPE SimpliVity HCI

Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing? 

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

The gains we saw were just so big ... Now we have the entire environment running on SimpliVity cubes.  

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep Storage Simple

Steyn: We are currently using HPE 3PAR storage, and it’s working quite well. We have some production environments running there, and we use a lot of it for archiving. It’s still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that was going to be it. We very soon adopted HCI into the production environments. And it was at that point that we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance -- it’s scary.

How to solve key challenges

With HPE SimpliVity HCI

Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks

The next BriefingsDirect agile data center architecture interview explores how an Ericsson and Hewlett Packard Enterprise (HPE) partnership establishes a mobile telecommunications stack that accelerates data services adoption in rapidly advancing economies. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll now learn how this mobile business support infrastructure possesses a low-maintenance common core -- yet remains easily customizable for regional deployments just about anywhere. 

Here to help us define the unique challenges of enabling mobile telecommunications operators in countries such as Bangladesh and Uzbekistan, we are joined by Mario Agati, Program Director at Ericsson, based in Amsterdam, and Chris James-Killer, Sales Director for HPE. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the unique challenges that mobile telecommunications operators face when they go to countries like Bangladesh?


Agati: First of all, these are countries with a very low level of revenue per user (RPU). That means for them cost efficiency is a must. All of the solutions that are going to be implemented in those countries should be, as much as possible, focused on cost efficiency, reusability, and industrialization. That’s one of the main reasons for this program. We are addressing those types of needs -- of high-level industrialization and reusability across countries where cost-efficiency is king.

Gardner: In such markets, the technology needs to be as integrated as possible because some skill sets can be hard to come by. What are some of the stack requirements from the infrastructure side to make it less complex?

James-Killer: These can be very challenging countries, and it’s key to do the pre-work as systematically as you can. So, we work very closely with the architects at Ericsson to ensure that we have something that’s repeatable, that’s standardized and delivers a platform that can be rolled out readily in these locations. 

Even countries such as Algeria are very difficult to get goods into, and so we have to work with customs, we have to work with goods transfer people; we have to work on local currency issues. It’s a big deal.

Learn More About the

HPE and Ericsson Alliance

Gardner: In a partnership like this between such major organizations as Ericsson and HPE, how do you fit together? Who does what in this partnership?

Agati: At Ericsson, we are the prime integrator responsible for running the overall digital transformation. This is for a global operator that is presently in multiple countries. It shows the complexity of such deals.

We are responsible for delivering a new, fully digital business support system (BSS). This is core for all of the telco services. It includes all of the business management solutions -- from the customer-facing front end, to billing, to charging, and the services provisioning.

In order to cope with this level of complexity, we at Ericsson rely on a number of partners that are helping us where we don’t have our own solutions. And, in this case, HPE is our selected partner for all of the infrastructure components. That’s how the partnership was born.

Gardner: From the HPE side, what are the challenges in bringing a data center environment to far-flung parts of the world? Is this something that you can do on a regional basis, with a single data center architecture, or do you have to be discrete to each market?

Your country, your data center

James-Killer: It is more bespoke than we would like. It’s not as easy as just sending one standard shipping container to each country. Each country has its own dynamic, its own specific users. 

The other item worth mentioning is that each country needs its own data center environment. We can’t share them across countries, even if the countries are right next to each other, because there are laws that dictate this separation in the telecommunications world. 


So there are unique attributes for each country. We work with Ericsson very closely to make sure that we remove as many itemized things as we can. Obviously, we have the technology platform standardized. And then we work out what’s additionally required in each country. Some countries require more of something and some countries require less. We make sure it’s all done ahead of time. Then it comes down to efficient and timely shipping, and working with local partners for installation.

Gardner: What is the actual architecture in terms of products? Is this heavily hyper-converged infrastructure (HCI)-oriented, and software-defined? What are the key ingredients that allow you to meet your requirements?

James-Killer: The next iterations of this will become a lot more advanced. It will leverage a composable infrastructure approach to standardize resources and ensure they are available to support required workloads. This will reduce overall cost, reduce complexity, and make the infrastructure more adaptable to the end customers’ business needs and how they change over time. Our HPE Synergy solution is a critical component of this infrastructure foundation. 

At the moment we have to rely on what’s been standardized as a platform for supporting this BSS portfolio.

This platform has been established for years and years. So it is not necessarily on the latest technology ... but it's a good, standardized, virtualized environment to run this all in a failsafe way.

We have worked with Ericsson for a long time on this. This platform has been established for years and years. So it is not necessarily on the latest technology; the latest is being tested right now. For example, the Ericsson Karlskrona BSS team in Sweden is currently testing HPE Synergy. But, as we speak, the current platform is HPE Gen9, so it’s ProLiant servers. HPE Aruba is involved; a lot of heavy-duty storage is involved as well.

But it’s a good, standardized, virtualized environment to run this all in a failsafe way. That’s really the most critical thing. Instead of being the most advanced, we just know that it will work. And Ericsson needs to know that it will work because this platform is critical to the end-users and how they operate within each country.

Gardner: These so-called IT frontiers countries -- in such areas as Southeast Asia, Oceania, the Middle East, Eastern Europe, and the Indian subcontinent -- have a high stake in the success of mobile telecommunications. They want their economies to grow. Having a strong mobile communications and data communications infrastructure is essential to that. How do we ensure the agility and speed? How are you working together to make this happen fast?

Architect globally, customize locally

Agati: This comes back to the industrialization aspect. By being able to define a group-wide solution that is replicable in each of these countries, you are automatically providing a de facto solution in countries where it would be very difficult to develop locally. They obtain a complex, state-of-the-art core telco BSS solution. Thanks to this group initiative, we are able to define a strong set of capabilities and functions, an architecture that is common to all of the countries. 

That becomes a big accelerator because the solution comes pre-integrated, pre-defined, and is just ready to be customized for whatever remains to be done locally. There are always aspects of the regulations that need to be taken care of locally. But you can start from a predefined asset that is already covering some 80 percent of your needs.
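
A minimal sketch of that “common core, customized locally” pattern: a group-wide baseline profile merged with a small per-country overlay. The parameter names and values below are invented for illustration and are not drawn from the actual Ericsson BSS stack.

```python
from copy import deepcopy

# Group-wide baseline: identical in every market (illustrative values only).
BASELINE = {
    "billing": {"cycle_days": 30, "currency": "USD"},
    "charging": {"realtime": True},
    "identity": {"kyc_method": "national_id"},
    "infrastructure": {"platform": "virtualized", "sites": 1},
}

# Local overlay: only what regulation or market conditions force to differ.
BANGLADESH_OVERLAY = {
    "billing": {"currency": "BDT"},
    "identity": {"kyc_method": "fingerprint_biometric"},  # no universal ID documents
}

def merge(base: dict, overlay: dict) -> dict:
    """Recursively apply a country overlay on top of the common core."""
    result = deepcopy(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

print(merge(BASELINE, BANGLADESH_OVERLAY))
```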

Learn More About the

HPE and Ericsson Alliance

In a relatively short time, in those countries, they obtain a state-of-the-art, brand-new, digital BSS solution that otherwise would have required a local and heavy transformation program -- with all of the complexity and disadvantages of that.

Gardner: And there’s a strong economic incentive to keep the total cost of IT for these BSS deployments at a low percentage of the carriers’ revenue.

Shared risk, shared reward

Agati: Yes. The whole idea of the digital transformation is to address different types of needs from the operator’s perspective. Cost efficiency is probably the biggest driver because it’s the one where the shareholders immediately recognize the value. There are other rationales for digital transformation, such as relating to the flexibility in the offering of new services and of embracing new business models related to improved customer experiences. 

On the topic of cost efficiency, we have created with a global operator an innovative revenue-share deal. From our side, we commit to providing them a solution that enables them a certain level of operational cost reduction. 

The current industry average cost of IT is 5 to 6 percent of total mobile carrier revenue. Now, thanks to the efficiency that we are creating from the industrialization and re-use across the entire operator’s group, we are committed to bringing the operational cost down to the level of around 2 percent. In exchange, we will receive a certain percentage of the operator’s revenue back. 
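
A quick back-of-the-envelope calculation shows the size of the pool such a revenue-share commitment is paid from. The annual revenue figure is an assumption chosen only to make the arithmetic concrete; the 5 to 6 percent and roughly 2 percent figures are the ones cited above.

```python
# Back-of-the-envelope view of the revenue-share model described above.
# The revenue figure is an assumption; the percentages come from the interview.
revenue = 1_000_000_000          # assumed annual operator revenue, USD

it_cost_today = 0.055 * revenue  # midpoint of the 5-6 percent industry average
it_cost_target = 0.02 * revenue  # committed operational cost level
savings = it_cost_today - it_cost_target

print(f"IT cost today:   ${it_cost_today:,.0f}")
print(f"IT cost target:  ${it_cost_target:,.0f}")
print(f"Annual savings:  ${savings:,.0f}")   # the pool a revenue share is paid from
```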

That is for us, of course, a bold move. I need to say this clearly, because we are betting on our capability of not only providing a simple solution, but on also providing actual shareholder value, because that's the game we are actually playing in now.

It's a real quality of life issue ... These people need to be connected and haven't been connected before.

We are risking our own money on it at the end of the game. So that's what makes the big difference in this deal against any other deal that I have seen in my career -- and in any other deal that I have seen in this industry. There is probably no one that is really taking on such a huge challenge.

Gardner: It's very interesting that we are seeing shared risks, but then also shared rewards. It's a whole different way of being in an ecosystem, being in a partnership, and investing in big-stakes infrastructure projects.

Agati: Yes. 

Gardner: There has been recent activity for your solutions in Bangladesh. Can you describe what's been happening there, and why that is illustrative of the value from this approach?

Bangladesh blueprint

Agati: Bangladesh is one of the countries in the pipeline, but it is not yet one of the most active. We are still working on the first implementation of this new stack. That will be the one that will set the parameters and become the template for all the others to come.

The logic of the transformation program is to identify a good market where we can challenge ourselves and deliver the first complete solution, and then reuse that solution for all of the others. This is what is happening now; we’re in the advanced stages of this pilot project.

Gardner: Yes, thank you. I was more referring to Bangladesh as an example of how unique and different each market can be. In this case, people often don't have personal identification; therefore, one needs to use a fingerprint biometric approach in the street to sell a SIM to get them up and running, for example. Any insight on that, Chris?

Learn More About the

HPE and Ericsson Alliance

James-Killer: It speaks to the importance of the work that Ericsson is doing in these countries. We have seen in Africa and in parts of the Middle East how important telecommunications is to an individual. It's a real quality of life issue. We take it for granted in Sweden; we certainly take advantage of it in my home country of Australia. But in some of these countries you are actually making a genuine difference.

These people need to be connected and haven’t been connected before. And you can see what has happened politically when the people have been exposed to this kind of technology. So it's admirable, I believe, what Ericsson is doing, particularly commercially, and the way that they are doing it. 

It also speaks to Ericsson's success and the continued excitement around LTE and 4G in these markets; not actually 5G yet. When you visit Ericsson's website or go to Ericsson’s shows, there's a lot of talk about autonomous vehicles and working with Volvo and working with Scania, and the potential of 5G for smart cities initiatives. But some of the best work that Ericsson does is in building out the 4G networks in some of these frontier countries.

Agati: If I can add one thing. You mentioned how specific requirements are coming from such countries as Bangladesh, where we have the specific issue related to identity management. This is one of the big challenges we are now facing, of gaining the proper balance between coping with different local needs, such as different regulations, different habits, different cultures -- but at the same time also industrializing the means, making them repeatable and making that as simple as possible and as consistent as possible across all of these countries. 

There is a continuous battle between the attempts to simplify and the reality check on what does not always allow simplification and industrialization. That is the daily battle that we are waging: What do you need, and what don’t you need? We ask, “What is the business value behind a specific capability? What is the reasoning behind why you really need this instead of that?”

We at Ericsson want to be the champion of simplicity and this project is the cornerstone of going in that direction.

At the end of the game, this is the bet that we are making together with our customers -- that there is a path to where you can actually find the right way to simplification. Ericsson has recently been launching our new brand and it is about this quest for making it easier. That's exactly our challenge. We want to be the champion of simplicity and this project is the cornerstone of going in that direction.

Gardner: And only a global integrator with many years of experience in many markets can attain that proper combination of simplicity and customization.

Agati: Yes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Retailers get a makeover thanks to data-driven insights, edge computing, and revamped user experiences

The Connected Consumer for Retail offering takes the cross-channel experience and enhances it for the brick-and-mortar environment. 

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.


It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We sometimes hear that, even at the board of directors level, people are being told to go cloud-first, or to just dump IT altogether. That strikes me as an overreaction. If we’re looking at tools and what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

All of the research we do shows that the world is hybrid for as far ahead as we can see.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Learn More About

Hybrid IT Management

Solutions From HPE

Peters: I often talk about what I call the assumption gap. And the assumption gap is just that moment where we move from one side where it’s okay to have lots of questions about something, in this case, in IT. And then on the other side of this gap or chasm, to use a well-worn phrase, is where it’s not okay to ask anything because you’ll see you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly and very fast at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo IM, to be precise -- they used to do the maintenance between 9 am and 5 pm because it was just a tool to chat to your friends with at night. And now you have businesses that rely on that. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, x application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.

You have to figure out what you're trying to get done before you figure out what you're going to do with it.

But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 
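
One way to make “figure out what you’re trying to get done first” concrete is to score each workload against business criteria -- criticality, data sensitivity, demand pattern -- before choosing a venue. The heuristic below is a toy sketch with invented thresholds, not a prescription; as Peters notes, the right answer is situational and changes over time.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    mission_critical: bool     # can the business run without it?
    data_sensitivity: int      # 1 (public) .. 5 (regulated / high risk)
    latency_sensitive: bool
    steady_utilization: bool   # flat, predictable demand favors on-premises

def suggest_venue(w: Workload) -> str:
    """Toy placement heuristic; thresholds are illustrative, not prescriptive."""
    if w.mission_critical and w.data_sensitivity >= 4:
        return "on-premises / private cloud"
    if w.latency_sensitive and w.steady_utilization:
        return "on-premises / private cloud"
    if not w.steady_utilization:
        return "public cloud (elastic demand)"
    return "either -- decide on cost and team skills"

for w in [
    Workload("email", mission_critical=False, data_sensitivity=2,
             latency_sensitive=False, steady_utilization=True),
    Workload("factory scheduling", mission_critical=True, data_sensitivity=4,
             latency_sensitive=True, steady_utilization=True),
]:
    print(w.name, "->", suggest_venue(w))
```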

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?

Learn More About

Hybrid IT Management

Solutions From HPE

Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.

What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes.

Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects. Like they are the answer to everything. What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then you figure out how to get it. You should not be sitting down IT managers and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve beaten home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got -- because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of a different way of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how you gain control. Control -- not for control’s sake -- comes only from knowing all the things you’re trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.

You can't manage what you can't measure. You also can't improve things that can't be managed or measured.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.
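
As a small illustration of that first “find out what you have” step, the sketch below pulls resource inventories from several environments into one list and flags anything with no declared owner -- the classic shadow-IT signal. The fetch functions are stand-ins for whatever per-provider APIs or CMDB exports an organization actually uses.

```python
# Minimal sketch of the "gain visibility" step: merge inventories from each
# environment and flag resources with no declared owner or cost center.
def fetch_on_prem_inventory():
    return [{"id": "vm-042", "owner": "finance", "env": "on-prem"}]

def fetch_public_cloud_inventory():
    return [
        {"id": "i-9f21", "owner": None, "env": "cloud-a"},        # shadow-IT candidate
        {"id": "db-ax7", "owner": "marketing", "env": "cloud-b"},
    ]

inventory = fetch_on_prem_inventory() + fetch_public_cloud_inventory()

unowned = [r for r in inventory if not r["owner"]]
print(f"{len(inventory)} resources discovered, {len(unowned)} without an owner:")
for r in unowned:
    print(" ", r["id"], "in", r["env"])
```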

Learn More About

Hybrid IT Management

Solutions From HPE

Visibility is not the same as solving. The point is not just finding out what you have -- but actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go for a whole day class, when I was on the sales side of the business, just to learn something like dual pathing or about cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it better work in any temperature and at any speed. Now, we just demand so much from cars.

To stretch that analogy, I’m mixing cars and storage -- and we’ll make it all come together with hybrid IT in that it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economic and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99 percent of the people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now because you don’t talk about it in those terms. Storage has moved to be -- how do we serve applications? How do we serve up the right place in the right time, get the data to the right person at the right time at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.

It will be self-managing and self-optimizing. It will not be a “recommending tool,” it will be an “executing tool.”

We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.
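
The difference between a “recommending tool” and an “executing tool” is essentially a closed control loop: observe, decide, act, with no human in the middle. Here is a minimal sketch, using a simulated metric and an invented threshold:

```python
import random
import time

# Minimal observe-decide-act loop in the spirit of an "executing tool": the
# system applies the change itself instead of filing a recommendation.
# The metric source and the action are simulated stand-ins.
def read_latency_ms():
    return random.uniform(5, 40)               # simulated storage latency

def move_hot_data_to_faster_tier():
    print("  -> rebalancing: promoting hot volumes to the faster tier")

THRESHOLD_MS = 25                               # illustrative threshold

for cycle in range(5):
    latency = read_latency_ms()
    print(f"cycle {cycle}: observed latency {latency:.1f} ms")
    if latency > THRESHOLD_MS:
        move_hot_data_to_faster_tier()          # act immediately, no human in the loop
    time.sleep(0.1)
```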

Learn More About

Hybrid IT Management

Solutions From HPE

Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future. It doesn’t sound great and perfect, but you have to get there. We do quite a lot of research in ESG. I will throw just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have -- the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.

People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change.

People want to enhance what they currently have so that it can embrace the cloud. It's enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use that, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You're not just going to pick one cloud model and hope that it  will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools. 

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.

Learn More About

Hybrid IT Management

Solutions From HPE

Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are either focused on the cloud side of the business -- and there are some very big names -- or on the on-premises side of the business. Embracing both is something that is not as difficult for them to do, but really not top of their want-to-do list before they’re absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough imagining that particularly large enterprises are going to risk [standardizing on them]. They probably even will start to try and write it themselves, which is possible – unlikely, but possible.

Where I think we will get the rest of the list is from some of the other big organizations -- Oracle and IBM spring to mind in terms of being able to embrace both on-premises and off-premises. But, at the end of the day, the commonality among those that we’ve mentioned is that they are systems companies. At the end of the day, they win by delivering the best overall solution and package to their clients, not individual components within it.

If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management from request for proposals (RFPs) from 50 or 60 different companies to find some solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT but the on-premises and the off-premises and how you manage these things. Those are all parts of the solution, or the answer. The real thing, and it’s absolutely crucial, is that you don’t start with those bright shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional factors into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.

The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?

Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.

Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
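To make the physics concrete, here is a minimal back-of-envelope sketch in Python. The distance, payload size, bandwidth, and deadline are purely illustrative assumptions, not figures from the discussion:

def round_trip_seconds(distance_km, payload_mb, bandwidth_mbps):
    """Rough round trip: propagation in fiber (~200,000 km/s) plus payload transfer both ways."""
    propagation = 2 * distance_km / 200_000            # seconds there and back
    transfer = 2 * (payload_mb * 8) / bandwidth_mbps   # seconds to send the payload and get a result back
    return propagation + transfer

# Hypothetical scenario: a sensor event 9,000 km from the chosen cloud region,
# a 5 MB payload over a 100 Mbps link, and a control loop that must act within 0.5 seconds.
rtt = round_trip_seconds(distance_km=9_000, payload_mb=5, bandwidth_mbps=100)
print(f"estimated round trip: {rtt:.2f} s")
print("meets the 0.5 s automation deadline" if rtt <= 0.5 else "needs local or edge processing")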

Gardner: As you are planning your architecture, you need to take these local considerations into account, particularly when you are factoring costs. If you have to do heavy lifting to make your bandwidth sufficient, it might be better to have a local, closet-sized data center -- they are small and efficient these days -- and stick with a private cloud or on-premises approach. At the least, you should factor in the economic basis for comparison, along with all these other variables you brought up.

Edge centers

Burris: That's correct. In fact, we call them "edge centers." For example, if the application involves the Internet of Things (IoT), then there will likely be latency considerations, and the cost of a round-trip message over a few thousand miles can be pretty significant when we consider how fast computing can be done these days.

The first consideration is, what are the impacts of latency on an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, "I would like to enter a new market or expand my presence in a market in a cost-effective way. And to do that, I want the system to be more fully automated as it serves that particular market or that particular group of customers. And perhaps it's something that looks more process-manufacturing-oriented, or something along those lines, that has IoT capabilities."

The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor costs associated with the implementation.

The only way you are going to do that is to introduce a fair amount of automation -- and only if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round trip of moving the data from a remote global location back to somewhere in North America -- independent of whether it's legal or not -- takes longer than the automation moment allows, then you just flat out can't do it. Now, that is the most obvious and stringent consideration.

On top of that, these moments of automation generate and capture significant amounts of data. We have done model studies where, for example, moving the data out of a small wind farm for remote analysis can be roughly 10 times as expensive as handling it locally. It can cost hundreds of thousands of dollars a year to do relatively simple and straightforward types of data analysis on the performance of that wind farm.

Process locally, act globally

It's a lot better to have a local presence that can handle local processing requirements, running models against locally derived or locally generated data, and to let that work be automated with only periodic visibility into how the overall system is working. And that's where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very close to -- or within -- some of those devices. We will try to keep as much of that data local as we can.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.
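As a rough illustration of that pattern -- a sketch with hypothetical sensor readings, thresholds, and reporting intervals rather than anything from the interview -- a local Sense, Infer, and Act loop might handle the automation moment at the edge and push only periodic summaries upstream:

import random
import time

def sense():
    """Stand-in for a local sensor read (for example, vibration or temperature)."""
    return random.gauss(70.0, 5.0)

def infer(reading, threshold=80.0):
    """Stand-in for a locally run model: decide whether action is needed."""
    return reading > threshold

def act():
    print("actuator engaged locally -- no round trip to a remote cloud")

def send_summary(readings):
    """Periodic, low-bandwidth visibility for the central or cloud side."""
    print(f"summary to cloud: n={len(readings)}, mean={sum(readings) / len(readings):.1f}")

readings = []
for tick in range(20):             # stand-in for a continuous loop
    reading = sense()
    readings.append(reading)
    if infer(reading):
        act()                      # the automation moment is handled at the edge
    if (tick + 1) % 10 == 0:       # only occasionally report upstream
        send_summary(readings)
        readings.clear()
    time.sleep(0.01)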

Gardner: It's a fascinating subject -- what to push down to the edge, given that storage and processing costs are down and footprints are down, and what to then use the public cloud or Infrastructure-as-a-Service (IaaS) environments for.

But before we go any further, Peter, tell us about yourself and your organization, Wikibon.

Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It's not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.

The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are all about. Then we look at how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy, which is what we call "systems of agency." And finally, we look at how cloud and historical infrastructure are going to come together and be optimized to support all those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as it becomes an increasing feature of any company’s value proposition. That's what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics, and perception. In recent years, with offshoring and intellectual property concerns in other countries, the fact remains that the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of second-tier providers, but the top tier is dominated by US companies.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that's US-based?

Burris: There are two perspectives on that, but let me add one check on it first. Alibaba clearly is in the top tier, and they are not based in the US -- and that may be one of the advantages they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one emerges in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let's just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.

From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity. 

Money, you can put it here or you can put it there. Time and people, you can put here or you can put there. A machine can be dedicated to this kind of work or to that kind of work.

Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means, for example, that my customer data in Germany may not appreciate, or may not be able to generate, the same types of returns as my customer data in the US.

Now, that sets aside any moral question of whether Germany or the US has better privacy laws and protects consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right to be concerned. On the one hand, you probably aren't going to be able to generate the total value of your data because of restrictions on its use; and on the other, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there's some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren't necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.

All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients' and customers' data under the regime that is in place wherever the business is being operated. [The rationale] for basing your business on these types of services is really immature. We have made enormous progress, but there's a long way yet to go here, and that's something that businesses must factor in as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise -- their legal, marketing, security, risk avoidance and governance units -- right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern -- or that type of concern -- is minuscule relative to the broader questions of cloud computing, of someone grabbing your data and holding it hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.

Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud -- think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I would go back and do research on outsourcing, I discovered that a lot of the outsourcing was not driven by business needs, but driven by executive compensation schemes, literally. So, where executives were told that they would be paid on the basis of return in net assets, there was a high likelihood that the business was going to go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.

The same type of thinking pertains here -- the goal is not simply to get rid of IT assets, even though those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, "How can I get the cloud experience wherever the data demands it?" The cloud experience, which is a very, very powerful concept, ultimately means access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack that provides a consistent cloud experience wherever the data has to run -- whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It will involve a degree of distribution that completely breaks most models. We're thinking about infrastructure and architecture, but also data management, system management, and security management -- and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we're going to use increasingly from a cost standpoint -- and it is time. As the costs of computing and bandwidth continue to drop -- and they will continue to drop -- the fundamental cost determinant will ultimately be: How long does it take an application to complete? How long does it take this transaction to complete? And that's not so much a throughput question as it is a question of, "I have all these multiple sources that each, on their own, contribute some degree of time to how this piece of work finishes. Can I do that piece of work in less time if I bring some of the work, for example, in-house and run it close to the event?"
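As a small illustration of treating time as the cost metric -- with hypothetical stage names and timings, not measurements from the discussion -- you can model a transaction as the sum of each service's time contribution and compare the effect of moving one remote stage in-house:

def completion_time(stages):
    """Total time for the piece of work is the sum of each service's contribution."""
    return sum(stages.values())

# Illustrative per-stage timings, in seconds, for one transaction.
as_is = {
    "edge capture": 0.02,
    "remote enrichment": 0.35,       # a SaaS call made across regions
    "cloud analytics": 0.20,
    "on-premises system of record": 0.05,
}

# The same work, but with the enrichment step brought in-house, close to the event.
in_house = dict(as_is, **{"remote enrichment": 0.04})

print(f"as-is:    {completion_time(as_is):.2f} s")
print(f"in-house: {completion_time(in_house):.2f} s")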

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.

The fifth issue, one that really places an enormous strain on IT, is how we think about backing up and restoring data. Backup/restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services -- and as these applications increasingly are the basis for the business and the end-value that we’re creating -- we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up applications and, even more importantly, entire businesses. The issue becomes more about restoring: How do we restore applications and businesses across this incredibly complex arrangement of services, data locations, and sources?

I listed five areas that are going to be very important. We haven't even talked about the new data regime that's emerging to support application development and how that's going to work, or the role that data scientists and analytics are going to play in working with application developers -- again, we could go on and on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in and bring new types of management, with maturity and visibility. Who are some of the players that might fill that role? One that I am familiar with, and that I think I have seen on theCUBE, is Hewlett Packard Enterprise (HPE) with what they call Project New Hybrid IT Stack. We still don't know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What's the opportunity for a business to come into this problem set and start to solve it? What do you think, from what you've heard so far, about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was an unsatisfyingly bad answer for the users. So along came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

The second thing is that to build the next generations of more complex applications -- and especially applications that involve capabilities like deep learning or machine learning with increased automation -- we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications -- inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, the companies are going to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are a few out there, and certainly HPE with its New Stack initiative is one of them, and we at Wikibon are especially excited about it. It's new, it's immature, but HPE has a lot of the piece parts that will be required to make a go of this technology. It's going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It's crucial to solve the next generation of problems that the world faces; we can't move forward without some of these new classes of hybrid technologies that weave together fabrics capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT

The next BriefingsDirect Internet of Things (IoT) strategies insights interview focuses on how a Miami telecommunications products provider has developed new breeds of services to help manage complex edge and data scenarios.

We will now learn how IoT platforms and services help to improve network services, operations, and business goals -- for carriers and end users alike.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore what is needed to build an efficient IoT support business is Andres Sanchez, CEO of Identidad IoT in Miami. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How has your business changed in the telecoms support industry and why is IoT such a big opportunity for you?

Sanchez: With the new over-the-top (OTT) content technology, and the way that it came into the picture and took over part of the whole communications value chain, the telecoms business is getting very tough. When we began evaluating what IoT can do and seeing the possibilities, we saw that this is a new wave. We understand that it's not about connectivity, it's not about the 10 percent of the value chain -- it's more about the solutions.

We saw a very good opportunity to start something new and to take the experience we have with the technology that we have in telecoms, and get new people, get new developers, and start building solutions, and that's what we are doing right now.

Gardner: So as the voice telecoms business trails off, there is a new opportunity at the edge for data and networks to extend to a variety of use cases. What are some of the use cases you are seeing now in IoT that present a growth opportunity for your business?

Sanchez: IoT is everywhere. The beauty of IoT is that you can find solutions everywhere you look. What we have found is that when people think about IoT, they think about the connected home, the connected car, or smart parking, where a green or red light shows whether a space is occupied. But IoT is more than that.

There are two ways to generate revenue in IoT. One is by offering new products. The second is understanding what we can do better at the operational level. That is why we are putting in sensors, measuring things, and analyzing things. You can reduce your operational costs, or be more effective in the way you are doing business. It's not only getting the information; it's using that information to automate processes in ways that will make your company better.

Gardner: As organizations recognize that there are new technologies coming in that are enabling this smart edge, smart network, what is it that’s preventing them from being able to take advantage of this?

Sanchez: Companies think that they just have to connect the sensors, that they only have to digitize their information. They haven’t realized that they really have to go through a digital transformation. It's not about connecting the sensors that are already there; it's building a solution using that information. They have to reorganize and to reinvent their organizations.

For example, it's not about taking a sensor, putting the sensor on the machine, and just starting to collect information and watch it on a screen. It's taking the information and being able to spot specific patterns -- to predict when a machine is going to break, or to see that a machine at certain temperatures starts to work better or worse. It's being able to be more productive without having to do more work. It's letting the machines do the work by themselves.
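As a minimal sketch of that kind of pattern-spotting -- with hypothetical thresholds and readings, not Identidad IoT's actual analytics -- a stream of machine temperatures can be checked against its own recent history to flag unusual behavior before a failure:

from collections import deque
from statistics import mean, pstdev

WINDOW = 50                  # readings used to define "normal" for this machine
history = deque(maxlen=WINDOW)

def check_reading(temp_c):
    """Warn when a reading sits well outside the machine's recent operating band."""
    warning = None
    if len(history) == WINDOW:
        mu, sigma = mean(history), pstdev(history)
        if sigma > 0 and abs(temp_c - mu) > 3 * sigma:
            warning = f"temperature {temp_c:.1f} C is unusual (normal about {mu:.1f} C) -- inspect machine"
    history.append(temp_c)
    return warning

# Hypothetical stream of readings arriving from a sensor gateway.
for temp in [61.8, 62.0, 61.9, 62.2] * 15 + [71.5]:
    message = check_reading(temp)
    if message:
        print(message)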

Gardner: A big part of that is bringing more of an IT mentality to the edge, creating a standard network and standard platforms that can take advantage of the underlying technologies that are now off-the-shelf.

Sanchez: Definitely. The approach that Identidad IoT takes is we are not building solutions based on what we think is good for the customer. What we are doing is building proof of concepts (PoCs) and tailored solutions for companies that need digital transformation.

I don’t think there are two companies doing the same thing that have the same problems. One manufacturer may have one problem, and another manufacturer using the same technology has another completely different problem. So the approach we are taking is that we generate a PoC, check exactly what the problems are, and then develop that application and solution.

But it's important to understand that IoT is not an IT thing. When we go to a customer, we don’t just go to an IT person, we go to the CEO, because this is a change of mentality. This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It's really digital transformation.

Gardner: Where is this being successful? Where are you finding that people really understand it and are willing to take the leap, change their culture, rethink things to gain advantages?

One solution at a time

Sanchez: Unfortunately, people are afraid of what is coming, because people don't understand what IoT is, and everybody thinks it's really complicated. It does need expertise. It does need to have security -- that is a very big topic right now. But it's not impossible.

When we approach a company and that CEO, CIO or CTO understands that the benefits of IoT will be shown once you have that solution built -- and that probably the initial solution is not going to be the final solution, but it's going to be based on iterations -- that’s when it starts working.

If people think it’s just an out-of-the-box solution, it's not going to work. That's the challenge we are having right now. The opportunity is when the head of the company understands that they need to go through a digital transformation.

Gardner: When you work with a partner like Hewlett Packard Enterprise (HPE), they have made big investments and developments in edge computing, such as the Universal IoT Platform and Edgeline systems. How does that help you, as a solutions provider, make that difficult transition easier for your customers, and encourage them to understand that it's not impossible -- that there are a lot of solutions already designed for their needs?

Sanchez: Our relationship with HPE has been a huge success for Identidad IoT. When we started looking at platforms, when we started this company, we couldn't find the right platform to fulfill our needs. We were looking for a platform that we could build solutions on and then extrapolate that data with other data, and build other solutions over those solutions.

When we approached HPE, we saw that they do have a unique platform that allows us to generate whatever applications, for whatever verticals, for whatever organizations – whether a city or company. Even if you wanted to create a product just for end-users, they have the ability to do it.

Also, it's a platform that is so robust that you know it’s going to work, it’s reliable, and it’s very secure. You can build security from the device right on up to the platform and the applications. Other platforms, they don't have that.

Our business model correlates a lot with the HPE business model. We think that IoT is about relationships and partnerships -- it’s about an ecosystem. The approach that HPE has to IoT and to ecosystem is exactly the same approach that we have. They are building this big ecosystem of partners. They are helping each other to build relationships and in that way, they build a better and more robust platform.

Gardner: For companies and network providers looking to take advantage of IoT, what would you suggest that they do in preparation? Is there a typical on-ramp to an IoT project? 

A leap of faith

Sanchez: There's no time to be prepared right now. I think they have to take a leap of faith and start building the IoT applications. The pace of the technology transformation is incredible.

When you look at the technology right now, today -- probably in four months it's going to be obsolete. You are going to have even better technology, a better sensor. So if you wait, most likely the competition is not going to wait, and they will have a very big advantage.

Our approach at Identidad IoT is about platform-as-a-service (PaaS). We are helping companies take that leap without creating big financial struggles. And the companies know that by using the HPE platform through us, they are using a state-of-the-art platform. They are not using a mom-and-pop platform built in a garage. It's a robust PaaS -- so why not take that leap of faith and start building? Now is the time.

Gardner: Once you pick up that success, perhaps via a PoC, that gives you ammunition to show economic and productivity benefits that then would lead to even more investment. It seems like there is a virtuous adoption cycle potential here.

Sanchez: Definitely! Once we start a new solution, the people using it usually begin seeing things that they are not used to seeing. They can pinpoint problems that they have been having for years -- but didn't understand why.

For example, there's one manufacturer of T-shirts in Colombia. They were having issues with one specific machine. That machine used to break after two or three weeks. There was just this small piece that was broken. When we installed the sensor and we started gathering their information, after two or three breaks, we understood that it was not the amount of work -- it was the temperature at which the machine was working.

So now, once the temperature reaches a certain point, fans are started automatically to normalize the temperature -- and they haven't had any broken pieces for months. It was a simple solution, but it took a lot of study and gathering of information to be able to understand that break point -- and that's the beauty of IoT.
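A minimal sketch of that kind of closed-loop fix might look like the following; the trip points, sensor values, and fan control are hypothetical stand-ins, not the actual implementation described here:

FAN_ON_C = 75.0     # illustrative trip point learned from the gathered data
FAN_OFF_C = 68.0    # lower release point so the fans do not chatter on and off

def control_fan(temp_c, fan_running):
    """Return the new fan state for the latest temperature reading."""
    if not fan_running and temp_c >= FAN_ON_C:
        return True                 # the machine has hit its break point: start the fans
    if fan_running and temp_c <= FAN_OFF_C:
        return False                # the machine has cooled down: stop the fans
    return fan_running

# Hypothetical stream of readings from the machine's temperature sensor.
fan = False
for temp in [66.0, 71.2, 76.4, 74.0, 69.5, 67.2]:
    fan = control_fan(temp, fan)
    print(f"{temp:5.1f} C -> fan {'on' if fan else 'off'}")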

Gardner: It's data-driven, it's empirical, it’s understood, but you can't know what you don't know until you start measuring things, right?

Listen to things

Sanchez: Exactly! I always say that the "things" are trying to tell us something, and we are not listening. IoT enables people, companies, and organizations to start listening to the things -- and not only to listen, but to make the things work for us. We need the applications to be able to trigger something that fixes the problem without any human intervention -- and that's also the beauty of IoT.

Gardner: And that IoT philosophy even extends to healthcare, manufacturing, transportation, any place where you have complexity, it is pertinent.

Sanchez: Yes, the solution for IoT is everywhere. You can think about healthcare or tracking people or tracking guns or building solutions for cities in which the city can understand what is triggering certain pollution levels that they can fix. Or it can be in manufacturing, or even a small thing like finding your cellphone.

It’s everything that you can measure. Everything that you can put a sensor on, you can measure -- that's IoT. The idea is that IoT will help people live better lives without having to take care of the “thing;” things will have to take care of themselves.

Gardner: You seem quite confident that this is a growth industry. You are betting a significant amount of your future growth on it. How do you see it increasing over the next couple of years? Is this a modest change or do you really see some potential for a much larger market?

Sanchez: That's a really good question. I do see that IoT is the next wave of technology. There are several studies that say that by 2020 there are going to be 50 billion devices connected. I am not that futuristic, but I do see that IoT will start working now and probably within the next two or three years we are going to start seeing an incremental growth of the solutions. Once people understand the capability of IoT, there's going to be an explosion of solutions. And I think the moment to start doing it is now. I think that next year it’s going to be too late.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·     Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment 

·     How IoT and OT collaborate to usher in the data-driven factory of the future 

·     DreamWorks Animation crafts its next era of dynamic IT infrastructure

·     How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·     Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·     Converged IoT systems: Bringing the data center to the edge of everything

·     IDOL-powered appliance delivers better decisions via comprehensive business information searches

·     OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·     Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·     How lastminute.com uses machine learning to improve travel bookings user experience

How IoT and OT collaborate to usher in the data-driven factory of the future

The next BriefingsDirect Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains.

We’ll now learn how a leading-edge manufacturer, Hirotec, in the global automotive industry, takes advantage of IoT and Operational Technology (OT) combined to deliver dependable, managed, and continuous operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us to find the best factory of the future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's happening in the market with business and technology trends that’s driving this need for more modern factories and more responsive supply chains?

Hester: Our customers are demanding shorter lead times. There is a drive for even higher quality, especially in automotive manufacturing. We’re also seeing a much higher level of customization requests coming from our customers. So how can we create products that better match the unique needs of each customer?

As we look at how we can continue to compete in an ever-competitive environment, we are starting to see how the solutions from IoT can help us.

Gardner: What is it about IoT and Industrial IoT (IIoT) that allows you to do things that you could not have done before?

Hester: Within the manufacturing space, a lot of data has been there for years, for decades. Manufacturing has been very good at collecting data. The challenge we've had, though, is bringing in that data in real time, because the amount of data is so large. How can we act on that data quicker -- not on a day-by-day or week-by-week basis, but on a minute-by-minute or second-by-second basis? And how do we take that data and contextualize it?

It's one thing in a manufacturing environment to say, “Okay, this machine is having a challenge.” But it’s another thing if I can say, “This machine is having a challenge, and in the context of the factory, here's how it's affecting downstream processes, and here's what we can do to mitigate those downstream challenges that we’re going to have.” That’s where IoT starts bringing us a lot of value.
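As a rough sketch of that contextualization -- with a hypothetical line layout and buffer times, not Hirotec's actual systems -- a machine alert can be enriched with its downstream impact:

# Which stations depend on which, and how much work-in-process buffer sits between them.
downstream = {
    "stamping": ["hemming"],
    "hemming": ["assembly"],
    "assembly": ["inspection"],
    "inspection": [],
}
buffer_minutes = {"hemming": 20, "assembly": 35, "inspection": 10}

def impact_of(station):
    """Walk the line downstream and estimate when each later station runs out of buffer."""
    affected, elapsed, current = [], 0, station
    while downstream.get(current):
        nxt = downstream[current][0]
        elapsed += buffer_minutes.get(nxt, 0)
        affected.append((nxt, elapsed))
        current = nxt
    return affected

print("ALERT: stamping cell fault")
for nxt_station, minutes in impact_of("stamping"):
    print(f"  -> {nxt_station} starves in about {minutes} min unless mitigated")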

The analytics, the real-time contextualization of that data that we’ve already had in the manufacturing area, is very helpful.

Gardner: So moving from what may have been a gather, batch, analyze, report process -- we’re now taking more discrete analysis opportunities and injecting that into a wider context of efficiency and productivity. So this is a fairly big change. This is not incremental; this is a step-change advancement, right?

A huge step-change 

Hester: It’s a huge change for the market. It's a huge change for us at Hirotec. One of the things we like to talk about is what we jokingly call the Tuesday Morning Meeting. We talk about this idea that in the morning at a manufacturing facility, everyone gets together and talks about what happened yesterday, and what we can do today to make up for what happened yesterday.

Instead, now we’re making that huge step-change to say,  “Why don't we get the data to the right people with the right context and let them make a decision so they can affect what's going on, instead of waiting until tomorrow to react to what's going on?” It’s a huge step-change. We’re really looking at it as how can we take small steps right away to get to that larger goal.

In manufacturing areas, there's been a lot of delay, confusion, and hesitancy to move forward because everyone sees the value, but it's this huge change, this huge project. At Hirotec, we’re taking more of a scaled approach, and saying let's start small, let’s scale up, let’s learn along the way, let's bring value back to the organization -- and that's helped us move very quickly.

Gardner: We’d like to hear more about that success story but in the meantime, tell us about Hirotec for those who don't know of it. What role do you play in the automotive industry, and how are you succeeding in your markets?

Hester: Hirotec is a large, tier-1 automotive supplier. What that means is we supply parts and systems directly to the automotive original equipment manufacturers (OEMs), like Mazda, General Motors, FCA, Ford, and we specialize in door manufacturing, as well as exhaust system manufacturing. So every year we make about 8 million doors, 1.8 million exhaust systems, and we provide those systems mainly to Mazda and General Motors, but also we provide that expertise through tooling.

For example, if an automotive OEM would like Hirotec’s expertise in producing these parts, but they would like to produce them in-house, Hirotec has a tooling arm where we can provide that tooling for automotive manufacturing. It's an interesting strategy that allows us to take advantage of data both in our facilities, but then also work with our customers on the tooling side to provide those lessons learned and bring them value there as well.

Gardner: How big of a distribution are we talking about? How many factories, how many countries; what’s the scale here?

Hester: We are based in Hiroshima, Japan, but we’re actually in nine countries around the world, currently with 27 facilities. We have reached into all the major continents with automotive manufacturing: we’re in North America, we’re in Europe, we’re all throughout Asia, in China and India. We have a large global presence. Anywhere you find automotive manufacturing, we’re there supporting it.

Gardner: With that massive scale, very small improvements can turn into very big benefits. Tell us why the opportunity in a manufacturing environment to eke out efficiency and productivity has such big payoffs.

Hester: Especially in manufacturing, what we find when we get to those large scales you're alluding to is that a 1 percent or 2 percent improvement has huge financial benefits. The other thing is that in manufacturing, especially automotive manufacturing, we tend to standardize our processes, and within Hirotec we've done a great job of standardizing our world-class approach to door manufacturing.

And so what we find is when we get improvements not only in IoT but anywhere in manufacturing, if we can get 1 percent or 2 percent, not only is that a huge financial benefit but because we standardized globally, we can move that to our other facilities very quickly, doubling down on that benefit.

Gardner: Clearly Hirotec sees this as something to really invest in -- they've created the IoT Lab. Tell me a little bit about that and how it fits into all of this.

The IoT Lab works

Hester: The IoT Lab is a very exciting new group; it's part of our Advanced Engineering Center (AEC). The AEC is a group out of our global headquarters, and it is tasked with the five- to 10-year horizon. So they're able to work across all of our global organizations -- with tooling, engineering, production, sales, and even our global operations groups. Our IoT group goes and finds solutions that can bring value anywhere in the organization by bringing in new technologies, new ideas, and new solutions.

And so we formed the IoT Lab to find how can we bring IoT-based solutions into the manufacturing space, into the tooling space, and how actually can those solutions not only help our manufacturing and tooling teams but also help our IT teams, our finance teams, and our sales teams.

Gardner: Let's dig back down a little bit into why IT, IoT, and Operational Technology (OT) are in this step-change opportunity, looking for significant benefits while being careful about how to implement them. What is required when you move to a more IT-focused, standard-platform approach -- across all the different systems -- that allows you to eke out these great benefits?

Tell us about how IoT as a concept is working its way into the very edge of the factory floor.

Hester: One of the things we’re seeing is that IT is beginning to meld, like you alluded to, with OT -- and there really isn't a distinction between OT and IT anymore. What we're finding is that we’re starting to get to these solution levels by working with partners such as PTC and Hewlett Packard Enterprise (HPE) to bring our IT group and our OT group all together within Hirotec and bring value to the organization.

What we find is that it's no longer a case of OT having a need that becomes a request for IT to support, or of IT having a need and going to OT for support. What we are finding is that we have organizational needs, and we're coming to the table together to make these changes. And that in itself is bringing even more value to the organization.

Instead of coming last-minute to the IT group and saying, "Hey, we need your support for all these different solutions, we've already got everything set, and you are just here to put it in," what we are seeing is that they bring their expertise in and help us out upfront, and we're finding better solutions because we are getting experts from both OT and IT together.

We are seeing this convergence of these two teams working on solutions to bring value. And they're really moving everything to the edge. So where everyone talks about cloud-based computing -- or maybe it’s in their data center -- where we are finding value is in bringing all of these solutions right out to the production line.

We are doing data collection right there, but we are also starting to do data analytics right at the production line level, where it can bring the best value in the fastest way.

Gardner: So it’s an auspicious time because just as you are seeking to do this, the providers of technology are creating micro data centers, and they are creating Edgeline converged systems, and they are looking at energy conservation so that they can do this in an affordable way -- and with storage models that can support this at a competitive price.

What is it about the way that IT is evolving and providing platforms and systems that has gotten you and The IoT Lab so excited?

Excitement at the edge  

Hester: With IoT and IT platforms, originally to do the analytics, we had to go up to the cloud -- that was the only place where the compute power existed. Solution providers now are bringing that level of intelligence down to the edge. We’re hearing some exciting things from HPE on memory-driven computing, and that's huge for us because as we start doing these very complex analytics at the edge, we need that power, that horsepower, to run different applications at the same time at the production line. And something like memory-driven solutions helps us accomplish that.

It's one thing to have higher-performance computing, but another thing to gain edge computing that's proper for the factory environment. A manufacturing environment is not conducive to standard servers in a standard rack, which need dust protection and heat protection -- protection that doesn't exist in a manufacturing environment.

The other thing we're beginning to see with edge computing, which HPE provides with Edgeline products, is computers that have the horsepower to perform the analytics and data collection -- but that are also proper for the environment.

I don't need to build out a special protection unit with special temperature control, humidity control – all of which drives up energy costs, which drives up total costs. Instead, we’re able to run edge computing in the environment as it should be on its own, protected from what comes in a manufacturing environment -- and that's huge for us.

Gardner: They are engineering these systems now with such ruggedized micro facilities in mind. It's quite impressive that the very best of what a data center can do, can now be brought to the very worst types of environments. I'm sure we'll see more of that, and I am sure we'll see it get even smaller and more powerful.

Do you have any examples of where you have already been able to take IoT in the confluence of OT and IT to a point where you can demonstrate entirely new types of benefits? I know this is still early in the game, but it helps to demonstrate what you can do in terms of efficiency, productivity, and analytics. What are you getting when you do this well?

IoT insights save time and money

Hester: Taking the stepped strategy that we have, we started very small at Hirotec, with only eight machines in North America, and we were just looking to see whether the machines were on and running. Even from there, we saw value, because all of a sudden we were getting real-time, contextualized insight into the whole facility. We then quickly moved over to one of our production facilities in Japan, where we have a brand-new robotic inspection system. This system uses vision sensors, laser sensors, and force sensors, and it inspects exhaust systems before they leave the facility.

We very quickly implemented an IoT solution in that area, and all we did was we said, “Hey, we just want to get insight into the data, so we want to be able to see all these data points. Over 400 data points are created every inspection. We want to be able to see this data, compared in historical ways -- so let’s bring context to that data, and we want to provide it in real-time.”
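
To make that concrete, here is a minimal, purely illustrative sketch -- not Hirotec's actual system -- of how one of those per-inspection data points might be captured with context (which line, station, and sensor it came from) and compared against its own history in real time. The sensor names and the 3-sigma check are assumptions for the example:

    # Illustrative sketch: contextualize a per-inspection sensor reading
    # against its own recent history. Names and thresholds are hypothetical.
    from collections import defaultdict, deque
    from datetime import datetime, timezone
    from statistics import mean, stdev

    HISTORY = defaultdict(lambda: deque(maxlen=500))  # recent values per sensor

    def record_reading(line: str, station: str, sensor: str, value: float) -> dict:
        """Store one of the ~400 data points from an inspection cycle and
        return it with historical context attached."""
        key = (line, station, sensor)
        history = HISTORY[key]
        baseline = mean(history) if history else value
        spread = stdev(history) if len(history) > 1 else 0.0
        flagged = spread > 0 and abs(value - baseline) > 3 * spread  # simple 3-sigma check
        history.append(value)
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "line": line, "station": station, "sensor": sensor,
            "value": value, "baseline": baseline, "flagged": flagged,
        }

    # Example: a laser gap measurement from the robotic exhaust inspection cell
    print(record_reading("exhaust-1", "robot-inspect", "laser_gap_mm", 1.42))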

Discover How the IoT Advantage Works in Multiple Industries

What we found from just those two projects very quickly is that we're bringing value to the organization because now our teams can go in and say, “Okay, the system is doing its job, it's inspecting things before they leave our facility to make sure our customers always get a high-quality product.” But now, we’re able to dive in and find different trends that we weren't able to see before because all we were doing is saying, “Okay, this system leaves the facility or this system doesn't.”

And so already just from that application, we’ve been able to find ways that our engineers can even increase the throughput and the reliability of the system because now they have these historical trends. They were able to do a root-cause analysis on some improvements that would have taken months of investigation; it was completed in less than a week for us.

And so that's a huge value -- not only in that my project costs go down but now I am able to impact the organization quicker, and that's the big thing that Hirotec is seeing. It’s one thing to talk about the financial cost of a project, or I can say, “Okay, here is the financial impact,” but what we are seeing is that we’re moving quicker.

And so, we're having long-term financial benefits because we’re able to react to things much faster. In this case, we’re able to reduce months of investigation down to a week. That means that when I implement my solution quicker, I'm now bringing that impact to the organization even faster, which has long-term benefits. We are already seeing those benefits today.

Gardner: You'll obviously be able to improve quality and reduce the time it takes to improve that quality, and gain predictive analytics in your operations. But it also sounds like you are going to gain metadata insights that you can take back into design for the next iteration -- not only the design of the parts but the design of the tooling, and even the operations around that. So that intelligence at the edge can be a full lifecycle process; it goes right back to the very initiation of both the design and the tooling.

Data-driven design, decisions 

Hester: Absolutely, and so these solutions can't live in a silo. We're really starting to look at these ideas of what some people call the Digital Thread and the Digital Twin. We're starting to understand what that means as we loop this data back to our engineering teams -- what kind of benefits we can see, how we can improve our processes, and how we can drive this out into the organization.

And one of the biggest things with IoT-based solutions is that they can't stay inside one box. We've talked about OT and IT, and about manufacturing and engineering -- at their best, all these IoT solutions really do is bring those groups together, and bring a whole organization together, with more contextualized data to make better decisions faster.

And so, exactly to your point, as we are looping back, we’re able to start understanding the benefit we’re going to be seeing from bringing these teams together.

Gardner: One last point before we close out. It seems to me as well that at a macro level, this type of data insight and efficiency can be brought into the entire supply chain. As you're providing certain elements of an automobile, other suppliers are providing what they specialize in, too, and having that quality control and integration and reduced time-to-value or mean-time-to-resolution of the production issues, and so forth, can be applied at a macro level.

So how does the automotive supplier itself look at this when it can take into consideration all of its suppliers like Hirotec are doing?

Start small 

Hester: It's a very early phase, so a lot of the suppliers are starting to understand what this means for them. There is definitely a macro benefit that the industry is going to see in five to 10 years. Suppliers now need to start small. One of my favorite pictures is a picture of the ocean and a guy holding a lighter. It [boiling the ocean] is not going to happen. So we see these huge macro benefits of where we’re going, but we have to start out somewhere.

What we're recommending to a lot of suppliers is to do the same thing we did: start small with a couple of machines, start getting that data visualized, and start pulling that data into the organization. Once you do that, you start benefiting from the data, and then you start finding new use-cases.

As these suppliers all start doing their own small projects and working together, I think that's when we'll start to see the macro benefits -- but that's about five to 10 years out in the industry.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·       OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       Veikkaus digitally transforms as it emerges as new combined Finnish national gaming company

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

DreamWorks Animation crafts its next era of dynamic IT infrastructure

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability.

Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data is growing, and the capability of the hardware and the intelligence of the infrastructure are advancing with it. In order for any business to stay ahead of the curve -- to really drive value into the business -- it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it's because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor and is basically building these digital assets that we put through simulations and rendering, so that in the end it all comes together to produce a movie.

That's all about manufacturing, and we actually have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, we’re adding a hair to a character, we’re adding clothes, we’re adding an environment, and we’re putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.

Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want to have an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters, but you don't have to change the assembly line, you have to adjust and you have to utilize it properly.

So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.

Greater creativity, increased efficiency

Wike: That's true. If you think about this animation process or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we’re very involved in projects related to computer science at the studio.

We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”

How HPE Synergy Automates Infrastructure Operations

And one of the things we've done over the past couple of years is start to re-architect all of our software tools in order to take advantage of massive multi-core processing, to try to give artists interactivity in their creative process. It's about iterations: How many things can I show a director? How quickly can I create the scene and get it approved so that I can hand it off to the next person? There are two things that you get out of that.

One, you can explore more and add more creativity. Two, you can drive efficiency, because it's all about how much time is spent, how many people are working on a particular project, and how long it takes -- all of which drives up the costs. So you now have these choices: you can add more creativity, or -- because of the compute infrastructure -- you can drive efficiency into the operation.

So where does the infrastructure fit into that? We talk about tools and the ability to make those tools quicker, faster, and more real-time. We conducted a project where we created a middleware layer between the running applications and the hardware, so that we could start to do data abstraction. We can get more mobile as to where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren't really able to do anything down at the core.

Core flexibility, fast

Now that we have done that, we are attacking the core. When we look at our ability to replace that with new compute, and add the new templates with all the security in it -- we want that in our infrastructure. We want to be able to change how we are using that infrastructure -- examine usage patterns, the workflows -- and be able to optimize.

Before, if we wanted to do a new project, we’d say, “Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x,” and that makes a lot of sense. So we would build to peak. If at some point in the last six months of a show, we are going to need 30,000 cores to be able to finish it in six months, we say, “Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores.” So we were buying to peak, and that’s wasteful.

What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity -- the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn't have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.

By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure -- parts of it or all of it -- and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure -- and do it quickly.
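
A back-of-the-envelope sketch of why that matters: the 30,000- and 12,000-core figures are the ones Wike mentions, while the month-by-month demand profile below is invented purely for illustration.

    # Rough arithmetic: "buying to peak" vs. what a project actually uses.
    # Core counts come from the interview; the monthly demand ramp is hypothetical.
    PEAK_CORES = 30_000
    monthly_demand = [12_000, 12_000, 14_000, 18_000, 24_000, 30_000]  # last 6 months of a show

    owned = PEAK_CORES * len(monthly_demand)   # core-months provisioned at peak
    used = sum(monthly_demand)                 # core-months actually needed
    print(f"Utilization when built to peak: {used / owned:.0%}")
    print(f"Idle capacity a composable setup could lend to other projects: {owned - used:,} core-months")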

Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.

Does having that agility not only give you the ability to make more and better movies with higher utilization, but also gives perhaps more wings to your leaders to go and find the right business models for the future?

Wike: That’s absolutely true. We certainly don't want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.

In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn't do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.

The world is great for us today, because people want content -- they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.

Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.

Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn't work on the factory floor, then what's the point of the design? Are we doing that with digital manufacturing now?

Artifact analytics improve usage, rendering

Wike: It's interesting that you mention that. We always look at budgets, and budgets can be money budgets, rendering budgets, storage budgets, or networking budgets -- all of those things are commodities required to create a project.

Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it's too much.”

One of the initiatives that we have incorporated in recent years is in the area of data analytics. We re-architected our software and we decided we would re-instrument everything. So we started collecting artifacts about rendering and usage. Every night we ran every digital asset that had been created through our rendering, and we also collected analytics about it. We now collect 1.2 billion artifacts a night.

And we correlate that information to a specific asset, such as a character, basket, or chair -- whatever it is that I am rendering -- as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.
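
As a rough sketch of what that correlation looks like in practice -- the field names and numbers here are hypothetical, not DreamWorks' actual schema -- each nightly artifact can carry the asset, shot, and sequence it belongs to, so render cost can be rolled up per asset:

    # Hypothetical sketch: roll nightly render artifacts up by asset so
    # unexpectedly heavy assets stand out.
    from collections import defaultdict

    artifacts = [
        {"asset": "hero_watch", "shot": "sq12_sh040", "sequence": "sq12",
         "render_minutes": 310, "peak_mem_gb": 96},
        {"asset": "hero_watch", "shot": "sq12_sh041", "sequence": "sq12",
         "render_minutes": 295, "peak_mem_gb": 91},
        {"asset": "basket", "shot": "sq12_sh040", "sequence": "sq12",
         "render_minutes": 12, "peak_mem_gb": 4},
    ]

    cost_by_asset = defaultdict(lambda: {"minutes": 0, "peak_mem_gb": 0})
    for a in artifacts:
        agg = cost_by_asset[a["asset"]]
        agg["minutes"] += a["render_minutes"]
        agg["peak_mem_gb"] = max(agg["peak_mem_gb"], a["peak_mem_gb"])

    # Heaviest assets first, so over-modeled ones surface quickly.
    for asset, agg in sorted(cost_by_asset.items(), key=lambda kv: -kv[1]["minutes"]):
        print(asset, agg)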

One of the things that’s wasteful of digital resources is either having a job that doesn't fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours -- it could take 10 hours to run.

At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don't really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?

Just the facts

Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it's going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.

So now we don't have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines requirements. Even though a certain shot has never been run before, it is very similar to another previous shot, and so we can predict what it is going to need to run.

Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.
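
Here is a hedged sketch of the idea: predict a new shot's needs from its most similar past shots, then kill a job only once it runs well past that prediction. The feature set, the nearest-neighbor approach, and the 1.5x margin are assumptions for illustration, not DreamWorks' actual model.

    # Sketch: estimate render hours and memory for a new shot from similar
    # past shots, and flag jobs that run well past the estimate.
    import math

    # (feature vector, render_hours, peak_mem_gb); features might be asset
    # count, character count, and an effects-density score.
    past_shots = [
        ([120, 3, 0.4], 6.0, 80),
        ([200, 5, 0.7], 9.5, 130),
        ([90, 2, 0.2], 3.0, 40),
    ]

    def predict(features, k=2):
        """k-nearest-neighbor estimate of hours and memory for a new shot."""
        nearest = sorted(past_shots, key=lambda s: math.dist(s[0], features))[:k]
        hours = sum(s[1] for s in nearest) / k
        mem = sum(s[2] for s in nearest) / k
        return hours, mem

    def should_kill(elapsed_hours, predicted_hours, margin=1.5):
        """Kill a job with confidence once it has run well past its prediction."""
        return elapsed_hours > predicted_hours * margin

    hours, mem = predict([150, 4, 0.5])
    print(f"predicted ~{hours:.1f} h and ~{mem:.0f} GB; kill threshold {hours * 1.5:.1f} h")
    print("kill a job that has run 11 h?", should_kill(11.0, hours))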

I recently listened to a gentleman talk about what a difference a 1 percent improvement would make. So 15 percent is huge: that's 15 percent less money you have to spend, 15 percent faster for a director to be able to see something, and 15 percent more iterations. That was really huge for us.

Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.

Creative intelligence at the edge

Wike: It's interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data -- that intelligence -- to an artist, then they are actually really, really good at managing their own work.

It's only a problem when they don't have any idea that six months from now it's going to cause a huge increase in memory usage or render time. When they don't know that, it's hard for them to self-manage. But now we have artists who can access Tableau reports every day and see exactly what the memory usage or compute usage was for any of the assets they've created, and they can correct it immediately.

On Megamind, a film DreamWorks Animation released several years ago and made before we had these data analytics in place, the studio encountered massive rendering spikes on certain shots. We really didn't understand why.

After the movie was complete, when we could go back and get printouts of logs to analyze, we determined that these peaks in rendering resources were caused by the main character's watch. Whenever the watch was in a frame, the render times went up. We looked at the models, and well-intended artists had modeled every gear in that watch -- it was just a huge, heavy asset to render.

By then it was too late to do anything about it. But now, if an artist were to create that watch today, they would quickly find out that they had really over-modeled it, and we would go in and reduce that asset down, because it's really not a key element of the story. They can do that today, which is really great.

Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll -- hundreds and hundreds of people at work putting this together.

So you are dealing with not just one artist making a decision, you have an army of people. It's astounding that you can bring this level of data-driven efficiency to it.

Movie-making’s mobile workforce

Wike: It becomes so much more important, too, as we become a more mobile workforce. 

Now it becomes imperative to be able to obtain the information about what those artists are doing so that they can collaborate. We know what value we are really getting from that, and so much information is available now. If you capture it, you can learn so many things that help us better understand our creative process, so we can drive efficiency and value into the entire business.

Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it's now all these great screens in people's hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?

Wike: There's an explosion in content and explosion in delivery platforms. We are exploring all kinds of different mediums. I mean, there’s really no limit to where and how one can create great imagery. The ability to do that, the ability to not say “No” to any project that comes along is going to be a great asset.

We always say that we don't know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.

Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.

Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software without having to have people go in and rewire or re-stack, but instead change on-demand. And with machine learning, we’re able to learn what those demands are.

I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge -- and closer to consumers -- to support live, multi-screen Internet Protocol (IP) entertainment delivery. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected -- with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.

And guess what? Now they are watching it on the go -- on their telephones, on their iPads, on their laptops, anywhere. So they have to find the way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it.  So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change -- when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now -- with the new technology on the back-end -- to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That's another big upside of the fragmentation, and of the advent of IP technology -- all the way from content creation to making a program and distributing it. It gives the content creators access to the unique viewers, and the ability to really engage with them -- knowing what they like -- and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the unknown of whether several generations of viewers are willing to accept good advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to -- at the very edge of the signal distribution, the video signal distribution -- insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
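
To illustrate the mechanics in the simplest possible terms -- the viewer profile, ad catalog, and scoring below are all invented for the example, not Imagine's actual system -- a dynamic ad insertion decision at the edge boils down to picking the spot that best matches what is known about the viewer of that particular stream:

    # Toy sketch of a dynamic ad insertion (DAI) decision at the edge.
    ADS = [
        {"id": "ad_suv", "targets": {"interests": {"autos"}, "region": "US"}},
        {"id": "ad_travel", "targets": {"interests": {"travel"}, "region": "EU"}},
        {"id": "ad_generic", "targets": {"interests": set(), "region": None}},  # fallback
    ]

    def pick_ad(viewer: dict) -> str:
        """Score each ad against the viewer profile and return the best match."""
        def score(ad):
            t = ad["targets"]
            s = len(t["interests"] & viewer["interests"])     # shared interests
            if t["region"] in (None, viewer["region"]):       # regional fit
                s += 1
            return s
        return max(ADS, key=score)["id"]

    # At the ad break, the edge node splices the chosen spot into this viewer's stream.
    print(pick_ad({"interests": {"autos", "sports"}, "region": "US"}))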

But that means media and entertainment customers have to revisit the whole infrastructure. It's not necessarily rebuilding; they can put in add-ons. They don't have to throw away what they had, but can maintain the legacy infrastructure and add the IP-enabled infrastructure on top of it to take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don’t know yet if I want to say that model doesn’t work anymore. Let’s let the CDN providers enhance their technology. But for sure, the volume of videos that we are consuming everyday is exponentially growing. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way so that it doesn’t put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the type of workflows that we are implementing at our customers' sites today -- to make them efficient, to avoid tying up storage either at the edge or centrally, and to do transcoding just-in-time. These are the things that are being worked on. It's a balance between available capacity and the number of programs that you want to send across to your viewers -- and how big your target market is.

The task for us on the back-end is to rethink the workflows in a much more efficient way. So, for example, this is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes out the traditional way and then adding another infrastructure for multi-screen -- on all those different platforms, plus cable, satellite, IPTV, and so on -- why not design the whole workflow digital-first? This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge. And it's there that the end-user requirements determine how they get the signal.

This is where we are going -- to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.
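
A minimal sketch of that digital-first idea, under stated assumptions: keep a single mezzanine master per asset and only produce a device-specific rendition when a viewer at the edge actually asks for it. The profiles are invented, and transcode() is a stand-in for a real encoder job rather than any particular product's API.

    # Hypothetical "digital-first" sketch: renditions are built just-in-time,
    # per request, instead of being pre-generated for every platform.
    from functools import lru_cache

    PROFILES = {
        "phone": {"resolution": "720p", "bitrate_kbps": 2_000},
        "laptop": {"resolution": "1080p", "bitrate_kbps": 5_000},
        "tv": {"resolution": "2160p", "bitrate_kbps": 15_000},
    }

    def transcode(master_path: str, profile: dict) -> str:
        # Placeholder: a real system would launch an encoder job with these
        # settings; here we just return the path the rendition would live at.
        return f"{master_path}.{profile['resolution']}.{profile['bitrate_kbps']}k.mp4"

    @lru_cache(maxsize=256)
    def rendition_for(asset_id: str, device: str) -> str:
        """Build (or reuse) a rendition only when the edge requests it."""
        return transcode(f"/masters/{asset_id}.mxf", PROFILES[device])

    print(rendition_for("evening_news_2024_06_01", "phone"))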

Explore High-Performance Computing Solutions from HPE

Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP -- so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It's not only the "how," it's also needing examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology, which is another way of saying IP-delivered, and on what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there's a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interfaces (SDIs) in the broadcast industry to IP. It’s basically saying analog video signals can be moved into digital. Then not only is there a digitally sharp signal, it’s an IP stream. The whole knowledge about how to handle IP is new to our own industry, to our own engineers, to our own customers. We also have to educate on what it takes to do this properly.

One of the key things in the media and entertainment industry is that there's a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry -- real-time sports and news -- this is where the money comes from. That's why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP – it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion and billing, with understanding more about the end user, and with allowing that data to flow from the edge back to the core, and then back out to the edge.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera -- from content creation all the way to making a program and distributing it. And also, to your point, we monetize all of that with a management system. We have a long history of powering key customers around the world with their advertising systems. It's basically an automated system that lets them sell advertising spots and then bill for them -- and this is the engine of how our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion. In other words, how you target ads to the single viewer. And the challenge for them is now that they have a campaign, how do they design it to cater both to the linear traditional advertising system as well as the multi-screen or web mobile application? That's what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.

Gardner: The technology is there, you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners and what are the some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.

We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That's very important because having high-performance compute (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it's 99.9999 percent reliable, and that’s what we want, right?

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That's also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it's an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It's a big deal because we are going to be in a hybrid world. Most of our customers, when they hear about cloud, need us to explain what it means for them. We explain that they can have a private cloud, where they run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. For example, such big names as Disney, Turner -- they have operations everywhere. For them, being able to optimize at the edge means that you have to create an architecture that is geographically distributed -- but is highly efficient where they have those operations. This type of technology helps us deliver more value to the key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they like to try and run their disaster recovery (DR) site in a more cost-effective manner. So the flexibility that an edge architecture provides to them is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don't have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it's distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy -- and DR is a key driver for this type of application.

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations -- from content creation, planning, and distribution -- to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV -- all from their cloud. They needed strong partners in data centers, so obviously we work with them closely. The reason they do this is simply to take advantage of the flexibility. They don't want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. For the Oscars, for example, it's one night -- are you going to recreate the whole infrastructure when you can just switch a channel on and off, if you will, out of existing data center capacity? So that's the key application: pop-up channels and the ability to easily try new programs.

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that's an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side -- these were two groups that didn't speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn't say yet that they are IT companies. The core strength is content, that is their brand, that's what they are good at -- creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can't lose concentration on their core business. I think the IT providers still have a very strong play there. It's always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.

Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that's dynamic, flexible, agile and cost-efficient. What does that get for them? Is it too soon, or do we have some metrics of success for companies that make this move toward a full digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time -- the total cost of ownership for the next 5 to 10 years, because that's typically the life cycle of those infrastructures -- then they definitely do save money. On the operational expenditure (OPEX) side [of private cloud economics] it's much more efficient, and they also have upside from additional revenue. So net-net, the return on investment (ROI) is much better. It's hard to quantify precisely because we are still in the early days, but it's bound to be a much greater ROI.

Another specific DR example is in the Middle East. We have a customer there who decided to operate the DR and IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That's one of the intrinsic values of a hybrid IT approach -- you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. Where this goes next shows up in the number-one question that I get asked.

Three years ago customers would say to us, "Hey, IP is not going to work for live TV." We convinced them otherwise, and now they know it's working; it's happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, as you put stuff on your cloud, it’s more secure.

And we are working with our technology providers on this. As I said earlier, the COTS providers are not all equal. We take it seriously. Cyber attacks on content and media are a critical threat, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network -- email and VPNs -- from your broadcast operations network. Okay, that's easy to explain and can be implemented, and that's where most of the attacks over the last five years have happened. This is solved.

However, the cyber attackers are becoming more clever, so they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with things over there. So I think it's super important to be able to say, "Not only at the software level, but at the hardware and firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important."

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it's disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in: