

A tale of two hospitals—How healthcare economics in Belgium hastens need for new IT buying schemes

The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.

We will now explore how a regional hospital seeking efficiency -- and a teaching hospital seeking performance -- are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Infatuation leads to love—How container orchestration and federation enables multi-cloud competition

The use of containers by developers -- and now increasingly IT operators -- has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together ... and maybe even getting some relationship help along the way.

And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts.

This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use serverless computing models, and employ inclusive management to keep the container love alive.

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes, it’s complexity or runaway or unmanaged costs, or even usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices in identifying ways to hedge against any ill effects from runaway adoption of cloud? 

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.


It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We sometimes hear that, even at the board of directors level, companies are telling people to go cloud-first, or to dump IT altogether. That strikes me as an overreaction. If we’re looking at tools in terms of what they do best, is cloud so good that we can actually just go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.


Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.


Peters: I often talk about what I call the assumption gap. The assumption gap is that moment when we move from one side, where it’s okay to have lots of questions about something -- in this case, in IT -- to the other side of the gap, or chasm, to use a well-worn phrase, where it’s not okay to ask anything because it will look like you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly, and very fast, at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring out what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo IM, to be precise -- they used to do maintenance between 9 am and 5 pm, because it was just a tool to chat with your friends at night. And now you have businesses that rely on it. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, X application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.


But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?


Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.


Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects, as if they were the answer to everything. What we should be looking for are not bright, shiny objects -- but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then figure out how to get it. You should not be sitting down with IT managers and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business.
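
As an editorial aside, one way to make that outcome-first, situational decision concrete is a simple weighted scorecard per workload. Below is a minimal sketch; the criteria weights and venue ratings are invented placeholders a team would replace with its own requirements, not anything Peters or ESG prescribes:

```python
# Score candidate venues for one workload against weighted criteria.
# All weights and 1-5 ratings below are invented placeholders.

WEIGHTS = {"cost": 0.30, "performance": 0.30, "compliance": 0.25, "agility": 0.15}

venues = {
    "on-premises":    {"cost": 3, "performance": 5, "compliance": 5, "agility": 2},
    "public cloud A": {"cost": 4, "performance": 3, "compliance": 3, "agility": 5},
    "public cloud B": {"cost": 5, "performance": 3, "compliance": 2, "agility": 4},
}

def score(ratings: dict) -> float:
    """Weighted sum of this venue's ratings for one workload."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

for venue, ratings in sorted(venues.items(), key=lambda v: -score(v[1])):
    print(f"{venue}: {score(ratings):.2f}")
```

Because the inputs are situational and dynamic, as Peters notes, re-scoring workloads periodically matters as much as the initial placement.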

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve hammered home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got -- because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of different ways of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows. You can’t, by definition, see into that, so that’s a tough thing to do. Even once you find out what’s going on, the second step is how do you gain control? Control -- not for control’s sake -- only by knowing all the things you were trying to do and how you’re trying to do them across an organization. And only then can you hope to optimize them.


Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.
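
To illustrate just the visibility step described above, here is a minimal sketch that merges per-provider inventories into one view and flags unclaimed resources. The adapter functions are hypothetical stand-ins, not any vendor’s real API:

```python
# Merge per-provider resource inventories into a single view:
# visibility first, control and optimization later.
# The adapter functions are hypothetical stand-ins for real provider APIs.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Resource:
    provider: str
    resource_id: str
    owner: str          # team or cost center, if tagging is enforced
    monthly_cost: float

def build_inventory(adapters: dict[str, Callable[[], Iterable[Resource]]]):
    """Pull from every adapter; flag untagged resources for follow-up."""
    inventory, unclaimed = [], []
    for _, list_resources in adapters.items():
        for res in list_resources():
            inventory.append(res)
            if not res.owner:
                unclaimed.append(res)   # likely shadow IT: nobody claims it
    return inventory, unclaimed

# Usage with a fake adapter standing in for one provider:
def fake_provider() -> list[Resource]:
    return [Resource("provider-x", "vm-123", "", 212.50)]

inventory, unclaimed = build_inventory({"provider-x": fake_provider})
print(f"{len(inventory)} resources, {len(unclaimed)} unclaimed")
```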


Visibility is not the same as solving. The point is not just finding out what you have -- but actually being able to do something about it. The level of complexity, the range of applications that most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage what is automated. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go for a whole day class, when I was on the sales side of the business, just to learn something like dual parity or about cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. I mean, you and I used to use a choke, or we’d have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it better work in any temperature and at any speed. Now, we just demand so much from cars.

To stretch that analogy -- I’m mixing cars and storage, and we’ll make it all come together with hybrid IT -- it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economical and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for 99 percent of people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now because you don’t talk about it in those terms. Storage has moved on to questions like: How do we serve applications? How do we serve up the right data in the right place at the right time -- get the data to the right person, at the right time, at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.
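
As a toy illustration of “set the parameters, then let the machine operate,” here is a sketch of a policy-driven placement rule; the thresholds and tier names are invented for illustration:

```python
# Humans set the policy; the system executes placement continuously.
# Thresholds and tier names below are invented for illustration.

POLICY = [
    # (predicate over per-object access stats, target tier)
    (lambda stats: stats["reads_per_day"] > 1_000, "flash"),
    (lambda stats: stats["reads_per_day"] > 10,    "capacity-disk"),
]

def place(stats: dict) -> str:
    """Return the first tier whose rule matches this object's stats."""
    for rule, tier in POLICY:
        if rule(stats):
            return tier
    return "cold-archive"       # default when no rule fires

print(place({"reads_per_day": 5_000}))   # flash
print(place({"reads_per_day": 2}))       # cold-archive
```

A real system would re-run the same loop as access patterns change, executing moves rather than merely recommending them.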


We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.


Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future. The path there may not be great and perfect, but you have to get there. We do quite a lot of research at ESG. I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have -- the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.


People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use it, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You’re not just going to pick one cloud model and hope that it will come back and make everything else work. You start with what you have and you gradually embrace these alternative tools.

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.


Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are focused either on the cloud side of the business -- and there are some very big names there -- or on the on-premises side. Embracing both is not that difficult for them to do, but it’s really not top of their want-to-do list until they’re absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough to imagine that large enterprises in particular are going to risk [standardizing on them]. They will probably even start to try to write it themselves, which is possible -- unlikely, but possible.

Where I think we will get the list for the other side is from some of the other big organizations -- Oracle and IBM spring to mind -- in terms of being able to embrace both on-premises and off-premises. But the commonality among those that we’ve mentioned is that they are systems companies. At the end of the day, they win by delivering the best overall solution and package to their clients, not individual components within it.


And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management from request for proposals (RFPs) from 50 or 60 different companies to find some solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gaining visibility into not just hybrid IT, but into the on-premises and the off-premises elements and how you manage them -- those are all parts of the solution, or the answer. The real thing, and it’s absolutely crucial, is that you don’t start with those bright shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Globalization risks and data complexity demand new breed of hybrid IT management, says Wikibon’s Burris

The next BriefingsDirect Voice of the Analyst interview explores how globalization and distributed business ecosystems factor into hybrid cloud challenges and solutions.

Mounting complexity and a lack of multi-cloud services management maturity are forcing companies to seek new breeds of solutions so they can grow and thrive as digital enterprises. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how international companies must factor localization, data sovereignty and other regional factors into any transition to sustainable hybrid IT is Peter Burris, Head of Research at Wikibon. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Peter, companies doing business or software development just in North America can have an American-centric view of things. They may lack an appreciation for the global aspects of cloud computing models. We want to explore that today. How much more complex is doing cloud -- especially hybrid cloud -- when you’re straddling global regions?

Burris: There are advantages and disadvantages to thinking cloud-first when you are thinking globalization first. The biggest advantage is that you are able to work in locations that don’t currently have the broad-based infrastructure that’s typically associated with a lot of traditional computing modes and models.


The downside of it is, at the end of the day, that the value in any computing system is not so much in the hardware per se; it’s in the data that’s the basis of how the system works. And because of the realities of working with data in a distributed way, globalization that is intended to more fully enfranchise data wherever it might be introduces a range of architectural implementation and legal complexities that can’t be discounted.

So, cloud and globalization can go together -- but it dramatically increases the need for smart and forward-thinking approaches to imagining, and then ultimately realizing, how those two go together, and what hybrid architecture is going to be required to make it work.

Gardner: If you need to then focus more on the data issues -- such as compliance, regulation, and data sovereignty -- how is that different from taking an applications-centric view of things?


Burris: Most companies have historically taken an infrastructure-centric approach to things. They start by saying, “Where do I have infrastructure, where do I have servers and storage, do I have the capacity for this group of resources, and can I bring the applications up here?” And if the answer is yes, then you try to ultimately economize on those assets and build the application there.

That runs into problems when we start thinking about privacy, and in ensuring that local markets and local approaches to intellectual property management can be accommodated.

But the issue is more than just things like the General Data Protection Regulation (GDPR) in Europe, which is a series of regulations in the European Union (EU) that are intended to protect consumers from what the EU would regard as inappropriate leveraging and derivative use of their data.


Ultimately, the globe is a big place. It’s 12,000 miles or so from point A to the farthest point B, and physics still matters. So, the first thing we have to worry about when we think about globalization is the cost of latency and the cost of bandwidth of moving data -- either small or very large -- across different regions. It can be extremely expensive and sometimes impossible to even conceive of a global cloud strategy where the service is being consumed a few thousand miles away from where the data resides, if there is any dependency on time and how that works.

So, the issues of privacy, the issues of local control of data are also very important, but the first and most important consideration for every business needs to be: Can I actually run the application where I want to, given the realities of latency? And number two: Can I run the application where I want to given the realities of bandwidth? This issue can completely overwhelm all other costs for data-rich, data-intensive applications over distance.
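
To put a rough number on “physics still matters,” here is a back-of-the-envelope sketch. The only assumptions are the 12,000-mile figure from above and signal speed in optical fiber at roughly two-thirds the speed of light:

```python
# Back-of-the-envelope: minimum round-trip time across 12,000 miles.
# Assumes signals in optical fiber travel at roughly 2/3 of c;
# real paths are longer and add switching and protocol delays.

MILES_TO_KM = 1.609344
SPEED_OF_LIGHT_KM_S = 299_792     # in a vacuum
FIBER_FRACTION = 2 / 3            # typical refractive-index penalty

distance_km = 12_000 * MILES_TO_KM                     # ~19,300 km
one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
round_trip_ms = 2 * one_way_s * 1_000

print(f"Best-case round trip: {round_trip_ms:.0f} ms")  # ~190 ms
```

Even before congestion or protocol overhead, a chatty application making dozens of sequential round trips over that distance pays seconds of pure distance penalty.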

Gardner: As you factor your architecture, you need to take these local considerations into account, particularly when you are weighing costs. If you have to do some heavy lifting to make enough bandwidth available, it might be better to have a local closet-sized data center, because they are small and efficient these days, and you can stick with a private cloud or on-premises approach. At the least, you should factor in the economic basis for comparison, along with all these other variables you brought up.

Edge centers

Burris: That’s correct. In fact, we call them “edge centers.” For example, if the application involves the Internet of Things (IoT), then there will likely be some degree of latency consideration, and the cost of doing a round-trip message over a few thousand miles can be pretty significant relative to how fast computing can be done these days.

The first consideration is: what are the impacts of latency for an application workload like IoT, and is it intended to drive more automation into the system? Imagine, if you will, the businessperson who says, “I would like to enter a new market, or expand my presence in a market, in a cost-effective way. And to do that, I want the system to be more fully automated as it serves that particular market or that particular group of customers.” Perhaps it’s something that looks process manufacturing-oriented, or something along those lines, that has IoT capabilities.


The goal, therefore, is to bring in the technology in a way that does not explode the administration, management, and labor costs associated with the implementation.

The only way you are going to do that is if you introduce a fair amount of automation and if, in fact, that automation is capable of operating within the time constraints required by those automated moments, as we call them.

If the round-trip cost of moving the data from a remote global location back to somewhere in North America -- independent of whether it’s legal or not – comes at a cost that exceeds the automation moment, then you just flat out can’t do it. Now, that is the most obvious and stringent consideration.
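
That go/no-go test can be stated very simply. A minimal sketch follows, with the deadline and latency figures as illustrative placeholders rather than measurements:

```python
# Can a given placement meet the deadline of an "automated moment"?
# All numbers below are illustrative placeholders.

def meets_deadline(round_trip_ms: float, processing_ms: float,
                   deadline_ms: float) -> bool:
    """True if sense -> infer -> act completes within the deadline."""
    return round_trip_ms + processing_ms <= deadline_ms

DEADLINE_MS = 50.0        # e.g., a control loop on a factory floor
PROCESSING_MS = 10.0

placements = {
    "edge center (on-site)":   2.0,    # round trip, ms
    "in-region cloud":        25.0,
    "cross-ocean cloud":     190.0,    # see the distance math above
}

for name, rtt in placements.items():
    verdict = "meets" if meets_deadline(rtt, PROCESSING_MS, DEADLINE_MS) else "misses"
    print(f"{name}: {verdict} the {DEADLINE_MS:.0f} ms deadline")
```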

On top of that, these moments of automation necessitate significant amounts of data being generated and captured. We have done model studies where, for example, moving data out of a small wind farm can be 10 times as expensive as processing it locally. It can cost hundreds of thousands of dollars a year just to do relatively simple and straightforward types of data analysis on the performance of that wind farm.
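
The bandwidth side of the trade-off can be estimated the same way. In the sketch below, every figure -- sensor counts, sample rates, and per-gigabyte link prices -- is an assumption chosen only to show the method:

```python
# Rough annual cost of shipping raw telemetry off a remote site.
# Every figure here is an assumption for illustration only.

SENSORS = 500                # vibration/temperature points across a farm
SAMPLES_PER_SEC = 1_000      # high-rate condition monitoring, per sensor
BYTES_PER_SAMPLE = 8
SECONDS_PER_YEAR = 365 * 24 * 3600

gb_per_year = (SENSORS * SAMPLES_PER_SEC * BYTES_PER_SAMPLE
               * SECONDS_PER_YEAR) / 1e9              # ~126,000 GB

PRICES_USD_PER_GB = {
    "metro cloud egress (assumed)":    0.09,
    "remote/satellite link (assumed)": 5.00,
}

print(f"Raw telemetry: {gb_per_year:,.0f} GB/year")
for link, price in PRICES_USD_PER_GB.items():
    print(f"{link}: ${gb_per_year * price:,.0f}/year")
```

Under those assumptions the remote link lands in the hundreds of thousands of dollars a year, which is why reducing or summarizing data at the edge comes up next.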

Process locally, act globally

It’s a lot better to have a local presence that can handle local processing requirements against models operating on locally derived or locally generated data, and to let that work be automated, with only periodic visibility into how the overall system is working. And that’s where a lot of this kind of on-premises hybrid cloud thinking is starting.

It gets more complex than in a relatively simple environment like a wind farm, but nonetheless, the amount of processing power that’s necessary to run some of those kinds of models can get pretty significant. We are going to see a lot more of this kind of analytic work be pushed directly down to the devices themselves. So, the Sense, Infer, and Act loop will occur very, very closely in some of those devices. We will try to keep as much of that data as we can local.

But there are always going to be circumstances when we have to generate visibility across devices, we have to do local training of the data, we have to test the data or the models that we are developing locally, and all those things start to argue for sometimes much larger classes of systems.

Gardner: It’s a fascinating subject as to what to push down the edge given that the storage cost and processing costs are down and footprint is down and what to then use the public cloud environment or Infrastructure-as-a-Service (IaaS) environment for.

But before we go any further, Peter, tell us about yourself, and your organization, Wikibon.


Burris: Wikibon is a research firm that’s affiliated with something known as TheCUBE. TheCUBE conducts about 5,000 interviews per year with thought leaders at various locations, often on-site at large conferences.

I came to Wikibon from Forrester Research, and before that I had been a part of META Group, which was purchased by Gartner. I have a longstanding history in this business. I have also worked with IT organizations, and also worked inside technology marketing in a couple of different places. So, I have been around.

Wikibon's objective is to help mid-sized to large enterprises traverse the challenges of digital transformation. Our opinion is that digital transformation actually does mean something. It's not just a set of bromides about multichannel or omnichannel or being “uberized,” or anything along those lines.


The difference between a business and a digital business is the degree to which data is used as an asset. In a digital business, data absolutely is used as a differentiating asset for creating and keeping customers.

We look at the challenges of what it means to use data differently, and how to capture it differently, which is a lot of what IoT is about. We look at how to turn it into business value, which is a lot of what big data and advanced analytics like artificial intelligence (AI), machine learning, and deep learning are all about. And then finally, how to create the next generation of applications that actually act on behalf of the brand with a fair degree of autonomy -- which is what we call “systems of agency.” And then ultimately, how cloud and historical infrastructure are going to come together and be optimized to support all those requirements.

We are looking at digital business transformation as a relatively holistic thing that includes IT leadership, business leadership, and, crucially, new classes of partnerships to ensure that the services that are required are appropriately contracted for and can be sustained as they become an increasing feature of any company’s value proposition. That’s what we do.

Global risk and reward

Gardner: We have talked about the tension between public and private cloud in a global environment through speeds and feeds, and technology. I would like to elevate it to the issues of culture, politics, and perception. Because in recent years, with offshoring and intellectual property concerns in other countries, the fact is that all the major hyperscale cloud providers are US-based corporations. There is a wide ecosystem of other second-tier providers, but the top tier is US-based.

Is that something that should concern people when it comes to risk to companies that are based outside of the US? What’s the level of risk when it comes to putting all your eggs in the basket of a company that's US-based?

Burris: There are two perspectives on that, but let me first add one check on this. Alibaba clearly is one of the top tier, and they are not based in the US -- and that may be one of the advantages they have. So, I think we are starting to see some new hyperscalers emerge, and we will see whether or not one will emerge in Europe.

I had gotten into a significant argument with a group of people not too long ago on this, and I tend to think that the political environment almost guarantees that we will get some kind of scale in Europe for a major cloud provider.

If you are a US company, are you concerned about how intellectual property is treated elsewhere? Similarly, if you are a non-US company, are you concerned that the US companies are typically operating under US law, which increasingly is demanding that some of these hyperscale firms be relatively liberal, shall we say, in how they share their data with the government? This is going to be one of the key issues that influence choices of technology over the course of the next few years.

Cross-border compute concerns

We think there are three fundamental concerns that every firm is going to have to worry about.

I mentioned one, the physics of cloud computing. That includes latency and bandwidth. One computer science professor told me years ago, “Latency is the domain of God, and bandwidth is the domain of man.” We may see bandwidth costs come down over the next few years, but let's just lump those two things together because they are physical realities.

The second one, as we talked about, is the idea of privacy and the legal implications.

The third one is intellectual property control and concerns, and this is going to be an area that faces enormous change over the course of the next few years. It’s in conjunction with legal questions on contracting and business practices.


From our perspective, a US firm that wants to operate in a location that features a more relaxed regime for intellectual property absolutely needs to be concerned. And the reason why they need to be concerned is data is unlike any other asset that businesses work with. Virtually every asset follows the laws of scarcity. 

Money, you can put it here or you can put it there. Time and people, you can put here or you can put there. That machine can be dedicated to this kind of work or that kind of work.


Scarcity is a dominant feature of how we think about generating returns on assets. Data is weird, though, because data can be copied, data can be shared. Indeed, the value of data appreciates as we use it more successfully, as we use it more completely, as we integrate it and share it across multiple applications.

And that is where the concern is, because if I have data in one location, two things could possibly happen. One is that it gets copied and stolen, and there are a lot of implications to that. And two, there may be rules and regulations in place that restrict how I can combine that data with other sources of data. That means that, for example, my customer data in Germany may not appreciate, or may not be able to generate, the same types of returns as my customer data in the US.

Now, that sets aside any moral question of whether Germany or the US has better privacy laws and protects consumers better. But if you are basing investments on how you can use data in the US, and presuming a similar type of approach in most other places, you are absolutely right. Number one, you probably aren’t going to be able to generate the total value of your data because of restrictions on its use; and number two, you have to be very careful about concerns related to data leakage and the appropriation of your data by unintended third parties.

Gardner: There is the concern about the appropriation of the data by governments, including the United States with the PATRIOT Act. And there are ways in which governments can access hyperscalers’ infrastructure, assets, and data under certain circumstances. I suppose there’s a whole other topic there, but at least we should recognize that there's some added risk when it comes to governments and their access to this data.

Burris: It’s a double-edged sword that US companies may be worried about hyperscalers elsewhere, but companies that aren't necessarily located in the US may be concerned about using those hyperscalers because of the relationship between those hyperscalers and the US government.

These concerns have been suppressed in the grand regime of decision-making in a lot of businesses, but that doesn’t mean that it’s not a low-intensity concern that could bubble up, and perhaps, it’s one of the reasons why Alibaba is growing so fast right now.


All hyperscalers are going to have to be able to demonstrate that they can, in fact, protect their clients, their customers’ data, utilizing the regime that is in place wherever the business is being operated. [The rationale] for basing your business in these types of services is really immature. We have made enormous progress, but there’s a long way yet to go here, and that’s something that businesses must factor as they make decisions about how they want to incorporate a cloud strategy.

Gardner: It’s difficult enough given the variables and complexity of deciding a hybrid cloud strategy when you’re only factoring the technical issues. But, of course, now there are legal issues around data sovereignty, privacy, and intellectual property concerns. It’s complex, and it’s something that an IT organization, on its own, cannot juggle. This is something that cuts across all the different parts of a global enterprise -- their legal, marketing, security, risk avoidance and governance units -- right up to the board of directors. It’s not just a willy-nilly decision to get out a credit card and start doing cloud computing on any sustainable basis.

Burris: Well, you’re right, and too frequently it is a willy-nilly decision where a developer or a business person says, “Oh, no sweat, I am just going to grab some resources and start building something in the cloud.”

I can remember back in the mid-1990s when I would go into large media companies to meet with IT people to talk about the web, and what it would mean technically to build applications on the web. I would encounter 30 people, and five of them would be in IT and 25 of them would be in legal. They were very concerned about what it meant to put intellectual property in a digital format up on the web, because of how it could be misappropriated or how it could lose value. So, that class of concern -- or that type of concern -- is minuscule relative to the broader questions of cloud computing, such as the grabbing of your data and holding it hostage, for example.

There are a lot of considerations that are not within the traditional purview of IT, but CIOs need to start thinking about them on their own and in conjunction with their peers within the business.


Gardner: We’ve certainly underlined a lot of the challenges. What about solutions? What can organizations do to prevent going too far down an alley that’s dark and misunderstood, and therefore have a difficult time adjusting?

How do we better rationalize for cloud computing decisions? Do we need better management? Do we need better visibility into what our organizations are doing or not doing? How do we architect with foresight into the larger picture, the strategic situation? What do we need to start thinking about in terms of the solutions side of some of these issues?

Cloud to business, not business to cloud

Burris: That’s a huge question, Dana. I can go on for the next six hours, but let’s start here. The first thing we tell senior executives is, don’t think about bringing your business to the cloud -- think about bringing the cloud to your business. That’s the most important thing. A lot of companies start by saying, “Oh, I want to get rid of IT, I want to move my business to the cloud.”

It’s like many of the mistakes that were made in the 1990s regarding outsourcing. When I went back and did research on outsourcing, I discovered that a lot of it was driven not by business needs, but by executive compensation schemes, literally. Where executives were told that they would be paid on the basis of return on net assets, there was a high likelihood that the business would go to outsourcers to get rid of the assets, so the executives could pay themselves an enormous amount of money.


The same type of thinking pertains here -- the goal is not to get rid of IT assets since those assets, generally speaking, are becoming less important features of the overall proposition of digital businesses.

Think instead about how to bring the cloud to your business, and to better manage your data assets, and don’t automatically default to the notion that you’re going to take your business to the cloud.

Every decision-maker needs to ask himself or herself, “How can I get the cloud experience wherever the data demands?” The cloud experience, which is a very, very powerful concept, is ultimately about getting access to a very rich set of services associated with automation. We need visible pricing and metering, self-sufficiency, and self-service. These are all the experiences that we want out of the cloud.

What we want, however, are those experiences wherever the data requires it, and that’s what’s driving hybrid cloud. We call it “true private cloud,” and the idea is of having a technology stack that provides a consistent cloud experience wherever the data has to run -- whether that’s because of IoT or because of privacy issues or because of intellectual property concerns. True private cloud is our concept for describing how the cloud experience is going to be enacted where the data requires, so that you don’t just have to move the data to get to the cloud experience.

Weaving IT all together

The third thing to note here is that ultimately this is going to lead to the most complex integration regime we’ve ever envisioned for IT. By that I mean, we are going to have applications that span Software-as-a-Service (SaaS), public cloud, IaaS services, true private cloud, legacy applications, and many other types of services that we haven’t even conceived of right now.

And understanding how to weave all of those different data sources, and all of those different service sources, into a coherent application framework that runs reliably and provides a continuous, ongoing service to the business is essential. It will involve a degree of distribution that completely breaks most models -- not just for infrastructure and architecture, but also for data management, system management, security management, and, as I said earlier, all the way out to contractual management and vendor management.

The arrangement of resources for the classes of applications that we are going to be building in the future is going to require deep, deep thinking.

That leads to the fourth thing, and that is defining the metric we’re going to use increasingly from a cost standpoint. And it is time. As the costs of computing and bandwidth continue to drop -- and they will continue to drop -- it means ultimately that the fundamental cost determinant will be, How long does it take an application to complete? How long does it take this transaction to complete? And that’s not so much a throughput question, as it is a question of, “I have all these multiple sources that each on their own are contributing some degree of time to how this piece of work finishes, and can I do that piece of work in less time if I bring some of the work, for example, in-house, and run it close to the event?”
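
A minimal sketch of that time accounting follows; the service names and latencies are placeholder assumptions, and the point is only the comparison of totals before and after relocating one step:

```python
# Total time for one transaction across distributed services.
# Service names and latencies are placeholder assumptions.

def total_ms(steps: list[tuple[str, float]]) -> float:
    """Sum each service's contribution to one piece of work."""
    return sum(ms for _, ms in steps)

current = [("auth (SaaS)",                 40.0),
           ("inference (distant cloud)",  120.0),
           ("write-back (on-prem DB)",     30.0)]

relocated = [("auth (SaaS)",                        40.0),
             ("inference (moved near the event)",   25.0),
             ("write-back (on-prem DB, local hop)",  5.0)]

print(f"current:   {total_ms(current):.0f} ms")     # 190 ms
print(f"relocated: {total_ms(relocated):.0f} ms")   # 70 ms
```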

This relationship between increasing distribution of work, increasing distribution of data, and the role that time is going to play when we think about the event that we need to manage is going to become a significant architectural concern.

The fifth issue, one that really places an enormous strain on IT, is how we think about backing up and restoring data. Backup and restore has been an afterthought for most of the history of the computing industry.

As we start to build these more complex applications that have more complex data sources and more complex services -- and as these applications increasingly are the basis for the business and the end-value that we’re creating -- we are not thinking about backing up devices or infrastructure or even subsystems.

We are thinking about what it means to back up -- and, even more importantly, to restore -- applications and even businesses. The issue becomes more about restoring: How do we restore applications and businesses across this incredibly complex arrangement of services, data locations, and sources?


I listed five areas that are going to be very important. We haven’t even talked about the new regime that’s emerging to support application development and how that’s going to work, or the role that data scientists and analytics are going to play in working with application developers -- again, we could go on and on. There is a wide array of considerations, but I think all of them are going to come back to the five that I mentioned.

Gardner: That’s an excellent overview. One of the common themes that I keep hearing from you, Peter, is that there is a great unknown about the degree of complexity, the degree of risk, and a lack of maturity. We really are venturing into unknown territory in creating applications that draw on these resources, assets and data from these different clouds and deployment models.

When you have that degree of unknowns, that lack of maturity, there is a huge opportunity for a party to come in to bring new types of management with maturity and with visibility. Who are some of the players that might fill that role? One that I am familiar with, and that I think I have seen on theCUBE, is Hewlett Packard Enterprise (HPE), with what they call Project New Hybrid IT Stack. We still don’t know too much about it. I have also talked about Cloud28+, which is an ecosystem of global cloud environments that helps mitigate some of the concerns about a single hyperscaler or a handful of hyperscale providers. What’s the opportunity for a business to come into this problem set and start to solve it? What do you think from what you’ve heard so far about Project New Hybrid IT Stack at HPE?

Key cloud players

Burris: That’s a great question, and I’m going to answer it in three parts. Part number one is, if we look back historically at the emergence of TCP/IP, TCP/IP killed the mini-computers. A lot of people like to claim it was microprocessors, and there is an element of truth to that, but many computer companies had their own proprietary networks. When companies wanted to put those networks together to build more distributed applications, the mini-computer companies said, “Yeah, just bridge our network.” That was an unsatisfyingly bad answer for the users. So along came Cisco, TCP/IP, and they flattened out all those mini-computer networks, and in the process flattened the mini-computer companies.

HPE was one of the few survivors because they embraced TCP/IP much earlier than anybody else.

The second thing is that to build the next generations of more complex applications -- and especially applications that involve capabilities like deep learning or machine learning with increased automation -- we are going to need the infrastructure itself to use deep learning, machine learning, and advanced technology for determining how the infrastructure is managed, optimized, and economized. That is an absolute requirement. We are not going to make progress by adding new levels of complexity and building increasingly rich applications if we don’t take full advantage of the technologies that we want to use in the applications -- inside how we run our infrastructures and run our subsystems, and do all the things we need to do from a hybrid cloud standpoint.

Ultimately, the companies are going to step up and start to flatten out some of these cloud options that are emerging. We will need companies that have significant experience with infrastructure, that really understand the problem. They need a lot of experience with a lot of different environments, not just one operating system or one cloud platform. They will need a lot of experience with these advanced applications, and have both the brainpower and the inclination to appropriately invest in those capabilities so they can build the type of platforms that we are talking about. There are not a lot of companies out there that can.

There are a few out there, and certainly HPE with its New Stack initiative is one of them; we at Wikibon are especially excited about it. It's new, it's immature, but HPE has a lot of the piece parts that will be required to make a go of this technology. It's going to be one of the most exciting areas of invention over the next few years. We really look forward to working with our user clients to introduce some of these technologies and innovate with them. It's crucial to solve the next generation of problems that the world faces; we can't move forward without some of these new classes of hybrid technologies that weave together fabrics capable of running any number of different application forms.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How modern architects transform the messy mix of hybrid cloud into a force multiplier

The next BriefingsDirect cloud strategies insights interview focuses on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios.

We’ll now learn how composable infrastructure and auto-scaling help improve client services, operations, and business goals attainment for a New York cloud services and architecture support provider.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us learn what's needed to reach the potential of multiple -- and often overlapping -- cloud models is Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) Ltd. in New York.

Here are some excerpts:

Gardner: How are IT architecture and new breeds of service providers coming together? What’s different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services? 

Reyenger

Reyenger: Like the technology trends themselves, everything is accelerating. Before, you would have three-year or even five-year plans developed by the business. They were designed to reach certain business outcomes; the technology was designed to support them, and then it was heads-down to build the rocket ship.

It’s changed now to where it’s a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and be re-architected -- almost as if it were made of Lego blocks.

Gardner: More moving parts, less time.

Reyenger: Absolutely.

Gardner: How do you accomplish that? 

Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It’s becoming a way to achieve those business initiatives at that higher speed.

Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting? 

Reyenger: Every organization, including ours, has its own business transformation that it has to undergo. We think that we are extremely proactive. I see some companies that are developing in-house skill sets and trying to add additional departments that are more cloud-aware in order to meet those demands.

On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.

Gardner: Tell us about your company. What have you done to transform?

Reyenger: IIS has spent 26 years building out an amazing book of business with amazing relationships with a lot of enterprise customers. But as times change, you need to be able to add additional practices like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and then added in our internal developers and delivery consultants. They are very well-versed and aware of the new architecture. So we can marry the two together and help organizations reach that new end-state.

It's very easy for startups to go 100 percent to the cloud and just run with it. It’s different when you have 2,000 existing applications and you want to move to the future as well. It’s nice to have someone who understands both of those worlds -- and the appropriate way to integrate them. 

Gardner: I suppose there is no typical cloud engagement, but what is a common hurdle that organizations are facing as they go from that traditional IT mindset to the more cloud-centric thinking and hybrid deployment models? 

The cloud answer

Reyenger: The concept of auto-scaling or bursting has become very, very prevalent. You see that within different lines of business. Ultimately, they are all asking for essentially the same thing -- and the cloud is a pretty good answer.

At the same time, you really need to understand your business and the triggers. You need to be able to put the necessary intelligence together around those capabilities in order to make it really beneficial and align to the ebbs and flows of your business. So that's been one of the very, very common requests across the board.
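
As a rough illustration of what "putting the necessary intelligence together" around auto-scaling can mean, here is a minimal sketch of a business-aware scaling decision. The metric names, thresholds, and scale_to() stub are all hypothetical, standing in for a monitoring feed and a provider's scaling API:

```python
# Minimal sketch of a business-aware auto-scaling decision loop.
# Metrics, thresholds, and scale_to() are illustrative placeholders.

def desired_capacity(current_nodes: int, orders_per_min: float,
                     cpu_util: float) -> int:
    # Scale on the business signal (order volume), not just raw CPU,
    # so capacity tracks the ebbs and flows of the business.
    if orders_per_min > 1000 or cpu_util > 0.80:
        return current_nodes + 2          # burst out
    if orders_per_min < 200 and cpu_util < 0.30 and current_nodes > 2:
        return current_nodes - 1          # scale back in, keep a floor
    return current_nodes

def scale_to(nodes: int) -> None:
    # Stub: a real implementation calls the cloud provider's scaling API.
    print(f"Requesting {nodes} nodes from the provider API")

current = 4
target = desired_capacity(current, orders_per_min=1350, cpu_util=0.72)
if target != current:
    scale_to(target)
```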

We've built out solutions that include intellectual property from IIS and our developers, as well as cloud management tools built around backup to the cloud, which eliminate tape and modernize backup for customers. This builds out a dedicated object store that customers can own, and that also tiers to the different public cloud providers out there.

And we’ve done this in a repeatable fashion so that our customers get the cloud consumption look and feel, and we’ve leveraged innovative contractual arrangements to allow customers to consume against the scope of work rather than on lease. We’ve been able to marry that with the different standardized offerings out there to give someone the head start that they need in order to achieve their objectives. 

Gardner: You brought up the cloud consumption model. Organizations want the benefit of a public cloud environment and user experience for bursting, auto-scaling, and price efficiency. They might want to have workloads on-premises, to use a managed service, or take advantage of public clouds under certain circumstances.

How are you working with companies like Hewlett Packard Enterprise (HPE), for example, to provide composable auto-scaling capabilities with the look and feel of public cloud on their private cloud?

Reyenger: Now it's becoming a multi-cloud strategy. It's one thing to stay only on-premises and use one cloud. But relying on just one cloud carries risk, and that is a problem.

We try to standardize everything through a single cloud management stack for our customers. We’re agnostic to a whole slew of toolsets around both orchestration and automation. We want to help them achieve that.

Intelligent platform performance

We looked at some of the very unique things that HPE has done, specifically around their Synergy platform, to allow for cloud management and cloud automation to deliver true composable infrastructure. That has huge value around energizing a company’s goals, strengthening their profitability, boosting productivity, and enhancing innovation. We've been able to extend that into the public cloud. So now we have customers that truly are getting the best of both worlds.

Gardner: How do you define composable infrastructure? 

Reyenger: It’s having true infrastructure that you can deploy as code. You’ll hear a lot of folks say that and what it really means is being able to standardize on a single RESTful API set.

That allows your platform to have intelligence when you look at infrastructure as a service (IaaS), and then to deliver things as either platform as a service (PaaS) or software as a service (SaaS) -- from either a DevOps approach, or from the lines of business directly to consumers. So it's the ability to bridge those two worlds.

Traditionally, you may have underlying infrastructure that doesn't have the intelligence or doesn't have the visibility into the cloud automation. So I may be scaling, but I can't scale into infinity. I really need an underlying infrastructure to be able to mold and adapt in order to meet those needs.

We’re finally reaching the point where we have that visibility and we have that capability, thanks to software-defined data center (SDDC) and a platform to ultimately be able to execute on. 
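
As a hedged sketch of what "infrastructure deployed as code over a single RESTful API set" can look like in practice, the snippet below composes a server profile by POSTing desired state to a management appliance. It is modeled loosely on an HPE OneView-style server-profile call, but the endpoint URL, payload fields, and header names here are assumptions for illustration, not a documented contract:

```python
import requests

APPLIANCE = "https://composer.example.internal"  # hypothetical appliance URL
SESSION_TOKEN = "replace-with-session-token"     # from a prior login call

def compose_node() -> None:
    # Declare the desired state of a compute node; the composer makes it so.
    # Field names below are illustrative, loosely OneView-flavored.
    profile = {
        "name": "web-node-01",
        "serverHardwareTypeUri": "/rest/server-hardware-types/abc123",
        "connections": [{"network": "prod-net", "requestedMbps": 2500}],
        "localStorage": {"controllers": [{"mode": "RAID1"}]},
    }
    resp = requests.post(
        f"{APPLIANCE}/rest/server-profiles",
        json=profile,
        headers={"Auth": SESSION_TOKEN, "X-API-Version": "800"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Compose request accepted:", resp.status_code)

# compose_node() would be invoked against a real appliance; the URL above
# is a placeholder, so it is not called here.
```

The point is less the specific fields than the uniformity: the same declarative call pattern can sit in front of a rack on-premises or a cloud resource.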

Gardner: When I think about composable infrastructure, I often wonder, “Who is the composer?” I know who composes the apps, that’s the developer -- but who composes the infrastructure?  

Reyenger: This gets to a lot of the digital transformation that we talked about in seeking different resources, or cultivating your existing resources to gain more of a developer’s view.

But now you have IT operations and DevOps both able to come under a single management console. They are able to communicate effectively and then script on either side in order to compose based on the code requirements. Or they can put guardrails on different segments of their workloads in order to dictate importance or assign guidelines. The developers can ultimately make those requests or modify the environment. 

Gardner: When you get to composable infrastructure in a data center or private cloud, that’s fine. But that’s sort of like 2D Chess. When I think about multi-cloud or hybrid cloud -- it’s more like 3D Chess. So how do I compose infrastructure, and who is the composer, when it comes to deciding where to support a workload in a certain way, and at what cost?

Consult before composing

Reyenger: We offer a series of consulting services around the delivery of managed services and the actual development to take an existing cloud management stack -- whether that is Red Hat CloudForms, vRealize from VMware, or Terraform -- it really doesn't matter.

We are ultimately allowing that to be the single pane of glass, the single console. And then because it’s RESTful API integrations into those public cloud providers, we’re able to provide that transparency from that management interface, which mitigates risk and gives you control.

Then we deploy things like Puppet, Chef, and Ansible within those different virtual private clouds and within those public cloud fabrics. Then, using that cloud management stack, you can have uniformity and you can take that composition and that intelligence and bring it wherever you like -- whether that's based on geography or a particular cloud service provider preference.

There are many different ways to ultimately achieve that end-state. We just want to make sure that that standardization, to your point, doesn’t get lost the second you leave that firewall.
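
One way to picture how that standardization survives past the firewall is a thin abstraction layer in front of the provider APIs -- a minimal sketch, with hypothetical classes standing in for real SDK calls:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    # One interface in front of many provider-specific APIs.
    @abstractmethod
    def provision_vm(self, size: str, region: str) -> str: ...

class AWSProvider(CloudProvider):
    def provision_vm(self, size: str, region: str) -> str:
        # A real implementation would call the AWS SDK (boto3) here.
        return f"aws:{region}:{size}"

class PrivateCloudProvider(CloudProvider):
    def provision_vm(self, size: str, region: str) -> str:
        # A real implementation would call the on-prem composer's REST API.
        return f"private:{region}:{size}"

def place_workload(provider: CloudProvider, size: str, region: str) -> str:
    # Placement policy lives above the provider layer, so the same call
    # works whether the target is public or private: one pane of glass.
    return provider.provision_vm(size, region)

print(place_workload(AWSProvider(), "m5.large", "us-east-1"))
print(place_workload(PrivateCloudProvider(), "medium", "nyc-dc1"))
```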

Gardner: We are in the early days of composability of infrastructure in a multi-cloud world. But as the complexity and scale increases, it seems likely to me that we are going to need to bring things like machine learning and artificial intelligence (AI) because humans doing this manually will run out of runway.

Projecting into the future, do you see a role for an algorithmic, programmatic approach putting in certain variables, certain thresholds, and contextual learning to then make this composable infrastructure capability part of a machine process? 

Reyenger: Companies like HPE -- along with its new acquisition, Nimble -- as well as Red Hat and several others in the industry are leveraging the intelligence they gather from all of their different support calls and from lifecycle management across applications, and that allows them to provide feedback to the customer.

And in some cases, that feedback ties back to an automation engine that will actually give you the information on how to solve your problem. A lot of the precursors to what you are talking about are already in the works, and everyone is trying to be that data-cloud management company.

It's really too early to pick favorites, but you are going to see more standardization. Rather than having 50 different RESTful APIs that everyone standardizes on and that are constantly changing -- so that I have to provide custom integrations -- what we will see is more of that single pane of glass leveraged across multiple cloud providers. That will draw on a lot of the same automation and orchestration toolsets that we talked about.

Gardner: And HPE has their sights set on this with Project New Hybrid IT Stack? 

Reyenger: 100 percent. 

Gardner: Looking at composable infrastructure, auto-scaling, using things like HPE Synergy, if you’re an enterprise and you do this right, how do you take this up to the C-Suite and say, “Aha, we told you so. Now give us more so we can do more”? In other words, how does this improve business outcomes? 

Fulfilling the promise

Reyenger: Every organization is different. I’ve spent a good chunk of my career being tactically deployed within very large organizations that are trying to achieve certain goals.

For me, I like to go to a customer's 10-K SEC filing and look at the promises they've made to their investors. We ultimately want to tie this IT investment back to the short-term goals that they are all being judged against, both from a key performance indicator (KPI) standpoint and from the standpoint of the health of the company.

It means meeting DevOps challenges and timelines, rolling out new greenfield workloads, and taking data that sits within traditional business intelligence (BI) relational databases and giving different departments access to some of that data. They should be able to run big data analytics against that data in real time.

These are the types of testing methodologies that we like to set up so that we can help a customer actually rationalize what this means today in terms of dollars and cents and what it could mean in terms of that perceived value. 

Gardner: When you do this well, you get agility, and you get to choose your deployment models. It seems to me that a concept is going to arise of a minimum viable cloud, or minimum viable hybrid cloud.

Are we going to see IT costs at an operating level adjusted favorably? Is this something that ultimately will be so optimized -- with higher utilization, leveraging the competitive market for cloud services -- that meaningful decreases will occur in the total operating costs of IT in an organization?

An uphill road to lower IT costs

Reyenger: I definitely think that it's quite possible. The way that most organizations are set up today, IT operations rolls up into finance. So if IT sits underneath the CFO, as it does in most organizations, and a request gets made by marketing or sales or another line of business -- it has to go up the chain, get translated, and then come back down.

A lot of times it's difficult to push a rock up a hill. You don’t have all the visibility unless you can get back up to finance or back over to that line of business. If you are able to break down those silos, then I believe that your statement is 100 percent true.

But changing all of those internal controls is very difficult for a lot of these organizations, which is why some are deploying net-new teams to ultimately be the future of their internal IT service provider operations.

Gardner: Arthur, I have been in this business long enough to know that every time we get to the point where we think we are going to meaningfully decrease IT costs, some other new paradigm of IT comes along that requires a whole new round of investment. But it seems to me that it could be different this time -- that we actually are getting to a standardized approach for supporting workloads, and that the traditional economics that apply to any procured service will come into effect here, too.

Mining to minimize risk

Reyenger: Absolutely. One of our big pushes has been around object storage, which still allows for traditional file- and block-level support. We are trying to help customers achieve that new economic view -- of which cloud approach ultimately provides them the best price point, while still giving them low risk, visibility, and control over their data.

I will give you an example. There is a very large financial exchange that had a lot of intellectual property (IP) data that they traditionally mined internally and then provided back to smaller financial institutions as a service, in the form of financial reports. A few years back, they came to us and said, "I really want to leverage the agility of Amazon Web Services (AWS), in terms of being able to spin up a huge Hadoop farm and mine this data very, very quickly -- and do that without having to increase my overall cost. But I don't feel comfortable putting that data into S3 within AWS, where they would then hold two extra copies of my data as part of the service-level agreement. So what do I do?"

And we ultimately stood up the same object storage service next to AWS, so they wouldn't have to pay any data egress fees, and they could mine everything right there, leveraging AWS Redshift, or Hadoop as a service.

Then once these artifacts, or reports, were created, they no longer contained the IP. The reports came from the IP, but they are all roll-ups and comparisons, and so they are no longer sensitive to the company. We went ahead and put those into S3 and allowed Amazon to manage all of their customers' identity and access management for getting at those reports -- and that all minimized risk for this exchange. We are able to prevent anyone outside of the organization from getting behind the firewall at their data. They don't have to worry about the SLAs associated with keeping this content up and available, and it became a really nice hybrid story.
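
In code, that split might look something like the sketch below: the IP-bearing data stays in a customer-owned, S3-compatible store, and only the derived reports are published to AWS S3. The bucket names and on-prem endpoint are hypothetical; pointing boto3's standard S3 client at a compatible store via endpoint_url is the customary pattern:

```python
import boto3

# Customer-owned, S3-compatible object store (endpoint is hypothetical).
onprem = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal:9000",
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)
aws = boto3.client("s3")  # regular AWS credentials from the environment

def ingest_and_publish() -> None:
    # Raw IP-bearing records are written only to the store the customer owns.
    onprem.put_object(
        Bucket="exchange-ip-data",            # hypothetical bucket
        Key="ticks/2017-10-01.csv",
        Body=b"...raw sensitive records...",
    )
    # After mining, the roll-up report contains no sensitive IP, so it can
    # be published to AWS S3, where AWS IAM governs customer access.
    aws.put_object(
        Bucket="exchange-published-reports",  # hypothetical bucket
        Key="reports/2017-10-summary.csv",
        Body=b"...derived, non-sensitive report...",
    )

# ingest_and_publish() would run against real endpoints and credentials.
```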

These are the types of projects that we really like to work on with customers, to be able to help them gain all the benefits associated with cloud – without taking on any of the additional risk, or the negatives, associated with jumping into cloud with both feet. 

Gardner: You heard your customers, you saw a niche opportunity for object storage as a service, and you put that together. I assume that you want composable infrastructure to do that. So is HPE Synergy a future foundation for this?

Reyenger: HPE Synergy doesn't really have the disk density to get to the public cloud price point, but it does support object storage natively. So it's great from a DevOps standpoint for object storage. We definitely think that, as time progresses and HPE continues down the Synergy roadmap, that cloud role will eventually fix itself.

A lot of the cloud role is centered on hyper-converged infrastructure. And in that kind of model, I don't see compute and storage growing at the same rates; I see storage growing considerably faster than the need for compute. So this is a way for us to supplement a Synergy deployment, or to help our customers get the true ROI/TCO they are looking for out of hyper-converged.

Gardner: So maybe the question I should ask is what storage providers are you using in order to make this economically viable?

Reyenger: We are absolutely using the HPE Apollo storage line, with different flavors of drives, from solid-state disks (SSDs) down to SATA physical drives. And we are leveraging best-in-breed object storage software from Red Hat. We have an OpenStack flavor as well.

We leverage things like automation and orchestration technologies, and our ServiceNow capabilities -- all married with our own IP -- to give customers the choice of buying this and deploying it, having us layer services on top, or consuming a fully managed service for something that's on-premises. I have a per-GB price and the same SLAs as those public cloud providers. So all of it is coming together to allow customers to have the true choice and flexibility that everyone claimed you could have years ago.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT’s ability to grow and thrive

The next BriefingsDirect Voice of the Analyst interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice.

We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles joins us to report on how companies are managing an increasingly complex transition to sustainable hybrid IT. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who -- or which group -- do you see emerging as the one tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

Crawford

Crawford: For the short term, I would say everyone. It's not as simple as it has been in the past, where we looked to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models -- and cloud is a relatively new consumption model for technology -- it changes the dynamics. And then there's another factor that comes into this: the consumerization of technology. We are "democratizing" technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It's not as it used to be, where people would say, "Okay, I'm not sure how to turn on a computer." Now, the business outside of the IT organization may be more familiar with certain technologies than IT is. Bringing that full circle, the answer is that we have to look beyond just IT. Cloud is consumed by IT organizations, it's consumed by different lines of business, and it's consumed even by end-consumers of the products and services. So I would say it's all of the above.

Gardner: The good news is that more and more people are able to innovate on their own -- to acquire cloud services and factor those into how they attain business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation spin out of control, or become a negative over time?

Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this -- and it’s something that just kind of raises the hair on the back of my neck -- is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”

The board should be very business-focused and instead they're dictating specific technology -- whether it’s the right technology or not. That’s really what this comes down to. 

Another example is folks that try and go all-in on cloud but aren’t necessarily thinking about what’s the right use of cloud – in all forms, public, private, software as a service (SaaS). What’s the right combination to use for any given application? It’s not a one-size-fits-all answer.

We in the enterprise IT space haven't really done enough work to truly understand how best to leverage these new sets of tools. We need to both wrap our head around it but also get in the right frame of mind and thought process around how to take advantage of them in the best way possible.

Another example I've worked through from an economic standpoint -- and I have done this math a number of times with clients -- is to compare what the IT you're running on-premises in your corporate data center costs for any given application versus running it in a public cloud.

Think differently

If you do the math, taking an application from a corporate data center and moving it to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?

When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we're using the same traditional mindset -- thinking of cloud as the same kind of solution, delivery mechanism, and tool. The reality is that it's a different delivery mechanism, and it's a different kind of tool.

When used appropriately, in some cases, yes, it can be less expensive. The challenge is that you have to get out of your traditional thinking and think differently about the how and why of leveraging cloud. When you do that, things begin to fall into place and make a lot more sense organizationally -- from a process standpoint and a delivery standpoint -- and also economically.
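
Crawford's four-times figure is easy to test against your own portfolio with back-of-the-envelope math. The sketch below is purely illustrative -- every number is a placeholder, not a benchmark -- but it shows how the same workload can price out very differently under a lift-and-shift mindset versus a re-architected, usage-based one:

```python
# Back-of-the-envelope comparison; every number here is an
# illustrative placeholder, not a benchmark.

onprem_monthly = 10_000          # amortized hardware + power + staff share

# Naive lift-and-shift: same always-on footprint at hourly cloud pricing.
lift_and_shift_monthly = 40_000  # the kind of 4x surprise Crawford describes

# Re-architected: right-sized, auto-scaled, and off during idle hours.
hours_active_per_month = 220
hourly_rate = 30
rearchitected_monthly = hours_active_per_month * hourly_rate  # 6,600

for label, cost in [("on-prem", onprem_monthly),
                    ("lift-and-shift", lift_and_shift_monthly),
                    ("re-architected", rearchitected_monthly)]:
    print(f"{label:>15}: ${cost:,}/month ({cost / onprem_monthly:.1f}x on-prem)")
```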

Gardner: That "appropriate use of cloud" is the key. Of course, it could be a moving target; what's appropriate today might not be appropriate in a month or a quarter. But before we delve in further ... Tim, tell us about your organization. What is a typical day in the life of Tim Crawford like?

Crawford: I love that question. AVOA stands for that position in which we sit, between business and technology. If you think about the intersection of business and technology -- using technology for business advantage -- that's the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It's not tech for tech's sake, and I want to be really clear about that. Rather, it's asking, "How do we use technology for business advantage?"

We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.

A day in my life could range from -- if I'm lucky -- staying in my office, being on the phone with clients, working with folks, and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, and understanding what people really are contending with.

I spent well over 20 years of my career inside leading IT organizations before I began doing this. It's incredibly important for me to stay relevant by being out with these folks and understanding what they're challenged by -- and then, of course, helping them through their challenges.

Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.

It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I'm grateful that the opportunities that I've had to work with have been just fabulous, and the same goes for the people.

Gardner: I've always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently -- and I find that very valuable.

Okay, let's get back to this idea of "appropriate use of cloud." I wonder if we should expand that to "appropriate use of IT and cloud" -- including that notion of hybrid IT, which encompasses cloud, hybrid cloud, and even multi-cloud. And let's not forget about legacy IT services.

How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT -- to therefore know if we’re doing it appropriately?

The monkey-bread model

Crawford: The first thing we have to do is take a step back to provide the context of that visibility -- or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.

If we look back for a minute, and look at how IT operates -- traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.

Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.

Part of the methodology I use is the monkey bread example. If you're not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you're essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”

And then, what you're left with is this really squishy -- but much smaller -- footprint that you have to contend with. That problem in the center is much more specific -- and arguably that’s what differentiates your company from your competition.

Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.

This example helps you work through determining what’s critical, and -- more importantly -- what’s strategic and differentiating to my business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.
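
One way to make the monkey-bread exercise mechanical is to score each service on two questions -- is it critical, and is it differentiating? -- and let the answers drive placement. The sketch below is illustrative; the services and classifications are hypothetical:

```python
# A minimal sketch of the monkey-bread exercise: pick apart the portfolio,
# send commodity services to mature external platforms, and keep only the
# differentiating "gooey center" in-house. Entries are hypothetical.

portfolio = [
    # (service, critical?, differentiating?)
    ("email",              True,  False),
    ("file-storage",       True,  False),
    ("payroll",            True,  False),
    ("pricing-engine",     True,  True),
    ("customer-analytics", True,  True),
]

for service, critical, differentiating in portfolio:
    if differentiating:
        place = "keep in-house -- invest and focus here"
    elif critical:
        place = "critical but commodity -- move to a mature SaaS/cloud service"
    else:
        place = "candidate to retire or outsource"
    print(f"{service:>18}: {place}")
```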

That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense -- and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.

So it’s important: Understand what is appropriate, how you leverage the right solutions for the right application or service.

Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.

So is IT itself going to rise to this challenge -- of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group -- maybe enterprise architects, business architects, or business process management (BPM) analysts? Do you think it's important for IT to both stay in and elevate to the bigger game?

Changing IT roles and responsibilities

Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.

The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is a business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow in the same vein.

For example, if you were to go into a traditional IT organization today and ask them about the nature of their business -- ask an administrator or a developer to tell you how the work they do impacts the company and the business -- unfortunately, most of them would have a really hard time answering.

The IT organization of the future will clearly articulate the work it is doing, how that impacts its customers and its business, and how making different changes and tweaks will impact the business. It will have an intimate knowledge of how the business functions, much more than of how the technology functions. That's a very different mindset, and that's the place we have to get to for IT on the whole. IT can't just be a technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral, to the business.

Gardner: If we recognize that cloud is here to stay -- but that the consumption of it needs to be appropriate -- and if we're at some sort of inflection point, then we're also at risk of consuming cloud inappropriately. If IT and its leadership are elevating themselves, upping their game to be that strategic player, isn't IT then in the best position to be managing cloud, hybrid cloud, and hybrid IT? What tools and mechanisms will they need to make that possible?

Crawford: Theoretically, yes -- they really need to get to that level, but we're not there on the whole yet. Many organizations are not prepared to adopt cloud. I don't want to be a naysayer of IT, but in terms of where IT needs to go, we need to move into a position where we can manage the different types of delivery mechanisms -- public cloud, SaaS, private cloud, our own data centers -- as just different levers we can pull depending on the business need.

As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.

The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.

Gardner: Some boards of directors, as you mentioned, are saying, "Go cloud," or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. Developers doing microservices are spinning up cloud instances and object storage instances; sometimes they'll keep those running into production, sometimes they'll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, and perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn't really know what the other is doing.

Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?

Why, oh why, oh why, cloud?

Crawford: It depends on how you approach it. If you're doing it from an IT command-and-control perspective, where you want to control everything in cloud -- full stop, that's failure right out of the gate. But if you're doing it from a position of, "I'm trying to use this as an opportunity to understand why these folks are leveraging cloud, why they are not coming to IT, and how I as CIO can be better positioned to support them," then great! Go forth and conquer.

The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets -- especially as you start thinking about Internet of things (IoT) and start thinking about edge computing -- you know that public cloud becomes problematic for some of those things.

The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?

Once you begin to ask yourself those questions, and you keep asking, it's like that old adage: ask "why" three times and you get to the core, the true reason. You'll bring greater clarity to the reasons -- typically the business reasons -- why you're actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions -- and which ones you might want to think about doing differently.

Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it -- but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.

So even though you may have an organization where you want to foster innovation, you want people to spread their wings, try out proofs of concept, be agile and democratic in terms of their ability to use myriad IT services, at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run.” How are the economic factors driven into cloud decision-making after you’ve done it for a period of time?

Cloud’s good, but is it good for business?

Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.

For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.

If you begin to understand the business value of the actions you take -- how you leverage public cloud versus private cloud versus your corporate data center assets -- and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.

Economics plays a very significant role -- but let’s not kid ourselves. IT organizations haven’t exactly been the best at economics in the past. We need to be moving forward. And so it’s just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for that.

Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on -- perhaps at any price -- and we can then mature that process over time by looking at price and finding the best options.

But the other item that we need to consider at all times is risk. Sometimes we need to consider whether getting too far into a model we can't get back out of -- a public cloud, for example -- is part of that risk. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain, for example, has inherent cybersecurity risks. Isn't it up to IT also to help organizations factor in some of these risks -- along with compliance, regulation, and data sovereignty issues? It's a big barrel of monkeys.

Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.

Safety in numbers

Crawford: You bring up a great point, Dana. Risk -- whether it is risk from a cybersecurity standpoint, data sovereignty issues, or regulatory compliance -- the reality is that nobody across the organization truly understands all of these pieces together.

It really is a team effort to bring it all together -- where you have the privacy folks, the information security folks, and the compliance folks -- that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.

It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.

Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they’ll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is – or isn’t -- doing appropriately with hybrid cloud and hybrid IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT

The next BriefingsDirect Internet of Things (IoT) strategies insights interview focuses on how a Miami telecommunications products provider has developed new breeds of services to help manage complex edge and data scenarios.

We will now learn how IoT platforms and services help to improve network services, operations, and business goals -- for carriers and end users alike.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore what is needed to build an efficient IoT support business is Andres Sanchez, CEO of Identidad IoT in Miami. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How has your business changed in the telecoms support industry and why is IoT such a big opportunity for you?

Sanchez: With the new over-the-top (OTT) content technology, and the way it came into the picture and took part of the whole communications value chain, the telecoms business is basically getting very tough. When we began evaluating what IoT can do and seeing the possibilities, we saw that this is a new wave. We understand that it's not about connectivity -- that is only about 10 percent of the value chain -- it's more about the solutions.

Sanchez

We saw a very good opportunity to start something new -- to take the experience and technology we have in telecoms, bring in new people and new developers, and start building solutions. And that's what we are doing right now.

Gardner: So as the voice telecoms business trails off, there is a new opportunity at the edge for data and networks to extend to a variety of use cases. What are some of the use cases you are seeing now in IoT that present a growth opportunity for your business?

Sanchez: IoT is everywhere. The beauty of IoT is that you can find solutions everywhere you look. What we have found is that when people think about IoT, they think about the connected home, the connected car, or smart parking, where a light simply turns green or red depending on whether a space is occupied. But IoT is more than that.

There are two ways to generate revenue in IoT. One is by creating new products. The second is understanding what we can do better at the operational level. That is where we are putting in sensors, measuring things, and analyzing things. You can basically reduce your operational cost, or be more effective in the way that you are doing business. It's not only getting the information; it's using that information to automate processes in ways that will make your company better.

Gardner: As organizations recognize that there are new technologies coming in that are enabling this smart edge, smart network, what is it that’s preventing them from being able to take advantage of this?

Sanchez: Companies think that they just have to connect the sensors, that they only have to digitize their information. They haven't realized that they really have to go through a digital transformation. It's not about connecting the sensors that are already there; it's about building a solution using that information. They have to reorganize and reinvent their organizations.

For example, it's not about taking a sensor, putting it on the machine, and just starting to collect information and watch it on a screen. It's taking the information and being able to detect patterns -- to predict when a machine is going to break, or to see that at certain temperatures a machine starts to work better or worse. It's being able to be more productive without having to do more work. It's letting the machines do the work by themselves.
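
A minimal sketch of that idea -- made-up readings and deliberately crude statistics -- showing the difference between watching a sensor on a screen and extracting a predictive break point from it:

```python
from statistics import mean

# Hypothetical history: temperature readings labeled by whether the
# machine failed within the following week.
readings = [
    (62, False), (64, False), (66, False), (71, True),
    (63, False), (73, True), (65, False), (72, True),
]

ok_temps  = [t for t, failed in readings if not failed]
bad_temps = [t for t, failed in readings if failed]

# A crude break-point estimate: halfway between healthy and failing means.
threshold = (mean(ok_temps) + mean(bad_temps)) / 2
print(f"Alert when temperature exceeds ~{threshold:.1f} C")

def check(temp_c: float) -> None:
    if temp_c > threshold:
        print(f"{temp_c} C: predicted failure risk -- schedule maintenance")
    else:
        print(f"{temp_c} C: normal")

check(68.5)
```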

Gardner: A big part of that is bringing more of an IT mentality to the edge, creating a standard network and standard platforms that can take advantage of the underlying technologies that are now off-the-shelf.

Sanchez: Definitely. The approach that Identidad IoT takes is that we do not build solutions based on what we think is good for the customer. What we do is build proofs of concept (PoCs) and tailored solutions for companies that need digital transformation.

I don’t think there are two companies doing the same thing that have the same problems. One manufacturer may have one problem, and another manufacturer using the same technology has another completely different problem. So the approach we are taking is that we generate a PoC, check exactly what the problems are, and then develop that application and solution.

But it's important to understand that IoT is not an IT thing. When we go to a customer, we don’t just go to an IT person, we go to the CEO, because this is a change of mentality. This is not just a change of process. This is not purely putting in new software. This is trying to solve a problem when you may not even know the problem is there. It's really digital transformation.

Gardner: Where is this being successful? Where are you finding that people really understand it and are willing to take the leap, change their culture, rethink things to gain advantages?

One solution at a time

Sanchez: Unfortunately, people are afraid of what is coming, because people don't understand what IoT is, and everybody thinks it's really complicated. It does need expertise. It does need to have security -- that is a very big topic right now. But it's not impossible.

When we approach a company, and that CEO, CIO, or CTO understands that the benefits of IoT will be shown once the solution is built -- and that the initial solution probably won't be the final solution, but will be refined through iterations -- that's when it starts working.

If people think it’s just an out-of-the-box solution, it's not going to work. That's the challenge we are having right now. The opportunity is when the head of the company understands that they need to go through a digital transformation.

Gardner: When you work with a partner like Hewlett Packard Enterprise (HPE), they have made big investments and developments in edge computing, such as the HPE Universal IoT Platform and HPE Edgeline systems. How does that help you, as a solutions provider, make that difficult transition easier for your customers -- and encourage them to understand that it's not impossible, and that there are a lot of solutions already designed for their needs?

Sanchez: Our relationship with HPE has been a huge success for Identidad IoT. When we started looking at platforms, when we started this company, we couldn't find the right platform to fulfill our needs. We were looking for a platform that we could build solutions on and then extrapolate that data with other data, and build other solutions over those solutions.

When we approached HPE, we saw that they do have a unique platform that allows us to generate whatever applications, for whatever verticals, for whatever organizations – whether a city or company. Even if you wanted to create a product just for end-users, they have the ability to do it.

Also, it's a platform that is so robust that you know it’s going to work, it’s reliable, and it’s very secure. You can build security from the device right on up to the platform and the applications. Other platforms, they don't have that.

Our business model correlates a lot with the HPE business model. We think that IoT is about relationships and partnerships -- it's about an ecosystem. The approach that HPE takes to IoT and to the ecosystem is exactly the same approach that we have. They are building this big ecosystem of partners, helping each other build relationships, and in that way they build a better and more robust platform.

Gardner: For companies and network providers looking to take advantage of IoT, what would you suggest that they do in preparation? Is there a typical on-ramp to an IoT project? 

A leap of faith

Sanchez: There's no time to get prepared -- they have to take a leap of faith and start building IoT applications now. The pace of the technology transformation is incredible.

When you look at the technology today, in four months it will probably be obsolete; you are going to have even better technology, a better sensor. So if you wait, most likely the competition is not going to wait, and they will have a very big advantage.

Our approach at Identidad IoT is about platform as a service (PaaS). We are helping companies take that leap without creating big financial struggles. And the companies know that by using the HPE platform, they are using a state-of-the-art platform -- not a mom-and-pop platform built in a garage. It's a robust PaaS -- so why not take that leap of faith and start building? Now is the time.

Gardner: Once you pick up that success, perhaps via a PoC, that gives you ammunition to show economic and productivity benefits that then would lead to even more investment. It seems like there is a virtuous adoption cycle potential here.

Sanchez: Definitely! Once we start a new solution, the people who see that solution usually start seeing things that they are not used to seeing. They can pinpoint problems that they have been having for years -- but they didn't understand why.

For example, there's one manufacturer of T-shirts in Colombia. They were having issues with one specific machine. That machine used to break after two or three weeks; there was just one small piece that kept breaking. When we installed the sensor and started gathering information, after two or three breaks, we understood that it was not the amount of work -- it was the temperature at which the machine was operating.

So now, once the temperature reaches a certain point, fans start automatically to normalize the temperature, and they haven't had any broken pieces for months. It was a simple solution, but it took a lot of study and gathering of information to be able to understand that break point -- and that's the beauty of IoT.
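
What Sanchez describes is a simple closed control loop: poll a sensor, compare against a learned threshold, trigger an actuator. A minimal sketch of that pattern follows; read_temperature() and set_fans() are hypothetical stand-ins for whatever device calls the platform exposes, and the 60°C break point is an invented figure.

```python
# Minimal sketch of threshold-triggered automation, as described above.
# read_temperature() and set_fans() are hypothetical stand-ins; the real
# calls would go through the IoT platform's device APIs.
import random
import time

TEMP_LIMIT_C = 60.0  # assumed break point, learned from sensor history

def read_temperature() -> float:
    # Stand-in for querying the machine's temperature sensor.
    return random.uniform(40.0, 80.0)

def set_fans(on: bool) -> None:
    # Stand-in for triggering the cooling fans.
    print("fans:", "on" if on else "off")

while True:  # polls indefinitely, like a control loop would
    # No human in the loop: the "thing" corrects itself.
    set_fans(read_temperature() >= TEMP_LIMIT_C)
    time.sleep(30)
```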

Gardner: It's data-driven, it's empirical, it’s understood, but you can't know what you don't know until you start measuring things, right?

Listen to things

Sanchez: Exactly! I always say that the "things" are trying to say something, and we are not listening. IoT enables people, companies, and organizations to start listening to the things -- and not only to listen, but to make the things work for us. We need the applications to be able to trigger something to fix the problem without any human intervention -- and that's also the beauty of IoT.

Gardner: And that IoT philosophy extends to healthcare, manufacturing, transportation -- any place where you have complexity, it is pertinent.

Sanchez: Yes, the solution for IoT is everywhere. You can think about healthcare or tracking people or tracking guns or building solutions for cities in which the city can understand what is triggering certain pollution levels that they can fix. Or it can be in manufacturing, or even a small thing like finding your cellphone.

It’s everything that you can measure. Everything that you can put a sensor on, you can measure -- that's IoT. The idea is that IoT will help people live better lives without having to take care of the “thing;” things will have to take care of themselves.

Gardner: You seem quite confident that this is a growth industry. You are betting a significant amount of your future growth on it. How do you see it increasing over the next couple of years? Is this a modest change or do you really see some potential for a much larger market?

Sanchez: That's a really good question. I do see that IoT is the next wave of technology. There are several studies that say that by 2020 there are going to be 50 billion devices connected. I am not that futuristic, but I do see that IoT will start working now and probably within the next two or three years we are going to start seeing an incremental growth of the solutions. Once people understand the capability of IoT, there's going to be an explosion of solutions. And I think the moment to start doing it is now. I think that next year it’s going to be too late.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·     Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment 

·     How IoT and OT collaborate to usher in the data-driven factory of the future 

·     DreamWorks Animation crafts its next era of dynamic IT infrastructure

·     How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·     Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·     Converged IoT systems: Bringing the data center to the edge of everything

·     IDOL-powered appliance delivers better decisions via comprehensive business information searches

·     OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·     Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·     How lastminute.com uses machine learning to improve travel bookings user experience

DreamWorks Animation crafts its next era of dynamic IT infrastructure

The next BriefingsDirect Voice of the Customer thought leader interview examines how DreamWorks Animation is building a multipurpose, all-inclusive, and agile data center capability.

Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?

Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, and the intelligence of the infrastructure are all advancing. For any business to stay ahead of the curve -- to really drive value into the business -- it has to continue to innovate.

Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?

Wike: When I say we are a digital manufacturer, it's because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor and is basically building digital assets that we put through simulations and rendering, so that in the end it all comes together to produce a movie.

Wike

That's all about manufacturing, and we actually have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, we’re adding a hair to a character, we’re adding clothes, we’re adding an environment, and we’re putting things into that environment.

We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.

Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters -- without changing the assembly line; you just adjust it and utilize it properly.

So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.

Greater creativity, increased efficiency

Wike: That's true. If you think about this animation process or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools. We actually write our own software, and we’re very involved in projects related to computer science at the studio.

We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”

And one of the things we've done over the past couple of years is start to re-architect all of our software tools to take advantage of massive multi-core processing, to try to give artists interactivity in their creative process. It's about iterations: How many things can I show a director? How quickly can I create the scene and get it approved so that I can hand it off to the next person? There are two things that you get out of that.

One, you can explore more and add more creativity. Two, you can drive efficiency, because it's all about how much time is spent, how many people are working on a particular project, and how long it takes -- all of which drives up the costs. So you now have these choices where you can add more creativity or -- because of the compute infrastructure -- drive efficiency into the operation.

So where does the infrastructure fit into that, because we talk about tools and the ability to make those tools quicker, faster, more real-time? We conducted a project where we tried to create a middleware layer between running applications and the hardware, so that we can start to do data abstraction. We can get more mobile as to where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren’t really able to do anything down at the core.

Core flexibility, fast

Now that we have done that, we are attacking the core. When we look at our ability to replace that with new compute, and add the new templates with all the security in it -- we want that in our infrastructure. We want to be able to change how we are using that infrastructure -- examine usage patterns, the workflows -- and be able to optimize.

Before, if we wanted to do a new project, we'd say, "Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x," and that makes a lot of sense. So we would build to peak. If at some point in the last six months of a show we are going to need 30,000 cores to finish it, we say, "Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores." So we were buying to peak, and that's wasteful.
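
To put rough numbers on that waste, here is a back-of-the-envelope sketch using the core counts from Wike's example; the six-month window and the half-peak, half-trough usage split are assumptions for illustration only.

```python
# Back-of-the-envelope cost of building to peak, using the figures above.
# The six-month window and the half-peak/half-trough usage split are
# assumptions for illustration only.
PEAK_CORES = 30_000
TROUGH_CORES = 12_000
HOURS = 6 * 30 * 24  # roughly six months

owned = PEAK_CORES * HOURS
used = (PEAK_CORES + TROUGH_CORES) * HOURS // 2  # half the time at each level
idle = owned - used

print(f"idle capacity: {idle:,} core-hours ({idle / owned:.0%} of what was bought)")
```

Composable infrastructure targets exactly that idle share: rather than dedicating peak capacity to one pipeline, the same cores can be recomposed for other projects during the valleys.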

What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity -- the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn't have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.

By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure -- parts of it or all of it -- and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure -- and do it quickly.

Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.

Does having that agility not only give you the ability to make more and better movies with higher utilization, but also give more wings to your leaders to go and find the right business models for the future?

Wike: That’s absolutely true. We certainly don't want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.

In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn't do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.

The world is great for us today, because people want content -- they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.

Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.

Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn't work on the factory floor, then what's the point of the design? Are we doing that with digital manufacturing now?

Artifact analytics improve usage, rendering

Wike: It's interesting that you mention that. We always look at budgets -- and budgets can be money budgets, rendering budgets, storage budgets, networking budgets -- all of those things are commodities that are required to create a project.

Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it's too much.”

One of the initiatives that we have incorporated in recent years is in the area of data analytics. We re-architected our software and decided to re-instrument everything, so we started collecting artifacts about rendering and usage. Every night we run every digital asset that has been created through our rendering, and we collect analytics about it. We now collect 1.2 billion artifacts a night.

And we correlate that information to a specific asset, such as a character, basket, or chair -- whatever it is that I am rendering -- as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.
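
A minimal sketch of that correlation step, using invented field names rather than DreamWorks' actual schema, might look like this: each nightly artifact ties a measurement back to an asset and a shot, so a simple rollup can trace a resource spike to a single model.

```python
# Minimal sketch of correlating nightly render artifacts back to assets.
# Field names and numbers are invented; the real pipeline collects roughly
# 1.2 billion of these records per night.
from collections import defaultdict

artifacts = [
    {"asset": "watch", "shot": "sq120_sh045", "render_seconds": 9400, "peak_mem_gb": 41.2},
    {"asset": "chair", "shot": "sq120_sh045", "render_seconds": 310, "peak_mem_gb": 2.1},
    {"asset": "watch", "shot": "sq120_sh046", "render_seconds": 8800, "peak_mem_gb": 39.7},
]

# Roll artifacts up by asset so a rendering spike is traceable to one model.
totals = defaultdict(float)
for record in artifacts:
    totals[record["asset"]] += record["render_seconds"]

for asset, seconds in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{asset}: {seconds:,.0f} render-seconds")
```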

One of the things that’s wasteful of digital resources is either having a job that doesn't fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours -- it could take 10 hours to run.

At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don't really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?

Just the facts

Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it's going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.

So now we don't have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines requirements. Even though a certain shot has never been run before, it is very similar to another previous shot, and so we can predict what it is going to need to run.

Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.
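
The transcript doesn't say which model DreamWorks uses, but a nearest-neighbor regression over past shots is one simple way to capture "this shot has never run, but it is very similar to a previous one." The sketch below assumes scikit-learn and NumPy, and its features and figures are invented for illustration.

```python
# Sketch of predicting a new shot's resource needs from similar past shots.
# Features and figures are invented; the actual model DreamWorks uses is
# not described in the transcript.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Per-shot features: [asset_count, total_polygons_millions, light_count]
past_shots = np.array([[120, 8.5, 30], [400, 22.0, 55], [90, 6.1, 25], [350, 19.4, 60]])
peak_mem_gb = np.array([18.0, 52.0, 14.5, 47.0])  # observed at render time

model = KNeighborsRegressor(n_neighbors=2).fit(past_shots, peak_mem_gb)

new_shot = np.array([[380, 21.0, 58]])  # never rendered, but similar to others
predicted = float(model.predict(new_shot)[0])
print(f"allocate ~{predicted:.0f} GB; a job far off this profile is likely stuck")
```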

I recently listened to a gentleman talk about what a difference a 1 percent improvement would make. So 15 percent is huge: that's 15 percent less money you have to spend, 15 percent faster time for a director to see something, 15 percent more iterations. So that was really huge for us.

Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.

Creative intelligence at the edge

Wike: It's interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data -- that intelligence -- to an artist, then they are actually really, really good at managing their own work.

It's only a problem when they don't have any idea that six months from now it's going to cause a huge increase in memory usage or render time. When they don't know that, it's hard for them to self-manage. But now we have artists who can access Tableau reports every day and see exactly what the memory or compute usage was for any of the assets they've created, and they can correct it immediately.

Megamind, a film DreamWorks Animation released several years ago, came before we had the data analytics in place, and the studio encountered massive rendering spikes on certain shots. We really didn't understand why.

After the movie was complete, when we could go back and analyze printouts of logs, we determined that these peaks in rendering resources were caused by the main character's watch. Whenever the watch was in a frame, the render times went up. We looked at the models, and well-intended artists had taken a model of a watch and modeled every gear; it was just a huge, heavy asset to render.

It was too late then to do anything about it. But if an artist were to create that watch today, they would quickly find out that they had really over-modeled it. We would then go in and reduce that asset down, because it's really not a key element of the story. And they can do that today, which is really great.

Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll -- hundreds and hundreds of people at work putting this together.

So you are dealing with not just one artist making a decision, you have an army of people. It's astounding that you can bring this level of data-driven efficiency to it.

Movie-making’s mobile workforce

Wike: It becomes so much more important, too, as we become a more mobile workforce. 

Now it becomes imperative to obtain information about what those artists are doing so that they can collaborate. We know what value we are really getting from that, and so much information is available now. If you capture it, you can understand your creative process much better, and drive efficiency and value into the entire business.

Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it's now all these great screens in people's hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?

Wike: There's an explosion in content and explosion in delivery platforms. We are exploring all kinds of different mediums. I mean, there’s really no limit to where and how one can create great imagery. The ability to do that, the ability to not say “No” to any project that comes along is going to be a great asset.

We always say that we don't know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.

Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.

Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software without having to have people go in and rewire or re-stack, but instead change on-demand. And with machine learning, we’re able to learn what those demands are.

I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Enterprises look for partners to make the most of Microsoft Azure Stack apps

The next BriefingsDirect Voice of the Customer hybrid cloud advancements discussion explores the application development and platform-as-a-service (PaaS) benefits from Microsoft Azure Stack.

We’ll now learn how ecosystems of solutions partners are teaming to provide specific vertical industries with applications and services that target private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the latest in successful cloud-based applications development and deployment is our panel, Martin van den Berg, Vice President and Cloud Evangelist at Sogeti USA, based in Cleveland, and Ken Won, Director of Cloud Solutions Marketing at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what are some of the trends that are driving the adoption of hybrid cloud applications specifically around the Azure Stack platform?

Van den Berg: What our clients are dealing with on a daily basis is an ever-expanding data center; they see ever-expanding private clouds in their data centers. They are trying to get into the hybrid cloud space to reap all the benefits, from both an agility and a compute perspective.

van den Berg

They are trying to get out of the data center space, to see how the ever-growing demand can leverage the cloud. What we see is that Azure Stack will bridge the gap between the cloud that they have on-premises, and the public cloud that they want to leverage -- and basically integrate the two in a true hybrid cloud scenario.

Gardner: What sorts of applications are your clients calling for in these clouds? Are these cloud-native apps, greenfield apps? What are they hoping to do first and foremost when they have that hybrid cloud capability?

Van den Berg: We see a couple of different streams there. One is the native-cloud development. More and more of our clients are going into cloud-native development. We recently brought out a white paper wherein we see that 30 percent of applications being built today are cloud-native already. We expect that trend to grow to more than 60 percent over the next three years for new applications.

The issue that some of our clients have has to do with some of the data being consumed in these applications. Either due to compliance issues, or because their information security divisions are not too happy about it, they don't want to put this data in the public cloud. Azure Stack bridges that gap as well.

Microsoft Azure Stack can bridge the gap between the on-premises data center and what they do in the cloud. They can leverage the whole Azure public cloud PaaS while still having their data on-premises in their own data center. That's a unique capability.

On the other hand, we also see that some of our clients are looking at Azure Stack as a bridge to the infrastructure-as-a-service (IaaS) space. Even there, where clients are not willing to expand their own data center footprint, they can use Azure Stack as a means to seamlessly move to the Azure public IaaS cloud.

Gardner: Ken, does this jibe with what you are seeing at HPE, that people are starting to creatively leverage hybrid models? For example, are they putting apps in one type of cloud and data in another, and then also using their data center and expanding capacity via public cloud means?

Won: We see a lot of it. The customers are interested in using both private clouds and public clouds. In fact, many of the customers we talk to use multiple private clouds and multiple public clouds. They want to figure out how they can use these together -- rather than as separate, siloed environments. The great thing about Azure Stack is the compatibility between what’s available through Microsoft Azure public cloud and what can be run in their own data centers.

Won

The customer concerns are data privacy, data sovereignty, and security. In some cases, there are concerns about application performance. In all these cases, it's a great situation to be able to run part or all of the application on-premises, or on an Azure Stack environment, and have some sort of direct connectivity to a public cloud like Microsoft Azure.

Because you can get full API compatibility, the applications that are developed in the Azure public cloud can be deployed in a private cloud -- with no change to the application at all.
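
As a hedged illustration of that point, the sketch below uses the azure-identity and azure-mgmt-resource Python packages: the application code is the same for both targets, and only the Azure Resource Manager endpoint changes. The Azure Stack URL follows typical local-deployment naming and is a placeholder, not a real environment.

```python
# Sketch of one code path targeting Azure public cloud or Azure Stack.
# The Azure Stack endpoint below is illustrative of a typical local
# deployment; substitute your own environment's ARM endpoint.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

ARM_ENDPOINTS = {
    "public": "https://management.azure.com",
    "stack": "https://management.local.azurestack.external",  # placeholder
}

def client_for(target: str, subscription_id: str) -> ResourceManagementClient:
    # Same credential flow and same client either way; only the
    # Resource Manager endpoint differs.
    return ResourceManagementClient(
        DefaultAzureCredential(),
        subscription_id,
        base_url=ARM_ENDPOINTS[target],
    )

# e.g. list resource groups wherever the business rules say the app lives:
# for rg in client_for("stack", "<subscription-id>").resource_groups.list():
#     print(rg.name)
```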

Gardner: Martin, are there specific vertical industries gearing up for this more than others? What are the low-lying fruit in terms of types of apps?

Hybrid healthcare files

Van den Berg: I would say that hybrid cloud is of interest across the board, but I can name a couple of examples of industries where we truly see a business case for Azure Stack.

One of them is a client of ours in the healthcare industry. They wanted to standardize on the Microsoft Azure platform. One of the things that they were trying to do is deal with very large files, such as magnetic resonance imaging (MRI) files. What they found is that in their environment such large files just do not work from a latency and bandwidth perspective in a cloud.

With Microsoft Azure Stack, they can keep these larger files on-premises, very close to where they do their job, and they can still leverage the entire platform and still do analytics from a cloud perspective, because that doesn’t require the bandwidth to interact with things right away. So this is a perfect example where Azure Stack bridges the gap between on-premises and cloud requirements while leveraging the entire platform.

Gardner: What are some of the challenges that these organizations are having as they move to this model? I assume that it's a little easier said than done. What's holding people back when it comes to taking full advantage of hybrid models such as Azure Stack?

Van den Berg: The level of cloud adoption is not really yet where it should be. A lot of our clients have cloud strategies that they are implementing, but they don't have a lot of expertise yet on using the power that the platform brings.

Some of the basic challenges that we need to solve with clients come down to just getting to the Microsoft Azure cloud and the public cloud services. Azure Stack simplifies that, because they now have the cloud on-premises. With that, it's going to be easier for them to spin up workload environments and try this all out in a secure environment within their own walls, their own data centers.

Won: We see a similar thing with our client base as customers look to adopt hybrid IT environments, a mix of private and public clouds. Some of the challenges they have include how to determine which workload should go where. Should a specific workload go in a private cloud, or should another workload go in a public cloud?

We also see some challenges around processes, organizational process and business process. How do you facilitate and manage an environment that has both private and public clouds? How do you put the business processes in place to ensure that they are being used in the proper way? With Azure Stack -- because of that full compatibility with Azure -- it simplifies the ability to move applications across different environments.

Gardner: Now that we know there are challenges, and that we are not seeing the expected adoption rate, how are organizations like Sogeti working in collaboration with HPE to give a boost to hybrid cloud adoption?

Strategic, secure, scalable cloud migration 

Van den Berg: As the Cloud Evangelist at Sogeti, for the past couple of years I have been telling my clients that they don't need a data center. The truth is, they probably still need some form of on-premises capacity. But the future is in the clouds from a scalability and agility perspective -- and with the hyperscale at which Microsoft is building out its Azure cloud capabilities, no enterprise client can keep up with that.

We try to help our clients define strategy and help them with governance -- how they approach cloud, and what workloads they can put where based on their internal regulations and compliance requirements -- and then do migration projects.

We have a service offering called the Sogeti Cloud Assessment, where we go in and evaluate their application portfolio on their cloud readiness. At the end of this engagement, we start moving things right away. We have been really successful with many of our clients in starting to move workloads to the cloud.

Having Azure Stack will make that even easier. Now, when a cloud assessment turns up issues on moving to the Microsoft Azure public cloud -- because of compliance or privacy concerns, or just comfort (sometimes the information security departments just don't feel comfortable moving certain types of data to a public cloud setting) -- we can move those applications to the cloud and leverage its full power and scalability while keeping them within the walls of our clients' data centers. That's how we are trying to accelerate cloud adoption, and we truly feel that Azure Stack bridges that gap.

Gardner: Ken, same question, how are you and Sogeti working together to help foster more hybrid cloud adoption?

Won: The cloud market has been maturing and growing. In the past, it’s been somewhat complicated to implement private clouds. Sometimes these private clouds have been incompatible with each other, and with the public clouds.

In the Azure Stack area, now we have almost an appliance-like experience where we have systems that we build in our factories that we pre-configure, pretest, and get them into the customers’ environment so that they can quickly get their private cloud up and running. We can help them with the implementation, set it up so that Sogeti can help with the cloud-native applications work.

With Sogeti and HPE working together, we make it much simpler for companies to adopt the hybrid cloud models and to quickly see the benefit of moving into a hybrid environment.

Van den Berg: In talking to many of our clients, when we see the adoption of private cloud in their organizations -- if they are really honest -- it doesn't go very far past just virtualization. They truly haven't leveraged what cloud could bring, not even in a private cloud setting.

So talking about hybrid cloud, it is very hard for them to leverage the power of hybrid clouds when their own private cloud is just virtualization. Azure Stack can help them to have a true private cloud within the walls of their own data centers and so then also leverage everything that Microsoft Azure public cloud has to offer.

Won: I agree. When they talk about a private cloud, they are really talking about virtual machines, or virtualization. But because the Microsoft Azure Stack solution provides built-in services that are fully compatible with what's available through the Microsoft Azure public cloud, it truly provides the full cloud experience. These are the types of services that go beyond just virtualization running within the customer's data center.

Keep IT simple

I think Azure Stack adoption will be a huge boost to organizations looking to implement private clouds in their data centers.

Gardner: Of course, your typical end-user worker is interested primarily in their apps; they don't really care where they are running. But when it comes to new application development -- rapid application development (RAD) -- these are some of the pressing issues that most businesses tell us concern them.

So how does RAD, along with some DevOps benefits, play into this, Martin? How are the development people going to help usher in cloud and hybrid cloud models because it helps them satisfy the needs of the end-users in terms of rapid application updates and development?

Van den Berg: This is also where we are talking about the difference between virtualization, private cloud, hybrid clouds, and definitely cloud services. The application development staff still run in the traditional model; they still run into issues provisioning their development environments, and sometimes test environments.

A lot of cloud-native application development projects are much easier because you can spin-up environments on the go. What Azure Stack is going to help with is having that environment within the client’s data center; it’s going to help the developers to spin up their own resources.

There is going to be on-demand orchestration and provisioning, which is truly beneficial to application development -- and it's really beneficial to the whole DevOps suite.

We need to integrate business development and IT operations to deliver value to our clients. If we are waiting multiple weeks for development and test environments to spin up -- that's an issue our clients are still dealing with today. That's where Azure Stack is going to bridge the gap, too.

Won: There are a couple of things that we see happening that will make developers much more productive and able to bring new applications or updates quicker than ever before. One is the ability to get access to these services very, very quickly. Instead of going to the IT department and asking them to spin up services, they will be able to access these services on their own.

The other big thing that Azure Stack offers is compatibility between private and public cloud environments. For the first time, the developer doesn't have to worry about what the underlying environment is going to be. They don’t have to worry about deciding, is this application going to run in a private cloud or a public cloud, and based on where it’s going, do they have to use a certain set of tools for that particular environment.

Now that we have compatibility between the private cloud and the public cloud, the developer can just focus on writing code, focus on the functionality of the application they are developing, knowing that that application now can easily be deployed into a private cloud or a public cloud depending on the business situation, the security requirements, and compliance requirements.

So it’s really about helping the developers become more effective and helping them focus more on code development and applications rather than having them worry about the infrastructure, or waiting for infrastructure to come from the IT department.

Gardner: Martin, for those organizations interested in this and want to get on a fast track, how does an organization like Sogeti working in collaboration with HPE help them accelerate adoption?

Van den Berg: This is where we heavily partner with HPE, to bring the best solutions to our clients. We have all kinds of proof of concepts, we have accelerators, and one of the things that we talked about already is making developers get up to speed faster. We can truly leverage those accelerators and help our clients adopt cloud, and adopt all the services that are available on the hybrid platform.

We have all heard the stories about standardizing on microservices, on a service fabric, or on serverless computing, but developers have not had access to this until now, and IT departments have been slow to push it to developers.

The accelerators that we have, the approaches that we have, and the proofs of concept that we can do with our client -- together with HPE --  are going to accelerate cloud adoption with our clientele. 

Gardner: Any specific examples, some specific vertical industry use-cases where this really demonstrates the power of the true hybrid model?

When the ship comes in

Won: I can share a couple of examples of the types of companies that we are working with in the hybrid area, and what places that we see typical customers using Azure Stack.

People want to implement disconnected applications or edge applications. These are situations where you may have a data center or an environment running an application that you may either want to run in a disconnected fashion or run to do some local processing, and then move that data to the central data center.

One example of this is the cruise ship industry. All large cruise ships essentially have data centers running the ship, supporting the thousands of customers on board. The cruise line vendors want to put an application on their many ships and run the same application on all of them. They want to be able to disconnect from the central data center while the ship is out at sea, and do a lot of the processing and analytics in the ship's own data center. Then, when the ship comes into port and reconnects, it sends only the results of the analysis back to the central data center.

This is a great example of having an application that can be developed once and deployed in many different environments; you can do that with Azure Stack. It's ideal for running that same application in multiple different environments, either disconnected or connected.

Van den Berg: In the financial services industry, we know they are heavily regulated. We need to make sure that they are always in compliance.

One of the things that we did in the financial services industry involved one of our accelerators, a tool called Sogeti OneShare. It's a portal solution on top of Microsoft Azure that can help you with orchestration, and with the whole DevOps concept. We were able to have the edge node be Azure Stack -- building applications, having some of the data reside within the data center on the Azure Stack appliance, but still leveraging the power of the cloud and all the analytics performance available there.

We just did a project in this space, and we were able to deliver functionality to the business in just eight weeks from the start of the project. They had never seen that before -- a project that lasts just eight weeks and truly delivers business value. That's the direction we should be taking. That's what DevOps is supposed to deliver -- faster value to the business, leveraging the power of clouds.

Gardner: Perhaps we could now help organizations understand how to prepare from a people, process, and technology perspective to be able to best leverage hybrid cloud models like Microsoft Azure Stack.

Martin, what do you suggest organizations do now in order to be in the best position to make this successful when they adopt?

Be prepared

Van den Berg: Make sure that the cloud strategy and governance are in place. That's one of the first things this should always start with.

Then, start training developers, and make sure that the IT department is the broker of cloud services. Traditionally, the IT department has been the broker for everything happening on-premises within the data center. In the cloud space, this doesn't always happen: because it is so easy to spin up things, sometimes the line of business deploys on its own.

We try to enable IT departments and operators within our clients to be the broker of cloud services and to help with the adoption of Microsoft Azure cloud and Azure Stack. That will help bridge the gap between the clouds and the on-premises data centers.

Gardner: Ken, how should organizations get ready to be in the best position to take advantage of this successfully?

Mapping the way

Won: As IT organizations look at this transformation to hybrid IT, one of the most important things is to have a strong connection to the line of business and to the business goals, and to be able to map those goals to strategic IT priorities.

Once you have done this mapping, the IT department can look at these goals and determine which projects should be implemented and how they should be implemented. In some cases, they should be implemented in private clouds, in some cases public clouds, and in some cases across both private and public cloud.

The task then changes to understanding the workloads, the characterization of the workloads, and looking at things such as performance, security, compliance, risk, and determining the best place for that workload.

Then, it’s finding the right platform to enable developers to be as successful and as impactful as possible, because we know ultimately the big game changer here is enabling the developers to be much more productive, to bring applications out much faster than we have ever seen in the past.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How a Florida school district tames the wild west of education security at scale and on budget

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West.

For the Clay County School District in Florida, a team of IT administrators is grabbing the bull by the horns nonetheless to create a new culture of computing safety -- without breaking the bank.

The next BriefingsDirect security insights discussion examines how Clay County is building a secure posture for its edge, network, and data centers, while allowing the right mix of access for the exploration necessary in an educational environment.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn how to ensure that schools are technically advanced and secure, at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District; and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It’s the culture of change -- of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don’t think it’s going to end anytime soon. But we are starting to get our key players on board with understanding that you can't clear-text Social Security numbers and credit card numbers and personally identifiable information (PII). It has been an interesting ride for us, let’s put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to help give reinforcement for people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been?

Skipper: Education is one of those weird areas where software development has always been lacking on the security side of the house. It has never even been inside the room. So one of the things that we have tried to do in education, at least in the Clay County School District, is modify that view through change management. We are trying to introduce a security focus. We try to interject ourselves and highlight areas that might be bad practice.

Skipper

One of our vendors uses plain text for passwords, and so we went through with them and showed them how that’s a bad practice, and we made a little bit of improvement with that.

I evaluate our policies and how we manage the domains, maybe finding some stuff that came from a long time ago and is no longer needed. We can pull that information out -- whereas before, they had put all the Social Security numbers into a document that was no longer needed. We have been trying really hard to figure that stuff out and then knock it down, as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people's perceptions, behaviors, culture, it’s useful to have both the carrot and a stick approach.

So to you Rich, what's been working in terms of a carrot? How do you incentivize people? What works in practice there?

Perkins: That's a tough one. We don't really have a carrot that we use. We basically say, “If you are doing the wrong things, you are not going to be able to use our network.”  So we focus more on negatives.

Perkins

The positives would be you get to do your job. You get to use the Internet. We don't really give them something more. We see security as directly intertwined with our customer service. Every person we have is our customer and our job is to protect them -- and sometimes that's from themselves.

So we don't really have a carrot-type system. We don't, say, reward students who stay out of trouble by letting them play games. We give everybody the same access and treat everybody the same: either you are a student and you get this level of access, or you are a staff member and you get that level of access, or you don't get access.

Gardner: Let’s get background on the Clay County School District. Tell us how many students you have, how many staff administrators, the size and scope of your school district?

Bunkley: Our school district is the 22nd largest in Florida. We are right on the edge of small and medium for Florida, which elsewhere would count as a very large school district. We run about 38,500 students.

And as far as our IT team -- covering our student information system, our Enterprise Resource Planning (ERP) system, security, desktop support, network infrastructure support, and our web services -- we have about 48 people total in our department.

Our scope is literally everything. For some reason IT means that if it plugs into a wall, we are responsible for it. That's generally a true statement in education across the board, where the IT staff tends to be a Jack-of-all-trades, and we fix everything.

Practical IT

Gardner: Where you are headed in terms of technology? Is there a one-to-one student-to-device ratio in the works? What sort of technology do you enable for them?

Bunkley: I am extremely passionate about this, because the one-to-one scenario seems to be the buzzword, and we generally despise buzzwords in this office and we prefer a more practical approach.

The idea of one-to-one is itself to me flawed, because if I just throw a device in a student's hand, what am I actually doing besides throwing a device in a student's hand? We haven't trained them. We haven’t given them the proper platform. All we have done is thrown technology.

And when I hear the claim that kids inherently know how to use technology today, it just bothers me, because kids inherently know how to use social media, not technology. They are not production-driven, they are socially driven, and that is a sticking point with me.

We are in fact moving to one-to-one, but in a nontraditional sense. We have established a unified platform that all students and employees reach through a portal system. We happen to use ClassLink; there are various other vendors out there, that's just the one we use.

We have integrated that in moving to Google Apps for Education and we have a very close relationship with Google. It’s pretty awesome, to be quite honest with you.

So we are moving in the direction of Chromebooks, because it’s just a fiscally more responsible move for us.

I know Microsoft is coming out with Windows 10 S, it’s kind of a strong move on their part. But for us, just because we have the expertise on the Google Apps for Education, or G Suite, it just made a lot of sense for us to go that direction.

So we are moving in one-to-one now with the devices, but the device is literally the least important -- and the last -- step in our project.

Non-stop security, no shenanigans

Gardner: Tell us about the requirements now for securing the current level of devices, and then for the new one. It seems like you are going to have to keep the airplane flying while changing the wings, right? So what is the security approach that works for you that allows for that?

Skipper: Clay County School District has always followed trends as far as devices go. So we actually have a good mixture of devices in our network, which means that no one solution is ever the right solution.

So, for example, we still have some iPads out in our networks, we still have some older Apple products, and then we have a mixture of Chromebooks and also Windows devices. We really need to make sure that we are running the right security platform for the full environment.

We are transitioning more and more to a take-home philosophy -- that's where we as an IT department see this going -- so that if the decision is made to send devices home with the entire student population, we are going to be ready to go.

We have coordinated with our content filter company, and they have extensions that we can deploy that lock the Chromebooks into a filtered state regardless of what network they are on. That's been really successful in identifying, and maybe blocking, students' late-night searches. We have also been able to identify some shenanigans that might be taking place, thanks to some interesting web searches that they might do over YouTube, for example. That's worked really well.
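
On managed Chrome devices, that kind of lock-in is typically achieved by force-installing the filter extension through enterprise policy. As a rough sketch, the snippet below writes the equivalent local policy file for a managed Chrome browser on Linux; actual Chromebooks would receive the same ExtensionInstallForcelist setting from the Google Admin console, and the extension ID shown is a placeholder, not the district's real filter.

```python
# Sketch of force-installing a content-filter extension via Chrome policy.
# The extension ID is a placeholder; Chromebooks would receive the same
# ExtensionInstallForcelist setting from the Google Admin console instead
# of a local file.
import json
import pathlib

policy = {
    # Force-install the filter so students cannot remove or disable it.
    "ExtensionInstallForcelist": [
        "aaaabbbbccccddddeeeeffffgggghhhh;"
        "https://clients2.google.com/service/update2/crx"
    ],
    # Block installs of anything not explicitly allowed.
    "ExtensionInstallBlocklist": ["*"],
}

path = pathlib.Path("/etc/opt/chrome/policies/managed/content_filter.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(policy, indent=2))
print(f"wrote {path}")
```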

Our next objective is to figure out how to secure our Windows devices and possibly even the Mac devices. While our content filter does a good job as far as securing the content on the Internet, it’s a little bit more difficult to deploy into a Windows device, because users have the option of downloading different Internet browsers. So, content filtering doesn’t really work as well on those.

I have deployed Bitdefender to my laptops, and also to take-home Apple products. That allows me to put in more content filtering, and use that to block people from malicious websites that maybe the content filter didn’t see or was unable to see due to a different browser being used.

In those aspects we definitely are securing our network down further than it ever has been before.

Block and Lock

Perkins: With Bitdefender, one of the things we like is that if we have those devices go off network, we can actually have it turn on the Bitdefender Firewall that allows us to further lock down those machines or protect them if they are in an open environment, like at a hotel or whatever, from possible malicious activity.

And it allows us to block executables. So we can actually go in and say, "No, I don't want you to be able to run this browser, because I can't do anything to protect you, I can't watch what you do, and I can't keep you from doing things you shouldn't do." Those are all very useful tools in a single pane of glass where we can see all of those devices at one time, and monitor and manage them. It saves us a lot of time.

Bunkley: I would follow up on that with a base concept, Dana: we are an everywhere network. We are not only aiming to defend our internal network while you are here, and maybe do some stuff while you are at home; we are literally an externally built network, where our network extends directly down into students' and teachers' homes.

We have gone as far as moving everything we physically can out of this network, right down to our firewall. We are moving our domain controllers external to the network to create, literally, an everywhere network. And so our security focus is not just internal; it is focused on external first, then internal.

Gardner: With security products, what have you been using, what wasn't working, and where do you expect to go next given those constraints?

No free lunch

Perkins: Well, we can tell you that “free” is not always the best option; as a matter of fact, it’s almost never a good option, but we have had to deal with it.

We were previously using an antivirus called Avast, and it’s a great home product. We found out that it has not been the best business-level product. It’s very much marketed to education, and there are some really good things about it. Transferring away from it hasn’t been the easiest because it’s next to impossible to uninstall. So we have been having some problems with that.

We have also tested some other security measures and programs along the way that haven’t been so successful. And we are always in the process of evaluating where we are. We are never okay with status quo. Even if we achieve where we want to be, I don't think any of us will be satisfied, and that’s actually something that a lot of this is built on -- we always want to go that step further. And I know that’s cliché, but I would say for an institution of this size, the reason we are able to do some of the stuff is the staff that has been assembled here is second to none for an educational institution.

So even in the processes that we have identified, which were helter-skelter before we got here, we have some more issues to continue working out, but we won’t be satisfied with where we are even if we achieve the task.

Skipper: One of the things that our office actually hates is just checking the box on a security audit. I mean, we are very vocal to the auditors when they come in. We don’t do things just to satisfy their audit. We actually look at the audit and we look at the intent of the question and if we find merit in it, we are going to go and meet that expectation and then make it better. Audits are general. We are going to exceed and make it a better functioning process than just saying, “Yes, I have purchased an antivirus product,” or “I have purchased x.” To us that’s unacceptable.

Bunkley: Audits are a good thing, and nobody likes to do them because they are time-consuming. But you do them because they are required by law, for our institution anyway. So instead of just having a generic audit that we ignore, we have adopted the concept of the audit as a very useful self-reflection tool. It’s nice to not have the same set of eyes on your work all the time. And instead of taking offense to someone coming in and saying, “You are not doing this well enough,” we have literally changed our internal culture here: audits are not a bad thing; audits are a desired thing.

Gardner: Let’s go around the table and hear how you began your journey into IT and security, and how the transition to an educational environment went.

IT’s the curriculum

Bunkley: I started in the banking industry. Those hours were crazy and the pressure was pretty high. So as soon as I left that after a year, I entered education, and honestly, I entered education because I thought the schedule was really easy and I kind of copped out on that. Come to find out, I am working almost as many hours, but that’s because I have come to love it.

This is my 17th year in education, so I have been in a few districts now. Wholesale change is what I have been hired to do, and that’s what I was hired here in Clay to do. We want to change the culture and make IT part of the instruction, instead of a separate segment of education.

We have to be interwoven into everything, otherwise we are going to be on an island, and the last time I heard, the definition of education is to educate children. So IT can never by itself be a high-functioning department in education. So we have decided instead to go to instruction, and go to professional development, and go to administration, and interweave ourselves.

Gardner: Jon, tell us about your background and how the transition has been for you.

Skipper: I was at active-duty Air Force until 2014 when I retired after 20 years. And then I came into education on the side. I didn’t really expect this job, wasn’t mentally searching for it. I tried it out, and that was three years ago.

It’s been an interesting environment. Education, and especially a small IT department like this one, is one of those interesting places where you can come in and really build up your weak areas. That’s what I actually like about this. If I need to practice my group policy knowledge, I can dive in there and effect that change. Overall this has been a positive change -- totally different from the military, a lot looser as far as a lot of things go, but really interesting.

Gardner: Rick, same question to you, your background and how did the transition go?

Perkins: I spent 21 years in the military; I was Navy. When I retired in 2010, I went to work for a smaller district in education, mainly because they were the first one to offer me a job. In that smaller district -- unlike here, where we have eight people doing operations in this big department; Jeremy understands, from where he came from -- it was pretty much me doing every aspect of it. So you do a little security, you do a little bit of everything, which I enjoyed, because you are your own boss, but you are not your own boss.

You still have people presiding over you and dictating how you are going to work, but I really enjoyed the challenge. Coming from IT security in the military and then coming into education, it's almost a role reversal: we came in and found next to no policies.

I am used to a black-and-white world. So we are trying to interject some of that and some of the security best practices into education. You have to be flexible because education is not the military, so you can’t be that stringent. So that’s a challenge.

Gardner: What are you using to put policies in place and enforce them? How does that work?

Policy plans

Perkins: From a [Microsoft] Active Directory side, we use group policy like most people do, and we try to automate it as much as we can. We are switching over, on the student side, very heavily to Google, which effectively has its own version of Active Directory with group policy. And then I will let Jon speak more to the security side, though we have used various programs, like PDQ for our patch management system, which allows us to push out stuff. We use some logging systems with ManageEngine. And then, as we have said before, we use Bitdefender to push a lot of policy and security out as well, and we've been reevaluating some other stuff.

We also use SolarWinds to monitor our network and we actually manage changes to our network and switching using SolarWinds, but on the actual security side, I will let Jon get more specific for you.

Skipper: When we came in … there was a fear that having too much in policy equated to too much auditing overhead. One of the first things we did was identify what we could lock down, and the easiest one was the filter.

The content filter met such stipulations as making sure adult material is not accessible on the network. We had that down. But it didn't really take into account the dynamic nature of the Internet, where sites pop up every minute or second, and the question of how you maintain coverage of unclassified and uncategorized sites.

So one of the things we did was look at a vendor and ask, okay, does this vendor have a better product for that aspect of it? We got that working, and I think that's been working a lot better. And then we moved down the list: okay, cool, we have content filtering down, now move on to the next piece. A lot of this is actually about finding someone else who is already doing it well, borrowing their work, and making it your own.

We looked into some of the bigger school districts to see how they are doing it -- Chicago, Los Angeles. We both looked at some of their policies, where we could find them. I also found a lot from higher education; some of the universities have policies that are much more along the lines of where we want to be. I think they have it better than what some of the K-12s do.

So we have been going through those, and we are in an active rewrite of our policies right now. We are taking all of those in, looking at them, and trying to figure out which ones work in our environment -- and then making sure we do a really good search and replace.

Gardner: We have talked about people, process and technology. We have heard that you are on a security journey and that it’s long-term and culturally oriented.

Let's look, then, at what you get when you do it right, particularly vis-à-vis education. Do you have any examples of where you have been able to put in the right technology, add some policy and process improvements, and then culturally attune the people? What does that get for you? How do you turn a problem student into a computer scientist at some point? Tell us some examples of when it works and what it gets you.

Positive results

Skipper: When we first got in here, we were a Microsoft district. We had some policies in place to help prevent data loss, and stuff like that.

One of the first things we did was review those policies and activate them, and we started getting some hits. We were surprised at some of the hits that we saw, and at what we saw going out. We already knew we were moving to Google, so we continued the process there.

We researched a lot, and one of the things we discovered is that with just a minor tweak to a user’s procedures, we could introduce email encryption and get them used to using it. With the Gmail solution, we are able to add an extension, and that extension actually looks at their email as it goes out, finds keywords -- or it may be PII -- and automatically encrypts the email, preventing those kinds of breaches. So that’s really been helpful.
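
For readers who want the mechanics, a minimal sketch of that kind of outbound keyword/PII check might look like the following Python. The patterns here are illustrative assumptions, not the actual rules the district's Gmail extension applies.

import re

# Hypothetical patterns -- the real extension's keyword and PII rules are
# not public; these are placeholders for illustration only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "keyword": re.compile(r"\b(confidential|grades|IEP)\b", re.IGNORECASE),
}

def needs_encryption(body: str) -> bool:
    """Return True if the outgoing message appears to contain PII."""
    return any(p.search(body) for p in PII_PATTERNS.values())

draft = "Per your request, the student's SSN is 123-45-6789."
if needs_encryption(draft):
    print("PII detected -- send this message encrypted.")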

As far as taking a student who may be on the wrong path and reeducating them and bringing them back into the fold, Bitdefender has actually helped out on that one.

We had a student a while back who went out to YouTube and did a simple search on how to crash the school network, and he found about five links. He researched those links and found that a batch file of a certain type will crash a school server.

He implemented it and started trying to get that attack out there, and Bitdefender was able to see the batch file, see what it did, and prevent it by quarantining the file. I got that reported very quickly, from the moment he introduced the attack, and it identified the student. We were able to sit down with the administrators and talk to the student about that process, and educate him on the dangers of attacking a school network and the possible repercussions of it.

Gardner: It certainly helps when you can let them know that you are able to track and identify those issues, and then trace them back to an individual. Any other anecdotes about where the technology process and people have come together for a positive result?

Applied IT knowledge for the next generation

Skipper: One of the things that’s really worked well for the school district is what we call Network Academy. It’s taught by one of our local retired master chiefs, and he is actually going in there and teaching students at the high school level how to go as far as earning a Cisco Certified Network Associate (CCNA)-level IT certificate.

If a student comes in and they try hard enough, they will actually figure it out and they can leave when they graduate with a CCNA, which is pretty awesome. A high school student can walk away with a pretty major industry certification.

We like to try and grab these kids as soon as they leave high school, or even before they leave high school, and start introducing them to our network. They may have a different viewpoint on how to do something that’s revolutionary to us.

But we like having that aspect of it. We can educate those kids who are coming in and getting their industry certifications, and we are able to utilize them before they move on to a college or another job that pays more than we do.

Bunkley: Charlie Thompson leads this program that Jon is speaking of, and actually over half of our team has been through the program. We didn’t create it, we have just taken advantage of the opportunity. We even tailor the classes to some of the specific things that we need. We have effectively created our own IT hiring pipeline out of this program.

Gardner: Next let’s take a look to the future. Where do you see things going, such as more use of cloud services, interest in unified consoles and controls from the cloud as APIs come into play more for your overall IT management? Encryption? Where do you take it from here?

Holistic solutions in the cloud

Bunkley: Those are some of the areas we are focusing on heavily as we build out that “everywhere network.” The unified platform for management is going to be a big deal to us. It is a big deal to us already. Encryption is something we take very seriously, because we have a team of eight protecting the data of about 42,000 users.

Consider the perfect cyber crime: reaching down into a 7th- or 8th-grader’s records, stealing all of their personal information, and using that kid’s identity. That kid won’t even know that their identity has been stolen.

We consider that a very serious charge of ours to take on. So we will continue to improve our protection of the students’ and teachers’ PII -- even if it sometimes means protecting them from themselves. We take it very seriously.

As we move to the cloud, that unified management platform leads to a more unified security platform. As the operating systems continue to mature, they seem to be going different ways. And what’s good for Mac is not always good for Chrome, which is not always good for Windows. But as we move forward with our projects, we bring everything back to that central point -- can the three be operated from a single point of connection, so that we can save money moving forward? Just because it’s a cool technology and we want to do it, that doesn't mean it's the right thing for us.

Sometimes we have to choose an option that we don’t necessarily like as much, but pick it because it is better for the whole. As we continue to move forward, everything will be focused on that centralization. We can remain a small and flexible department to continue making sure that we are able to provide the services needed internally as well as protect our users.

Skipper: I think Jeremy hit it pretty solid on that one. As we integrate more with the cloud services -- Google, etc. -- we are utilizing those APIs, and we are leading the vendors we use into new areas. Lightspeed, for instance, is integrating more and more with Google and utilizing their API for content filtering -- even to the point of mobile device management (MDM) that is more integrated into the Google and Apple platforms -- to make sure that students are well-protected and that we have all the tools available that they need at any given time.

We are really leaning heavily on more cloud services, and also the interoperability between APIs and vendors.

Perkins: Public education is changing more to the realm of college education where the classroom is not a classroom -- a classroom is anywhere in the world. We are tasked with supporting them and protecting them no matter where they are located. We have to take care of our customers either way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

You may also be interested in:

Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that's where we are seeing a lot of traction and focus.

The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public -- and managed services -- to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.

Won

We have these conversations with many of our customers about how to determine the right placement for these different workloads -- taking into account things like security, performance, compliance, and cost -- and we help them evaluate this hybrid IT environment that they now need to manage.

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps -- beginning in the cloud, developing in the cloud, deploying in the cloud -- but there's also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it's no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

Antao

When you look at it from that viewpoint, a lot of our clients look at the public cloud as more than, “Is the app suitable for the public cloud?” They are also seeking certain cost advantages -- the variability in cost structure that they can take advantage of. And that’s where we see them looking at the public cloud as more than just a question of application suitability.

Public and/or private power

Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure them, they don’t want to rewrite them, because they are already an investment; they don’t want to spend a lot of time refactoring them.

If you look at these traditional applications, a lot of times when they are dealing with data – especially if they are dealing with sensitive data -- those are better placed in a private cloud.

Antao: One of the great things about Microsoft Azure Stack is that it gives the data center that public cloud experience -- developers have a similar experience to what they would have in a public cloud. The only difference is that you are now controlling the costs as well. So that's another big advantage we see.

Won: Yeah, absolutely, it's giving the developers the experience of a public cloud, but from the IT standpoint of also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud -- depending on its requirements -- is incredibly powerful. There's no other environment out there that provides that API-compatibility between private and public cloud deployments like Azure Stack does. 

Gardner: Clearly Microsoft is interested in recognizing that skill sets, platform affinity, and processes are all really important. If they are able to provide a private cloud and public cloud experience that’s common to the IT operators that are used to using Microsoft platforms and frameworks -- that's a boon. It's also important for enterprises to be able to continue with the skills they have.

Ro, is such a commonality of skills and processes not top of mind for many organizations? 

Antao: Absolutely! There is always the risk, when you have different environments, of that “swivel chair” approach. You have a certain set of skills and processes for your private data center. And you now have a certain set of skills and processes to manage your public cloud footprint.

One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization -- because you now can have similar stacks on both ends.

Won: That's a great point. We know that the biggest challenge to adopting new concepts is not the technology; it's really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using Azure public cloud.

Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It's been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

Hybrid: a tectonic IT shift

Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of the hybrid delivery mode, it's a tectonic shift in IT's operating model, in their architecture, their culture, in their roles and responsibilities – in the fundamental value proposition of IT to the enterprise.

When we partner with HPE in helping organizations drive through this transformation, we work with HPE in rethinking the operating model, in understanding the new kinds of roles and skills, of being able to apply these changes in the context of the business drivers that are leading it. That's one of the typical ways that we work with HPE in this space.

Won: It's a great complement. HPE understands the technology and the infrastructure; combine that with the business-process knowledge and the higher-level strategic thinking that PwC has, and it's a great partnership.

Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It's not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

Won: One of the things that we are doing is working with Microsoft on their partnerships so that we can look at all these companies that have their offerings running on Azure public cloud and ensuring that those are all available and supported in Azure Stack, as well as running in the data center.

We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, and those products are available on Azure Stack -- as well as fully supported on Azure Stack running on HPE gear.

Gardner: They might not be concerned about the hardware, but they are concerned about the total value -- and the total solution. If the hardware players aren't collaborating well with the service providers and with the cloud providers -- then that's not going to work.

Quick collaboration is key

Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. So it's the necessary evil; it’s super important, but you'd just as soon not have to do it.

Gardner: I just don’t know what to take to the dry cleaner or not, right?

Won: Yeah, there you go!

Antao: From a consulting standpoint, clients no longer have the appetite for these five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways that we are working on the ecosystem-level solution -- again, much like the deep and longstanding relationship we have had with HPE -- is that we have also been working with Microsoft in the same context.

And in a three-way fashion, we have focused on defining accelerators for deploying these solutions -- codifying a lot of our experience, the lessons learned, and a deep understanding of both the public and the private stack to accelerate value for our customers, because that’s what they expect today.

Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don't want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

Low risk, all reward

Gardner: So speed-to-value, super important; common solution cooperation and collaboration synergy among the partners, super important. But another part of this is doing it at low risk, because no one wants to be in a transition from a public to private or a full hybrid spectrum -- and then suffer performance issues, lost data, with end customers not happy.

PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models, in how you address certain risks, in the emergence of new risks, and in new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate that shift -- that paradigm shift -- that they have to make.

In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

Won: It’s absolutely critical -- this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that's incredibly helpful to our customer base.

Antao: So often when we think of governance, it’s more in terms of the steady state and the runtime. But there's this whole journey between where we are today and that hybrid IT state -- and having the governing mechanisms around it -- so that businesses can get there without being exposed to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern the process of getting to that hybrid IT state? That’s where we also spend a lot of time as we help clients through this transformation.

Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack, the people, process and technology that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multiple cloud continuum as a core competency.

Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

Managed multi-cloud continuum

Antao: Yes. And I think in many cases that’s inevitable. When you look at most organizations today, generally speaking, they have at least two public cloud providers that they use. They consume several Software-as-a-Service (SaaS) applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, among and between on-premises environments and the public cloud. That's where we see a lot of them evolve their management practices, their processes, and their talent -- to be able to abstract these different pools and focus on the business. That's where we see a lot of the talent development.

Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

So as a way to bring these together, there is huge value to the customer in bringing together, for example, Azure Stack and Azure [public cloud]. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and more effectively. We need to engage with them not only from an IT standpoint, but also from the developer standpoint. They can use those common services to develop an application and deploy it in multiple places in the same way.

Antao: What's making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really the capability to deploy in one public cloud versus another.

A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds given their unique strengths and weaknesses?

There might be portions of a business process that, to your point earlier, Ken, are highly confidential, where you are dealing with a lot of compliance requirements; you may want to consume those from an internal private cloud. There are other parts where you are looking for immense scale, to deal with the peaks when that particular business process gets hit hard -- that's where the public cloud has a track record. In a third case, it might be enterprise-grade workloads.

So that’s where we are seeing multi-cloud evolve -- to where one business process could draw on multiple sources -- and the question becomes how an IT organization manages that in a seamless way.

Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who's banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

Nimble, quick and cost-efficient

Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

From a cost standpoint, there is actually a great example from a large-scale migration to the public cloud that we are currently doing. What the IT organization found was that they had consumed close to 70 percent of their migration budget for only 30 percent of the progress they had made.

A large part of that was because the minute you have your workloads sitting on a public cloud -- whether it is a development workload, or you are still working your way through it, and technically it’s not yet providing value -- the clock is ticking. With a hybrid environment, you can do a lot of that development, get it almost production-ready, and then, when the time is right to drive value from that application, move it to the public cloud. Those are huge cost savings right there.

Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.

Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months -- or sometimes even weeks -- rather than the two years that it used to take.

It's that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we're seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

One of the things that we have been doing at HPE, through our Flexible Capacity program, is enabling customers who are getting hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align cost to usage -- taking that whole concept of pay-as-you-go that we see in the public cloud and bringing it into a private cloud environment.
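
To make the cost-alignment idea concrete, here is a back-of-the-envelope Python sketch. All figures are invented for illustration; Flexible Capacity's actual pricing terms are not modeled here.

# Compare a hypothetical flat cost of owned capacity against metered,
# pay-as-you-go pricing across several months of varying usage.
FIXED_MONTHLY = 10_000.0                 # assumed flat cost, $
RATE_PER_UNIT = 2.5                      # assumed metered price per unit, $
monthly_usage = [1500, 2200, 4100, 2800, 1900, 3600]   # units consumed

for month, units in enumerate(monthly_usage, start=1):
    metered = units * RATE_PER_UNIT
    cheaper = "metered" if metered < FIXED_MONTHLY else "fixed"
    print(f"month {month}: metered ${metered:,.0f} vs fixed ${FIXED_MONTHLY:,.0f} -> {cheaper}")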

Antao: That’s a great point. From a cost standpoint, there is an efficiency discussion. But we are also seeing in today's world that we depend on edge computing a lot more. I was talking to the CIO of a large park the other day, and his comment to me was, yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services -- he’s got thousands of visitors and guests in his park, and given the amount of dependency on technology, he cannot afford that kind of latency.

And so part of it is also the revenue impact discussion, and using public cloud in a way that allows you to manage some of those risks in terms of that analytical power and that computing power you need closer to the edge -- closer to your internal systems.

Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate, how they use and take advantage of a hybrid continuum will give them competitive advantages and give them a one-up in terms of skills.

It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that's where an ecosystem solutions approach can be a huge benefit.

Let's look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

Look at clouds from both sides now

Won: You will see organizations shifting from a world of using multiple clouds, with different applications or services on each cloud, to an environment where services are based on multiple clouds. With the new cloud-native applications, you'll be running different aspects of those services in different locations, based on the requirements of each particular microservice.

So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

To that point, where we see the market trends is in the hybrid IT space, the adoption of that continuum. One of the big pressures IT organizations face is how they are going to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see in the next 12 months a lot more pressure in that space.

Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that might not yet be deep into this prepare themselves? Are there operations, culture, and skills considerations? How can they put themselves in a good position to take advantage of this when they do take the plunge?

Plan to succeed with IT on board

Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

We also have innovation centers that we've built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

Gardner: Ro, how to get ready when you want to take the plunge and make the best and most of it?

Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on this journey. What we have seen work, especially in the early stages, is running pilot projects -- involving the developers, the infrastructure architects, and the operations folks in pilot workloads -- and learning how to manage it going forward in this new model.

You want to create that from a top-down perspective, being able to tie in to where this adds the most value to the business. From a grassroots effort, you need to also create champions within the trenches that are going to be able to manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future.

We’ll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define the best of the refinery of the future vision is Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren’t the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we spent the last 20 years making capital investments in our infrastructure, in our physical plant, and in the last four years we have put together a roadmap for our IT needs.

Through our introduction to HPE, we have entered into a partnership that is not just a client-customer relationship. It’s more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So, we are on the voyage of discovery together -- and we are sailing out to sea. It’s going great.

Gardner: JR, it’s always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should come next in the technology. How is that back and forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other's “swim lanes.” I’m not trying to be a chemical company. I’m trying to understand what they do and how I can help them.

And they’re not trying to become an IT or IoT company. Their job is to make chemicals; our job is to figure out the IT. We’re seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge, this is transformational. As Doug said, they’ve made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.

Yet IoT is really new for them. How to leverage that? They have said, “You know what? We squeezed as much as we can out of OT technology, out of our people, and our processes. Now, let’s see what else is out there.”

And through introductions to us and our ecosystem partners, we’ve been able to show them how we can help squeeze even more out of those OT assets using this new technology. So, it’s really exciting.

Gardner: Doug, let’s level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You’re in the process of processing. You’re making one material into another and you’re doing that in bulk, and you need to do it on a just-in-time basis, given the demands of supply chains these days.

You need to make your business processes and your IT network mesh, to reach every corner. How does a wireless network become an enabler for your requirements?

The heart of IT 

Smith: In a large plant facility, we have many different pieces of equipment. One piece of equipment is a pump -- by analogy, the heart of the process facility of the plant.

Smith

To your question regarding the wireless network: if we can sensor a pump and tie it into a mesh network, there are incredible cost savings for us. Physically wiring a pump runs anywhere from $3,000 to $5,000 per pump. So, we see a savings in that.

Being able to have the information wirelessly, right away, gives us knowledge immediately that we wouldn’t have otherwise. We have workers and millwrights at the plant who physically go out and inspect every single pump in our plant, and we have 133 pumps. If we can utilize our sensors through the wireless network, our millwrights can concentrate on the pumps that they know are having problems.
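
As a rough illustration of how those readings could drive the millwrights' worklist, here is a minimal Python sketch. The vibration figures and thresholds are invented for the example, not Texmark's actual values.

# Rank pumps for inspection by how far their latest vibration reading
# runs above an assumed baseline.
BASELINE_MM_S = 2.0    # assumed typical vibration velocity, mm/s
ALERT_RATIO = 1.5      # flag pumps running 50 percent above baseline

latest_readings = {"P-012": 1.9, "P-047": 3.4, "P-101": 2.1, "P-133": 5.2}

worklist = sorted(
    (pump for pump, v in latest_readings.items() if v > BASELINE_MM_S * ALERT_RATIO),
    key=latest_readings.get,
    reverse=True,
)
print("Inspect first:", worklist)   # ['P-133', 'P-047']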

Gardner: You’re also able to track those individuals, those workers, so if there’s a need to communicate, to locate them, or to make sure that they are hearing the policy, that’s another big part of IoT and people coming together.

Safety is good business

Smith: The tracking of workers is more of a safety issue -- and safety is critical, absolutely critical in a petrochemical facility. We must account for all our people and know where they are in the event of any type of emergency situation.

Gardner: We have the sensors, we can link things up, we can begin to analyze devices and bring that data analytics to the edge, perhaps within a mini data center facility, something that’s ruggedized and tough and able to handle a plant environment.

Given this scenario, JR, what sorts of efficiencies are organizations like Texmark seeing? I know in some businesses, they talk about double digit increases, but in a mature industry, how does this all translate into dollars?

Fuller: We talk about the power of one percent. A one percent improvement at one of the major companies is multiple billions of dollars saved. A one percent change is huge. And, yes, at Texmark we’re able to see larger, percentage-wise efficiencies, because they’re actually very nimble.

It’s hard to turn a big titanic ship, but the smaller boat is actually much better at it. We’re able to do things at Texmark that we are not able to do at other places, but we’re then able to create that blueprint of how they do it. 

You’re absolutely right, doing edge computing, with our HPE Edgeline products, and gathering the micro-data from the extra compute power we have installed, provides a lot of opportunities for us to go into the predictive part of this. It’s really where you see the new efficiencies.

Recently I was with the engineers out there, and we’re walking through the facility, and they’re showing us all the equipment that we’re looking at sensoring up, and adding all these analytics. I noticed something on one of the pumps. I’ve been around pumps, I know pumps very well.

I saw this thing, and I said, “What is that?”

“So that’s a filter,” they said.

I said, “What happens if the filter gets clogged?”

“It shuts down the whole pump,” they said.

“What happens if you lose this pump?” I asked.

“We lose the whole chemical process,” they explained.

“Okay, are there sensors on this filter?”

“No, there are only sensors on the pump,” they said.

There weren’t any sensors on the filter. Now, that’s just something that we haven’t thought of, right? But again, I’m not a chemical guy. So I can ask questions that maybe they didn’t ask before.

So I said, “How do you solve this problem today?”

“Well, we have a scheduled maintenance plan,” they said.

They don’t have a problem, but based on the scheduled maintenance plan, that filter gets changed whether it needs to be or not. It just gets changed on a regular basis. Using IoT technology, we can tell them exactly when to change that filter. IoT therefore saves the cost of the filter and the cost of the manpower -- and those types of potential efficiencies and savings are just one small example of the things that we’re trying to accomplish.
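
A minimal sketch of what that condition-based change-out could look like, assuming a differential-pressure sensor across the filter (which, as noted above, did not yet exist) and placeholder thresholds rather than vendor specifications:

# Estimate remaining filter life from the pressure drop across it.
CLEAN_DP_PSI = 2.0      # assumed pressure drop across a fresh filter
CHANGE_AT_PSI = 8.0     # assumed change-out point before clogging

def filter_status(dp_psi: float) -> str:
    if dp_psi >= CHANGE_AT_PSI:
        return "CHANGE NOW -- filter near clogging, pump at risk"
    used = (dp_psi - CLEAN_DP_PSI) / (CHANGE_AT_PSI - CLEAN_DP_PSI) * 100
    return f"OK -- roughly {used:.0f}% of useful life consumed"

for reading in (2.1, 5.0, 8.3):
    print(reading, "->", filter_status(reading))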

Continuous functionality

Smith: It points to the uniqueness of the people-level relationship between the HPE team, our partners, and the Texmark team. We are able to have these conversations to identify things that we haven’t even thought of before. I could give you 25 examples of things just like this, where we say, “Oh, wow, I hadn’t thought about that.” And yet it makes people safer and it all becomes more efficient.

Gardner: You don’t know until you have that network in place and the data analytics to utilize what the potential use-cases can be. The name of the game is utilization efficiency, but also continuous operations.

How do you increase your likelihood or reduce the risk of disruption and enhance your continuous operations using these analytics?

Smith: To answer, I’m going to use the example of toll processing. Toll processing is when we would have a customer come to us and ask us to run a process on the equipment that we have at Texmark.

Normally, they would give us a recipe, and we would process a material. We take samples throughout the process, the production, and deliver a finished product to them. With this new level of analytics, with the sensoring of all these components in the refinery of the future vision, we can provide a value-add to the customers by giving them more data than they could ever want. We can document and verify the manufacture and production of the particular chemical that we’re toll processing for them.

Fuller: To add to that, as part of the process, sometimes you have to do multiple runs when you're tolling, because of your feedstock and the way it works.

By using advanced analytics, and some of the predictive benefits of having all of that data available, we're looking to gain efficiencies that cut down the number of additional runs needed. If you take a process that would have taken three runs and knock that down to two runs, that's roughly a one-third decrease in total cost and expense. It also allows them to produce more product, and to get it out to people a lot faster.

Smith: Exactly. Exactly!

Gardner: Of course, the more insight that you can obtain from a pump, and the more resulting data analysis, that gives you insight into the larger processes. You can extend that data and information back into your supply chain. So there's no guesswork. There's no gap. You have complete visibility -- and that's a big plus when it comes to reducing risk in any large, complex, multi-supplier undertaking.

Beyond data gathering, data sharing

Smith: It goes back to relationships at Texmark. We have relationships with our neighbors that are unique in the industry, and so we would be able to share the data that we have.

Fuller: With suppliers.

Smith: Exactly, with suppliers and vendors. It's transformational.

Gardner: So you're extending a common, standard, industry-accepted platform approach locally into an extended process benefit. And you can share that because you are using common, IT-industry-wide infrastructure from HPE.

Fuller: And that's very important. We have a three-phase project, and we've just finished the first two phases. Phase 1 was to put ubiquitous WiFi infrastructure in there, with the location-based services, and all of the things to enable that. The second phase was to upgrade the compute infrastructure with our Edgeline compute and put in our HPE Micro Datacenter in there. So now they have some very robust compute.

With that infrastructure in place, it now allows us to do that third phase, where we're bringing in additional IoT projects. We will create a data infrastructure with data storage, and application programming interfaces (APIs), and things like that. That will allow us to bring in a specialty video analytic capability that will overlay on top of the physical and logical infrastructure. And it makes it so much easier to integrate all that.

Gardner: You get a chance to customize the apps much better when you have a standard IT architecture underneath that, right?

Trailblazing standards for a new workforce

Smith: Well, exactly. What you are saying, Dana -- and it gives me chills when I start thinking about what we're doing at Texmark within our industry -- is that we are setting standards, blazing a new trail. When we talk to our customers and our suppliers and we tell them about this refinery of the future project that we're initiating, all other business goes out the window. They want to know more about what we're doing with the IoT -- and that's incredibly encouraging.

Gardner: I imagine that there are competitive advantages when you can get out in front and you're blazing that trail. If you have the experience, the skills of understanding how to leverage an IoT environment, and an edge computing capability, then you're going to continue to be a step ahead of the competition on many levels: efficiency, safety, ability to customize, and supply chain visibility.

Smith: It surely allows our Texmark team to do their jobs better. I use the example of the millwrights going out and inspecting pumps; they do that every day, and they do it very well. If we can give them the tools so that they can focus on what they do best over a lifetime of working with pumps, and only work on the pumps that they need to, that's a great example.

I am extremely excited about the opportunities at the refinery of the future to bring new workers into the petrochemical industry. We have a large number of people within our industry who are retiring, and they are taking intellectual capital with them. To be able to show young people that we are using advanced technology in new and exciting ways is a real draw, and it will bring more young people into our industry.

Gardner: By empowering that facilities edge and standardizing IT around it, that also gives us an opportunity to think about the other part of this spectrum -- and that's the cloud. There are cloud services and larger data sets that could be brought to bear.

How does the linking of the edge to the cloud have a benefit?

Cloud watching

Fuller: Texmark Chemicals has one location, and they service the world from that location as a global leader in dicyclopentadiene (DCPD) production. So the cloud doesn't have the same impact as it would for maybe one of the other big oil or big petrochemical companies. But there are ways that we're going to use the cloud at Texmark and rally around it for safety and security.

Utilizing our location-based services and our compute, if there is an emergency -- whether it's at Texmark or at a neighbor -- we can use cloud-based information like weather, humidity, and wind direction, all of which are constantly changing, to provide better-directed responses. That's one way we would be using cloud at Texmark.

When we start talking about the larger industry -- and connecting multiple refineries together or upstream, downstream and midstream kinds of assets together with a petrochemical company -- cloud becomes critical. And you have to have hybrid infrastructure support.

You don't want to send all your video to the cloud to get analyzed. You want to do that at the edge. You don't want to send all of your vibration data to the cloud, you want to do that at the edge. But, yes, you do want to know when a pump fails, or when something happens so you can educate and train and learn and share that information and institutional knowledge throughout the rest of the organization.
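
A simplified Python sketch of that edge-versus-cloud split follows. Here send_to_cloud is a hypothetical stand-in for whatever uplink a site actually uses, and the peak-versus-median rule is an assumed heuristic, not an industry standard.

import statistics

def send_to_cloud(event: dict) -> None:
    print("uplink:", event)   # placeholder for a real publish call

def analyze_at_edge(pump_id: str, samples: list) -> None:
    # High-rate samples are analyzed locally; only significant events
    # are forwarded upstream, and the rest stay at the edge.
    median = statistics.median(samples)
    peak = max(samples)
    if peak > 3 * median:
        send_to_cloud({"pump": pump_id, "peak": peak, "median": median})

analyze_at_edge("P-047", [2.0, 2.1, 1.9, 9.5, 2.0])   # fires: 9.5 > 3 * 2.0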

Gardner: Before we sign off, let’s take a quick look into the crystal ball. Refinery of the future, five years from now, Doug, where do you see this going?

Smith: The crystal ball is often kind of foggy, but it’s fun to look into it. I had mentioned earlier opportunities for education of a new workforce. Certainly, I am focused on the solutions that IoT brings to efficiencies, safety, and profitability of Texmark as a company. But I am definitely interested in giving people opportunities to find a job to work in a good industry that can be a career.

Gardner: JR, I know HPE has a lot going on with edge computing, making these data centers more efficient, more capable, and more rugged. Where do you see the potential here for IoT capability in refineries of the future?

Future forecast: safe, efficient edge

Fuller: You're going to see the pace pick up. I have to give kudos to Doug. He is a visionary. Whether he admits that or not, he is actually showing an industry that has been around for many years how to do this and be successful at it. So that's incredible. In that crystal ball look, that five-year look, he's going to be recognized as someone who helped really transform this industry from old to new economy.

As far as edge computing goes, our converged Edgeline systems are our first generation; we created this market space for hardened, converged edge systems. Now we’re working on generation 2. We're going to get faster, smaller, and cheaper, and become more ubiquitous. I see our IoT infrastructure having a dramatic impact on what we can actually accomplish, and on the workforce, in five years. Work will be more virtual and augmented, with all of these capabilities. It’s going to be a lot safer for people, and it’s going to be a lot more efficient.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Get ready for the post-cloud world

Just when cloud computing seems inevitable as the dominant force in IT, it’s time to move on because we’re not quite at the end-state of digital transformation. Far from it.

Now's the time to prepare for the post-cloud world.

It’s not that cloud computing is going away. It’s that we need to be ready to make the best of IT productivity once cloud, in its many forms, becomes so pervasive as to be mundane -- the place where all great IT innovations must go.

Read the rest ...


You may also be interested in:

Sumo Logic CEO on how modern apps benefit from 'continuous intelligence' and DevOps insights

The next BriefingsDirect applications health monitoring interview explores how a new breed of continuous intelligence emerges by gaining data from systems infrastructure logs -- either on-premises or in the cloud -- and then cross-referencing that with intrinsic business metrics information.

We’ll now explore how these new levels of insight and intelligence into what really goes on underneath the covers of modern applications help ensure that apps are built, deployed, and operated properly.

Today, more than ever, how a company's applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work, too.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’re joined by an executive from Sumo Logic to learn why modern applications are different, what's needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

To describe how to build and maintain the best applications, welcome Ramin Sayar, President and CEO of Sumo Logic. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible applications -- delivered through rich web and mobile interfaces, or through traditional mechanisms served up on laptops, other access points, and point-of-sale systems -- is driving the next wave of technology architecture supporting these apps.

These modern apps are built around a modern stack: they use new platform services created by public-cloud providers and new development processes such as agile and continuous delivery, and they’re expected to constantly learn and iterate so they can improve not only the user experience -- but the business outcomes.

Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

User experience

Sayar: You’re spot on. The obvious benefit of always-on is the rich user interaction and user experience. So, while a lot of the conversation around modern apps tends to focus on the technology and the components, there are actually fundamental challenges in the process of how these new apps are built and managed on an ongoing basis, and in what implications that has for security. A lot of times, those two aspects are left out when people discuss modern apps.


Gardner: That's right. We’re talking so much about DevOps these days, but in the same breath, we’re talking about SecOps -- security and operations. They’re really joined at the hip.

Sayar: Yes, they’re starting to blend. The technology decisions around public cloud, Docker and containers, microservices, and APIs are no longer led only by developers or DevOps teams. Those teams are heavily influenced by, and partnering with, SecOps and security teams and CISOs, because the data is distributed. Now there needs to be better visibility and instrumentation -- not just for the access logs, but for the business process and a holistic view of the service and service-level agreements (SLAs).

Gardner: What’s different from say 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application that would be drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach for iterating and release. Often, those are run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.


We see one of two things happening. The first is a need to replace the front end of those apps; we refer to those as brownfield. They start to change from waterfall to agile, and they take on more of an N-tier feel. It's really more around the front end -- web properties are a good example. Companies start to componentize pieces of their apps, either on VMs or in private clouds, and that's often good for existing types of workloads.

The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

Often it's centered on new technology: a stack built entirely on microservices, an API-first development methodology, new container technologies like Docker, Mesosphere, and CoreOS, and public-cloud infrastructure and services from Amazon Web Services (AWS) or Microsoft Azure. As a result, the technology decisions made there require different skill sets, and teams have to come together to deliver on the DevOps and SecOps processes we just mentioned.

Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren't important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can't tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it's all apps, right?

It's a data problem

Sayar: Absolutely. Regardless of whether you're building these apps for consumers, for mid-market and small-to-medium businesses (SMBs), or for the enterprise, what we see from our customers is that they all share a similar challenge: dealing with the volume, the velocity, and the variety of the data around these new architectures, and getting their arms around it. At the end of the day, it becomes a data problem, not just a process or technology problem.

Gardner: Let's talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility -- not only from code and repositories like GitHub or Jenkins, but all the way through the components of your code, to API calls, to what your deployment tools are doing in terms of provisioning and performance.

We spend a lot of effort integrating with the various DevOps tool-chain vendors, as well as providing a holistic view of what users are doing in terms of access to those applications and services. We know who checked in which code, on which branch, and which build created potential performance, latency, or outage issues. So we give you that 360-degree view by providing a full-stack set of capabilities.
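One common way to get that code-to-incident traceability is to stamp every log line with the build that produced it, then group errors by build. A hedged sketch of such a query, written here as a Python constant -- the field name, token format, and source category are hypothetical, not Sumo Logic defaults:

```python
# Hypothetical query: assumes each log line carries a "build=<id>" token
ERRORS_BY_BUILD_QUERY = (
    '_sourceCategory=prod/app error '
    '| parse "build=*" as build_id '
    '| count by build_id '
    '| sort by _count'
)
```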

Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

Sayar: We’ve invested quite a bit of our intellectual property (IP) not only in integrations with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of an architecture that is truly cloud-native, multitenant, fast, and simple.

So, unlike other offerings out there, Sumo Logic's architecture is truly cloud-native and multitenant, and it's centered on the principle of near-real-time data streaming.

As the data comes in, our data-streaming engine allows developers, IT ops administrators, sysadmins, and security professionals to each have their own view -- coarse-grained or fine-grained, via the role-based access controls in the system -- and to leverage the same data for different purposes, instead of waiting for someone to create a dashboard or a view, or waiting for access to a system when something breaks.

Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You'd get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what's the big step to going to streaming? Why is that an essential part of making this so actionable?

Sayar: It’s driven by the architectures and the applications. No longer is it acceptable to look at samples of data spanning 5 or 15 minutes. You need real-time data, with sub-second, millisecond latency, to understand causality -- to know when you’re facing a potential threat, risk, or security concern, versus code-quality issues that are causing performance outages and, therefore, business impact.

The old way -- deploy code, then hope and pray you’d find out about a problem when a user complained -- is no longer acceptable. You lose business and credibility, and at the end of the day, there’s no real way to hold developers, operations folks, or security folks accountable with the legacy tools and process approach.

Center of the business

Those expectations have changed because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way to analyze the metadata coming in, with straightforward access through APIs or through our user interfaces, based on your role, so that issues can be addressed proactively.

Conceptually, there’s this notion of wartime and peacetime in how we build and deliver our service. We look at the problems that users -- Sumo Logic customers, and ourselves internally -- run into, and then we break that down into a lifecycle centered on this concept of peacetime and wartime.

Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

Then, there's the notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or a security threat has been discovered, the real-time data-streaming engine notifies people instantly: you're getting PagerDuty alerts, you're getting Slack notifications. It's no longer the traditional helpdesk notification process, with people getting on bridge lines.
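As a rough illustration of that wartime fan-out, the sketch below pushes a single alert to both Slack (via an incoming webhook) and PagerDuty (via its Events API v2). The webhook URL and routing key are placeholders you would supply; this is a minimal sketch, not Sumo Logic's actual integration code:

```python
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
PAGERDUTY_KEY = "YOUR_ROUTING_KEY"  # placeholder Events API v2 routing key

def wartime_alert(summary, severity="critical"):
    # Slack incoming webhook: a simple JSON text payload
    requests.post(SLACK_WEBHOOK, json={"text": summary}, timeout=5)
    # PagerDuty Events API v2: trigger an incident
    requests.post("https://events.pagerduty.com/v2/enqueue", json={
        "routing_key": PAGERDUTY_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": summary,
            "source": "log-streaming-pipeline",
            "severity": severity,
        },
    }, timeout=5)

wartime_alert("p99 latency on checkout exceeded SLA after latest deploy")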

Because the teams are often distributed, with shared responsibility and ownership for identifying an issue in wartime, we're enabling new ways of collaborating by leveraging integrations with tools like Slack and PagerDuty through the real-time platform we've built.

So, the always-on expectations that customers and consumers have of applications have now been extended to development and security resources, which must be always on and available to address problems proactively.

Gardner: It sounds like we're able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

Sayar: That’s exactly right. The essence of what we provide is a single platform that leverages machine logs and time-series data, eliminating a lot of the complexity that exists in traditional processes and tools. No longer do you need to do “swivel-chair” correlation across multiple UIs, tools, and products. No longer do you have to wait for the helpdesk person to notify you. We're trying to provide instant knowledge and collaboration, through the real-time data-streaming platform we've built, to bring teams together rather than divide them.

Gardner: That sounds terrific if I'm the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business process, even at a C-table level? What is it about continuous intelligence that cannot only help apps run on time and well, but help my business run on time and well?

Need for agility

Sayar: We talked a little bit about the whole need for agility. From a business point of view, the line-of-business folks associated with any of these greenfield projects or apps want to increase the cycle times of application delivery. They want measurable results for application and web changes, so they can see whether their web properties have increased -- or decreased -- user satisfaction and, at the end of the day, business revenue.

So, we're able to help the developers, the DevOps teams and, ultimately, the line of business deliver on the speed and agility needs of these new modes of delivery. We do that through a single, comprehensive platform, as I mentioned.

At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

At the same time we're helping with agility, we're also helping with prevention. A lot of our customers start with the security teams, which are looking for a new way to inspect this volume of incoming data -- not only at the infrastructure level or the end-user level, but at the application and code level. What we're really able to do, as I mentioned earlier, is provide a unifying approach that brings these disparate teams together.


Gardner: And yet individuals can extract the intelligence view that best suits what their needs are in that moment.

Sayar: Yes. And ultimately we're able to improve customer experience, increase revenue-generating services, increase the efficiency and agility of delivering quality code -- and therefore quality applications -- and, lastly, improve collaboration and communication.

Gardner: I’d really like to hear some real-world examples of how this works, but before we go there, I’m still interested in the how. As to this idea of machine learning, we're hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What is it that you're using machine learning for when it comes to this volume and variety, in understanding apps and making that usable in the context of a business metric of some kind?

Sayar: This is an interesting topic, because of all the noise in the market around big data, machine learning, and advanced analytics. Since Sumo Logic was started six years ago, we have built this platform to ensure not only that we have best-in-class security and encryption capabilities, but that it is centered on the fundamental purpose of democratizing analytics -- making it simpler to allow more than just a subset of folks to get access to information for their roles and responsibilities, whether they're on security, ops, or development teams.

To answer your question a little more succinctly, our platform is predicated on multiple levels of machine-learning and analytics capabilities. Starting at the lowest level, something we refer to as LogReduce is meant to separate the signal from the noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they're not searching the irrelevant data. They're searching the relevant, infrequently occurring data that's not really known, versus what’s constantly occurring in their environment.

In doing so, it’s not just about mean time to identification, but it’s also how quickly we're able to respond and repair. We've seen customers using LogReduce reduce the mean time to resolution by upwards of 50 percent.
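For readers who want to see the shape of this, here is a hedged sketch that submits a query ending in the LogReduce operator through Sumo Logic's Search Job API. The credentials and source category are placeholders, the endpoint varies by deployment region, and polling the job for results is omitted for brevity:

```python
import requests

# Endpoint varies by deployment (us1, us2, eu, ...); this one is illustrative
API = "https://api.sumologic.com/api/v1/search/jobs"
AUTH = ("ACCESS_ID", "ACCESS_KEY")  # placeholder API credentials

# LogReduce clusters similar messages so rare, unfamiliar patterns stand out
resp = requests.post(API, auth=AUTH, json={
    "query": "_sourceCategory=prod/payments error | logreduce",
    "from": "2017-06-01T00:00:00",
    "to": "2017-06-01T01:00:00",
    "timeZone": "UTC",
}, timeout=10)
print(resp.json())  # returns a search-job id to poll for results
```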

Predictive capabilities

Our core analytics, at the lowest level, helps deliver operational metrics and value. Then, we start to become less reactive. Once you've had an outage or a security threat, you start to leverage some of the other, predictive capabilities in our stack.

For example, I mentioned this concept of peacetime and wartime. In peacetime, you're looking at changes over time as you deploy code and applications to various geographies and locations. A lot of times, developers and ops folks who use Sumo want to use the LogCompare, outlier, or predict operators in our machine-learning capabilities to compare branches of code, and to relate code quality to the performance and availability of the service and app.

We allow them, with the click of a button, to compare a window of events and metrics -- for the last hour, day, week, or month -- against other time slices of data, and to show how much better or worse it is. This is before deploying to production. In production, we allow them to use predictive analytics to look at anomalies and abnormal behavior, and to get more proactive.
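In query terms, the comparisons Sayar describes map roughly onto the LogCompare and outlier operators. A hedged sketch of both, written as Python constants with an illustrative source category -- exact parameters would depend on your data:

```python
# Peacetime check: compare current log signatures against the same
# window one day earlier (LogCompare does the clustering and diffing)
COMPARE_QUERY = "_sourceCategory=prod/web | logcompare timeshift -24h"

# Proactive check: flag one-minute slices whose error volume deviates
# from the learned baseline by more than 3 standard deviations
OUTLIER_QUERY = (
    "_sourceCategory=prod/web error "
    "| timeslice 1m | count by _timeslice "
    "| outlier _count window=20, threshold=3"
)
```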

So, reactive, to proactive, all the way to predictive is the philosophy that we've been trying to build in terms of our analytics stack and capabilities.

Gardner: How are some actual customers using this and what are they getting back for their investment?

Sayar: We have customers that span retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We're well north of 1,200 unique paying customers, and they span anyone from Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, Xora -- modern companies as well as traditional companies.

What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that's where they start leveraging Sumo Logic.

Second, what we see is that it’s not always a digital transformation; it's often a cost-reduction and/or consolidation project. Consolidation could be of tools, infrastructure, and data centers, or it could be a migration to co-los or public-cloud infrastructures.

The nice thing about Sumo Logic is that we can connect anything from your top of rack switch, to your discrete storage arrays, to network devices, to operating system, and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.

In a migration or consolidation project, we’re able to help them compare performance and availability, the SLAs associated with those, and differences in the delivery of infrastructure services to developers or users.

So whether it's agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

Gardner: Ramin, how about a couple of concrete examples of what you were just referring to?

Cloud migration

Sayar: One good example is in the media space or media and entertainment space, for example, Hearst Media. They, like a lot of our other customers, were undergoing a digital-transformation project and a cloud-migration project. They were moving about 36 apps to AWS and they needed a single platform that provided machine-learning analytics to be able to recognize and quickly identify performance issues prior to making the migration and updates to any of the apps rolling over to AWS. They were able to really improve cycle times, as well as efficiency, with respect to identifying and resolving issues fast.

Another example would be JetBlue. We do a lot in the travel space. JetBlue is also an AWS and cloud customer. They provide a lot of in-flight entertainment to their customers. They wanted to be able to look at the service quality of the revenue model for the in-flight entertainment system -- to ascertain what movies are being watched, what the quality of service is, whether it's being degraded, and whether customers are being charged more than once for any type of service outage. That’s how they're using Sumo Logic to better assess and manage customer experience. It's not too dissimilar from Alaska Airlines or others that also provide in-flight notification and wireless types of services.

The last one is someone that we're all pretty familiar with and that’s Airbnb. We're seeing a fundamental disruption in the travel space and how we reserve hotels or apartments or homes, and Airbnb has led the charge, like Uber in the transportation space. In their case, they're taking a lot of credit-card and payment-processing information. They're using Sumo Logic for payment-card industry (PCI) audit and security, as well as operational visibility in terms of their websites and presence.

Gardner: It’s interesting. Not only are you giving them benefits along insight lines, but it sounds to me like you're giving them a green light to go ahead and experiment, and then learn very quickly whether that experiment worked or not, so that they can refine it. That’s so important in our digital business and agility drive these days.

Sayar: Absolutely. And if I were to think of another interesting example, Anheuser-Busch is another one of our customers. In this case, the CISO wanted to have a new approach to security and not one that was centered on guarding the data and access to the data, but providing a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

We did a pilot for them, and as they're modernizing a lot of their apps, as they start to look at the next generation of security analytics, the adoption of Sumo started to become instant inside AB InBev. Now, they're looking at not just their existing real estate of infrastructure and apps for all these teams, but they're going to connect it to future projects such as the Connected Path, so they can understand what the yield is from each pour in a particular keg in a location and figure out whether that’s optimized or when they can replace the keg.

So, you're going from a reactive approach for security and processes around deployment and operations to next-gen connected Internet of Things (IoT) and devices to understand business performance and yield. That's a great example of an innovative company doing something unique and different with Sumo Logic.

Gardner: So, what happens as these companies modernize and they start to avail themselves of more public-cloud infrastructure services, ultimately more-and-more of their apps are going to be of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

Data source and location

Sayar: Whether you're running on-premises, in co-los, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, you need a single platform that can manage and ingest all that data for you. Interestingly enough, about half of our customers’ workloads run on-premises and half run in the cloud.

We’re agnostic to where the data, applications, or workloads reside. The benefit we provide is a single, ubiquitous platform for managing the data streams coming to you from devices, applications, infrastructure, and mobile, in a simple, real-time way through a multitenant cloud service.
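One way that kind of location-agnostic ingestion typically works is a hosted HTTP source: every emitter, wherever it runs, POSTs its log lines to a per-source collector URL. A minimal sketch, with a placeholder endpoint rather than a real collector address:

```python
import requests

# Placeholder hosted-collector endpoint; real URLs are unique per source
HTTP_SOURCE = "https://collectors.sumologic.com/receiver/v1/http/UNIQUE_TOKEN"

def ship(lines):
    """POST newline-delimited log lines -- from on-prem, co-lo, or any
    cloud -- to the same multitenant ingestion endpoint."""
    requests.post(HTTP_SOURCE, data="\n".join(lines).encode("utf-8"),
                  timeout=10)

ship(['2017-06-01T12:00:01Z level=error msg="payment gateway timeout"'])
```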

Gardner: This reminds me of what I heard 10 or 15 years ago about business intelligence (BI): drawing in data, analyzing it, and getting close to being proactive in its ability to help the organization. How is continuous intelligence different, or even better -- something that would replace what we refer to as BI?

Sayar: The issue we faced with the first generation of BI was that it was very rear-view-mirror-centric, meaning it looked at data and events in the past. Where we are today, with this need for speed and the necessity to be always on and always available, the expectation is sub-millisecond latency to understand what's going on -- from a security, operational, or user-experience point of view.

I'd say that we're on V2, the next generation of what was traditionally called BI, and we refer to it as continuous intelligence, because you're continuously adapting and learning. It's based not only on what humans know -- the rules and correlations they presuppose, and the alarms and filters they create around them -- but on what machine intelligence can supplement that with, to provide a best-in-class capability. That is what we refer to as continuous intelligence.

Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there's a lot of investing going on now around big data and analytics as it pertains to many different elements of many different businesses, depending on their verticals. Then, we're talking about some of the logic benefit and continuous intelligence as it applies to applications and their lifecycle.

Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

Business Insights

Sayar: We touched a little bit on that in terms of the types of data that we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it's from everything with respect to providing business insights to operational insights, to security insights.

We have some customers in credit-card payment processing who actually use us to understand credit-card activations. They're extracting value from the data coming into Sumo Logic to understand and predict the business impact and the revenue associated with the services they're managing -- in this case, a set of apps that run on a CDN.
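As an illustration of pulling a business metric straight out of machine data, a query along these lines -- the message text and source category are hypothetical -- would chart card activations per hour:

```python
# Hypothetical: assumes activation events log a "card activated" message
ACTIVATIONS_PER_HOUR = (
    '_sourceCategory=cdn/payments "card activated" '
    "| timeslice 1h | count by _timeslice"
)
```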


At the same time, the fraud and risk team is using us for threat detection and prevention, and the operations team is using us to identify issues proactively and address any application or infrastructure problems. That’s what we refer to as full stack.

Full stack isn’t just the technology. It's providing business visibility and insights to line-of-business users looking at metrics around user experience and service quality; operational-level insights that help you become more proactive -- or, in some cases, react to wartime issues, as we've talked about; and, lastly, helping the security team take a different posture around threat detection and risk, both reactive and proactive.

In a nutshell, where we see these things starting to converge is what we refer to as full stack visibility around our strategy for continuous intelligence, and that is technology to business to users.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Sumo Logic.

You may also be interested in:

ServiceMaster's path to an agile development twofer: Better security and DevOps business benefits

The next BriefingsDirect Voice of the Customer security transformation discussion explores how home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how security technology leads to posture maturity and DevOps business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

How JetBlue turns mobile applications quality assurance into improved user experience wins

The next BriefingsDirect Voice of the Customer performance engineering case study discussion examines how JetBlue Airways in New York uses virtual environments to reduce software development costs, centralize performance testing, and create a climate for continuous integration and real-time monitoring of mobile applications.

We'll now hear how JetBlue cultivated a DevOps model by including advanced performance feedback in the continuous integration process to enable greater customer and workforce productivity.

Strategic DevOps—How advanced testing brings broad benefits to Independent Health

The next BriefingsDirect Voice of the Customer digital business transformation case study highlights how Independent Health in Buffalo, New York has entered into a next phase of "strategic DevOps."

After a two-year drive to improve software development, speed to value, and improved user experience of customer service applications, Independent Health has further extended advanced testing benefits to ongoing apps production and ongoing performance monitoring.

Learn here how reusing proven performance scripts and replaying synthetic transactions that mimic the user experience have cut costs and provided early-warning and trending insights into app behaviors and system status.

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.