

Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that's where we are seeing a lot of traction and focus.

The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public -- and managed services -- to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.

We have these conversations with many of our customers about how do you determine the right placement for these different workloads -- taking into account things like security, performance, compliance, and cost -- and helping them evaluate this hybrid IT environment that they now need to manage.
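To make the placement conversation concrete, here is a minimal, illustrative sketch of the kind of scoring such an exercise might use. The criteria weights, workload attributes, and the decide_placement() helper are hypothetical examples, not an HPE or PwC tool.

```python
# Illustrative only: a toy scoring model for deciding where a workload runs.
# The weights, attributes, and threshold below are hypothetical examples.

WEIGHTS = {"security": 0.35, "compliance": 0.25, "performance": 0.2, "cost": 0.2}

def private_cloud_score(workload: dict) -> float:
    """Higher score = stronger case for private cloud / Azure Stack placement."""
    return sum(WEIGHTS[criterion] * workload.get(criterion, 0.0) for criterion in WEIGHTS)

def decide_placement(workload: dict, threshold: float = 0.6) -> str:
    return "private (Azure Stack)" if private_cloud_score(workload) >= threshold else "public (Azure)"

# Example: a regulated workload with sensitive data vs. a bursty web front end.
regulated = {"security": 0.9, "compliance": 0.9, "performance": 0.7, "cost": 0.3}
bursty_web = {"security": 0.3, "compliance": 0.2, "performance": 0.5, "cost": 0.8}

print(decide_placement(regulated))   # -> private (Azure Stack)
print(decide_placement(bursty_web))  # -> public (Azure)
```

In practice the inputs would come from security reviews, compliance mappings, and cost models rather than hand-assigned numbers, but the structure of the decision is the same.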

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps -- beginning in the cloud, developing in the cloud, deploying in the cloud -- but there's also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it's no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

When you look at it from that viewpoint, a lot of our clients see the public cloud as more than a question of, “Is the app suitable for the public cloud?” They are also seeking cost advantages from the variability in that cost structure. That’s why we see them looking at the public cloud for more than just the applications that are obviously suited to it.

Public and/or private power

Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure them, they don’t want to rewrite them, because they are already an investment; they don’t want to spend a lot of time refactoring them.

If you look at these traditional applications, a lot of times when they are dealing with data – especially if they are dealing with sensitive data -- those are better placed in a private cloud.

Antao: One of the great things about Microsoft Azure Stack is it gives the data center that public cloud experience -- where developers have the similar experience as they would in a public cloud. The only difference is that you are now controlling the costs as well. So that's another big advantage we see.

Won: Yeah, absolutely, it's giving the developers the experience of a public cloud, but from the IT standpoint of also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud -- depending on their requirements -- is incredibly powerful. There's no other environment out there that provides that API-compatibility between private and public cloud deployments like Azure Stack does.
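As a rough illustration of what that API consistency means for operators, the sketch below shows one deployment routine targeting either Azure public cloud or an Azure Stack region simply by swapping the management endpoint it points at. The endpoint strings and the deploy_template() helper are placeholders, not real SDK calls.

```python
# Illustrative only: the same deployment code path, pointed at Azure public cloud
# or an on-premises Azure Stack region. Endpoints and deploy_template() are placeholders.

ENDPOINTS = {
    "azure-public": "https://management.azure.com",
    "azure-stack-dc1": "https://management.region.azurestack.example",  # placeholder Stack endpoint
}

def deploy_template(endpoint: str, resource_group: str, template: dict) -> None:
    """Stand-in for an ARM-style template deployment against the given endpoint."""
    print(f"Deploying {len(template.get('resources', []))} resource(s) "
          f"to {resource_group} via {endpoint}")

template = {"resources": [{"type": "Microsoft.Storage/storageAccounts", "name": "demostore"}]}

for target in ("azure-public", "azure-stack-dc1"):
    deploy_template(ENDPOINTS[target], "demo-rg", template)
```

The point Won is making is that developers and operators keep one tool chain and one set of templates; only the target environment changes.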

Gardner: Clearly Microsoft is interested in recognizing that skill sets, platform affinity, and processes are all really important. If they are able to provide a private cloud and public cloud experience that’s common to the IT operators that are used to using Microsoft platforms and frameworks -- that's a boon. It's also important for enterprises to be able to continue with the skills they have.

Ro, is such a commonality of skills and processes not top of mind for many organizations? 

Antao: Absolutely! I think there is always the risk, when you have different environments, of ending up with that “swivel chair” approach. You have one set of skills and processes for your private data center, and then another set of skills and processes to manage your public cloud footprint.

One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization -- because you now can have similar stacks on both ends.

Won: That's a great point. We know that the biggest challenge to adopting new concepts is not the technology; it's really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using Azure public cloud.

Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It's been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

Hybrid: a tectonic IT shift

Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of the hybrid delivery mode, it's a tectonic shift in IT's operating model, in their architecture, their culture, in their roles and responsibilities – in the fundamental value proposition of IT to the enterprise.

When we partner with HPE in helping organizations drive through this transformation, we work with HPE in rethinking the operating model, in understanding the new kinds of roles and skills, of being able to apply these changes in the context of the business drivers that are leading it. That's one of the typical ways that we work with HPE in this space.

Won: It's a great complement. HPE understands the technology, understands the infrastructure, combined with the business processes, and then the higher level of thinking and the strategy knowledge that PwC has. It's a great partnership.

Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It's not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

Won: One of the things that we are doing is working with Microsoft on their partnerships so that we can look at all these companies that have their offerings running on Azure public cloud and ensuring that those are all available and supported in Azure Stack, as well as running in the data center.

We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, and those products are available on Azure Stack -- as well as fully supported on Azure Stack running on HPE gear.

Gardner: They might not be concerned about the hardware, but they are concerned about the total value -- and the total solution. If the hardware players aren't collaborating well with the service providers and with the cloud providers -- then that's not going to work.

Quick collaboration is key

Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. So it's a necessary evil: it’s super important, but you'd just as soon not have to do it.

Gardner: I just don’t know what to take to the dry cleaner or not, right?

Won: Yeah, there you go!

Antao: From a consulting standpoint, clients no longer have the appetite for these five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways that we are working on the ecosystem-level solution -- again, much like the deep and longstanding relationship we have had with HPE -- is that we have also been working with Microsoft in the same context.

And in a three-way fashion, we have focused on defining accelerators for deploying these solutions -- codifying a lot of our experiences, the lessons learned, and a deep understanding of both the public and the private stack to accelerate value for our customers, because that’s what they expect today.

Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don't want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

Low risk, all reward

Gardner: So speed-to-value, super important; common solution cooperation and collaboration synergy among the partners, super important. But another part of this is doing it at low risk, because no one wants to be in a transition from a public to private or a full hybrid spectrum -- and then suffer performance issues, lost data, with end customers not happy.

PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models, in how you address certain risks, the emergence of new risks, and new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate that shift -- that paradigm shift -- that they have to make.

In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

Won: It’s absolutely critical -- this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that's incredibly helpful to our customer base.

Antao: So often when we think of governance, it’s in terms of the steady state and the runtime. But there's this whole journey of getting from where we are today to that hybrid IT state -- and having the governing mechanisms around it -- so that organizations can do it in a way that doesn't expose the business to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern the process of getting to that hybrid IT state? That’s where we also spend a lot of time as we help clients through this transformation.

Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack, the people, process and technology that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multiple cloud continuum as a core competency.

Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

Managed multi-cloud continuum

Antao: Yes. And I think in many cases that’s inevitable. When you look at most organizations today, generally speaking, they have at least two public cloud providers that they use. They consume several software-as-a-service (SaaS) applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, spanning on-premises and public clouds. That's where we see a lot of them evolve their management practices, their processes, and their talent -- to be able to abstract these different pools and focus on the business. That's where we see a lot of the talent development.

Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

So there is huge value to the customer in bringing these together -- for example, Azure Stack and Azure [public cloud]. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and effectively. We need to engage with them not only from an IT standpoint, but also from the developer standpoint. They can use those common services to develop an application and deploy it in multiple places in the same way.

Antao: What's making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really the capability to either deploy in one public cloud versus another.

A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy that a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds given their unique strengths and weaknesses?

There might be portions of a business process that, to your point earlier, Ken, are highly confidential. You are dealing with a lot of compliance requirements, so you may want to consume from an internal private cloud. For other parts you are looking for immense scale, to handle the peaks that hit that particular business process -- and that is where the public cloud has a strong track record. In a third case, it might be enterprise-grade workloads.

So that’s where we are seeing multi-cloud evolve: one business process could have multiple sources, and so how does an IT organization manage that in a seamless way?

Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who's banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

Nimble, quick and cost-efficient

Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

From a cost standpoint, there is a great example from a large-scale migration to the public cloud that we are currently doing. What the IT organization found was that they had consumed close to 70 percent of their migration budget for only 30 percent of the progress.

And a large part of that was because the minute you have workloads sitting on a public cloud -- whether it is a development workload or something you are still working your way through, so technically it’s not yet providing value -- the clock is ticking. A hybrid environment lets you do a lot of that development, get it ready -- almost production-ready -- and then, when the time is right to drive value from that application, that’s when you move it to the public cloud. Those are huge cost savings right there.

Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.

Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months -- or sometimes even weeks -- rather than the two years that it used to take.

It's that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we're seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

One of the things that we have been doing at HPE through our Flexible Capacity program is enabling customers who acquire hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align the cost to their usage. So we are taking that whole concept of pay-as-you-go that we see in the public cloud and bringing it into a private cloud environment.
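As a rough illustration of the pay-as-you-go idea Won describes, the sketch below compares a fixed up-front capacity cost against usage-metered billing. All prices and usage figures are invented for the example and do not reflect HPE Flexible Capacity terms.

```python
# Illustrative only: fixed capacity cost vs. usage-metered billing for the same workloads.
# All numbers are hypothetical and are not HPE Flexible Capacity pricing.

FIXED_MONTHLY_COST = 10_000.0          # amortized cost of owning 100 units of capacity
UNIT_PRICE_METERED = 120.0             # price per consumed unit per month

monthly_usage = [40, 55, 70, 95, 60, 45]  # units actually consumed each month

fixed_total = FIXED_MONTHLY_COST * len(monthly_usage)
metered_total = sum(units * UNIT_PRICE_METERED for units in monthly_usage)

print(f"Fixed capacity cost over 6 months: ${fixed_total:,.0f}")
print(f"Pay-as-you-go cost over 6 months:  ${metered_total:,.0f}")
# With uneven usage, metered billing tracks consumption instead of peak capacity.
```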

Antao: That’s a great point. From a cost standpoint, there is an efficiency discussion. But we are also seeing in today's world that we are depending on edge computing a lot more. I was talking to the CIO of a large park the other day, and his comment to me was that, yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services. He has thousands of visitors and guests in his park, and with that amount of dependency on technology he cannot afford that kind of latency.

And so part of it is also the revenue impact discussion, and using public cloud in a way that allows you to manage some of those risks in terms of that analytical power and that computing power you need closer to the edge -- closer to your internal systems.

Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate, how they use and take advantage of a hybrid continuum will give them competitive advantages and give them a one-up in terms of skills.

It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that's where an ecosystem solutions approach can be a huge benefit.

Let's look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

Look at clouds from both sides now

Won: You will see organizations shifting from a world of using multiple clouds, with different applications or services on each cloud, to an environment where services are based on multiple clouds. With the new cloud-native applications you'll be running different aspects of those services in different locations based on the requirements of each particular microservice.

So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

To that point, where we see the market trends is in the hybrid IT space, the adoption of that continuum. One of the big pressures IT organizations face is how they are going to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see in the next 12 months a lot more pressure in that space.

Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that might not yet be deep into this prepare themselves? Are there operations, culture, and skills considerations? How can they put themselves in a good position to take advantage of this when they do take the plunge?

Plan to succeed with IT on board

Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

We also have innovation centers that we've built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

Gardner: Ro, how to get ready when you want to take the plunge and make the best and most of it?

Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on this journey. What we have seen is, especially in the early stages, the running of pilot projects, of being able to involve the developers, the infrastructure architects, and the operations folks in pilot workloads, and learn how to manage it going forward in this new model.

You want to create that from a top-down perspective, being able to tie in to where this adds the most value to the business. From a grassroots effort, you need to also create champions within the trenches that are going to be able to manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future.

We’ll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define the best of the refinery of the future vision is Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren’t the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we spent the last 20 years making capital investments in our infrastructure, in our physical plant, and in the last four years we have put together a roadmap for our IT needs.

Through our introduction to HPE, we have entered into a partnership that is not just a client-customer relationship. It’s more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So, we are on the voyage of discovery together -- and we are sailing out to sea. It’s going great.

Gardner: JR, it’s always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should then come next in the technology. How is that back and forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other's “swim lanes.” I’m not trying to be a chemical company. I’m trying to understand what they do and how I can help them.

And they’re not trying to become an IT or IoT company. Their job is to make chemicals; our job is to figure out the IT. We’re seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge, this is transformational. As Doug said, they’ve made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.

Yet IoT is really new for them. How to leverage that? They have said, “You know what? We squeezed as much as we can out of OT technology, out of our people, and our processes. Now, let’s see what else is out there.”

And through introductions to us and our ecosystem partners, we’ve been able to show them how we can help squeeze even more out of those OT assets using this new technology. So, it’s really exciting.

Gardner: Doug, let’s level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You’re in the process of processing. You’re making one material into another and you’re doing that in bulk, and you need to do it on a just-in-time basis, given the demands of supply chains these days.

You need to make your business processes and your IT network mesh, to reach every corner. How does a wireless network become an enabler for your requirements?

The heart of IT 

Smith: In a large plant facility, we have different pieces of equipment. One piece of equipment is a pump -- the analogy would be the heart of the process facility of the plant.

So your question regarding the wireless network, if we can sensor a pump and tie it into a mesh network, there are incredible cost savings for us. The physical wiring of a pump runs anywhere from $3,000 to $5,000 per pump. So, we see a savings in that.

Being able to have the information wirelessly right away -- that gives us knowledge immediately that we wouldn’t have otherwise. We have workers and millwrights at the plant that physically go out and inspect every single pump in our plant, and we have 133 pumps. If we can utilize our sensors through the wireless network, our millwrights can concentrate on the pumps that they know are having problems.
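As a rough sketch of how sensor data could focus the millwrights' rounds, the snippet below flags only the pumps whose readings drift outside a nominal band and tallies the wiring cost avoided by going wireless. The alert thresholds are hypothetical; the per-pump wiring cost uses the $3,000 to $5,000 range Smith cites.

```python
# Illustrative only: prioritize pump inspections from wireless sensor readings.
# Thresholds are hypothetical; wiring costs use the $3,000-$5,000/pump range cited above.

VIBRATION_LIMIT_MM_S = 7.1      # example vibration-velocity alert threshold
TEMP_LIMIT_C = 85.0             # example bearing-temperature alert threshold

readings = {
    "P-101": {"vibration": 2.3, "temperature": 61.0},
    "P-117": {"vibration": 8.4, "temperature": 72.0},   # high vibration
    "P-122": {"vibration": 3.1, "temperature": 91.5},   # running hot
}

needs_inspection = [tag for tag, r in readings.items()
                    if r["vibration"] > VIBRATION_LIMIT_MM_S or r["temperature"] > TEMP_LIMIT_C]
print("Inspect first:", needs_inspection)   # -> ['P-117', 'P-122']

pumps, low, high = 133, 3_000, 5_000
print(f"Wiring avoided across {pumps} pumps: ${pumps * low:,} to ${pumps * high:,}")
```

Across 133 pumps, the avoided wiring alone works out to roughly $400,000 to $665,000, before counting the labor the millwrights get back.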

Gardner: You’re also able to track those individuals, those workers, so if there’s a need to communicate, to locate them, to make sure that they are hearing the policy, that’s another big part of IoT and people coming together.

Safety is good business

Smith: The tracking of workers is more of a safety issue -- and safety is critical, absolutely critical in a petrochemical facility. We must account for all our people and know where they are in the event of any type of emergency situation.

Gardner: We have the sensors, we can link things up, we can begin to analyze devices and bring that data analytics to the edge, perhaps within a mini data center facility, something that’s ruggedized and tough and able to handle a plant environment.

Given this scenario, JR, what sorts of efficiencies are organizations like Texmark seeing? I know in some businesses, they talk about double digit increases, but in a mature industry, how does this all translate into dollars?

Fuller: We talk about the power of one percent. A one percent improvement in one of the major companies is multi-billions of dollars saved. A one percent change is huge, and, yes, at Texmark we’re able to see some larger percentage-wise efficiency, because they’re actually very nimble.

It’s hard to turn a big titanic ship, but the smaller boat is actually much better at it. We’re able to do things at Texmark that we are not able to do at other places, but we’re then able to create that blueprint of how they do it. 

You’re absolutely right, doing edge computing, with our HPE Edgeline products, and gathering the micro-data from the extra compute power we have installed, provides a lot of opportunities for us to go into the predictive part of this. It’s really where you see the new efficiencies.

Recently I was with the engineers out there, and we’re walking through the facility, and they’re showing us all the equipment that we’re looking at sensoring up, and adding all these analytics. I noticed something on one of the pumps. I’ve been around pumps, I know pumps very well.

I saw this thing, and I said, “What is that?”

“So that’s a filter,” they said.

I said, “What happens if the filter gets clogged?”

“It shuts down the whole pump,” they said.

“What happens if you lose this pump?” I asked.

“We lose the whole chemical process,” they explained.

“Okay, are there sensors on this filter?”

“No, there are only sensors on the pump,” they said.

There weren’t any sensors on the filter. Now, that’s just something that we haven’t thought of, right? But again, I’m not a chemical guy. So I can ask questions that maybe they didn’t ask before.

So I said, “How do you solve this problem today?”

“Well, we have a scheduled maintenance plan,” they said.

They don’t have a problem, but based on the scheduled maintenance plan that filter gets changed whether it needs to or not. It just gets changed on a regular basis. Using IoT technology, we can tell them exactly when to change that filter. Therefore IoT saves on the cost of the filter and the cost of the manpower -- and those types of potential efficiencies and savings are just one small example of the things that we’re trying to accomplish.
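Fuller's filter story is the classic shift from calendar-based to condition-based maintenance. Below is a minimal sketch of that rule, assuming differential pressure across the filter as the clogging signal; the threshold and readings are hypothetical.

```python
# Illustrative only: replace a fixed-schedule filter change with a condition-based rule.
# Differential pressure across the filter is used as the clogging signal; the limit is hypothetical.

DELTA_P_LIMIT_PSI = 12.0   # example "filter is clogging" threshold

def filter_needs_change(upstream_psi: float, downstream_psi: float) -> bool:
    """Change the filter only when the pressure drop across it exceeds the limit."""
    return (upstream_psi - downstream_psi) >= DELTA_P_LIMIT_PSI

# A calendar plan changes the filter every N days regardless of condition;
# the sensor-based rule changes it only when the pressure drop says so.
print(filter_needs_change(48.0, 41.0))   # False -> keep running
print(filter_needs_change(52.0, 37.5))   # True  -> schedule a change before the pump starves
```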

Continuous functionality

Smith: It points to the uniqueness of the people-level relationship between the HPE team, our partners, and the Texmark team. We are able to have these conversations to identify things that we haven’t even thought of before. I could give you 25 examples of things just like this, where we say, “Oh, wow, I hadn’t thought about that.” And yet it makes people safer and it all becomes more efficient.

Gardner: You don’t know what the potential use cases can be until you have that network in place and the data analytics to utilize. The name of the game is utilization efficiency, but also continuous operations.

How do you increase your likelihood or reduce the risk of disruption and enhance your continuous operations using these analytics?

Smith: To answer, I’m going to use the example of toll processing. Toll processing is when we would have a customer come to us and ask us to run a process on the equipment that we have at Texmark.

Normally, they would give us a recipe, and we would process a material. We take samples throughout the process, the production, and deliver a finished product to them. With this new level of analytics, with the sensoring of all these components in the refinery of the future vision, we can provide a value-add to the customers by giving them more data than they could ever want. We can document and verify the manufacture and production of the particular chemical that we’re toll processing for them.

Fuller: To add to that, as part of the process, sometimes you may have to do multiple runs when you're tolling, because of your feed stock and the way it works.

By using advanced analytics, and some of the predictive benefits of having all of that data available, we're looking to gain efficiencies that cut down the number of additional runs needed. If you take a process that would have taken three runs and we can knock that down to two runs -- that's roughly a 30 percent decrease in total cost and expense. It also allows them to produce more product and get it out to people a lot faster.

Smith: Exactly. Exactly!

Gardner: Of course, the more insight that you can obtain from a pump, and the more resulting data analysis, that gives you insight into the larger processes. You can extend that data and information back into your supply chain. So there's no guesswork. There's no gap. You have complete visibility -- and that's a big plus when it comes to reducing risk in any large, complex, multi-supplier undertaking.

Beyond data gathering, data sharing

Smith: It goes back to relationships at Texmark. We have relationships with our neighbors that are unique in the industry, and so we would be able to share the data that we have.

Fuller: With suppliers.

Smith: Exactly, with suppliers and vendors. It's transformational.

Gardner: So you're extending a common, standard, industry-accepted platform approach locally into an extended process benefit. And you can share that because you are using common, IT-industry-wide infrastructure from HPE.

Fuller: And that's very important. We have a three-phase project, and we've just finished the first two phases. Phase 1 was to put ubiquitous WiFi infrastructure in there, with the location-based services, and all of the things to enable that. The second phase was to upgrade the compute infrastructure with our Edgeline compute and put in our HPE Micro Datacenter in there. So now they have some very robust compute.

With that infrastructure in place, it now allows us to do that third phase, where we're bringing in additional IoT projects. We will create a data infrastructure with data storage, and application programming interfaces (APIs), and things like that. That will allow us to bring in a specialty video analytic capability that will overlay on top of the physical and logical infrastructure. And it makes it so much easier to integrate all that.

Gardner: You get a chance to customize the apps much better when you have a standard IT architecture underneath that, right?

Trailblazing standards for a new workforce

Smith: Well, exactly. What you are saying, Dana -- and it gives me chills when I start thinking about what we're doing at Texmark within our industry -- is the setting of standards, blazing a new trail. When we talk to our customers and our suppliers and we tell them about this refinery of the future project that we're initiating, all other business goes out the window. They want to know more about what we're doing with the IoT -- and that's incredibly encouraging.

Gardner: I imagine that there are competitive advantages when you can get out in front and you're blazing that trail. If you have the experience, the skills of understanding how to leverage an IoT environment, and an edge computing capability, then you're going to continue to be a step ahead of the competition on many levels: efficiency, safety, ability to customize, and supply chain visibility.

Smith: It surely allows our Texmark team to do their jobs better. I use the example of the millwrights going out and inspecting pumps, and they do that every day. They do it very well. If we can give them the tools, where they can focus on what they do best over a lifetime of working with pumps, and only work on the pumps that they need to, that's a great example.

I am extremely excited about the opportunities at the refinery of the future to bring new workers into the petrochemical industry. We have a large number of people within our industry who are retiring; they’re taking intellectual capital with them. So to be able to show young people that we are using advanced technology in new and exciting ways is a real draw and it would bring more young people into our industry.

Gardner: By empowering that facilities edge and standardizing IT around it, that also gives us an opportunity to think about the other part of this spectrum -- and that's the cloud. There are cloud services and larger data sets that could be brought to bear.

How does the linking of the edge to the cloud have a benefit?

Cloud watching

Fuller: Texmark Chemicals has one location, and they service the world from that location as a global leader in dicyclopentadiene (DCPD) production. So the cloud doesn't have the same impact as it would for maybe one of the other big oil or big petrochemical companies. But there are ways that we're going to use the cloud at Texmark and rally around it for safety and security.

Utilizing our location-based services, and our compute, if there is an emergency -- whether it's at Texmark or a neighbor -- using cloud-based information like weather, humidity, and wind direction -- and all of these other things that are constantly changing -- we can provide better directed responses. That's one way we would be using cloud at Texmark.

When we start talking about the larger industry -- and connecting multiple refineries together or upstream, downstream and midstream kinds of assets together with a petrochemical company -- cloud becomes critical. And you have to have hybrid infrastructure support.

You don't want to send all your video to the cloud to get analyzed. You want to do that at the edge. You don't want to send all of your vibration data to the cloud, you want to do that at the edge. But, yes, you do want to know when a pump fails, or when something happens so you can educate and train and learn and share that information and institutional knowledge throughout the rest of the organization.
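The edge-versus-cloud split Fuller describes amounts to summarizing high-rate data locally and forwarding only the exceptions. Here is a minimal sketch of that pattern; the thresholds are hypothetical and send_to_cloud() is a placeholder for whatever uplink a real deployment would use.

```python
# Illustrative only: analyze high-rate vibration data at the edge and forward
# just the summarized anomaly events to the cloud. send_to_cloud() is a placeholder.
import statistics

def send_to_cloud(event: dict) -> None:
    print("uplink:", event)   # stand-in for an HTTPS or message-queue call to a cloud service

def process_at_edge(pump_id: str, samples, limit: float = 7.1) -> None:
    rms = statistics.fmean(s * s for s in samples) ** 0.5   # RMS of the raw vibration samples
    if rms > limit:                                         # only exceptions leave the plant
        send_to_cloud({"pump": pump_id, "rms": round(rms, 2), "status": "vibration-alert"})

process_at_edge("P-117", [6.8, 9.2, 8.7, 7.9])   # forwards one small event
process_at_edge("P-101", [2.1, 2.4, 1.9, 2.2])   # nothing sent; raw data stays at the edge
```

The raw samples never cross the wide-area link; only the institutional knowledge worth sharing does.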

Gardner: Before we sign off, let’s take a quick look into the crystal ball. Refinery of the future, five years from now, Doug, where do you see this going?

Smith: The crystal ball is often kind of foggy, but it’s fun to look into it. I had mentioned earlier opportunities for education of a new workforce. Certainly, I am focused on the solutions that IoT brings to efficiencies, safety, and profitability of Texmark as a company. But I am definitely interested in giving people opportunities to find a job to work in a good industry that can be a career.

Gardner: JR, I know HPE has a lot going on with edge computing, making these data centers more efficient, more capable, and more rugged. Where do you see the potential here for IoT capability in refineries of the future?

Future forecast: safe, efficient edge

Fuller: You're going to see the pace pick up. I have to give kudos to Doug. He is a visionary. Whether he admits that or not, he is actually showing an industry that has been around for many years how to do this and be successful at it. So that's incredible. In that crystal ball look, that five-year look, he's going to be recognized as someone who helped really transform this industry from old to new economy.

As far as edge-computing goes, what we're seeing with our converged Edgeline systems, which are our first generation, and we've created this market space for converged edge systems with the hardening of it. Now, we’re working on generation 2. We're going to get faster, smaller, cheaper, and become more ubiquitous. I see our IoT infrastructure as having a dramatic impact on what we can actually accomplish and the workforce in five years. It will be more virtual and augmented and have all of these capabilities. It’s going to be a lot safer for people, and it’s going to be a lot more efficient.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Get ready for the post-cloud world

Just when cloud computing seems inevitable as the dominant force in IT, it’s time to move on because we’re not quite at the end-state of digital transformation. Far from it.

Now's the time to prepare for the post-cloud world.

It’s not that cloud computing is going away. It’s that we need to be ready for making the best of IT productivity once cloud in its many forms becomes so pervasive as to be mundane, the place where all great IT innovations must go.

Read the rest ...

 

You may also be interested in:

Sumo Logic CEO on how modern apps benefit from 'continuous intelligence' and DevOps insights

The next BriefingsDirect applications health monitoring interview explores how a new breed of continuous intelligence emerges by gaining data from systems infrastructure logs -- either on-premises or in the cloud -- and then cross-referencing that with intrinsic business metrics information.

We’ll now explore how these new levels of insight and intelligence into what really goes on underneath the covers of modern applications help ensure that apps are built, deployed, and operated properly.

Today, more than ever, how a company's applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work, too.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’re joined by an executive from Sumo Logic to learn why modern applications are different, what's needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

To describe how to build and maintain the best applications, welcome Ramin Sayar, President and CEO of Sumo Logic. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible types of applications, either delivered through rich web mobile interfaces or through traditional mechanisms that are served up through laptops or other access points and point-of-sale systems are driving a next wave of technology architecture supporting these apps.

These modern apps are around a modern stack, and so they’re using new platform services that are created by public-cloud providers, they’re using new development processes such as agile or continuous delivery, and they’re expected to constantly be learning and iterating so they can improve not only the user experience -- but the business outcomes.

Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

User experience

Sayar: You’re spot on. The obvious benefit of always-on is centered on the rich user interaction and user experience. So, while a lot of the conversation around modern apps tends to focus on the technology and the components, there are actually fundamental challenges in the process of how these new apps are built and managed on an ongoing basis, and in what implications that has for security. A lot of times, those two aspects are left out when people are discussing modern apps.

 

Gardner: That's right. We’re now talking so much about DevOps these days, but in the same breath, we’re taking about SecOps -- security and operations. They’re really joined at the hip.

Sayar: Yes, they’re starting to blend. You’re seeing the technology decisions around public cloud, around Docker and containers, and microservices and APIs, and not only led by developers or DevOps teams. They’re heavily influenced and partnering with the SecOps and security teams and CISOs, because the data is distributed. Now there needs to be better visibility instrumentation, not just for the access logs, but for the business process and holistic view of the service and service-level agreements (SLAs).

Gardner: What’s different from say 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application that would be drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach for iterating and release. Often, those are run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.

We see one of two things happening. The first is a need to replace the front end of those apps; we refer to those as brownfield. They start to change from waterfall to agile and they start to have more of an N-tier feel. It's really more around the front end -- maybe your web properties are a good example of that. And they start to componentize pieces of their apps, either on VMs or in private clouds, and that's often good for existing types of workloads.

The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

Often it's centered on new technology: a stack built entirely on microservices, an API-first development methodology, modern container technologies like Docker, Mesosphere, and CoreOS, and public-cloud infrastructure and services from Amazon Web Services (AWS) or Microsoft Azure. As a result, the technology decisions made there require different skill sets and teams to come together to deliver on the DevOps and SecOps processes that we just mentioned.

Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren't important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can't tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it's all apps, right?

It's a data problem

Sayar: Absolutely. Regardless of whether it's enterprise or consumer, if it's mid-market small and medium business (SMB) or enterprise that you are building these apps for, what we see from our customers is that they all have a similar challenge, and they’re really trying to deal with the volume, the velocity, and the variety of the data around these new architectures and how they grapple and get their hands around it. At the end of day, it becomes a data problem, not just a process or technology problem.

Gardner: Let's talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility, not only from code and your repositories like GitHub or Jenkins, but all the way through the components of your code, to API calls, to what your deployment tools are used for in terms of provisioning and performance.

We spend a lot of effort to integrate to the various DevOps tool chain vendors, as well as provide the holistic view of what users are doing in terms of access to those applications and services. We know who has checked in which code or which branch and which build created potential issues for the performance, latency, or outage. So we give you that 360-view by providing that full stack set of capabilities.

Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

Sayar: We’ve invested quite a bit of our intellectual property (IP) not only in providing integration with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of the architecture of being a true cloud-native, multitenant, fast, and simple solution.

So, unlike others that are out there and available for you, Sumo Logic's architecture is truly cloud native and multitenant, but it's centered on the principle of near real-time data streaming.

As the data is coming in, our data-streaming engine allows developers, IT ops administrators, sys admins, and security professionals to have their own view, coarse-grained or fine-grained, through the role-based access controls we have in the system -- to leverage the same data for different purposes, versus having to wait for someone to create a dashboard, create a view, or grant access to a system when something breaks.
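To illustrate the idea of one stream serving many role-scoped views, here is a toy sketch: one list of incoming events, with each role applying its own filter. The roles, fields, and filters are hypothetical and this is not the Sumo Logic API.

```python
# Illustrative only: one incoming log stream, different role-scoped views of it.
# The roles, fields, and filters are hypothetical; this is not the Sumo Logic API.

events = [
    {"level": "ERROR", "service": "checkout", "msg": "payment timeout"},
    {"level": "INFO",  "service": "auth",     "msg": "login ok"},
    {"level": "WARN",  "service": "auth",     "msg": "3 failed logins for one account"},
]

ROLE_VIEWS = {
    "developer": lambda e: e["level"] == "ERROR",            # code-quality and app failures
    "security":  lambda e: e["service"] == "auth",           # access and authentication activity
    "sysadmin":  lambda e: e["level"] in ("ERROR", "WARN"),  # overall operational health
}

for role, keep in ROLE_VIEWS.items():
    view = [e["msg"] for e in events if keep(e)]
    print(f"{role:>9}: {view}")
```

Each team reads the same data the moment it arrives, sliced for its own responsibilities, rather than waiting on a shared dashboard to be built.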

Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You'd get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what's the big step to going to streaming? Why is that an essential part of making this so actionable?

Sayar: It’s driven based on the architectures and the applications. No longer is it acceptable to look at samples of data that span 5 or 15 minutes. You need the real-time data, sub-second, millisecond latency to be able to understand causality, and be able to understand when you’re having a potential threat, risk, or security concern, versus code-quality issues that are causing potential performance outages and therefore business impact.

The old way -- hope and pray that, when I deployed code, I would find out about problems only when a user complained -- is no longer acceptable. You lose business and credibility, and at the end of the day, there’s no real way to hold developers, operations folks, or security folks accountable because of the legacy tools and process approach.

Center of the business

Those expectations have changed, because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way for us to analyze the metadata coming in and provide very simple access through APIs or through our user interfaces based on your role to be able to address issues proactively.

Conceptually, there’s this notion of wartime and peacetime as we’re building and delivering our service. We look at the problems that users -- customers of Sumo Logic and internally here at Sumo Logic -- are used to and then we break that down into this lifecycle -- centered on this concept of peacetime and wartime.

Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

Then, there's this notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or security risk and threat implication has been discovered, the real-time data-streaming engine is notifying people instantly, and you're getting PagerDuty alerts, you're getting Slack notifications. It's no longer the traditional helpdesk notification process when people are getting on bridge lines.

Because the teams are often distributed and it’s shared responsibility and ownership for identifying an issue in wartime, we're enabling collaboration and new ways of collaboration by leveraging the integrations to things like Slack, PagerDuty notification systems through the real-time platform we've built.

So, the always-on application expectations that customers and consumers have, have now been transformed to always-on available development and security resources to be able to address problems proactively.
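The "wartime" flow Sayar describes comes down to pushing an alert into the channels a team already watches the moment a streaming rule fires. Below is a minimal sketch of posting an incident message to a Slack incoming webhook; the webhook URL and message text are placeholders, and this is not Sumo Logic's integration code.

```python
# Illustrative only: push a "wartime" alert into Slack as soon as a streaming rule fires.
# The webhook URL is a placeholder; Slack incoming webhooks accept a simple JSON payload.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(text: str) -> None:
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)   # fire-and-forget for the sketch; add error handling in practice

notify_slack("checkout latency p99 breached SLA -- see the last 5 minutes on the dashboard")
```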

Gardner: It sounds like we're able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

Sayar: That’s exactly right. The essence of what we provide is a single platform and service that leverages machine logs and time-series data, and that eliminates a lot of the complexity that exists in traditional processes and tools. No longer do you need to do “swivel-chair” correlation across multiple UIs, tools, and products. No longer do you have to wait for the helpdesk person to notify you. We're trying to provide that instant knowledge and collaboration through the real-time data-streaming platform we've built, to bring teams together rather than keep them divided.

Gardner: That sounds terrific if I'm the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business process, even at a C-table level? What is it about continuous intelligence that cannot only help apps run on time and well, but help my business run on time and well?

Need for agility

Sayar: We talked a little bit about the whole need for agility. From a business point of view, the line-of-business folks who are associated with any of these greenfield projects or apps want to shorten the cycle times of application delivery. They want measurable results for application changes or web changes, so they can see whether their web properties have increased or decreased user satisfaction or, at the end of the day, business revenue.

So, we're able to help the developers, the DevOps teams, and ultimately the line of business deliver on the speed and agility these new modes of delivery require. We do that through a single, comprehensive platform, as I mentioned.

At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

At the same time we're helping with agility, we're also helping with prevention. A lot of our customers often start with the security teams, who are looking for a new way to inspect this volume of data that’s coming in -- not only at the infrastructure or end-user level, but at the application and code level. What we're really able to do, as I mentioned earlier, is provide a unifying approach that brings these disparate teams together.

Download the State of Modern Applications in AWS Report

Gardner: And yet individuals can extract the intelligence view that best suits their needs in that moment.

Sayar: Yes. And ultimately what we're able to do is improve customer experience, increase revenue-generating services, increase the efficiency and agility with which quality code -- and therefore quality applications -- gets delivered, and lastly, improve collaboration and communication.

Gardner: I’d really like to hear some real-world examples of how this works, but before we go there, I’m still interested in the how. As to this idea of machine learning, we're hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What is it that you're using machine learning for when it comes to this volume and variety, in understanding apps and making that usable in the context of a business metric of some kind?

Sayar: This is an interesting topic, because there's a lot of noise in the market around big data, machine learning, and advanced analytics. Since Sumo Logic was started six years ago, we've built the platform to ensure not only that we have best-in-class security and encryption capabilities, but that it's centered on the fundamental purpose of democratizing analytics -- making it simpler for more than just a subset of folks to get access to the information relevant to their roles and responsibilities, whether they're on security, ops, or development teams.

To answer your question a little more succinctly, our platform is predicated on multiple levels of machine-learning and analytics capabilities. Starting at the lowest level, something we refer to as LogReduce separates the signal from the noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they're no longer searching irrelevant data. They're searching the relevant data -- the infrequent or previously unknown messages -- rather than what's constantly occurring in their environment.

In doing so, it’s not just about mean time to identification, but also about how quickly we're able to respond and repair. We've seen customers using LogReduce cut mean time to resolution by upwards of 50 percent.
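As a rough sketch of what running a LogReduce search programmatically might look like, the example below submits a query through Sumo Logic's Search Job API. The logreduce operator is part of the Sumo Logic query language, but the API region, credentials, time range, and source category here are assumptions made for illustration.

```python
# Sketch: submit a LogReduce query through the Sumo Logic Search Job API.
# Access ID/key, API region, and the source category are illustrative placeholders.
import requests

API = "https://api.sumologic.com/api/v1"   # region-specific in real deployments
AUTH = ("ACCESS_ID", "ACCESS_KEY")         # placeholder credentials

# LogReduce clusters similar messages into signatures, so you review patterns
# instead of paging through every raw line.
query = "_sourceCategory=prod/checkout error | logreduce"

job = requests.post(
    f"{API}/search/jobs",
    auth=AUTH,
    headers={"Accept": "application/json"},
    json={
        "query": query,
        "from": "2017-03-01T00:00:00",
        "to": "2017-03-01T01:00:00",
        "timeZone": "UTC",
    },
    timeout=30,
).json()

print("search job id:", job.get("id"))  # poll /search/jobs/{id} for status and results
```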

Predictive capabilities

Our core analytics, at the lowest level, help solve for operational metrics and value. Then we start to become less reactive: once you've dealt with an outage or a security threat, you start to leverage some of the other predictive capabilities in our stack.

For example, I mentioned this concept of peacetime and wartime. During peacetime, you're looking at changes over time as you deploy code and applications to various geographies and locations. A lot of times, developers and ops folks who use Sumo want to use the LogCompare or outlier operators in our machine-learning capabilities to compare branches of code, and to relate code quality to the performance and availability of the service and app.

We allow them, with a click of a button, to compare a window of events and metrics -- the last hour, day, week, or month -- against other time slices of data, and show how much better or worse it is. This is before deploying to production. Once they're in production, we allow them to use predictive analytics to look at anomalies and abnormal behavior and get more proactive.
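The kinds of queries behind those comparisons might look roughly like the snippets below. The logcompare and outlier operators are part of the Sumo Logic query language, but the source categories, time shifts, and thresholds are invented for this sketch.

```python
# Illustrative queries only; source categories and thresholds are made up.

# Peacetime comparison: how does the last hour of errors differ from the
# same hour yesterday (for example, before vs. after a deployment)?
compare_query = """
_sourceCategory=prod/web error
| logcompare timeshift -24h
"""

# Production anomaly detection: flag time slices whose error counts drift
# outside the learned baseline.
outlier_query = """
_sourceCategory=prod/web error
| timeslice 5m
| count by _timeslice
| outlier _count window=20, threshold=3, direction=+-
"""
```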

So, from reactive to proactive, all the way to predictive -- that's the philosophy we've been trying to build into our analytics stack and capabilities.

Gardner: How are some actual customers using this and what are they getting back for their investment?

Sayar: We have customers that span retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We're well north of 1,200 unique paying customers, and they span anyone from Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, Xora -- modern companies as well as traditional companies.

What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that's where they start leveraging Sumo Logic.

Second, what we see is that it’s not always a digital transformation; it's often a cost-reduction and/or consolidation project. Consolidation could involve tools, infrastructure, or data centers, or it could be a migration to co-los or public-cloud infrastructures.

The nice thing about Sumo Logic is that we can connect anything from your top-of-rack switches, to your discrete storage arrays, to network devices, operating systems, and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.
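To give a feel for how "connect anything" can work in practice, here is a minimal sketch that pushes a batch of raw log lines from arbitrary devices to a hosted HTTP source. The collector URL and the log lines themselves are placeholders; in a real deployment you would use the unique URL generated for your source, or an installed collector.

```python
# Sketch: ship a batch of raw log lines to a hosted HTTP source.
# The unique collector URL below is a placeholder.
import requests

HTTP_SOURCE_URL = "https://collectors.sumologic.com/receiver/v1/http/UNIQUE_TOKEN"

log_lines = [
    "2017-03-01T00:00:01Z switch01 link-down port=17",
    "2017-03-01T00:00:02Z cdn edge=iad cache=MISS path=/checkout status=502",
    "2017-03-01T00:00:03Z app order-service latency_ms=812 status=500",
]

# Raw text in the body, one log message per line.
resp = requests.post(HTTP_SOURCE_URL, data="\n".join(log_lines), timeout=10)
resp.raise_for_status()
```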

Whether it's a migration or a consolidation project, we’re able to help them compare performance and availability, the SLAs associated with those, and differences in how infrastructure services are delivered to developers or users.

So whether it's agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

Gardner: Ramin, how about a couple of concrete examples of what you were just referring to?

Cloud migration

Sayar: One good example is in the media and entertainment space -- for example, Hearst Media. They, like a lot of our other customers, were undergoing a digital-transformation and cloud-migration project. They were moving about 36 apps to AWS, and they needed a single platform that provided machine-learning analytics to recognize and quickly identify performance issues before the migration and as updates to any of the apps rolled over to AWS. They were able to really improve cycle times, as well as efficiency, in identifying and resolving issues fast.

Another example would be JetBlue. We do a lot in the travel space. JetBlue is also an AWS and cloud customer. They provide a lot of in-flight entertainment to their customers. They wanted to look at service quality for the revenue model behind the in-flight entertainment system -- which movies are being watched, what the quality of service is, whether it's being degraded, and whether customers are being charged more than once because of service outages. That's how they're using Sumo Logic to better assess and manage customer experience. It's not too dissimilar from Alaska Airlines or others that also provide in-flight notification and wireless services.

The last one is someone that we're all pretty familiar with and that’s Airbnb. We're seeing a fundamental disruption in the travel space and how we reserve hotels or apartments or homes, and Airbnb has led the charge, like Uber in the transportation space. In their case, they're taking a lot of credit-card and payment-processing information. They're using Sumo Logic for payment-card industry (PCI) audit and security, as well as operational visibility in terms of their websites and presence.

Gardner: It’s interesting. Not only are you giving them benefits along insight lines, but it sounds to me like you're giving them a green light to go ahead and experiment, and then learn very quickly whether that experiment worked or not, so they can refine it. That's so important in the digital business and agility drive these days.

Sayar: Absolutely. And if I were to think of another interesting example, Anheuser-Busch is another one of our customers. In this case, the CISO wanted a new approach to security -- not one centered on guarding the data and access to it, but one that provides a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

We did a pilot for them, and as they're modernizing a lot of their apps, as they start to look at the next generation of security analytics, the adoption of Sumo started to become instant inside AB InBev. Now, they're looking at not just their existing real estate of infrastructure and apps for all these teams, but they're going to connect it to future projects such as the Connected Path, so they can understand what the yield is from each pour in a particular keg in a location and figure out whether that’s optimized or when they can replace the keg.

So, you're going from a reactive approach for security and processes around deployment and operations to next-gen connected Internet of Things (IoT) and devices to understand business performance and yield. That's a great example of an innovative company doing something unique and different with Sumo Logic.

Gardner: So, what happens as these companies modernize and start to avail themselves of more public-cloud infrastructure services, and ultimately more and more of their apps are of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

Data source and location

Sayar: Whether you're running on-premises, in co-los, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, we provide a single platform that can manage and ingest all of that data for you. Interestingly enough, about half of our customers’ workloads run on-premises and half run in the cloud.

We’re agnostic to where the data is or where their applications or workloads reside. The benefit we provide is the single ubiquitous platform for managing the data streams that are coming in from devices, from applications, from infrastructure, from mobile to you, in a simple, real-time way through a multitenant cloud service.

Gardner: This reminds me of what I heard 10 or 15 years ago about business intelligence (BI) -- drawing in data, analyzing it, and getting close to being proactive in its ability to help the organization. How is continuous intelligence different, or even better, and something that would replace what we refer to as BI?

Sayar: The issue we faced with the first generation of BI was that it was very rear-view-mirror-centric, meaning it looked at data and events in the past. Where we are today, with this need for speed and the necessity to be always on and always available, the expectation is sub-millisecond latency to understand what's going on, from a security, operational, or user-experience point of view.

I'd say we're on V2, the next generation of what was traditionally called BI, and we refer to it as continuous intelligence because you're continuously adapting and learning. It's based not only on what humans know -- the rules and correlations they presuppose when they create alarms and filters -- but also on what machines and machine intelligence supplement that with, to provide best-in-class capability. That's what we refer to as continuous intelligence.

Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there's a lot of investing going on now around big data and analytics as they pertain to many different elements of many different businesses, depending on their verticals. Then, we're talking about the benefits of continuous intelligence as it applies to applications and their lifecycle.

Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

Business Insights

Sayar: We touched a little bit on that in terms of the types of data we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it spans everything from business insights, to operational insights, to security insights.

We have some customers that are in credit-card payment processing, and they actually use us to understand activations for credit cards, so they're extracting value from the data coming into Sumo Logic to understand and predict business impact and relevant revenue associated with these services that they're managing; in this case, a set of apps that run on a CDN.
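A business-facing query of that sort can be as simple as counting activation events per time slice and trending them; the source category, search term, and parsed field below are assumptions made purely for illustration.

```python
# Illustrative business-metrics query: credit-card activations per hour, per region.
# The source category, message text, and field names are invented for this sketch.
activation_query = """
_sourceCategory=cdn/payments "card_activated"
| parse "region=*," as region
| timeslice 1h
| count by _timeslice, region
"""
```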

Try Sumo Logic for Free to Get Critical Data and Insights into Apps and Infrastructure Operations

At the same time, the fraud and risk team is using us for threat detection and prevention, and the operations team is using us to identify issues proactively and address any application or infrastructure problems. That's what we refer to as full stack.

Full stack isn’t just the technology; it's providing business visibility and insights to line-of-business users who look at metrics around user experience and service quality, down to the operational-level impacts that help you become more proactive or, in some cases, react to wartime issues, as we've talked about. And lastly, it helps the security team take a different posture -- reactive and proactive -- around threat detection and risk.

In a nutshell, where we see these things starting to converge is in what we refer to as full-stack visibility as part of our strategy for continuous intelligence -- from technology, to business, to users.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Sumo Logic.

You may also be interested in:

ServiceMaster's path to an agile development twofer: Better security and DevOps business benefits

ServiceMaster's path to an agile development twofer: Better security and DevOps business benefits

The next BriefingsDirect Voice of the Customer security transformation discussion explores how home-maintenance repair and services provider ServiceMaster develops applications with a security-minded focus as a DevOps benefit.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how security technology leads to posture maturity and DevOps business benefits, we're joined by Jennifer Cole, Chief Information Security Officer and Vice President of IT, Information Security, and Governance for ServiceMaster in Memphis, Tennessee, and Ashish Kuthiala, Senior Director of Marketing and Strategy at Hewlett Packard Enterprise DevOps. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

How JetBlue turns mobile applications quality assurance into improved user experience wins

The next BriefingsDirect Voice of the Customer performance engineering case study discussion examines how JetBlue Airways in New York uses virtual environments to reduce software development costs, centralize performance testing, and create a climate for continuous integration and real-time monitoring of mobile applications.

We'll now hear how JetBlue cultivated a DevOps model by including advanced performance feedback in the continuous integration process to enable greater customer and workforce productivity.

Strategic DevOps—How advanced testing brings broad benefits to Independent Health

The next BriefingsDirect Voice of the Customer digital business transformation case study highlights how Independent Health in Buffalo, New York has entered into a next phase of "strategic DevOps."

After a two-year drive to improve software development, speed to value, and improved user experience of customer service applications, Independent Health has further extended advanced testing benefits to ongoing apps production and ongoing performance monitoring.

Learn here how the reuse of proven performance scripts and replaying of synthetic transactions that mimic user experience have cut costs and gained early warning and trending insights into app behaviors and system status.

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.