

Inside story: How HP Inc. moved from a rigid legacy to data center transformation

A discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time.

Better management of multicloud IaaS proves accelerant to developer productivity for European gaming leader Magellan Robotech

Learn how Magellan Robotech uses cloud management as a means to best access hybrid cloud services that rapidly bring new resources to developers.

How Norway’s Fatland beat back ransomware thanks to a rapid backup and recovery data protection stack approach

Learn how an integrated backup and recovery capability allowed production processing systems to be snapped back into use in only a few hours.

HPE and Citrix team up to make hybrid cloud-enabled workspaces simpler to deploy

A discussion on how hyperconverged infrastructure and virtual desktop infrastructure are combining to make one of the more traditionally challenging workloads far easier to deploy, optimize, and operate.

Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture

A discussion on how Citrix and Hewlett Packard Enterprise are aligned to bring new capabilities to the coalescing architectures around data center core, hybrid cloud, and edge computing.

Huge waste in public cloud spend sets stage for next wave of total cloud governance solutions, says 451's Fellows

A discussion on how IT leaders face an increasingly complex mix of identifying and automating for both best performance and best price points across all of their cloud options.

How new tools help any business build ethical and sustainable supply chains

The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.

We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks -- from use of illegal and unethical labor practices to hidden environmental malpractices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation are our panelists: Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft; and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?

Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.


First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good. 

Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society -- and not just generating dividends and capital growth for shareholders. 

Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent -- particularly Millennials and Generation Z -- and they are more likely to get better investment rates as well. 

Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?

Better business decisions with AI

Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.

We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings. 

These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.

What has been missing was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business.

Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba. 

Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.

McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand -- not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements. 


But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.

We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.

At Verisk Maplecroft, we came into business about 12 years ago, which was really interesting because the company grew out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand, and to make them simple, easy, and intuitive.

They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we’re up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks -- like politics, economics, environmental, and social risks. But under each of those risk themes are specific issues that we look at. So, if we’re talking about social risk, we’re looking at diversity and labor, and then under each of those risk issues we go a step further, to the indicators -- it’s all that data matrix that comes together that tells the actionable story.

Some companies still just want to check a [compliance] box. Other companies want to dig deeper -- but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips. 

Global compliance

Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.

Rakowski: EcoVadis conducts business sustainability ratings, and the way we’re used in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”


They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business. 

What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out what we call a right-sized questionnaire to the supplier; the supplier responds to material questions based on what kind of goods or services they provide, what geography they are in, and what size of business they are.

Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score. 

And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices. 

Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own -- and SAP Ariba is now aggregating the aggregators.

How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from? 

Timely insights, best business decisions

Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the aspects of the procurement process that support what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.

The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there’s risk, there is opportunity.

So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.

We're looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there is risk there is opportunity.

Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?

Harris: If we look at some risk aspects such as natural disasters, nothing demands a timelier reaction. When our data sources alert us to an earthquake, for example, we’re able to very quickly ascertain who the affected suppliers are and where their distribution centers and factories are located.

When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt. 
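To make that proximity-matching idea concrete, here is a minimal illustrative sketch in Python. The facility list, coordinates, and radius threshold are invented for the example and are not SAP Ariba's actual data model or alerting API; they simply show how a disaster alert could be checked against known supplier sites.

```python
# Hypothetical sketch: flag supplier facilities near a natural-disaster alert.
# Facility data, the alert feed, and the radius threshold are all assumptions
# for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

facilities = [  # supplier name, site type, latitude, longitude (illustrative)
    ("Acme Components", "factory", 35.68, 139.69),
    ("Acme Components", "distribution center", 34.69, 135.50),
    ("Orion Plastics", "factory", 52.52, 13.40),
]

def facilities_at_risk(alert_lat, alert_lon, radius_km=200):
    """Return facilities within radius_km of the alert epicenter."""
    return [
        (supplier, site, round(haversine_km(alert_lat, alert_lon, lat, lon), 1))
        for supplier, site, lat, lon in facilities
        if haversine_km(alert_lat, alert_lon, lat, lon) <= radius_km
    ]

# Example: an earthquake alert near Tokyo
for supplier, site, dist in facilities_at_risk(35.7, 139.7):
    print(f"{supplier} {site} is {dist} km from the epicenter -- review mitigation plan")
```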

Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them? 

AI risk assessment reaps rewards

Harris: If you look at AI, this is a great technology, and what we’re trying to do is really simplify that process for our customers to figure out how they can take action on the information we’re providing. So rather than them having to be experts in risk analysis and doing all this analysis themselves, AI allows us to surface those risks through the technology -- through our procurement suite, for example -- to impact the decisions they’re making.

For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time. 

A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas. 

Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?

Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.

We collect a great amount of documentation from the suppliers we’re evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information our analysts are looking for, compressing the evaluation time for each supplier from what used to be about six or seven hours down to three or four. So that’s essentially allowing us to double our workforce of analysts in a heartbeat.

AI is helping us scan through the documentation more quickly. That way we can find the relevant information that our analysts are looking for, allowing us to double our workforce of analysts.

The other thing it’s doing is helping scan through material news feeds. We’re collecting more than 2,500 news sources, from China Labor Watch to OSHA reports. These technologies help us scan through those reports for material information, and then put that in front of our analysts. It helps them surface the real-time news that we’re sure at that point is material.

And that way we’re combining AI with real human analysis and validation to make sure that what we’re serving is accurate and relevant.

Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.

We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article. 

By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”

And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly. 
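The following is a deliberately simplified sketch of that kind of pipeline: match an article to a tracked supplier, flag the risk type, and assign a score. The keyword rules, risk categories, and scoring are illustrative stand-ins for the trained ML models Harris describes, not the actual SAP Ariba implementation.

```python
# Hypothetical sketch of the pipeline described above: match a news article to
# a tracked supplier, classify the risk type, and assign a score to route to
# the category manager. All names, rules, and numbers are invented.
RISK_KEYWORDS = {
    "financial": ["contract loss", "downsizing", "bankruptcy", "layoffs"],
    "labor": ["forced labor", "child labor", "strike", "unpaid wages"],
    "environmental": ["chemical spill", "toxic discharge", "emissions violation"],
}

TRACKED_SUPPLIERS = {"Acme Components", "Orion Plastics"}

def assess_article(text):
    """Return (supplier, risk_type, score) hits found in an article, if any."""
    text_lower = text.lower()
    supplier = next((s for s in TRACKED_SUPPLIERS if s.lower() in text_lower), None)
    if supplier is None:
        return None  # article is not about a supplier we track
    hits = []
    for risk_type, keywords in RISK_KEYWORDS.items():
        matched = [k for k in keywords if k in text_lower]
        if matched:
            # crude score: more matched terms -> higher risk (0-100 scale)
            hits.append((supplier, risk_type, min(100, 40 + 20 * len(matched))))
    return hits or None

article = ("Acme Components announced downsizing after a major contract loss, "
           "raising questions about its viability as a supplier.")
for supplier, risk_type, score in assess_article(article) or []:
    print(f"Alert category manager: {supplier} -- {risk_type} risk, score {score}")
```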

Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem? 

The AI-aggregation advantage

McVeigh: As an aggregator of data, it’s basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.

We look at many indices. We actually just revamped our social indices a couple of years ago.

Before that you had a human who was sitting there, maybe they were having a bad day and they just sort of checked the box. But now we have the capabilities to validate that data against true sources. 

Just as Emily mentioned, we were able to significantly reduce the number of human-rights analysts it took to create an index, and allow them to go out and begin to work on additional types of projects for our customers. This helped our customers utilize the data that’s being automated and generated for them.

We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it. 

Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?

Partnership prowess

Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.

We do not yet have the solution generally available, so we are currently working on building out integration with our partners. We have a number of common customers that are working with us as what we call our design partners. There’s no better customer ultimately than a customer already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame.

And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly. 

What it means is that users can get that aggregated view, that high-level view across all of these different risk types and these metrics in one place. However, if they ultimately want to get to the nth degree of detail, they have the ability to click through and naturally go into the solutions from our partners here as well, to drill right down to that level of detail. The aim here is to get them that high-level view to help them with their overall assessments of these suppliers.

Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?

Customers that have an active subscription to our combined solutions are then able to benefit from the integration and see all that data within SAP Ariba's supplier risk solutions directly.

Harris: Yes, and that’s a great question. We already address that in our solutions today. We cover more than 200 risk types, and we have categorized those into four primary risk categories. The way the risk exposure score works is that the customer gets to decide how to weight each of the contributing attributes that go into that calculation.

If I have more of a bias toward financial risk aspects, or more of a bias toward ESG metrics, for example, then I can weight that part of the score, the algorithm, appropriately.
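A minimal sketch of that weighting idea, assuming each risk category already carries a 0-100 score and the buyer supplies the weights. The category names and numbers are invented for illustration and are not SAP Ariba's actual fields:

```python
# Illustrative weighted risk-exposure score: each category has a 0-100 score,
# and the customer chooses how heavily each category counts.
def risk_exposure(category_scores, weights):
    """Weighted average of category scores; weights are normalized so they
    don't have to sum to exactly 1."""
    total_weight = sum(weights.values())
    return sum(category_scores[c] * w for c, w in weights.items()) / total_weight

scores = {"financial": 72, "environmental_social_governance": 55,
          "operational": 30, "legal_regulatory": 40}

# A buyer with a strong bias toward ESG metrics...
esg_biased = {"financial": 1, "environmental_social_governance": 3,
              "operational": 1, "legal_regulatory": 1}
# ...versus one biased toward financial risk.
finance_biased = {"financial": 3, "environmental_social_governance": 1,
                  "operational": 1, "legal_regulatory": 1}

print(round(risk_exposure(scores, esg_biased), 1))      # ESG-weighted exposure
print(round(risk_exposure(scores, finance_biased), 1))  # finance-weighted exposure
```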

Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well -- or not so well.

Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency -- how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?

Nice-to-have ROI

McVeigh: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.

How do you put a tangible dollar value on something that didn’t occur? What we can look at is taking data acquired over the past few years and understanding that, as risk reduction shows up over time, companies can source from more suppliers, add diversity to their supply chain, or even minimize their supply chain, depending on how they want to move forward in their risk landscape and supplier diversification program. It’s giving them the power to make those decisions faster and in a more actionable way.

And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”

Other companies are beginning to see that this is no longer being pushed down on them by large retailers and large organizations; it’s a choice they make to do better business. They are also realizing that there’s a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process.

We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it -- but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.

Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen? 

Pay attention, be prepared

Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that. 

From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great. 

They can use their company’s resources to bet on supply-chain actors that are doing the right thing, that are paying living wages, that are not overworking their employees, and that are not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are really a must, regardless of regulations.

Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.

And so, it’s really those individuals that are willing to stand up, take a stand and think about how they are going to put in place a program that will really drive this culture into the business, and educate the business. Even if you’re starting from a very little group that’s dedicated to it, you can find a way to make it grow within a culture. I think it’s critical.

Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?

Synergistic risk management

Harris: Organizationally, there tend to be a couple of different teams inside a business that manage risks. On the one hand, there can be the governance, risk, and compliance team; on the other, the corporate social responsibility team.

I think, first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across them. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, but also ensure that the business is doing the right thing in the first place -- which itself helps guard against those risks.

I think getting the organizational model right, and also thinking about how they can best begin to map out their supply chains are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who are the players or supply-chain actors in that supply chain? It’s pretty easy to determine now who are the tier-one suppliers, but who are the suppliers to the suppliers -- and who are the suppliers to the suppliers to the suppliers?

We’ve yet to actually build a better technology that can figure that out easily. We’re working on it; stay posted. But I think trying to compile that information upfront is great because once you get that mapping done, our software and our partner software from EcoVadis and Verisk Maplecroft is here to surface those kinds of risks inside and across that entire supply chain.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

Panel explores new ways to solve the complexity of hybrid cloud monitoring

The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.

We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.

Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.

We will also hear about new solutions and explore examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Listen to the podcastFind it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, and he is based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, and he is based in Grenoble, France; and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing -- and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.

Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.

They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.

Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.

The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure.

In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing -- but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs -- never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations decisions and adjustments must be handled in a much different way.

Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.

Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.

Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured -- and at compelling prices.

Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.

Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.

IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.

We are on the cusp of being able to take advantage of ML to tackle the complexity of multi-cloud deployments and keep business services safe.  

Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information. 

Ian, please tell us more about the survey findings.

IT leaders respond to their needs 

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.


More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.

The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two and five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].

What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments -- whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.

What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.

The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.

Some of the business impacts range from resources tied up collaborating with the cloud vendor to solve these issues, to the extra time required to resolve issues impacting service level agreements (SLAs) and contractual agreements, to prolonged downtime.

What we regularly see is that the adoption of cloud often translates into a loss in transparency of what’s deployed and the health of what’s being deployed, and how that’s capable of impacting the business. This insight is a strong bias on our investment and some of the solutions we will talk to you about. Their primary concern is on the visibility of what’s being deployed -- and what depends on the internal, on-premise as well as private and public cloud instances.

People need to see what is impacting the delivery of services as a provider, and whether that’s due to issues with local or remote resources, or the connectivity between them. It’s just compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere -- it could be any one of those links. The ability for people to know where those issues are is not happening fast enough for many people, with some 96 percent indicating that the issues are being resolved too slowly.

How to gain better visibility?

What are the key changes that need to be addressed when monitoring hybrid IT environments? People have challenges with discovery, understanding, and visualizing what has actually been deployed, and how it is impacting the end-to-end business.

They have limited access to the cloud infrastructure, and face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics to be able to properly understand what’s happening.

It shows some of the real challenges that people are facing. And as the world shifts to being more dependent on the services that they consume, traditional methods are not going to be properly adapted to the new environment. Newer solutions are needed. New ways of gaining visibility -- and of measuring availability and performance -- are going to be needed.

I think what’s interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to be able to properly understand how service delivery might be impacting their own businesses. For instance, you might think that IT is actually flying blind in the clouds as it were.

The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts. 

So, one of my next questions was: Across the different monitoring types, what’s needed for the hybrid IT environment? What should people be focusing on? Security, infrastructure visibility, end-user experience monitoring, service delivery monitoring, and cloud costs -- all had a high ranking among what people believe they need to be able to monitor. Whether you are a provider or a consumer -- and most people end up being both -- monitoring is really key.

People say they really need to span infrastructure monitoring, metrics monitoring, end-user experience, and security and compliance. But even that’s not enough, because to properly govern the service delivery, you are going to have to have an eye on the costs -- the cost of what’s being deployed -- and how you can optimize the resources according to those costs. You need that analysis whether you are a consumer or the provider.

The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data to find root causes, even from a predictive perspective. Clearly, people here expect scalability, and they expect to be able to use a big data platform.

For consumers of cloud services, they should be measuring what they are receiving, and be capable of seeing what’s impacting the service delivery. No one is really so naive as to say that infrastructure is somebody else’s problem. When it’s part of the service -- equally impacting the service that you are paying for, and that you are delivering to your business users -- then you had better have the means to see where the weak links are. That should be the minimum to seek, but there is still the need to prove to your providers that they’re underperforming and to renegotiate what you pay for.

Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. We should be able to govern the aspects of detecting when the service is degrading. 

So when the service degrades, workers’ productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.

So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics 

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.

Gary Brandt: Thanks, Dana. As the survey results outlined and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can’t look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner, given the volume and the complexity of these environments.


So how do we deal with all of this? It’s where analytics, advanced analytics via ML, really brings in value. What’s needed is a set of automated capabilities such as those described in Gartner’s definition of AIOps and these include traditional and streaming data management, log and wire metrics, and document ingestion from many different types of sources in these complex hybrid environments.

Dealing with all of this -- when you are not quite sure where to look and you have all this information coming in -- requires advanced analytics and some clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is what Gartner is really trying to guide the market toward and show where the industry is moving. The key capabilities they speak about are analytics that allow for predictive capabilities, the capability to find anomalies in vast amounts of data, and then to try to pinpoint where your root cause is, or at least eliminate the noise and focus on those areas.

We are making this Gartner report available for a limited time. What we have also found is that people don’t have the time, or often the skill set, for these activities; they need to focus on the business user, the target, and the different issues that come up in these hybrid environments. The AIOps capabilities that Gartner speaks about are great.

But without the automation to drive the activities or the response that needs to occur, there is a missing piece. Looking at our survey results and what our respondents said, it was clear that upward of 90-some percent are telling us that automation is considered highly critical. You need to see which event or metric trend clearly impacts a business service, and whether that service pertains to a local, on-premises solution or a remote solution in a cloud somewhere.

Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated -- not just so it can be declared more easily, but, more importantly, so it can be kept up to date. In complex environments, things are changing so rapidly.

Sense and significance of all that data? 

Micro Focus’ approach uses analytics to make sense of this vast amount of data coming in from these hybrid environments to drive automation. The automation of discovery, monitoring, and service analytics is really critical -- and must be applied across hybrid IT, against your resources, mapping them to the services that you define.

Those are the vast amounts of data that we just described. They come in the form of logs and events and metrics, generated from lots of different sources in a hybrid environment across cloud and on-prem. You have to begin to use analytics as Gartner describes to make sense of that, and we do that in a variety of ways, where we use ML to learn behavior, basically of your environment, in this hybrid world.

And we need to be able to suggest what the most significant data is, what the significant information is in your messages, to really try to help find the needle in a haystack. When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to give them the chance to anticipate and to remediate issues before they disrupt the services in a company’s environment.

When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt. 
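As an illustration of the underlying idea -- learning a metric's normal behavior and flagging sharp deviations early enough for an operator to act -- here is a toy rolling z-score detector in Python. It is a sketch of the general anomaly-detection technique, not the ML used inside Operations Bridge:

```python
# Illustrative sketch of anomaly flagging: learn a metric's recent behavior
# from a sliding window and flag points that deviate sharply from it.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value, z_score) for samples far outside recent behavior."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (value - mu) / sigma
                if abs(z) >= threshold:
                    yield i, value, round(z, 1)
        history.append(value)

# Example: steady response times with one spike an operator should see early.
response_ms = [120, 118, 125, 122, 119] * 5 + [480] + [121, 123]
for i, value, z in detect_anomalies(response_ms):
    print(f"sample {i}: {value} ms looks anomalous (z = {z})")
```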

And then we take this further because we have the analytics capability that’s described by Gartner and others. We couple that with the ability to execute different types of automation as a means to let the operator, the operations team, have more time to spend on what’s really impacting the business and getting to the issues quicker than trying to spend time searching and sorting through that vast amount of data.

And we built this on different platforms. One of the key things that’s critical when you have this hybrid environment is to have a common way, or an efficient way, to collect information and to store information, and then use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.

We like to refer to this as autonomous operations, and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail  

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.

We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.

Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.


All core use cases for Operations Bridge tie it to the underpinning of the Vertica big data analytics platform. We’re consolidating all the different types of data that we are getting; whether business transactions, IT infrastructure, application infrastructure, or business services data -- all of that is actually moved into a single data repository and then reduced in order to basically understand what the original root cause is.

And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.

This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.

As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We are providing the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnecting -- independent of the technology -- whether deployed in the public cloud, a private cloud, or even in a classical, non-cloud infrastructure. They can then understand how they are connecting, and they can use the toolset to navigate through it all, a modern HTML5-based interface, to look at all the data in one place.

They are able to consolidate more than 250 different technologies and information into a single place: their log files, the events, metrics, topology -- everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.

Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether private cloud, your own stack implementations, a hyperconverged (HCI) stack, like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.

We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.

Now, once we actually integrate those toolsets, we tightly integrate them for native tools on Amazon or for Docker tools, for example. You can include these tools, so you can then automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community, from the customers. That feedback helped earn us a Readers’ Choice Award from Cloud Computing Insider in 2017, ahead of the competition.

This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on the activity.

Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks -- or potentially kick off future provisioning tasks -- to solve whatever problems you can imagine. You can use this to not only identify the root cause, but also to automatically kick off a workflow to address the specific problem.

If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.

Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.

Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data -- not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?

And you can have this on a network operations center (NOC) wall, on your tablet, or your phone -- wherever you’d like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, and create graphical, very appealing dashboards for your different stakeholders.

Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.

David Herrera: Banco Sabadell is the fourth largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn’t have any kind of visibility into what was happening in the cloud.


We are working with private and public clouds and it’s quite difficult to correlate the information in events and incidents. We need to aggregate this information in just one dashboard. And for that, OpsBridge is a perfect solution for us.

We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.

The main benefit is that we have a detailed view of what is happening in the cloud. In the dashboard we are able to show availability and the number of resources that we are using -- almost in real time. Also, we are able to show the real-time cost of every resource, and we can even project the cost of the items.

The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.

[And that’s for] every single item that we have in the cloud now, even across the private and public cloud. The bank has invested a lot of money in this solution and we need to show them that it’s really a good choice in economic terms to migrate several systems to the cloud, and this tool will help us with this.

Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing because they will be able to see all the information, and also select information that we haven’t gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.

We were able to build a new monitoring system from scratch that doesn’t exist on the market. Now, we are able to aggregate a lot of detailing information from different clouds.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Micro Focus.

You may also be interested in:

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions -- literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?


Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is not only do you solve one problem -- but you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time, Storage Savings 

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.


At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers because they needed resources and we just couldn’t provide them with those resources. And it was not because we didn’t have the resources at hand; it was just the time to spin them up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?


Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.
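
As a purely hypothetical illustration of that model -- the depreciation curve and rate below are invented, not King Price’s actual pricing -- a premium that tracks the vehicle’s depreciated value might be sketched like this:

```python
# Hypothetical illustration of a premium that tracks vehicle depreciation.
# The rates and depreciation curve are invented for the example; they are not
# King Price's actual pricing model.

def monthly_premium(purchase_price: float, age_months: int,
                    annual_depreciation: float = 0.15,
                    rate_per_value: float = 0.004) -> float:
    """Premium charged as a fixed fraction of the vehicle's depreciated value."""
    current_value = purchase_price * (1 - annual_depreciation) ** (age_months / 12)
    return current_value * rate_per_value

for month in (0, 12, 24, 36):
    print(f"Month {month:>2}: R{monthly_premium(300_000, month):,.2f}")
```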

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.


Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing? 

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.


But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep Storage Simple

Steyn: We are currently using the HPE 3PAR storage, and it’s working quite well. We have some production environments running there; a lot of archiving uses for that. It’s still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that’s going to be it. We very soon adopted HCI into the production environments. And it was at that point where we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance -- it’s scary.


Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Containers, microservices, and HCI help governments in Norway provide safer public data sharing

The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.

We’ll now learn how Norway’s 18 counties are gaining a common shared pool for data on young people’s health and other sensitive information thanks to the streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for the FINT Project in Norway. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving interest in having a common platform for public information in your country?

Sjovatsen: We need interactions between the government and the community to be more efficient. So we needed to build the infrastructure that supports automatic solutions for citizens. That’s the main driver.

Gardner: What problems do you need to overcome in order to create a more common approach?

Common API at the core

Sjovatsen: One of the biggest issues is that [our users] buy business applications, such as human resources systems for school administrators to use, and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes -- that’s where the problem is. And that’s what the FINT project is all about.


[Due to apps heterogeneity] we then need to have developers create application programming interfaces (APIs), and it costs a lot of money, and it is of variable quality. What we’re doing now is creating a common API that’s horizontal -- for all of those business applications. It gives us the ability to use our data much more efficiently.
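
As a rough sketch of what such a horizontal API can look like in practice -- the vendor field names and the Employee model below are hypothetical stand-ins, not the real FINT schemas -- each source system is mapped into one common model that every consumer programs against:

```python
# A minimal sketch of the "horizontal API" idea: one common resource model in
# front of several vendor-specific business applications. The vendor clients
# and field names here are hypothetical stand-ins, not the real FINT schemas.

from dataclasses import dataclass

@dataclass
class Employee:            # the common, vendor-neutral model exposed by the API
    employee_id: str
    name: str
    school: str

def from_vendor_a(record: dict) -> Employee:
    return Employee(record["persId"], record["fullName"], record["orgUnit"])

def from_vendor_b(record: dict) -> Employee:
    return Employee(record["id"], f'{record["first"]} {record["last"]}', record["school_name"])

# Each adapter maps its source system into the same model, so every consumer
# downstream programs against one API instead of one integration per vendor.
vendor_a_data = [{"persId": "E100", "fullName": "Kari Nordmann", "orgUnit": "Oslo VGS"}]
vendor_b_data = [{"id": "7", "first": "Ola", "last": "Nordmann", "school_name": "Bergen VGS"}]

employees = [from_vendor_a(r) for r in vendor_a_data] + [from_vendor_b(r) for r in vendor_b_data]
for e in employees:
    print(e)
```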

Gardner: Please describe for us what the FINT project is and why this is so important for public health.

Sjovatsen: It’s all about taking back the power over the information we’ve handed to the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get hold of all the information, describe it, define it, and then make it available via APIs -- both for public use and also for internal use.

Gardner: What sort of information are we dealing with here? Why is it important for the general public health? 

Sjovatsen: It’s all kinds of information. For example, it’s school information, such as how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that businesses that want to create new and better solutions for us can build new innovations on it.


Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?

Agility, scalability, and speed

Sjovatsen: We needed something agile, so that we can roll out updates continuously. We also needed a way to roll back quickly if something fails.

The reason we are running this on one of the county council’s data centers is that we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.

Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach? 

Sjovatsen: We are in the late stages of testing, and we’re going into production in early 2018. At the moment, we’re looking into using HPE SimpliVity.

Container comfort

Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?

Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application, and we are able to test the code before it goes into production. That’s one of the main drivers.

The second reason is that it’s easy to roll out and easy to roll back. We also have developers in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It’s not much work if they need to install on another computer to get a working environment running.
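
A minimal sketch of that roll-out and roll-back pattern, driving the Docker CLI from Python, might look like the following; the image names, port, and health-check URL are placeholders for illustration:

```python
# A rough sketch of the roll-out / roll-back pattern described above, driving
# the Docker CLI from Python. Image names, port, and the health URL are
# hypothetical placeholders.

import subprocess
import time
import urllib.request

def run(cmd, check=True):
    subprocess.run(cmd, check=check)

def healthy(url, timeout=2.0):
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except Exception:
        return False

def deploy(image, name="fint-api", port=8080, fallback="registry.example/fint-api:stable"):
    run(["docker", "rm", "-f", name], check=False)          # remove old version if present
    run(["docker", "run", "-d", "--name", name, "-p", f"{port}:8080", image])
    time.sleep(5)
    if not healthy(f"http://localhost:{port}/health"):      # simple post-deploy health check
        run(["docker", "rm", "-f", name], check=False)      # roll back to the known-good image
        run(["docker", "run", "-d", "--name", name, "-p", f"{port}:8080", fallback])

deploy("registry.example/fint-api:1.4.2")
```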

Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?


Sjovatsen: The container approach is very close to the DevOps environment, so the time from code to production is very small compared to what we did before, when we had operations guys installing the stuff on servers. Now, we have a very rapid way to go from code to production.

Gardner: With the success of the FINT Project, would you consider extending this to other types of data and applications in other public sector activities or processes? If your success here continues, is this a model that you think has extensibility into other public sector applications?

Unlocking the potential

Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that’s the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway, and we will add value to those using our approach in the next year or so.

Gardner: That could have some very beneficial impacts, very good payoffs.

Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have made an API that extends over the locks on many doors, so we now have one API to open multiple locking systems. That’s another way to use this approach.
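
A tiny sketch of that "one API over many lock systems" idea -- the vendor classes and door identifiers below are hypothetical -- shows how a single open_door() call can hide which locking system sits behind each door:

```python
# A sketch of the "one API over many lock systems" idea: a single open_door()
# call dispatching to per-vendor adapters. The vendor classes and device IDs
# are hypothetical.

class VendorALock:
    def unlock(self, device_id: str) -> bool:
        print(f"[vendor A] unlocking {device_id}")
        return True

class VendorBLock:
    def release(self, door: str) -> bool:       # different vendor, different verb
        print(f"[vendor B] releasing {door}")
        return True

ADAPTERS = {
    "oslo-city-hall-3F": ("A", VendorALock()),
    "oslo-library-main": ("B", VendorBLock()),
}

def open_door(door_id: str) -> bool:
    """The one public API: callers never see which locking system is behind a door."""
    vendor, client = ADAPTERS[door_id]
    return client.unlock(door_id) if vendor == "A" else client.release(door_id)

open_door("oslo-library-main")
```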


Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?

Sjovatsen: I definitely recommend it -- it’s simple and agile. The main thing with containers is to separate the storage from the applications. That’s probably what we worked on the most to make it scalable. We wrote the application so it’s scalable, and we separated the data from the presentation layer.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

A tale of two hospitals—How healthcare economics in Belgium hastens need for new IT buying schemes

The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.

We will now explore how a regional hospital seeking efficiency -- and a teaching hospital seeking performance -- are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Infatuation leads to love—How container orchestration and federation enables multi-cloud competition

The use of containers by developers -- and now increasingly IT operators -- has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together ... and maybe even getting some relationship help along the way.

And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts.

This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use serverless computing models, and employ inclusive management to keep the container love alive.

How UBC gained TCO advantage via flash for its EduCloud cloud storage service

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost benefits by a move to all-flash storage.

We’ll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list? 

Dunington: A university setting isn't much different from any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids; they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer.


We have to be able to offer those services, because as most people know, there are choices -- and they can go somewhere else and choose those other products.

Our team is rather small -- just 15 members -- so we have to be agile, we have to be able to automate things, and we need tools that can work and fulfill those needs. So it's just like any other business, even though it’s a university setting.


Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, but we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance.

Because we are not just delivering to UBC, we have to be up running and always able to deliver, because each school has different requirements. At different times of the year -- because there is registration, there are exam times -- these things have to be up. You can’t not be functioning during an exam and have 600 students not able to take the tests that they have been studying for. So it impacts their life and we want to make sure that we are there and can provide the services for what they need.

Gardner: In order to maintain your service levels within those peak times, do you in your IaaS and storage services employ hybrid-cloud capabilities so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It’s a data-sovereignty law, the data can't leave the borders.

That's why EduCloud has been so successful, in my opinion, because of that option. They can just go and throw things out in the private cloud.

The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure cloud are putting data centers in Canada, which is good and it gives people an option. Our team’s goal is to provide the services, whether it's a hybrid model or all on-campus. We just want to be able to fulfill those needs.

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model -- but with the security, control and manageability of the private clouds.

What decisions have you made about your storage underpinnings, the infrastructure that supports your SaaS cloud?

Dunington: We have a large storage footprint. For our site, it’s about 12 petabytes of storage. We realized that we weren’t meeting the needs with spinning disks. One of the problems was that we had runaway virtual workloads that would cause problems, and they would impact other services. We needed some mechanism to fix that.

We wanted to make sure that we had the ability to attain quality of service levels and control those runaway virtual machines in our footprint.

We went through the whole request for proposal (RFP) process, and all the IT infrastructure vendors responded, but we did have some guidelines that we wanted to go through. One of the things we did is present our problems and make sure that they understood what the problems were and what they were trying to solve.

And there were some minimum requirements. We do have a backup vendor of choice that they needed to merge with. And quality of service is a big thing. We wanted to make sure that we had the ability to attain quality of service levels and control those runaway virtual machines in our footprint.
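
As an illustration of how such quality-of-service tiers can be enforced -- the IOPS ceilings and sample workloads below are invented -- a simple check flags any virtual machine that exceeds its tier's limit as a runaway:

```python
# An illustrative sketch of the quality-of-service idea: each VM is assigned a
# tier with an IOPS ceiling, and anything exceeding its ceiling is flagged as a
# runaway workload. Tier limits and the sample data are hypothetical.

TIER_IOPS_LIMIT = {"bronze": 1_000, "silver": 5_000, "gold": 20_000}

observed = [
    {"vm": "student-portal-db", "tier": "gold",   "iops": 14_200},
    {"vm": "batch-report-07",   "tier": "bronze", "iops": 9_800},   # runaway workload
]

for vm in observed:
    limit = TIER_IOPS_LIMIT[vm["tier"]]
    if vm["iops"] > limit:
        # In a real array this is where a QoS policy would throttle the workload
        # instead of letting it starve its neighbours.
        print(f"RUNAWAY: {vm['vm']} at {vm['iops']} IOPS exceeds {vm['tier']} limit of {limit}")
```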

Gardner: You gained more than just flash benefits when you got to flash storage, right?

Streamlined, safe, flash storage

Dunington: Yes, for sure. With an entire data center full of spinning disks, it gets to the point where the disks start to manage you; you are no longer managing the disks. And with the teams out there changing drives and moving volumes around, it becomes unwieldy. I mean, the power, the footprint, and all of that starts to grow.

Also, Vancouver is in a seismic zone, we are right up against the Pacific plate and it's a very active seismic area. Heaven forbid anything happens, but one of the requirements we had was to move the data center into the interior of the province. So that was what we did.

When we brought this new data center online, one of the decisions the team made was to move to an all-flash storage environment. We wanted to be sure that it made financial sense because it's publicly funded, and also improved the user experience, across the province.

Gardner: As you were going about your decision-making process, you had choices, what made you choose what you did? What were the deciding factors?

Dunington: There were a lot of deciding factors. There’s the technology, of being able to meet the performance and to manage the performance. One of the things was to lock down runaway virtual machines and to put performance tiers on others.

But it’s not just the technology; it's also the business part, too. The financial part had to make sense. When you are buying any storage platform, you are also buying the support team and the sales team that come with it.

Our team believes that technology is a certain piece of the pie, and the rest of it is relationship. If that relationship part doesn't work, it doesn’t matter how well the technology part works -- the whole thing is going to break down.

Because software is software, hardware is hardware -- it breaks, it has problems, there are limitations. And when you have to call someone, you have to depend on him or her. Even though you bought the best technology and got the best price -- if it doesn't work, it doesn’t work, and you need someone to call.

So those service and support issues were all wrapped up into the decision.


We chose the Hewlett Packard Enterprise (HPE) 3PAR all-flash storage platform. We have been very happy with it. We knew the HPE team well. They came and worked with us on the server blade infrastructure, so we knew the team. The team knew how to support all of it. 

We also use the HPE OneView product for provisioning, and it integrated with all of that. It also supported the performance optimization tool (IT Operations Management for HPE OneView) that lets us set those values, because one of the things in EduCloud is that customers choose their own storage tier, and we mark the price on it. So basically all we would do is present that new tier as new data storage within VMware, and then they would just move their workloads across non-disruptively. It has worked really well.

The 3PAR storage piece also integrates with VMware vRealize Operations Manager. We offer that to all our clients as a portal so they can see how everything is working and they can do their own diagnostics. Because that’s the one goal we have with EduCloud, it has to be self-service. We can let the customers do it, that's what they want.
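
A minimal sketch of that self-service tier catalog, with per-GB costs and markups invented purely for illustration, might look like this:

```python
# A sketch of the self-service tier catalogue described above: the service owner
# sets a markup on each storage tier and customers simply pick the tier they
# want. Costs and markups are invented for illustration.

TIERS = {
    # tier: (cost per GB-month to run, markup charged to the customer)
    "bronze": (0.020, 1.20),
    "silver": (0.035, 1.25),
    "gold":   (0.060, 1.30),
}

def monthly_charge(tier: str, allocated_gb: int) -> float:
    cost, markup = TIERS[tier]
    return allocated_gb * cost * markup

for tier in TIERS:
    print(f"{tier:>6}: 5 TB costs ${monthly_charge(tier, 5_000):,.2f}/month")
```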

Gardner: Not that long ago people had the idea that flash was always more expensive and that they would use it for just certain use-cases rather than pervasively. You have been talking in terms of a total cost of ownership reduction. So how does that work? How does the economics of this over a period of time, taking everything into consideration, benefit you all?

Economic sense at scale

Dunington: Our IT team and our management team are really good with that part. They were able to break it all down, and they found that this model would work at scale. I don’t know the numbers per se, but it made economic sense.

Spinning disks will still have a place in the data center. I don't know a year from now if an all-flash data center will make sense, because there are some records that people will throw in and never touch. But right now with the numbers on how we worked it out, it makes sense, because we are using the standard bronze, the gold, the silver tiers, and with the tiers it makes sense.

The 3PAR solution also has dedupe functionality and the compression that they just released. We are hoping to see how well that trends. Compression has only been around for a short period of time, so I can’t really say, but the dedupe has done really well for us.

Gardner: The technology overcomes some of the other baseline economic costs and issues, for sure.

We have talked about the technology and performance requirements. Have you been able to qualify how, from a user experience, this has been a benefit?

Dunington: The best benchmark is the adoption rate. People are using it, and there are no help desk tickets, so no one is complaining. People are using it, and we can see that everything is ramping up, and we are not getting tickets. No one is complaining about the price, the availability. Our operational team isn't complaining about it being harder to manage or that the backups aren’t working. That makes me happy.

The big picture

Gardner: Brent, maybe a word of advice to other organizations that are thinking about a similar move to private cloud SaaS. Now that you have done this, what might you advise them to do as they prepare for or evaluate a similar activity?


Dunington: Look at the full picture, look at the total cost of ownership. There’s the buying of the hardware, and there's also supporting the hardware, too. Make sure that you understand your requirements and what your customers are looking for first before you go out and buy it. Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there. We will see in a couple of years how it went.

Look at the big picture, step back. It’s just not the new shiny toy, and you might have to take a stepped approach into buying, but for us it worked. I mean, it’s a solid platform, our team sleeps well at night, and I think our customers are really happy with it.

Gardner: This might be a little bit of a pun in the education field, but do your homework and you will benefit.


Dunington: Yes, for sure.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How IoT capabilities open new doors for Miami telecoms platform provider Identidad

DreamWorks Animation crafts its next era of dynamic IT infrastructure

How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

Converged IoT systems: Bringing the data center to the edge of everything

IDOL-powered appliance delivers better decisions via comprehensive business information searches

OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

How lastminute.com uses machine learning to improve travel bookings user experience

HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

 

Kansas Development Finance Authority gains peace of mind, end-points virtual shield using hypervisor-level security

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart invasions now have been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What’s more, the jack-of-all-trades IT leaders of the mid-market concerns are striving to protect more data types on and off premises, their workload servers and expanded networks, as well as the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs -- while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As a director of all of IT at KDFA, security must be a big concern, but it can’t devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That’s a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced the trends that demanded very basic anti-spam solutions and the very basic virus threats that came via the web and e-mail.


What we’ve seen more recently is a growing trend of enhanced security attacks coming through malware and different exploits -- attacks that were once thought impossible are now the reality.

Therefore in recent times, my percentage of time dedicated to security had grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload during each given week.

Gardner: Before we get to how you’ve been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization -- but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us that operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler -- almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You’ve had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?


Kater: We went with virtualization via Citrix. It became our solution of choice because we were not willing to pay the extra tax, if you will, for other solutions on the market. We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

When we embraced virtualization, the security approaches were very traditional in nature. The old way of doing things worked fantastically for a physical endpoint.

The traditional approaches to security had been on our physical PCs for years. But when that security came over to the virtual realm, it bogged down our systems. It still required that updates be done manually. It just wasn’t innovating at the same speed as the virtualization, which was allowing us to create new endpoints.

And so the maintenance, the updating, and the growing threats were no longer being handled by the traditional approaches to security. We had endpoint security in place on our physical stations, but when we went virtual we no longer had endpoint security. We then had to focus on antivirus and anti-spam at the server level.

We found out very quickly that this was not going to solve our security issues. We then faced a lot of growing threats, again via e-mail and web, that were coming in through malware, spyware, and other activities that were embedding themselves on our file servers -- and then trickling down and moving laterally across our network to our endpoints.

Gardner: Just as your organization went virtual and adjusted to those benefits, the malware and the bad guys, so to speak, adjusted as well -- and started taking advantage of what they saw as perhaps vulnerabilities as organizations transitioned to higher virtualization.

Security for all, by all

Kater: They did. One thing that a lot of security analysts, experts, and end-users forget in the grand scheme of things is that this virtual world we live in has grown so rapidly -- and innovated so quickly -- that the same stuff we use to grow our businesses is also being used by the bad actors. So while we are learning what it can do, they are learning how to exploit it at the same speed -- if not a little faster.

Gardner: You recognized that you had to change; you had to think more about your virtualization environment. What prompted you to increase the capability to focus on the hypervisor for security and prevent issues from trickling across your systems and down to your endpoints?

Kater: Security has always been a concern here at KDFA. And there has been more of a security focus recently, with the latest news and trends. We honestly struggled with CryptoLocker, and we struggled with ransomware.

While we never had to pay out any ransom or anything -- and they were stopped in place before data could be exfiltrated outside of KDFA’s network -- we still had two or three days of either data loss or data interruption. We had to pull back data from an archive; we had to restore some of our endpoints and some of our computers.


As we battled these things over a very short period of time, they were progressively getting worse and worse. We decided that we needed a solution for our virtual environment -- one that would not only be easy to deploy and easy to manage, but would be centrally managed as well, enabling me to have more time to focus back on my workload -- and not have to worry so much about the security thresholds that had to be updated and maintained via the traditional model.

So we went out to the market. We ran very extensive proof of concepts (POCs), and those POCs very quickly illustrated that the underlying architecture was only going to be enterprise-ready via two or three vendors. Once we started running those through the paces, Bitdefender emerged for us.

I had actually been watching the Hypervisor Introspection (HVI) product development for the past four years, since its inception came with a partnership between Citrix, Intel, the Linux community and, of course, Bitdefender. One thing that was continuous throughout all of that was that in order to deploy that solution you would need GravityZone in-house to be able to run the HVI workloads.

And so we became early adopters of Bitdefender GravityZone, and we were able to see what it could do for our endpoints, our servers, and our Microsoft Exchange Servers. Then, Hypervisor Introspection became another security layer that we were able to build on top of the security solution we had already adopted from Bitdefender.

Gardner: And how long have you had these solutions in place?

Kater: We are going on one and a half to two years for GravityZone. And when HVI went to general availability earlier this year, in 2017, we were among the first adopters to deploy it across our production environment.

Gardner: If you had a “security is easy” button that you could pound on your desk, what are the sorts of things that you look for in a simpler security solution approach?

IT needs brains to battle breaches

Kater: The “security is easy” button would operate much like the human brain. It would need that level of intuitive instinct, that predictive insight ability. The button would generally be easily managed, automated; it would evolve and learn with artificial intelligence (AI) and machine learning what’s out there. It would dynamically operate with peaks and valleys depending on the current status of the environment, and provide the security that’s needed for that particular environment.

Gardner: Jeff, you really are an early adopter, and I commend you on that. A lot of organizations are not quite as bold. They want to make sure that everything has been in the market for a long time. They are a little hesitant.

But being an early adopter sounds like you have made yourselves ready to adopt more AI and machine learning capabilities. Again, I think that’s very forward-looking of you.

But tell us, in real terms, what has being an early adopter gotten for you? We’ve had some pretty scary incidents just in the recent past, with WannaCry, for example. What has being an early adopter done for you in terms of these contemporary threats?

Kater: The new threats, including the EternalBlue exploit that happened here recently, are very advanced in nature. Oftentimes when these breaches occur, it takes several months before they have even become apparent. And oftentimes they move laterally within our network without us knowing, no matter what you do.

Some of the more advanced and persistent threats don’t even have to infect the local host with any type of software. They work in the virtual memory space. It’s much different than the older threats, where you could simply reboot or clear your browser cache to resolve them and get back to your normal operations.

Earlier, when KDFA still made use of non-persistent desktops, if the user got any type of corruption on their virtual desktop, they were able to reboot, and get back to a master image and move on. However, with these advanced threats, when they get into your network, and they move laterally -- even if you reboot your non-persistent desktop, the threat will come back up and it still infects your network. So with the growing ransomware techniques out there, we can no longer rely on those definition-based approaches. We have to look at the newer techniques.

As far as why we are early adopters, and why I have chosen some of the principles that I have, I feel strongly that you are really only as strong as your weakest link. I strive to provide my users with the most advanced, nimble, and agnostic solutions possible.


We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations. It allows us to have discussions about increasing productivity at that point, and to maximize the potential of our smaller number of users -- versus having to worry about the latest news of security breaches that are happening all around us.

Gardner: You’re able to have a more proactive posture, rather than doing the fire drill when things go amiss and you’re always reacting to things.

Kater: Absolutely.

Gardner: Going back to making sure that you’re getting a fresh image and versions of your tools …  We have heard some recent issues around the web browser not always being safe. What is it about being able to get a clean version of that browser that can be very important when you are dealing with cloud services and extensive virtualization?

Virtual awareness, secure browsing

Kater: Virtualization in and of itself has allowed us to remove the physical element of our workstations when desirable and operate truly in that virtual or memory space. And so when you are talking about browsers, you can have a very isolated, a very clean browser. But that browser is still going to hit a website that can exploit your system. It can run in that memory space for exploitation. And, again, it doesn't rely on plug-ins to be downloaded or anything like that anymore, so we really have to look at the techniques that these browsers are using.

What we are able to do with the secure browsing technique is publish, in our case, via XenApp, any browser flavor with isolation out there on the server. We make it available to the users that have access for that particular browser and for that particular need. We are then able to secure it via Bitdefender HVI, making sure that no matter where that browser goes, no matter what interface it’s trying to align with, it’s secure across the board.

Gardner: In addition to secure browsing, what do you look for in terms of being able to keep all of your endpoints the way you want them? Is there a management approach of being able to verify what works and what doesn’t work? How do you try to guarantee 100 percent security on those many and varied endpoints?

Kater: I am a realist, and I realize that nothing will ever be 100 percent secure, but I really strive for that 99.9 percent security and availability for my users. In doing so -- being that we are so small in staff, and being that I am the one that should manage all of the security, architecture, layers, networking and so forth -- I really look for that centralized model. I want one pane of glass to look at for managing, for reporting.


I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, and what did it do to me and how was I protected. I need that so that I can report to my management staff and say, “Hey, honestly, this is what happened, this is what was happening behind the scenes. This is how we remediated and we are okay. We are protected. We are safe.”

And so I really look for that centralized management. Automation is key. I want something that will automatically update, with the latest virus and malware definitions, but also download the latest techniques that are seen out there via those innovative labs from our security vendors to fully patch our systems behind the scenes. So it takes that piece of management away from me and automates it to make my job more efficient and more effective.

Gardner: And how has Bitdefender HVI, in association with Bitdefender GravityZone, accomplished that? How big of a role does it play in your overall solution?

Kater: It has been a very easy deployment and management, to be honest. Again, entities large and small, we are all facing the same threats. When we looked at ways to attain the best solution for us, we wanted to make sure that all of the main vendors that we make use of here at KDFA were on board.

And it just so happened this was a perfect partnership, again, between Citrix, Bitdefender, Intel, and the Linux community. That close partnership, it really developed into HVI, and it is not an evolutionary product. It did not grow from anything else. It really is a revolutionary approach. It’s a different way of looking at security models. It’s a different way of protecting.

HVI allows for security to be seen outside of the endpoint, and outside of the guest agent. It’s kind of an inside-looking-outward approach. It really provides high levels of visibility, detection and, again, it prevents the attacks of today, with those advanced persistent threats or APTs.

With that said, since the partnership between GravityZone and HVI is so easy to deploy, so easy to manage, it really allows our systems to grow and scale when the need is there. And we just know that with those systems in place, when I populate my network with new VMs, they are automatically protected via the policies from HVI.

Given that the security has to be protected from the ground all the way up, we rest assured that the security moves with the workload. As the workload moves across my network, it’s spawned off and onto new VMs. The same set of security policies follows the workloads. It really takes out any human missteps, if you will, along the process because it’s all automated and it all works hand-in-hand together.
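
As a generic illustration of "the policy follows the workload" -- this is not the GravityZone or HVI API, whose calls differ -- a simple reconciliation loop can ensure that any newly spawned VM inherits the security policy of its group:

```python
# A generic sketch of "the policy follows the workload": a reconciliation loop
# that makes sure every VM discovered in the inventory carries the security
# policy of its group. This is an illustration of the concept only -- it is not
# the GravityZone/HVI API.

GROUP_POLICY = {"production": "hvi-strict", "development": "hvi-standard"}

inventory = [
    {"vm": "web-01",  "group": "production",  "policy": "hvi-strict"},
    {"vm": "web-02",  "group": "production",  "policy": None},        # freshly spawned VM
    {"vm": "dev-sql", "group": "development", "policy": "hvi-standard"},
]

def reconcile(vms):
    for vm in vms:
        wanted = GROUP_POLICY[vm["group"]]
        if vm["policy"] != wanted:
            vm["policy"] = wanted          # applied automatically -- no human step involved
            print(f"applied {wanted} to {vm['vm']}")

reconcile(inventory)
```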

Behind the screens

Gardner: It sounds like you have gained increased peace of mind. That’s always a good thing in IT; certainly a good thing for security-oriented IT folks. What about your end-users? Has the ability to have these defenses in place allowed you to give people a bit more latitude with what they can do? Is there a productivity, end-user or user experience benefit to this?

Kater: When it comes to security agents and endpoint security as a whole, I think a lot of people would agree with me that the biggest drawback when implementing those into your work environment is loss of productivity. It’s really not the end-user’s fault. It’s not a limitation of what they can and can't do, but it’s what happens when security puts an extra load on your CPU, it puts extra load on your RAM; therefore, it bogs down your systems. Your systems don’t operate as efficiently or effectively and that decreases your productivity.

With Bitdefender, and the approaches that we adopted, we have seen very, very limited -- almost immeasurable -- impacts on our network and on our endpoints. So user adoption has been greater than it has ever been for a security solution.

I’m also able to manipulate our policies within that Central Command Center or Central Command Console within Bitdefender GravityZone to allow my users, at will, if they would like, to see what they are being blocked against, and which websites they are trying to run in the background. I am able to pass that through to the endpoint for them to see firsthand. That has been a really eye-opening experience.

We used to compute daily, thinking we were protected, and that nothing was running in the background. We were visiting the pages, and those pages were acting as though we thought that they should. What we have quickly found out is that any given page can launch several hundred, if not thousands, of links in the background, which can then become an exploit mechanism, if not properly secured.

Gardner: I would like to address some of the qualitative metrics of success when you have experienced the transition to more automated security. Let’s begin with your time. You said you went from five or 10 percent of time spent on security to 50 or 60 percent. Have you been able to ratchet that back? What would you estimate is the amount of time you spend on security issues now, given that you are one and a half years in?

Kater: Dating back 5 to 10 years ago with the inception of VDI, my security footprint as far as my daily workload was probably around that 10 percent. And then, with the growing threats in the last two to three years, that ratcheted it up to about 50 percent, at minimum, maybe even 60 percent. By adopting GravityZone and HVI, I have been able to pull that back down to only consume about 10 percent of my workload, as most of it is automated for me behind the scenes.

Gardner: How about ransomware infections? Have you had any of those? Or lost documents, any other sort of qualitative metrics of how to measure efficiency and efficacy here?


Kater: I am happy to report that since the adoption of GravityZone, and now with HVI as an extra security layer on top of Bitdefender GravityZone, that we have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Gardner: Well, that speaks for itself. Let’s look to the future, now that you have obtained this. You mentioned earlier your interest in AI, machine learning, automating, of being proactive. Tell us about what you expect to do in the future in terms of an even better security posture.

Safety layers everywhere, all the time

Kater: In my opinion, again, security layers are vital. They are key to any successful deployment, whether you are large or small. It’s important to have all of your traditional security hardware and software in place working alongside this new interwoven fabric, if you will, of software -- and now at the hypervisor level. This is a new threshold. This is a new undiscovered territory that we are moving into with virtual technologies.

As that technology advances, and more complex deployments are made, it’s important to protect that computing ability every step of the way; again, from that base and core, all the way into the future.

More and more of my users are computing remotely, and they need to have the same security measures in place for all of their computing sessions. What HVI has been able to do for me here in the current time, and in moving to the future, is I am now able to provide secure working environments anywhere -- whether that’s their desktop, whether that’s their secure browser. I am able to leverage that HVI technology once they are logged into our network to make their computing from remote areas safe and effective.

Gardner: For those listening who may not have yet moved toward a hypervisor-level security – or who have maybe even just more recently become involved with pervasive virtualization and VDI -- what advice could you give them, Jeff, on how to get started? What would you suggest others do that would even improve on the way you have done it? And, of course, you have had some pretty good results.

Kater: It’s important to understand that everybody’s situation is very different, so identifying the best solutions for everybody is very much on an individual corporation basis. Each company has its own requirements, its own compliance to follow, of course.


The best advice that I can give is pick two or three vendors, at the least, and run very stringent POCs; no matter what they may be, make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network. Then, when you have two or three that come out of that and that you feel strongly about, continue to break them down.

I cannot stress the importance of POCs enough. It’s very important to identify that one or two that you really feel strongly about. Once you identify those, then talk to the industry experts that support those technologies, talk to the engineers, really get the insight from the inside out on how they are innovating and what their plan is for the future of their products to make sure that you are on a solid footprint.

Most success stories involve a leap of faith. With machine learning and AI, we are now taking a leap that is backed by factual knowledge and analyzing techniques to stay ahead of threats. No longer are we relying on those virus definitions and those virus updates that can be lagging sometimes.

Gardner: Before we sign off, where do you go to get your information? Where would you recommend other people go to find out more?

Kater: Honestly, I was very fortunate that HVI at its inception fell into my lap. When I was looking around at different products, we just hit the market at the right time. But to be honest with you, I cannot stress enough, again, run those POCs.

If you are interested in finding out more about Bitdefender and its product line up, Bitdefender has an excellent set of engineers on staff; they are very knowledgeable, they are very well-rounded in all of their individual disciplines. The Bitdefender website is very comprehensive. It contains many outside resources, along with inside labs reporting, showcasing just what their capabilities are, with a lot of unbiased opinions.

They have several video demos and technical white papers listed out there; you can find them all across the web, and you can request the full product demo when you are ready for it and run that POC of Bitdefender products in-house on your network. They also have presales support that will help you all along the way.

Bitdefender HVI will revolutionize your data center security capacity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Bitdefender.

You may also be interested in:

Case study: How HCI-powered private clouds accelerate efficient digital transformation

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn't necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds -- and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively -- or even more cost effectively -- than you can in the public clouds. There are certainly places for both in the market.

Gardner: It's going to vary, of course, from company to company -- and even department to department within a company -- but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what's really coming down the pipe --  we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can't mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but it’s in production, it’s live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure -- but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, "Do it again for the whole world, but save us a bunch of money." The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them "cool stuff."

They asked, “What's new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.

McKittrick: Absolutely, yes. We also gained failover capabilities and disaster recovery within these regional data centers. We ended up going from -- depending on whom you spoke to -- somewhere between seven and 19 data centers globally, down to three. The data center footprint shrank massively. Just in the US, we went to one data center; we got rid of the other data center completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key, hyper-convergence and virtualization.

The other key enabling technology was data de-duplication, so the ability to shrink the data and then be able to move it from place to place without crushing bandwidth requirements, because you were only moving the changes, the change blocks.
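To make the change-block idea concrete, here is a minimal illustrative sketch -- not SimpliVity's actual mechanism; the block size and function names are invented -- of how fixed-size block hashing lets a replicator ship only the blocks that changed:

    import hashlib

    BLOCK_SIZE = 4096  # illustrative block size; real systems tune this

    def block_hashes(data):
        # Split data into fixed-size blocks and fingerprint each one.
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def changed_blocks(old, new):
        # Return the indexes of blocks whose fingerprints differ.
        old_h, new_h = block_hashes(old), block_hashes(new)
        length = max(len(old_h), len(new_h))
        return [i for i in range(length)
                if i >= len(old_h) or i >= len(new_h) or old_h[i] != new_h[i]]

    # Only the changed blocks would cross the WAN, not the whole data set.
    old_copy = b"A" * 16384
    new_copy = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
    print(changed_blocks(old_copy, new_copy))  # -> [2]

Production deduplication engines are far more sophisticated (variable-size chunking, global fingerprint indexes), but the bandwidth win comes from the same principle: only blocks whose fingerprints differ cross the wire.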

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built in to the solution itself. So we also deployed a separate data archive, but that's different than backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: Now, there is more than just the physical approach to IT, as you described it; there is also the budgetary, financial approach. So how did they get the benefit of the OpEx approach that people are fond of with public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn't really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn't work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That's what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That's when hyper-convergence came through.

The cost analysis was already done, because I was already costing it with a refresh, including compute and traditional SAN storage. The numbers I had over a five-year period – just what we would have spent on hardware and infrastructure costs, and not including network and bandwidth – would have been $55 million over five years, and we ended up doing it for $15 million.
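As rough arithmetic on those figures: ($55M - $15M) / $55M ≈ 0.73, so the HCI approach came in at roughly 73 percent below the projected refresh cost -- about $8 million per year avoided across the five-year window.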

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE's portfolio.

McKittrick: This thing is now built and running, and it's been two years since inception. So that's a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it -- and he is a very forward-thinking technologist -- he always liked to say, "It's just VMware." So the beauty of it, from their perspective, was that they could just deploy on VMware virtualization. Everyone in their organization knows how to work with VMware, they just deploy that, and they move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers of things that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have -- from the server on down -- folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors. That was really it, and with VMware.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture -- yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds, composable infrastructure, and automated bursting and scale-out -- and soon combining that with multi-cloud options via HPE New Stack -- this gives them the best of all worlds.

McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let's revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process 

McKittrick: Great question. The technology was the easy part. This was my first global HCI roll out, and I have been in the business well over 20 years. The differences come when you are messing with people -- moving their cheese, and messing with their rice bowl. It’s profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on-board. They must understand what's happening, why it's happening, and then you try to get all those people pulling in the same direction. Otherwise, you end up in a massive morass and things don't get done, or they become almost unmanageable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How IoT and OT collaborate to usher in the data-driven factory of the future

The next BriefingsDirect Internet of Things (IoT) technology trends interview explores how innovation is impacting modern factories and supply chains.

We’ll now learn how a leading-edge manufacturer, Hirotec, in the global automotive industry, takes advantage of IoT and Operational Technology (OT) combined to deliver dependable, managed, and continuous operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us to find the best factory of the future attributes is Justin Hester, Senior Researcher in the IoT Lab at Hirotec Corp. in Hiroshima, Japan. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's happening in the market with business and technology trends that’s driving this need for more modern factories and more responsive supply chains?

Hester: Our customers are demanding shorter lead times. There is a drive for even higher quality, especially in automotive manufacturing. We’re also seeing a much higher level of customization requests coming from our customers. So how can we create products that better match the unique needs of each customer?

As we look at how we can continue to compete in an ever-competitive environment, we are starting to see how the solutions from IoT can help us.

Gardner: What is it about IoT and Industrial IoT (IIoT) that allows you to do things that you could not have done before?

Hester: Within the manufacturing space, a lot of data has been there for years, for decades. Manufacturing has been very good at collecting data. The challenge we've had, though, is bringing in that data in real time, because the amount of data is so large. How can we act on that data quicker, not on a day-by-day basis or week-by-week basis, but actually on a minute-by-minute basis, or a second-by-second basis? And how do we take that data and contextualize it?

It's one thing in a manufacturing environment to say, “Okay, this machine is having a challenge.” But it’s another thing if I can say, “This machine is having a challenge, and in the context of the factory, here's how it's affecting downstream processes, and here's what we can do to mitigate those downstream challenges that we’re going to have.” That’s where IoT starts bringing us a lot of value.

The analytics, the real-time contextualization of that data that we’ve already had in the manufacturing area, is very helpful.
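As a simple illustration of what contextualizing a machine event can mean, here is a hypothetical sketch -- the line layout, station names, and suggested action are invented, not Hirotec's -- that maps a single machine alarm to the downstream stations it would starve:

    # Hypothetical line layout: each station feeds the next one downstream.
    LINE = ["stamping", "hemming", "welding", "inspection", "shipping"]

    def downstream_impact(faulted_station):
        # Stations that will be starved of parts if this one stops.
        idx = LINE.index(faulted_station)
        return LINE[idx + 1:]

    def contextualize(event):
        # Turn a raw machine alarm into a factory-level event.
        enriched = dict(event)
        enriched["affected_downstream"] = downstream_impact(event["station"])
        enriched["suggested_action"] = "re-route work-in-progress or pull buffer stock"
        return enriched

    raw_event = {"station": "hemming", "alarm": "servo overload", "severity": "high"}
    print(contextualize(raw_event))

The point is that the same raw alarm becomes far more actionable once it carries factory-level context.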

Gardner: So moving from what may have been a gather, batch, analyze, report process -- we’re now taking more discrete analysis opportunities and injecting that into a wider context of efficiency and productivity. So this is a fairly big change. This is not incremental; this is a step-change advancement, right?

A huge step-change 

Hester: It’s a huge change for the market. It's a huge change for us at Hirotec. One of the things we like to talk about is what we jokingly call the Tuesday Morning Meeting. We talk about this idea that in the morning at a manufacturing facility, everyone gets together and talks about what happened yesterday, and what we can do today to make up for what happened yesterday.

Instead, now we’re making that huge step-change to say,  “Why don't we get the data to the right people with the right context and let them make a decision so they can affect what's going on, instead of waiting until tomorrow to react to what's going on?” It’s a huge step-change. We’re really looking at it as how can we take small steps right away to get to that larger goal.

In manufacturing areas, there's been a lot of delay, confusion, and hesitancy to move forward because everyone sees the value, but it's this huge change, this huge project. At Hirotec, we’re taking more of a scaled approach, and saying let's start small, let’s scale up, let’s learn along the way, let's bring value back to the organization -- and that's helped us move very quickly.

Gardner: We’d like to hear more about that success story but in the meantime, tell us about Hirotec for those who don't know of it. What role do you play in the automotive industry, and how are you succeeding in your markets?

Hester: Hirotec is a large, tier-1 automotive supplier. What that means is we supply parts and systems directly to the automotive original equipment manufacturers (OEMs), like Mazda, General Motors, FCA, Ford, and we specialize in door manufacturing, as well as exhaust system manufacturing. So every year we make about 8 million doors, 1.8 million exhaust systems, and we provide those systems mainly to Mazda and General Motors, but also we provide that expertise through tooling.

For example, if an automotive OEM would like Hirotec’s expertise in producing these parts, but they would like to produce them in-house, Hirotec has a tooling arm where we can provide that tooling for automotive manufacturing. It's an interesting strategy that allows us to take advantage of data both in our facilities, but then also work with our customers on the tooling side to provide those lessons learned and bring them value there as well.

Gardner: How big of a distribution are we talking about? How many factories, how many countries; what’s the scale here?

Hester: We are based in Hiroshima, Japan, but we’re actually in nine countries around the world, currently with 27 facilities. We have reached into all the major continents with automotive manufacturing: we’re in North America, we’re in Europe, we’re all throughout Asia, in China and India. We have a large global presence. Anywhere you find automotive manufacturing, we’re there supporting it.

Gardner: With that massive scale, very small improvements can turn into very big benefits. Tell us why the opportunity in a manufacturing environment to eke out efficiency and productivity has such big payoffs.

Hester: Especially in manufacturing, what we find when we get to those large scales like you're alluding to is that a 1 percent or 2 percent improvement has huge financial benefits. The other thing is that in manufacturing, especially automotive manufacturing, we tend to standardize our processes, and within Hirotec we've done a great job of standardizing around that world-class leadership in door manufacturing.

And so what we find is when we get improvements not only in IoT but anywhere in manufacturing, if we can get 1 percent or 2 percent, not only is that a huge financial benefit but because we standardized globally, we can move that to our other facilities very quickly, doubling down on that benefit.

Gardner: Well, clearly Hirotec sees this as something to really invest in, they’ve created the IoT Lab. Tell me a little bit about that and how that fits into this?

The IoT Lab works

Hester: The IoT Lab is a very exciting new group, it's part of our Advanced Engineering Center (AEC). The AEC is a group out of our global headquarters and this group is tasked with the five- to 10-year horizon. So they're able to work across all of our global organizations with tooling, with engineering, with production, with sales, and even our global operations groups. Our IoT group goes and finds solutions that can bring value anywhere in the organization through bringing in new technologies, new ideas, and new solutions.

And so we formed the IoT Lab to find how can we bring IoT-based solutions into the manufacturing space, into the tooling space, and how actually can those solutions not only help our manufacturing and tooling teams but also help our IT teams, our finance teams, and our sales teams.

Gardner: Let's dig back down a little bit into why IT, IoT and Operational Technology (OT) are into this step-change opportunity, looking for some significant benefits but being careful in how to institute that. What is required when you move to a more an IT-focused, a standard-platform approach -- across all the different systems -- that allows you to eke these great benefits?

Tell us about how IoT as a concept is working its way into the very edge of the factory floor.

Hester: One of the things we’re seeing is that IT is beginning to meld, like you alluded to, with OT -- and there really isn't a distinction between OT and IT anymore. What we're finding is that we’re starting to get to these solution levels by working with partners such as PTC and Hewlett Packard Enterprise (HPE) to bring our IT group and our OT group all together within Hirotec and bring value to the organization.

What we find is that there is no longer a case where OT has a need that becomes a request for IT to support, or where IT has a need and goes to OT for support. What we are finding is we have organizational needs, and we're coming to the table together to make these changes. And that within itself is bringing even more value to the organization.

Instead of coming last-minute to the IT group and saying, “Hey, we need your support for all these different solutions, and we’ve already got everything set, and you are just here to put it in,” what we are seeing, is that they bring the expertise in, help us out upfront, and we’re finding better solutions because we are getting experts both from OT and IT together.

We are seeing this convergence of these two teams working on solutions to bring value. And they're really moving everything to the edge. So where everyone talks about cloud-based computing -- or maybe it’s in their data center -- where we are finding value is in bringing all of these solutions right out to the production line.

We are doing data collection right there, but we are also starting to do data analytics right at the production line level, where it can bring the best value in the fastest way.

Gardner: So it’s an auspicious time because just as you are seeking to do this, the providers of technology are creating micro data centers, and they are creating Edgeline converged systems, and they are looking at energy conservation so that they can do this in an affordable way -- and with storage models that can support this at a competitive price.

What is it about the way that IT is evolving and providing platforms and systems that has gotten you and The IoT Lab so excited?

Excitement at the edge  

Hester: With IoT and IT platforms, originally to do the analytics, we had to go up to the cloud -- that was the only place where the compute power existed. Solution providers now are bringing that level of intelligence down to the edge. We’re hearing some exciting things from HPE on memory-driven computing, and that's huge for us because as we start doing these very complex analytics at the edge, we need that power, that horsepower, to run different applications at the same time at the production line. And something like memory-driven solutions helps us accomplish that.

It's one thing to have higher-performance computing, but another thing to gain edge computing that's proper for the factory environment. A manufacturing environment is not conducive to standard servers in a standard rack, which need dust protection and heat protection -- protection that doesn't exist in a manufacturing environment.

The other thing we're beginning to see with edge computing, which HPE provides with Edgeline products, is that we have computers with high power and the ability to perform the analytics and data collection -- but they're also proper for the environment.

I don't need to build out a special protection unit with special temperature control, humidity control – all of which drives up energy costs, which drives up total costs. Instead, we’re able to run edge computing in the environment as it should be on its own, protected from what comes in a manufacturing environment -- and that's huge for us.

Gardner: They are engineering these systems now with such ruggedized micro facilities in mind. It's quite impressive that the very best of what a data center can do, can now be brought to the very worst types of environments. I'm sure we'll see more of that, and I am sure we'll see it get even smaller and more powerful.

Do you have any examples of where you have already been able to take IoT in the confluence of OT and IT to a point where you can demonstrate entirely new types of benefits? I know this is still early in the game, but it helps to demonstrate what you can do in terms of efficiency, productivity, and analytics. What are you getting when you do this well?

IoT insights save time and money

Hester: Taking the stepped strategy that we have, we actually started at Hirotec very small, with only eight machines in North America, and we were just looking to see if the machines were on and running. Even from there, we saw value, because all of a sudden we were getting that real-time contextualized insight into the whole facility. We then quickly moved over to one of our production facilities in Japan, where we have a brand-new robotic inspection system, and this system uses vision sensors, laser sensors, force sensors -- and it's actually inspecting exhaust systems before they leave the facility.

We very quickly implemented an IoT solution in that area, and all we did was we said, “Hey, we just want to get insight into the data, so we want to be able to see all these data points. Over 400 data points are created every inspection. We want to be able to see this data, compared in historical ways -- so let’s bring context to that data, and we want to provide it in real-time.”
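As a rough illustration of comparing each new inspection against historical trends -- the data point name, values, and threshold below are hypothetical, not Hirotec's actual analytics -- a simple baseline check might flag readings that drift from their history:

    from statistics import mean, stdev

    def drifting_points(history, current, z_threshold=3.0):
        # Flag data points whose latest reading sits far from its own history.
        flagged = []
        for name, reading in current.items():
            past = history.get(name, [])
            if len(past) < 2:
                continue  # not enough history to judge
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(reading - mu) / sigma > z_threshold:
                flagged.append(name)
        return flagged

    history = {"weld_seam_gap_mm": [0.50, 0.52, 0.49, 0.51, 0.50]}
    current = {"weld_seam_gap_mm": 0.62}
    print(drifting_points(history, current))  # -> ['weld_seam_gap_mm']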

What we found from just those two projects very quickly is that we're bringing value to the organization because now our teams can go in and say, “Okay, the system is doing its job, it's inspecting things before they leave our facility to make sure our customers always get a high-quality product.” But now, we’re able to dive in and find different trends that we weren't able to see before because all we were doing is saying, “Okay, this system leaves the facility or this system doesn't.”

And so already just from that application, we’ve been able to find ways that our engineers can even increase the throughput and the reliability of the system because now they have these historical trends. They were able to do a root-cause analysis on some improvements that would have taken months of investigation; it was completed in less than a week for us.

And so that's a huge value -- not only in that my project costs go down but now I am able to impact the organization quicker, and that's the big thing that Hirotec is seeing. It’s one thing to talk about the financial cost of a project, or I can say, “Okay, here is the financial impact,” but what we are seeing is that we’re moving quicker.

And so, we're having long-term financial benefits because we’re able to react to things much faster. In this case, we’re able to reduce months of investigation down to a week. That means that when I implement my solution quicker, I'm now bringing that impact to the organization even faster, which has long-term benefits. We are already seeing those benefits today.

Gardner: You’ll obviously be able to improve quality, you’ll be able to reduce the time to improving that quality, gain predictive analytics in your operations, but also it sounds like you are going to gain metadata insights that you can take back into design for the next iteration of not only the design for the parts but the design for the tooling as well and even the operations around that. So that intelligence at the edge can be something that is a full lifecycle process, it goes right back to the very initiation of both the design and the tooling.

Data-driven design, decisions 

Hester: Absolutely. These solutions can't live in a silo. We're really starting to look at these ideas of what some people call the Digital Thread, the Digital Twin. We're starting to understand what that means as we loop this data back to our engineering teams -- what kind of benefits can we see, how can we improve our processes, how can we drive that out into the organization?

And one of the biggest things with IoT-based solutions is that they can't stay inside this box, where we talked about OT to IT, we are talking about manufacturing, engineering, these IoT solutions at their best, all they really do is bring these groups together and bring a whole organization together with more contextualized data to make better decisions faster.

And so, exactly to your point, as we are looping back, we’re able to start understanding the benefit we’re going to be seeing from bringing these teams together.

Gardner: One last point before we close out. It seems to me as well that at a macro level, this type of data insight and efficiency can be brought into the entire supply chain. As you're providing certain elements of an automobile, other suppliers are providing what they specialize in, too, and having that quality control and integration and reduced time-to-value or mean-time-to-resolution of the production issues, and so forth, can be applied at a macro level.

So how does the automotive supplier itself look at this when it can take into consideration all of its suppliers like Hirotec are doing?

Start small 

Hester: It's a very early phase, so a lot of the suppliers are starting to understand what this means for them. There is definitely a macro benefit that the industry is going to see in five to 10 years. Suppliers now need to start small. One of my favorite pictures is a picture of the ocean and a guy holding a lighter. It [boiling the ocean] is not going to happen. So we see these huge macro benefits of where we’re going, but we have to start out somewhere.

A lot of suppliers, what we’re recommending to them, is to do the same thing we did, just start small with a couple of machines, start getting that data visualized, start pulling that data into the organization. Once you do that, you start benefiting from the data, and then start finding new use-cases.

As these suppliers all start doing their own small projects and working together, I think that's when we are going to start to see the macro benefits but in about five to 10 years out in the industry.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       Veikkaus digitally transforms as it emerges as new combined Finnish national gaming company

 ·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

How a Florida school district tames the wild west of education security at scale and on budget

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West.

For the Clay County School District in Florida, a team of IT administrators is grabbing the bull by the horns nonetheless to create a new culture of computing safety -- without breaking the bank.

The next BriefingsDirect security insights discussion examines how Clay County is building a secure posture for their edge, network, and data centers while allowing the right mix and access for exploration necessary in an educational environment.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn how to ensure that schools are technically advanced and secure at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District, and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It’s the culture of change -- of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don’t think it’s going to end anytime soon. But we are starting to get our key players on board with understanding that you can't clear-text Social Security numbers and credit card numbers and personally identifiable information (PII). It has been an interesting ride for us, let’s put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to help give reinforcement for people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been?

Skipper: Education is one of those weird areas where the software development has always been lacking on the security side of the house. It has never even been inside the room. So one of the things that we have tried to do in education, at least with the Clay County School District, is to modify that view by doing change management. We are trying to introduce a security focus. We try to interject ourselves and highlight areas that might be a bad practice.

One of our vendors uses plain text for passwords, and so we went through with them and showed them how that’s a bad practice, and we made a little bit of improvement with that.

I evaluate our policies and how we manage the domains, maybe finding some stuff that came from a long time ago that's no longer needed. We can pull the information out -- whereas before, they had put all the Social Security numbers into a document that was no longer needed. We have been trying really hard to figure that stuff out and then to knock it down, as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people's perceptions, behaviors, culture, it’s useful to have both the carrot and a stick approach.

So to you Rich, what's been working in terms of a carrot? How do you incentivize people? What works in practice there?

Perkins: That's a tough one. We don't really have a carrot that we use. We basically say, “If you are doing the wrong things, you are not going to be able to use our network.”  So we focus more on negatives.

The positives would be you get to do your job. You get to use the Internet. We don't really give them something more. We see security as directly intertwined with our customer service. Every person we have is our customer and our job is to protect them -- and sometimes that's from themselves.

So we don't really have a carrot-type of system. We don't let students play games as a reward for not causing problems. We give everybody the same access and treat everybody the same. Either you are a student and you get this level of access, or you are a staff member and you get this level of access, or you don't get access.

Gardner: Let’s get background on the Clay County School District. Tell us how many students you have, how many staff administrators, the size and scope of your school district?

Bunkley: Our school district is the 22nd largest in Florida. We are right on the edge of small and medium for Florida, which in most states would be a very large school district. We run about 38,500 students.

And as far as our IT team -- which covers our student information system, our Enterprise Resource Planning (ERP) system, security, down to desktop support, network infrastructure support, and our web services -- we have about 48 people total in our department.

Our scope is literally everything. For some reason IT means that if it plugs into a wall, we are responsible for it. That's generally a true statement in education across the board, where the IT staff tends to be a Jack-of-all-trades, and we fix everything.

Practical IT

Gardner: Where you are headed in terms of technology? Is there a one-to-one student-to-device ratio in the works? What sort of technology do you enable for them?

Bunkley: I am extremely passionate about this, because the one-to-one scenario seems to be the buzzword, and we generally despise buzzwords in this office and we prefer a more practical approach.

The idea of one-to-one is itself to me flawed, because if I just throw a device in a student's hand, what am I actually doing besides throwing a device in a student's hand? We haven't trained them. We haven’t given them the proper platform. All we have done is thrown technology.

And when I hear the terms, well, kids inherently know how to use technology today; it kind of just bothers me, because kids inherently know how to use social media, not technology. They are not production-driven, they are socially driven, and that is a sticking point with me.

We are in fact moving to a one-to-one, but in a nontraditional sense. We have established a one-to-one platform so we can introduce a unified platform for all students and employees to see through a portal system; we happen to use ClassLink, there are various other vendors out there, that’s just the one we happen to use.

We have integrated that in moving to Google Apps for Education and we have a very close relationship with Google. It’s pretty awesome, to be quite honest with you.

So we are moving in the direction of Chromebooks, because it’s just a fiscally more responsible move for us.

I know Microsoft is coming out with Windows 10 S, it’s kind of a strong move on their part. But for us, just because we have the expertise on the Google Apps for Education, or G Suite, it just made a lot of sense for us to go that direction.

So we are moving in one-to-one now with the devices, but the device is literally the least important -- and the last -- step in our project.

Non-stop security, no shenanigans

Gardner: Tell us about the requirements now for securing the current level of devices, and then for the new one. It seems like you are going to have to keep the airplane flying while changing the wings, right? So what is the security approach that works for you that allows for that?

Skipper: Clay County School District has always followed trends as far as devices go. So we actually have a good mixture of devices in our network, which means that no one solution is ever the right solution.

So, for example, we still have some iPads out in our networks, we still have some older Apple products, and then we have a mixture of Chromebooks and also Windows devices. We really need to make sure that we are running the right security platform for the full environment.

We are transitioning more and more to a take-home philosophy -- that's where we as an IT department see this going -- so that if the decision is made to send devices home with the entire student population, we are going to be ready to go.

We have coordinated with our content filter company, and they have some extensions that we can deploy that lock the Chromebooks into a filtered state regardless of their network. That's been really successful in identifying, and sometimes blocking, those late-night searches by students. We have also been able to identify some shenanigans that might be taking place due to some interesting web searches that they might do over YouTube, for example. That's worked really well.

Our next objective is to figure out how to secure our Windows devices and possibly even the Mac devices. While our content filter does a good job as far as securing the content on the Internet, it’s a little bit more difficult to deploy into a Windows device, because users have the option of downloading different Internet browsers. So, content filtering doesn’t really work as well on those.

I have deployed Bitdefender to my laptops, and also to take-home Apple products. That allows me to put in more content filtering, and use that to block people from malicious websites that maybe the content filter didn’t see or was unable to see due to a different browser being used.

In those aspects we definitely are securing our network down further than it ever has been before.

Block and Lock

Perkins: With Bitdefender, one of the things we like is that if we have those devices go off network, we can actually have it turn on the Bitdefender Firewall that allows us to further lock down those machines or protect them if they are in an open environment, like at a hotel or whatever, from possible malicious activity.

And it allows us to block executables at some point. So we can actually go in and say, "No, I don't want you to be able to run this browser, because I can't do anything to protect you, I can't watch what you do, or I can't keep you from doing things you shouldn't do." So those are all very useful tools in a single pane of glass, where we can see all of those devices at one time and monitor and manage them. It saves us a lot of time.

Bunkley: I would follow up on that with a base concept, Dana, and our base concept is an external network. We come from the concept that we are an everywhere network. We are not only aiming to defend our internal network while you are here, and maybe do some things while you are at home; we are literally an externally built network, where our network extends directly down into the student's and teacher's home.

We have gone as far as moving everything we physically can out of this network, right down to our firewall. We are moving our domain controllers external to the network to create literally an everywhere network. And so our security focus is not just internal; it is focused on external first, then internal.

Gardner: With security products, what have you been using, what wasn't working, and where do you expect to go next given those constraints?

No free lunch

Perkins: Well, we can tell you that “free” is not always the best option; as a matter of fact, it’s almost never a good option, but we have had to deal with it.

We were previously using an antivirus called Avast, and it’s a great home product. We found out that it has not been the best business-level product. It’s very much marketed to education, and there are some really good things about it. Transferring away from it hasn’t been the easiest because it’s next to impossible to uninstall. So we have been having some problems with that.

We have also tested some other security measures and programs along the way that haven’t been so successful. And we are always in the process of evaluating where we are. We are never okay with status quo. Even if we achieve where we want to be, I don't think any of us will be satisfied, and that’s actually something that a lot of this is built on -- we always want to go that step further. And I know that’s cliché, but I would say for an institution of this size, the reason we are able to do some of the stuff is the staff that has been assembled here is second to none for an educational institution.

So even in the processes that we have identified, which were helter-skelter before we got here, we have some more issues to continue working out, but we won’t be satisfied with where we are even if we achieve the task.

Skipper: One of the things that our office actually hates is just checking the box on a security audit. I mean, we are very vocal to the auditors when they come in. We don’t do things just to satisfy their audit. We actually look at the audit and we look at the intent of the question and if we find merit in it, we are going to go and meet that expectation and then make it better. Audits are general. We are going to exceed and make it a better functioning process than just saying, “Yes, I have purchased an antivirus product,” or “I have purchased x.” To us that’s unacceptable.

Bunkley: Audits are a good thing, and nobody likes to do them because they are time-consuming. But you do them because they are required by law, for our institution anyways. So instead of just having a generic audit, where we ignore the audit, we have adopted the concept of the audit as a very useful thing for us to have as a self-reflection tool. It’s nice to not have the same set of eyes on your work all the time. And instead of taking offense to someone coming in and saying, “You are not doing this good enough,” we have literally changed our internal culture here, audits are not a bad thing; audits are a desired thing.

Gardner: Let’s go around the table and hear how you began your journey into IT and security, and how the transition to an educational environment went.

IT’s the curriculum

Bunkley: I started in the banking industry. Those hours were crazy and the pressure was pretty high. So as soon as I left that after a year, I entered education, and honestly, I entered education because I thought the schedule was really easy and I kind of copped out on that. Come to find out, I am working almost as many hours, but that’s because I have come to love it.

This is my 17th year in education, so I have been in a few districts now. Wholesale change is what I have been hired to do, that’s also what I was hired here to do in Clay. We want to change the culture, make IT part of the instruction instead of a separate segment of education.

We have to be interwoven into everything; otherwise we are going to be on an island, and the last time I heard, the definition of education is to educate children. So IT can never by itself be a high-functioning department in education. So we have decided instead to go to instruction, go to professional development, go to administration, and interject ourselves.

Gardner: Jon, tell us about your background and how the transition has been for you.

Skipper: I was at active-duty Air Force until 2014 when I retired after 20 years. And then I came into education on the side. I didn’t really expect this job, wasn’t mentally searching for it. I tried it out, and that was three years ago.

It’s been an interesting environment. Education, and especially a small IT department like this one, is one of those interesting places where you can come and really expand on your weak areas. So that’s what I actually like about this. If I need to practice on my group policy knowledge, I can dive in there and I can affect that change. Overall this has been an effective change, totally different from the military, a lot looser as far as a lot of things go, but really interesting.

Gardner: Rick, same question to you, your background and how did the transition go?

Perkins: I spent 21 years in the military; I was Navy. When I retired in 2010, I went to work for a smaller district in education, mainly because they were the first one to offer me a job. In that smaller district -- unlike here, where we have eight people doing operations in this big department, as Jeremy understands from where he came from -- it was pretty much me doing every aspect of it, so you do a little security, you do a little bit of everything, which I enjoyed because you are your own boss, but you are not your own boss.

You still have people residing over you and dictating how you are going to work, but I really enjoyed the challenge. Coming from IT security in the military and then coming into education, it’s almost a role reversal where we came in and found next to no policies.

I am used to a black-and-white world. So we are trying to interject some of that and some of the security best practices into education. You have to be flexible because education is not the military, so you can’t be that stringent. So that’s a challenge.

Gardner: What are you using to put policies in place enforce them? How does that work?

Policy plans

Perkins: From a [Microsoft] Active Directory side, we use group policy like most people do, and we try to automate it as much as we can. We are switching over, on the student side, very heavily to Google; they effectively have their own version of Active Directory with group policy. And then I will let Jon speak more to the security side, though we have used various programs like PDQ for our patch management, which allows us to push out updates. We use some logging systems with ManageEngine. And then, as we have said before, we use Bitdefender to push a lot of policy and security out as well, and we've been reevaluating some other things.

We also use SolarWinds to monitor our network and we actually manage changes to our network and switching using SolarWinds, but on the actual security side, I will let Jon get more specific for you.

Skipper: When we came in, there was a fear that having too much in policy equated to too much auditing overhead. One of the first things we did was identify what we could lock down, and the easiest one was the content filter.

The content filter met such stipulations as making sure adult material is not acceptable on the network. We had that down. But it didn't really take into account the dynamic nature of the Internet, where sites are popping up every minute or second -- how do you maintain that for unclassified and uncategorized sites?

So one of the things we did was look at a vendor and ask, okay, does this vendor have a better product for that aspect of it? We got that working, and I think that's been working a lot better. Then we moved down the list: okay, cool, now we have content filtering down, let's move on to the rest of the network. A lot of this is about finding someone else who is already doing it well, borrowing their work, and making it our own.

We looked into some of the bigger school districts to see how they are doing it -- Chicago and Los Angeles, I think. We both looked at some of their policies, where we could find them. I found a lot in higher education, in some of the universities; their policies are a lot more along the lines of where we want to be. I think they have it better than some of the K-12s do.

So we have been going through there, and we are going to have to rewrite policy -- we are in an active rewrite of our policies right now. We are taking all of those in, looking at them, trying to figure out which ones work in our environment, and then making sure we do a really good search and replace.

Gardner: We have talked about people, process and technology. We have heard that you are on a security journey and that it’s long-term and culturally oriented.

Let's look at this then as to what you get when you do it right, particularly vis-à-vis education. Do you have any examples of where you have been able to put in the right technology, add some policy and process improvements, and then culturally attune the people? What does that get for you? How do you turn a problem student into a computer scientist at some point? Tell us some of the examples of when it works, what it gets you.

Positive results

Skipper: When we first got in here, we were a Microsoft district. We had some policies in place to help prevent data loss, and stuff like that.

One of the first things we did is review those policies and activate them, and we started getting some hits. We were surprised at some of the hits that we saw, and what we saw going out. We already knew we were moving to the Google networks, continuing the process.

We researched a lot, and one of the things we discovered is that with just a minor tweak to a user's procedures, we could introduce that user to email encryption and get them used to using it, for example. With the Gmail solution, we are able to add an extension, and that extension actually looks at their email as it goes out, finds keywords -- or it may be PII -- and automatically encrypts the email, preventing those kinds of breaches from going out there. So that's really been helpful.
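As a generic sketch of the kind of check such an extension performs -- the patterns and keyword list here are illustrative, not the actual product's rules -- an outgoing-mail scan might look like:

    import re

    # Illustrative patterns only; a real DLP rule set is far more extensive.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    KEYWORDS = {"social security", "ssn", "confidential"}

    def needs_encryption(subject, body):
        # Return True if the outgoing message appears to contain PII.
        text = (subject + "\n" + body).lower()
        if SSN_PATTERN.search(text):
            return True
        return any(keyword in text for keyword in KEYWORDS)

    print(needs_encryption("Re: records", "Student SSN is 123-45-6789"))  # True
    print(needs_encryption("Lunch menu", "Pizza on Friday"))              # False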

As far as taking a student who may be on the wrong path and reeducating them and bringing them back into the fold, Bitdefender has actually helped out on that one.

We had a student a while back who went out to YouTube and did a simple search on how to crash the school network, and he found about five links. He researched those links and found that a batch file of a certain type would crash a school server.

He implemented it and started trying to launch that attack, and Bitdefender was able to see the batch file, see what it did, and prevent it. By quarantining the file, it reported the attack very quickly, from the moment he introduced it, and it identified the student. We were able to sit down with the administrators and talk to the student about that process and educate them on the dangers of attacking a school network and the possible repercussions of it.
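For illustration only -- this is a naive sketch, not how Bitdefender actually detects such attacks -- even a simple signature scan conveys the idea of spotting a known-bad pattern in a batch file before it runs:

    # Hypothetical signatures for destructive batch-file patterns.
    SUSPICIOUS_PATTERNS = [
        "%0|%0",               # classic fork bomb
        "start %0",            # script relaunching itself in a loop
        "del /f /s /q c:\\",   # mass deletion
        "shutdown /s /t 0",    # immediate shutdown
    ]

    def looks_malicious(batch_text):
        # Naive check: does the script contain a known-bad pattern?
        lowered = batch_text.lower()
        return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)

    sample = "@echo off\n:loop\nstart %0\ngoto loop\n"
    print(looks_malicious(sample))  # True -> quarantine and alert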

Gardner: It certainly helps when you can let them know that you are able to track and identify those issues, and then trace them back to an individual. Any other anecdotes about where the technology process and people have come together for a positive result?

Applied IT knowledge for the next generation

Skipper: One of the things that’s really worked well for the school district is what we call Network Academy. It’s taught by one of our local retired master chiefs, and he is actually going in there and teaching students at the high school level how to go as far as earning a Cisco Certified Network Associate (CCNA)-level IT certificate.

If a student comes in and they try hard enough, they will actually figure it out and they can leave when they graduate with a CCNA, which is pretty awesome. A high school student can walk away with a pretty major industry certification.

We like to try and grab these kids as soon as they leave high school, or even before they leave high school, and start introducing them to our network. They may have a different viewpoint on how to do something that’s revolutionary to us.

But we like having that aspect of it, we can educate those kids who are coming in and  getting their industry certifications, and we are able to utilize them before they move on to a college or another job that pays more than we do.

Bunkley: Charlie Thompson leads this program that Jon is speaking of, and actually over half of our team has been through the program. We didn’t create it, we have just taken advantage of the opportunity. We even tailor the classes to some of the specific things that we need. We have effectively created our own IT hiring pipeline out of this program.

Gardner: Next let’s take a look to the future. Where do you see things going, such as more use of cloud services, interest in unified consoles and controls from the cloud as APIs come into play more for your overall IT management? Encryption? Where do you take it from here?

Holistic solutions in the cloud

Bunkley: Those are some of the areas we are focusing on heavily as we move to that "anywhere network." The unified platform for management is going to be a big deal to us; it is a big deal to us already. Encryption is something we take very seriously because we have a team of eight protecting the data of about 42,000 users.

If you consider the perfect cybercrime -- reaching down into a 7th or 8th grader's records, stealing all of their personal information, and taking that kid's identity and using it -- that kid won't even know that their identity has been stolen.

We consider that a very serious charge of ours to take on. So we will continue to improve our protection of the students’ and teachers’ PII -- even if it sometimes means protecting them from themselves. We take it very seriously.

As we move to the cloud, that unified management platform leads to a more unified security platform. As the operating systems continue to mature, they seem to be going different ways. And what’s good for Mac is not always good for Chrome, is not always good for Windows. But as we move forward with our projects we bring everything back to that central point -- can the three be operated from a single point of connection, so that we can save money moving forward? Just because it’s a cool technology and we want to do it doesn't mean it's the right thing for us.

Sometimes we have to choose an option that we don’t necessarily like as much, but pick it because it is better for the whole. As we continue to move forward, everything will be focused on that centralization. We can remain a small and flexible department to continue making sure that we are able to provide the services needed internally as well as protect our users.

Skipper: I think Jeremy hit it pretty solid on that one. As we integrate more with the cloud services, Google, etc., we are utilizing those APIs and we are leading the vendors that we use, pushing them into new areas. Lightspeed, for instance, is integrating more and more with Google and utilizing their API so that content filtering -- and even mobile device management (MDM) -- is more integrated into the Google and Apple platforms, to make sure that students are well protected and we have all the tools they need available at any given time.

We are really leaning heavily on more cloud services, and also the interoperability between APIs and vendors.

Perkins: Public education is changing more to the realm of college education where the classroom is not a classroom -- a classroom is anywhere in the world. We are tasked with supporting them and protecting them no matter where they are located. We have to take care of our customers either way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

You may also be interested in:

How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge -- and closer to consumers -- to support live, multi-screen Internet Protocol (IP) entertainment delivery. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected -- with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.

And guess what? Now they are watching it on the go -- on their telephones, on their iPads, on their laptops, anywhere. So they have to find the way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it.  So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change -- when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now -- with the new technology on the back-end -- to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That’s another big upside of the fragmentation, and the advent of IP technology -- all the way from content creation to making a program and distributing it. It gives the content creators access to the unique viewers, and the ability to really engage with them -- knowing what they like -- and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the unknown willingness of several generations of viewers to accept such advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to -- at the very edge of the signal distribution, the video signal distribution -- insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
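
To make the DAI idea concrete, here is a minimal sketch of how an edge component might pick an ad for a known viewer. The names, fields, and catalog are hypothetical illustrations, not any vendor's actual API:

    # Minimal, hypothetical sketch of edge-side ad selection for dynamic ad insertion (DAI).
    # ViewerProfile, AD_CATALOG, and choose_ad are illustrative names, not a real product API.

    from dataclasses import dataclass, field

    @dataclass
    class ViewerProfile:
        viewer_id: str
        interests: set = field(default_factory=set)   # e.g. {"sports", "travel"}
        region: str = "us-east"

    AD_CATALOG = [
        {"ad_id": "ad-001", "tags": {"sports"}, "regions": {"us-east", "us-west"}},
        {"ad_id": "ad-002", "tags": {"travel"}, "regions": {"us-east"}},
        {"ad_id": "ad-fallback", "tags": set(), "regions": set()},  # generic house ad
    ]

    def choose_ad(profile: ViewerProfile) -> str:
        """Return the ad that best matches the viewer's interests and region."""
        best_id, best_score = "ad-fallback", -1
        for ad in AD_CATALOG:
            if ad["regions"] and profile.region not in ad["regions"]:
                continue  # skip ads not cleared for this viewer's region
            score = len(ad["tags"] & profile.interests)
            if score > best_score:
                best_id, best_score = ad["ad_id"], score
        return best_id

    print(choose_ad(ViewerProfile("v-42", interests={"sports"})))  # -> ad-001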

But that means media and entertainment customers have to revisit the whole infrastructure. It’s not necessarily a rebuild; they can put in add-ons. They don’t have to throw away what they had, but they can maintain the legacy infrastructure and add on top of it the IP-enabled infrastructure to let them take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don’t know yet if I want to say that model doesn’t work anymore. Let’s let the CDN providers enhance their technology. But for sure, the volume of videos that we are consuming everyday is exponentially growing. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way so that it doesn’t put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the types of workflows that we implement at our customers’ sites today -- to make them efficient, to decide where storage should sit, at the edge or centrally, and to do transcoding just-in-time. These are the things that are being worked on. It’s a balance between available capacity and the number of programs that you want to send across to your viewers -- and how big your target market is.

The task for us on the back-end is to rethink the workflows in a much more efficient way. So, for example, this is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes the traditional way and then adding another infrastructure for multi-screen, on all those different platforms and then cable, and satellite, and IPTV, etc. -- why not design the whole workflow digital-first. This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge. And it’s there that the end-user requirements determine how they get the signal.
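
As a rough illustration of the digital-first idea, the sketch below defers format decisions until a device request arrives at the edge. The device profiles, formats, and function names are assumptions for illustration only, not a specific vendor workflow:

    # Hypothetical sketch of a "digital-first" workflow: keep one mezzanine asset centrally
    # and decide packaging (codec, container) only when a device asks at the edge.

    EDGE_PROFILES = {
        "smart_tv": {"codec": "hevc",  "container": "dash", "max_height": 2160},
        "phone":    {"codec": "h264",  "container": "hls",  "max_height": 1080},
        "set_top":  {"codec": "mpeg2", "container": "ts",   "max_height": 1080},
    }

    def package_at_edge(mezzanine_asset: str, device_type: str) -> dict:
        """Choose the delivery format just-in-time, instead of pre-rendering every variant."""
        profile = EDGE_PROFILES.get(device_type, EDGE_PROFILES["phone"])  # sensible default
        return {
            "source": mezzanine_asset,
            "codec": profile["codec"],
            "container": profile["container"],
            "max_height": profile["max_height"],
            "transcode": "just-in-time",   # only produce what this request needs
        }

    print(package_at_edge("mezzanine/evening-news.mxf", "smart_tv"))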

This is where we are going -- to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.

Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP -- so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It’s not only the “how,” it’s needing examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology, which is another way of saying IP-delivered, and what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there’s a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interfaces (SDIs) in the broadcast industry to IP. It’s basically saying analog video signals can be moved into digital. Then not only is there a digitally sharp signal, it’s an IP stream. The whole knowledge about how to handle IP is new to our own industry, to our own engineers, to our own customers. We also have to educate on what it takes to do this properly.

One of the key things in the media and entertainment industry is that there’s a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry – real-time sports and news -- this is where the money comes from. That’s why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP – it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion, billing, in understanding more about the end-user and allowing that data flow from the edge back to the core, and then back to the edge to happen.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera -- from content creation all the way to making a program and distributing it. And also, to your point, on monetizing all that with a management system. We have a long history of powering all the key customers in the world for their advertising system. It’s basically an automated system that allows the selling of advertising spots, and then to bill them -- and this is the engine of where our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion. In other words, how you target ads to the single viewer. And the challenge for them is now that they have a campaign, how do they design it to cater both to the linear traditional advertising system as well as the multi-screen or web mobile application? That's what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.

Gardner: The technology is there, you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners and what are the some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.

We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That's very important because having high-performance compute (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it's 99.9999 percent reliable, and that’s what we want, right?
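
As a point of reference, that availability figure translates into a very small downtime budget; a quick back-of-the-envelope calculation (assuming a 365-day year) shows how small:

    # Back-of-the-envelope downtime budget for a given availability target (365-day year assumed).
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

    def downtime_per_year(availability_pct: float) -> float:
        """Seconds of allowed downtime per year at the given availability percentage."""
        return SECONDS_PER_YEAR * (1 - availability_pct / 100)

    print(downtime_per_year(99.999))   # ~315 seconds, about 5.3 minutes ("five nines")
    print(downtime_per_year(99.9999))  # ~31.5 seconds ("six nines")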

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That's also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it's an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It's a big deal because we are going to be in a hybrid world. When most of our customers hear about cloud, we have to explain it to them. We explain that they can have their private cloud where they can run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. For example, such big names as Disney, Turner –- they have operations everywhere. For them, being able to optimize at the edge means that you have to create an architecture that is geographically distributed -- but is highly efficient where they have those operations. This type of technology helps us deliver more value to the key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they like to try and run their disaster recovery (DR) site in a more cost-effective manner. So the flexibility that an edge architecture provides to them is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don't have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it's distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy -- and DR is a key driver for this type of application.

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations -- from content creation, planning, and distribution -- to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV -- all from their cloud. They needed strong partners in data centers. So obviously we work with them closely, and the reason why they do this is simply to really take advantage of the flexibility. They don't want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. For the Oscars, for example, it’s one night. Are you going to recreate the whole infrastructure when you can just switch it on and off, if you will, out of your data center capacity? So that's the key application, the pop-up channels and the ability to easily try new programs.

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that's an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side -- these were two groups that didn't speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn't say yet that they are IT companies. The core strength is content, that is their brand, that's what they are good at -- creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can't lose concentration on their core business. I think the IT providers still have a very strong play there. It's always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.

Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that's dynamic, flexible, agile and cost-efficient. What does that get for them? Is it too soon, or do we have some metrics of success for companies that make this move toward a full digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time -- the total cost of ownership for the next 5 to 10 years, because that’s typically the life cycle of those infrastructures -- then definitely they do save money. On the operational expenditure (OPEX) side [of private cloud economics] it’s much more efficient, but they also have upside on additional revenue. So net-net, the return on investment (ROI) is much better. It’s hard to say precisely now because we are still in the early days, but it’s bound to be a much greater ROI.
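
To see how that math can work in principle, here is a deliberately simplified, undiscounted TCO comparison over a multi-year life cycle. Every figure below is a made-up placeholder for illustration, not customer data from the discussion:

    # Purely illustrative TCO/ROI sketch over a 5-to-10-year infrastructure life cycle.
    # All numbers are hypothetical placeholders.

    def total_cost(capex: float, annual_opex: float, years: int) -> float:
        """Simple undiscounted total cost of ownership over the given number of years."""
        return capex + annual_opex * years

    legacy   = total_cost(capex=1_000_000, annual_opex=800_000, years=7)  # legacy build-out
    ip_cloud = total_cost(capex=1_500_000, annual_opex=500_000, years=7)  # higher up-front, lower OPEX

    print(f"Legacy 7-year TCO:    ${legacy:,.0f}")              # $6,600,000
    print(f"IP/cloud 7-year TCO:  ${ip_cloud:,.0f}")            # $5,000,000
    print(f"Savings over 7 years: ${legacy - ip_cloud:,.0f}")   # $1,600,000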

Another specific DR example is in the Middle East. We have a customer there who decided to operate the DR and IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That's one of the intrinsic values of a hybrid IT approach -- you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. What I think is next is the number-one question that I get asked.

Three years ago customers would tell us, “Hey, IP is not going to work for live TV.” We convinced them otherwise, and now they know it’s working, it’s happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, as you put stuff on your cloud, it’s more secure.

And we are working with our technology providers. As I said earlier, the COTS providers are not all equal. We take it seriously. Cyber attacks on content and media are critical, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network, such as emails and VPNs, from your broadcast operations network. Okay, that’s easy to explain and that can be implemented, and that's where most of the attacks over the last five years have happened. This is solved.

However, the cyber attackers are becoming more clever, so they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with it over there. So I think it’s super important to be able to say, “Not only at the software level, but at the hardware firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important.”

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it's disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cures process and IT ills

The next BriefingsDirect healthcare thought leadership panel discussion examines how a global standards body, The Open Group, is working to improve how the healthcare industry functions.

We’ll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we’ll examine the forum’s Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about improving the processes and IT that better supports healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us about why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.

Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of gross domestic product (GDP) in the US, and approaching that level in other developed countries in Europe.

There is a general feeling that enterprise architecture is somewhat behind in this industry, relative to other industries. There are important gaps to fill that will help those stakeholders in healthcare -- whether they are in hospitals or healthcare delivery systems or innovation hubs in organizations of different sorts, such as consulting firms. They can better leverage IT to achieve business goals, through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.

Our mission, simply, is to help make healthcare information available when and where it’s needed and to accomplish that goal through architecting the healthcare enterprise. That’s what we hope to achieve.

Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of, how many organizations are involved?

Kipf: The HCF is made up of its members and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, so we are more than just the governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, we are vendor-neutral, and we are business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.

As we have many health standards out there, we really want to leverage [experience] from our 30 members to make standards work by providing the right type of tools, frameworks, and approaches. We partner a lot in the industry.

The healthcare industry is really a crowded place and there are many standard development organizations. There are many players. It’s quite vital as a forum that we reach out, collaborate, and engage with others to reach where we want to be.

Gardner: Gail, why is the role of the enterprise architecture function an important ingredient to help bring this together? What’s important about EA when we think about the healthcare industry?

Kalbfleisch: From an EA perspective, I don’t really think it matters whether you are talking about the healthcare industry, the finance industry, the personnel industry, or the gas and electric industry. If you look at any of those, the organizations that tend to be highly functioning have not just architecture -- because everyone has architecture for what they do -- but architecture that is documented and available for use by decision-makers, and by developers across the system, so that each part can work well together.

We know that within the healthcare industry it is exceedingly complicated, and it’s a mixture of a lot of different things. It’s not just your body and your doctor, it’s also your insurance, your payers, research, academia -- and putting all of those together.

If we don’t have EA, people new to the system -- or people who were deeply embedded into their parts of the system -- can’t see how that system all works together usefully. For example, there are a lot of different standards organizations. If we don’t see how all of that works together -- where everybody else is working, and how to make it fit together – then we’re going to have a hard time getting to interoperability quickly and efficiently.

Kipf: If you think of the healthcare industry, we’ve been very good at developing individual solutions to specific problems. There’s a lot of innovation and a lot of technology that we use. But there is an inherent risk of producing silos among the many stakeholders who, ultimately, work for the good of the patient. It's important that we move beyond individual solution building blocks to a more integrated approach based on architecture building blocks, and on common frameworks, tools, and approaches.

Gardner: Healthcare is a very complex environment and IT is very fast-paced. Can you give us an update on what the Healthcare Forum has been doing, given the difficulty of managing such complexity?

Bird’s-eye view mapping

Lee: The Healthcare Forum began with a series of white papers, initially focusing on an information model that has a long history in the federal government. We used enterprise architecture to evaluate the Federal Health Information Model (FHIM).  People began listening and we started to talk to people outside of The Open Group, and outside of the normal channels of The Open Group. We talked to different types of architects, such as information architects, solution architects, engineers, and initially settled on the problem that is essential to The Open Group -- and that is the problem of boundaryless information flow.

We need to get beyond the silos that Oliver mentioned and that Gail alluded to. As I mentioned in my opening comments, this is a huge industry, and Gail illustrated it by naming some of the stakeholders within the health, healthcare and wellness enterprises. If you think of your hospital, it can be difficult to achieve boundaryless information flow to enable your information to travel digitally, securely, quickly, and in a way that’s valid, reliable and understandable by those who send it and by those who receive it.  But if that is possible, it’s all to the betterment of the patient.

Initially, in our focus on what healthcare folks call interoperability -- what we refer to as boundaryless information flow -- we came to realize through discussions with stakeholders in the public sector, as well as the private sector and globally, that understanding how the different pieces are linked together is critical. Anybody who works in an organization or belongs to a church, school or family understands that sometimes getting the right message communicated from point A to point B can be difficult.

To address that issue, the HCF members have decided to create a Health Enterprise Reference Architecture (HERA) that is essentially a framework and a map at the highest level. It helps people see that what they do relates to what others do, regardless of their position in their company. You want to deliver value to those people, to help them understand how their work is interconnected, and how IT can help them achieve their goals.

Gardner: Oliver, who should be aware of and explore engaging with the HCF?

Kipf: The members of The Open Group themselves, many of them are players in the field of healthcare, and so they are the natural candidates to really engage with. In that healthcare ecosystem we have providers, payers, governing bodies, pharmaceuticals, and IT companies.

Those who deeply need planning, management and architecting -- to make big thinking a reality out there -- those decision-makers are the prime candidates for engagement in the Healthcare Forum. They can benefit from the kinds of products we produce, the reference architecture, and the white papers that we offer. In a nutshell, it’s the members, and it’s the healthcare industry, and the healthcare ecosystem that we are targeting.

Gardner: Gail, perhaps you could address the reference architecture initiative? Why do you see that as important? Who do you think should be aware of it and contribute to it?

Shared reference points

Kalbfleisch: Reference architecture is one of those building-block pieces that should be used. You can call it a template. It gives you words that other people can relate to, perhaps more easily than architecture-speak.

If you take that template, you can make it available to other people so that we can all be designing our processes and systems with a common understanding of our information exchange -- so that it crosses boundaries easily and securely. If we are all running on the same template, that’s going to enable us to identify how to start, what has to be included, and what standards we are going to use.

A reference architecture is one of those very important pieces that not only forms a list of how we want to do things, and what we agreed to, but it also makes it so that every organization doesn’t have to start from scratch. It can be reused and improved upon as we go through the work. If someone improves the architecture, that can come back into the reference architecture.

Who should know about it? Decision makers, developers, medical device innovators, people who are looking to improve the way information flows within any health sector -- whether it’s Oliver in Europe, whether it’s someone over in California, Australia, it really doesn't matter. Anyone who wants to make interoperability better should know about it.

My focus is on decision-makers, policymakers, process developers, and other people who look at it from a device-design perspective. One of the things that has been discussed within the HCF’s reference architecture work is the need to make sure that it’s all at a high-enough level, where we can agree on what it looks like. Yet it also must go down deeply enough so that people can apply it to what they are doing -- whether it’s designing a piece of software or designing a medical device.

Gardner: Jason, The Open Group has been involved with standards and reference architectures for decades, with such recent initiatives as the IT4IT approach, as well as the longstanding TOGAF framework. How does the HERA relate to some of these other architectural initiatives?

Building on a strong foundation

Lee: The HERA starts by using the essential components and insights that are built into the TOGAF Architecture Development Method (ADM) and builds from there. It also uses the ArchiMate language, but we have never felt restricted to using only those existing Open Group models that have been around for some time and are currently being developed further.

We are a big organization in terms of our approach, our forum, and so we want to draw from the best there is in order to fill in the gaps. Over the last few decades, an incredible amount of talent has joined The Open Group to develop architectural models and standards that apply across multiple industries, including healthcare. We reuse and build from this important work.

In addition, as we have dug deeper into the healthcare industry, we have found other issues – gaps -- that need filling. There are related topics that would benefit. To do that, we have been working hard to establish relationships with other organizations in the healthcare space, to bring them in, and to collaborate. We have done this with the Health Level Seven Organization (HL7), which is one of the best-known standards organizations in the world.

We are also doing this now with an organization called Healthcare Services Platform Consortium (HSPC), which involves academic, government and hospital organizations, as well as people who are focused on developing standards around terminology.

IT’s getting better all the time

Kipf: If you think about reference architecture in a specific domain, such as in the healthcare industry, you look at your customers and the enterprises -- those really concerned with the delivery of health services. You need to ask yourself the question: What are their needs?

And the need in this industry is a focus on the person and on the service. It’s also highly regulatory, so being compliant is a big thing. Quality is a big thing. The idea of lifetime evolution -- that you become better and better all the time -- that is very important, very intrinsic to the healthcare industry.

When we are looking into the customers out there that we believe that the HERA could be of value, it’s the small- to mid-sized and the large enterprises that you have to think of, and it’s really across the globe. That’s why we believe that the HERA is something that is tuned into the needs of our industry.

And as Jason mentioned, we build on open standards and we leverage them where we can. ArchiMate is one of the big ones -- not only the business language, but also a lot of the concepts are based on ArchiMate. But we need to include other standards as well, obviously those from the healthcare industry, and we need to deviate from specific standards where this is of value to our industry.

Gardner: Oliver, in order to get this standard to be something that's used, that’s very practical, people look to results. So if you were to take advantage of such reference architectures as HERA, what should you expect to get back? If you do it right, what are the payoffs?

Capacity for change and collaboration

Kipf: It should enable you to do a better job, to become more efficient, and to make better use of technology. Those are the kinds of benefits that you see realized. It’s not only that you have a place where you can model all the elements of your enterprise, where you can put and manage your processes and your services, but it’s also in the way you are architecting your enterprise.

It gives you the ability to change. From a transformation management perspective, we know that many healthcare systems have great challenges and there is this need to change. The HERA gives you the tools to get where you want to be, to define where you want to be -- and also how to get there. This is where we believe it provides a lot of benefits.

Gardner: Gail, similar question, for those organizations, both public and private sector, that do this well, that embrace HERA, what should they hope to get in return?

Kalbfleisch: I completely agree with what Oliver said. To add, one of the benefits that you get from using EA is a chance to have a perspective from outside your own narrow silos. The HERA should be able to help a person see other areas that they have to take into consideration, that maybe they wouldn’t have before.

Another value is to engage with other people who are doing similar work, who may have either learned lessons, or are doing similar things at the same time. So that's one of the ways I see the effectiveness and of doing our jobs better, quicker, and faster.

Also, it can help us identify where we have gaps and where we need to focus our efforts. We can focus our limited resources in much better ways on specific issues -- where we can accomplish what we are looking to -- and to gain that boundaryless information flow.

Reaching your goals

Lee: Essentially, the HERA will provide a framework that enables companies to leverage IT to achieve their goals. The wonderful thing about it is that we are not telling organizations what their goals should be. We show them how they can follow a roadmap to accomplish their self-defined goals more effectively. Often this involves communicating the big picture, as Gail said, to those who are in siloed positions within their organizations.

There is an old saying: “What you see depends on where you sit.” The HERA helps stakeholders gain this perspective by helping key players understand the relationships, for example, between business processes and engineering. So whether a stakeholder’s interest is increasing patient satisfaction, reducing error, improving quality, achieving better patient outcomes, or gaining more reimbursement where reimbursement is tied to outcomes -- using the product and the architecture that we are developing helps with all of these goals.

Gardner: Jason, for those who are intrigued by what you are doing with HERA, tell us about its trajectory, its evolution, and how that journey unfolds. Where can they learn more or get involved?

Lee: We have only been working on the HERA per se for the last year, although its underpinnings go back 20 years or more. Its trajectory is not to a single point, but to an evolutionary process. We will be producing products, white papers, as well as products that others can use in a modular fashion to leverage what they already use within their legacy systems.

We encourage anyone out there, particularly in the health system delivery space, to join us. That can be done by contacting me at j.lee@opengroup.org and at www.opengroup.org/healthcare.

It’s an incredible time, a very opportune time, for key players to be involved because we are making very important decisions that lay the foundation for the HERA. We collaborate with key players, and we lay down the tracks from which we will build increasing levels of complexity.

But we start at the top, using non-architectural language to be able to talk to decision-makers, whether they are in the public sector or private sector. So we invite any of these organizations to join us.

Learn from others’ mistakes

Kalbfleisch: My first foray into working with The Open Group was long before I was in the health IT sector. I was with the US Air Force and we were doing very non-health architectural work in conjunction with The Open Group.

The interesting part to me is in ensuring boundaryless information flow in a manner that is consistent with the information flowing where it needs to go and who has access to it. How does it get from place to place across distinct mission areas, or distinct business areas where the information is not used the same way or stored in the same way? Such dissonance between those business areas is not a problem that is isolated just to healthcare; it’s across all business areas.

That was exciting. I was able to take awareness of The Open Group from a previous life, so to speak, and engage with them to get involved in the Healthcare Forum from my current position.

A lot of the technical problems that we have in exchanging information, regardless of what industry you are in, have been addressed by other people, and have already been worked on. By leveraging the way organizations have already worked on it for 20 years, we can leverage that work within the healthcare industry. We don't have to make the same mistakes that were made before. We can take what people have learned and extend it much further. We can do that best by working together in areas like The Open Group HCF.

Kipf: On that evolutionary approach, I also see this as a long-term journey. Yes, there will be releases when we have a specification, and there will guidelines. But it's important that this is an engagement, and we have ongoing collaboration with customers in the future, even after it is released. The coming together of a team is what really makes a great reference architecture, a team that places the architecture at a high level.

We can also develop distinct flavors of the specification. We should expect much more detail. Those implementation architectures then become spin-offs of reference architectures such as the HERA.

Lee: I can give some concrete examples, to bookend the kinds of problems that can be addressed using the HERA. At the micro end, a hospital can use the HERA structure to implement a patient check-in to the hospital for patients who would like to bypass the usual process and check themselves in. This has a number of positive value outcomes for the hospital in terms of staffing and in terms of patient satisfaction and cost savings.

At the other extreme, a large hospital system in Philadelphia or Stuttgart or Oslo or in India finds itself with patients appearing at the emergency room or in the ambulatory settings unaffiliated with that particular hospital. Rather than have that patient come as a blank sheet of paper, and redo all the tests that had been done prior, the HERA will help these healthcare organizations figure out how to exchange data in a meaningful way. So the information can flow digitally, securely, and it means the same thing to those who send it as it does to those who receive it, and everything is patient-focused, patient-centric.

Gardner: Oliver, we have seen with other Open Group standards and reference architectures, a certification process often comes to bear that helps people be recognized for being adept and properly trained. Do you expect to have a certification process with HERA at some point?

Certifiable enterprise expertise

Kipf: Yes, the more we mature with the HERA, along with the defined guidelines and the specifications and the HERA model, the more there will be a need and demand for health enterprise-focused employees in the marketplace. They can show how consulting services can then use HERA.

And that's a perfect place when you think of certification. It helps make sure that the quality of the workforce is strong, whether it's internal or in the form of a professional services role. They can comply with the HERA.

Gardner: Clearly, this has applicability to healthcare payer organizations, provider organizations, government agencies, and the vendors who supply pharmaceuticals or medical instruments. There are a great deal of process benefits when done properly, so that enterprise architects could become certified eventually.

My question then is how do we take the HERA, with such a potential for being beneficial across the board, and make it well-known? Jason, how do we get the word out? How can people who are listening to this or reading this, help with that?

Spread the word, around the world

Lee: It's a question that has to be considered every time we meet. I think the answer is straightforward. First, we build a product [the HERA] that has clear value for stakeholders in the healthcare system. That’s the internal part.

Second -- and often simultaneously -- we develop a very important marketing/collaboration/socialization capability. That’s the external part. I've worked in healthcare for more than 30 years, and whether it's public or private sector decision-making, there are many stakeholders, and everybody's focused on the same few things: improving value, enhancing quality, expanding access, and providing security.

We will continue developing relationships with key players to ensure them that what they’re doing is key to the HERA. At the broadest level, all companies must plan, build, operate and improve.

There are immense opportunities for business development. There are innumerable ways to use the HERA to help health enterprise systems operate efficiently and effectively. There are opportunities to demonstrate to key movers and shakers in healthcare system how what we're doing integrates with what they're doing. This will maximize the uptake of the HERA and minimize the chances it sits on a shelf after it's been developed.

Gardner: Oliver, there are also a variety of regional conferences and events around the world. Some of them are from The Open Group. How important is it for people to be aware of these events, maybe by taking part virtually online or in person? Tell us about the face-time opportunities, if you will, of these events, and how that can foster awareness and improvement of HERA uptake.

Kipf: We began at the last Open Group event in Berlin, where I presented the HERA. As we see more development, more maturity, we can then show more. The uptake will be there, and we also need to include things like cyber security and risk compliance. So we can bring in a lot of what we have been doing in various other initiatives within The Open Group. We can show how it can be a fusion, and make this something that is really of value.

I am confident that through face-to-face events, such as The Open Group events, we can further spread the message.

Lee: And a real shout-out to Gail and Oliver who have been critical in making introductions and helping to share The Open Group Healthcare Forum’s work broadly. The most recent example is the 2016 HIMSS conference, a meeting that brings together more than 40,000 people every year. There is a federal interoperability showcase there, and we have been able to introduce and discuss our HERA work there.

We’ve collaborated with the Office of the National Coordinator where the Federal Health Architecture sits, with the US Veterans Administration, with the US Department of Defense, and with the Centers for Medicare and Medicaid Services (CMS). This is all US-centered, but there are lots of opportunities globally to not just spread the word in public forums and venues, but also to go to those key players who are moving the industry forward, and in some cases convince them that enterprise architecture does provide that structure, that template that can help them achieve their goals.

Future forecast

Gardner: I’m afraid we are almost out of time. Gail, perhaps a look into the crystal ball. What do you expect and hope to see in the next couple of years in terms of the improvements that initiatives like the HERA at The Open Group Healthcare Forum can provide?

Kalbfleisch: What I would like to see happen in the next couple of years as it relates to the HERA, is the ability to have a place where we can go from anywhere and get a glimpse of the landscape. Right now, it’s hard to find anywhere where someone in the US can see the great work that Oliver is doing, or the people in Norway, or the people in Australia are doing.

It’s really important that we have opportunities to communicate as large groups, but also the one-on-one. Yet when we are not able to communicate personally, I would like to see a resource or a tool where people can go and get the information they need on the HERA on their own time, or as they have a question. Reference architecture is great to have, but it has no power until it’s used.

My hope for the future is for the HERA to be used by decision-makers, developers, and even patients. So when an organization such as a hospital wants to develop a new electronic health record (EHR) system, it has a place to go and get started, without having to contact Jason or wait for a vendor to come along and tell them how to solve a problem. That would be my hope for the future.

Lee: You can think of the HERA as a soup with three key ingredients. First is the involvement and commitment of very bright people and top-notch organizations. Second, we leverage the deep experience and products of other forums of The Open Group. Third, we build on external relationships. Together, these three things will help make the HERA successful as a certifiable product that people can use to get their work done and do better.

Gardner: Jason, perhaps you could also tee-up the next Open Group event in Amsterdam. Can you tell us more about that and how to get involved?

Lee: We are very excited about our next event in Amsterdam in October. You can go to www.opengroup.org and look under Events, read about the agendas, and sign up there. We will have involvement from experts from the US, UK, Germany, Australia, Norway, and this is just in the Healthcare Forum!

The Open Group membership will be giving papers, having discussions, moving the ball forward. It will be a very productive and fun time and we are looking forward to it. Again, anyone who has a question or is interested in joining the Healthcare Forum can please send me, Jason Lee, an email at j.lee@opengroup.org.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Awesome Procurement -- Survey shows how business networks fuel innovation and business transformation

The next BriefingsDirect digital business insights interview explores the successful habits, practices, and culture that define highly effective procurement organizations.

We'll uncover unique new research that identifies and measures how innovative companies have optimized their practices to overcome the many challenges facing business-to-business (B2B) commerce.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the traits and best practices of the most successful procurement organizations, please join Kay Ree Lee, Director of Business Analytics and Insights at SAP Ariba. The interview was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Procurement is more complex than ever, supply chains stretch around the globe, regulation is on the rise, and risk is heightened across many fronts. Despite these, innovative companies have figured out how to overcome their challenges, and you have uncovered some of their secrets through your Annual Benchmarking Survey. Tell us about your research and your findings.

Lee: Every year we conduct a large benchmark program for our customers that combines a traditional survey with data from the procurement applications as well as the business network.

This past year, more than 200 customers participated, covering more than $400 billion in spend. We analyzed the quantitative and qualitative responses of the survey and identified the intersection between those responses for top performers compared to average performers. This has allowed us to draw correlations between what top performers did well and the practices that drove those achievements.

Gardner: What’s changed from the past, what are you seeing as long-term trends?

Lee: There are three things that are quite different from when we last talked about this a year ago.

The number one trend that we see is that digital procurement is gaining momentum quickly. A lot of organizations are now offering self-service tools to their internal stakeholders. These self-service tools enable the user to evaluate and compare item specifications and purchase items in an electronic marketplace, which allows them to operate 24x7, around-the-clock. They are also utilizing digital networks to reach and collaborate with others on a larger scale.
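
As a simple illustration of that self-service comparison idea, the sketch below ranks hypothetical catalog items against a requested specification. The catalog, field names, and functions are invented for the example, not an SAP Ariba API:

    # Hypothetical sketch of a self-service marketplace comparison:
    # filter catalog items by a requested specification, then pick the cheapest match.

    CATALOG = [
        {"item": "Laptop A", "price": 950.0, "ram_gb": 16, "warranty_years": 3},
        {"item": "Laptop B", "price": 800.0, "ram_gb": 8,  "warranty_years": 1},
        {"item": "Laptop C", "price": 990.0, "ram_gb": 16, "warranty_years": 2},
    ]

    def meets_spec(item: dict, min_ram_gb: int, min_warranty: int) -> bool:
        """Keep only items that meet the requester's minimum specification."""
        return item["ram_gb"] >= min_ram_gb and item["warranty_years"] >= min_warranty

    def cheapest_match(min_ram_gb: int = 16, min_warranty: int = 2) -> dict:
        """Self-service: compare specs, then pick the lowest-priced compliant item."""
        candidates = [i for i in CATALOG if meets_spec(i, min_ram_gb, min_warranty)]
        return min(candidates, key=lambda i: i["price"])

    print(cheapest_match())  # -> Laptop A ($950, 16 GB, 3-year warranty)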

The second trend that we see is that while risk management is generally acknowledged as important and critical, for the average company, a large proportion of their spend is not managed. Our benchmark data indicates that an average company manages 68% of their spend. This leaves 32% of spend that is unmanaged. If this spend is not managed, the average company is also probably not managing their risk. So, what happens when something unexpected occurs to that non-managed spend?
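
Applying that benchmark percentage to a hypothetical spend base shows how much can sit outside management's view:

    # Quick illustration of the spend-under-management figures cited above (illustrative total).
    total_spend = 1_000_000_000          # hypothetical $1B in annual spend
    managed_pct = 0.68                   # benchmark average cited in the discussion

    managed   = total_spend * managed_pct
    unmanaged = total_spend - managed
    print(f"Managed:   ${managed:,.0f}")    # $680,000,000
    print(f"Unmanaged: ${unmanaged:,.0f}")  # $320,000,000 of spend (and risk) not overseen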

The third trend that we see is related to compliance management. We see compliance management as a way for organizations to deliver savings to the bottom line. Capturing savings through sourcing and negotiation is a good start, but at the end of the day, eliminating loopholes through a focus on implementation and compliance management is how organizations deliver and realize negotiated savings.

Gardner: You have uncovered some essential secrets -- or the secret sauce -- behind procurement success in a digital economy. Please describe those.

Five elements driving procurement processes

Lee: From the data, we identified five key takeaways. First, we see that procurement organizations continue to expand their sphere of influence, with greater depth and quality, within their organizations. This is important because it shows that the work procurement professionals do matters and is appreciated within the organization.

The second takeaway is that -- while cost-reduction savings are near and dear to the heart of most procurement professionals -- leading organizations are focused on capturing value beyond basic cost reduction. They are focused on capturing value in other areas and tracking that value better.

The third takeaway is that digital procurement is firing on all cylinders and is front and center in people's minds. This was reflected in the transactional data that we extracted.

The fourth takeaway is related to risk management. Leading organizations treat this as a key focus area, rather than simply tracking news related to their suppliers.

The fifth takeaway is that compliance management and closing purchasing loopholes are what will help procurement deliver bottom-line savings.

Gardner: Next, what are some of the best practices that are driving procurement organizations to have a strategic impact at their companies, culturally?

Lee: To have a strategic impact in the business, procurement needs to be proactive in engaging the business. They should have a mentality of helping the business solve business problems as opposed to asking stakeholders to follow a prescribed procurement process. Playing a strategic role is a key practice that drives impact.

They should also focus on broadening the value proposition of procurement. We see leading organizations placing emphasis on contributing to revenue growth, or increasing their involvement in product development, or co-innovation that contributes to a more efficient and effective process.

Another practice that drives strategic impact is the ability to utilize and adopt technology to your advantage through the use of digital networks, system controls to direct compliance, automation through workflow, et cetera.

These are examples of practices and focus areas that are becoming more important to organizations.

Using technology to track technology usage

Gardner: In many cases, we see the use of technology having a virtuous adoption cycle in procurement. So the more technology used, the better they become at it, and the more technology can be exploited, and so on. Where are we seeing that? How are leading organizations becoming highly technical to gain an advantage?

Lee: Companies that adopt new technology capabilities are able to elevate their performance and differentiate themselves. This is also just a start. Procurement organizations are pivoting toward advanced and futuristic concepts, and leaving behind the single-minded focus on cost reduction and cost efficiency.

Digital procurement utilizing electronic marketplaces, virtual catalogs, gaining visibility into the lifecycle of purchase transactions, predictive risk management, and utilizing large volumes of data to improve decision-making – these are key capabilities that benefit the bold and the future-minded. This enables the transformation of procurement, and forms new roles and requirements for the future procurement organization.

Gardner: We are also seeing more analytics become available as we have more data-driven and digital processes. Is there any indication from your research that procurement people are adopting data-scientist ways of thinking? How are they using analysis more now that the data and analytics are available through the technology?

Lee: You are right. The users of procurement data want insights. We are working with a couple of organizations on co-innovation projects. These organizations actively research, analyze, and use their data to answer questions such as:

  • How does an organization validate that the prices they are paying are competitive in the marketplace?
  • After an organization conducts a sourcing event and implements the categories, how do they actually validate that the price paid is what was negotiated?
  • How do we categorize spend accurately, particularly if a majority of spend is services spend where the descriptions are non-standard?
  • Are we using the right contracts with the right pricing?

As you can imagine, when people enter transactions in a system, not all of it is contract-based or catalog-based. There is still a lot of free-form text. But if you extract all of that data, cleanse it, mine it, and make sense out of it, you can then make informed business decisions and create valuable insights. This goes back to the managing compliance practice we talked about earlier.
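
As a rough illustration of what cleansing and categorizing free-form spend text can look like in practice, here is a minimal Python sketch. The category names, keywords, and sample descriptions are illustrative assumptions, not part of SAP Ariba's benchmark methodology or any specific customer's data.

```python
# Hypothetical sketch: rule-based categorization of free-form spend descriptions.
# Category names and keywords are illustrative assumptions only.
import re

CATEGORY_KEYWORDS = {
    "IT Services": ["software", "license", "cloud", "hosting"],
    "Facilities": ["cleaning", "maintenance", "hvac", "janitorial"],
    "Professional Services": ["consulting", "audit", "legal", "advisory"],
}

def cleanse(description: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9\s]", " ", description.lower())
    return re.sub(r"\s+", " ", text).strip()

def categorize(description: str) -> str:
    """Return the first category whose keywords appear in the cleansed text."""
    text = cleanse(description)
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Uncategorized"  # flag for manual review or model-based classification

if __name__ == "__main__":
    samples = ["Annual software license renewal - cloud hosting", "Q3 janitorial services, HQ building"]
    for line in samples:
        print(line, "->", categorize(line))
```

A real implementation would typically pair simple rules like these with a trained classification model and human review of low-confidence matches, but the cleanse-then-categorize flow is the same.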

They are also looking to answer questions like, how do we scale supplier risk management to manage all of our suppliers systematically, as opposed to just managing the top-tier suppliers?

These two organizations are taking data analysis further, creating advantages that begin to instill excellence in modern procurement across all of their operations.

Gardner: Kay Ree, now that you have been tracking this Benchmark Survey for a few years, and looking at this year's results, what would you recommend that people do based on your findings?

Future focus: Cost-reduction savings and beyond

Lee: There are several recommendations that we have. One is that procurement should continue to expand their span of influence across the organization. There are different ways to do this but it starts with an understanding of the stakeholder requirements.

The second is about capturing value beyond cost-reduction savings. From a savings perspective, the recommendation we have is to continue to track sourcing savings -- because cost-reduction savings are important. But there are other measures of value to track beyond cost savings. That includes things like contribution to revenue, involvement in product development, et cetera.

The third recommendation relates to adopting digital procurement by embracing technology. For example, SAP Ariba has recently introduced some innovations. I think users really have an advantage in evaluating what is available, trying it out, and then seeing what works for them and their organization.

As organizations expand their footprint globally, the fourth recommendation focuses on transaction efficiency. The way procurement can support organizations operating globally is by offering self-service technology so that they can do more with less. With self-service technology, no one in procurement needs to be there to help a user buy. The user goes on the procurement system and creates transactions while their counterparts in other parts of the world may be offline.

The fifth recommendation is related to risk management. When a lot of organizations say "risk management," they are really only tracking news related to their suppliers. But risk management includes things like predictive analytics, predictive risk measures that go beyond your strategic suppliers, and looking deeper into supply chains and across all your vendors. If you can measure risk for your suppliers, why not make it systematic? We now have the ability to manage a larger volume of suppliers, in fact to manage all of them. The ones that bubble to the top, the most risky ones, are the ones you create contingency plans for. That helps organizations really prepare to respond to disruptions in their business.
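
As a rough sketch of what systematic, portfolio-wide supplier risk scoring might look like, the following Python example ranks every supplier by a weighted composite score and surfaces the riskiest ones for contingency planning. The risk factors, weights, and supplier names are illustrative assumptions, not a published SAP Ariba scoring model.

```python
# Hypothetical sketch: score every supplier, not just strategic ones, and
# surface the riskiest for contingency planning. Factors and weights are
# illustrative assumptions only.
from dataclasses import dataclass

WEIGHTS = {"financial": 0.4, "geographic": 0.3, "delivery": 0.3}  # must sum to 1.0

@dataclass
class Supplier:
    name: str
    financial: float   # each factor normalized to 0 (low risk) .. 1 (high risk)
    geographic: float
    delivery: float

    def risk_score(self) -> float:
        return (WEIGHTS["financial"] * self.financial
                + WEIGHTS["geographic"] * self.geographic
                + WEIGHTS["delivery"] * self.delivery)

def riskiest(suppliers, top_n=10):
    """Rank the full supplier portfolio and return the top_n riskiest."""
    return sorted(suppliers, key=lambda s: s.risk_score(), reverse=True)[:top_n]

if __name__ == "__main__":
    portfolio = [
        Supplier("Acme Components", 0.8, 0.4, 0.6),
        Supplier("Globex Logistics", 0.2, 0.7, 0.3),
        Supplier("Initech Services", 0.1, 0.2, 0.1),
    ]
    for s in riskiest(portfolio, top_n=2):
        print(f"{s.name}: {s.risk_score():.2f}")
```

In practice the factor scores would come from risk data feeds, financial ratings, and transaction history rather than hand-entered values, but the rank-and-plan step is the same.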

The last recommendation is around compliance management, which includes internal and external compliance. That means internal adherence to procurement policies and procedures, as well as compliance with external government regulations. This helps the organization close all the loopholes and ensure that sourcing savings reach the bottom line.

Be a leader, not a laggard

Gardner: When we examine and benchmark companies through this data, we identify leaders, and perhaps laggards -- and there is a delta between them. In trying to encourage laggards to transform -- to be more digital, to take upon themselves these recommendations that you have -- how can we entice them? What do you get when you are a leader? What defines the business value that you can deliver when you are taking advantage of these technologies, following these best practices?

Lee: Leading organizations see higher cost-reduction savings, process-efficiency savings, and better collaboration internally and externally. These benefits should speak for themselves and entice both the average performers and the laggards to strive for improvements and transformation.

From a numbers perspective, top performers achieve 9.7% savings as a percent of sourced spend. This translates to approximately $20M higher savings per $1B in spend compared to the average organization.

We talked about compliance management earlier. A 5% increase in compliance increases realized savings by $4.4M per $1B in spend. These are real, hard-dollar savings that top performers are able to achieve.
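
To make those benchmark figures concrete, here is a minimal Python sketch that projects them onto an organization's own spend base. The $2B spend base is an assumption for illustration; the 9.7%, $20M-per-$1B, and $4.4M-per-$1B figures are the ones cited above. (Read literally, a $20M gap per $1B of spend implies the average organization saves roughly two percentage points less, or about 7.7% of sourced spend.)

```python
# Hypothetical sketch: scale the cited per-$1B benchmark figures to a given
# sourced-spend base. The $2B spend base below is an illustrative assumption.
TOP_PERFORMER_SAVINGS_RATE = 0.097       # savings as a percent of sourced spend
DELTA_PER_BILLION = 20_000_000           # extra savings vs. average, per $1B in spend
COMPLIANCE_GAIN_PER_BILLION = 4_400_000  # realized savings from a 5% compliance lift, per $1B

def projected_benefit(sourced_spend_dollars: float) -> dict:
    """Translate the per-$1B benchmark figures into dollars for a spend base."""
    billions = sourced_spend_dollars / 1_000_000_000
    return {
        "top_performer_savings": sourced_spend_dollars * TOP_PERFORMER_SAVINGS_RATE,
        "gap_vs_average": billions * DELTA_PER_BILLION,
        "compliance_upside_5pct_lift": billions * COMPLIANCE_GAIN_PER_BILLION,
    }

if __name__ == "__main__":
    # Assumed $2B sourced-spend base, purely for illustration.
    for label, value in projected_benefit(2_000_000_000).items():
        print(f"{label}: ${value:,.0f}")
```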

In addition, top performers are able to attract a talent pool that will help the procurement organization perform even better. If you look at some of the procurement research, industry analysts and leaders are predicting that there may be a talent shortage in procurement. But, as a top performer, if you go out and recruit, it is easier to entice talent to the organization. People want to do cool things and they want to use new technology in their roles.

Gardner: Wrapping up, we are seeing some new and compelling technologies here at Ariba LIVE 2017 -- more use of artificial intelligence (AI), and increased use of predictive tools brought into context so that they can be of value to procurement during the lifecycle of a process.

As we think about the future, and more of these technologies become available, what is it that companies should be doing now to put themselves in the best position to take advantage of all of that?

Curious org

Lee: It's important to be curious about the technology available in the market and perhaps structure the organization in such a way that there is a team of people on the procurement team who are continuously evaluating the different procurement technologies from different vendors out there. Then they can make decisions on what best fits their organization.

You need people who can look ahead, evaluate, and articulate the requirements, understand the architecture, and assess what would make sense for the organization in the future. This is a complex role. He or she has to understand the current architecture of the business and the requirements from the stakeholders, and then evaluate what technology is available. They must then determine whether it will assist the organization in the future, and whether adopting these solutions provides a return on investment and ongoing payback.

So I think being curious, understanding the business really well, and then wearing a technology hat to understand what's out there are key. You can then be helpful to the organization and envision how adopting these newer technologies will play out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in: