

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from its expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible to allow the business to make informed decisions -- literally in almost real-time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?

How to solve key challenges

With HPE SimpliVity HCI

Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is not only do you solve one problem -- but you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time, Storage Savings 

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.

Steyn

At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.
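Steyn's storage concern maps to a simple idea: if ten copies of production are block-for-block identical, content-addressed deduplication stores each unique block only once. The sketch below is a conceptual illustration only -- the block size, hashing scheme, and in-memory store are assumptions for the example, not how HPE SimpliVity actually implements dedupe:

```python
import hashlib
import os

BLOCK_SIZE = 4096  # assumed block size, for illustration only

def dedupe_store(datasets):
    """Keep one copy of each unique block, keyed by content hash."""
    store = {}  # sha256 hex digest -> block bytes
    for data in datasets:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            store[hashlib.sha256(block).hexdigest()] = block
    return store

production = os.urandom(BLOCK_SIZE * 100)  # one 400 KB "production" image
copies = [production] * 10                 # ten identical dev/UAT copies

raw_bytes = sum(len(c) for c in copies)
deduped_bytes = sum(len(b) for b in dedupe_store(copies).values())
print(raw_bytes // deduped_bytes)  # -> 10
```

Ten identical environments cost roughly the storage of one, which is why the copies stopped being a capacity concern.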

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers, because they needed resources and we just couldn’t provide them. And it was not because we didn’t have resources at hand; it was just the time to spin them up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

As your vehicle depreciates, so does your monthly insurance premium. That has been our biggest selling point.  

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.
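That pricing model is easy to sketch in code. In this toy calculation, the premium simply tracks a depreciating vehicle value; the depreciation and premium rates here are invented assumptions for illustration, not King Price's actual pricing:

```python
def monthly_premium(purchase_price, months_owned,
                    annual_depreciation=0.15, premium_rate=0.004):
    """Premium tracks the vehicle's depreciated value.

    The 15% annual depreciation and 0.4% monthly premium rate are
    illustrative assumptions, not King Price's actual figures.
    """
    value = purchase_price * (1 - annual_depreciation) ** (months_owned / 12)
    return value * premium_rate

# A car bought for 300,000 rand: the premium falls as the car ages.
for months in (0, 12, 24):
    print(months, round(monthly_premium(300_000, months), 2))
```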

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.


Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing? 

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

The gains we saw were just so big ... Now we have the entire environment running on SimpliVity cubes.  

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep Storage Simple

Steyn: We are currently using the HPE 3PAR storage, and it’s working quite well. We have some production environments running there, and we use a lot of it for archiving. It’s still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that was going to be it. We very soon adopted HCI into the production environments. And it was at that point that we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance; it’s scary.


Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Containers, microservices, and HCI help governments in Norway provide safer public data sharing

The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.

We’ll now learn how Norway’s 18 counties are gaining a common shared pool for data on young people’s health and other sensitive information thanks to the streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for the FINT Project in Norway. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving interest in having a common platform for public information in your country?

Sjovatsen: We need interactions between the government and the community to be more efficient. So we needed to build the infrastructure that supports automatic solutions for citizens. That’s the main driver.

Gardner: What problems do you need to overcome in order to create a more common approach?

Common API at the core

Sjovatsen: One of the biggest issues is that [our users] buy business applications, such as human resources for school administrators to use, and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes -- that’s where the problem is. And that’s what the FINT project is all about.

Sjovatsen

[Due to apps heterogeneity] we then need to have developers create application programming interfaces (APIs), and it costs a lot of money, and it is of variable quality. What we’re doing now is creating a common API that’s horizontal -- for all of those business applications. It gives us the ability to use our data much more efficiently.
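What Sjovatsen describes is essentially a normalization layer: each vendor's payload is mapped into one common schema, so downstream consumers never code against a specific vendor. A minimal sketch follows; the field names, vendor payloads, and helper functions are hypothetical, not FINT's real information model:

```python
from dataclasses import dataclass

@dataclass
class Person:
    """Common schema the horizontal API exposes (fields are illustrative)."""
    person_id: str
    name: str
    source_system: str

def from_hr_system(rec):
    # Hypothetical payload shape from an HR vendor's application
    return Person(person_id=rec["employeeNo"], name=rec["fullName"],
                  source_system="hr")

def from_school_admin(rec):
    # Hypothetical payload shape from a school-administration vendor
    return Person(person_id=rec["id"],
                  name=f'{rec["first"]} {rec["last"]}',
                  source_system="school")

people = [
    from_hr_system({"employeeNo": "E-17", "fullName": "Kari Nordmann"}),
    from_school_admin({"id": "S-42", "first": "Ola", "last": "Nordmann"}),
]
# Downstream consumers code against one shape, whatever vendor sits behind it.
print([p.person_id for p in people])  # -> ['E-17', 'S-42']
```

The per-vendor mapping is written once; every other process then consumes the common API rather than paying for a bespoke integration of variable quality.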

Gardner: Please describe for us what the FINT project is and why this is so important for public health.

Sjovatsen: It’s all about taking the power back regarding the information we’ve handed to the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get ahold of all the information, describe it, define it, and then make it available via APIs -- both for public use and also for internal use.

Gardner: What sort of information are we dealing with here? Why is it important for the general public health? 

Sjovatsen: It’s all kinds of information. For example, it’s school information, such as about how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that we can build new innovations from businesses that want to create new and better solutions for us.

Learn More About

HPE Pointnext Services

Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?

Agility, scalability, and speed

Sjovatsen: We needed something that was agile so that we can roll out updates continuously. We also needed a way to roll back quickly, if something fails.

The reason we are running this on one of the county council’s datacenters is we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.

Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach? 

Sjovatsen: We are in the late stage of testing, and we’re going into production in early 2018. At the moment, we’re looking into using HPE SimpliVity.

Container comfort

Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?

Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application, and we are able to test the code before it goes into production. That’s one of the main drivers.

The second reason is that it’s easy to roll out and it’s easy to roll back. We also have developers in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It’s not much work if they need to install on another computer to get a working environment running.

Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?


Sjovatsen: The container approach is very close to the DevOps environment, so the time from code to production is very short compared to what we did before, when we had some operations guys installing the stuff on servers. Now, we have a very rapid way to go from code to production.

Gardner: With the success of the FINT Project, would you consider extending this to other types of data and applications in other public sector activities or processes? If your success here continues, is this a model that you think has extensibility into other public sector applications?

Unlocking the potential

Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that’s the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway, and we will add value to those using our approach in the next year or so.

Gardner: That could have some very beneficial impacts, very good payoffs.

Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have made an API extend over the locks on many doors. So, we can now have one API to open multiple locking systems. So that’s another way to use this approach.

In Oslo we have made an API extend over the locks on many doors. We can now have one API to open multiple locking systems.
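The door-lock example is a classic adapter pattern: one public API routing to per-vendor implementations. A hedged sketch is below; the vendor classes, method names, and door IDs are invented for illustration, and Oslo's actual integration will differ:

```python
from abc import ABC, abstractmethod

class LockAdapter(ABC):
    """Uniform interface over a single vendor's locking system."""
    @abstractmethod
    def open(self, door_id: str) -> bool:
        ...

class VendorALock(LockAdapter):
    def open(self, door_id: str) -> bool:
        # A real adapter would speak vendor A's proprietary protocol here.
        return True

class VendorBLock(LockAdapter):
    def open(self, door_id: str) -> bool:
        # A real adapter would call vendor B's REST endpoint here.
        return True

class DoorAPI:
    """The one public API; routes each door to its vendor's adapter."""
    def __init__(self, routing):
        self.routing = routing  # door_id -> LockAdapter

    def open_door(self, door_id: str) -> bool:
        return self.routing[door_id].open(door_id)

# One call signature, two different locking systems behind it.
api = DoorAPI({"city-hall-1": VendorALock(), "school-3": VendorBLock()})
print(api.open_door("city-hall-1"), api.open_door("school-3"))  # -> True True
```

Adding a third locking vendor then means writing one more adapter, not touching any caller.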

Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?

Sjovatsen: I definitely recommend it -- it’s simple and agile. The main thing with containers is to separate the storage from the applications. That’s probably what we worked on the most to make it scalable. We wrote the application so it’s scalable, and we separated the data from the presentation layer.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How UBC gained TCO advantage via flash for its EduCloud cloud storage service

The next BriefingsDirect cloud efficiency case study explores how a storage-as-a-service offering in a university setting gains performance and lower total cost benefits by a move to all-flash storage.

We’ll now learn how the University of British Columbia (UBC) has modernized its EduCloud storage service and attained both efficiency as well as better service levels for its diverse user base.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore new breeds of SaaS solutions is Brent Dunington, System Architect at UBC in Vancouver. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How is satisfying the storage demands at a large and diverse university setting a challenge? Is there something about your users and the diverse nature of their needs that provides you with a complex requirements list? 

Dunington: A university setting isn't much different than any other business. The demands are the same. UBC has about 65,000 students and about 15,000 staff. The students these days are younger kids; they all have iPhones and iPads, and they just want to push buttons and get instant results and instant gratification. And that boils down to the services that we offer.

Dunington

We have to be able to offer those services, because as most people know, there are choices -- and they can go somewhere else and choose those other products.

Our team is rather small -- 15 members -- so we have to be agile, we have to be able to automate things, and we need tools that can work and fulfill those needs. So it's just like any other business, even though it’s a university setting.

HPE

Delivers

Flash Performance

Gardner: Can you give us a sense of the scale that describes your storage requirements?

Dunington: We do SaaS, and we also do infrastructure-as-a-service (IaaS). EduCloud is a self-service IaaS product that we deliver to UBC, but we also deliver it to 25 other higher-education institutions in the Province of British Columbia.

We have been doing IaaS for five years, and we have been very, very successful. So more people are looking to us for guidance.

Because we are not just delivering to UBC, we have to be up and running and always able to deliver, because each school has different requirements. At different times of the year -- because there is registration, there are exam times -- these things have to be up. You can’t be down during an exam and have 600 students unable to take the tests they have been studying for. It impacts their lives, and we want to make sure that we are there and can provide the services they need.

Gardner: In order to maintain your service levels within those peak times, do you in your IaaS and storage services employ hybrid-cloud capabilities so that you can burst? Or are you doing this all through your own data center and your own private cloud?

On-Campus Cloud

Dunington: We do it all on-campus. British Columbia has a law that says all the data has to stay in Canada. It’s a data-sovereignty law, the data can't leave the borders.

That's why EduCloud has been so successful, in my opinion, because of that option. They can just go and throw things out in the private cloud.

The public cloud providers are providing more services in Canada: Amazon Web Services (AWS) and Microsoft Azure cloud are putting data centers in Canada, which is good and it gives people an option. Our team’s goal is to provide the services, whether it's a hybrid model or all on-campus. We just want to be able to fulfill those needs.

Gardner: It sounds like the best of all worlds. You are able to give that elasticity benefit, a lot of instant service requirements met for your consumers. But you are starting to use cloud pay-as-you-go types of models and get the benefit of the public cloud model -- but with the security, control and manageability of the private clouds.

What decisions have you made about your storage underpinnings, the infrastructure that supports your SaaS cloud?

Dunington: We have a large storage footprint. For our site, it’s about 12 petabytes of storage. We realized that we weren’t meeting the needs with spinning disks. One of the problems was that we had runaway virtual workloads that would cause problems, and they would impact other services. We needed some mechanism to fix that.

We wanted to make sure that we had the ability to attain quality of service levels and control those runaway virtual machines in our footprint.

We went through the whole request for proposal (RFP) process, and all the IT infrastructure vendors responded, but we did have some guidelines that we wanted to go through. One of the things we did is present our problems and make sure that they understood what the problems were and what they were trying to solve.

And there were some minimum requirements. We do have a backup vendor of choice that they needed to merge with. And quality of service is a big thing. We wanted to make sure that we had the ability to attain quality of service levels and control those runaway virtual machines in our footprint.

Gardner: You gained more than just flash benefits when you got to flash storage, right?

Streamlined, safe, flash storage

Dunington: Yes, for sure. With an entire data center full of spinning disks, it gets to the point where the disks start to manage you; you are no longer managing the disks. And with teams out there changing drives and moving volumes around, it becomes unwieldy. I mean, the power, the footprint, and all of that starts to grow.

Also, Vancouver is in a seismic zone, we are right up against the Pacific plate and it's a very active seismic area. Heaven forbid anything happens, but one of the requirements we had was to move the data center into the interior of the province. So that was what we did.

When we brought this new data center online, one of the decisions the team made was to move to an all-flash storage environment. We wanted to be sure that it made financial sense because it's publicly funded, and also improved the user experience, across the province.

Gardner: As you were going about your decision-making process, you had choices, what made you choose what you did? What were the deciding factors?

Dunington: There were a lot of deciding factors. There’s the technology, of being able to meet the performance and to manage the performance. One of the things was to lock down runaway virtual machines and to put performance tiers on others.

But it’s not just the technology; it's also the business part, too. The financial part had to make sense. When you are buying any storage platform, you are also buying the support team and the sales team that come with it.

Our team believes that technology is a certain piece of the pie, and the rest of it is relationship. If that relationship part doesn't work, it doesn’t matter how well the technology part works -- the whole thing is going to break down.

Because software is software, hardware is hardware -- it breaks, it has problems, there are limitations. And when you have to call someone, you have to depend on him or her. Even though you bought the best technology and got the best price -- if it doesn't work, it doesn’t work, and you need someone to call.

So those service and support issues were all wrapped up into the decision.


We chose the Hewlett Packard Enterprise (HPE) 3PAR all-flash storage platform. We have been very happy with it. We knew the HPE team well. They came and worked with us on the server blade infrastructure, so we knew the team. The team knew how to support all of it. 

We also use the HPE OneView product for provisioning, and it integrated into all of that. It also supported the performance optimization tool (IT Operations Management for HPE OneView) that lets us set those values, because one of the things in EduCloud is that customers choose their own storage tier, and we mark the price on it. So basically all we would do is present that new tier as new data storage within VMware, and then they would just move their workloads across non-disruptively. So it has worked really well.

The 3PAR storage piece also integrates with VMware vRealize Operations Manager. We offer that to all our clients as a portal so they can see how everything is working and they can do their own diagnostics. Because that’s the one goal we have with EduCloud: it has to be self-service. We let the customers do it; that’s what they want.

Gardner: Not that long ago people had the idea that flash was always more expensive and that they would use it for just certain use-cases rather than pervasively. You have been talking in terms of a total cost of ownership reduction. So how does that work? How does the economics of this over a period of time, taking everything into consideration, benefit you all?

Economic sense at scale

Dunington: Our IT team and our management team are really good with that part. They were able to break it all down, and they found that this model would work at scale. I don’t know the numbers per se, but it made economic sense.

Spinning disks will still have a place in the data center. I don't know a year from now if an all-flash data center will make sense, because there are some records that people will throw in and never touch. But right now with the numbers on how we worked it out, it makes sense, because we are using the standard bronze, the gold, the silver tiers, and with the tiers it makes sense.

The 3PAR solution also has dedupe functionality and the compression that they just released. We are hoping to see how well that trends. Compression has only been around for a short period of time, so I can’t really say, but the dedupe has done really well for us.

Gardner: The technology overcomes some of the other baseline economic costs and issues, for sure.

We have talked about the technology and performance requirements. Have you been able to qualify how, from a user experience, this has been a benefit?

Dunington: The best benchmark is the adoption rate. People are using it, and there are no help desk tickets, so no one is complaining. People are using it, and we can see that everything is ramping up, and we are not getting tickets. No one is complaining about the price, the availability. Our operational team isn't complaining about it being harder to manage or that the backups aren’t working. That makes me happy.

The big picture

Gardner: Brent, maybe a word of advice to other organizations that are thinking about a similar move to private cloud SaaS. Now that you have done this, what might you advise them to do as they prepare for or evaluate a similar activity?

Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there.

Dunington: Look at the full picture, look at the total cost of ownership. There’s the buying of the hardware, and there's also supporting the hardware, too. Make sure that you understand your requirements and what your customers are looking for first before you go out and buy it. Not everybody needs that speed, not everybody needs that performance, but it is the future and things will move there. We will see in a couple of years how it went.

Look at the big picture, step back. It’s just not the new shiny toy, and you might have to take a stepped approach into buying, but for us it worked. I mean, it’s a solid platform, our team sleeps well at night, and I think our customers are really happy with it.

Gardner: This might be a little bit of a pun in the education field, but do your homework and you will benefit.


Dunington: Yes, for sure.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

· How IoT capabilities open new doors for Miami telecoms platform provider Identidad

· DreamWorks Animation crafts its next era of dynamic IT infrastructure

· How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

· Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

· Converged IoT systems: Bringing the data center to the edge of everything

· IDOL-powered appliance delivers better decisions via comprehensive business information searches

· OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

· Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

· How lastminute.com uses machine learning to improve travel bookings user experience

· HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

 

How modern storage provides hints on optimizing and best managing hybrid IT and multi-cloud resources

The next BriefingsDirect Voice of the Analyst interview examines the growing need for proper rationalizing of which apps, workloads, services and data should go where across a hybrid IT continuum.

Managing hybrid IT necessitates not only a choice between public cloud and private cloud, but a more granular approach to picking and choosing which assets go where based on performance, costs, compliance, and business agility.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to report on how to begin to better assess what IT variables should be managed and thoughtfully applied to any cloud model is Mark Peters, Practice Director and Senior Analyst at Enterprise Strategy Group (ESG). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that cloud adoption is gaining steam, it may be time to step back and assess what works and what doesn’t. In past IT adoption patterns, we’ve seen a rapid embrace that sometimes ends with at least a temporary hangover. Sometimes it’s complexity, runaway or unmanaged costs, or usage patterns that can’t be controlled. Mark, is it too soon to begin assessing best practices for identifying ways to hedge against any ill effects from runaway adoption of cloud?

Peters: The short answer, Dana, is no. It’s not that the IT world is that different. It’s just that we have more and different tools. And that is really what hybrid comes down to -- available tools.

Peters

It’s not that those tools themselves demand a new way of doing things. They offer the opportunity to continue to think about what you want. But if I have one repeated statement as we go through this, it will be that it’s not about focusing on the tools, it’s about focusing on what you’re trying to get done. You just happen to have more and different tools now.

Gardner: We hear sometimes that even at the board of directors level, they are telling people to go cloud-first, or to just dump IT altogether. That strikes me as an overreaction. If we’re looking at tools and at what they do best, is cloud so good that we can actually go cloud-first or cloud-only?

Cloudy cloud adoption

Peters: Assuming you’re speaking about management by objectives (MBO), doing cloud or cloud-only because that’s what someone with a C-level title saw on a Microsoft cloud ad on TV and decided that is right, well -- that clouds everything.

You do see increasingly different people outside of IT becoming involved in the decision. When I say outside of IT, I mean outside of the operational side of IT.

You get other functions involved in making demands. And because the cloud can be so easy to consume, you see people just running off and deploying some software-as-a-service (SaaS) or infrastructure-as-a-service (IaaS) model because it looked easy to do, and they didn’t want to wait for the internal IT to make the change.

Running away from internal IT and on-premises IT is not going to be a good idea for most organizations -- at least for a considerable chunk of their workloads. All of the research we do shows that the world is hybrid for as far ahead as we can see. 

Gardner: I certainly agree with that. If it’s all then about a mix of things, how do I determine the correct mix? And if it’s a correct mix between just a public cloud and private cloud, how do I then properly adjust to considerations about applications as opposed to data, as opposed to bringing in microservices and Application Programming Interfaces (APIs) when they’re the best fit?

How do we begin to rationalize all of this better? Because I think we’ve gotten to the point where we need to gain some maturity in terms of the consumption of hybrid IT.

Learn More About Hybrid IT Management Solutions From HPE

Peters: I often talk about what I call the assumption gap. The assumption gap is that moment where we move from one side, where it’s okay to have lots of questions about something -- in this case, in IT -- to the other side of this gap or chasm, to use a well-worn phrase, where it’s not okay to ask anything because doing so would show you don’t know what you’re talking about. And that assumption gap seems to happen imperceptibly, and very fast, at some moment.

So, what is hybrid IT? I think we fall into the trap of allowing ourselves to believe that having some on-premises workloads and applications and some off-premises workloads and applications is hybrid IT. I do not think it is. It’s using a couple of tools for different things.

It’s like having a Prius and a big diesel and/or gas F-150 pickup truck in your garage and saying, “I have two hybrid vehicles.” No, you have one of each, or some of each. Just because someone has put an application or a backup off into the cloud, “Oh, yeah. Well, I’m hybrid.” No, you’re not really.

The cloud approach

The cloud is an approach. It’s not a thing per se. It’s another way. As I said earlier, it’s another tool that you have in the IT arsenal. So how do you start figuring what goes where?

I don’t think there are simple answers, because it would be just as sensible a question to say, “Well, what should go on flash or what should go on disk, or what should go on tape, or what should go on paper?” My point being, such decisions are situational to individual companies, to the stage of that company’s life, and to the budgets they have. And they’re not only situational -- they’re also dynamic.

I want to give a couple of examples because I think they will stick with people. Number one is you take something like email, a pretty popular application; everyone runs email. In some organizations, that is the crucial application. They cannot run without it. Probably, what you and I do would fall into that category. But there are other businesses where it’s far less important than the factory running or the delivery vans getting out on time. So, they could have different applications that are way more important than email.

When instant messaging (IM) first came out -- Yahoo IM, to be precise -- they used to do maintenance between 9 am and 5 pm, because it was just a tool to chat with your friends at night. And now you have businesses that rely on it. So, clearly, the ability to instant message and text between us is now crucial. The stock exchange in Chicago runs on it. IM is a very important tool.

The answer is not that you or I have the ability to tell any given company, “Well, x application should go onsite and Y application should go offsite or into a cloud,” because it will vary between businesses and vary across time.

If something is or becomes mission-critical or high-risk, it is more likely that you’ll want the feeling of security, I’m picking my words very carefully, of having it … onsite.

But the extent to which full-production apps are being moved to the cloud is growing every day. That’s what our research shows us. The quick answer is you have to figure out what you’re trying to get done before you figure out what you’re going to do it with. 

Gardner: Before we go into learning more about how organizations can better know themselves and therefore understand the right mix, let’s learn more about you, Mark. 

Tell us about yourself, your organization at ESG. How long have you been an IT industry analyst? 

Peters: I grew up in my working life in the UK and then in Europe, working on the vendor side of IT. I grew up in storage, and I haven’t really escaped it. These days I run ESG’s infrastructure practice. The integration and the interoperability between the various elements of infrastructure have become more important than the individual components. I stayed on the vendor side for many years working in the UK, then in Europe, and now in Colorado. I joined ESG 10 years ago.

Lessons learned from storage

Gardner: It’s interesting that you mentioned storage, and the example of whether it should be flash or spinning media, or tape. It seems to me that maybe we can learn from what we’ve seen happen in a hybrid environment within storage and extrapolate to how that pertains to a larger IT hybrid undertaking.

Is there something about the way we’ve had to adjust to different types of storage -- and do that intelligently with the goals of performance, cost, and the business objectives in mind? I’ll give you a chance to perhaps go along with my analogy or shoot it down. Can we learn from what’s happened in storage and apply that to a larger hybrid IT model?

Peters: The quick answer to your question is, absolutely, we can. Again, the cloud is a different approach. It is a very beguiling and useful business model, but it’s not a panacea. I really don’t believe it ever will become a panacea.

Now, that doesn’t mean to say it won’t grow. It is growing. It’s huge. It’s significant. You look at the recent announcements from the big cloud providers. They are at tens of billions of dollars in run rates.

But to your point, it should be viewed as part of a hierarchy, or a tiering, of IT. I don’t want to suggest that cloud sits at the bottom of some hierarchy or tiering. That’s not my intent. But it is another choice of another tool.

Let’s be very, very clear about this. There isn’t “a” cloud out there. People talk about the cloud as if it exists as one thing. It does not. Part of the reason hybrid IT is so challenging is you’re not just choosing between on-prem and the cloud, you’re choosing between on-prem and many clouds -- and you might want to have a multi-cloud approach as well. We see that increasingly.

Those various clouds have various attributes; some are better than others in different things. It is exactly parallel to what you were talking about in terms of which server you use, what storage you use, what speed you use for your networking. It’s exactly parallel to the decisions you should make about which cloud and to what extent you deploy to which cloud. In other words, all the things you said at the beginning: cost, risk, requirements, and performance.

People get so distracted by bright, shiny objects -- as if they are the answer to everything. What we should be looking for are not bright, shiny objects, but bright, shiny outcomes. That’s all we should be looking for.

Focus on the outcome that you want, and then figure out how to get it. You should not be sitting IT managers down and saying, “How do I get to 50 percent of my data in the cloud?” I don’t think that’s a sensible approach to business. 

Gardner: Lessons learned in how to best utilize a hybrid storage environment, rationalizing that, bringing in more intelligence, software-defined, making the network through hyper-convergence more of a consideration than an afterthought -- all these illustrate where we’re going on a larger scale, or at a higher abstraction.

Going back to the idea that each organization is particular -- their specific business goals, their specific legacy and history of IT use, their specific way of using applications and pursuing business processes and fulfilling their obligations. How do you know in your organization enough to then begin rationalizing the choices? How do you make business choices and IT choices in conjunction? Have we lost sufficient visibility, given that there are so many different tools for doing IT?

Get down to specifics

Peters: The answer is yes. If you can’t see it, you don’t know about it. So to some degree, we are assuming that we don’t know everything that’s going on. But I think anecdotally what you propose is absolutely true.

I’ve beaten home the point about starting with the outcomes, not the tools that you use to achieve those outcomes. But how do you know what you’ve even got -- because it’s become so easy to consume in different ways? A lot of people talk about shadow IT. You have this sprawl of a different way of doing things. And so, this leads to two requirements.

Number one is gaining visibility. It’s a challenge with shadow IT because you have to know what’s in the shadows, and you can’t, by definition, see into that, so it’s a tough thing to do. Even once you find out what’s going on, the second step is: how do you gain control? Control not for control’s sake -- but because only by knowing everything you’re trying to do, and how you’re trying to do it across an organization, can you hope to optimize it.

Again, it’s an old, old adage. You can’t manage what you can’t measure. You also can’t improve things that can’t be managed or measured. And so, number one, you have to find out what’s in the shadows, what it is you’re trying to do. And this is assuming that you know what you are aiming toward.

This is the next battleground for sophisticated IT use and for vendors. It’s not a battleground for the users. It’s a choice for users -- but a battleground for vendors. They must find a way to help their customers manage everything, to control everything, and then to optimize everything. Because just doing the first and finding out what you have -- and finding out that you’re in a mess -- doesn’t help you.

Visibility is not the same as solving. The point is not just finding out what you have, but actually being able to do something about it. The level of complexity, the range of applications most people are running these days, and the extremely high expectations for speed, flexibility, and performance mean that you cannot, even with visibility, fix things by hand.

You and I grew up in the era where a lot of things were done on whiteboards and Excel spreadsheets. That doesn’t cut it anymore. We have to find a way to manage through automation. Manual management just will not cut it -- even if you know everything that you’re doing wrong. 

Gardner: Yes, I agree 100 percent that the automation -- in order to deal with the scale of complexity, the requirements for speed, the fact that you’re going to be dealing with workloads and IT assets that are off of your premises -- means you’re going to be doing this programmatically. Therefore, you’re in a better position to use automation.

I’d like to go back again to storage. When I first took a briefing with Nimble Storage, which is now a part of Hewlett Packard Enterprise (HPE), I was really impressed with the degree to which they used intelligence to solve the economic and performance problems of hybrid storage.

Given the fact that we can apply more intelligence nowadays -- that the cost of gathering and harnessing data, the speed at which it can be analyzed, the degree to which that analysis can be shared -- it’s all very fortuitous that just as we need greater visibility and that we have bigger problems to solve across hybrid IT, we also have some very powerful analysis tools.

Mark, is what worked for hybrid storage intelligence able to work for a hybrid IT intelligence? To what degree should we expect more and more, dare I say, artificial intelligence (AI) and machine learning to be brought to bear on this hybrid IT management problem?

Intelligent automation a must

Peters: I think it is a very straightforward and good parallel. Storage has become increasingly sophisticated. I’ve been in and around the storage business now for more than three decades. The joke has always been, I remember when a megabyte was a lot, let alone a gigabyte, a terabyte, and an exabyte.

And I’d go to a whole-day class, when I was on the sales side of the business, just to learn about something like dual parity or cache. It was so exciting 30 years ago. And yet, these days, it’s a bit like cars. You and I used to use a choke, or we’d have to really go and check everything on the car before we went on a 100-mile journey. Now, we press the button and it had better work at any temperature and at any speed. We just demand so much from cars.

To stretch that analogy -- I’m mixing cars and storage, and we’ll make it all come together with hybrid IT -- it’s better to do things in an automated fashion. There’s always one person in every crowd I talk to who still believes that a stick shift is more economical and faster than an automatic transmission. It might be true for one in 1,000 people, and they probably drive cars for a living. But for most people, 99.9 percent of the time, an automatic transmission will both get you there faster and be more efficient in doing so. The same became true of storage.

We used to talk about how much storage someone could capacity-plan or manage. That’s just become old hat now; you don’t talk about it in those terms. Storage has moved on to: how do we serve applications? How do we serve up the right data in the right place, and get it to the right person at the right time at the right price, and so on?

We don’t just choose what goes where or who gets what, we set the parameters -- and we then allow the machine to operate in an automated fashion. These days, increasingly, if you talk to 10 storage companies, 10 of them will talk to you about machine learning and AI because they know they’ve got to be in that in order to make that execution of change ever more efficient and ever faster. They’re just dealing with tremendous scale, and you could not do it even with simple automation that still involves humans.
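To make the "set the parameters, let the machine execute" idea concrete, here is a purely illustrative sketch -- not any vendor's actual engine, and every name and threshold in it is hypothetical. Administrators define the policy once (compliance, latency, access frequency), and placement decisions are then made automatically per workload rather than by hand:

```python
# Hypothetical sketch of parameter-driven data placement. Admins set the
# policy thresholds once; the engine classifies each workload automatically.
# All field names, tiers, and numbers are illustrative, not a real product API.

def place_workload(profile, policy):
    """Return a storage tier for one workload based on admin-set parameters."""
    # Compliance constraints trump everything else: sensitive data stays local.
    if profile["data_sensitivity"] >= policy["max_cloud_sensitivity"]:
        return "on-prem-flash"
    # Latency-critical workloads need the fastest local tier.
    if profile["latency_ms_target"] < policy["flash_latency_ms"]:
        return "on-prem-flash"
    # Rarely accessed data goes to the cheapest (cloud archive) tier.
    if profile["accesses_per_day"] < policy["archive_access_threshold"]:
        return "cloud-archive"
    return "cloud-standard"

policy = {
    "max_cloud_sensitivity": 3,    # 0-5 scale; 3 and above must stay on-prem
    "flash_latency_ms": 5,         # sub-5 ms targets require local flash
    "archive_access_threshold": 1, # under 1 access/day can be archived
}

workloads = {
    "payroll-db":  {"data_sensitivity": 4, "latency_ms_target": 2,   "accesses_per_day": 500},
    "web-cache":   {"data_sensitivity": 1, "latency_ms_target": 3,   "accesses_per_day": 9000},
    "old-backups": {"data_sensitivity": 2, "latency_ms_target": 200, "accesses_per_day": 0.1},
}

for name, profile in workloads.items():
    print(name, "->", place_workload(profile, policy))
```

The point of the sketch is the division of labor Peters describes: humans own the policy, the system owns every individual placement decision, and re-tiering happens as a workload's profile changes rather than when someone gets around to it.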

We have used cars as a social analogy. We used storage as an IT analogy, and absolutely, that’s where hybrid IT is going. It will be self-managing and self-optimizing. Just to make it crystal clear, it will not be a “recommending tool,” it will be an “executing tool.” There is no time to wait for you and me to finish our coffee, think about it, and realize we have to do something, because then it’s too late. So, it’s not just about the knowledge and the visibility. It’s about the execution and the automated change. But, yes, I think your analogy is a very good one for how the IT world will change.

Gardner: How you execute, optimize and exploit intelligence capabilities can be how you better compete, even if other things are equal. If everyone is using AWS, and everyone is using the same services for storage, servers, and development, then how do you differentiate?

How you optimize the way in which you gain the visibility, know your own business, and apply the lessons of optimization, will become a deciding factor in your success, no matter what business you’re in. The tools that you pick for such visibility, execution, optimization and intelligence will be the new real differentiators among major businesses.

So, Mark, where do we look to find those tools? Are they yet in development? Do we know the ones we should expect? How will organizations know where to look for the next differentiating tier of technology when it comes to optimizing hybrid IT?

What’s in the mix?

Peters: We’re talking years ahead for us to be in the nirvana that you’re discussing.

I just want to push back slightly on what you said. This would only apply if everyone were using exactly the same tools and services from AWS, to use your example. The expectation, assuming we have a hybrid world, is they will have kept some applications on-premises, or they might be using some specialist, regional or vertical industry cloud. So, I think that’s another way for differentiation. It’s how to get the balance. So, that’s one important thing.

And then, back to what you were talking about, where are those tools? How do you make the right move?

We have to get from here to there. It’s all very well talking about the future -- it sounds great and perfect -- but you have to get there. We do quite a lot of research at ESG. I will throw out just a couple of numbers, which I think help to explain how you might do this.

We already find that the multi-cloud deployment or option is a significant element within a hybrid IT world. So, asking people about this in the last few months, we found that about 75 percent of the respondents already have more than one cloud provider, and about 40 percent have three or more.

You’re getting diversity -- whether by default or design. It really doesn’t matter at this point. We hope it’s by design. But nonetheless, you’re certainly getting people using different cloud providers to take advantage of the specific capabilities of each.

This is a real mix. You can’t just plunk down some new magic piece of software, and everything is okay, because it might not work with what you already have -- the legacy systems, and the applications you already have. One of the other questions we need to ask is how does improved management embrace legacy systems?

Some 75 percent of our respondents want hybrid management to be from the infrastructure up, which means that it’s got to be based on managing their existing infrastructure, and then extending that management up or out into the cloud. That’s opposed to starting with some cloud management approach and then extending it back down to their infrastructure.

People want to enhance what they currently have so that it can embrace the cloud. It’s enhancing your choice of tiers so you can embrace change. Rather than just deploying something and hoping that all of your current infrastructure -- not just your physical infrastructure but your applications, too -- can use it, we see a lot of people going to a multi-cloud, hybrid deployment model. That entirely makes sense. You’re not just going to pick one cloud model and hope that it will come backward and make everything else work. You start with what you have and you gradually embrace these alternative tools. 

Gardner: We’re creating quite a list of requirements for what we’d like to see develop in terms of this management, optimization, and automation capability that’s maybe two or three years out. Vendors like Microsoft are just now coming out with the ability to manage between their own hybrid infrastructures, their own cloud offerings like Azure Stack and their public cloud Azure.

Where will we look for that breed of fully inclusive, fully intelligent tools that will allow us to get to where we want to be in a couple of years? I’ve heard of one from HPE, it’s called Project New Hybrid IT Stack. I’m thinking that HPE can’t be the only company. We can’t be the only analysts that are seeing what to me is a market opportunity that you could drive a truck through. This should be a big problem to solve.

Who’s driving?

Peters: There are many organizations, frankly, for which this would not be a good commercial decision, because they don’t play in multiple IT areas or they are not systems providers. That’s why HPE is interested, capable, and focused on doing this. 

Many vendor organizations are either focused on the cloud side of the business -- and there are some very big names -- or on the on-premises side of the business. Embracing both is something that is not as difficult for them to do, but really not top of their want-to-do list before they’re absolutely forced to.

From that perspective, the ones that we see doing this fall into two categories. There are the trendy new startups, and there are some of those around. The problem is, it’s really tough to imagine that large enterprises in particular are going to risk [standardizing on them]. They probably will even start to try to write it themselves, which is possible -- unlikely, but possible.

Where I think we will get the list for the other side is some of the other big organizations -- Oracle and IBM spring to mind -- in terms of being able to embrace both on-premises and off-premises. But, at the end of the day, the commonality among those we’ve mentioned is that they are systems companies. They win by delivering the best overall solution and package to their clients, not individual components within it.

And by individual components, I include cloud, on-premises, and applications. If you’re going to look for a successful hybrid IT deployment tool, you probably have to look at a hybrid IT vendor. That last part I think is self-descriptive. 

Gardner: Clearly, not a big group. We’re not going to be seeking suppliers for hybrid IT management by sending requests for proposals (RFPs) to 50 or 60 different companies to find solutions. 

Peters: Well, you won’t need to. Looking not that many years ahead, there will not be that many choices when it comes to full IT provisioning. 

Gardner: Mark, any thoughts about what IT organizations should be thinking about in terms of how to become proactive rather than reactive to the hybrid IT environment and the complexity, and to me the obvious need for better management going forward?

Management ends, not means

Peters: Gain visibility into not just hybrid IT, but into the on-premises and off-premises resources and how you manage them. Those are all parts of the solution, or the answer. The real thing -- and it’s absolutely crucial -- is that you don’t start with those bright, shiny objects. You don’t start with, “How can I deploy more cloud? How can I do hybrid IT?” Those are not good questions to ask. Good questions to ask are, “What do I need to do as an organization? How do I make my business more successful? How does anything in IT become a part of answering those questions?”

In other words, drum roll, it’s the thinking about ends, not means.

Gardner:  If our listeners and readers want to follow you and gain more of your excellent insight, how should they do that? 

Peters: The best way is to go to our website, www.esg-global.com. You can find not just me and all my contact details and materials but those of all my colleagues and the many areas we cover and study in this wonderful world of IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Kansas Development Finance Authority gains peace of mind, end-points virtual shield using hypervisor-level security

Implementing and managing IT security has leaped in complexity for organizations ranging from small and medium-sized businesses (SMBs) to massive government agencies.

Once-safe products used to thwart intrusions have now themselves been exploited. E-mail phishing campaigns are far more sophisticated, leading to damaging ransomware attacks.

What’s more, the jack-of-all-trades IT leaders at mid-market concerns are striving to protect more data types on and off premises, across their workload servers and expanded networks, as well as on the many essential devices of the mobile workforce.

Security demands have gone up, yet there is a continual need for reduced manual labor and costs -- while protecting assets sooner and better.

The next BriefingsDirect security strategies case study examines how a Kansas economic development organization has been able to gain peace of mind by relying on increased automation and intelligence in how it secures its systems and people.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To explore how an all-encompassing approach to security has enabled improved results with fewer hours at a smaller enterprise, BriefingsDirect sat down with Jeff Kater, Director of Information Technology and Systems Architect at Kansas Development Finance Authority (KDFA) in Topeka. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: As a director of all of IT at KDFA, security must be a big concern, but it can’t devour all of your time. How have you been able to balance security demands with all of your other IT demands?

Kater: That’s a very interesting question, and it has a multi-segmented answer. In years past, leading up to the development of what KDFA is now, we faced trends that demanded only very basic anti-spam solutions against the very basic virus threats that came via the web and e-mail.

Kater

What we’ve seen more recently is a growing trend of enhanced security attacks coming through malware and different exploits -- attacks that were once thought impossible but are now the reality.

Therefore, in recent times, my percentage of time dedicated to security has grown from probably five to 10 percent all the way up to 50 to 60 percent of my workload in any given week.

Gardner: Before we get to how you’ve been able to react to that, tell us about KDFA.

Kater: KDFA promotes economic development and prosperity for the State of Kansas by providing efficient access to capital markets through various tax-exempt and taxable debt obligations.

KDFA works with public and private entities across the board to identify financial options and solutions for those entities. We are a public corporate entity operating in the municipal finance market, and therefore we are a conduit finance authority.

KDFA is a very small organization -- but a very important one. Therefore we run enterprise-ready systems around the clock, enabling our staff to be as nimble and as efficient as possible.

There are about nine or 10 of us that operate here on any given day at KDFA. We run on a completely virtual environment platform via Citrix XenServer. So we run XenApp, XenDesktop, and NetScaler -- almost the full gamut of Citrix products.

We have a few physical endpoints, such as laptops and iPads, and we also have the mobile workforce on iPhones as well. They are all interconnected using the virtual desktop infrastructure (VDI) approach.

Gardner: You’ve had this swing, where your demands from just security issues have blossomed. What have you been doing to wrench that back? How do you get your day back, to innovate and put in place real productivity improvements?

Kater: We went with virtualization via Citrix. It became our solution of choice due to not being willing to pay the extra tax, if you will, for other solutions that are on the market. We wanted to be able to be nimble, to be adaptive, and to grow our business workload while maintaining our current staff size.

When we embraced virtualization, the security approaches were very traditional in nature. The old way of doing things worked fantastically for a physical endpoint.

The traditional approaches to security had been on our physical PCs for years. But when that security came over to the virtual realm, it bogged down our systems. It still required updates to be done manually, and it just wasn’t innovating at the same speed as the virtualization, which was allowing us to create new endpoints.

And so, with the maintenance and the updating lagging, the growing threats were no longer being seen by the traditional approaches to security. We had endpoint security in place on our physical stations, but when we went virtual we no longer had endpoint security. We then had to focus on antivirus and anti-spam at the server level.

What we found out very quickly was that this was not going to solve our security issues. We then again faced a lot of growing threats, via e-mail and the web, coming in through malware, spyware, and other exploits that embedded themselves on our file servers -- and then trickled down and moved laterally across our network to our endpoints.

Gardner: Just as your organization went virtual and adjusted to those benefits, the malware and the bad guys, so to speak, adjusted as well -- and started taking advantage of what they saw as perhaps vulnerabilities as organizations transitioned to higher virtualization.

Security for all, by all

Kater: They did. One thing that a lot of security analysts, experts, and end-users forget in the grand scheme of things is that this virtual world we live in has grown so rapidly -- and innovated so quickly -- that the same stuff we use to grow our businesses is also being used by the bad actors. So while we are learning what it can do, they are learning how to exploit it at the same speed -- if not a little faster.

Gardner: You recognized that you had to change; you had to think more about your virtualization environment. What prompted you to increase the capability to focus on the hypervisor for security and prevent issues from trickling across your systems and down to your endpoints?

Kater: Security has always been a concern here at KDFA. And there has been more of a security focus recently, with the latest news and trends. We honestly struggled with CryptoLocker, and we struggled with ransomware.

While we never had to pay out any ransom or anything -- and they were stopped in place before data could be exfiltrated outside of KDFA’s network -- we still had two or three days of either data loss or data interruption. We had to pull back data from an archive; we had to restore some of our endpoints and some of our computers.

We needed to have a solution for our virtual environment -- one that would be easy to deploy, easy to manage, and it would be centrally managed.

As we battled these things over a very short period of time, they were progressively getting worse and worse. We decided that we needed to have a solution for our virtual environment – one that would not only be easy to deploy and easy to manage, but would be centrally managed as well, enabling me to have more time to focus back on my workload -- and not have to worry so much about the security thresholds that had to be updated and maintained via the traditional model.

So we went out to the market. We ran very extensive proof of concepts (POCs), and those POCs very quickly illustrated that the underlying architecture was only going to be enterprise-ready via two or three vendors. Once we started running those through the paces, Bitdefender emerged for us.

I had actually been watching the Hypervisor Introspection (HVI) product development for the past four years, since its inception came with a partnership between Citrix, Intel, the Linux community and, of course, Bitdefender. One thing that was continuous throughout all of that was that in order to deploy that solution you would need GravityZone in-house to be able to run the HVI workloads.

And so we became early adopters of Bitdefender GravityZone, and we were able to see what it could do for our endpoints, our servers, and our Microsoft Exchange Servers. Then, Hypervisor Introspection became another security layer that we were able to build upon the security solution that we had already adopted from Bitdefender.

Gardner: And how long have you had these solutions in place?

Kater: We are going on one and a half to two years for GravityZone. And when HVI went to general availability earlier this year, in 2017, we were one of the first adopters able to deploy it across our production environment.

Gardner: If you had a “security is easy” button that you could pound on your desk, what are the sorts of things that you look for in a simpler security solution approach?

IT needs brains to battle breaches

Kater: The “security is easy” button would operate much like the human brain. It would need that level of intuitive instinct, that predictive insight ability. The button would generally be easily managed, automated; it would evolve and learn with artificial intelligence (AI) and machine learning what’s out there. It would dynamically operate with peaks and valleys depending on the current status of the environment, and provide the security that’s needed for that particular environment.

Gardner: Jeff, you really are an early adopter, and I commend you on that. A lot of organizations are not quite as bold. They want to make sure that everything has been in the market for a long time. They are a little hesitant.

But being an early adopter sounds like you have made yourselves ready to adopt more AI and machine learning capabilities. Again, I think that’s very forward-looking of you.

But tell us, in real terms, what has being an early adopter gotten for you? We’ve had some pretty scary incidents just in the recent past, with WannaCry, for example. What has being an early adopter done for you in terms of these contemporary threats?

Kater: The new threats, including the EternalBlue exploit that happened here recently, are very advanced in nature. Oftentimes when these breaches occur, it takes several months before they have even become apparent. And oftentimes they move laterally within our network without us knowing, no matter what you do.

Some of the more advanced and persistent threats don’t even have to infect the local host with any type of software. They work in the virtual memory space. It’s much different than the older threats, where you could simply reboot or clear your browser cache to resolve them and get back to your normal operations.

Earlier, when KDFA still made use of non-persistent desktops, if the user got any type of corruption on their virtual desktop, they were able to reboot, and get back to a master image and move on. However, with these advanced threats, when they get into your network, and they move laterally -- even if you reboot your non-persistent desktop, the threat will come back up and it still infects your network. So with the growing ransomware techniques out there, we can no longer rely on those definition-based approaches. We have to look at the newer techniques.

As far as why we are early adopters, and why I have chosen some of the principles that I have, I feel strongly that you are really only as strong as your weakest link. I strive to provide my users with the most advanced, nimble, and agnostic solutions possible.

We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations.  

We are able to grow and compute on any device anywhere, anytime, securely, with minimal limitations. It allows us to have discussions about increasing productivity at that point, and to maximize the potential of our smaller number of users -- versus having to worry about the latest news of security breaches that are happening all around us.

Gardner: You’re able to have a more proactive posture, rather than doing the fire drill when things go amiss and you’re always reacting to things.

Kater: Absolutely.

Gardner: Going back to making sure that you’re getting a fresh image and versions of your tools …  We have heard some recent issues around the web browser not always being safe. What is it about being able to get a clean version of that browser that can be very important when you are dealing with cloud services and extensive virtualization?

Virtual awareness, secure browsing

Kater: Virtualization in and of itself has allowed us to remove the physical element of our workstations when desirable and operate truly in that virtual or memory space. And so when you are talking about browsers, you can have a very isolated, a very clean browser. But that browser is still going to hit a website that can exploit your system. It can run in that memory space for exploitation. And, again, it doesn't rely on plug-ins to be downloaded or anything like that anymore, so we really have to look at the techniques that these browsers are using.

What we are able to do with the secure browsing technique is publish, in our case, via XenApp, any browser flavor with isolation out there on the server. We make it available to the users that have access for that particular browser and for that particular need. We are then able to secure it via Bitdefender HVI, making sure that no matter where that browser goes, no matter what interface it’s trying to align with, it’s secure across the board.

Gardner: In addition to secure browsing, what do you look for in terms of being able to keep all of your endpoints the way you want them? Is there a management approach of being able to verify what works and what doesn’t work? How do you try to guarantee 100 percent security on those many and varied endpoints?

Kater: I am a realist, and I realize that nothing will ever be 100 percent secure, but I really strive for that 99.9 percent security and availability for my users. In doing so -- being that we are so small in staff, and being that I am the one that should manage all of the security, architecture, layers, networking and so forth -- I really look for that centralized model. I want one pane of glass to look at for managing, for reporting.

I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, what did it do to me and how was I protected.

I want that management interface and that central console to really tell me when and if an exploit happens, what happened with that exploit, where did it go, and what did it do to me and how was I protected. I need that so that I can report to my management staff and say, “Hey, honestly, this is what happened, this is what was happening behind the scenes. This is how we remediated and we are okay. We are protected. We are safe.”

And so I really look for that centralized management. Automation is key. I want something that will automatically update, with the latest virus and malware definitions, but also download the latest techniques that are seen out there via those innovative labs from our security vendors to fully patch our systems behind the scenes. So it takes that piece of management away from me and automates it to make my job more efficient and more effective.

Gardner: And how has Bitdefender HVI, in association with Bitdefender GravityZone, accomplished that? How big of a role does it play in your overall solution?

Kater: It has been a very easy deployment and management, to be honest. Again, entities large and small, we are all facing the same threats. When we looked at ways to attain the best solution for us, we wanted to make sure that all of the main vendors that we make use of here at KDFA were on board.

And it just so happened this was a perfect partnership, again, between Citrix, Bitdefender, Intel, and the Linux community. That close partnership really developed into HVI, and it is not an evolutionary product. It did not grow from anything else. It really is a revolutionary approach. It’s a different way of looking at security models. It’s a different way of protecting.

HVI allows for security to be seen outside of the endpoint, and outside of the guest agent. It’s kind of an inside-looking-outward approach. It really provides high levels of visibility, detection and, again, it prevents the attacks of today, with those advanced persistent threats or APTs.

With that said, since the partnership between GravityZone and HVI is so easy to deploy, so easy to manage, it really allows our systems to grow and scale when the need is there. And we just know that with those systems in place, when I populate my network with new VMs, they are automatically protected via the policies from HVI.

Given that the security has to be protected from the ground all the way up, we rest assured that the security moves with the workload. As the workload moves across my network, it’s spawned off and onto new VMs. The same set of security policies follows the workloads. It really takes out any human missteps, if you will, along the process because it’s all automated and it all works hand-in-hand together.
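The policy-follows-workload behavior Kater describes can be illustrated with a short sketch. This is a hypothetical model, not the Bitdefender GravityZone API: the point is simply that a VM cloned from a protected parent inherits the same policy set with no manual step in between.

```python
from dataclasses import dataclass, field

# Hypothetical policy model for illustration -- not the GravityZone API.
@dataclass
class VM:
    name: str
    policies: set = field(default_factory=set)

def spawn(parent: VM, name: str) -> VM:
    """Clone a workload; its security policies travel with it automatically."""
    return VM(name, set(parent.policies))

base = VM("web-01", {"hvi-memory-introspection", "antimalware"})
clone = spawn(base, "web-02")
# The clone is protected the moment it exists -- no human step required.
assert clone.policies == base.policies
```

Because inheritance happens at spawn time rather than by later manual assignment, there is no window in which a new VM runs unprotected, which is the "no human missteps" property described above.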

Behind the screens

Gardner: It sounds like you have gained increased peace of mind. That’s always a good thing in IT; certainly a good thing for security-oriented IT folks. What about your end-users? Has the ability to have these defenses in place allowed you to give people a bit more latitude with what they can do? Is there a productivity, end-user or user experience benefit to this?

Kater: When it comes to security agents and endpoint security as a whole, I think a lot of people would agree with me that the biggest drawback when implementing those into your work environment is loss of productivity. It’s really not the end-user’s fault. It’s not a limitation of what they can and can't do, but it’s what happens when security puts an extra load on your CPU, it puts extra load on your RAM; therefore, it bogs down your systems. Your systems don’t operate as efficiently or effectively and that decreases your productivity.

With Bitdefender, and the approaches that we adopted, we have seen very limited, almost immeasurable impacts on our network and on our endpoints. So user adoption has been greater than it ever has been for a security solution.

I’m also able to manipulate our policies within that Central Command Center or Central Command Console within Bitdefender GravityZone to allow my users, at will, if they would like, to see what they are being blocked against, and which websites they are trying to run in the background. I am able to pass that through to the endpoint for them to see firsthand. That has been a really eye-opening experience.

We used to compute daily, thinking we were protected, and that nothing was running in the background. We were visiting the pages, and those pages were acting as though we thought that they should. What we have quickly found out is that any given page can launch several hundred, if not thousands, of links in the background, which can then become an exploit mechanism, if not properly secured.

Gardner: I would like to address some of the qualitative metrics of success when you have experienced the transition to more automated security. Let’s begin with your time. You said you went from five or 10 percent of time spent on security to 50 or 60 percent. Have you been able to ratchet that back? What would you estimate is the amount of time you spend on security issues now, given that you are one and a half years in?

Kater: Dating back 5 to 10 years ago with the inception of VDI, my security footprint as far as my daily workload was probably around that 10 percent. And then, with the growing threats in the last two to three years, that ratcheted it up to about 50 percent, at minimum, maybe even 60 percent. By adopting GravityZone and HVI, I have been able to pull that back down to only consume about 10 percent of my workload, as most of it is automated for me behind the scenes.

Gardner: How about ransomware infections? Have you had any of those? Or lost documents, any other sort of qualitative metrics of how to measure efficiency and efficacy here?

We have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Kater: I am happy to report that since the adoption of GravityZone, and now with HVI as an extra security layer on top of Bitdefender GravityZone, that we have had zero ransomware infections in more than a year now. We have had zero exploits and we have had zero network impact.

Gardner: Well, that speaks for itself. Let’s look to the future, now that you have obtained this. You mentioned earlier your interest in AI, machine learning, automating, of being proactive. Tell us about what you expect to do in the future in terms of an even better security posture.

Safety layers everywhere, all the time

Kater: In my opinion, again, security layers are vital. They are key to any successful deployment, whether you are large or small. It’s important to have all of your traditional security hardware and software in place working alongside this new interwoven fabric, if you will, of software -- and now at the hypervisor level. This is a new threshold. This is a new undiscovered territory that we are moving into with virtual technologies.

As that technology advances, and more complex deployments are made, it’s important to protect that computing ability every step of the way; again, from that base and core, all the way into the future.

More and more of my users are computing remotely, and they need to have the same security measures in place for all of their computing sessions. What HVI has been able to do for me here in the current time, and in moving to the future, is I am now able to provide secure working environments anywhere -- whether that’s their desktop, whether that’s their secure browser. I am able to leverage that HVI technology once they are logged into our network to make their computing from remote areas safe and effective.

Gardner: For those listening who may not have yet moved toward a hypervisor-level security – or who have maybe even just more recently become involved with pervasive virtualization and VDI -- what advice could you give them, Jeff, on how to get started? What would you suggest others do that would even improve on the way you have done it? And, of course, you have had some pretty good results.

Kater: It’s important to understand that everybody’s situation is very different, so identifying the best solutions for everybody is very much on an individual corporation basis. Each company has its own requirements, its own compliance to follow, of course.

Pick two or three vendors and run very stringent POCs; make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network.

The best advice that I can give is pick two or three vendors, at the least, and run very stringent POCs; no matter what they may be, make sure that they are able to identify your security restraints, try to break them, run them through the phases, see how they affect your network. Then, when you have two or three that come out of that and that you feel strongly about, continue to break them down.

I cannot stress the importance of POCs enough. It’s very important to identify that one or two that you really feel strongly about. Once you identify those, then talk to the industry experts that support those technologies, talk to the engineers, really get the insight from the inside out on how they are innovating and what their plan is for the future of their products to make sure that you are on a solid footprint.

Most success stories involve a leap of faith. With machine learning and AI, we are now taking a leap that is backed by factual knowledge and analyzing techniques to stay ahead of threats. No longer are we relying on those virus definitions and those virus updates that can be lagging sometimes.

Gardner: Before we sign off, where do you go to get your information? Where would you recommend other people go to find out more?

Kater: Honestly, I was very fortunate that HVI at its inception fell into my lap. When I was looking around at different products, we just hit the market at the right time. But to be honest with you, I cannot stress enough, again, run those POCs.

If you are interested in finding out more about Bitdefender and its product line up, Bitdefender has an excellent set of engineers on staff; they are very knowledgeable, they are very well-rounded in all of their individual disciplines. The Bitdefender website is very comprehensive. It contains many outside resources, along with inside labs reporting, showcasing just what their capabilities are, with a lot of unbiased opinions.

They have several video demos and technical white papers listed out there, you can find them all across the web and you can request the full product demo when you are ready for it and run that POC of Bitdefender products in-house with your network. Also, they have presales support that will help you all along the way.

Bitdefender HVI will revolutionize your data center security capacity.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

You may also be interested in:

Case study: How HCI-powered private clouds accelerate efficient digital transformation

The next BriefingsDirect cloud efficiency case study examines how a world-class private cloud project evolved in the financial sector.

We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly on-premises model built on hyper-converged infrastructure for a risk-sensitive financial services company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Jim McKittrick joins to help explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, and that total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn't necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?


McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud or staying private, sure. But that delta is closing. So you can have the best of both worlds -- and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively -- or even more cost effectively -- than you can in the public clouds. There are certainly places for both in the market.

Gardner: It's going to vary, of course, from company to company -- and even department to department within a company -- but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.

We have a head start on what's really coming down the pipe -- we are one to two years ahead of the general marketplace.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what's really coming down the pipe -- we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can't mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but it’s in production, it’s live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure -- but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, “Do it again for the whole world, but save us a bunch of money.” The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them “cool stuff.”

You could give control to anybody in the organization across the globe and they would be able to manage it.

They asked, “What's new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.

Learn How to Transform To A Hybrid IT Environment

McKittrick: Absolutely, yes. We also gained failover capabilities, and disaster recovery within these regional data centers. We ended up going from -- depending on whom you spoke to -- somewhere between seven and 19 data centers globally, down to three. The data center footprint shrank massively. Just in the US, we consolidated to one data center and got rid of the other completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key, hyper-convergence and virtualization.

The other key enabling technology was data de-duplication, so the ability to shrink the data and then be able to move it from place to place without crushing bandwidth requirements, because you were only moving the changes, the change blocks.
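The "only moving the change blocks" idea can be sketched in a few lines. This is an illustrative toy, assuming fixed-size blocks and SHA-256 content hashes -- not SimpliVity's actual dedup engine: any block whose hash the remote side already knows is simply never sent.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real dedup engines choose block sizes carefully

def blocks_to_send(data: bytes, known_hashes: set) -> list:
    """Return only the (offset, block) pairs the remote side has not yet seen."""
    new_blocks = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in known_hashes:
            known_hashes.add(digest)
            new_blocks.append((offset, block))
    return new_blocks

known = set()
payload = b"A" * 8192                    # two identical 4 KB blocks
first = blocks_to_send(payload, known)   # only one unique block crosses the wire
second = blocks_to_send(payload, known)  # unchanged data: nothing crosses the wire
```

On the first sync only unique blocks are transferred, and a re-sync of unchanged data transfers nothing at all -- which is why replication stops crushing bandwidth.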

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built in to the solution itself. So we also deployed a separate data archive, but that's different than backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: Now, there is more than just the physical approach to IT, as you described it; there is the budgetary, financial approach. So how do they get the benefit of the OpEx approach that people are fond of with public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn't really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn't work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That's what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That's when hyper-convergence came through.

What we would have spent on just hardware and infrastructure costs, not including network and bandwidth -- would have been $55 million over five years, and we ended up doing it for $15 million.

The cost analysis was already done, because I was already costing it with a refresh, including compute and traditional SAN storage. The numbers I had over a five-year period – just what we would have spent on hardware and infrastructure costs, and not including network and bandwidth – would have been $55 million over five years, and we ended up doing it for $15 million.
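The arithmetic behind those figures is worth making explicit, using only the numbers quoted in the interview:

```python
traditional_cost = 55_000_000  # five-year traditional refresh estimate (hardware only)
hci_cost = 15_000_000          # actual spend on the HCI deployment

savings = traditional_cost - hci_cost
savings_pct = 100 * savings / traditional_cost
# $40 million saved -- roughly a 73 percent reduction over five years
```

Note that the baseline excludes network and bandwidth costs, so the real delta is, if anything, understated.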

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE's portfolio.


McKittrick: This thing is now built and running, and it's been two years since inception. So that's a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it -- and he is a very forward-thinking technologist -- he always liked to say, “It’s just VMware.” So the beauty of it, from their perspective, was that they could just deploy on VMware virtualization. Everyone in our organization knows how to work with VMware; we just deploy that, and they move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers of things that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have -- from the server on down -- folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors and with VMware. That was really it.

Now with HPE, SimpliVity can really build out their roadmap. There is all kinds of innovation that's going to come.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture -- yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds, composable architecture, and automated bursting and scale-out -- and soon combining that with multi-cloud options via HPE New Stack -- this gives them the best of all worlds.


McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let's revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process 

McKittrick: Great question. The technology was the easy part. This was my first global HCI roll out, and I have been in the business well over 20 years. The differences come when you are messing with people -- moving their cheese, and messing with their rice bowl. It’s profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on board. They must understand what's happening, why it's happening, and then you try to get all of those people pulling in the same direction. Otherwise, you end up in a massive morass and things don't get done, or they become almost unmanageable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcomes

The next BriefingsDirect healthcare transformation use-case discussion focuses on how an ecosystem approach to big data solutions brings about improved healthcare informatics-driven outcomes.

We'll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.


Here are some excerpts:

Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?

Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution -- or the total value proposition -- for our customers.

Gardner: I'm sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?

Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.

Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.

Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.

Big data value add

Heemskerk: What is very interesting for big data is what happens when you combine it with value-based care. For example, nowadays a hospital is not reimbursed for every procedure that it does -- the value is based more on the total outcome of how a patient recovers.

This means that more analytics need to be gathered across different elements of the process chain before reimbursement takes place. In that sense, analytics become very important for hospitals in measuring whether things are being done efficiently, and in determining whether the costs are acceptable.

Gardner: The same data that can be used to be more efficient can also be used for better healthcare outcomes -- understanding the path of a disease, or the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.

Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kinds of different things.

Learn About HPE Digital Solutions That Drive Healthcare and Life Sciences

Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?

Healthy hardware relationships 

Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced from HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how we can jointly go to market.

As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.

Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?

Heemskerk: Yes, that is exactly the case, indeed. We recognized that we are not so much focusing on problems related to just the clinical implications, and we are not just focusing on the problems that HPE is facing -- the IT infrastructure and the connectivity side of the value chain. Instead, we are really looking at the problems that the C-suite-level healthcare executives are facing.

You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network and that creates all kinds of challenges, both on the clinical application layer, but also on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?

By addressing these kinds of questions and jointly going to our customers with HPE, we can improve user experiences for the customers, we can create better services, we have optimized these solutions, and then we can deliver a lot of time savings for the hospitals as well.


Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization -- the people, the process, and the culture -- the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.

Who else makes up the ecosystem? It takes more than two players to make an ecosystem.

Heemskerk: Yes, that's very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.

Of course, we think that the Philips healthcare solutions are quite often the best, jointly focused with the solutions from HPE, but from time to time you can be partnering with different vendors.

Besides that, we don't have all of the clinical applications. By partnering with other vendors in the ecosystem, sometimes you can enhance the solutions that we have -- think about 3D solutions and 3D printing solutions, for example.

Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?

Saving time, money, and people

Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.

First of all, addressing standardization across the clinical application layer means that a customer doesn't have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that's already a big savings.

Secondly, by harmonizing the clinical applications and making more effective use of them, you can drive the total cost of ownership down.

Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.

Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better -- where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?


Heemskerk: What would be very interesting for me to see is whether we can create some kind of patient-centric data file for each patient. You see that consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, Apple Watch, etc. coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file -- such as chronic disease information, now that people are being monitored much more, and much more often.

If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using etc., and you have that all in this patient-centric file -- it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data is growing, you will have smarter algorithms that can also interpret that data – and so artificial intelligence (AI) will become much more important.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

·       How IoT capabilities open new doors for Miami Telecoms Platform Provider Identidad

·       DreamWorks Animation crafts its next era of dynamic IT infrastructure

·       How Enterprises Can Take the Ecosystem Path to Making the Most of Microsoft Azure Stack Apps

·       Hybrid Cloud ecosystem readies for impact from Microsoft Azure Stack

·       Converged IoT systems: Bringing the data center to the edge of everything

·       IDOL-powered appliance delivers better decisions via comprehensive business information searches

·        OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

·       Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

·       How lastminute.com uses machine learning to improve travel bookings user experience

·       HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

How a Florida school district tames the wild west of education security at scale and on budget

Bringing a central IT focus to large public school systems has always been a challenge, but bringing a security focus to thousands of PCs and devices has been compared to bringing law and order to the Wild West.

For the Clay County School District in Florida, a team of IT administrators is grabbing the bull by the horns nonetheless to create a new culture of computing safety -- without breaking the bank.

The next BriefingsDirect security insights discussion examines how Clay County is building a secure posture for its edge, network, and data centers while allowing the right mix of access for the exploration necessary in an educational environment.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn how to ensure that schools are technically advanced and secure at low cost and at high scale, we're joined by Jeremy Bunkley, Supervisor of the Clay County School District Information and Technology Services Department; Jon Skipper, Network Security Specialist at the Clay County School District, and Rich Perkins, Coordinator for Information Services at the Clay County School District. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the biggest challenges to improving security, compliance, and risk reduction at a large school district?

Bunkley: I think the answer actually scales across the board. The problem even bridges into businesses. It’s the culture of change -- of making people recognize security as a forethought, instead of an afterthought. It has been a challenge in education, which can be a technology laggard.

Getting people to start the recognition process of making sure that they are security-aware has been quite the battle for us. I don’t think it’s going to end anytime soon. But we are starting to get our key players on board with understanding that you can't clear-text Social Security numbers and credit card numbers and personally identifiable information (PII). It has been an interesting ride for us, let’s put it that way.

Gardner: Jon, culture is such an important part of this, but you also have to have tools and platforms in place to help give reinforcement for people when they do the right thing. Tell us about what you have needed on your network, and what your technology approach has been?

Skipper: Education is one of those weird areas where software development has always been lacking on the security side of the house. Security has never even been inside the room. So one of the things we have tried to do in education, at least in the Clay County School District, is to modify that view through change management. We are trying to introduce a security focus. We try to interject ourselves and highlight areas that might be a bad practice.

One of our vendors uses plain text for passwords, and so we went through with them and showed them how that’s a bad practice, and we made a little bit of improvement with that.

I evaluate our policies and how we manage the domains, perhaps finding some things from a long time ago that are no longer needed, so we can pull that information out -- whereas before, someone had put all of the Social Security numbers into a document that was no longer needed. We have been trying really hard to figure that stuff out and then knock it down as much as we can.

Access for all, but not all-access

Gardner: Whenever you are trying to change people's perceptions, behaviors, culture, it’s useful to have both the carrot and a stick approach.

So to you Rich, what's been working in terms of a carrot? How do you incentivize people? What works in practice there?

Perkins: That's a tough one. We don't really have a carrot that we use. We basically say, “If you are doing the wrong things, you are not going to be able to use our network.”  So we focus more on negatives.

The positives would be you get to do your job. You get to use the Internet. We don't really give them something more. We see security as directly intertwined with our customer service. Every person we have is our customer and our job is to protect them -- and sometimes that's from themselves.

So we don't really have a carrot-type system. We don't, for example, let students play games as a reward for not causing problems. We give everybody the same access and treat everybody the same. Either you are a student and you get this level of access, or you are a staff member and you get this level of access, or you don't get access.

Gardner: Let’s get background on the Clay County School District. Tell us how many students you have, how many staff administrators, the size and scope of your school district?

Bunkley: Our school district is the 22nd largest in Florida. We are right on the edge of small and medium for Florida, which in most states would make us a very large school district. We run about 38,500 students.

And as far as our IT team goes -- which covers our student information system, our Enterprise Resource Planning (ERP) system, security, desktop support, network infrastructure support, and our web services -- we have about 48 people total in our department.

Our scope is literally everything. For some reason IT means that if it plugs into a wall, we are responsible for it. That's generally a true statement in education across the board, where the IT staff tends to be a Jack-of-all-trades, and we fix everything.

Practical IT

Gardner: Where you are headed in terms of technology? Is there a one-to-one student-to-device ratio in the works? What sort of technology do you enable for them?

Bunkley: I am extremely passionate about this, because the one-to-one scenario seems to be the buzzword, and we generally despise buzzwords in this office and we prefer a more practical approach.

The idea of one-to-one is itself flawed to me, because if I just throw a device into a student's hand, what am I actually doing besides throwing a device into a student's hand? We haven't trained them. We haven't given them the proper platform. All we have done is throw technology at them.

And when I hear the terms, well, kids inherently know how to use technology today; it kind of just bothers me, because kids inherently know how to use social media, not technology. They are not production-driven, they are socially driven, and that is a sticking point with me.

We are in fact moving to one-to-one, but in a nontraditional sense. We have established a one-to-one platform so we can introduce a unified platform that all students and employees see through a portal system. We happen to use ClassLink; there are various other vendors out there, that's just the one we use.

We have integrated that in moving to Google Apps for Education and we have a very close relationship with Google. It’s pretty awesome, to be quite honest with you.

So we are moving in the direction of Chromebooks, because it’s just a fiscally more responsible move for us.

I know Microsoft is coming out with Windows 10 S, it’s kind of a strong move on their part. But for us, just because we have the expertise on the Google Apps for Education, or G Suite, it just made a lot of sense for us to go that direction.

So we are moving in one-to-one now with the devices, but the device is literally the least important -- and the last -- step in our project.

Non-stop security, no shenanigans

Gardner: Tell us about the requirements now for securing the current level of devices, and then for the new one. It seems like you are going to have to keep the airplane flying while changing the wings, right? So what is the security approach that works for you that allows for that?

Skipper: Clay County School District has always followed trends as far as devices go. So we actually have a good mixture of devices in our network, which means that no one solution is ever the right solution.

So, for example, we still have some iPads out in our networks, we still have some older Apple products, and then we have a mixture of Chromebooks and also Windows devices. We really need to make sure that we are running the right security platform for the full environment.

We are transitioning more and more to a take-home philosophy -- that's where we as an IT department see this going -- so that if the decision is made to send the entire student population home, we are going to be ready to go.

We have coordinated with our content filter company, and they have some extensions that we can deploy that lock the Chromebooks into a filtered state regardless of their network. That's been really successful in identifying, and maybe blocking, students' late-night searches. We have also been able to identify some shenanigans that might be taking place due to some interesting web searches they might do over YouTube, for example. That's worked really well.

Our next objective is to figure out how to secure our Windows devices and possibly even the Mac devices. While our content filter does a good job as far as securing the content on the Internet, it’s a little bit more difficult to deploy into a Windows device, because users have the option of downloading different Internet browsers. So, content filtering doesn’t really work as well on those.

I have deployed Bitdefender to my laptops, and also to take-home Apple products. That allows me to put in more content filtering, and use that to block people from malicious websites that maybe the content filter didn’t see or was unable to see due to a different browser being used.

In those aspects we definitely are securing our network down further than it ever has been before.

Block and Lock

Perkins: With Bitdefender, one of the things we like is that if we have those devices go off network, we can actually have it turn on the Bitdefender Firewall that allows us to further lock down those machines or protect them if they are in an open environment, like at a hotel or whatever, from possible malicious activity.

And it allows us to block executables at some point. So we can actually go in and say, “No, I don’t want you to be able to run this browser, because I can’t do anything to protect you. Or I can’t watch what you do, or I can’t keep you from doing things you shouldn’t do.” So those are all very useful tools in a single pane of glass that we can see all of those devices at one time and monitor and manage. It saves us a lot of time.

Bunkley: I would follow up on that with a base concept, Dana, and our base concept is that of an external network. We come at it from the idea that we are an everywhere network. We are not only aiming to defend our internal network while you are here, with maybe some protection while you are at home; we are literally an externally built network, where our network extends directly down into the student's and teacher's home.

We have gone as far as moving everything we physically can out of this network, right down to our firewall. We are moving our domain controllers external to the network to create literally an everywhere network. And so our security focus is not just internal; it is focused on external first, then internal.

Gardner: With security products, what have you been using, what wasn't working, and where do you expect to go next given those constraints?

No free lunch

Perkins: Well, we can tell you that “free” is not always the best option; as a matter of fact, it’s almost never a good option, but we have had to deal with it.

We were previously using an antivirus called Avast, and it’s a great home product. We found out that it has not been the best business-level product. It’s very much marketed to education, and there are some really good things about it. Transferring away from it hasn’t been the easiest because it’s next to impossible to uninstall. So we have been having some problems with that.

We have also tested some other security measures and programs along the way that haven’t been so successful. And we are always in the process of evaluating where we are. We are never okay with status quo. Even if we achieve where we want to be, I don't think any of us will be satisfied, and that’s actually something that a lot of this is built on -- we always want to go that step further. And I know that’s cliché, but I would say for an institution of this size, the reason we are able to do some of the stuff is the staff that has been assembled here is second to none for an educational institution.

So even in the processes that we have identified, which were helter-skelter before we got here, we have some more issues to continue working out, but we won’t be satisfied with where we are even if we achieve the task.

Skipper: One of the things that our office actually hates is just checking the box on a security audit. I mean, we are very vocal to the auditors when they come in. We don’t do things just to satisfy their audit. We actually look at the audit and we look at the intent of the question and if we find merit in it, we are going to go and meet that expectation and then make it better. Audits are general. We are going to exceed and make it a better functioning process than just saying, “Yes, I have purchased an antivirus product,” or “I have purchased x.” To us that’s unacceptable.

Bunkley: Audits are a good thing, and nobody likes to do them because they are time-consuming. But you do them because they are required by law, for our institution anyway. So instead of just having a generic audit that we ignore, we have adopted the concept of the audit as a very useful self-reflection tool. It's nice to not have the same set of eyes on your work all the time. And instead of taking offense at someone coming in and saying, "You are not doing this well enough," we have literally changed our internal culture here: audits are not a bad thing; audits are a desired thing.

Gardner: Let’s go around the table and hear how you began your journey into IT and security, and how the transition to an educational environment went.

IT’s the curriculum

Bunkley: I started in the banking industry. Those hours were crazy and the pressure was pretty high. So as soon as I left that after a year, I entered education, and honestly, I entered education because I thought the schedule was really easy and I kind of copped out on that. Come to find out, I am working almost as many hours, but that’s because I have come to love it.

This is my 17th year in education, so I have been in a few districts now. Wholesale change is what I have been hired to do before, and that's what I was hired here in Clay to do. We want to change the culture, and make IT part of the instruction instead of a separate segment of education.

We have to be interwoven into everything, otherwise we are going to be on an island -- and the last time I heard, the definition of education is to educate children. So IT can never by itself be a high-functioning department in education. We have decided instead to go to instruction, go to professional development, go to administration, and interject ourselves.

Gardner: Jon, tell us about your background and how the transition has been for you.

Skipper: I was at active-duty Air Force until 2014 when I retired after 20 years. And then I came into education on the side. I didn’t really expect this job, wasn’t mentally searching for it. I tried it out, and that was three years ago.

It’s been an interesting environment. Education, and especially a small IT department like this one, is one of those interesting places where you can come and really expand on your weak areas. So that’s what I actually like about this. If I need to practice on my group policy knowledge, I can dive in there and I can affect that change. Overall this has been an effective change, totally different from the military, a lot looser as far as a lot of things go, but really interesting.

Gardner: Rich, same question to you: your background, and how did the transition go?

Perkins: I spent 21 years in the military; I was Navy. When I retired in 2010, I went to work for a smaller district in education, mainly because they were the first ones to offer me a job. In that smaller district there was no big department like the one we have here -- Jeremy understands, from where he came from -- so it was pretty much me doing every aspect of it. You do a little security, you do a little bit of everything, which I enjoyed, because you are your own boss, but you are not your own boss.

You still have people residing over you and dictating how you are going to work, but I really enjoyed the challenge. Coming from IT security in the military and then coming into education, it’s almost a role reversal where we came in and found next to no policies.

I am used to a black-and-white world. So we are trying to interject some of that and some of the security best practices into education. You have to be flexible because education is not the military, so you can’t be that stringent. So that’s a challenge.

Gardner: What are you using to put policies in place and enforce them? How does that work?

Policy plans

Perkins: From a [Microsoft] Active Directory side, we use group policy like most people do, and we try to automate it as much as we can. On the student side, we are switching over very heavily to Google. They effectively have their own version of Active Directory with group policy. And then I will let Jon speak more to the security side, though we have used various programs, like PDQ for our patch management, which allows us to push out updates. We use some logging systems from ManageEngine. And then, as we have said before, we use Bitdefender to push out a lot of policy and security as well, and we have been reevaluating some other options.

We also use SolarWinds to monitor our network and we actually manage changes to our network and switching using SolarWinds, but on the actual security side, I will let Jon get more specific for you.

Skipper: When we came in … there was a fear that having too much in policy equated to too much auditing overhead. One of the first things we did was identify what we could lock down, and the easiest one was the content filter.

The content filter met such stipulations as making sure adult material is not accessible on the network. We had that down. But it didn't really take into account the dynamics of the Internet -- sites pop up every minute or second -- and how do you maintain that for unclassified and uncategorized sites?

So one of the things we did was look at whether a vendor had a better product for that aspect of it, and we got that working; I think that's been working a lot better. Then, with content filtering down, we moved on to the rest of the network. A lot of that is about finding someone else who is already doing it well, borrowing their work, and making it your own.

We looked into some of the bigger school districts to see how they are doing it -- I think Chicago and Los Angeles -- and reviewed some of their policies where we could find them. I also found a lot from higher education; some of the universities' policies are much more along the lines of where we want to be. I think they have it better than some of the K-12s do.

So we have been going through those, and we are in an active rewrite of our policies right now. We are taking all of those in, looking at them, trying to figure out which ones work in our environment, and then making sure we do a really good search and replace.

Gardner: We have talked about people, process and technology. We have heard that you are on a security journey and that it’s long-term and culturally oriented.

Let's look at this then as to what you get when you do it right, particularly vis-à-vis education. Do you have any examples of where you have been able to put in the right technology, add some policy and process improvements, and then culturally attune the people? What does that get for you? How do you turn a problem student into a computer scientist at some point? Tell us some of the examples of when it works, what it gets you.

Positive results

Skipper: When we first got in here, we were a Microsoft district. We had some policies in place to help prevent data loss, and stuff like that.

One of the first things we did was review those policies and activate them, and we started getting some hits. We were surprised at some of the hits that we saw, and at what we saw going out. We already knew we were moving to Google, so we continued the process there.

We researched a lot, and one of the things we discovered is that with just a minor tweak to a user's procedures, we could introduce that user to email encryption and get them used to using it. With the Gmail solution, we are able to add an extension that looks at each email as it goes out and finds keywords -- or what may be PII -- and automatically encrypts the email, preventing those kinds of breaches from going out there. So that's really been helpful.
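A minimal sketch of the kind of check such an outbound-mail extension performs. The patterns and keywords below are invented for illustration; the actual extension's detection rules are not described in the discussion:

```python
import re

# Hypothetical patterns an outbound-mail filter might flag as PII.
# Real deployments would use a much larger, tuned rule set.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # US SSN format
    re.compile(r"\b(?:ssn|date of birth|student id)\b", re.IGNORECASE),
]

def should_encrypt(body: str) -> bool:
    """Return True if the outgoing message appears to contain PII."""
    return any(pattern.search(body) for pattern in PII_PATTERNS)

print(should_encrypt("Here is the roster for Friday."))          # False
print(should_encrypt("His SSN is 123-45-6789, please update."))  # True
```

A mail hook would call `should_encrypt` on each outgoing message and route matches through the encryption path instead of sending in the clear.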

As far as taking a student who may be on the wrong path and reeducating them and bringing them back into the fold, Bitdefender has actually helped out on that one.

We had a student a while back who went out to YouTube to find out how he could crash the school network -- just a simple search -- and he found about five links. He researched those links and found that a particular batch file of a certain type would crash a school server.

He implemented it and started trying to launch that attack, and Bitdefender was able to see the batch file, see what it did, and prevent it. By quarantining the file, Bitdefender let me get that reported very quickly from the moment he introduced the attack, and it identified the student. We were able to sit down with the administrators and talk to the student about that process and educate him on the dangers of attacking a school network and the possible repercussions of it.

Gardner: It certainly helps when you can let them know that you are able to track and identify those issues, and then trace them back to an individual. Any other anecdotes about where the technology process and people have come together for a positive result?

Applied IT knowledge for the next generation

Skipper: One of the things that's really worked well for the school district is what we call Network Academy. It's taught by one of our local retired master chiefs, and he goes in there and teaches high school students all the way up to earning a Cisco Certified Network Associate (CCNA)-level IT certification.

If a student comes in and tries hard enough, they will figure it out, and when they graduate they can leave with a CCNA, which is pretty awesome. A high school student can walk away with a pretty major industry certification.

We like to try and grab these kids as soon as they leave high school, or even before they leave high school, and start introducing them to our network. They may have a different viewpoint on how to do something that’s revolutionary to us.

But we like having that aspect of it: we can educate those kids who are coming in and getting their industry certifications, and we are able to utilize them before they move on to a college or another job that pays more than we do.

Bunkley: Charlie Thompson leads this program that Jon is speaking of, and actually over half of our team has been through the program. We didn’t create it, we have just taken advantage of the opportunity. We even tailor the classes to some of the specific things that we need. We have effectively created our own IT hiring pipeline out of this program.

Gardner: Next let’s take a look to the future. Where do you see things going, such as more use of cloud services, interest in unified consoles and controls from the cloud as APIs come into play more for your overall IT management? Encryption? Where do you take it from here?

Holistic solutions in the cloud

Bunkley: Those are some of the areas we are focusing on heavily as we move to that "anywhere network." The unified platform for management is going to be a big deal to us -- it is a big deal to us already. Encryption is something we take very seriously because we have a team of eight protecting the data of about 42,000 users.

Consider the perfect cybercrime: reaching down to a 7th or 8th grader, stealing all of their personal information, and taking that kid's identity and using it. That kid won't even know their identity has been stolen.

We consider that a very serious charge of ours to take on. So we will continue to improve our protection of the students’ and teachers’ PII -- even if it sometimes means protecting them from themselves. We take it very seriously.

As we move to the cloud, that unified management platform leads to a more unified security platform. As the operating systems continue to mature, they seem to be going different ways; what's good for Mac is not always good for Chrome, which is not always good for Windows. But as we move forward with our projects we bring everything back to that central point: can the three be operated from a single point of connection, so that we can save money moving forward? Just because it's a cool technology and we want to do it doesn't mean it's the right thing for us.

Sometimes we have to choose an option that we don’t necessarily like as much, but pick it because it is better for the whole. As we continue to move forward, everything will be focused on that centralization. We can remain a small and flexible department to continue making sure that we are able to provide the services needed internally as well as protect our users.

Skipper: I think Jeremy hit it pretty solid on that one. As we integrate more with the cloud services -- Google, etc. -- we are utilizing those APIs, and we are leading the vendors that we use and pushing them into new areas. Lightspeed, for instance, is integrating more and more with Google and utilizing their API to ensure content filtering works there -- even to the point of mobile device management (MDM) that is more integrated into the Google and Apple platforms -- to make sure that students are well protected and have all the tools they need at any given time.

We are really leaning heavily on more cloud services, and also the interoperability between APIs and vendors.

Perkins: Public education is changing more to the realm of college education where the classroom is not a classroom -- a classroom is anywhere in the world. We are tasked with supporting them and protecting them no matter where they are located. We have to take care of our customers either way.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

You may also be interested in:

How SAP Ariba became a first-mover as Blockchain comes to B2B

The next BriefingsDirect digital business thought leadership panel discussion examines the major opportunity from bringing Blockchain technology to business-to-business (B2B) procurement and supply chain management.

We will now explore how Blockchain’s unique capabilities can provide comprehensive visibility across global supply chains and drive simpler verification of authenticity, security, and ultimately control.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how Blockchain is poised to impact and improve supply chain risk and management, we're joined by Joe Fox, Senior Vice President for Business Development and Strategy at SAP Ariba, and Leanne Kemp, Founder and CEO of Everledger, based in London. The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Joe, Blockchain has emerged as a network methodology, running crypto currency Bitcoin, as most people are aware of it. It's a digitally shared record of transactions maintained by a network of computers, not necessarily with centralized authority. What could this be used for powerfully when it comes to gaining supply chain integrity?

Fox: Blockchain did start in the Bitcoin area, as peer-to-peer consumer functionality. But a lot of the capabilities of Blockchain have been recognized as important for new areas of innovation in the enterprise software space.

 Fox

Those areas of innovation are around “trusted commerce.” Trusted commerce allows buyers and sellers, and third parties, to gain more visibility into asset-tracking. Not just asset tracking in the context of the buyer receiving and the seller shipping -- but in the context of where is the good in transit? What do I need to do to protect that good? What is the transfer of funds associated with that important asset? There are even areas of other applications, such as an insurance aspect or some kind of ownership-proof.

Gardner: It sounds to me like we are adding a lot of metadata to a business process. What's different when you apply that through Blockchain than if you were doing it through a platform?

Inherit the trust

Fox: That's a great question. Blockchain is like the cloud from the perspective of it’s an innovation at the platform layer. But the chain is only as valuable as the external trust that it inherits. That external trust that it inherits is the proof of what you have put on the chain digitally. And that includes that proof of who has taken it off and in what way they have control.

As we associate a chain transaction, or a posting to the ledger, with its original transactions within the SAP Ariba Network, we are actually adding a lot of provenance to that single Blockchain record. That's the real key: marrying the transactional world and the B2B world with this new trusted-commerce capability that comes with Blockchain.

Gardner: Leanne, we have you here as a prime example of where Blockchain is being used outside of its original adoption. Tell us first about Everledger, and then what it was you saw in Blockchain that made you think it was applicable to a much wider business capability.

Kemp: Everledger is a fast-moving startup using the best of emerging technology to assist in the reduction of risk and fraud. We began in April of 2015, so it's actually our birthday this week. We started in the world of diamonds, where we apply Blockchain technology to bring transparency to a once-opaque market.

 Kemp

And what did I see in the technology? At the very core of cryptocurrency, they were solving the problem of double-spend. They were solving the problem of transfer of value, and we could translate those two very powerful concepts into the diamond industry.

At the heart of the diamond industry, beyond the physical object itself, is certification; certificates in the diamond industry are the currency of trade. Diamonds are listed on websites around the world, and they are mostly sold on the merit of their certification. We were able to see the potential of the cryptocurrency, but we could decouple the currency from the ledger and use the essence of the currency as a way to transfer value, or transfer ownership or custody. And, of course, diamonds are a girl's best friend, so we might as well start there.

Dealing with diamonds

Gardner: What was the problem in the diamond industry that you were solving? What was not possible that now is?

Kemp: The diamond industry boasts some pretty impressive numbers. First, it's been around for 130 years. Most of the relationships among buyers and sellers have survived generation upon generation based on a gentleman's handshake and trust.

The industry itself has been bound tightly with those relationships. As time has passed and generations have passed, what we are starting to see is a glacial melt. Some of the major players have sold off entities into other regions, and now that gentleman's handshake needs to be transposed into an electronic form.

Some of the major players in the market, of course, still reside today. But most of the data under their control sits in a siloed environment. Even the machines that are on the pipeline that help provide identity to the physical object are also black-boxed in terms of data.

We are able to bring a business network to an existing market. It's global: some 81 countries around the world trade in rough diamonds. And, of course, the value of the diamonds increases as they pass through their evolutionary chain. We are able to bring an aggregated set of data. Not only that, we transpose the human element of trust -- the gentleman's handshake, the chit of paper, and the promise to pay that has largely existed and has built 130 years of trade.

We are now able to transpose that into a set of electronic-form technologies -- Blockchain, smart contracts, cryptography, machine vision -- and we are able to take forward a technology platform that will see transactional trust being embedded well beyond my lifetime -- for generations to come.

Gardner: Joe, we have just heard how this is a problem-solution value in the diamond industry. But SAP Ariba has its eyes on many industries. What is it about the way things are done now in general business that isn't good enough but that Blockchain can help improve?

Fox: As we have spent years at Ariba solving procurement problems, we identified some of the toughest. When I saw Everledger, it occurred to me that they may have cracked the nut on one of the toughest areas of B2B trade -- and that is true understanding, visibility, and control of asset movement.

It dawned on me, too, that if you can track and trace diamonds, you can track and trace anything. I really felt like we could team up with this young company and leverage the unique way they figured out how to track and trace diamonds and apply that across a huge procurement problem. And that is, how do a supplier and a buyer manage the movement of any asset after they have purchased it? How do we actually associate that movement of the asset back to its original transactions that approved the commit-to-pay? How do you associate a digital purchase order (PO) with a digital movement of the asset, and then to the actual physical asset? That's what we really are teaming up to do.

That receipt of the asset has been a dark space in the B2B world for a long time. Sure, you can get a shipping notice, but most businesses don't do goods receipts. And as the asset flows through the supply chain -- especially the more expensive the item is -- that lack of visibility and control causes significant problems. Maybe the most important one is: overpaying for inventory to cover actual lost supply chain items in transit.

I talked to a really large UK-based telecom company, and they told me that with what we are going to do with Everledger, just for their fiber optics, they could cut their buying in half. Why? Because they overbuy fiber optics to make sure they are never short on fiber-optic inventory.

That precision of buying and delivery applies across the board to all merchants and all supply chains, even middle of the supply chain manufacturers. Whenever you have disruption to your inbound supply, that’s going to disrupt your profitability.

Gardner: It sounds as if what we are really doing here is getting a highly capable means -- that’s highly extensible -- to remove the margin of error from the tracking of goods, from cradle to grave.

Chain transactions

Fox: That’s exactly right. And the Internet is the enabler, because Blockchain is everywhere. Now, as the asset moves, you have the really cool stuff that Everledger has done, and other things we are going to do together – and that’s going to allow anybody from anywhere to post to the chain the asset receipt and asset movement.

For example, with a large container coming from overseas, you will have the chain record of every place that container has been. If it doesn't show up at a dock, you now have visibility as the buyer that there is a supply chain disruption. That chain being out on the Internet, at a layer that’s accessible by everyone, is one of the keys to this technology.

We are going to be focusing on connecting the fabric of the chain together with Hyperledger. Everledger builds on the Hyperledger platform. The fabric that we are going to tie into is going to directly connect those block posts back to the original transactions, like the purchase order, the invoice, the ship notice. Then the companies can see not only where their asset is, but also view it in context of the transactions that resulted in the shipment.
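The linkage described above -- each ledger post tied back to the purchase order, invoice, or ship notice behind it -- can be illustrated with a toy hash chain. This is a simplification for intuition only: Hyperledger's actual data model and consensus are far richer, and the asset and document identifiers below are invented.

```python
import hashlib
import json

def _hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AssetLedger:
    """Toy append-only chain: each post records an asset movement plus the
    business documents (PO, invoice, ship notice) it relates to."""

    def __init__(self):
        self.chain = []

    def post(self, asset_id: str, location: str, documents: dict) -> None:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"asset": asset_id, "location": location,
                  "documents": documents, "prev": prev}
        self.chain.append({"record": record, "hash": _hash(record)})

    def verify(self) -> bool:
        """Recompute every hash and back-link; any tampering breaks the chain."""
        prev = "0" * 64
        for block in self.chain:
            if block["record"]["prev"] != prev or _hash(block["record"]) != block["hash"]:
                return False
            prev = block["hash"]
        return True

ledger = AssetLedger()
ledger.post("container-42", "Port of Shanghai", {"po": "PO-1001"})
ledger.post("container-42", "Port of Rotterdam", {"po": "PO-1001", "asn": "ASN-77"})
print(ledger.verify())  # True
```

Because each block carries the hash of its predecessor, altering any earlier movement record invalidates every hash that follows it, which is what makes the posted trail auditable by buyer and seller alike.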

Gardner: So the old adage -- trust but verify -- we can now put that to work and truly verify. There's news taking place here at SAP Ariba LIVE between Everledger and SAP Ariba. Tell us about that, and how the two companies -- one quite small, one very large -- are going to work together.

Fox: Ariba is all-in on transforming the procurement industry, the procurement space, the processes of procurement for our customers, buyers and sellers, and we are going to partner heavily with key players like Everledger.

Part of the announcement is this partnership with Everledger around track and trace, but it is not limited to track and trace. We will leverage what they have learned across our platform of $1 trillion a year in spend, with 2.5 million companies trading assets with each other. We are going to apply this partnership to many other capabilities within that.

Kemp: I am very excited. It's a moment in time that I think I will remember for years to come. In March we also made an important announcement with IBM on some of the work that we have done beyond identifying objects. And that is to take the next step around ensuring that we have an ethical trade platform, meaning one that is grounded in cognitive compliance.

We will be able to identify the asset, but also know, for example in the diamond industry, that a diamond has passed through the right channels, paid the dutiful taxes that are due as a part of an international trade platform, and ensure all compliance is hardened within the chain.

I am hugely excited about the opportunity that sits before me. I am sincerely grateful that such a young company has been afforded the opportunity to really show how we are going to shine.

Gardner: When it comes to open trade, removing friction from commerce, these have been goals for hundreds of years. But we really seem to be onto something that can make this highly scalable, very rich -- almost an unlimited amount of data applied to any asset, connected to a ledger that’s a fluid, movable, yet tangible resource.

Fox: That’s right.

Gardner: So where do we go next, Joe? If the sky is the limit, describe the sky for me? How big is this, and where can you take it beyond individual industries? It sounds like there is more potential here.

Reduced friction costs

Fox: There is a lot of potential. If you think about it, Blockchain is an evolution of the Internet; we are going to be able to take advantage of that.

The new evolution is that it's a structured capability across the Internet itself. It’s going to be open, and it’s going to be able to allow companies to ledger their interactions with each other. They are going to be able, in an immutable way, to track who owns which asset, where the assets are, and be able to then use that as an audit capability.

That's all very important to businesses, and until now the Internet itself has not really had a structure for business. It's been open, the Wild West. This structure for business is going to help with what I call trusted commerce because in the end businesses establish relationships because they want to do business with each other, not based on what technology they have.

Another key fact about Blockchain is that it’s going to reduce friction in global B2B. I always like to say if you just accelerated B2B payments by a few days globally, you would open up Gross Domestic Product (GDP), and economies would start growing dramatically. This friction around assets has a direct tie to how slowly money moves around the globe, and the overall cost and friction from that.

So how big could it go? Well, I think that we are going to innovate together with Everledger and other partners using the Hyperledger framework. We are going to add every buyer and seller on the Ariba Network onto the chain. They are just going to get it as part of our platform.

Then we are going to begin ledgering all the transactions that they think make sense between themselves. We are going to release a couple of key functions, such as smart contracts, so their contract business rules can be applicable in the flow of commerce -- at the time commerce is happening, not locked up in some contract, or in some drawer or Portable Document Format (PDF) file. We are going to start with those things.

I don't know what applications we are going to build beyond that, but that's the excitement of it. I think the fact that we don't know is the big play.

Gardner: From a business person's perspective, they probably don't care too much that it's Blockchain enabling this, just like a lot of people didn't care 20 years ago that it was the Internet allowing them to shop online or send emails to anybody anywhere. Rather than the technology itself, what's the business benefit that people should be thinking about?

Fox: Everybody wants digital trust, right? Leanne, why don’t you share some of the things you guys have been exploring?

Making the opaque transparent

Kemp: In the diamond industry, there is fraud related to document tampering. Typically paper certificates exist across the backbone, so it’s very easy to be able to transpose those into a PDF and make appropriate changes for self-gain.

Double-financing of the pipeline is a very real problem: with invoices and accounts receivable, people have the ability to have banks finance the same invoice two, three, or four times.

We have issues with round-tripping of diamonds through countries, where transfer pricing isn't declared correctly, along with the avoidance of tax and duties.

All of these issues are the dark side of the market. But now we have the ability to bring transparency around any object, particularly diamonds -- the one commodity that has yet to have true financial products wrapped around it. Now, what do I mean by that? It doesn't have a futures market yet. It doesn't have exchange-traded funds (ETFs). Yet diamonds have outperformed gold, platinum, and palladium.

Now, what does this mean? It means we can bring transparency to the once opaque, have the ability to know whether an object has gone through an ethical chain, and then realize the true value of that asset. This process allows us to start to think about how new financial products can be formed around these assets.

We are hugely interested in rising asset classes beyond just the commodity section of the market. This platform shift is like going from the World Wide Web to the World Wide Ledger. Joe was absolutely correct when he mentioned that the Internet hasn't been woven for transactional trust -- but we have the ability to do this now.

So from a business perspective, you can begin to really innovate on top of this exponential set of technology stacks. A lot of companies quote Everledger as a Blockchain company. I have to correct them and I say that we are an emerging technology company. We use the very best of Blockchain and smart contracts, machine vision, sensorial data points, for us to be able to form the identity of objects.

Now, why is that important? Most financial services companies have really been focused on Know Your Customer (KYC), but we believe that it's Know Your Object (KYO) that really creates an entirely new context around it.

Now, that transformation and the relationship of the object have already started to move. When you think about the Internet of Things (IoT), mobile phones, and autonomous cars -- these are largely devices tied to the fabric of the web. But are they connected to the fabric of the transactions and the identity around those objects?

Insurance companies have begun to understand this. My work in the last 10 years has been deeply involved in insurance. As you begin to build and understand the chain of trust and the chain of risk, then tectonic plate shifts in financial services begin to unfold.

Apps and assets, on and off the chain

Fox: It’s not just about the chain, it's about the apps we build on top, and it's really about what is the value to the buyer and the seller as we build those apps on top.

To Leanne’s point, it’s first going to be about the object. The funny thing is we have struggled to be able to, in a digital way, provide visibility and control of an object and this is going to fix that. In the end, B2B, which is where SAP Ariba is, is about somebody getting something and paying for it. And that physical asset that they are getting is being paid for with another asset. They are just two different forms. By digitizing both and keeping that in a ledger that really cannot be altered -- it will be the truth, but it's open to everyone, buyers and sellers.

Businesses will have to invent ways to control how frictionless this is going to be. I will give you a perfect example. In the past if I told you I could do an international payment of $1 million to somebody in two minutes, you would have told me I was crazy. With Blockchain, one corporation can pay another corporation $1 million in two minutes, internationally.

And on the chain, companies like Everledger can build capabilities that do the currency translation on the fly, as it's passing through -- and that doesn't disintermediate the banks, because how did the $1 million get onto the chain in the first place? Someone put it on the chain through a bank. The bank is backing that digital version. How does it get off the chain so you can actually do something with it? It goes through another bank. It's actually going to make the banks more important. Again, Blockchain is only as good as the external trust that it inherits.

I really think we have to focus on getting the chain out there and really building these applications on top.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

A Secure Root of Trust

I’m proud to be part of the “Chain of Trust” that begins here in Sunnyvale, California for the HPE Atalla HSM (Hardware Security Module). When I joined Tandem Computers 20 years ago, I was awed by what the Atalla security products provided our Payment Card Industry (PCI) customers. Since the mid-1970s, Atalla products have led the way in shaping the payments industry and ensuring the security of not only my personal banking transactions but also the way in which global trade is conducted to this very day. If you use an ATM (anywhere in the world), buy gas at the pump, or swipe your credit card at a store, the chances are highly likely that transaction was protected and authorized through an Atalla HSM appliance on the back end.