Inside story: How HP Inc. moved from a rigid legacy to data center transformation

A discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time.

Dark side of cloud—How people and organizations are unable to adapt to improve the business

The next BriefingsDirect cloud deployment strategies interview explores how public cloud adoption is not reaching its potential due to outdated behaviors and persistent dissonance between what businesses can do and will do with cloud strengths.

Many of our ongoing hybrid IT and cloud computing discussions focus on infrastructure trends that support the evolving hybrid IT continuum. Today’s focus shifts to behavior -- how individuals and groups, both large and small, benefit from cloud adoption. 

It turns out that a dark side to cloud points to a lackluster business outcome trend. A large part of the disappointment has to do with outdated behaviors and persistent dissonance between what line of business (LOB) practitioners can do and will do with their newfound cloud strengths. 

We’ll now hear from an observer of worldwide cloud adoption patterns on why making cloud models a meaningful business benefit rests more with adjusting the wetware than any other variable.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore why cloud failures and cost overruns are dogging many enterprises is Robert Christiansen, Vice President, Global Delivery, Cloud Professional Services and Innovation at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening now with the adoption of cloud that makes the issue of how people react such a pressing concern? What’s bringing this to a head now?

Christiansen

Christiansen: Enterprises are on a cloud journey. They have begun their investment, they recognize that agility is a mandate for them, and they want to get those teams rolling. They have already done that to some degree. They may be moving a few applications, or they may be doing wholesale shutdowns of data centers. They are in lots of different phases of adoption.

What we are seeing is a lack of progress with regard to the speed and momentum of the adoption of applications into public clouds. It’s going a little slower than they’d like.

Gardner: We have been through many evolutions, generations, and even step-changes in technology. Most of them have been in a progressive direction. Why are we catching our heels now?

Christiansen: Cloud is a completely different modality, Dana. One of the things that we have learned here is that adoption of infrastructure that can be built from the ground-up using software is a whole other way of thinking that has never really been the core bread-and-butter of an infrastructure or a central IT team. So, the thinking and the process -- the ability to change things on the fly from an infrastructure point of view -- is just a brand new way of doing things. 

And we have had various fits and starts around technology adoption throughout history, but nothing at this level. The tool kits available today have completely changed and redefined how we go about doing this stuff.

Gardner: We are not just changing a deployment pattern, we are reinventing the concept of an application. Instead of monolithic applications and systems of record that people get trained on and line up around, we are decomposing processes into services that require working across organizational boundaries. The users can also access data and insights in ways they never had before. So that really is something quite different. Even the concept of an application is up for grabs.

Christiansen: Well, think about this. Historically, an application team or a business unit, let’s say in a bank, said, “Hey, I see an opportunity to reinvent how we do funding for auto loans.”

We worked with a company that did this. And historically, they would have had to jump through a bunch of hoops. They would have to justify the investment of buying new infrastructure, set up the various components necessary, maybe land new hardware in the organization, and go through the procurement process for all of that. Typically, in the financial world, it takes months to make that happen.

Today, that same team using a very small investment can stand up a highly available redundant data center in less than a day on a public cloud. In less than a day, using a software-defined framework. And now they can go iterate and test and have very low risk to see if the marketplace is willing to accept the kind of solution they want to offer.
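To make that concrete, here is a minimal sketch of what a software-defined, highly available setup can look like when scripted against a public cloud. It assumes Python with the boto3 SDK and existing AWS credentials; the stack names, subnets, and launch template are hypothetical placeholders, not the bank team's actual environment.

import boto3

# Hypothetical names and IDs -- illustrative only.
autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Web tier: an Auto Scaling group spread across two Availability Zones.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="loan-poc-web",
    LaunchTemplate={"LaunchTemplateName": "loan-poc-web", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # subnets in two different AZs
)

# Data tier: a managed database with a synchronous standby in a second AZ.
rds.create_db_instance(
    DBInstanceIdentifier="loan-poc-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="poc_admin",
    MasterUserPassword="change-me-immediately",  # placeholder; use a secrets store in practice
    MultiAZ=True,
)

A few dozen lines like these, or the equivalent template, replace what used to be months of procurement and rack-and-stack work.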

And that just blows apart the procedural-based thinking that we have had up to this point; it just blows it apart. And that thinking, that way of looking at stuff is foreign to most central IT people. Because of that emotion, going to the cloud has come in fits and starts. Some people are doing it really well, but a majority of them are struggling because of the people issue.

Gardner: It seems ironic, Robert, because typically when you run into too much of a good thing, you slap on governance and put in central command and control, and you throttle it back. But that approach subverts the benefits, too.

How do you find a happy medium? Or is there such a thing as a happy medium when it comes to moderating and governing cloud adoption?

Control issues

Christiansen: That’s where the real rub is, Dana. Let’s give it an analogy. At Cloud Technology Partners (CTP), we do cloud adoption workshops where we bring in all the various teams and try to knock down the silos. They get into these conversations to address exactly what you just said. “How do we put governance in place without getting in the way of innovation?”

It’s a huge, huge problem, because the central IT team’s whole job is to protect the brand of the company and keep the client data safe. They provide the infrastructure necessary for the teams to go out and do what they need to do.

When you have a structure like that, but supplied by public clouds like Amazon Web Services (AWS), Google, and Microsoft Azure, you still have the ability to put a lot of those controls in the software. Before, it was done either manually or at least semi-manually.

The central IT team's whole job is to protect the brand of the company and keep the client data safe. They provide the infrastructure necessary for the teams to go out and do what they need to do.

The challenge is that the central IT teams are not necessarily set up with the skills to make that happen. They are not by nature software development people. They are hardware people. They are rack and stack people. They are people who understand how to stitch this stuff together -- and they may use some automation. But as a whole it’s never been their core competency. So therein lies the rub: How do you convert these teams over to think in that new way?

At the same time, you have the pressing issue of, “Am I going to automate myself right out of a job?” That’s the other part, right? That’s the big, 800-pound gorilla sitting in the corner that no one wants to talk about. How do you deal with that?

Gardner: Are we talking about private cloud, public cloud, hybrid cloud, hybrid IT -- all the above when it comes to these trends?

Public perceptions 

Christiansen: It’s mostly with the public cloud that you see the perceived threats. The public cloud is perceived as a threat to the current way of doing IT today, if you are an internal IT person.

Let’s say that you are a classic compute and management person. You actually split across both storage and compute, and you are able to manage and handle a lot of those infrastructure servers and storage solutions for your organization. You may be part of a team of 50 in a data center or for a couple of data centers. Many of those classic roles literally go away with a public cloud implementation. You just don’t need them. So these folks need to pivot or change into new roles or reinvent themselves.

Let’s say you’re the director of that group and you happen to be five years away from retirement. This actually happened to me, by the way. There is no way these folks want to give up the reins right before their retirement. They don’t want to reinvent their roles just before they’re going to go into their last years.

They literally said to me, “I am not changing my career this far into it for the sake of a public cloud reinvention.” They are hunkering down, building up the walls, and slowing the process. This seems to be an undercurrent in a number of areas where people just don’t want to change. They don’t want any differences.

Gardner: Just to play the devil’s advocate, when you hear things around serverless, when we see more operations automation, when we see AIOps using artificial intelligence (AI) and machine learning (ML) -- it does get sort of scary.

You’re handing over big decisions within an IT environment on whether to use public or private cloud, some combination of the two, or multicloud. These capabilities are coming to fruition.

Maybe we do need to step back and ask, “Just because you can do something, should you?” Isn’t that more than just protecting my career? Isn’t there a need for careful consideration before we leap into some of these major new trends?

Transform fear into function 

Christiansen: Of course, yeah. It’s a hybrid world. There are applications where it may not make sense to be in the public cloud. There are legacy applications. There are what I call centers of gravity that are database-centric; the business runs on them. Moving them and doing a big lift over to a public cloud platform may not make financial sense. There is no real benefit to it to make that happen. We are going to be living between an on-premises and a public cloud environment for quite some time. 

The challenge is that people want to create a holistic view of all of that. How do I govern it in one view and under one strategy? And that requires a lot of what you are talking about, being more cautious going forward.

And that’s a big part of what we have done at CTP. We help people establish that governance framework, of how to put automation in place to pull these two worlds together, and to make it more seamless. How do you network between the two environments? How do you create low-latency communications between your sources of data and your sources of truth? Making that happen is what we have been doing for the last five or six years.

We help establish that governance framework, of how to put automation in place to pull these two worlds together, and to make it more seamless. 

The challenge we have, Dana, is that once we have established that -- we call that methodology the Minimum Viable Cloud (MVC) -- and after you put all of that structure, rigor, and security in place, we still run into the problems of motion and momentum. Those needed governance frameworks are well-established, yet progress still stalls.

Gardner: Before we dig into why the cloud adoption inertia still exists, let’s hear more about CTP. You were acquired by HPE not that long ago. Tell us about your role and how that fits into HPE.

CTP: A cloud pioneer

Christiansen: CTP was established in 2010. Originally, we were doing mostly private cloud, OpenStack stuff, and we did that for about two to three years, up to 2013.

I am one of the first 20 employees. It’s a Boston-based company, and I came over with the intent to bring more public cloud into the practice. We were seeing a lot of uptick at the time. I had just come out of another company called Cloud Nation that I owned. I sold that company; it was an Amazon-based, Citrix-for-rent company. So imagine, if you would, you swipe a credit card and you get NetScaler, XenApp and XenDesktop running on top of AWS way back in 2012 and 2013. 

I sold that company, and I joined CTP. We grew the practice of public cloud on Google, Azure, and AWS over those years and we became the leading cloud-enabled professional services organization in the world.

We were purchased by HPE in October 2017, and my role since that time is to educate, evangelize, and press deeply into the methodologies for adopting public cloud in a holistic way so it works well with what people have on-premises. That includes the technologies, economics, strategies, organizational change, people, security, and establishing a DevOps practice in the organization. These are all within our world.

We do consultancy and professional services advisory types of things, but then we flip the coin over, and we have a very large group of engineers and architects who are excellent on keyboards. These are the people who actually write software code to help make a lot of this stuff automated to move people to the public clouds. That’s what we are doing to this day.

Gardner: We recognize that cloud adoption is a step-change, not an iteration in the evolution of computing. This is not going from client/server to web apps and then to N-Tier architectures. We are bringing services and processes into a company in a whole new way and refactoring that company. If you don’t, the competition or a new upstart unicorn company is going to eat your lunch. We certainly have seen plenty of examples of that. 

So what prevents organizations from both seeing and realizing the cloud potential? Is this a matter of skills? Is it because everyone is on the cusp of retirement and politically holding back? What can we identify as the obstacles to overcome to break that inertia?

A whole new ball game

Christiansen: From my perspective, we are right in the thick of it. CTP has been involved with many Fortune 500 companies through this process.

The technology is ubiquitous, meaning that everybody in the marketplace now can own pretty much the same technology. Dana, this is a really interesting thought. If a team of 10 Stanford graduates can start up a company to disrupt the rental car industry, which somebody has done, by the way, and they have access to technologies that were only once reserved for those with hundreds of millions of dollars in IT budgets, you have all sorts of other issues to deal with, right?

So what’s your competitive advantage? It’s not access to the technologies. The true competitive advantage now for any company is the people and how they consume and use the technology to solve a problem. Before [the IT advantage] was reserved for those who had access to the technology. That’s gone away. We now have a level playing field. Anybody with a credit card can spin up a big data solution today – anybody. And that’s amazing, that’s truly amazing.

For an organization that had always fallen back on their big iron or infrastructure -- those processes they had as their competitive advantage -- that now has become a detriment. That’s now the thing that’s slowing them down. It’s the anchor holding them back, and the processes around it. That rigidity of people and process locks them into doing the same thing over and over again. It is a serious obstacle. 

Untangle spaghetti systems 

Another major issue came very much as a surprise, Dana. We observed it over the last couple of years of doing application inventory assessments for people considering shutting down data centers. They were looking at their applications -- the assets held in those data centers -- as not competitive. And they asked, “Hey, can we shut down a data center and move a lot of it to the public cloud?”

We at CTP were hired to do what are called application assessments, economic evaluations. We determine if there is a cost validation for doing a lift-and-shift [to the public cloud]. And the number-one obstacle was inventory. The configuration management databases (CMDBs), which hold the inventory of where all the servers are and what’s running on them for these organizations, were wholly out of date. Many of the CMDBs just didn’t give us an accurate view of it all.

When it came time to understand what applications were actually running inside the four walls of the data centers -- nobody really knew. As a matter of fact, nobody really knew what applications were talking to what applications, or how much data was being moved back and forth. They were so complex; we would be talking about hundreds, if not thousands, of applications intertwined with themselves, sharing data back and forth. And nobody inside organizations understood which applications were connected to which, how many there were, which ones were important, and how they worked.

When it came time to understand what applications were actually running inside of the four walls of the data centers -- no one really knew. Nobody knew what applications were talking to what applications, or how much data was being moved back and forth.

Years of managing that world has created such a spaghetti mess behind those walls that it’s been exceptionally difficult for organizations to get their hands around what can be moved and what can’t. There is a great deal of integration within those systems.

The third part of this trifecta of obstacles to moving to the cloud is, as we mentioned, people not wanting to change their behaviors. They are locked in to the day-to-day motion of maintaining those systems and are not really motivated to go beyond that.

Gardner: I can see why they would find lots of reasons to push off to another day, rather than get into solving that spaghetti maze of existing data centers. That’s hard work, it’s very difficult to synthesize that all into new apps and services.

Christiansen: It was hard enough just virtualizing these systems, never mind trying to pull it all apart.

Gardner: Virtualizing didn’t solve the larger problem, it just paved the cow paths, gained some efficiency, reduced poor server utilization -- but you still have that spaghetti, you still have those processes that can’t be lifted out. And if you can’t do that, then you are stuck.

Christiansen: Exactly right.

Gardner: Companies for many years have faced other issues of entrenchment and incumbency, which can have many downsides. Many of them have said, “Okay, we are going to create a Skunk Works, a new division within the company, and create a seed organization to reinvent ourselves.” And maybe they begin subsuming other elements of the older company along the way.

Is that what the cloud and public cloud utilization within IT is doing? Why wouldn’t that proof of concept (POC) and Skunk Works approach eventually overcome the digital transformation inertia?

Clandestine cloud strategists

Christiansen: That’s a great question, and I immediately thought of a client who we helped. They have a separate team that re-wrote or rebuilt an application using serverless on Amazon. It’s now a fairly significant revenue generator for them, and they did it almost two and-a-half years ago.

It uses a few cloud servers, but mostly they rely on the messaging backbones and non-server-based platform-as-a-service (PaaS) layers of AWS to solve their problem. They are a consumer credit company and have a lot of customer-facing applications that they generate revenue from on this new platform.

The team behind the solution educated themselves. They were forward-thinkers and saw the changes in public cloud. They received permission from the business unit to break away from the central IT team’s standard processes, and they completely redefined the whole thing.

The team really knocked it out of the park. So, high success. They were able to hold it up and tried to extend that success back into the broader IT group. The IT group, on the other hand, felt that they wanted more of a multicloud strategy. They weren’t going to have all their eggs in Amazon. They wanted to give the business units options, of either going to Amazon, Azure, or Google. They wanted to still have a uniform plane of compute for on-premises deployments. So they brought in Red Hat’s OpenShift, and they overlaid that, and built out a [hybrid cloud] platform.

Now, I personally had no direct experience with the Red Hat platform, but I had heard good things about it. I had heard of people who adopted it and saw benefits. In this particular environment, though, Dana, the business units themselves rejected it.

The core Amazon team said, “We are not doing that because we’re skilled in Amazon. We understand it, we’re using AWS CloudFormation. We are going to write code to the applications, we are going to use Lambda whenever we can.” They said, “No, we are not doing that [hybrid and multicloud platform approach].”

Other groups then said, “Hey, we’re an Azure shop, and we’re not going to be tied up around Amazon because we don’t like the Amazon brand.” And all that political stuff arose; they just used Azure, decided to go shooting off on their own, and did not use the OpenShift platform because, at the time, the tool stacks were not quite what they needed to solve their problems.

The company ended up getting a fractured view. We recommended that they go on an education path, to bring the people up to speed on what OpenShift could do for them. Unfortunately, they opted not to do that -- and they are still wrestling with this problem.

CTP and I personally believe that this was an issue of education, not technology, and not opportunity. They needed to lean in, sponsor, and train their business units. They needed to teach the app builders and the app owners on why this was good, the advantages of doing it, but they never invested the time. They built it and hoped that the users would come. And now they are dealing with the challenges of the blowback from that.

Gardner: What you’re describing, Robert, sounds an awful lot like basic human nature, particularly with people in different or large groups. So, politics, right? The conundrum is that when you have a small group of people, you can often get them on board. But there is a certain cut-off point where the groups are too large, and you lose control, you lose synergy, and there is no common philosophy. It’s Balkanization; it’s Europe in 1916.

Christiansen: Yeah, that is exactly it.

Gardner: Very difficult hurdles. These are problems that humankind has been dealing with for tens of thousands of years, if not longer. So, tribalism, politics. How does a fleet organization learn from what software development has come up with to combat some of these political issues? I’m thinking of Agile methodologies, scrums, and having short bursts, lots of communication, and horizontal rather than command-and-control structures. Those sorts of things.

Find common ground first

Christiansen: Well, you nailed it. How you get this done is the question. How do you get some kind of agility throughout the organization to make this happen? And there are successes out there, whole organizations, 4,000 or 5,000 or 6,000 people, have been able to move. And we’ve been involved with them. The best practices that we see today, Dana, are around allowing the businesses themselves to select the platforms to go deep on, to get good at.

Let’s say you have a business unit generating $300 million a year with some service. They have money, they are paying the IT bill. But they want more control, they want more of the “dev” from the DevOps process.

The best practices that we see today are around allowing the businesses themselves to select the cloud platforms to go deep on, to get good at. ... They want the "dev" from the DevOps process.

They are going to provide much of that on their own, but they still need core common services from the central IT team. This is the most important part. They need the core services, such as identity and access management, key management, logging and monitoring, and they need networking. There is a set of core functions that the central team must provide.

And we help those central teams to find and govern those services. Then, the business units [have cloud model choice and freedom as long as they] consume those core services -- the access and identity process, the key management services, they encrypt what they are supposed to, and they use the networking functions. They set up separation of the services appropriately, based on standards. And they use automation to keep them safe. Automation prevents them from doing silly things, like leaving unencrypted AWS S3 buckets open to the public Internet, things like that.
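As an illustration of that kind of automated guardrail, the sketch below flags S3 buckets that lack default encryption or a bucket-level public access block. It is a simplified example assuming Python with the boto3 SDK and read permissions on the account's buckets -- not CTP's or HPE's actual tooling.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def audit_buckets():
    """Flag buckets that lack default encryption or allow public access."""
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # Is default (server-side) encryption configured on the bucket?
        try:
            s3.get_bucket_encryption(Bucket=name)
            encrypted = True
        except ClientError as err:
            encrypted = err.response["Error"]["Code"] != "ServerSideEncryptionConfigurationNotFoundError"
        # Is a bucket-level public access block in place?
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            blocked = all(cfg.values())
        except ClientError:
            blocked = False  # no public access block configured for this bucket
        if not encrypted or not blocked:
            print(f"REVIEW {name}: encrypted={encrypted}, public_access_blocked={blocked}")

if __name__ == "__main__":
    audit_buckets()

In practice, teams run checks like this on a schedule, or lean on managed policy services, so the playground stays safe without slowing anyone down.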

You now have software that does all of that automation. You can turn those tools on and then it’s like a playground, a protected playground. You say, “Hey, you can come out into this playground and do whatever you want, whether it’s on Azure or Google, or on Amazon or on-premises.”

 “Here are the services, and if you adopt them in this way, then you, as the team, can go deep. You can use application programming interface (API) calls, you can use CloudFormation or Python or whatever happens to be the scripting language you want to build your infrastructure with.”
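For instance, a business-unit team working inside that protected playground might drive its own infrastructure entirely through scripted API calls. The sketch below shows the general shape of that workflow, assuming Python with the boto3 SDK, a local CloudFormation template file, and illustrative stack and tag names.

import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template describing the team's own stack (network, queues, functions, and so on).
with open("auto-loans-stack.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="auto-loans-dev",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    # Tags the central team could require so billing, logging, and ownership stay traceable.
    Tags=[
        {"Key": "business-unit", "Value": "auto-loans"},
        {"Key": "environment", "Value": "dev"},
    ],
)

# Block until the stack is fully created before the team starts deploying application code onto it.
cfn.get_waiter("stack_create_complete").wait(StackName="auto-loans-dev")

The team goes deep on the tooling it knows, while the core services -- identity, key management, logging, networking -- come from the central team's standards.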

Then you have the ability to let those teams do what they want. If you notice, what it doesn’t do is overlay a common PaaS layer, which isolates the hyperscale public cloud provider from your work. That’s a whole other food fight, religious battle, Dana, around lock-in and that kind of conversation.

Gardner: Imposing your will on everyone else doesn’t seem to go over very well.

So what you’re describing, Robert, is a right-sizing for agility, and fostering a separate-but-equal approach. As long as you can abstract to the services level, and as long as you conform to a certain level of compliance for security and governance -- let’s see who can do it better. And let the best approach to cloud computing win, as long as your processes end up in the right governance mix.

Development power surges

Christiansen: People have preferences, right? Come on! There’s been a Linux and .NET battle since I have been in business. We all have preferences, right? So, how you go about coding your applications is really about what you like and what you don’t like. Developers are quirky people. I was a C programmer for 14 years, I get it.

The last thing you want to do is completely blow up your routines by taking development back and starting over with a whole bunch of new languages and tools. Then they’re trying to figure out how to release code, test code, and build up a continuous integration/continuous delivery pipeline that is familiar and fast.

These are really powerful personal stories that have to be addressed. You have to understand that. You have to understand that the development community now has the power -- they have the power, not the central IT teams. That shift has occurred. That power shift is monumental across the ecosystem. You have to pay attention to that.

If the people don’t feel like they have a choice, they will go around you, which is where the problems are happening.

Gardner: I think the power has always been there with the developers inside of their organizations. But now it’s blown out of the development organization and has seeped up right into the line of business units.

Christiansen: Oh, that’s a good point.

Gardner: Your business strategy needs to consider all the software development issues, and not just leave them under the covers. We’re probably saying the same thing. I just see the power of development choice expanding, but I think it’s always been there.

But that leads to the question, Robert, of what kind of leadership person can be mindful of a development culture in an organization, and also understand the line of business concerns. They must appreciate the C-suite strategies. If you are a public company, that means keeping Wall Street happy, and keeping customer expectations met, because those are always going up nowadays.

It seems to me we are asking an awful lot of a person or small team that sits at the middle of all of this. It seems to me that there’s an organizational and a talent management deficit, or at least something that’s unprecedented.

Tech-business cross-pollination

Christiansen: It is. It really is. And this brings us to a key piece to our conversation. And that is the talent enablement. It is now well beyond how we’ve classically looked at it.

Some really good friends of mine run learning and development organizations and they have consulting companies that do talent and organizational change, et cetera. And they are literally baffled right now at the dramatic shift in what it takes to get teams to work together.

In the more flexible-thinking communities of up-and-coming business, a lot of the folks that start businesses today are technology people. They may end up in the coffee industry or in the restaurant industry, but these folks know technology. They are not unaware of what they need to do to use technology.

So, business knowledge and technology knowledge are mixing together. They are good when they get swirled together. You can’t live with one and not have the other.

For example, a developer needs to understand the implications of economics when they write something for cloud deployment. If they build an application that does not economically work inside the constructs of the new world, that’s a bad business decision, but it’s in the hands of the developer.

It’s an interesting thing. We’ve had that need for developer-empowerment before, but then you had a whole other IT group put restrictions on them, right? They’d say, “Hey, there’s only so much hardware you get. That’s it. Make it work.” That’s not the case anymore, right?

We have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud. 

At the same time, you now have an operations person involved with figuring out how to architect for the cloud, and they may think that the developers do not understand what has to come together.

As a result, we have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud.

We have found that much of an organization’s delay in rolling this out is because the people who are consuming the cloud are not ready or knowledgeable enough on how to maximize their investment in cloud. This is not for the people building up those core services that I talked about, but for the consumers of the services, the business units.

We are rolling that out later this year, a full Talent Enablement track around those new roles.

Gardner: This targets the people in that line of business, decision-making, planning, and execution role. It brings them up to speed on what cloud really means, how to consume it. They can then be in a position of bringing teams together in ways that hadn’t been possible before. Is that what you are getting at?

Teamwork wins 

Christiansen: That’s exactly right. Let me give you an example. We did this for a telecommunications company about a year ago. They recognized that they were not going to be able to roll out their common core services.

The central team had built out about 12 common core services, and they knew almost immediately that the rest of the organization, the 11 other lines of business, were not ready to consume them.

They had been asking for it, but they weren’t ready to actually drive this new Ferrari that they had asked for. There were more than 5,000 people who needed to be up-skilled on how to consume the services that a team of about 100 people had put together.

Now, these are not classic technical services like AWS architecture, security frameworks, or access control lists (ACLs) and network ACLs (NACLs) for networking traffic, or how you connect back and backhaul, that kind of stuff. None of that.

I’m talking about how to make sure you don’t get a cloud bill that’s out of whack. How do I make sure that my team is actually developing in the right way, in a safe way? How do I make sure my team understands the services we want them to consume so that we can support it?

It was probably 10 or 12 basic use domains. The teams simply didn’t understand how to consume the services. So we helped this organization build a training program to bring up the skills of these 4,000 to 5,000 people.

Now think about that. That has to happen in every global Fortune 2000 company, where you may only have a central team of 100, and maybe 50 cloud people. But they may need to turn over the services to 1,000 people.

We have a massive, massive, training, up-skilling, and enablement process that has to happen over the next several years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Ask the right questions about your SAP HANA migration infrastructure partner

Whether you’ve just started your SAP HANA journey or have a HANA environment to refresh or expand, you have many infrastructure options to choose from. Know what questions to ask as you evaluate your choices.

Migrating to SAP HANA is one of the most significant and transformational events your organization will undertake. That’s why you want to minimize business risk in all aspects of the project—including your choice of infrastructure partner. Given the importance of this decision, you need to make sure you carefully evaluate all the options and ask the right questions during the selection process.

The Open Group panel explores ways to help smart cities initiatives overcome public sector obstacles

Credit: Wikimedia Commons

The next BriefingsDirect thought leadership panel discussion focuses on how The Open Group is spearheading ways to make smart cities initiatives more effective.

Many of the latest technologies -- such as Internet of Things (IoT) platforms, big data analytics, and cloud computing -- are making data-driven and efficiency-focused digital transformation more powerful. But exploiting these advances to improve municipal services for cities and urban government agencies faces unique obstacles. Challenges range from a lack of common data sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public sector budgets.

The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.

The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?

Harding

Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens -- and money is just a means to achieving that end.

This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.

Bottom-up blueprint approach

Brancato: Within big companies we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. This CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to affect strategy. And what an organization will do is then top-down. 

In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.

There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.

Saha

Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.

This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help. 

Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.

These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.

Sunderland: I agree with everything that’s been said so far. In the particular case of New York City -- and with a lot of cities in the US -- cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set. 

The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive. 

Create demand to drive demand

Credit: Wikimedia Commons

Driving greater continuity can nonetheless come by creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy which was, you should implement change that can’t be undone. 

If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need. 

In like fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract -- without somehow having demand developed by the user base -- is pretty difficult.

By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector. 

The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand. 

Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications? 

Lisdorf

Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.

This is a challenge for any Enterprise Architecture (EA) because you can’t work from the top-down, you can’t specify your architecture roadmap. You have to pick the ways that it’s convenient to do a project that fit into your larger picture, and so on. 

It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.

Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives? 

Framework for a common foundation 

Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously -- and I suspect that this is actually a problem that cities share with corporate enterprises. 

Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains -- and they need to share data. What’s needed is some kind of framework to allow data sharing to happen. 

This framework has to be at two levels. It has to be at a policy level -- and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.

Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things? 

Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Lisdorf mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.

Some cities set up urban labs as a proof of concept. You can make assessment on how the demand and supply are aligned. 

One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.

Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.

The fundamental principle here for the citizens of the city is that there is no wrong door, he or she can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority -- not a specific agency or department of the city.

Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services. 

Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?

Public- and private-sector parity

Brancato

Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?

This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to exercise their displeasure by their vote, or by moving to some other place, and is then no longer working or living there. 

Traceability is also important. If I use some service, it’s then traceable to some city strategy, it’s traceable to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.

So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.

Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public -- and if we know a common data framework is important -- why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?

Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.

In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn’t exist [in the public sector]. In New York City, you have 100 different entities, 50 to 60 of them are agencies providing services. They have built vast legacy IT systems that don’t interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward. 

Sunderland

The idea of adopting standards and frameworks is one approach. The idea is you will then grow from there. The idea of creating a law that tries to implement uniformity -- like an Amazon or Facebook can -- would be doomed to failure, because nobody could actually afford to implement it.

Since you can’t do top-down solutions -- even if you pass a law -- the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up -- as opposed to the top-down.

Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?

Not something to legislate

Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful. 

The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”

That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management. 

It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.

A prime example is the Internet. It grew organically when governments were trying to legislate a solution. That often falls on its face. Better to pick up on something that is working in practice. 

Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?

Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.

Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure budget (OPEX) and also gain capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that can be an added challenge?

How to pay for IT?

Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us -- and the vendors -- into a world of interoperability and a common data approach.

Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.

Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector. 

Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.

What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud approach because capital expenditures disappear when you buy services, instead of licenses, on the data center servers that you procure and own.

This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.

The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.

There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.

So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.

Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?

Cities in the cloud, but services grounded

Saha: Across the world, many governments -- not just local governments but even state and central governments -- are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.

It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.

Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.

Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?

Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are a lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.

One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.

There is still the perception that the cloud is considered something dangerous or not safe. Another view is that the cloud is a lot safer in terms of having resilient solutions and the data is safe.

This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.

Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.

Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or looking at DoDAF, in the defense sector, when they had similar problems, are there solutions there to learn from?

Smart city success strategies

Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities. 

One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling a change management across the whole ecosystem.

At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.

Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.

Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?

Collective, crystal-clear standards 

Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.

Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.

Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?

Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DoDAF, MODAF, and TOGAF.

Data catalogs provide proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and are traceable up to the strategy.

We have a really nice collection of patterns that recognize that the data is the mechanism that ties it together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.

The notion of data catalogs, which are the children of these service catalogs, provides the proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and then are traceable up to the strategy.

I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.

Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?

Use-case consumer value

Sunderland: I like the idea of reference architectures attached to use cases because -- for better or worse -- when folks engage around these issues -- even in large entities like New York City -- they are going to be engaging for specific needs.

Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.

Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.

When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?

Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad-hoc basis. One important way to get standardization enforced -- or at least considered for new implementations -- is to integrate the effort as a necessary step in the established procurement and security governance processes.

Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.

This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.

Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?

Prioritize policy and people 

Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.

The development of policy around data sharing -- specifically around personal data sharing -- has to come first, because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something that cities are particularly concerned with because they hold a lot of data about their citizens.

Gardner: Dr. Saha, same question to you. 

Saha: I look at it in two ways. One is for cities to adopt smart city approaches by identifying very-high-demand use cases that pertain to the environment, mobility, the economy, or health -- or whatever the priority is for that city.

Identifying such high-demand use cases is important because the impact is directly seen by the people; the benefits of a smarter city need to be visible to the people using those services. That is number one.

The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.

It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.

Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?

City smarts and safety first 

Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.

In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.

In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.

Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such traceability and a model link those initiatives and strategies through to the service -- all the way down to the data, so that eventually data can be tied back to the roles.

For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. It is even compounded to the notion that the capabilities are done in the city by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traceable to these sensors. 

Gardner: Don Sunderland, what have you seen that works, and what should we be doing more of?

Mobile-app appeal

Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it ourselves; it’s there on many levels. A kind of guerrilla tactic would be to tap into that demand by creating location-aware applications -- mobile apps -- that are freely available to citizens.

The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are -- whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.

When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach to a location-aware app made available to the citizens is a way to drive more demand for even more people.
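
To make that idea a bit more concrete, here is a minimal Python sketch of the kind of location-aware lookup such an app could run against data a city already publishes. It is purely illustrative; the sample service list, coordinates, and search radius are assumptions, not an actual New York City dataset or application.

    from math import radians, sin, cos, asin, sqrt

    # Assumed sample of already-published city service locations (name, latitude, longitude).
    CITY_SERVICES = [
        ("Public library", 40.7532, -73.9822),
        ("Recycling drop-off", 40.7484, -73.9857),
        ("Health clinic", 40.7616, -73.9776),
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def nearby_services(user_lat, user_lon, radius_km=2.0):
        """Return services within radius_km of the user, nearest first."""
        hits = []
        for name, lat, lon in CITY_SERVICES:
            d = haversine_km(user_lat, user_lon, lat, lon)
            if d <= radius_km:
                hits.append((round(d, 2), name))
        return sorted(hits)

    # Example: a citizen checking what is nearby from midtown Manhattan.
    print(nearby_services(40.7536, -73.9832))

In a real app, the hard-coded list would be replaced by whatever open datasets the city already exposes, and additional layers of information could be added over time, as Sunderland suggests.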

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

The Open Group digital practitioner effort eases the people path to digital business transformation

Learn how The Open Group is ambitiously seeking to close the gap between IT education, business methods, and what it will take to truly succeed at work over the next decade. 

Better management of multicloud IaaS proves accelerant to developer productivity for European gaming leader Magellan Robotech

Learn how Magellan Robotech uses cloud management as a means to best access hybrid cloud services that rapidly bring new resources to developers.

How Norway’s Fatland beat back ransomware thanks to a rapid backup and recovery data protection stack approach

Learn how an integrated backup and recovery capability allowed production processing systems to be snapped back into use in only a few hours.

How hybrid cloud deployments gain traction via Equinix datacenter adjacency coupled with the Cloud28+ ecosystem

Learn how Equinix, Microsoft Azure Stack, and HPE’s Cloud28+ help MSPs and businesses alike obtain world-class hybrid cloud implementations.

Ryder Cup provides extreme use case for managing the digital edge for 250K mobile golf fans

A discussion on how the 2018 Ryder Cup golf match between European and US players places unique technical and campus requirements on its operators.

Is Multi-Cloud Sprawl Causing Your Money to Fly Away?

According to a recent Forrester Research survey of IT decision makers, two-thirds of those pursuing hybrid IT in their digital transformation quest did so without a comprehensive plan. The result? A chaotic hybrid cloud deployment model that can be costly – not only economically, but also in terms of agility and governance.

What causes this haphazard cloud use, and what new tools, processes, and methods are available to help IT leaders rein in their hybrid IT sprawl?

Dana Gardner, Principal Analyst at Interarbor Solutions, discusses these issues in a recent BriefingsDirect Voice of the Analyst podcast on hybrid IT management strategies. In this podcast, Gardner talks with Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy, to get his take on multi-cloud sprawl and what can be done to contain it.

Jerry-rigged, multi-cloud management tools aren’t working

Gardner and Dillingham describe how enterprises today are using at least one public cloud and many are using multiple public clouds—in addition to their private infrastructure. Although public cloud deployment has matured over the years, the typical enterprise doesn’t have the tools needed to understand the optimal cloud mix in terms of purchase and consumption options. When you combine that challenge with determining an accurate cost model for private infrastructure, the task can become overwhelming very quickly. 

The challenge is how to manage these infrastructures in terms of costs, security, and governance. Commonly available management tools only work on a cloud-by-cloud basis; a single tool that consolidates the management of all resources is hard to find. Although many organizations already have management tools for solving a variety of issues relating to heterogeneous systems, existing toolsets don’t extend to the public cloud.

The enterprise clearly needs better cloud management tools and services. These tools should encompass the entire hybrid infrastructure (aka hybrid estate) – from multiple off-premises, public clouds to numerous on-premises, private clouds. Until these tools or services are deployed, the problem of cloud sprawl will continue to grow.

What’s available now to better manage multi-cloud sprawl?

According to Dillingham, private infrastructure vendors are delivering new management capabilities, but actually managing clouds isn’t where most of them started. The rush to adopt public cloud – and the focus on agility over cost-efficiency – promoted a culture of visibility and reporting, but not governance. As a result, many of the available tools are better at delivering visibility than at management. Yet both visibility and governance are needed for enterprises to get the most out of their hybrid IT infrastructure.

A number of vendors are innovating in this space. Dillingham gives the example of HPE OneSphere from Hewlett Packard Enterprise (HPE). HPE OneSphere is a multi-cloud management solution that delivers visibility and governance capabilities along with the analytics enterprises need to make better cloud decisions.

Managed services are also starting to appear—the next logical step in helping the enterprise gain better management of their multi-cloud chaos. This type of service analyzes and optimizes the enterprise’s footprint across various cloud infrastructures on the basis of agility and cost comparisons.

Managing hybrid IT with tools or services?

Gardner wonders if enterprises should think of cloud management oversight and optimization as a set of services, rather than a product or a tool. He mentions HPE GreenLake Hybrid Cloud, a new service that delivers cloud-native operations, compliance, financial control, and more for public clouds. “Is that the way to go?” Gardner asks. “Should we think of cloud management oversight and optimization as a set of services, rather than a product or a tool? It seems to me that a set of services, with an ecosystem behind them, is pretty powerful.”

Dillingham explains that he believes in a three-layer approach. The first is the multi-cloud infrastructure management tool, whether it is consumed as software or as a service. The second is the professional consultative services around the tool, which help the enterprise take full advantage of it. And the third is a decision on whether you need an operational partner from a managed service provider perspective.

Dillingham explains that this is where “HPE is stepping up and saying we will handle all three of these. We will deliver your tools in various consumption models through a software-as-a-service (SaaS) delivery model with HPE OneSphere. And we will operate the services for you beyond that SaaS control portal – into your infrastructure management, across a hybrid footprint with the HPE GreenLake Hybrid Cloud offering. It is very compelling.”

Lots of moving parts. Choose carefully, with a long-term view in mind.

Gardner concludes the podcast by asking Dillingham what the end user needs to consider to be successful in a cloud-first organization. With so many moving parts, what things should be top of mind?

Dillingham explains that it’s a complex process, and the enterprise needs a plan that covers many aspects. That’s where a professional services partner can help walk you through the decision-making process, including where you want to be in three, five, or even 10 years. The most important aspect to consider, according to Dillingham, is the goal -- and it needs to be considered with a long-term view in mind.

To listen to the complete podcast, click here. To learn more from HPE about managing your multi-cloud environment, check out this link. Read more from Rhett Dillingham on controlling hybrid cloud costs in a recent Forbes article.

Chris

Follow HPE Composable Infrastructure

ABOUT THE AUTHOR

Chris Purcell

Composable Infrastructure, Integrated and Multi-Cloud management, Hyperconverged Infrastructure and Cloud

How the Internet of Things Is Cultivating a New Vision for Agriculture

ABOUT THE AUTHOR

IsaacRo

Technologist in the making and proud geek. I crave chaos from disruptive tech trends: #IoT #BigData #AI. Currently leading Digital Marketing and Events @HPE_IoT

To head off the threat of food shortages for a global population estimated to top 9 billion by 2050, the world’s agricultural output must double. That mandates innovation to improve monitoring of conditions in the field in order to reduce inputs while maximizing yield and nutritional value. It also means processing data from agricultural land, machines and facilities more efficiently to accelerate research.

These are ideal applications for IoT technologies and edge computing, which is why Hewlett Packard Enterprise is partnering with Purdue University, one of the world’s leading agricultural colleges, to create a new vision for farming and agricultural research in the 21st century. The partnership’s efforts attracted a lot of attention at HPE Discover Las Vegas in June. HPE’s Janice Zdankus, VP for Quality, and Purdue University Executive Sponsor, joined Patrick Smoker, Director and Department Head of Agriculture IT at Purdue, to talk about massive innovation to drive a smarter, more connected, more sustainable agriculture.

Watch the video to learn:

  • How edge computing powered by HPE Edgeline and connectivity tech from Aruba, an HPE company, capture terabytes of data from every inch of Purdue’s 1400-plus acre field research station
  • How intelligent edge technologies accelerate time-to-discovery for research teams
  • How the partners’ innovations will support economic development in Purdue’s home state of Indiana and around the world.

Patrick expanded on these comments in an interview with tech blogger Jake Ludington. How will IoT technologies – including wearables – improve the health and living conditions of livestock? How does the university’s research translate into entrepreneurial opportunities? Watch the video to find out.

Janice also talked with Jake in the interview below. Learn how the partnership with Purdue fits into the broader framework of HPE’s philanthropic efforts, and what comes next for the partners’ digital agriculture initiative.

The Intelligent Edge was one of the main themes at HPE Discover 2018. We announced new edge-to-cloud solutions that enable organizations to run unmodified enterprise-class applications and management software at the edge. Learn more in this post: Unleash the power of the cloud, right at your edge. The latest HPE Edgeline Systems capabilities.

Learn more about HPE Edgeline Converged Edge Systems here.

Featured Articles:

Intelligent IoT Powers Purdue’s Digital Agriculture Initiative for Food Security Worldwide

Purdue University partners with HPE and Aruba in digital-agriculture initiative to fight world hunger

HPE and Citrix team up to make hybrid cloud-enabled workspaces simpler to deploy

A discussion on how hyperconverged infrastructure and virtual desktop infrastructure are combining to make one of the more traditionally challenging workloads far easier to deploy, optimize, and operate.

Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture

A discussion on how Citrix and Hewlett Packard Enterprise are aligned to bring new capabilities to the coalescing architectures around data center core, hybrid cloud, and edge computing.

New strategies emerge to stem the costly downside of complex cloud choices

A discussion on what causes haphazard cloud use, and how new tools, processes, and methods are bringing actionable analysis to regain control over hybrid IT sprawl.

Keep the Party Going with a New IoT Security Solution Approach

Author: Ty Tobin, Product Marketing Professional

A wide variety of devices are being added to corporate networks in increasing numbers. IDC predicts that by 2020, over 30 billion IoT devices will be connected. A big concern is that most of the things in the Internet of Things are not designed with IT security in mind. IoT devices such as thermostats, copiers, cameras, sensors and the like are built to perform specific functions, and the ability to connect them to the Internet is an added feature. 

IoT opens up new threat vectors to information security.

In 2018, it was revealed that a Las Vegas casino had been breached via a digital thermometer in a fish tank in its lobby. A vulnerability in the Internet-connected thermometer gave attackers access to a PC, and then to the network, resulting in a high-roller database being exfiltrated.

While the attack surface is getting wider, traditional information security approaches do not protect against threats from IoT. Now there is an effective security solution combination that can be layered on to an existing network with IoT devices connected inside and outside of the firewall.

User and entity behavior analytics (UEBA) when combined with network policy enforcement software can watch for suspicious behavior, and kick off anyone or anything acting up. UEBA sets the baseline of normal behavior of both people and devices on a network. When unusual activity is detected, a higher score is assigned to that event. Thresholds can be set to send alerts above a certain score. Alerts can be sent to administrators as well as to network policy management software. The network access control (NAC) part of this solution can automatically terminate any session with a high score. It could also be set to reboot switches or isolate certain network segments.
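
As a rough illustration of that scoring-and-threshold pattern, the short Python sketch below baselines a device's normal traffic, scores a new observation by how far it deviates, and routes high-scoring events to an alert or quarantine step. It is a simplified, hypothetical example of the general UEBA/NAC flow described above, not the logic of any particular vendor product; the function names and threshold values are assumptions.

    from statistics import mean, stdev

    ALERT_THRESHOLD = 3.0        # deviation score that triggers an alert (assumed value)
    QUARANTINE_THRESHOLD = 6.0   # deviation score that triggers automatic session termination (assumed value)

    def risk_score(baseline_samples, observed):
        """Score how far an observation deviates from the device's normal behavior."""
        mu, sigma = mean(baseline_samples), stdev(baseline_samples)
        if sigma == 0:
            return 0.0
        return abs(observed - mu) / sigma

    def evaluate(device_id, baseline_samples, observed_kb_per_min):
        score = risk_score(baseline_samples, observed_kb_per_min)
        if score >= QUARANTINE_THRESHOLD:
            # In a real deployment, the NAC layer would terminate or isolate the session here.
            print(f"{device_id}: score {score:.1f} -- quarantine session")
        elif score >= ALERT_THRESHOLD:
            print(f"{device_id}: score {score:.1f} -- alert administrators")
        else:
            print(f"{device_id}: score {score:.1f} -- normal")

    # A thermostat that usually sends about 2 KB per minute suddenly pushes 500 KB per minute.
    evaluate("thermostat-17", [2.1, 1.9, 2.0, 2.2, 2.0], 500.0)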

A good analogy is to think of the network as a nightclub. A network policy enforcement engine is like the bouncer at the door, checking IDs and letting in those who meet the criteria. The UEBA is like the security guy inside the club. He is watching for any unacceptable behavior, like someone being too tipsy, people pushing and shoving, or outright fights. He could also be looking for things such as smoke rising from the end of a cigarette (in a no-smoking club) or a glint of light reflecting off a concealed weapon. Then the security guy calls in the bouncer, who throws the offenders out of the club.

Aruba, a Hewlett Packard Enterprise company, offers the leading network policy management software with Aruba ClearPass, which includes network access control. Aruba also acquired UEBA technology, now offered as Aruba IntroSpect. Together, ClearPass and IntroSpect can work like nightclub security on a network, but with automated responses that do not have to wait for a human to react.

Real-time detection using behavioral analytics together with automated policy enforcement can keep a network safe from both attackers and compromised IoT devices.

Huge waste in public cloud spend sets stage for next wave of total cloud governance solutions, says 451's Fellows

A discussion on how IT leaders face an increasingly complex mix of identifying and automating for both best performance and best price points across all of their cloud options.

Path to client workspace automation paved with hyperconverged infrastructure for New Jersey college

The next BriefingsDirect hyperconverged infrastructure (HCI) use case discussion explores how a New Jersey college has embarked on the time-saving, virtual desktop infrastructure (VDI) modernization journey.

We will now learn how the combination of HCI and VDI makes the task of deploying and maintaining the latest end-user devices far simpler -- and cheaper than ever before.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore how a new digital and data-driven culture can emerge from uniting the desktop edge with the hyper-efficient core are Tom Gillon, Director of Network and User Services at County College of Morris (CCM) in Randolph, New Jersey; Michael Gilchrist, Assistant Director of Network Systems at County College of Morris (CCM), and Felise Katz, CEO of PKA Technologies, Inc. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the trends driving your needs at County College of Morris to modernize and simplify your personal computer (PC) architecture?

Gillon: We need to be flexible and agile in terms of getting software to the students, when they need it, where they need it.

Gillon

With physical infrastructure that really isn’t possible. So we realized that VDI was the solution to meet our goals -- to get the software where the students need it, when they need it, and so that’s a top trend that got us to this point.

Gardner: And is the simplicity of VDI deployments something you are looking at universally, or is this more specific to just students?

Gillon: We are looking to deploy VDI all throughout the college: Faculty, staff, and students. We started out with a pilot of 300 units that we mostly put out in labs and in common areas for the students. But now we are replacing older PCs that the faculty and staff use as well.

Gardner: VDI has been around for a while, and for the first few years there was a lot of promise, but there was also some lag from complications in that certain apps and media wouldn’t run properly; there were network degradation issues. We’ve worked through a lot of that, but what are some of your top concerns, Michael, when it comes to some of those higher-order infrastructure performance issues that you have to conquer before you get to the proper payoff from VDI?

Gilchrist: You want to make sure that the user experience is the same as what they would experience on a physical device, otherwise they will not accept it.

Just having the horsepower -- nowadays these servers are so powerful, and now you can even get graphics processing units (GPUs) in there -- you can run stuff like AutoCAD or Adobe and still give the user the same experience that they would normally have on a physical device. That’s what we are finding. Pretty good so far.

Gardner: Felise, as a Hewlett Packard Enterprise (HPE) Platinum Partner, you have been through this journey before, so you know how it was rough-and-tumble there for a while with VDI. How has that changed from your perspective at PKA Technologies?

Katz: When HPE made the acquisition of SimpliVity, that was a defining moment and a game-changer, because it enabled us, as a solution provider, to bring the right technology to CCM. That was huge.

Gardner: When you’re starting out on an IT transition, you have to keep the wings on the airplane while you’re changing the engines, or vice versa. You have to keep things going while you are doing change. Tom, how did you manage that? How did you keep your students getting their apps? How have you been able to swap things out in a way that hasn’t been disruptive?

Gillon: The beauty of VDI is that we can switch out a lab completely with thin clients in about an hour. And we didn’t realize that going in. We thought it would take us most of the day. And then when we did it, we were like, “Oh my God, we are done.” We were able to go in there first thing in the morning and knock it out before the students even came in.

That really helped us to get these devices out to where the students need them and to not be disruptive to them.

Gardner: Tom, how did it work from your perspective in terms of an orderly process? How was the support from your partners like PKA? Do you get to the point where this becomes routine?

Gillon: PKA has the expertise in this area. We worked with them previously on an Aruba wireless network deployment project, and we knew that’s who we wanted to work with, because they were professional and thorough.

Moving to the thin client deployments, we contacted PKA and they put together a solution that worked well for us. We had not been aware of HPE SimpliVity. They determined that this would be the best path for us, and it turned out to be true. They came in and we worked with HPE, setting this up and deploying it. Michael did a lot of that work with HPE. It was very simple to do. We were surprised at how simple it was.

Academic pressure 

Gardner: Felise, as a solution partner that specializes in higher education, what’s different about working in a college campus environment compared with, say, a small- to medium-sized business (SMB) or another type of enterprise? Is there something specific about a college environment, such as the number of apps, or the need for certain people and groups in the college to have different roles and responsibilities? How did it shake out?

Katz: That’s an interesting question. As a solution provider, as an owner of a business, we always put our best foot forward. It really doesn’t matter whether it’s an academic institution or a commercial customer, it always has to be done in the right way.

Katz

As a matter of fact, in academics it’s even more profound, and a lot more pressured, because you are dealing with students, you are dealing with faculty, and you are dealing with IT staff. Once we are in a “go” mode, we are under a lot of pressure. We have a limited time span between semesters -- or vacations and holidays -- where we have to be around to help them to get it up and running.

We have to make sure that the customer is enabled. And with these guys at CCM, they were so fabulous to work with. They enabled us to help them to do more with less -- and that’s what the solution is all about. It’s all about simplification. It’s all about modernization. It’s all about being more efficient. And as Michael said so eloquently, it’s all about the experience for the students. That’s what we care about.

Gardner: Michael, where are you on your VDI-enablement journey? We heard that you want to go pervasively to VDI. What have you had to put in place -- in terms of servers in the HPE SimpliVity HCI case -- to make that happen?

Gilchrist: So far, we have six servers in total. Three servers in each of our two data centers that we have on campus, for high redundancy. That’s going to allow us to cover our initial pilot of 300 thin clients that we are putting out there.

As far as the performance of the system goes, we are not even scratching the surface in terms of the computing or RAM available for those first 300 endpoints.

When it comes to getting more thin clients, I think we’re going to be able to initially tack on more thin clients to the initial subset of six servers. And as we grow, the beauty of SimpliVity is that we just buy another server, rack it up, and bolt it in -- and that’s it. It’s just plug and play.

Gardner: In order to assess how well this solution is working, let’s learn more about CCM. It’s 50 years old. What’s this college all about?

Data-driven college transformation 

Gillon: We are located in North Central New Jersey. We have an enrollment of about 8,000 students per semester; that’s for credit. We also have a lot of non-credit students coming and going as well.

As you said, we are 50-years-old, and I’ve been there almost 23 years. I was the second person hired in the IT Department.

I have seen a lot come and go, and we actually just last year inaugurated our third college president, just three presidents in 50 years. It’s a very stable environment, and it’s really a great place to work.

Gardner: I understand that you have had with this newest leadership more of a technical and digital transformation focus. Tell us how the culture of the college has changed and how that may have impacted your leaping into some of the more modern infrastructure to support VDI.

Gillon: Our new president is very data-driven. He wants data on everything, and frankly we weren't in a position to provide that.

We also changed CIOs. Our new CIO came in about a year after the new president, and he also has a strong data background. He is more about data than technology. So, with that focus we really knew that we had to get systems in place that are capable of quick transitions, and this HCI system really did the job for us. We are looking to expand further beyond that.

Gardner: Felise, I have heard other people refer to hyperconverged infrastructure architectures like SimpliVity as a gift that keeps giving. Clearly the reason to get into this was to support the VDI, which is a difficult workload. But there are also other benefits.

The simplification from HCI has uncomplicated their capability for growth and for scale.

What have been some of the other benefits that you have been able to demonstrate to CCM that come with HCI? Is it the compression, the data storage savings, or a clear disaster recovery path that they hadn’t had before? What do you see as some of the ancillary benefits? 

Katz: It's all of the above. But to me -- and I think to both Tom and Michael -- it's really the simplification, because [HCI] has uncomplicated their capability for growth and for scale.

Look, they are in a very competitive business, okay -- attracting students, as Tom said. That's tough. That's where they have to make the difference, when that student arrives on campus with, I don't know, how many devices, right?

One student, five devices 

Gillon: It averages five now, I think.

Katz: Five devices that come on board. How do you contend with that, besides having this huge pipe for all the data and everything else that they have to enable? And then you have new ways of learning that everybody has to step up and enable. It's not just about a classroom; it’s a whole different world. And when you’re in a rural part of New Jersey, where you’re looking to attract students, you have to make sure you are at the top of your game.

Gardner: Expectations are higher than ever, and the younger people are even more demanding because they haven’t known anything else.

Katz: Yes, just think about their Xbox, their cell phones, and more devices. It's just a huge amount. And it's not only for them, it's also for your college staff.

Gardner: We can’t have a conversation about IT infrastructure without getting into the speeds and feeds a little bit. Tell us about your SimpliVity footprint, energy, maintenance, and operating costs. What has this brought to you at CCM? You have been doing this for 23 years, you know what a high-maintenance server can be like. How has this changed your perspective on keeping a full-fledged infrastructure up and running?

Ease into IT

Gillon: There are tremendous benefits, and we are seeing that. The six servers that we have put in are replacing a lot of other devices. If we had gone with a different solution, we would have had a rack full of servers to contend with. With this solution, we are putting three devices in each of our server rooms to handle the load of our initial 300 VDI deployments -- and hopefully more soon.

There are a lot of savings involved, such as power. A lot of our time is being saved because we are not a big shop. Besides Michael and myself, I have a network administrator, and another systems administrator -- that’s it, four people. We just don't have the time to do a lot of things we need to do -- and this system solves a lot of those issues.

Gilchrist: From a resources utilization standpoint, the deduplication and compression that the SimpliVity system provides is just insane. I am logically provisioning hundreds of terabytes of information in my VMware system -- and only using 1.5 terabytes physically. And just the backup and restore, it's kind of fire and forget. You put this stuff in place and it really does do what they say. You can restore large virtual machines (VMs) in about one or two seconds and then have it back up and running in case something goes haywire. It just makes my life a lot easier. 
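
As a back-of-the-envelope check on those figures, here is the arithmetic, assuming 200 TB as a stand-in for the "hundreds of terabytes" of logically provisioned data Gilchrist mentions (that specific number is an assumption for illustration only):

    logical_tb = 200.0   # assumed stand-in for "hundreds of terabytes" provisioned logically
    physical_tb = 1.5    # physical capacity actually consumed, as quoted above

    efficiency_ratio = logical_tb / physical_tb
    print(f"Effective data efficiency: about {efficiency_ratio:.0f}:1")  # roughly 133:1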

I’m no longer having to worry about, “Well, who is my backup vendor? Or who is my storage area network (SAN) vendor?” And then there’s trying to combine all of those systems into one. Well, HPE SimpliVity just takes care of all of that. It’s a one-stop shop; it’s a no-brainer.

Gardner: All in one, Felise, is that a fair characterization?

Katz: That is a very, very true assessment. My goal, my responsibility, is to bring forward the best solution for my customers, and having HPE in my corner with this is huge. It gives me the advantage to help my clients, and so we are able to put together a really great solution for CCM.

Gardner: There seems to be a natural progression with IT infrastructure adoption patterns. You move from bare metal to virtualization, then you move from virtualization to HCI, and then that puts you on a path to private cloud -- and then hybrid cloud. And in doing this modernization, you get used to the programmatic approach to infrastructure, so composable infrastructure becomes a natural next step.

Do you feel that this progression is helping you modernize your organization? And where might that lead to, Tom?

Gillon: I do. With the experience we are gaining with SimpliVity, we see that this can go well beyond VDI, and we are excited about that. We are getting to a point where our current infrastructure is getting a little long in the tooth. We need to make some decisions, and right now the two of us are like, this is the only decision we want to make. This is the way we are going to go.

Gardner: I have also read that VDI is like the New York of IT -- if you can do it there, you can do it anywhere. So what workloads do you have in mind next? Is it enterprise resource planning (ERP), is it business apps? What?

Gillon: All of the above. We are definitely looking to put some of our server loads into the VDI world, and just the benefits that SimpliVity gives to us in terms of business continuity and redundancy, it really is a no-brainer for us. 

And yes, ERP, we have our ERP system currently virtualized, and the way Michael has things set up now, it's going to be an easy transition for us when we get to that point. 

Gardner: We have talked a lot about the hardware, but we also have to factor in the software. You have been using the VMware Horizon approach to VDI and workspaces, and that’s great, but what about moving toward cloud?

Do you want to have more choice in your hypervisor? Does that set you on another path to make choices about private cloud? What comes next in terms of what you support on such a great HCI platform? 

A cloudy future?

Gillon: We have decisions to make when it comes to cloud. We are doing some things in the cloud now, but there are some things we don't want to do in the cloud. And HPE has a lot of solutions. 

We recently attended a discussion with the CEO of HPE [Antonio Neri] about where they are headed, and they say hybrid is the way to go. You are going to have some on-premises workloads, you are going to have some off-premises. And that's where we see CCM going as well.

Gardner: What advice would you give to other organizations that are maybe later in starting out with VDI? What might save them a step or two?

Get yourself a good partner because there are so many things that you don't know about these systems.

Gillon: First thing, get yourself a good partner because there are so many things that you don't know about these systems. And having a good partner like PKA, they brought a lot to the table. They could have easily provided a solution to us that was just a bunch of servers.

Gilchrist: Yes, they brought in the expertise. We didn’t know about SimpliVity, and once they showed us everything that it can do, we were skeptical. But it just does it. We are really happy with it, and I have to say, having a good partner is step number one.

Gardner: Felise, what recommendations do you have for organizations that are just now dipping their toe into workloads like VDI? What is it about HCI in particular that they should consider? 

Look to the future 

Katz: If they are looking for a flexible architecture, if they are looking for the agility to be able to make those moves down the road -- and that's where their minds are -- then they really have to do the due diligence, as Tom, Michael, and their team did. They were able to understand what their needs are and what the right requirements are for them -- not just for today but also going down the road to the future.

When you adopt a new architecture, you are displacing a lot of your older methodologies, too. It’s a different world, a hybrid world. You need to be able to move, and to move the workloads back and forth. 

It’s a great time right now. It's a great place to be because things are working, and they are clicking. We have the reference architectures available now to help, but it’s really first about doing their homework.

CCM is really a great team to work with. It's really a pleasure, and it’s a lot of fun. 

And I would be remiss not to say that I have a great team, from sales to technical: Strategic Account Manager Angie Moncada, Systems Engineer Patrick Shelley, and Vice President of Technology Russ Chow. They were just all-in with them. That makes a huge difference when you also connect with HPE on the right solutions. So that’s really been great.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

HPE Support Center: Empowering you with the support tools and information you need

From the moment we wake, we are inundated with information in countless formats. We might experience it via home systems, apps on our phones, personalized streaming content from smart TVs, and that’s before we even go to work. It can be a struggle not to get pulled down by the current of content. But all of this information has value only when it is useful. And more often than not, finding insights, or even an answer to a simple question, can be like finding a needle in a haystack.

What you need, when you need it

At HPE, we know that when it comes to something as important as keeping your IT assets up and running and at maximum performance, you need more than just the right information. You need it organized and delivered in a way that gets you the answers you need to quickly solve your issues and get back to focusing on your business.

That’s why we offer customers a full slate of options to access support information and tools at the HPE Support Center. If you haven’t visited the HPE Support Center in a while, it’s time to revisit. We’re constantly adding new capabilities to enhance your digital support experience, putting information in your hands that empowers you to self-solve your issues efficiently and make your job easier.

Support Favorites

One feature that many customers find hugely useful is Warranty Check. No need to pick up a phone to retrieve the warranty status on your HPE hardware. Simply enter your serial number and the country in which your product resides to retrieve detailed information on your warranty or support agreements.

Need to open a support case? The HPE Support Center makes it easy to submit and manage your cases online and get expert advice to resolve issues fast. Or you may prefer an immediate online chat with an HPE support expert. Whatever works best for you, we’ve got you covered.

If you’re looking for the latest in product documentation, need to update software or drivers, or want to access recent security bulletins or top issues resolutions, the Support Center is your hub for all of this information and more. 

Delivering information in the way that works for you is one of our goals in the Support Center. Visit the HPE Support Center today and watch for the frequent new capabilities rolling out that will continue to improve your support experience.

How HPE and Docker together accelerate and automate hybrid cloud adoption

The next BriefingsDirect hybrid cloud strategies discussion examines how the use of containers has moved from developer infatuation to mainstream enterprise adoption.

As part of the wave of interest in containerization technology, Docker, Inc. has emerged as a leader in the field and has greased the skids for management and ease of use.

Meanwhile, Hewlett Packard Enterprise (HPE) has embraced containers as a way to move beyond legacy virtualization and to provide both developers and IT operators more choice and efficiency as they seek new hybrid cloud deployment scenarios.

Like the proverbial chocolate and peanut butter coming together -- or as I like to say, with Docker and HPE, fish and chips -- the two make a highly productive alliance and cloud ecosystem tag team.

Here to describe exactly how the Docker and HPE alliance accelerates modern and agile hybrid architectures, we are joined by two executives, Betty Junod, Senior Director of Product and Partner Marketing at Docker, and Jeff Carlat, Senior Director of Global Alliances at HPE. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
 

Gardner: Jeff, how do containers -- and how does Docker specifically -- help data center architects achieve their goals?

Carlat: When you look at the advent of where technology has gone, through virtualization of applications, we are moving into a whole new era where we need much more agility in applications -- and in IT operations.

We believe that our modern infrastructure and our partnership with Docker -- specifically around containers and container orchestration -- provides businesses of all sizes a much lower acquisition cost for deploying infrastructure, as well as lower ongoing operating costs. And, of course, the game from a business standpoint is all about driving profitability and shareholder value.

Second, there is huge value when it comes to Docker and containers around extending the life of legacy applications. Modernizing traditional apps and being able to extend their life and bring them forward to a new modern architecture -- that drives greater efficiencies and lower risk.

Gardner: Betty, how do you see the alignment between HPE’s long-term vision for hybrid and edge-to-core computing and what Docker and containerization can do?

 

Align your apps

Betty Junod

Junod: It’s actually a wonderful alignment because what we look at from a Docker perspective is specifically at the application layer and bringing choice, agility, and security at the application layer in a way that can be married with what HPE is doing on the infrastructure layer across the hybrid cloud.

Our customers are saying, “We want to go to cloud, but we know the world is hybrid. We are going to be hybrid. So how do we do that in a way that doesn’t blow up all of our compliance if we make a change? Is this all for new apps? Or what do I do with all the stuff that I have accrued over the decades that’s eating into all of my budget?”

When it comes to transformation, it is not just an infrastructure story. It's not just an applications story. It's how do I use those two together in a way that's highly efficient and also very agile for managing the stuff I already have today. Can I make that cheaper, better, stronger -- and how do I enable the developers to build all the new services for the future that are going to provide more services, or better engage with my customers?

Gardner: How does DevOps, in particular, align? There is a lot of developer allegiance to the Docker value proposition. But IT operators are also very much interested in what HPE is bringing to market, such as better management, better efficiency, and automation.

How are your two companies an accelerant to DevOps?

 

The future is Agile 

Junod: DevOps is interesting in that it's a word that's been used a lot, along with Agile development. It all stems from the desire for companies to be faster, right? They want to be faster in everything -- faster in delivering new services, faster in time-to-market, as well as faster in responses so they can deliver the best service-level agreements (SLAs) to the customer. It’s very much about how application teams and infrastructure teams work together.

What's great is that Docker brings the ability for developers and operations teams to have a common language, to be able to do their own thing on their timelines without messing up the other side of the house. No more of that Waterfall. Developers can keep developing, shipping, and not break something that the infrastructure teams have set up, and vice versa.

No more of that Waterfall. Developers can keep developing and shipping, and not break something that the infrastructure teams have set up.

Carlat: Let’s be clear, the world is moving to Agile. I mean, companies are delivering continuous releases and ongoing builds. Those companies that can adopt and embrace that are going to get a leg up on their competition and provide better service levels. So the DevOps community and what we are doing is a perfect match. What Docker and HPE are delivering is ideal for both the Dev and the Ops environments.

Gardner: When you have the fungibility of moving workloads around, the operators benefit, because they finally gain more choice about what keeps the trains running on time, regardless of who is inside those trains, so to speak.

Let's look at some of the hurdles. What prevents organizations from adopting these hybrid cloud and containerization benefits? What else needs to happen?

 

Make hybrid happen 

Junod: One of the biggest things we hear from our customers is, “Where should I go when it comes to cloud, and how?” They want to make sure that what they do is future-proof. They want to spend their time on what their application and customer needs are -- and not be beholden to a specific cloud A or cloud B.

Because with the new regulations regarding data privacy and data sovereignty, if you are a multinational company, your data sets are going to have to live in a bunch of different places. People want the ability to have things hybrid. But that presents an application and an infrastructure operational challenge.

What's great in our partnership is that we are saying we are going to provide you the safest way to do hybrid; the fastest way to get there. With the Docker layer on top of that, no matter what cloud you pick to marry with your HPE on-premises infrastructure, it’s seamless portability -- and you can have the same operational governance.

Jeff Carlat

Carlat: We also see enterprises, as they move to gain efficiencies, are on a journey. And the journey around containerization and containers in our modern infrastructure can be daunting at times.

One of the barriers to active adoption is complexity -- not knowing where to start. This is where we are partnering deeply, essentially around services capabilities: bringing in our consultative capabilities with Pointnext to do assessments, help customers establish that journey, and get them through the maturity of testing and development and on into full production-level environments.

Gardner: Is Cloud Technology Partners, a recent HPE acquisition, also a big plus given that they have been of, by, and for cloud -- and very heavily into containers?

Carlat: Yes. That snaps in naturally with the choice in our hybrid strategy. It's a great bridge, if you will, between what applications you may want on-premises and also using Cloud Technology Partners for leveraging an agnostic set of public cloud providers.

Gardner: Betty, when we think about adoption, sometimes too much of a good thing too soon can provide challenges. Is there anything about people adopting containers too rapidly without doing the groundwork -- the blocking and tackling, around management and orchestration, and even automation -- that becomes a negative? And how does HPE factor into that?

 

Too much transformation, too soon 

Junod: We have learned over these last few years, across 500 different customers, what does and doesn't work. It has a consistent pattern. The companies that say they want to do DevOps, and cloud, and microservices -- and they put all the buzzwords in – and they want to do it all right now for transformation -- those organizations tend to fail. That’s because it's too much change at once, like you mentioned.

What we have worked out by collaborating tightly with our partners as well as our customers is that we say, “Pick one, and maybe not the most complicated application you have. Because you might be deploying on a new infrastructure. You are using a new container model. You are going to need to evolve some of your processes internally.”

And if you are going to do hybrid, when is it hybrid? Is it during the development and test in the cloud, and then to on-premises for production? Or is it cloud bursting for scale up? Or is it for failover replication? If you don't have some of that sorted out before you go, well, then you are just stuck with too much stuff, too much of a good thing.

The companies that say they want to do DevOps, cloud, microservices, and do it all right now — those organizations tend to fail.

What we have partnered with HPE on -- and especially HPE Pointnextfrom a services standpoint -- is very much an advisory role, to say let's look at your landscape of applications that you have today and let's assess them. Let’s put them in buckets for you and we can pick one or two to start with. Then, let’s outline what’s going to happen with those. How does this inform your new platform choices? 

And then once we get some of those kinks worked out and try some of the operational processes that evolve, then after that it’s almost like a factory. They can just start funneling more in.

Gardner: Jeff, a lot of what HPE has been doing is around management and monitoring, governance, and being mindful of security and compliance issues. So things like HPE Synergy and HPE OneView, which have been in the market for a long time, and newer products like HPE OneSphere -- how are they factoring into allowing containers to be what they should be without getting out of control?

 

Hand in glove

Carlat: We have seen containerization evolve. And modern architectures such as HPE Synergy and OneView are designed and built for bare-metal deployment, containers, or virtualization. It's all designed -- you said it's like fish and chips, or it's like a hand in glove in my analogy -- to allow customers choice, agility, and flexibility.

Our modern infrastructure is not purely designed for containers. We see a lot of virtualization, and Docker runs great in a virtualized environment as well. So it’s not one or the other. So again, it's like a hand in glove.

Gardner: By the way, I know that the Docker whale isn’t technically a fish, but I like to use it anyway.

Let's talk about the rapid adoption now around hyperconverged infrastructure (HCI). How is HCI helping move hybrid cloud forward, particularly for you on the Docker side? Are you seeing it as an accelerant?

Junod: What you are seeing with some of the hyperconverged offerings -- and especially if you relate that over to what's going on with the adoption of containers -- is that it's all about agility. They want speed and they want to be able to spin things out fast, whether it's compute resources or application resources. I think it's a nice marriage of where the entire industry wants to go and what companies are looking for to deliver services faster to their customers.

Carlat: Specifically, hyperconverged represents one of the fastest-growing segments in the market for us. And the folks who are adopting hyperconverged clearly want choice, agility, and simplicity -- and rapid deployment -- of their applications.

Where we are partnering with Docker is in taking HPE SimpliVity, our hyperconverged infrastructure, and building out solutions for test and development, using scripting to deploy a complete environment in 30 minutes or less.

Yes, we are perfectly aligned, and we see hyperconverged as a great area for dropping in infrastructure and testing and development, as well as for midsize IT environments.
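The transcript doesn't include HPE's actual deployment scripts, so here is a minimal, illustrative Python sketch of what "scripting a complete test/dev environment" can look like in practice. The image names, network name, and credentials are hypothetical, not HPE or Docker reference values.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: scripting the bring-up of a small Docker
test/dev environment. Names, images, and credentials are hypothetical."""
import subprocess

def run(cmd):
    # Echo each command before running it so the script doubles as a log.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Confirm the Docker engine is reachable before doing anything else.
    run(["docker", "version", "--format", "{{.Server.Version}}"])

    # Create an isolated network for the test/dev stack (hypothetical name).
    run(["docker", "network", "create", "devtest-net"])

    # Start a database and an application container on that network.
    run(["docker", "run", "-d", "--name", "devtest-db",
         "--network", "devtest-net",
         "-e", "POSTGRES_PASSWORD=devtest", "postgres:15"])
    run(["docker", "run", "-d", "--name", "devtest-app",
         "--network", "devtest-net", "-p", "8080:80", "nginx:stable"])

if __name__ == "__main__":
    main()
```

In a real environment the same idea is usually wrapped in whatever automation tooling the team already uses; the point is simply that the whole environment is reproducible from a script rather than built by hand.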

Gardner: Recently DockerCon wrapped up. Betty, what was some of the big news there, and how has that had an impact on going to market with a partner like HPE?

 

Choice, Agility, Security 

Junod: At DockerCon we reemphasized our core pillars: choice, agility, and security. Choice means choice in what you want to build. You should, as an organization, be able to build the best applications with the best components that you feel are right for your application -- and then be able to run that anywhere, in whatever scenario.

Agility is really around speed for delivering new applications, as well as speed for operations teams. Back to DevOps, those two sides have to exist together and in partnership. One can't be fast and the other slow. We want to enable both to be fast together.

Organizations should be able to build the best applications with the best components and run them anywhere, in any scenario.

And lastly, security. It's really about driving security throughout the lifecycle, from development to production. We want to make sure that we have security built into the entire stack that's supporting the application.

We just advanced the platform along those lines. Docker Enterprise Edition 2.0 came out a couple of months ago, so 2.0 is available. As part of that we announced some technology preview capabilities: we introduced the integration of Kubernetes, which is a very popular container orchestration engine, into our core Enterprise Edition platform, and then we added the ability to do all of that with Windows as well.

So back to choice; it's a Linux and Windows world. You should be able to use any orchestration you like as part of that.
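As a concrete illustration of that orchestration choice, here is a minimal sketch, assuming the open-source Docker SDK for Python and the official Kubernetes Python client, that runs the same hypothetical nginx image either directly on a Docker engine or as a Kubernetes Deployment. It is not Docker EE-specific code.

```python
"""Sketch: the same container image, run under either orchestrator.
Assumes `pip install docker kubernetes`, access to a Docker engine and a
Kubernetes cluster; image and resource names are hypothetical."""
import docker
from kubernetes import client as k8s, config as k8s_config

IMAGE = "nginx:stable"

def run_on_docker():
    # Talk straight to the local Docker engine.
    engine = docker.from_env()
    return engine.containers.run(IMAGE, name="web", detach=True)

def run_on_kubernetes(replicas: int = 2):
    # Create a Deployment so the Kubernetes scheduler manages the same image.
    k8s_config.load_kube_config()
    deployment = k8s.V1Deployment(
        metadata=k8s.V1ObjectMeta(name="web"),
        spec=k8s.V1DeploymentSpec(
            replicas=replicas,
            selector=k8s.V1LabelSelector(match_labels={"app": "web"}),
            template=k8s.V1PodTemplateSpec(
                metadata=k8s.V1ObjectMeta(labels={"app": "web"}),
                spec=k8s.V1PodSpec(
                    containers=[k8s.V1Container(name="web", image=IMAGE)]
                ),
            ),
        ),
    )
    return k8s.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )
```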

 

No more kicking the tires 

Carlat: One thing I really noticed at DockerCon was not necessarily just what Docker did, but the significance of major enterprises -- Fortune 500, Fortune 100 enterprises -- that are truly pivoting to the use of containers, and Docker specifically, on HPE.

No longer are they kicking the tires and evaluating. We are seeing full-scale production roll outs in major, major, major enterprises. The time is right for customers to modernize, embrace, and adopt containers and container orchestration and drop that onto a modern infrastructure or architecture. They can then gain the benefits of the efficiencies, agility, and the security that we have talked about. That is paramount.

Gardner: Along those lines, do you have examples that show how the combination of what HPE brings to the table and what Docker brings to the table combine in a way that satisfies significant requirements and needs in the market?

Junod: I can highlight two customers: Bosch, a major manufacturer in Europe, and DaVita, a healthcare company.

What’s interesting is that Bosch began with a lot of organic use of Docker by their developers, spread all over the place. But they said, “Hang on a second, because developers are working with corporate intellectual property (IP), we need to find a way to centralize that, so it better scales for them -- and it’s also secure for us.”

This is one of the first accounts that Docker and HPE worked on together to bring them an integrated solution. They implemented a new development pipeline. Central IT at Bosch is doing the governance, management, and the security around the images and content. But each application development team, no matter where they are around the world, is able to spin up their own separate clusters and then be able to do the development and continuous integration on their own, and then publish the software to a centralized pipeline.
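The transcript doesn't show Bosch's actual pipeline, but the developer-side step it implies -- build locally, then publish images to a centrally governed registry -- can be sketched with the Docker SDK for Python. The registry URL, repository name, and credential handling below are hypothetical.

```python
"""Sketch: a development team publishing an image to a central, IT-governed
registry. Assumes `pip install docker`; registry, repository, and tag names
are made up for illustration."""
import os
import docker

REGISTRY = "registry.example.internal"          # hypothetical central registry
REPOSITORY = f"{REGISTRY}/team-a/sensor-api"    # hypothetical team repository
TAG = "1.4.2"

engine = docker.from_env()

# Build the application image from the team's local Dockerfile.
image, build_logs = engine.images.build(path=".", tag=f"{REPOSITORY}:{TAG}")

# Authenticate against the central registry (credentials come from CI secrets in practice).
engine.login(registry=REGISTRY,
             username=os.environ["REGISTRY_USER"],
             password=os.environ["REGISTRY_PASSWORD"])

# Push, streaming status lines so CI logs show progress.
for line in engine.images.push(REPOSITORY, tag=TAG, stream=True, decode=True):
    if "status" in line:
        print(line["status"])
```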

 

Containers at the intelligent edge 

Carlat: There are use cases across the board and in all industry verticals: healthcare, manufacturing, and more. We are seeing strong interest in adoption outside of the data center, and we call that the intelligent edge.

We see that containers, and containers-as-a-service, are joining more compute, data, and analytics at the edge. As we move forward, the same level of choice, agility, and security there is paramount. We see containers as a perfect complement, if you will, at the edge.

Gardner: Right; bringing down the necessary runtime for those edge apps -- but not any more than the necessary runtime. Let’s unpack that a little bit. What is it about container and edge devices, like an HPE Edgeline server, for example, that makes so much sense?

Junod: There is a broad spectrum at the edge. You have things like remote offices and retail locations. You also see things like the Industrial Internet of Things (IIoT). There you have very small devices for data ingest that feed into a distributed server, which ultimately feeds into the core, or the cloud, to do large-scale data analytics. Together this provides real-time insights, and this is an area we have been partnering and working on with some of our customers right now.

Security is actually paramount because -- if you start thinking about the data ingest devices -- we are not talking about, “Oh, hey, I have 100 small offices.” We are talking about millions and millions of very small devices out there that need to run a workload. They have minimal compute resources and they are going to run one or two workloads to collect data. If not sufficiently secured, they can be risk areas for attack.

So, what's really important from a Docker perspective is the security; integrated security that goes from the core -- all the way to the edge. Our ability, from a software layer, to provide trusted transport and digital signatures and the locking down of the runtime along the way means that these tiny sensor devices have one container on them. And it's been encrypted and locked with keys that can’t be attacked.

That’s very important, because now if someone did attack, they could also start getting access into the network. So security is even more paramount as you get closer to the edge.
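Docker's signing happens at the image and transport layer; the underlying idea -- a tiny device signs what it sends so the upstream tiers can verify it hasn't been tampered with -- can be illustrated generically. Here is a minimal Python sketch, assuming a hypothetical per-device key; it is not Docker's implementation.

```python
"""Sketch: signing sensor readings on an edge device so the distributed server
or core can verify integrity before trusting the data. Standard library only."""
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"per-device-secret"  # hypothetical; provisioned securely, never hard-coded in practice

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 signature so the receiving tier can detect tampering."""
    payload = json.dumps(reading, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "sig": signature}

def verify_reading(message: dict) -> bool:
    """Run upstream, before the data enters analytics pipelines."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])

if __name__ == "__main__":
    msg = sign_reading({"sensor": "vibration-07", "value": 0.42, "ts": time.time()})
    print(verify_reading(msg))  # True unless payload or signature was altered in transit
```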

Gardner: Any other forward-looking implications for your alliance? What should we be thinking about in terms of analyzing that data and bringing machine learning (ML) to the edge? Is there something between your two companies that will help facilitate that?

Carlat: The world of containers and agile cloud-native applications is not going away. When I think about the future, enterprises need to pivot. Yet change is hard for all enterprises, and they need help.

They are likely going to turn to trusted partners. HPE and Docker are perfectly aligned, we have been bellwethers in the industry, and we will be there to help on that journey.

Gardner: Yes, this seems like a long-term relationship. 
 

Listen to the podcast. Find it on iTunes. Get the mobile app.  Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


3PAR and InfoSight QuickStart guide


About the Author

HPEStorageGuy
I have worked at HP and now HPE since 1983, all of it around storage, but 100% focused on storage since 1990. I blog and create videos and podcasts to help you better understand HPE Storage.

I've done a few recent articles on Around the Storage Block showing you HPE InfoSight. The purpose of this article is to share with you a QuickStart guide that the 3PAR on InfoSight team pulled together after they heard several questions from storage people around HPE asking how to get it working. A big thank you to Wiley Thrasher, an engineer on the team doing the work to integrate 3PAR and InfoSight, for this guide. On a personal note, I met Wiley in 1999 when HP acquired Transoft Networks; my son-in-law, who was a summer intern on the 3PAR team a couple of years ago, just joined the team that Wiley is on. He and my daughter are moving to Boise soon, so I'm excited to have them close by!

Ok, let's jump into this.

Get the QuickStart guides

NOTE: The QuickStart guide is updated regularly. The links in this section are the latest as of July 2018. Check back and I'll update the links, or check my SlideShare account for the latest.

I have two ways for you to get the guide. You can download it from my SlideShare.net account. Click on this link to the QuickStart guide and you'll find a "Download" button that will let you get a PDF version. I'll also embed it here but because some of the print is small, I suggest downloading it. 

HPE InfoSight for 3PAR quickstart v1.4 from Calvin Zito

Note that if these have any updates to them, I'll update the links so you can get the latest. This version is a very complete and detailed guide. Wiley created another guide that is more suited to 3PAR customers who have used the portal before as users of StoreFront Remote. Here's a link to the shorter QuickStart guide.

I talked with Wiley and want to call out a few things. 

What is required?

Customers that want to use 3PAR on InfoSight must have a support contract in place. There are a few other things that are "musts" so let me share those.

  • For global visibility, you have to have a minimum of 3PAR OS version 2.2.x and Service Processor (SP) version 2.4.2. I'll have a bit more about global visibility in a second. 
  • For the cross-stack analytics, there's a bit more you need. Today you have to have 3PAR OS version 3.3.1 and 3PAR SP version 5.0.3 (aka 5.0 MU3). You must also enable the RDA transport. 

The longer QuickStart guide gives details about what you get with "global visibility," but I'll give a short summary here. At a high level, it includes customizable dashboards and PDF reports for HPE Storage (3PAR StoreServ, StoreOnce, and RMC). Some of the specifics you'll see include:

  • 3PAR Models, OS Versions & Entitlement
  • Systems by Country & Region
  • Historical, Total & Allocated Capacity
  • Capacity Efficiency, Deduplication & Compression Ratios
  • Total & Average Front-End Performance
  • Device Type Count & Utilization
  • Wellness Score & Type

Check out the demo I did showing the cross-stack analytics. It's a short intro to what it is. I've had people ask me how we are getting this VMware information. We have a collector running on the 3PAR SP that makes "lightweight" REST API calls about once every one to three hours. The guide talks about how to set this up for the vCenter instances that you want to connect. Note that to get any cross-stack analytics information, the VMs in your vCenter must be running on 3PAR; InfoSight will only provide analytics for VMs running on 3PAR.
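To make that collection pattern concrete, here is a minimal Python sketch of a periodic, read-only REST poller in the spirit of what's described above. The endpoint, authentication handling, and interval are illustrative assumptions, not the actual 3PAR SP collector.

```python
"""Sketch: a lightweight collector that polls a vCenter REST endpoint on a
fixed interval. Assumes `pip install requests`; the host name and session
handling are hypothetical."""
import time
import requests

VCENTER = "https://vcenter.example.local"     # hypothetical vCenter host
POLL_INTERVAL_SECONDS = 2 * 60 * 60           # within the "every one to three hours" window

def collect_once(session: requests.Session) -> list:
    # One lightweight, read-only inventory call per cycle.
    resp = session.get(f"{VCENTER}/rest/vcenter/vm", timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

def main():
    session = requests.Session()
    # Authentication (session token acquisition, renewal) is omitted here;
    # the real collector handles that for you.
    while True:
        vms = collect_once(session)
        print(f"collected {len(vms)} VM records")  # the real collector forwards results upstream
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```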

Help, I need somebody!


You've read the guide and you think you followed all the steps, but something still isn't working. What now? When you are logged into HPE InfoSight, click on Resources. You'll then see two columns; under the 3PAR and StoreOnce column, click on HPE InfoSight Portal Support.

On this page, you'll see an email address that you can email for support with HPE InfoSight. Do not manually enter the email address into your email but click on the link so that an email opens up. It will include information that the L4 engineering team needs to assist you. You should include information about whatever issues you are having and include screenshots if applicable. Also, include your 3PAR serial number as that will make it easier for the team to help you. 

You can see our articles about InfoSight (including recent demos I've shared) on Around the Storage Block.