Enterprise architecture

How an agile focus for Enterprise Architects builds competitive advantage for digital transformation

A discussion on how Enterprise Architects should embrace agile approaches to build greater competitive advantage for their companies.

CEO Henshall on Citrix’s 30-year journey to make workers productive, IT stronger, and partners more capable

A discussion on how Citrix is charting a new future of work that abstracts productivity above apps, platforms, data, and clouds to empower, energize, and enlighten workers while simplifying and securing anywhere work across any deployment model.

How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure

A discussion on how IT operators are seeking increased automation, built-in intelligence, and robust security as they look for turnkey hyperconverged appliance approaches for both cloud and traditional workloads.

How Ferrara Candy depends on automated IT intelligence to support rapid business growth

A discussion on how a global candy maker unlocks end-to-end process and economic efficiency through increased actionable insight and optimization of servers and storage.

How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’

Listen to this podcast discussion on how Texmark, with support from HPE and HPE channel partner CB Technologies, has been combining the refinery of the future approach with the best of OT, IT, and IoT technology solutions to deliver data-driven insights that promote safety, efficiency, and unparalleled sustained operations.

How the composable approach to IT aligns automation and intelligence to overcome mounting complexity

Learn how higher levels of automation for data center infrastructure have evolved into truly workable solutions for composability. 

How HPC supports 'continuous integration of new ideas' for optimizing Formula 1 car design

Learn how Alfa Romeo Racing in Switzerland leverages the latest in IT to bring hard-to-find but momentous design improvements -- from simulation to victory. 

Data-driven and intelligent healthcare processes improve patient outcomes while making the IT increasingly invisible

A discussion on how healthcare providers employ new breeds of intelligent digital workspace technologies to improve doctor and patient experiences, make technology easier to use, and assist in bringing actionable knowledge resources to the integrated healthcare environment. 

Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman

Learn why a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations.

A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud

Learn from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure in the heavily regulated financial services industry.

Where the rubber meets the road: How users see the IT4IT standard building competitive business advantage

A discussion on how the IT4IT Reference Architecture for IT management works in many ways for many types of organizations and the demonstrated business benefits that are being realized as a result.

IT kit sustainability: A business advantage and balm for the planet

Learn how a circular-economy mindset improves sustainability as a benefit both to individual companies and to the overall environment.

Why enterprises should approach procurement of hybrid IT in entirely new ways

Learn why changes in cloud deployment models are forcing a rethinking of IT economics, and maybe even the very nature of acquiring and cost-optimizing digital business services.

Manufacturer gains advantage by expanding IoT footprint from many machines to many insights

A discussion on how a Canadian maker of containers leverages the Internet of Things to create a positive cycle of insights and applied learning. 

Why enterprises struggle with adopting public cloud as a culture

Learn why a cultural solution to adoption may be more important than any other aspect of digital business transformation.

Who, if anyone, is in charge of multi-cloud business optimization?

Learn from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services and why the model around procurement must be accompanied by an updated organizational approach. 

A discussion with IT analyst Martin Hingley on the culmination of 30 years of IT management maturity

A discussion on how new maturity in management over all facets of IT amounts to a culmination of 30 years of IT operations improvement and ushers in an era of comprehensive automation, orchestration, and AIOps.

Inside story: How HP Inc. moved from a rigid legacy to data center transformation

A discussion on how a massive corporate split led to the re-architecting and modernizing of IT to allow for the right data center choices at the right price over time.

Dark side of cloud—How people and organizations are unable to adapt to improve the business

The next BriefingsDirect cloud deployment strategies interview explores how public cloud adoption is not reaching its potential due to outdated behaviors and persistent dissonance between what businesses can do and will do with cloud strengths.

Many of our ongoing hybrid IT and cloud computing discussions focus on infrastructure trends that support the evolving hybrid IT continuum. Today’s focus shifts to behavior -- how individuals and groups, both large and small, benefit from cloud adoption. 

It turns out that a dark side to cloud points to a lackluster business outcome trend. A large part of the disappointment has to do with outdated behaviors and persistent dissonance between what line of business (LOB) practitioners can do and will do with their newfound cloud strengths. 

We’ll now hear from an observer of worldwide cloud adoption patterns on why making cloud models a meaningful business benefit rests more with adjusting the wetware than any other variable.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore why cloud failures and cost overruns are dogging many enterprises is Robert Christiansen, Vice President, Global Delivery, Cloud Professional Services and Innovation at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening now with the adoption of cloud that makes the issue of how people react such a pressing concern? What’s bringing this to a head now?

Christiansen: Enterprises are on a cloud journey. They have begun their investment, they recognize that agility is a mandate for them, and they want to get those teams rolling. They have already done that to some degree. They may be moving a few applications, or they may be doing wholesale shutdowns of data centers. They are in lots of different phases of adoption.

What we are seeing is a lack of progress with regard to the speed and momentum of the adoption of applications into public clouds. It’s going a little slower than they’d like.

Gardner: We have been through many evolutions, generations, and even step-changes in technology. Most of them have been in a progressive direction. Why are we catching our heels now?

Christiansen: Cloud is a completely different modality, Dana. One of the things that we have learned here is that adoption of infrastructure that can be built from the ground-up using software is a whole other way of thinking that has never really been the core bread-and-butter of an infrastructure or a central IT team. So, the thinking and the process -- the ability to change things on the fly from an infrastructure point of view -- is just a brand new way of doing things. 

And we have had various fits and starts around technology adoption throughout history, but nothing at this level. The tool kits available today have completely changed and redefined how we go about doing this stuff.

Gardner: We are not just changing a deployment pattern, we are reinventing the concept of an application. Instead of monolithic applications and systems of record that people get trained on and line up around, we are decomposing processes into services that require working across organizational boundaries. The users can also access data and insights in ways they never had before. So that really is something quite different. Even the concept of an application is up for grabs.

Christiansen: Well, think about this. Historically, an application team or a business unit, let’s say in a bank, said, “Hey, I see an opportunity to reinvent how we do funding for auto loans.”

We worked with a company that did this. And historically, they would have had to jump through a bunch of hoops. They would justify the investment of buying new infrastructure, set up the various components necessary, maybe landing new hardware in the organization, and going into the procurement process for all of that. Typically, in the financial world, it takes months to make that happen.

Today, that same team using a very small investment can stand up a highly available redundant data center in less than a day on a public cloud. In less than a day, using a software-defined framework. And now they can go iterate and test and have very low risk to see if the marketplace is willing to accept the kind of solution they want to offer.
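To make that concrete, here is a minimal sketch of what “a data center in a day” can look like as code, using the AWS SDK for Python (boto3). The AMI ID, instance type, and tag values are hypothetical placeholders, and a real deployment would add networking, security groups, and load balancing on top.

```python
# A hedged sketch of software-defined, redundant infrastructure:
# identical instances spread across two availability zones so the
# loss of one zone does not take the service down.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for zone in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "project", "Value": "auto-loan-poc"}],
        }],
    )
```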

And that just blows apart the procedural-based thinking that we have had up to this point; it just blows it apart. And that thinking, that way of looking at stuff is foreign to most central IT people. Because of that emotion, going to the cloud has come in fits and starts. Some people are doing it really well, but a majority of them are struggling because of the people issue.

Gardner: It seems ironic, Robert, because typically when you run into too much of a good thing, you slap on governance and put in central command and control, and you throttle it back. But that approach subverts the benefits, too.

How do you find a happy medium? Or is there such a thing as a happy medium when it comes to moderating and governing cloud adoption?

Control issues

Christiansen: That’s where the real rub is, Dana. Let’s give it an analogy. At Cloud Technology Partners (CTP), we do cloud adoption workshops where we bring in all the various teams and try to knock down the silos. They get into these conversations to address exactly what you just said. “How do we put governance in place without getting in the way of innovation?”

It’s a huge, huge problem, because the central IT team’s whole job is to protect the brand of the company and keep the client data safe. They provide the infrastructure necessary for the teams to go out and do what they need to do.

When you have a structure like that but supplied by the public clouds like Amazon Web Services (AWS), Google, and Microsoft Azure, you still have the ability to put a lot of those controls in the software. Before, it was done either manually or at least semi-manually.

The challenge is that the central IT teams are not necessarily set up with the skills to make that happen. They are not by nature software development people. They are hardware people. They are rack and stack people. They are people who understand how to stitch this stuff together -- and they may use some automation. But as a whole it’s never been their core competency. So therein lies the rub: How do you convert these teams over to think in that new way?

At the same time, you have the pressing issue of, “Am I going to automate myself right out of a job?” That’s the other part, right? That’s the big, 800-pound gorilla sitting in the corner that no one wants to talk about. How do you deal with that?

Gardner: Are we talking about private cloud, public cloud, hybrid cloud, hybrid IT -- all the above when it comes to these trends?

Public perceptions 

Christiansen: It’s mostly public cloud that you see the perceived threats. The public cloud is perceived as a threat to the current way of doing IT today, if you are an internal IT person. 

Let’s say that you are a classic compute and management person. You actually split across both storage and compute, and you are able to manage and handle a lot of those infrastructure servers and storage solutions for your organization. You may be part of a team of 50 in a data center or for a couple of data centers. Many of those classic roles literally go away with a public cloud implementation. You just don’t need them. So these folks need to pivot or change into new roles or reinvent themselves.

Let’s say you’re the director of that group and you happen to be five years away from retirement. This actually happened to me, by the way. There is no way these folks want to give up the reins right before their retirement. They don’t want to reinvent their roles just before they’re going to go into their last years.

They literally said to me, “I am not changing my career this far into it for the sake of a public cloud reinvention.” They are hunkering down, building up the walls, and slowing the process. This seems to be an undercurrent in a number of areas where people just don’t want to change. They don’t want any differences.

Gardner: Just to play the devil’s advocate, when you hear things around serverless, when we see more operations automation, when we see AIOps use artificial intelligence (AI) and machine learning (ML) -- it does get sort of scary.

You’re handing over big decisions within an IT environment on whether to use public, private, or multicloud in some combination. These capabilities are coming to fruition.

Maybe we do need to step back and ask, “Just because you can do something, should you?” Isn’t that more than just protecting my career? Isn’t there a need for careful consideration before we leap into some of these major new trends?

Transform fear into function 

Christiansen: Of course, yeah. It’s a hybrid world. There are applications where it may not make sense to be in the public cloud. There are legacy applications. There are what I call centers of gravity that are database-centric; the business runs on them. Moving them and doing a big lift over to a public cloud platform may not make financial sense. There is no real benefit to it to make that happen. We are going to be living between an on-premises and a public cloud environment for quite some time. 

The challenge is that people want to create a holistic view of all of that. How do I govern it in one view and under one strategy? And that requires a lot of what you are talking about, being more cautious going forward.

And that’s a big part of what we have done at CTP. We help people establish that governance framework, of how to put automation in place to pull these two worlds together, and to make it more seamless. How do you network between the two environments? How do you create low-latency communications between your sources of data and your sources of truth? Making that happen is what we have been doing for the last five or six years.

The challenge we have, Dana, is that even once we have established that -- we call that methodology the Minimum Viable Cloud (MVC) -- and after you put all of that structure, rigor, and security in place, we still run into the problems of motion and momentum. Those needed governance frameworks are well-established.

Gardner: Before we dig into why the cloud adoption inertia still exists, let’s hear more about CTP. You were acquired by HPE not that long ago. Tell us about your role and how that fits into HPE.

CTP: A cloud pioneer

Christiansen: CTP was established in 2010. Originally, we were doing mostly private cloud, OpenStack stuff, and we did that for about two to three years, up to 2013.

I am one of the first 20 employees. It’s a Boston-based company, and I came over with the intent to bring more public cloud into the practice. We were seeing a lot of uptick at the time. I had just come out of another company called Cloud Nation that I owned. I sold that company; it was an Amazon-based, Citrix-for-rent company. So imagine, if you would, you swipe a credit card and you get NetScaler, XenApp, and XenDesktop running on top of AWS way back in 2012 and 2013.

I sold that company, and I joined CTP. We grew the practice of public cloud on Google, Azure, and AWS over those years and we became the leading cloud-enabled professional services organization in the world.

We were purchased by HPE in October 2017, and my role since that time is to educate, evangelize, and press deeply into the methodologies for adopting public cloud in a holistic way so it works well with what people have on-premises. That includes the technologies, economics, strategies, organizational change, people, security, and establishing a DevOps practice in the organization. These are all within our world.

We do consultancy and professional services advisory types of things, but on the same coin, we flip it over, and we have a very large group of engineers and architects who are excellent on keyboards. These are the people who actually write software code to help make a lot of this stuff automated to move people to the public clouds. That’s what we are doing to this day.

Gardner: We recognize that cloud adoption is a step-change, not an iteration in the evolution of computing. This is not going from client/server to web apps and then to N-Tier architectures. We are bringing services and processes into a company in a whole new way and refactoring that company. If you don’t, the competition or a new upstart unicorn company is going to eat your lunch. We certainly have seen plenty of examples of that. 

So what prevents organizations from both seeing and realizing the cloud potential? Is this a matter of skills? Is it because everyone is on the cusp of retirement and politically holding back? What can we identify as the obstacles to overcome to break that inertia?

A whole new ball game

Christiansen: From my perspective, we are right in the thick of it. CTP has been involved with many Fortune 500 companies through this process.

The technology is ubiquitous, meaning that everybody in the marketplace now can own pretty much the same technology. Dana, this is a really interesting thought. If a team of 10 Stanford graduates can start up a company to disrupt the rental car industry, which somebody has done, by the way, and they have access to technologies that were only once reserved for those with hundreds of millions of dollars in IT budgets, you have all sorts of other issues to deal with, right?

So what’s your competitive advantage? It’s not access to the technologies. The true competitive advantage now for any company is the people and how they consume and use the technology to solve a problem. Before [the IT advantage] was reserved for those who had access to the technology. That’s gone away. We now have a level playing field. Anybody with a credit card can spin up a big data solution today – anybody. And that’s amazing, that’s truly amazing.

For an organization that had always fallen back on their big iron or infrastructure -- those processes they had as their competitive advantage -- that now has become a detriment. That’s now the thing that’s slowing them down. It’s the anchor holding them back, and the processes around it. That rigidity of people and process locks them into doing the same thing over and over again. It is a serious obstacle. 

Untangle spaghetti systems 

Another major issue came very much as a surprise, Dana. We observed it over the last couple of years of doing application inventory assessments for people considering shutting down data centers. They had come to view the applications holding the assets of those data centers as not competitive. And they asked, “Hey, can we shut down a data center and move a lot of it to the public cloud?”

We at CTP were hired to do what are called application assessments, economic evaluations. We determine if there is a cost validation for doing a lift-and-shift [to the public cloud]. And the number-one obstacle was inventory. The configuration management databases (CMDBs), which hold the inventory of where all the servers are and what’s running on them for these organizations, were wholly out of date. Many of the CMDBs just didn’t give us an accurate view of it all.

When it came time to understand what applications were actually running inside the four walls of the data centers -- nobody really knew. As a matter of fact, nobody really knew what applications were talking to what applications, or how much data was being moved back and forth. They were so complex; we would be talking about hundreds, if not thousands, of applications intertwined with themselves, sharing data back and forth. And nobody inside organizations understood which applications were connected to which, how many there were, which ones were important, and how they worked.

Years of managing that world have created such a spaghetti mess behind those walls that it’s been exceptionally difficult for organizations to get their hands around what can be moved and what can’t. The integration within those systems runs that deep.
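As an illustration of that discovery problem, a first pass at untangling the spaghetti often starts from network flow records rather than the stale CMDB. The sketch below is a hypothetical example: the IP-to-application lookup and flow fields stand in for whatever a real flow-log or discovery tool provides.

```python
# Build a rough map of which applications talk to which, ranked by
# traffic volume, from (source, destination, bytes) flow records.
from collections import defaultdict

ip_to_app = {"10.0.1.5": "billing", "10.0.2.9": "crm", "10.0.3.7": "ledger"}

flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 120_000},
    {"src": "10.0.2.9", "dst": "10.0.3.7", "bytes": 48_000},
    {"src": "10.0.1.5", "dst": "10.0.3.7", "bytes": 3_500},
]

traffic = defaultdict(int)
for f in flows:
    src = ip_to_app.get(f["src"], "unknown")
    dst = ip_to_app.get(f["dst"], "unknown")
    if src != dst:
        traffic[(src, dst)] += f["bytes"]

# The heaviest edges are the dependencies that must move together
# (or be given a low-latency link) in any migration plan.
for (src, dst), volume in sorted(traffic.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {volume:,} bytes")
```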

The third part of this trifecta of obstacles to moving to the cloud is, as we mentioned, people not wanting to change their behaviors. They are locked in to the day-to-day motion of maintaining those systems and are not really motivated to go beyond that.

Gardner: I can see why they would find lots of reasons to push off to another day, rather than get into solving that spaghetti maze of existing data centers. That’s hard work, it’s very difficult to synthesize that all into new apps and services.

Christiansen: It was hard enough just virtualizing these systems, never mind trying to pull it all apart.

Gardner: Virtualizing didn’t solve the larger problem, it just paved the cow paths, gained some efficiency, reduced poor server utilization -- but you still have that spaghetti, you still have those processes that can’t be lifted out. And if you can’t do that, then you are stuck.

Christiansen: Exactly right.

Gardner: Companies for many years have faced other issues of entrenchment and incumbency, which can have many downsides. Many of them have said, “Okay, we are going to create a Skunk Works, a new division within the company, and create a seed organization to reinvent ourselves.” And maybe they begin subsuming other elements of the older company along the way.

Is that what the cloud and public cloud utilization within IT is doing? Why wouldn’t that proof of concept (POC) and Skunk Works approach eventually overcome the digital transformation inertia?

Clandestine cloud strategists

Christiansen: That’s a great question, and I immediately thought of a client whom we helped. They have a separate team that rewrote or rebuilt an application using serverless on Amazon. It’s now a fairly significant revenue generator for them, and they did it almost two-and-a-half years ago.

It uses a few cloud servers, but mostly they rely on the messaging backbones and non-server-based platform-as-a-service (PaaS) layers of AWS to solve their problem. They are a consumer credit company and have a lot of customer-facing applications that they generate revenue from on this new platform.
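For a sense of the pattern, here is a minimal, hypothetical sketch of such a function: an AWS Lambda handler consuming records from a messaging backbone (SQS, in this assumed wiring), with no servers to provision or manage. The record fields and scoring rule are illustrative stand-ins for the client’s actual business logic.

```python
# A Lambda handler invoked with a batch of SQS messages; each message
# body is assumed to carry one JSON-encoded credit application.
import json

def handler(event, context):
    approved = []
    for record in event.get("Records", []):
        application = json.loads(record["body"])
        # Hypothetical scoring rule standing in for real underwriting.
        if application.get("credit_score", 0) >= 660:
            approved.append(application["applicant_id"])
    return {"approved": approved}
```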

The team behind the solution educated themselves. They were forward-thinkers and saw the changes in public cloud. They received permission from the business unit to break away from the central IT team’s standard processes, and they completely redefined the whole thing.

The team really knocked it out of the park. So, high success. They were able to hold it up and tried to extend that success back into the broader IT group. The IT group, on the other hand, felt that they wanted more of a multicloud strategy. They weren’t going to have all their eggs in Amazon. They wanted to give the business units options, of either going to Amazon, Azure, or Google. They wanted to still have a uniform plane of compute for on-premises deployments. So they brought in Red Hat’s OpenShift, and they overlaid that, and built out a [hybrid cloud] platform.

Now, with the Red Hat platform, I personally had no direct experience, but I had heard good things about it. I had heard of people who adopted it and saw benefits. In this particular environment, though, Dana, the business units themselves rejected it.

The core Amazon team said, “We are not doing that because we’re skilled in Amazon. We understand it, we’re using AWS CloudFormation. We are going to write code to the applications, we are going to use Lambda whenever we can.” They said, “No, we are not doing that [hybrid and multicloud platform approach].”

Other groups then said, “Hey, we’re an Azure shop, and we’re not going to be tied up around Amazon because we don’t like the Amazon brand.” And all that political stuff arose; they just used Azure, went shooting off on their own, and did not use the OpenShift platform because, at the time, the tool stacks were not quite what they needed to solve their problems.

The company ended up getting a fractured view. We recommended that they go on an education path, to bring the people up to speed on what OpenShift could do for them. Unfortunately, they opted not to do that -- and they are still wrestling with this problem.

CTP and I personally believe that this was an issue of education, not technology, and not opportunity. They needed to lean in, sponsor, and train their business units. They needed to teach the app builders and the app owners on why this was good, the advantages of doing it, but they never invested the time. They built it and hoped that the users would come. And now they are dealing with the challenges of the blowback from that.

Gardner: What you’re describing, Robert, sounds an awful lot like basic human nature, particularly with people in different or large groups. So, politics, right? The conundrum is that when you have a small group of people, you can often get them on board. But there is a certain cut-off point where the groups are too large, and you lose control, you lose synergy, and there is no common philosophy. It’s Balkanization; it’s Europe in 1916.

Christiansen: Yeah, that is exactly it.

Gardner: Very difficult hurdles. These are problems that humankind has been dealing with for tens of thousands of years, if not longer. So, tribalism, politics. How does a fleet organization learn from what software development has come up with to combat some of these political issues? I’m thinking of Agile methodologies, scrums, and having short bursts, lots of communication, and horizontal rather than command-and-control structures. Those sorts of things.

Find common ground first

Christiansen: Well, you nailed it. How you get this done is the question. How do you get some kind of agility throughout the organization to make this happen? And there are successes out there: whole organizations of 4,000, 5,000, or 6,000 people have been able to move. And we’ve been involved with them. The best practices that we see today, Dana, are around allowing the businesses themselves to select the platforms to go deep on, to get good at.

Let’s say you have a business unit generating $300 million a year with some service. They have money, they are paying the IT bill. But they want more control; they want more of the “dev” from the DevOps process.

They are going to provide much of that on their own, but they still need core common services from the central IT team. This is the most important part. They need the core services, such as identity and access management, key management, logging and monitoring, and they need networking. There is a set of core functions that the central team must provide.

And we help those central teams to find and govern those services. Then, the business units [have cloud model choice and freedom as long as they] consume those core services -- the access and identity process, the key management services, they encrypt what they are supposed to, and they use the networking functions. They set up separation of the services appropriately, based on standards. And they use automation to keep them safe. Automation prevents them from doing silly things, like leaving unencrypted AWS S3 buckets open to the public Internet, things like that.
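A guardrail of that kind can be surprisingly small. This is a hedged sketch, assuming the AWS SDK for Python (boto3): it scans every S3 bucket and flags any without default encryption or a public-access block. A production guardrail would run on a schedule or react to configuration events rather than just print warnings.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)    # raises if not configured
    except ClientError:
        print(f"WARNING: {name} has no default encryption")
    try:
        s3.get_public_access_block(Bucket=name)  # raises if not configured
    except ClientError:
        print(f"WARNING: {name} has no public-access block")
```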

You now have software that does all of that automation. You can turn those tools on and then it’s like a playground, a protected playground. You say, “Hey, you can come out into this playground and do whatever you want, whether it’s on Azure or Google, or on Amazon or on-premises.”

“Here are the services, and if you adopt them in this way, then you, as the team, can go deep. You can use application programming interface (API) calls, you can use CloudFormation or Python or whatever happens to be the scripting language you want to build your infrastructure with.”

Then you have the ability to let those teams do what they want. If you notice, what it doesn’t do is overlay a common PaaS layer, which isolates the hyperscale public cloud provider from your work. That’s a whole other food fight, religious battle, Dana, around lock-in and that kind of conversation.

Gardner: Imposing your will on everyone else doesn’t seem to go over very well.

So what you’re describing, Robert, is a right-sizing for agility, and fostering a separate-but-equal approach. As long as you can abstract to the services level, and as long as you conform to a certain level of compliance for security and governance -- let’s see who can do it better. And let the best approach to cloud computing win, as long as your processes end up in the right governance mix.

Development power surges

Christiansen: People have preferences, right? Come on! There’s been a Linux and .NET battle since I have been in business. We all have preferences, right? So, how you go about coding your applications is really about what you like and what you don’t like. Developers are quirky people. I was a C programmer for 14 years, I get it.

The last thing you want to do is completely blow up your routines by taking development back and starting over with a whole bunch of new languages and tools. Then they’re trying to figure out how to release code, test code, and build up a continuous integration/continuous delivery pipeline that is familiar and fast.

These are really powerful personal stories that have to be addressed. You have to understand that. You have to understand that the development community now has the power -- they have the power, not the central IT teams. That shift has occurred. That power shift is monumental across the ecosystem. You have to pay attention to that.

If the people don’t feel like they have a choice, they will go around you, which is where the problems are happening.

Gardner: I think the power has always been there with the developers inside of their organizations. But now it’s blown out of the development organization and has seeped up right into the line of business units.

Christiansen: Oh, that’s a good point.

Gardner: Your business strategy needs to consider all the software development issues, and not just leave them under the covers. We’re probably saying the same thing. I just see the power of development choice expanding, but I think it’s always been there.

But that leads to the question, Robert, of what kind of leadership person can be mindful of a development culture in an organization, and also understand the line of business concerns. They must appreciate the C-suite strategies. If you are a public company, that means keeping Wall Street happy and keeping customer expectations met, because those are always going up nowadays.

It seems to me we are asking an awful lot of a person or small team that sits at the middle of all of this. It seems to me that there’s an organizational and a talent management deficit, or at least something that’s unprecedented.

Tech-business cross-pollination

Christiansen: It is. It really is. And this brings us to a key piece of our conversation, and that is talent enablement. It is now well beyond how we’ve classically looked at it.

Some really good friends of mine run learning and development organizations and they have consulting companies that do talent and organizational change, et cetera. And they are literally baffled right now at the dramatic shift in what it takes to get teams to work together.

In the more flexible-thinking communities of up-and-coming business, a lot of the folks that start businesses today are technology people. They may end up in the coffee industry or in the restaurant industry, but these folks know technology. They are not unaware of what they need to do to use technology.

So, business knowledge and technology knowledge are mixing together. They are good when they get swirled together. You can’t live with one and not have the other.

For example, a developer needs to understand the implications of economics when they write something for cloud deployment. If they build an application that does not economically work inside the constructs of the new world, that’s a bad business decision, but it’s in the hands of the developer.

It’s an interesting thing. We’ve had that need for developer-empowerment before, but then you had a whole other IT group put restrictions on them, right? They’d say, “Hey, there’s only so much hardware you get. That’s it. Make it work.” That’s not the case anymore, right?

At the same time, you now have an operations person involved with figuring out how to architect for the cloud, and they may think that the developers do not understand what has to come together.

As a result, we have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud.

We have found that much of an organization’s delay in rolling this out is because the people who are consuming the cloud are not ready or knowledgeable enough on how to maximize their investment in cloud. This is not for the people building up those core services that I talked about, but for the consumers of the services, the business units.

We are rolling that out later this year, a full Talent Enablement track around those new roles.

Gardner: This targets the people in that line of business, decision-making, planning, and execution role. It brings them up to speed on what cloud really means, how to consume it. They can then be in a position of bringing teams together in ways that hadn’t been possible before. Is that what you are getting at?

Teamwork wins 

Christiansen: That’s exactly right. Let me give you an example. We did this for a telecommunications company about a year ago. They recognized that they were not going to be able to roll out their common core services.

The central team had built out about 12 common core services, and they knew almost immediately that the rest of the organization, the 11 other lines of business, were not ready to consume them.

They had been asking for it, but they weren’t ready to actually drive this new Ferrari that they had asked for. There were more than 5,000 people who needed to be up-skilled on how to consume the services that a team of about 100 people had put together.

Now, these are not classic technical services like AWS architecture, security frameworks, or access control lists (ACLs) and network ACLs (NACLs) for networking traffic, or how you connect back and backhaul, that kind of stuff. None of that.

I’m talking about how to make sure you don’t get a cloud bill that’s out of whack. How do I make sure that my team is actually developing in the right way, in a safe way? How do I make sure my team understands the services we want them to consume so that we can support it?
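As one hypothetical example of that consumption skill, the sketch below pulls a month of per-service spend from the AWS Cost Explorer API and flags anything over a team budget ceiling. The dates and the threshold are illustrative; a real program would alert per team, using tags or separate accounts.

```python
import boto3

ce = boto3.client("ce")
BUDGET_PER_SERVICE = 5_000.0  # hypothetical monthly ceiling, in USD

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-05-01", "End": "2019-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > BUDGET_PER_SERVICE:
        print(f"OVER BUDGET: {service} spent ${amount:,.2f}")
```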

It was probably 10 or 12 basic use domains. The teams simply didn’t understand how to consume the services. So we helped this organization build a training program to bring up the skills of these 4,000 to 5,000 people.

Now think about that. That has to happen in every global Fortune 2000 company, where you may have a central team of only 100, and maybe 50 cloud people. But they may need to turn over the services to 1,000 people.

We have a massive, massive, training, up-skilling, and enablement process that has to happen over the next several years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

The Open Group panel explores ways to help smart cities initiatives overcome public sector obstacles

The next BriefingsDirect thought leadership panel discussion focuses on how The Open Group is spearheading ways to make smart cities initiatives more effective.

Many of the latest technologies -- such as Internet of Things (IoT) platforms, big data analytics, and cloud computing -- are making data-driven and efficiency-focused digital transformation more powerful. But cities and urban government agencies face unique obstacles in exploiting these advances to improve municipal services. Challenges range from a lack of common data sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public sector budgets.

The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.

The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?

Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens -- and money is just a means to achieving that end.

This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.

Bottom-up blueprint approach

Brancato: Within big companies we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. This CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to affect strategy. And what an organization will do is then top-down. 

In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.

There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.

Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.

This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help. 

Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minster. This overall governance hierarchy needs to be factored when smart city projects are undertaken. 

These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.

Sunderland: I agree with everything that’s been said so far. In the particular case of New York City -- and with a lot of cities in the US -- cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set. 

The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive. 

Create demand to drive demand

Driving greater continuity can nonetheless come by creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy which was, you should implement change that can’t be undone. 

If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need. 

In like fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract -- without somehow having demand developed by the user base -- is pretty difficult.

By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector. 

The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand. 

Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications? 

Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.

This is a challenge for any Enterprise Architecture (EA) because you can’t work from the top-down, you can’t specify your architecture roadmap. You have to pick the ways that it’s convenient to do a project that fit into your larger picture, and so on. 

It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.

Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives? 

Framework for a common foundation 

Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously -- and I suspect that this is actually a problem that cities share with corporate enterprises. 

Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains -- and they need to share data. What’s needed is some kind of framework to allow data sharing to happen. 

This framework has to be at two levels. It has to be at a policy level -- and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.
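To make the technical level tangible, here is a minimal sketch of one piece of such a framework: a shared record contract that independent governance domains validate against before publishing data to one another. The field set is hypothetical; the point is that the contract, not any one platform, is what gets standardized.

```python
# Each publishing agency runs the same validation before sharing data.
REQUIRED_FIELDS = {
    "record_id": str,
    "agency": str,
    "collected_at": str,  # ISO 8601 timestamp
    "payload": dict,
}

def validate(record: dict) -> list:
    """Return a list of violations; an empty list means the record conforms."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field} should be {expected.__name__}")
    return problems

# A conforming record from a hypothetical transport agency validates cleanly.
print(validate({
    "record_id": "r-001",
    "agency": "transport",
    "collected_at": "2019-06-01T12:00:00Z",
    "payload": {"sensor": "loop-17", "count": 42},
}))  # -> []
```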

Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things? 

Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Anders mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.

One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.

Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.

The fundamental principle here for the citizens of the city is that there is no wrong door; he or she can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority -- not a specific agency or department of the city.

Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services. 

Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?

Public- and private-sector parity

Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?

This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to exercise their displeasure by their vote, or by moving to some other place, and is then no longer working or living there. 

Traceability is also important. If I use some service, it’s then traceable to some city strategy, it’s traceable to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.

So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.

Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public -- and if we know a common data framework is important -- why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?

Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.

In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn’t exist [in the public sector]. In New York City, you have 100 different entities, 50 to 60 of which are agencies providing services. They have built vast legacy IT systems that don’t interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.
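For a flavor of what the Open Data Law enables in practice, any program can pull published city data over a plain HTTP API. The sketch below follows the Socrata convention used by NYC Open Data; the dataset endpoint and field names are shown as an assumed example rather than a guaranteed interface.

```python
import requests

# Assumed endpoint for the 311 service-request dataset on NYC Open Data.
URL = "https://data.cityofnewyork.us/resource/erm2-nwe9.json"

rows = requests.get(URL, params={"$limit": 5}).json()
for row in rows:
    print(row.get("complaint_type"), "-", row.get("borough"))
```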

The idea of adopting standards and frameworks is one approach. The idea is you will then grow from there. The idea of creating a law that tries to implement uniformity -- like an Amazon or Facebook can -- would be doomed to failure, because nobody could actually afford to implement it.

Since you can’t do top-down solutions -- even if you pass a law -- the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up -- as opposed to the top-down.

Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?

Not something to legislate

Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful. 

The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”

That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management. 

It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.

Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?

Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.

Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure (OPEX) budget and also make capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that can be an added challenge?

How to pay for IT?

Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us -- and the vendors -- into a world of interoperability and a common data approach.

Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.

Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector. 

Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.

What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud, because capital expenditures disappear when you buy services instead of licenses and the data center servers that you procure and own.

This creates tension between the new cloud architectures, where most modern data architectures are moving, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.

The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.

There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer, beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of taxes on the people, and that tax money is extremely hard to come by and contentious.

So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.

Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities simply go with the cloud entirely?

Cities in the cloud, but services grounded

Saha: Across the world, many governments -- not just local governments but even state and central governments -- are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.

It could be a public/private partnership model in which the city agency collaborates with a private party that provides part of the service or process. The private party is then funded, or allowed to raise money, only for the part of the service it provides.

Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.

Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?

Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are a lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding it.

One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.

There is still a perception that the cloud is dangerous or unsafe. Another view is that the cloud is actually safer, in terms of having resilient solutions and keeping data protected.

This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.

Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.

Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or, looking at DoDAF in the defense sector, when they had similar problems, are there solutions there to learn from?

Smart city success strategies

Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities. 

One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling change management across the whole ecosystem.

At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.

Given that we are looking at a bottom-up, or middle-out, approach, we need to identify, within the Government Enterprise Architecture Work Group, the use cases that are most relevant and successful for smart cities. But this thinking will also evolve as the work group develops a reference architecture under a framework.

Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?

Collective, crystal-clear standards 

Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing an understanding of complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.

Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.

Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?

Brancato: Tactically, we have a data exchange model, the Open Data Element Framework (O-DEF), that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general -- specifically with models like DoDAF, MODAF, and TOGAF.


We have a really nice collection of patterns that recognize that data is the mechanism tying it all together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.

The notion of data catalogs, which are the children of these service catalogs, provides proof of the activities of human systems, machines, and sensors in fulfilling their capabilities, and is traceable up to the strategy.

I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.
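To illustrate the traceability Brancato describes, here is a small, hypothetical Python sketch. The class names and fields are illustrative assumptions rather than an O-DEF or TOGAF artifact, but they show how a data element can be traced through a service and capability up to a strategy, and assigned back to a role:

    # Hypothetical sketch of the traceability chain described above:
    # data element -> service catalog entry -> capability -> strategy,
    # with each data element assigned to a role. All names are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataElement:
        name: str
        owner_role: str  # the citizen or city role this data is assigned to

    @dataclass
    class ServiceEntry:
        name: str
        capability: str   # the capability this service fulfills
        strategy: str     # the city strategy the capability traces up to
        data_elements: List[DataElement] = field(default_factory=list)

    pothole_service = ServiceEntry(
        name="Report a pothole",
        capability="Street maintenance",
        strategy="Safe, well-maintained streets",
        data_elements=[DataElement("report_location", owner_role="citizen")],
    )

    # Walking the chain shows which strategy each data element serves and whose it is.
    for element in pothole_service.data_elements:
        print(element.name, "(", element.owner_role, ") ->",
              pothole_service.capability, "->", pothole_service.strategy)

Walking the chain in either direction answers the two questions raised here: which strategy a piece of data ultimately serves, and which role it belongs to.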

Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?

Use-case consumer value

Sunderland: I like the idea of reference architectures attached to use cases because, for better or worse, when folks engage around these issues -- even in large entities like New York City -- they are going to be engaging for specific needs.

Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures; they can look for cases similar to the ones they are attempting to resolve. It’s a good, consumer-friendly way to provide value from the work you are doing.

Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.

When you improve interoperability and the standardization of data frameworks, what success factors emerge to help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activity?

Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad hoc basis. One important way to get standardization enforced -- or at least considered for new implementations -- is to integrate the effort as a necessary step in the established procurement and security governance processes.

Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.

This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.
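What such a gate could look like in practice is sketched below in Python; the required standard names and the check itself are illustrative assumptions, not an existing city process:

    # Hypothetical sketch: a standards gate inside procurement governance.
    # A vendor proposal declares the standards it supports; the review step
    # blocks funding until every required standard is covered.
    REQUIRED_STANDARDS = {"O-DEF", "city-reference-architecture"}  # illustrative names

    def passes_standards_gate(declared):
        """Return True only if every required standard is declared."""
        return REQUIRED_STANDARDS.issubset(declared)

    vendor_proposal = {"O-DEF"}  # as declared in the RFP response
    if not passes_standards_gate(vendor_proposal):
        missing = REQUIRED_STANDARDS - vendor_proposal
        print("Blocked at procurement: missing", ", ".join(sorted(missing)))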

Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?

Prioritize policy and people 

Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.

The development of policy around data sharing should come next -- specifically around personal data sharing, because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something cities are particularly concerned with because they hold a lot of data about their citizens.

Gardner: Dr. Saha, same question to you. 

Saha: I look at it in two ways. One is for cities to adopt smart city approaches by identifying very-high-demand use cases that pertain to the environment, mobility, the economy, or health -- or whatever the priority is for that city.

Identifying such high-demand use cases is important because the impact is directly seen by the people. The benefits of having a smarter city need to be visible to the people using those services; that is number one.

The other part, which we have not spoken about, is that we are assuming the city already exists and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities, and these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.

It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.

Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?

City smarts and safety first 

Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.

In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public, and there’s a lot of reuse involved. It enables the city to reuse many of its capabilities and demonstrate that it can deliver value to average citizens.

In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all the cultural things they can be involved with. That comes from education, from safety, and from the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest that the traceability of personal data patterns has a connection to a safe city.

Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we run trace to citizen value and to business value. Such traceability, and a model, link those initiatives and strategies through to the service -- all the way down to the data -- so that eventually data can be tied back to roles.

For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. This extends to the notion that capabilities in the city are fulfilled by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traced back to these sensors.

Gardner: Don Sunderland, what have you seen that works, and what should we be doing more of?

Mobile-app appeal

Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it; it’s already there on many levels. A kind of guerrilla tactic would be to tap into that demand by creating location-aware applications -- mobile apps -- that are freely available to citizens.

The apps can use existing data rather than trying to solve all the data-sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are -- whether it comes from within the city or without. They then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.

When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away; everybody wanted to add their services to 311. This kind of guerrilla approach -- a location-aware app made available to citizens -- is a way to drive demand from even more people.
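As a sketch of how little such an app needs to get started: New York’s 311 service requests are already published on the NYC Open Data portal, whose Socrata SODA API supports radius queries. The Python below assumes that public dataset (erm2-nwe9) and uses illustrative coordinates; field names should be verified against the live dataset:

    # Sketch: fetch recent 311 service requests near a given point from
    # NYC Open Data (dataset erm2-nwe9) via the Socrata SODA API.
    # Coordinates are illustrative; verify dataset and field names before use.
    import requests

    url = "https://data.cityofnewyork.us/resource/erm2-nwe9.json"
    params = {
        "$where": "within_circle(location, 40.7484, -73.9857, 500)",  # 500 m radius
        "$limit": "5",
    }
    for item in requests.get(url, params=params).json():
        print(item.get("complaint_type"), "-", item.get("descriptor"))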

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.
