Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 

How UK data solutions developer Systems Mechanics uses HP Vertica for BI, streaming and data analysis

Posted By Dana L Gardner, 12 hours ago

Three years ago, Systems Mechanics Limited used relational databases to assemble and analyze some 20 different data sources in near real-time. But most relational database appliances rely on 1980s-era technical approaches, and the ability to connect more data sources and manage more events hit a ceiling. The runway for the company's business expansion had simply ended.

So Systems Mechanics looked for a platform that scales well and provides real-time data analysis, too. At the volumes and price they needed, HP Vertica has since scaled without limit ... an endless runway.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how Systems Mechanics improved the way its products deliver business intelligence (BI), streaming analytics, and data analysis, BriefingsDirect spoke with Andy Stubley, Vice President of Sales and Marketing at Systems Mechanics, based in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner:  You've been doing a lot with data analysis at Systems Mechanics, and monetizing that in some very compelling ways.

Stubley: Yes, indeed. Systems Mechanics is principally a consultancy and a software developer. We've been working in the telco space for the last 10 to 15 years. We also have a history in retail and financial services.

The focus we've had recently and the products we’ve developed into our Zen family are based on big data, particularly in telcos, as they evolve from principally old analog conversations into devices where people have smartphone applications -- and data becomes ever more important.

All that data and all those people connected to the network cause a lot more events that need to be managed, and that data is both a cost to the business and an opportunity to optimize the business. So we have a cost reduction we apply and a revenue upside we apply as well.

Quick example

Gardner: What’s a typical way telcos use Zen, and that analysis?

Stubley: Let's take a scenario where you're looking at a network and you can't make a phone call. Two major systems are catching that information. One is a fault-management system that's telling you there is a fault on the network, and it reports that back to the telco itself.

The second one is the performance-management system. That doesn't flag faults as such, but it tells you when things like thresholds are being breached, which may have an impact on performance. Either of those can have an impact on your customer, and from a customer's perspective, you might also be having a problem with the network that isn't reported by either system.

We're finding that social media is getting a bigger play in this space. Why is that? Particularly among the younger populations on consumer-based telcos -- mobile telcos especially -- if they can't get a signal or they can't make a phone call, they get onto social media and they trash the brand.

They’re making noise. A trend is combining fault management and performance management, which are logical partners with social media. All of a sudden, rather than having a couple of systems, you have three.

In our world, we can put 25 or 30 different data sources onto a single Zen platform. In fact, there is no theoretical limit to the number we could handle, but 20 to 30 is quite typical now. That enables us to manage all the different network elements and different types of mobile technologies -- LTE, 3G, and 2G. It could be Ericsson, Nokia, Huawei, ZTE, or Alcatel-Lucent. There is an amazing range of equipment, all currently managed through separate entities. We're offering a platform to pull it all together in one unit.

The other way I tend to look at it is that we're trying to get the telco to work the way you might view a human. Humans are the best decision-making platforms in the world, and we can probably still claim that. As humans, we have conscious and unconscious processes running. We don't think about breathing or pumping our blood around our system, but it's happening all the time.

We use a solution with visualization, because in the world of big data, you can’t understand data in numbers.

We have senses that are pulling in massive amounts of information from the outside world. You're listening to me now. You're probably doing a bunch of other things while you're tapping away on a table as well. Your senses are gathering information -- you're seeing, hearing, feeling, touching, and tasting.

Those all contain information that's coming into the body, but most of the activity is subconscious. In the world of big data, that is the Zen goal, and what we're delivering in a number of places: making as many actions as possible in a telco environment, as in a network environment, happen in that automatic, subconscious state.

Suppose I have a problem on a network. I relate it back to the people who need to know, but I don't require human intervention. We're looking at a position where the human intervention is about looking at patterns in that information to decide what they can do intellectually to make the business better.

That probably speaks to another point here. We use a solution with visualization, because in the world of big data, you can't understand data in numbers. Your human brain isn't capable of processing enough, but it is capable of identifying patterns in pictures, and that's where we go with our visualization technology.

Gather and use data

We have a customer that is one of the largest telcos in EMEA. They're taking in 90,000 alarms a day from the network -- across their subsidiary companies -- all into one environment. And 90,000 alarms needing manual intervention is a very big number.

Using the Zen technology, we’ve been able to reduce that to 10,000 alarms. We’ve effectively taken 90 percent of the manual processing out of that environment. Now, 10,000 is still a lot of alarms to deal with, but it’s a lot less frightening than 90,000, and that’s a real impact in human terms.

Gardner: Now that we understand what you do, let’s get into how you do it. What’s beneath the covers in your Zen system that allows you to confidently say you can take any volume of data you want?

If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.

Stubley: Fundamentally, that comes down to the architecture we built for Zen. The first element is our data-integration layer. We have a technology that we developed over the last 10 years specifically to capture data in telco networks. It’s real-time and rugged and it can deal with any volume. That enables us to take anything from the network and push it into our real-time database, which is HP’s Vertica solution, part of the HP HAVEn family.

Vertica's role is basically to record any amount of data in real time and scale automatically on the HP hardware platform we also use. If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.

We have two processing layers. Referring to our earlier discussion about conscious and subconscious activity, our conscious activity is visualizing that data, and that’s done with Tableau.

We have a number of Tableau reports and dashboards with each of our product solutions. That enables us to see what's happening and allows the organization -- the guys running the network and the guys looking at different elements in the data -- to make their own decisions and identify what they might do.

We also have a streaming analytics engine that listens to the data as it comes into the system before it goes to Vertica. If we spot the patterns we’ve identified earlier “subconsciously,” we’ll then act on that data, which may be reducing an alarm count. It may be "actioning" something.

It may be sending someone an email. It may be creating a trouble ticket in a different system. Those all happen transparently and automatically. Put simply, the solution is four layers: data capture, data integration, visualization, and automatic analytics.
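To make that "subconscious" layer concrete, here is a minimal sketch in Python of the kind of logic a streaming-analytics stage might apply before events reach the database: suppress duplicate alarms within a short window and trigger an automatic action for a known pattern. The field names, the window, and the ticketing stub are illustrative assumptions, not Zen's actual design.

from collections import defaultdict
from datetime import datetime, timedelta

DEDUP_WINDOW = timedelta(minutes=5)
last_seen = defaultdict(lambda: datetime.min)  # (device, alarm_code) -> time last seen

def create_trouble_ticket(event):
    # Stand-in for an integration with an external ticketing system.
    print("ticket raised for %s: %s" % (event["device"], event["alarm_code"]))

def handle_event(event, now=None):
    """Return the event if it should be stored, or None if it is suppressed."""
    now = now or datetime.utcnow()
    key = (event["device"], event["alarm_code"])
    if now - last_seen[key] < DEDUP_WINDOW:
        return None  # repeat of a recent alarm -- handled "subconsciously"
    last_seen[key] = now
    if event.get("severity") == "critical":
        create_trouble_ticket(event)  # automatic action, no human intervention
    return event  # pass through for loading into the real-time database

events = [
    {"device": "rnc-01", "alarm_code": "LINK_DOWN", "severity": "critical"},
    {"device": "rnc-01", "alarm_code": "LINK_DOWN", "severity": "critical"},
    {"device": "enb-22", "alarm_code": "THRESHOLD", "severity": "minor"},
]
kept = [e for e in events if handle_event(e)]
print("%d of %d events kept for storage and visualization" % (len(kept), len(events)))

In the real platform this stage sits between the data-integration layer and Vertica; the point is simply that routine patterns get handled automatically, and only the residue needs a human.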

Developing high value

Gardner: And when you have the confidence to scale your underlying architecture and infrastructure, and when you're able to visualize and deliver high value to a vertical industry like telco, that allows you to expand into more lines of business -- more products and services -- and also into more verticals. Where have you taken this in terms of the Zen family, and where do you take it now in terms of your market opportunity?

Stubley: We focus on mobile telcos. That's our heritage. We can take any data source from a telco, but we can actually take any data source from anywhere, on any platform and in any company. That ranges from binary to HTML. You name it -- if you've got data, we can load it.

That means we can build our processing accordingly. What we do is package what we call solution packs. A solution pack is a connector to the outside world -- to the network -- and it grabs the data. We've got an element of data modeling there, so we can load the data into Vertica. Then we have pre-built reports in Tableau that allow us to interrogate the data automatically. That's at a component level.
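As an illustration only, a solution pack can be thought of as a small bundle of declarative configuration plus prebuilt reports. The sketch below is a hypothetical Python structure; the names, formats, and report titles are invented for the example and are not Systems Mechanics' actual packs.

# Hypothetical shape of a "solution pack": a connector definition, a target
# table for the real-time database, and the prebuilt reports that ship with it.
ericsson_fault_pack = {
    "connector": {
        "protocol": "sftp",              # how the raw files are collected
        "source_format": "3gpp-xml",     # vendor export format being parsed
        "poll_interval_seconds": 60,
    },
    "data_model": {
        "target_table": "fault.alarms",  # table the loader writes to
        "columns": ["event_time", "device", "alarm_code", "severity", "region"],
    },
    "reports": [                         # prebuilt dashboards for the visualization layer
        "Alarm volume by region",
        "Top 10 noisy devices",
        "Mean time to clear by vendor",
    ],
}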

Once you go to a number of components, we can then look horizontally across those different items and look at how their behaviors interact with each other. In pure telco terms, we would be looking at different network devices and the end-to-end performance of the network, but the same would apply to a fraud scenario, or to someone who is running cable TV.

The very highest level is finding what problem you’re going to solve and then using the data to solve it.

So multi-play players are interesting, because they want to monitor what's happening with TV as well, and that fits into exactly the same category. Realistically, anybody with high-volume, real-time data can benefit from Vertica.

Another interesting play in this scenario is social gaming and online advertising. They all have similar data characteristics: very high-volume, fixed data that needs to be analyzed and processed automatically.

Why Vertica?

Gardner: How long have you been using Vertica, and what is it that drove you to using it vis-à-vis alternatives?

Stubley: As far as the Zen family goes, we have used other technologies in the past -- other relational databases -- but we've used Vertica now for more than two-and-a-half years. We were looking for a platform that could scale and give us real-time data. At the volumes we were looking at, nothing could compete with Vertica at a sensible price. You can build almost any solution with enough money, but we haven't got too many customers who are prepared to make that investment.

So Vertica fits in with the technology of the 21st century. A lot of relational database appliances are using 1980s thought processes. What's happened with processing in the last few years is that nobody shares memory anymore, and our environment requires a solution that doesn't share memory. Vertica has been built on that basis. It scales without limit.

One of the areas we're looking at, which I mentioned earlier, is social media. Social media is a very natural play for Hadoop, and Hadoop is clearly a very cost-effective platform for vast volumes of data with real-time data loading, but it's very slow to analyze.

So the combination of a high-volume, low-cost platform for the bulk of the data and a very high-performing, real-time analytics engine is very compelling. The challenge is going to be moving the data between the two environments. That isn't going to go away. It's not simple, and there are a number of approaches; HP Vertica is taking some.

There is Flex Zone, and there are any number of other players in that space. The reality is that you probably reach an environment where people are parallel loading into both Hadoop and Vertica. That's what we plan to do. It gives you much more resilience. For a lot of the data we're putting into our system, we're actually planning to put the raw data files into Hadoop, so we can reload them as necessary and improve the resilience of the overall system, too.
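For readers who want to picture what parallel loading looks like in practice, here is a minimal Python sketch under stated assumptions: the Hadoop command-line tools are on the PATH, the open-source vertica-python client is installed, and the table, schema, and path names are invented for the example. It is not Systems Mechanics' loader, just the general shape of archiving raw files to HDFS while streaming the same files into Vertica.

import subprocess
import vertica_python

VERTICA_DSN = {
    "host": "vertica.example.internal",  # illustrative connection details
    "port": 5433,
    "user": "loader",
    "password": "secret",
    "database": "zen",
}

def load_raw_file(local_path):
    # 1) Archive the raw file in Hadoop so it can be replayed later if needed.
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, "/raw/events/"], check=True)

    # 2) Stream the same file into the real-time analytics store.
    with vertica_python.connect(**VERTICA_DSN) as conn:
        cur = conn.cursor()
        with open(local_path, "rb") as fh:
            cur.copy("COPY telco.events FROM STDIN DELIMITER ',' ABORT ON ERROR", fh)
        conn.commit()

# Example usage: load_raw_file("/var/spool/zen/events_20140715.csv")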

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Andy Stubley  big data  BriefingsDirect  Dana Gardner  data analysis  data analytics  HAVEn  HP  HP Vertica  HPDiscover  Interarbor Solutions  System Mechanics  telco 


Health data deluge requires secure information flow via standards, says The Open Group’s new healthcare director

Posted By Dana L Gardner, Tuesday, July 15, 2014

An expected deluge of data and information about patients, providers, outcomes, and needed efficiencies is pushing the healthcare industry toward rapid change. But dealing with the sheer volume of data is only part of what's required. Interoperability, security, and the ability to adapt rapidly to the lessons in the data are all essential.

The means of enabling Boundaryless Information Flow, Open Platform 3.0 adaptation, and security for the healthcare industry are then, not surprisingly, headline topics for The Open Group’s upcoming event, Enabling Boundaryless Information Flow on July 21 and 22 in Boston.

And Boston is a hotbed of innovation and adaptation for how technology, enterprise architecture, and open standards can improve communication and collaboration among healthcare ecosystem players.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

In preparation for the conference, BriefingsDirect had the opportunity to interview Jason Lee, the new Healthcare and Security Forums Director at The Open Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I'm looking forward to the Boston conference next week and want to remind our listeners and readers that it's not too late to sign up to attend. You can learn more at www.opengroup.org.

Let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare perhaps is the killer application for Boundaryless Information Flow.

Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It's upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.

We don't necessarily intend to be inefficient, but there are so many places and people involved in healthcare that it's very difficult to get them to speak the same language. It's almost as if you're in a large house with lots of different rooms, and in every room you walk into they speak a different language. To get information to flow from one room to the other requires some active effort, and that's what we're undertaking here at The Open Group.

Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What are the hurdles? What prevents a nice, seamless, easy flow and collaboration in information that creates better outcomes? What’s the holdup?

Many barriers

Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into a medical office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.

When there's been movement toward digitizing that information, not everyone has used the same system. It's almost like trains running on different gauge track. Obviously if the track going east to west is a different gauge than going north to south, then trains aren’t going to be able to travel on those same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.

Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we're also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?

Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period of time, you really do have a data deluge. Figuring out how to find your way through all the data and look at the most relevant [information] for the patient is a great challenge.

Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.

We, like others, put a great deal of effort into describing the problems, but our real work is figuring out how to bring IT technologies to bear on business problems.

So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it's perhaps the future. The solution is somewhere in there, too.

Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but our real work is figuring out how to bring IT technologies to bear on business problems, how to encourage different parts of organizations -- and different organizations -- to speak the same language, and how to operate using common standards. That's really what we're all about.

And it is, in a large sense, part of the process of helping to bring healthcare into the 21st Century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets -- big data, some people refer to it as. I'm talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them -- and healthcare can do the same. We're just not quite at the same level of evolution.

Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is such a much larger segment. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.

Quality side

Lee: Absolutely true. There is the cost side of things, but there is also the quality side. There are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that can not just reduce cost but improve quality at the same time.

Gardner: I'd like to get into a few of the hotter trends. But before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of its conference in Boston, July 22, to healthcare.

Maybe you could provide us a brief overview of what participants -- and those who view the sessions online or as recordings at http://new.livestream.com/opengroup -- should expect. What's going to go on July 22?

Lee: We have a packed day. We're very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.”

It will become an area where standards development and The Open Group can be very helpful.

Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority.  As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness. 

A great deal of potentially useful health data will be generated. How this information can be used -- not just by consumers but also by the healthcare establishment that takes care of them as patients -- will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.

Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act -- the health insurance exchanges -- which are designed to bring health insurance to tens of millions of people who previously did not have access to it.

He is going to talk about how enterprise architecture -- which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa -- has helped the State of Colorado develop their health insurance exchange.

After the plenaries, we will break up into three tracks, one of which is healthcare-focused. In this track there will be three presentations, all of which discuss how enterprise architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.

Care delivery

One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using enterprise architecture, focusing on one of our platinum members, Oracle, and a company called Intelligent Medical Objects, and how they're working together in a productive way, bringing IT and healthcare decision-making together.

Then, the final presentation in this track will focus on the development of an enterprise architecture-based solution at an insurance company. The payers, or the insurers -- the big companies that are responsible for paying bills and collecting premiums -- have a very important role in the healthcare system that extends beyond administration of benefits. Yet, payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

With the increase in payer data brought on in large part by the adoption of a new coding system -- ICD-10 -- which will come online this year, there will be a huge amount of additional data, including clinical data, that becomes available. At The Open Group, we consider payers -- health insurance companies (some of which are integrated with providers) -- to be very important stakeholders in the big picture.

In the afternoon, we're going to switch gears a bit and have a speaker talk about the challenges, the barriers, the "pain points" in introducing new technology into healthcare systems. The focus will return to remote and mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors' offices and into patients' records, electronic health records, and hospitals' data-keeping and data-sharing systems.

Payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.

We'll have a panel of experts that responds to these pain points, these challenges, and then we'll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we're very much looking forward to the afternoon as well.

Gardner: I'd also like to remind our readers and listeners that they can take part in this by attending the conference, and there is information about that at the opengroup.org website.

It's really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to develop from themselves through their lifestyle, new devices that are connected.

We hear from folks like Apple, Samsung, Google, and Microsoft. They're all pulling together information and making it easier for people to not only monitor their exercise, but their diet, and maybe even start to use sensors to keep track of blood sugar levels, for example.

In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase in other applications in general. So there's an 87 percent faster uptick in the use of health and fitness applications.

Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game changer in terms of how people react to their own data -- and then bring more data into the interactions they have with healthcare providers?

Challenge to predict

Lee: It's always a challenge to predict what the market is going to do, but I think that's a remarkable statistic that you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game changer. This view also reflects how the Healthcare Forum members -- which include Capgemini, Philips, IBM, Oracle, and HP -- view the future.

The commercial demand for mobile medical devices -- things that can be worn, embedded, or even swallowed, as in pills, as you mentioned -- keeps growing. The software and applications that will be developed for use with those devices are going to grow by leaps and bounds.

As you say, there are big players getting involved. Already some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in The New Yorker.

What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now.

There will be a question, of course, as to the reliability and validity of the information -- to your point about garbage in, garbage out -- but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety, along with other organizations, in determining which devices are reliable.

The Open Group is working in the area of mobile data and information systems that are developed around them, and their ability to (a) talk to one another, and (b) talk to the data devices/infrastructure used in doctors’ offices and in hospitals. This is called interoperability and it's certainly lacking in the country.

There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?

After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood-sugar level let's say, throughout the day. And this is really taking healthcare outside of the four walls of the clinic and bringing information to bear that can be very, very useful to clinicians and beneficial to patients.

In short, the rapid market dynamic in mobile medical devices and in the software and hardware that facilitates interoperability begs for standards-based solutions that reduce costs and improve quality, and all of which puts the patient at the center. This is The Open Group’s Healthcare Forum’s sweet spot.
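To ground that, here is a minimal sketch of what sending a device reading to a clinical system in a standard form can look like. The interview doesn't name a specific standard; an HL7 FHIR Observation is used here purely as an example, and the endpoint URL, patient identifier, and LOINC code are illustrative assumptions.

import requests

def post_glucose_reading(mg_dl, patient_id, fhir_base):
    # Build a FHIR Observation resource for a blood-glucose measurement.
    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "2339-0",  # LOINC code for blood glucose (illustrative choice)
                "display": "Glucose [Mass/volume] in Blood",
            }]
        },
        "subject": {"reference": "Patient/%s" % patient_id},
        "valueQuantity": {"value": mg_dl, "unit": "mg/dL"},
    }
    # POST the reading to a FHIR server so it can reach the clinician's record system.
    resp = requests.post("%s/Observation" % fhir_base, json=observation, timeout=10)
    return resp.status_code

# Example usage: post_glucose_reading(104.0, "example-patient", "https://fhir.example.org/r4")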

Game changer

Gardner: It seems to me a real potential game changer as well, and something in which Boundaryless Information Flow and standards will play an essential role. Because one of the big question marks with many of the ailments in a modern society has to do with lifestyle and behavior.

So often, the providers of the care only really have the patient’s responses to questions, but imagine having a trove of data at their disposal, a 360-degree view of the patient to then further the cause of understanding what's really going on, on a day-to-day basis.

But then, it's also about having a two-way street -- being able to deliver, perhaps in an automated fashion, reinforcements, incentives, and information back to the patient in real time about behavior and lifestyle. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.

Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?

Lee: As more mobile medical devices come to market, we'll find that consumers own multiple types of devices, at least some of which collect multiple types of data. So even for the patient at the center of their own healthcare information collection, there can be barriers to having one device talk to another. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.

There are issues, around security in particular, where healthcare will be at the leading edge.

So the interoperability issue, the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented becomes an issue, not just between patients and their providers, but for individual consumers as well.

Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It's going to be an interesting time.

Of course, we've also looked at many other industries and tried to have a cloud synergy -- a cloud-of-clouds approach -- to data and to transactions. So it's interesting how what's going on across multiple industries is common, but it strikes me that, again, the scale and the impact of the healthcare industry make it a leader now, and perhaps a driver for some of this long-overdue structuring and standardization activity.

Lee: It could become a leader. There is no question about it. Moreover, there is a lot healthcare can learn from other companies, from mistakes that other companies have made, from lessons they have learned, from best practices they have developed (both on the content and process side). And there are issues, around security in particular, where healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.

There's a great future ahead here. It's not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Tags:  BriefingsDirect  Dana Gardner  enterprise architecture  healthcare  Interarbor Solutions  Jason Lee  The Open Group  The Open Group Conference 


HP network management heightens performance while reducing total costs for Nordic telco TDC

Posted By Dana L Gardner, Monday, July 14, 2014

When Nordic communications services provider TDC needed infrastructure improvements across its disparate networks in several Nordic countries, it needed both simplicity in execution and agility in performance.

Our next innovation case study interview therefore highlights how TDC, in Stockholm, found ways to better determine the root causes of network disruptions and to conduct deep inspection of traffic to best manage its service-level agreements (SLAs).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect had an opportunity to learn first-hand how more than 50,000 devices can be monitored and managed across a state-of-the-art network when we interviewed Lars Niklasson, Senior Consultant at TDC. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: You have a number of main businesses in your organization. There’s TDC Solutions and mobile. There’s even television and some other hosting. Explain for us how large your organization is.

Niklasson: TDC is an operator in the Nordic region, where we have a network covering Norway, Sweden, Finland, and Denmark. In Sweden, we're also an integrator and have quite a big consulting role. In Sweden we're around 800 people, and the whole TDC group is almost 10,000 people.

Gardner: So it’s obviously a very significant network to support this business and deliver the telecommunication services. Maybe you could define your network for us.

Niklasson: It's quite big, over 50,000 devices, and everything is monitored of course. It’s a state-of-the-art network.

Gardner: When you have so many devices to track, so many types of layers of activity and levels of network operations, how do you approach keeping track of that and making sure that you’re not only performing well, but performing efficiently?

Niklasson: Many years ago, we implemented HP Network Node Manager (NNM) and we have several network operating centers in all countries using NNM. When HP released different smart plug-ins, we started to implement those too for the different areas that they support, such as quality assurance, traffic, and so on.

Gardner: So you’ve been using HP for your network management and HP Network Management Center for some time, and it has of course evolved over the years. What are some of the chief attributes that you like or requirements that you have for network operations, and why has the HP product been so strong for you?

Quick and easy

Niklasson: One thing is that it has to be quick and easy to manage. We have lots of changes all the time, especially in Sweden, when a new customer comes on. And in Sweden, we're also monitoring end customers' networks.

It's also very important to be able to integrate it with the other systems that we have. So we can, for example, tell which service-level agreement (SLA) a particular device has and things like that. NNM makes this quite efficient.

Gardner: One of the things that I’ve heard people struggle with is the amount of data that’s generated from networks that then they need to be able to sift through and discover anomalies. Is there something about visualization or other ways of digesting so much data that appeals to you?

Niklasson: NNM is quite good at finding the root cause. You don’t get very many incidents when something happens. If I look back at other products and older versions, there were lots and lots of incidents and alarms. Now, I find it quite easy to manage and configure NNM so it's monitoring the correct things and listening to the correct traps and so on.
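To illustrate why filtering to the right traps and suppressing symptoms of a known root cause cuts incident noise so sharply, here is a toy Python sketch. It is not how NNM works internally; the OIDs are the standard SNMP linkDown/linkUp trap identifiers, and the topology and device names are made up.

# Traps we actually care about (standard SNMPv2 trap OIDs for link state).
INTERESTING_TRAP_OIDS = {
    "1.3.6.1.6.3.1.1.5.3",  # linkDown
    "1.3.6.1.6.3.1.1.5.4",  # linkUp
}

# Hypothetical topology: access device -> upstream core device it depends on.
UPSTREAM = {"switch-access-7": "router-core-1", "switch-access-8": "router-core-1"}

def incidents(traps, down_devices):
    """Keep traps worth raising, unless an upstream device is already down."""
    kept = []
    for trap in traps:
        if trap["oid"] not in INTERESTING_TRAP_OIDS:
            continue  # noise: not a trap type we listen to
        if UPSTREAM.get(trap["device"]) in down_devices:
            continue  # symptom of the real root cause, so suppress it
        kept.append(trap)
    return kept

traps = [
    {"device": "router-core-1", "oid": "1.3.6.1.6.3.1.1.5.3"},
    {"device": "switch-access-7", "oid": "1.3.6.1.6.3.1.1.5.3"},
    {"device": "switch-access-8", "oid": "1.3.6.1.6.3.1.1.5.3"},
]
print(incidents(traps, down_devices={"router-core-1"}))  # only the root cause remains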

Gardner: TDC uses network management capabilities and also sells them, providing them along with its telecom services. How has that played out in the field? Do any of your customers also manage their own networks, and how has this been for the consumers of your network services?

Niklasson: We’re also an HP partner in selling NNM to end customers. Part of my work is helping customers implement this in their own environment. Sometimes a customer doesn’t want to do that. They buy the service from us, and we monitor the network. It’s for different reasons. One could be security, and they don’t allow us to access the network remotely. They prefer to have it in-house, and I help them with these projects.

Now, I find it quite easy to manage and configure NNM so it's monitoring the correct things and listening to the correct traps.

Gardner: Lars, looking to the future, are there any particular types of technology improvements that you would like to see or have you heard about some of the roadmaps that HP has for the whole Network Management Center Suite? What interests you in terms of what's next?

Niklasson: I would say two things. One is application visibility in the network. We can have some of that with traffic data, but it's still NetFlow-based, so I'm interested in seeing more deep inspection of the traffic, and also more visibility into the virtual environments that we have.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Dana Gardner  HP  HPDiscover  Interarbor Solutions  Lars Niklasson  Network Management  Network node management  TDC 


Panel tackles how to make mobile devices as secure as they are indispensable

Posted By Dana L Gardner, Wednesday, July 09, 2014

As smartphones have become de rigueur in the global digital economy, users want them to do more work, and businesses want them to be more productive for their employees -- as well as powerful added channels to consumers.

But neither businesses nor mobile-service providers have a cross-domain architecture that supports all the new requirements for a secure digital economy -- one that allows safe commerce, data sharing, and user privacy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

So how do we blaze a better path to a secure mobile future? How do we make today’s ubiquitous mobile devices as low risk as they are indispensable?

BriefingsDirect recently posed these and other questions to a panel of experts on mobile security: Paul Madsen, Principal Technical Architect in the Office of the CTO at Ping Identity; Michael Barrett, President of the FIDO (Fast Identity Online) Alliance; and Mark Diodati, a Technical Director in the Office of the CTO at Ping Identity. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We're approaching the Cloud Identity Summit 2014 (CIS) in Monterey, Calif., on July 19, and we still find that the digital economy is not reaching its full potential. We're still dealing with ongoing challenges for trust, security, and governance across mobile devices and networks.

Even though people have been using mobile devices for decades—and in some markets around the world they're the primary tool for accessing the Internet—why are we still having problems? Why is this so difficult to solve?

Diodati: There are so many puzzle pieces to make the digital economy fully efficient. A couple of challenges come to mind. One is the distribution of identity. In prior years, the enterprise did a decent job -- not an amazing job, but a decent job -- of identifying users, authenticating them, and figuring out what they have access to.

Once you move out into a broader digital economy, you start talking about off-premises architectures and the expansion of user constituencies. There is a close relationship with your partners, employees, and your contractors. But relationships can be more distant, like with your customers.

Emerging threats

Additionally, there are issues with emerging security threats. In many cases, there are fraudsters with malware being very successful at taking people’s identities and stealing money from them.

Mobility can do a couple of things for us. In the old days, if you wanted more identity assurance to access important applications, you paid more in cost and usability problems. Specialized hardware was used to raise assurance. Now, the smartphone is really a portable biometric device that users carry without us asking them to do so. We can raise assurance levels without the draconian increase in cost and usability problems.

We’re not out of the woods yet. One of the challenges is nailing down the basic administrative processes to bind user identities to mobile devices. That challenge is part cultural and part technology. [See more on a new vision for identity.]

Gardner: So it seems that we have a larger set of variables -- end users who are not captive on a network -- whom we need to authenticate. As you mentioned, the mobile device, the smartphone, can be biometric and can be an even better authenticator than we've had in the past. Is there a transition now afoot such that we might actually come out better on the other end in a couple of years?

Madsen: The opportunities are clear. As Mark indicated, the phone -- not just because of its technical features, but because of the relatively tight binding that users feel with it -- makes a really strong authentication factor.

It's the old trope of something you have, something you know, and something you are. Phones are something you already have, from the user’s point of view. It’s not an additional hard token or hard USB token that we're asking employees to carry with them. It's something they want to carry, particularly if it's a BYOD phone.

So phones, because they're connected mobile computers, make a really strong second-factor authentication, and we're seeing that more and more. As I said, it’s one that users are happy using because of the relationship they already have with their phones, for all the other reasons. [See more on identity standards and APIs.]

Gardner: It certainly seems to make sense that you would authenticate into your work environment through your phone. You might authenticate in the airport to check in with your phone and you might use it for other sorts of commerce. It seems that we have the idea, but we need to get there somehow.

What’s architecturally missing for us to make this transition of the phone as the primary way in which people are identified session by session, place by place? Michael, any thoughts about that?

User experience

Barrett: There are a couple of things. One, in today's world, we don't yet have open standards that help drive cross-platform authentication, and we don't have the right architecture for that. Still today, if you're using a phone with a virtual keyboard, you're forced to type a dreadful, unreadable, tiny password on the keyboard -- and, by the way, you can't actually read what you just typed. That's a pretty miserable user experience, which we alluded to earlier.

But it's also very ugly. It's a mainframe-centric architecture. The notion that the authentication credentials are shared secrets that you know and that are stored on some central server is a very, very 1960s approach to the world. My own belief is that we have to move toward a much more device-centric authentication model, where the remote server doesn't actually know your authentication credentials. Again, that comes back to both architecture and standards.

My own view is that if we put those in place, the world will change. Many of us remember the happy days of the late '80s and early '90s when offices were getting wired up, and we had client-server applications everywhere. Then, HTML and HTTP came along, and the world changed. We're looking at the same kind of change, driven by the right set of appropriately designed open standards.

Gardner: So standards, behavior, and technology make for an interesting adoption path, sometimes a chicken and the egg relationship. Tell me about FIDO and perhaps any thoughts about how we make this transition and adoption happen sooner rather than later?

Barrett: I gave a little hint. FIDO is an open-standards organization really aiming to develop a set of technical standards that enable device-centric authentication that is easier for end users to use. As an ex-CTO, I can tell you what happens when you try to give users stronger authenticators that are harder for them to use: they won't voluntarily use them.

FIDO is an open-standards organization really aiming to develop a set of technical standards to enable device-centric authentication that is easier for end users to use.

We have to do better than we're doing today in terms of ease of use of authentication. We also have to come up with authentication that is stronger for the relying parties, because that's the other face of this particular coin. In today's world, passwords and PINs work very badly for end users. They actually work brilliantly for the criminals.

So I'm kind of old school on this. I tend to think that security controls should be there to make life better for relying parties and users and not for criminals. Unfortunately, in today’s world, they're kind of inverted.

So FIDO is simply an open-standards organization that is building and defining those classes of standards and, through our member companies, is promulgating deployment of those standards.

Madsen: I think FIDO is important. Beyond the fact that it's a standard is the pattern that it's normalizing. The pattern is one where the user logically authenticates to their phone, whether with a fingerprint or a PIN, but the authentication is local. Then, leveraging the phone's capabilities -- storage, crypto, connectivity, etc. -- the phone authenticates to the server. It's that pattern of a local authentication followed by a server authentication that I think we're going to see over and over.
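Here is a minimal Python sketch of that local-then-server pattern, assuming the widely used cryptography package. It shows the general shape of device-centric public-key authentication -- a locally unlocked key signs a server challenge -- and is not the FIDO (UAF/U2F) wire protocol itself; the PIN check is reduced to a boolean for brevity.

import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Registration: the key pair is generated on the device; only the public key
# goes to the server, so the server never holds a shared secret.
device_key = ed25519.Ed25519PrivateKey.generate()
server_registered_public_key = device_key.public_key()

def device_sign(challenge, local_user_verified):
    # Local step: a PIN or fingerprint check unlocks use of the key.
    if not local_user_verified:
        raise PermissionError("local user verification failed")
    # Remote step: prove possession of the key by signing the challenge.
    return device_key.sign(challenge)

# Authentication: the server issues a fresh challenge and verifies the signature.
challenge = os.urandom(32)
signature = device_sign(challenge, local_user_verified=True)
try:
    server_registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")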

Gardner: Thank you, Paul. It seems to me that most people are onboard with this. I know that, as a user, I'm happy to have the device authenticate. I think developers would love to have this authentication move to a context on a network or with other variables brought to bear. They can create whole new richer services when they have a context for participation. It seems to me the enterprises are onboard too. So there's a lot of potential momentum around this. What does it take now to move the needle forward? What should we expect to hear at CIS?

Moving forward

Diodati: There are two dimensions to moving the needle forward: avoiding the failures of prior mobile authentication systems, and ensuring that modern authentication systems support critical applications. Both are crucial to the success of any authentication system, including FIDO.

At CIS, we have an in-depth, three-hour FIDO workshop and many mobile authentication sessions. 

There are a couple of things that I like about FIDO. First, it can use the biometric capabilities of the device. Many smart phones have an accelerometer, a camera, and a microphone. We can get a really good initial authentication. Also, FIDO leverages public-key technology, which overcomes some of the concerns we have around other kinds of technologies, particularly one-time passwords. 

Madsen: To that last point, Mark, I think FIDO and SAML -- or more recent federation protocols -- complement each other wonderfully. FIDO is a great authentication technology, and federation historically has not addressed that. Federation didn't claim to answer that issue, but if you put the two together, you get a very strong initial authentication. Then you're able to broadcast that out to the applications that you want to access. And that's a strong combination.

Barrett: One of the things that we haven't really mentioned here -- and Paul just hinted at it -- is the relationship between single sign-on and authentication. When you talk to many organizations, they look at those as two sides of the same coin. The more ubiquity you can get, and the more applications you can sign the user on to with less interaction, the better.

Gardner: Before we go a little bit deeper into what’s coming up, let’s take another pause and look back. There have been some attempts to solve these problems. Many, I suppose, have been from a perspective of a particular vendor or a type of device or platform or, in an enterprise sense, using what they already know or have.

Proprietary technology is really great for many things, but there are certain domains that simply need a strong standards-based backplane.

We've had containerization and virtualization on the mobile tier. It is, in a sense, going back to the past where you go right to the server and very little is done on the device other than the connection. App wrapping would fall under that as well, I suppose. What have been the pros and cons and why isn’t containerization enough to solve this problem? Let’s start with Michael.

Barrett: If you look back historically, what we've tended to see are lot of attempts that are truly proprietary in nature. Again, my own philosophy on this is that proprietary technology is really great for many things, but there are certain domains that simply need a strong standards-based backplane.

There really hasn't been an attempt at this for some years. Pretty much, we have to go back to X.509 to see the last major standards-based push at solving authentication. But X.509 came with a whole bunch of baggage, as well as architectural assumptions around a very disconnected world view that is kind of antithetical to where we are today, where we have a very largely connected world view.

I tend to think of it through that particular set of lenses, which is that the standards attempts in this area are old, and many of the approaches that have been tried over the last decade have been proprietary.

For example, on my old team at PayPal, I had a small group of folks who surveyed security vendors. I remember asking them to tell me how many authentication vendors there were and to plot that for me by year.

Growing number of vendors

They sighed heavily, because their database wasn’t organized that way, but then came back a couple of weeks later. Essentially they said that in 2007, it was 30-odd vendors, and it has been going up by about a dozen a year, plus or minus some, ever since, and we're now comfortably at more than 100.

Any market that has 100 vendors, none of whose products interoperate with each other, is a failing market, because none of those vendors, bar only a couple, can claim very large market share. This is just a market where we haven’t seen the right kind of approaches deployed, and as a result, we're struck where we are today without doing something different.

Gardner: Paul, any thoughts on containerization, pros and cons?

Madsen: I think of phones as having two almost completely orthogonal aspects. First is how you can leverage the phone to authenticate the user. Whether it's FIDO or something proprietary, there's value in that.

Secondly is the phone as an application platform, a means to access potentially sensitive applications. What mobile applications introduce that’s somewhat novel is the idea of pulling down that sensitive business data to the device, where it can be more easily lost or stolen, given the mobility and the size of those devices.

IT, arguably and justifiably, wants to protect the business data on it, but the employee, particularly in a BYOD case, wants to keep their use of the phone isolated and private.

The challenge for the enterprise is, if you want to enable your employees with devices, or enable them to bring their own in, how do you protect that data? It seems, more and more, that the recognized challenge is that you can't.

The challenge is not only protecting the data, but keeping the usage of the phone separate. IT, arguably and justifiably, wants to protect the business data on it, but the employee, particularly in a BYOD case, wants to keep their use of the phone isolated and private.

So containerization or dual-persona systems attempt to slice and dice the phone up into two or more pieces. What is missing from those models, and it's changing, is a recognition that, by definition, that's an identity problem. You have two identities -- the business user and the personal user -- who want to use the same device, and you want to compartmentalize those two identities, for both security and privacy reasons.

Identity standards and technologies could play a real role in keeping those pieces separate. The employee might use Box for business usage, but might also use it for personal usage. That's an identity problem, and identity will keep those two applications and their usages separate.

Diodati: To build on that a little bit, if you take a look at the history of containerization, there were some technical problems and some usability problems. There was a lack of usability that drove an acceptance problem within a lot of enterprises. That’s changing over time.

To talk about what Michael was talking about in terms of the failure of other standardized approaches to authentication, you could look back at OATH, which is maybe the last big industry push, 2004-2005, to try to come up with a standard approach, and it failed on interoperability. OATH was a one-time password, multi-vendor  capability. But in the end, you really couldn’t mix and match devices. Interoperability is going to be a big, big criteria for acceptance of FIDO. [See more on identity standards and APIs.]

Mobile device management

Gardner: Another thing out there in the market now, and it has gotten quite a bit of attention from enterprises as they are trying to work through this, is mobile device management (MDM).  Do you have any thoughts, Mark, on why that has not necessarily worked out or won’t work out? What are the pros and cons of MDM?

Diodati: Most organizations of a certain size are going to need an enterprise mobility management solution. There is a whole lot that happens behind the scenes in terms of binding the user's identity, perhaps putting a certificate on the phone.

Michael talked about X.509. That appears to be the lowest common denominator for authentication from a mobile device today, but that can change over time. We need ways to be able to authenticate users, perhaps issue them certificates on the phone, so that we can do things like IPSec.

Also, we may be required to give some users access to offline secured data. That’s a combination of apps and enterprise mobility management (EMM) technology. In a lot of cases, there's an EMM gateway that can really help with giving offline secure access to things that might be stored on network file shares or in SharePoint, for example.

If there's been a stumbling block with EMM, it's been the heterogeneity of the devices, which makes it a challenge to implement a common set of policies.

The fundamental issue with MDM is, as the name suggests, that you're trying to manage the device, as opposed to applications or data on the device.

But the technology of EMM also had to mature. We went from BlackBerry Enterprise Server, which did a pretty good job in a homogeneous world but maybe didn't address everybody's needs, to the AirWatches and MobileIrons of the world, which have had to deal with heterogeneity and increased functionality.

Madsen: The fundamental issue with MDM is, as the name suggests, that you're trying to manage the device, as opposed to applications or data on the device. That worked okay when the enterprise was providing employees with their BlackBerry, but it's hard to reconcile in the BYOD world, where users are bringing in their own iPhones or Androids. In their mind, they have a completely justified right to use that phone for personal applications and usage.

So some of the mechanisms of MDM remain relevant, being able to wipe data off the phone, for example, but the device is no longer the appropriate granularity. It's some portion of the device that the enterprise is authoritative over.

Gardner: It seems to me, though, that we keep coming back to several key concepts: authentication and identity, and then, of course, a standardization approach that ameliorates those interoperability and heterogeneity issues. [See more on a new vision for identity.]

So let's look at identity and authentication. Some people use them interchangeably. How should we best understand them as being distinct? What's the relationship between them, and why are they so essential to moving to a new architecture for solving these issues? Let's start with you, Michael.

Identity is center

Barrett: I was thinking about this earlier. I remember having some arguments with Phil Becker back in the early 2000s when I was running the Liberty Alliance, which was the standards organization that came up with SAML 2.0. Phil coined that phrase, "Identity is center," and he used to argue that essentially everything fell under identity.

What I thought back then, and still largely think, is that identity is a broad and complex domain. But as we've let it grow, identity and authentication are not the same thing. Authentication is definitely a sub-domain of security, along with a whole number of others. We talked about containerization earlier, which is a kind of security-isolation technique in many regards. But I am not sure that identity and authentication are exactly in the same dimension.

In fact, the way I would describe it is this: take something like the levels-of-assurance model, which we're all fairly familiar with in the identity sense. If you look at it today, it has authentication and identity-verification concepts bound together.

In fact, I suspect that in the coming year or two, we're probably going to have to decouple those and say that it's not really a linear, one-dimensional thing, with level one, level two, level three, and level four. Rather, it's a kind of two-dimensional matrix, where we have identity-verification concepts on one side and authentication on the other. Today, we've collapsed them together, and I am not sure we have actually done anybody any favors by doing that.

Definitely, they're closely related. You can look at some of the difficulties that we've had with identity over the last decade and say that it’s because we actually ignored the authentication aspect. But I'm not sure they're the same thing intrinsically. 

Gardner: Interesting. I've heard people say that any high level of security for mobile devices has to be about identity. How else could it possibly work? Authentication has to be part of that, but identity seems to be getting more traction as a way to solve these issues across all the other variables, to adjust accordingly over time, and even to automate by policy.

Mark, how do you see identity and authentication? How important is identity as a new vision for solving these problems?

Diodati: You would have to put security at the top, and identity would be a subset of things that happen within security. Identity includes authorization -- determining whether the user is authorized to access the data. It also includes provisioning: how do we manage user identities within critical systems, given that there is never one big identity in the sky? Identity includes authentication and a couple of other things.

To answer the second part of your question, Dana, about the role of identity in trying to solve these problems: we in the identity community have missed some opportunities in the past to talk about identity as the great enabler.

With mobile devices, we want to have the ability to enforce basic security controls, but it's really about identity. Identity can enable so many great things to happen, not only for enterprises, but within the digital economy at large. There's a lot of opportunity if we can orient identity as an enabler.

Authentication and identity

Madsen: I just think authentication is something we have to do to get to identity. If there were no bad people in the world and if people didn’t lie, we wouldn’t need authentication.

We would all have a single identifier, we would present ourselves, and nobody else would lay claim to that identifier. There would be no need for strong authentication. But we don’t live there. Identity is fundamental, and authentication is how we lay claim to a particular identity.

Diodati: You can build the world's best authorization policies. But they are completely worthless unless you've done the authentication right, because you have zero confidence that the users are who they say they are.

Gardner: So, I assume that multifactor authentication also is in that subset. It's just a way of doing it better or more broadly, with more variables and devices that can be brought to bear. Is that correct?

Madsen: Indeed.

Diodati: The definition of multifactor has evolved over time too. In the past, we talked about "strong authentication." What we meant was "two-factor authentication," and that is really changing, particularly when you look at some of the emerging technologies like FIDO.

If you look at the broader trends around adaptive authentication, the relationship to the user or the consumer is more distant. We have to apply a set of adaptive techniques to get better identity assurance about the user.

Gardner: I'm just going to make a broad assumption here that the authentication part of this does get solved -- that multifactor and adaptive authentication, using devices that people are familiar and comfortable with, even continuing to use many of the passwords and single sign-on, all somehow gets rationalized.

Then, we're elevated to this notion of identity. How do we then manage that identity across these domains? Is there a central repository? Is there a federation? How would a standard come to bear on that major problem of the federation issue, control, and management and updating and so forth? Let’s go back to Michael on that.

Barrett: I tend to start from a couple of different perspectives on this. One is that we do have to fix the authentication standards problem, and that's essentially what FIDO is trying to do.

So, if you accept that FIDO solves authentication, what you are left with is an evolution of a set of standards that, over the last dozen years or so, starting with SAML 2.0, but then going on up through the more recent things like OpenID Connect and OAuth 2.0, and so on, gives you a robust backplane for building whatever business arrangement is appropriate, given the problem you are trying to solve.

Liability

I chose the word "business" quite consciously in there, because it's fair to say that there are certain classes of models that have stalled out commercially for a whole bunch of reasons, particularly around the dreaded L-word, i.e., liability.

We tried to build things that were too complicated. We could describe this grand, long-term vision of what the universe looked like. Andrew Nash is very fond of saying that we can describe this rich ecosystem of identity-enabled services and so on, but you can't get there from here, which is the punch line of a rather old joke.

Gardner: Mark, we understand that identity is taking on a whole new level of importance. Are there examples that illustrate how an identity-centric approach to security, governance, and manageability for mobile-tier activities -- even ways it can help developers bring new application programming interfaces (APIs) into play, with context for commerce and location -- opens up things we haven't even scratched the surface of yet?

Help me understand, through an example rather than in the abstract, how identity fits into this and what we might expect identity to do if all these things -- management, standards, and so forth -- come together.

Diodati: Identity is pretty broad when you take a look at the different disciplines that might be at play. Let’s see if we can pick out a few.

We have spoken about authentication a lot. Emerging standards like FIDO are important, so that we can support applications that require higher assurance levels with lower cost and fewer usability problems.

A difficult trend to ignore is the API-first development modality. We're talking about things like OAuth and OpenID Connect. Both of those are very important, critical standards when we start talking about API-based, and even non-API HTTP-based, access.

OpenID Connect, in particular, gives users ways to discover where they want to authenticate and to get access to the data they need. The challenge is that the mobile app is interacting on behalf of a user. How do you actually apply things like adaptive techniques to an API session to raise identity assurance levels? Given that OpenID Connect was just ratified earlier this year, we're still in the early stages of seeing how that's going to play out.
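[To make the adaptive idea concrete, here is a minimal sketch in Python, standard library only, of how an API might decide to ask for step-up authentication based on claims from an already-validated OpenID Connect ID token. The acr and auth_time claim names are standard OpenID Connect; the policy threshold and the urn:example acr values are hypothetical, not taken from any product discussed here.]

    import time

    MAX_AUTH_AGE_SECONDS = 15 * 60              # illustrative policy value
    HIGH_ASSURANCE_ACR = "urn:example:acr:mfa"  # hypothetical acr value meaning "MFA was used"

    def needs_step_up(claims: dict, sensitive_call: bool) -> bool:
        """Decide whether this API call should trigger re-authentication."""
        auth_age = time.time() - claims.get("auth_time", 0)
        weak_login = claims.get("acr") != HIGH_ASSURANCE_ACR
        return sensitive_call and (weak_login or auth_age > MAX_AUTH_AGE_SECONDS)

    # A mobile app calling a sensitive API an hour after a password-only login:
    claims = {"sub": "alice", "acr": "urn:example:acr:password",
              "auth_time": time.time() - 3600}
    if needs_step_up(claims, sensitive_call=True):
        print("401: tell the client to re-authenticate with a stronger factor")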

Gardner: Michael, any thoughts on examples, use cases, a vision for how this should work in the not too distant future?

Barrett: I'm a great believer in open standards, as I think I have shown throughout the course of this discussion. I think that OpenID Connect, in particular, and the fact that we now have that standard ratified, [is useful]. I do believe that the standards, to a very large extent, allow the creation of deployments that will address those use-cases that have been really quite difficult [without these standards in place].

Ahead of demand

The problem that you want to avoid, of course, is that you don’t want a standard to show up too far ahead of the demand. Otherwise, what you wind up with is just some interesting specification that never gets implemented, and nobody ever bothers deploying any of the implementations of it.

So, I believe in just-in-time standards development. As an industry, identity has matured a lot over the last dozen years. When SAML 2.0 and Shibboleth came along, it was a very federation-centric world, addressing a very small class of use cases. Now, we have a more robust set of standards. What's going to be really interesting to see is how those new standards get used to address use cases that the previous standards really couldn't.

I'm a bit of a believer in sort of Darwinian evolution on this stuff and that, in fact, it’s hard to predict the future now. Niels Bohr famously said, "Prediction is hard, especially when it involves the future.” There is a great deal of truth to that.

Gardner: Hopefully we will get some clear insights at the Cloud Identity Summit this month, July 19, and there will be more information to be had there.

I also wonder whether we're almost past the point now when we talk about mobile security, cloud security, data-center security. Are we going to get past that, or is this going to become more of a fabric of security that the standards help to define and then the implementations make concrete? Before we sign off, Mark, any last thoughts about moving beyond segments of security into a more pervasive concept of security?

Diodati: We're already starting to see that, where people are moving toward software as a service (SaaS) and moving away from on-premises applications. Why? A couple of reasons. The revenue and expense model lines up really well with what they are doing; they pay as they grow. There's not a big bang of initial investment. Also, SaaS is turnkey, which means that much of the security lifting is done by the vendor.

That's also certainly true with infrastructure as a service (IaaS). If you look at things like Amazon Web Services (AWS), it is more complicated than SaaS, but it is still a way to converge security functions within the cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Tags:  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Interarbor Solutions  Mark Diodati  Michael Barrett  mobile computing  mobile devices  OAuth  Paul Madsen  Ping Identity 

 

As the digital economy ramps up, expect a new identity management vision to leapfrog passwords

Posted By Dana L Gardner, Monday, July 07, 2014

A stubborn speed bump continues to hobble the digital economy. We're referring to the outdated use of passwords and limited identity-management solutions that hamper getting all of our devices, cloud services, enterprise applications, and needed data to work together in anything approaching harmony. 

The past three years have seen a huge uptick in the number and types of mobile devices, online services, and media. Yet, we're seemingly stuck with 20-year-old authentication and identity-management mechanisms -- mostly based on passwords.

The resulting chasm between what we have and what we need for access control and governance spells ongoing security lapses, privacy worries, and a detrimental lack of interoperability among cross-domain cloud services. So, while a new generation of standards and technologies has emerged, a new vision is also required to move beyond the precarious passel of passwords that each of us seems to use all the time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

The fast approaching Cloud Identity Summit 2014 this July gives us a chance to recheck some identity-management premises -- and perhaps step beyond the conventional to a more functional mobile future. To help us define these new best ways to manage identities and access control in the cloud and mobile era, please join me in welcoming our guest, Andre Durand, CEO of Ping Identity. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The Cloud Identity Summit is coming up, and at the same time, we're finding that this digital economy is not really reaching its potential. There seems to be an ongoing challenge as we have more devices, more varieties of services, and a need for cross-domain interaction. It's almost as if we're stymied. So why is this problem so intractable? Why are we still dealing with passwords and outdated authentication?

Durand: Believe it or not, you have to go back 30 years to when the problem originated, when the Internet was actually born. Vint Cerf, one of the founders and creators of the Internet, was interviewed by a reporter two or three years back. He was asked if he could go back 30 years, when he was creating the Internet, what would he do differently? And he thought about it for a minute and said, "I would have tackled the identity problem."

Durand

He continued, "We never expected the Internet to become the Internet. We were simply trying to route packets between two trusted computers through a standardized networking protocol. We knew that the second we started networking computers, you needed to know who the user was that was making the request, but we also knew that it was a complicated problem." So, in essence, they punted.

Roll forward 30 years, and the bulk of the security industry and the challenges we now face in identity management at scale, Internet or cloud scale, all result from not having tackled identity 30 years ago. Every application, every device, every network that touches the Internet has to ask you who you are. The easiest way to do that is via user name and password, because there was no concept of who the user was on the network at a more fundamental universal layer.

So all this password proliferation comes as a result of the fact that identity is not infrastructure today in the Internet, and it's a hard problem to retrofit the Internet for a more universal notion of who you are, after 30 years of proliferating these identity silos. 

Internet of things

Gardner: It certainly seems like it’s time, because we're not only dealing with people and devices. We're now going into the Internet of Things, including sensors. We have multiple networks and more and more application programming interfaces (APIs) and software-as-a-service (SaaS) applications and services coming online. It seems like we have to move pretty quickly. [See more on identity standards and APIs.]

Durand: We do. The shift that began to exacerbate, or at least highlight, the underlying problem of identity started with cloud and SaaS adoption, somewhere around the 2007-2008 time frame. With that, some of the applications moved outside of the data center. Then, starting around 2010 or 2011, when we started to really get into the smartphone era, the user followed the smartphone off the corporate network and the corporate-issued computer and onto AT&T's network.

So you have the application outside of the data center. You have the user off the network. The entire notion of how to protect users and data broke. It used to be that you put your user on your network with a company-issued computer accessing software in the data center. It was all behind the firewall.

Those two shifts changed where the assets were, the applications, data, and the user. The paradigm of security and how to manage the user and what they have access to also had to shift and it just brought to light the larger problem in identity.

Gardner: And the stakes here are fairly high. We're looking at a tremendously inefficient healthcare system here in the United States, for example. One of the ways that could be ameliorated and productivity could be increased is for more interactions across boundaries, more standards applied to how very sensitive data can be shared. If we can solve this problem, it seems to me there is really a flood of improvement in productivity to come behind it.

Durand: It's enormous and fundamental. Someone shared with me several years ago a simple concept that captures the essence of how much friction we have in the system today in and around identity and users in their browsers going places. The comment was simply this: In your browser you're no longer limited to one domain. You're moving between different applications, different websites, different companies, and different partners with every single click.

What we need is the ability for your identity to follow your browser session, as you're moving between all these security domains, and not have to re-authenticate yourself every single time you click and are off to a new part of the Internet.

We need that whether that means employees sitting at their desktop on a corporate network, opening their browser and going to Salesforce.com, Office 365, Gmail, or Box, or whether it means a partner going into another partner’s application, say to manage inventory as part of their supply chain.

We have to have an ability for the identity to follow the user, and fundamentally that represents this next-gen notion of identity.

Gardner: I want to go back to that next-gen identity definition in a moment, but I notice you didn't mention authenticating through biometrics to a phone or to a PC. You're talking, I think, at a higher abstraction, aren't you? At the software or even the services level for this identity. Or did I read it wrong?

Stronger authentication

Durand: No, you read it absolutely correctly. I was definitely speaking at 100,000 feet there. Part of the solution I see playing out is that what's coming in the future will be stronger authentication to fewer places -- say, stronger authentication to your corporate network or to your corporate identity. Then, it's a seamless ability to access all the corporate resources, no matter whether they're proprietary business applications in the data center or whether the applications are in the cloud or even in the private cloud.

So, stronger user authentication is likely through the mobile phone, since the phones have become such a phenomenal platform for authentication. Then, once you authenticate to that phone, there will be a seamless ability to access everything, irrespective of where it resides.

Gardner: Then, when you elevate to that degree, it allows for more policy-driven and intelligence-driven automated and standardized approaches that more and more participants and processes can then adopt and implement. Is that correct?

Durand: That’s exactly correct. We had a notion of who was accessing what, the policy, governance, and the audit trail inside of the enterprise, and that was through the '80s, '90s, and the early 2000s. There was a lot of identity management infrastructure that was built to do exactly that within the enterprise.

Gardner: With directories.

Durand: Right, directories and all the identity management, Web access management, identity-management provisioning software, and all the governance software that came after that. I refer to all of those systems as Identity and Access Management 1.0.

It was all designed to manage this, as long as all the applications, user, and data were behind the firewall on the company network. Then, the data and the users moved, and now even the business applications are moving outside the data center to the public and private cloud.

We now live in this much more federated scenario, and there is a new generation of identity management that we have to install to enable the security, auditability, and governance of that new highly distributed or federated scenario.

Gardner: Andre, let’s go back to that "next-generation level" of identity management. What did you mean by that? 

Durand: There are a few tenets that fall into the next-generation category. For me, businesses are no longer a silo. Businesses today are fundamentally federated. They're integrating with their supply chain. They're engaging with social identities hitting their consumer and customer portals. They're integrating with their clients and allowing their clients to gain easier access to their systems. Their employees are going out to the cloud.

Fundamentally integrated

All of these are scenarios where the IT infrastructure in the business itself is fundamentally integrated with its customers, partners, and clients. So that would be the first tenet. They're no longer a silo.

The second thing is that in order to achieve the scale of security around identity management in this new world, we can no longer install proprietary identity and access management software. Every interface for how security and identity is managed in this federated world needs to be standardized.

So we need open identity standards such as SAML, OAuth, and OpenID Connect, in order to scale these use cases between companies. It’s not dissimilar to an era of email, before we had Internet e-mail and the SMTP standard.

Companies had email, but it was enterprise email. It wouldn’t communicate with other companies' proprietary email. Then, we standardized email through SMTP and instantly we had Internet-scaled email.

I predict that the same thing is occurring, and will occur, with identity. We'll standardize all of these cases to open identity standards and that will allow us to scale the identity use cases into this federated world.

The third tenet is that, for many years, we really focused on the browser and web infrastructure. But now, you have users on mobile devices and applications accessing APIs. You have as many, if not more, transactions occurring through the API and mobile channel as you do through the web.

So whatever infrastructure we develop needs to normalize the API and mobile access the same way that it does the web access. You don’t want two infrastructures for those two different channels of communication. Those are some of the big tenets of this new world that define an architecture for next-gen identity that’s very different from everything that came before it.

Gardner: To your last tenet, how do we start to combine without gaps and without security issues the ability to exercise a federated authentication and identity management capability for the web activities, as well as for those specific APIs and specific mobile apps and platforms?

Durand: I'll give you a Ping product-specific example, but it's for exactly that reason that we chose the path that we did for this new product. We have a product called PingAccess, which is a next-gen access control product that provides web access management for browsers and users of web applications. It also provides API access management when companies want to expose their APIs to developers for mobile applications and to other web services.

Prior to PingAccess in a single product, allowing you to enable policy for both the API channel and the web channel, those two realms typically were served by independent products. You'd buy one product to protect your APIs and you’d buy another product to do your web-access management.

Same product

Now with this next-gen product, PingAccess, you can do both with the same product. It’s based upon OAuth, an emerging standard for identity security for web services, and it’s based upon OpenID Connect, which is a new standard for single sign-on and authentication and authorization in the web tier. [See more on identity standards and APIs.]

We built the product to cross the chasm between API and web, and also built it based upon open standards, so we could really scale the use cases.
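[The sketch below, in Python, illustrates the general idea Durand describes: one policy layer in front of both the browser channel and the API channel. It is not PingAccess code; the in-memory session and token stores stand in for real validation such as JWT verification or OAuth token introspection.]

    SESSIONS = {"sess-123": {"sub": "alice", "scopes": ["inventory:read"]}}     # browser sessions
    TOKENS = {"tok-abc": {"sub": "partner-app", "scopes": ["inventory:read"]}}  # OAuth access tokens

    def extract_identity(request: dict):
        """Accept either an API bearer token or a web session cookie."""
        auth = request.get("headers", {}).get("Authorization", "")
        if auth.startswith("Bearer "):                                  # API / mobile channel
            return TOKENS.get(auth.split(" ", 1)[1])
        return SESSIONS.get(request.get("cookies", {}).get("session"))  # web channel

    def authorize(request: dict, required_scope: str) -> int:
        identity = extract_identity(request)
        if not identity or required_scope not in identity["scopes"]:
            return 403
        return 200

    # The same policy applied to both channels:
    print(authorize({"headers": {"Authorization": "Bearer tok-abc"}}, "inventory:read"))  # 200
    print(authorize({"cookies": {"session": "sess-123"}}, "inventory:read"))              # 200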

Gardner: Whenever you bring out the words "new" and "standard," you'll get folks who might say, "Well, I'm going to stick with the tried and true." Is there any sense of the level of security, privacy control management, and governance control with these new approaches, as you describe them, that would rebut that instinct to stick with what you have?

Durand: As far as the instinct to stick with what you have, keep in mind that the alternative is proprietary, and there is nothing about proprietary that necessarily means you have better control or more privacy.

The standards are really defining secure mechanisms to pursue a use case between two different entities. You want a common interface, a common language to communicate. There's a tremendous amount of the work that goes into it by the entire industry to make sure that those standards are secure and privacy enabling.

I'd argue that it's more secure and privacy enabling than the one-off proprietary systems and/or the homegrown systems that many companies developed in the absence of these open standards.

Gardner: Of course, with standards, it's often a larger community, where people can have feedback and inputs to have those standards evolve. That can be a very powerful force when it comes to making sure that things remain stable and safe. Any thoughts about the community approach to this and where these standards are being managed?

Durand: A number of the standards are being managed now by the Internet Engineering Task Force (IETF), and as you know, they're well-regarded, well-known, and certainly well-recognized for their community involvement and having a cycle of improvement that deals with threats, as they emerge, as the community sees them, as a mechanism to improve the standards over time to close those security issues.

Gardner: Going back to the Cloud Identity Summit 2014, is this a coming-out party of sorts for this vision of yours? How do you view the timing right now? Are we at a tipping point, and how important is it to get the word out properly and effectively?

Durand: This is our fifth annual Cloud Identity Summit. We've been working toward this combination of where identity and the cloud and mobile ultimately intersect. All of the trends that I described earlier today -- cloud adoption, mobile adoption, moving the application and the user and the device off the network -- are driving more and more awareness of a new approach to identity management that is disruptive and fundamentally different than the traditional way of managing identity.

On the cusp

We're right on the cusp where the adoption across both cloud and mobile is irrefutable. Many companies are now going all in, making a cloud-first and mobile-first posture the basis of enterprise adoption across those two dimensions.

So it is at a tipping point. It's the last nail in the coffin for enterprises to get them to realize that they're now in a new landscape and need to reassess their strategies for identity, when the business applications, the ones that did not convert to SaaS, move to Amazon Web Services, Equinix, or to Rackspace and the private-cloud providers.

That, all of a sudden, would be the last shift where applications have left the data center and all of the old paradigms for managing identity will now need to be re-evaluated from the ground up. That’s just about to happen.

Gardner: Another part of this, of course, is the users themselves. If we can bring to the table doing away with passwords, that itself might encourage a lot of organic adoption and calls for this sort of capability. Any sense of what we can do in terms of behavior at the user level, and what would incentivize users to knock on the door of their developers or IT organization and ask for the sort of capability and vision that we described?

Durand: Now you're highlighting my kick-off speech at PingCon, which is Ping’s Customer and Partner Conference the day after the Cloud Identity Summit. We acquired a company and a technology last year in mobile authentication to make your mobile phone the second factor, strong authentication for corporations, effectively replacing the one-time tokens that have been issued by traditional vendors for strong authentication.

It's an application you load on your smartphone, and it gives you the ability to simply swipe across the screen to authenticate when requested. We'll be demonstrating the mobile phone as a second-factor authentication. What I mean there is that you would type in your username and password and then be asked to swipe the phone, just to verify your identity before getting into the company.

We'll also demonstrate how you can use the phone as a single-factor authentication. As an example, let’s say I want to go to some cloud service, Dropbox, Box, or Salesforce. Before that, I'm asked to authenticate to the company. I'd get a notification on my phone that simply says, "Swipe." I do the swipe, it already knows who I am, and it just takes me directly to the cloud. That user experience is phenomenal.

When you experience an ability to get to the cloud, authenticating to the corporation first, and simply swipe with your mobile phone, it just changes how we think about authentication and how we think about the utility of having a smartphone with us all the time.
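[A minimal sketch in Python of the flow Durand describes: password first, then a push "swipe to approve" as the second factor. The user store and the push transport are simulated, and the function names are hypothetical; a real deployment would call an actual push or authentication service.]

    import secrets
    from typing import Optional

    USERS = {"alice": {"password": "correct horse", "device": "alice-phone"}}  # demo data only

    def send_push_and_wait(device_id: str) -> bool:
        """Stand-in for a real push service; pretend the user swiped to approve."""
        print(f"Push sent to {device_id}: 'Swipe to approve this sign-in'")
        return True

    def login(username: str, password: str) -> Optional[str]:
        user = USERS.get(username)
        if not user or user["password"] != password:   # first factor
            return None
        if not send_push_and_wait(user["device"]):     # second factor
            return None
        return secrets.token_urlsafe(16)               # session token used for single sign-on

    print(login("alice", "correct horse"))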

Gardner: This aligns really well, and the timing is awesome for what both Google with Android and Apple with iOS are doing in terms of being able to move from screen to screen seamlessly. Is that something that's built into this as well?

If I authenticate through my mobile phone, but then I end up working through a PC, a laptop, or any number of other interfaces, is this something that carries through, so that I'm authenticated throughout my activity?

Entire vision

Durand: That's the entire vision of identity federation. Authenticate once, strongly to the network, and have an ability to go everywhere you want -- data center, private cloud, public SaaS applications, native mobile applications -- and never have to re-authenticate.

Gardner: Sounds good to me, Andre. I'm all for it. Before we sign off, do we have an example? It's been an interesting vision, and we've talked about the what and how, but is there a way to illustrate what you get when this works well -- perhaps in an enterprise, perhaps across boundaries -- and how it works in practice?

Durand: There are three primary use cases in our business for next-generation identity, and we break them up into workforce, partner, and customer identity use cases. I'll give you quick examples of all three.

In the workforce use case, what we see most is a desire for enterprises to enable single sign-on to the corporation, to the corporate network, or the corporate active directory, and then single-click access to all the applications, whether they're in the cloud or in the data center. It presents employees in the workforce with a nice menu of all their application options. They authenticate once to see that menu and then, when they click, they can go anywhere without having to re-authenticate.

That's primarily the workforce use case. It gives IT the ability to control which applications employees use, where they're going in the cloud, and what they can do in the cloud, to have an audit trail of that, and to have full control over how employees access cloud applications. The next-gen solutions that we provide accommodate that use case.

The second use case is what we call a customer portal or a customer experience use case. This is a scenario where customers are hitting a customer portal. Many of the major banks in the US, and even around the world, use Ping to secure their customer websites. When you log into your bank to do online banking, you're logging into the bank, but then, when you click on any number of the links -- whether to order checks or get check fulfillment -- that goes out to Harland Clarke or to Wealth Management.

That goes to a separate application. That banking application is actually a collection of many applications, some run by partners, some run by different divisions of the bank. The seamless customer experience, where the user never sees another login or registration screen, is all secured through Ping infrastructure. That's the second use case.

The third use case is what we call a traditional supply chain or partner use case. The world's largest retailer is our customer. They have some 100,000 suppliers that access inventory applications to manage inventory at all the warehouses and distribution centers.

Prior to having Ping technology, they would have to maintain the username and password of the employees of all those 100,000 suppliers. With our technology they allow single sign-on to that application, so they no longer have to manage who is an employee of all of those suppliers. They've off-loaded the identity management back to the partner by enabling single sign-on.

About 50 of the Fortune 100 are Ping customers. They include Best Buy, where you don't have to log in separately to go to the Reward Zone. You're actually going through Ping.

If you're a Comcast customer and you log into comcast.net and click on any one of the content links or email, that customer experience is secured though Ping. If you log into Marriott, you're going through Ping. The list goes on and on.

In the future

Gardner: This all comes to a head as we're approaching the July Cloud Identity Summit 2014 in Monterey, Calif., which should provide an excellent forum for keeping the transition from passwords to a federated, network-based intelligent capability on track.

Before we sign off, any idea of where we will be a year from now? Is this a stake in the ground for the future, or something that we could extend our vision toward in terms of what might come next, if we make some strides and a lot of what we have been talking about today sees significant uptake and use?

Durand: We're right on the cusp of the smartphone becoming a platform for strong, multi-factor authentication. That adoption is going to be fairly quick. I expect you're going to see enterprises adopting stronger authentication using the smartphone en masse.

Gardner: I suppose that is an accelerant to the bring-your-own-device (BYOD) trend. Is that how you see it as well?

Durand: It’s a little bit orthogonal to BYOD. The fact that corporations have to deal with that phenomenon brings its own IT headaches, but also its own opportunities in terms of the reality of where people want to get work done.

But the fact that we can assume that all of the devices out there now are essentially smartphone platforms, very powerful computers with lots of capabilities, is going to allow the enterprises now to leverage that device for really strong multi-factor authentication to know who the user is that’s making that request, irrespective of where they are -- if they're on the network, off the network, on a company-issued computer or on their BYOD.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Tags:  Andre Durand  API  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Identity management  Interarbor Solutions  OAuth  OpenID Connect  Ping Identity  Single sign-on 

 

Standards and APIs: How to best manage identity and security in the mobile era

Posted By Dana L Gardner, Wednesday, July 02, 2014

The advent of the application programming interface (API) economy has forced a huge, pressing need for organizations to both seek openness and improve security for accessing mobile applications, data, and services anytime, anywhere, and from any device.

Awash in inadequate passwords and battling subsequent security breaches, business and end-users alike are calling for improved identity management and federation technologies. They want workable standards to better chart the waters of identity management and federation, while preserving the need for enterprise-caliber risk remediation and security.

Meanwhile, the mobile tier is becoming an integration point for scads of cloud services and APIs, yet unauthorized access to data remains common. Mobile applications are not yet fully secure, and identity control that meets audit requirements is hard to come by. And so developers are scrambling to find the platforms and tools to help them manage identity and security, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Clearly, the game has changed for creating new and attractive mobile processes, yet the same old requirements remain wanting around security, management, interoperability, and openness.

BriefingsDirect assembled a panel of experts to explore how to fix these pressing needs: Bradford Stephens, Developer and Platforms Evangelist in the CTO's Office at Ping Identity; Ross Garrett, Senior Director of Product Marketing at Axway; and Kelly Grizzle, Principal Software Engineer at SailPoint Technologies. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are approaching the Cloud Identity Summit 2014 (CIS), which is coming up on July 19 in Monterey, Calif. There's a lot of frustration with getting identity services that meet the needs of developers and enterprise operators alike. So let's talk a little bit about what's going on with APIs and identity.

What are the trends in the market that keep this problem pressing? Why is it so difficult to solve?

Interaction changes

Stephens: Well, as soon as we've settled on a standard, the way we interact with computers changes. It wasn’t that long ago that if you had Active Directory and SAML and you hand-wrote security endpoints of model security products, you were pretty much covered.

Stephens

But in the last three or four years, we've gone to a world where mobile is more important than web. Distributed systems are more important than big iron. And we communicate with APIs instead of channels and SDKs, and that requires a whole new way of thinking about the problem.

Garrett: Ultimately, APIs are becoming the communication framework, the fabric, in which all of the products that we touch today talk to each other. That, by extension, presents a new identity challenge. That's a big part of why we've seen some friction and schizophrenia around the types of identity technologies that are available to us.

So we see waves of different technologies come and go, depending on what is the flavor of the month. That has caused some frustration for developers, and will definitely come up during our Cloud Identity Summit in a couple of weeks.

Grizzle: APIs are becoming exponentially more important in the identity world now. As Bradford alluded to, the landscape is changing. There are mobile devices as well as software-as-a-service (SaaS) providers out there who are popping up new services all the time. The common thread between all of them is the need to be able to manage identities. They need to be able to manage the security within their system. It makes total sense to have a common way to do this.

Grizzle

APIs are key for all the different devices and ways that we connect to these service providers. Becoming standards based is extremely important, just to be able to keep up with the adoption of all these new service providers coming on board.

Gardner: As we describe this as the API economy, I suppose it’s just as much a marketplace and therefore, as we have seen in other markets, people strive for predominance. There's jockeying going on. Bradford, is this a matter of an architectural shift? Is this a matter of standards? Or is this a matter of de-facto standards? Or perhaps all of the above?

Stephens: It's getting complex quickly. I think we're settling on standards, like it or not, mostly positively. I see most people settling on at least OAuth 2.0 as a standard for tokens, and OpenID Connect for authentication information, but I think that's about as far as we get.

There's a lot of struggle with established vendors vying to implement these protocols. They try to bridge the gap between the old world of say SAML and Active Directory and all that, and the new world of SCIM, OAuth, OpenID Connect. The standards are pretty settled, at least for the next two years, but the tools, how we implement them, and how much work it takes developers to implement them, are going to change a lot, and hopefully for the better.

Evolving standards

Garrett: We have identified a number of new standards that are bridging this new world of API-oriented connectivity. Learning from the past of SAML and legacy, single sign-on infrastructure, we definitely need some good technology choices.

Garrett

The standards seem to be leading the way. But by the same token, we should keep a close eye on how fast the market is changing relative to the standards. We've all seen things like OAuth progress more slowly than some of the implementations out there. This means the ratification of the standard was happening after many providers had actually implemented it. It's the same for OpenID Connect.

We are in line there, but the actual standardization process doesn’t always keep up with where the market wants to be.

Gardner: We've seen this play out before: standards can lag. Getting consensus, developing the documentation and details, and getting committees to sign off can take time, and markets move at their own velocity. Many times in the past, organizations have hedged their bets by adopting multiple standards or tracking multiple ways of doing things, which requires federation and integration.

Kelly, are there big tradeoffs with standards and APIs? How do we mitigate the risk and protect ourselves by both adhering to standards, but also being agile in the market?

Grizzle: That’s kind of tricky. You're right in that standards tend to lag. That’s just part and parcel of the standardization process. It’s like trying to pass a bill through Congress. It can go slow.

Something that we've seen some of these standards do right, from OAuth and from the SCIM perspective, is that both of those have started their early work with a very loose standardization process, going through not one of the big standards bodies, but something that can be a little bit more nimble. That’s how the SCIM 1.0 and 1.1 specs came out, and they came out in a reasonable time frame to get people moving on it.

Now that things have moved to the Internet Engineering Task Force (IETF), development has slowed down a little bit, but people have something to work with and are able to keep up with the changes going on there.

I don’t know that people necessarily need to adopt multiple standards to hedge their bets, but by taking what’s already there and keeping a pulse on the things that are going to change, as well as the standard being forward-thinking enough to allow some extensibility within it, service providers and clients, in the long run, are going to be in a pretty good spot.

Quick primer

Gardner: We've talked a few technical terms so far, and just for the benefit of our audience, I'd like to do a quick primer, perhaps with you, Bradford. To start: OAuth, which is with the IETF now. Could you just quickly tell the audience what OAuth is, what it does, and why it's important when we talk about APIs, security, and mobile?

Stephens: OAuth is the foundation protocol for authorization when it comes to APIs for web applications. OAuth 2 is much more flexible than OAuth 1.

Basically, it allows applications to ask for access to stuff. It seems very vague, but it’s really powerful once you start getting the right tokens for your workflows. And it provides the same foundation for everything else we do for identity and APIs.

The best example I can think of is when you log into Facebook, and Facebook asks whether you really want this app to see your birthday, all your friends' information, and everything else. Being able to communicate all that over OAuth 2.0 is a lot easier than it was with OAuth 1.0 a few years ago.
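[Here is a minimal sketch in Python of the first step of the OAuth 2.0 authorization-code flow that sits behind that kind of consent screen: the app sends the user's browser to the authorization server and names the scopes it wants. The endpoint, client_id, and redirect_uri are placeholders.]

    import secrets
    from urllib.parse import urlencode

    AUTHORIZE_ENDPOINT = "https://auth.example.com/authorize"   # placeholder
    params = {
        "response_type": "code",
        "client_id": "my-mobile-app",                           # placeholder
        "redirect_uri": "https://app.example.com/callback",     # placeholder
        "scope": "openid profile email",      # what the consent screen will show
        "state": secrets.token_urlsafe(16),   # anti-CSRF value, checked on return
    }
    print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))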

Gardner: How about OpenID Connect. This is with the OpenID Foundation. How does that relate, and what is it?

Stephens: If OAuth is the medium, OpenID Connect can be described as the content of the message. It's not the message itself. That's usually carried in a JSON (JavaScript Object Notation) Web Token, but OpenID Connect provides the actual identity information.

When you access an API and you authenticate, you choose a scope, and one of the most common scopes is the OpenID profile. This profile will have things like your username, maybe your address, and various other pieces of identity information. It describes who the "you" is -- who you are.
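[The sketch below, in Python with the standard library, shows the shape of those profile claims inside an OpenID Connect ID token. It only decodes the payload for inspection and does not verify the signature, which a real client must always do; the token and claim values here are fabricated for illustration.]

    import base64
    import json

    def peek_claims(id_token: str) -> dict:
        """Decode the middle (payload) segment of a JWT without verifying it."""
        payload = id_token.split(".")[1]              # format is header.payload.signature
        payload += "=" * (-len(payload) % 4)          # restore base64 padding
        return json.loads(base64.urlsafe_b64decode(payload))

    # Build a fake token just to show what the claims look like.
    claims = {"iss": "https://auth.example.com", "sub": "24400320",
              "name": "Pat Example", "preferred_username": "pat",
              "email": "pat@example.com"}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
    fake_token = "eyJhbGciOiJSUzI1NiJ9." + payload + ".signature"

    print(peek_claims(fake_token)["preferred_username"])   # -> pat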

Gardner: And SCIM -- you mentioned that, Kelly, and I know you have been involved with it. So why don't you take the primer for SCIM, which I believe is Simple Cloud Identity Management?

Grizzle: That's the historical name for it, Simple Cloud Identity Management. When we took the standard to the IETF, we realized that the problems that we were solving were a little bit broader than just the cloud and within the cloud. So the acronym now stands for the System for Cross-domain Identity Management.

That's kind of a mouthful, but the concept is pretty simple. SCIM is really just an API and a schema that allows you to manage identities and identity-related information. And by manage them, I mean to create identities in systems, to delete them, to update them, to change entitlements and group memberships, and things like that.
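[A minimal sketch of creating an identity with the SCIM 2.0 API, in Python with the requests library. The service-provider URL and bearer token are placeholders; the schema URN and attribute names come from the SCIM 2.0 core user schema.]

    import requests

    SCIM_BASE = "https://idm.example.com/scim/v2"   # placeholder service provider
    headers = {"Authorization": "Bearer <token>",
               "Content-Type": "application/scim+json"}

    new_user = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "bob",
        "name": {"givenName": "Bob", "familyName": "Smith"},
        "emails": [{"value": "bob@example.com", "primary": True}],
        "active": True,
    }

    resp = requests.post(f"{SCIM_BASE}/Users", json=new_user, headers=headers)
    print(resp.status_code)   # 201 Created if the provider accepted it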

Gardner: From your perspective, Kelly, what is the relationship then between OAuth and SCIM?

Managing identities

Grizzle: OAuth, as Bradford mentioned, is primarily geared toward authorization, and answers the question, "Can Bob access this top-secret document?" SCIM is really not in the authorization and authentication business at all. SCIM is about managing identities.

OAuth assumes that an identity is already present. SCIM is able to create that identity. You can create the user "Bob." You can say that Bob should not have access to that top-secret document. Then, if you catch Bob doing some illicit activity, you can quickly disable his account through a SCIM call. So they fit together very nicely.
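[And here is the "quickly disable Bob" call as a sketch, again in Python with requests. The PatchOp message format is from the SCIM 2.0 spec; the URL, user id, and token are placeholders.]

    import requests

    url = "https://idm.example.com/scim/v2/Users/2819c223-7f76-453a-919d-413861904646"  # placeholder
    headers = {"Authorization": "Bearer <token>",
               "Content-Type": "application/scim+json"}

    patch = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "path": "active", "value": False}],
    }

    resp = requests.patch(url, json=patch, headers=headers)
    print(resp.status_code)   # 200 OK with the updated resource if it worked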

Gardner: In the real world, developers like to be able to use APIs, but they might not be familiar with all the details that we've just gone through on some of these standards and security approaches.

How do we make this palatable to developers? How do we make this something that they can implement without necessarily getting into the nitty-gritty? Are there some approaches to making this a bit easier to consume as a developer?

Stephens: As a developer who's relatively new to this field -- I worked in databases for three years -- I've had personal experience of how hard it is to wrap your head around all the standards and all these flows. The best thing we can do is have tool providers give developers tools in their native language, or in the way developers work with things.

This needs well-documented, interactive APIs -- things like Swagger -- and lots of real-world code examples. Once you've actually gone through the process of authenticating through OAuth, getting a JSON Web Token, and getting an OpenID Connect profile, it's really simple to see how it all works together, if you do it all through a SaaS platform that handles the nitty-gritty, like user creation and all that.
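[As one of those real-world examples, here is a minimal sketch in Python with requests of the token step Stephens describes: redeem the authorization code at the token endpoint and get back an OAuth access token plus an OpenID Connect ID token. The endpoint, client credentials, and code value are placeholders, and a real native or mobile app should also use PKCE.]

    import requests

    TOKEN_ENDPOINT = "https://auth.example.com/token"          # placeholder
    data = {
        "grant_type": "authorization_code",
        "code": "<code-returned-to-the-redirect-uri>",         # placeholder
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-mobile-app",
        "client_secret": "<secret>",                           # confidential clients only
    }

    tokens = requests.post(TOKEN_ENDPOINT, data=data).json()
    print(tokens.get("access_token"))   # use this to call APIs
    print(tokens.get("id_token"))       # the OpenID Connect identity claims live in here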

If you have to roll your own, though, there's not a lot of information out there besides the WhitePages and Wall Post. It’s just a nightmare. I tried to roll my own. You should never roll your own.

So having SaaS platforms do all this stuff, instead of having documents, means that developers can focus on building their applications and just understand that they have a token carrying the information they need, rather than worrying about which tokens carry which information across OAuth and OpenID Connect.

I don’t really care how it all works together; I just know that I have this token and it has the information I need. And it’s really liberating, once you finally get there.

So I guess the best thing we can do is provide really great tools that solve the identity-management problems.

Tools: a key point

Garrett: Tools, that’s the key point here. Whether we like it or not, developers tend to be kind of lazy sometimes and they certainly don’t have the time or the energy to understand every facet of the OAuth specification. So providing tools that can wrap that up and make it as easy to implement as possible is really the only way that we get to really secure mobile applications or any API interaction. Because without a deep understanding of how this stuff works, you can make pretty fundamental errors.

Having said that, at least we've started to take steps in the right direction with the standards. OAuth is built at least with the idea of mobile access in mind. It’s leveraging REST and JSON types, rather than SOAP and XML types, which are really way too heavyweight for mobile applications.

So the standards, in their own right, have taken us in the right direction, but we absolutely need tools to make it easy for developers.

Grizzle: Tools are of the utmost importance, and some of the identity providers and people with skin in the game, so to speak, are helping to create these tools and to open-source them, so that they can be used by other people.

Another thing that Ross touched on was keeping simplicity in the spec. The things that we're addressing -- authorization, authentication, and managing identities -- are not always simple concepts. So in the standards that are being created, finding the right balance of complexity versus completeness and flexibility is a tough line to walk.

With SCIM, as you said, the first letter of the acronym used to stand for Simple. That's still a guiding principle we use to try to keep these interactions as simple as possible. SCIM uses REST and JSON, just like some of these other standards, and developers are familiar with that. Putting the implementation burden on the right parties is very important, too. Making it easy on clients, the ones who are going to be implementing these a lot, is pretty important.

Gardner: Do these standards do more than help the API economy settle out and mature? Cloud providers or SaaS providers want to provide APIs and they want the mobile apps to consume them. By the same token, the enterprises want to share data and want data to get out to those mobile tiers. So is there a data-management or brokering benefit that goes along with this? Are we killing multiple birds with one set of standards?

Garrett: The real issue here, when we think about the new types of products and services that the API economy is helping us deliver, is around privacy and ultimately customer confidence. Putting the user in control of who gets to access which parts of my identity profile, or how contextual information about me can perhaps make identity decisions easier, allows us to lock down, or better understand, these privacy concerns that the world has.

Identity isn't the most glamorous thing to talk about -- except when it all goes wrong, and some huge leak makes the news headlines, or some other security breach loses credit-card numbers or people's usernames and passwords.

Hand in hand

In terms of how identity services are developing the API economy, the two things go hand in hand. Unless people are absolutely certain about how their information is being used, they simply choose not to use these services. That's why all the work that the API-management vendors and the identity-management vendors are doing to bring those together is so important.

Gardner: You mentioned that identity might not be sexy or top of mind, but how else can you manage all these variables on an automated or policy-driven basis? When we move to the mobile tier, we're dealing with multiple networks. We're dealing with multiple services ... cloud, SaaS, and APIs. And then we're linking this back to enterprise applications. How other than identity can this possibly be managed?

Stephens: Identity is often thought of as usernames and passwords, but it’s evolving really quickly to be so much more. This is something I harp on a lot, but it’s really quickly becoming that who we are online is more important than who we are in real life. How I identify myself online is more important than the driver's license I carry in my wallet.

As you know, your driver’s license is like a real-life token of information that describes what you're allowed to do in your life. That’s part of your identity. Anybody who has lost their license knows that, without that, there's not a whole lot you can do.

Bringing that analogy back to the Internet, what you're able to access and what access you're able to give other people or other applications to change important things, like your Facebook posts, your tweets, or go through your email and help categorize that is important. All these little tasks that help define how you live, are all part of your identity. And it’s important that developers understand that because any connected application is going to have to have a deep sense of identity.

Gardner: Let me pose the same question, but in a different way. When you do this well, when you can manage identity, when you can take advantage of these new standards that extend into mobile requirements and architectures, with the API economy in mind, what do you get? What does it endow you with? What can you do that perhaps you couldn’t do if you were stuck in some older architectures or thinking?

Grizzle: Identity is key to everything we do. Like Bradford was just saying, the things that you do online are built on the trust that you have with who is doing them. There are very few services out there where you want completely anonymous access. Almost every service that you use is tied to an identity.

So it’s of paramount importance to get a common language between these. If we don’t move to standards here, it's just going to be a major cost problem, because there are a ton of different providers and clients out there.

If every provider tries to roll their own identity infrastructure, without relying on standards, then, as a client, if I need to talk to two different identity providers, I need to write to two different APIs. It’s just an explosive problem, with the amount that everything is connected these days.

So it’s key. I can’t see how the system will stand up and move forward efficiently without these common pieces in place.

Use cases

Gardner: Do we have any examples along these same lines of what do you get when you do this well and appropriately based on what you all think is the right approach and direction? We've been talking at a fairly abstract level, but it really helps solidify people’s thinking and understanding when they can look at a use-case, a named situation or an application.

Stephens: If you want a good example of how OAuth delegation works, building a Facebook app -- or just working through the Facebook app documentation -- is pretty straightforward. It gives you a good idea of what it means to delegate certain authorizations.

Likewise, Google is very good. It’s very integrated with OAuth and OpenID Connect when it comes to building things on Google App Engine.


So if you want to secure an API that you built using Google Cloud on Google App Engine, which is trivial to do, Google Cloud Endpoints provides a really good example. In fact, there's a button you can hit in their examples called Use OAuth, and that OAuth carries the OpenID Connect profile, so that's a pretty easy way to go about it.
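For readers who want to see what that delegation step looks like on the wire, here is a minimal sketch of building a standard OAuth 2.0 / OpenID Connect authorization request in Python. The endpoint, client ID, and redirect URI are placeholders, not Google- or Facebook-specific values; the parameter names come from the OAuth 2.0 and OpenID Connect specifications.

```python
import secrets
from urllib.parse import urlencode

# Placeholder values -- in practice these come from registering your app
# with whichever identity provider you are delegating to.
AUTHORIZATION_ENDPOINT = "https://idp.example.com/authorize"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://app.example.com/callback"

# "state" protects against CSRF; "nonce" binds the resulting ID token to this request.
state = secrets.token_urlsafe(16)
nonce = secrets.token_urlsafe(16)

params = {
    "response_type": "code",          # authorization-code flow (RFC 6749)
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid profile email",  # the "openid" scope turns plain OAuth into OpenID Connect
    "state": state,
    "nonce": nonce,
}

# Send the user's browser here; after consent the provider redirects back to
# REDIRECT_URI with ?code=...&state=..., which the app then exchanges for tokens.
print(f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}")
```

The delegation Stephens describes is exactly this: the user approves a scoped grant at the provider, and the app never sees the user's password.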

Garrett: I'll just take a simple consumer example, and we've touched on this already. In the past, every individual service or product offered only its own identity solution, so I had to create a new identity profile for every product or service I was using. This has been the case for a long time on the consumer web, and in the enterprise setting as well.

So we have to be able to solve that problem and offer a way to reuse existing identities. It involves taking technologies like OpenID Connect, which are totally hidden from the end user, and simply saying that you can use an existing identity -- your LinkedIn or Facebook credentials, for example -- to access some new product. That takes a lot of burden away from the consumer. Ultimately, it provides us a better security model end to end.

The thing that these new identity service providers have been offering has, behind the scenes, been making your lives more secure. Even though some people might shy away from using their Facebook identity across multiple applications, in many ways it’s actually better to, because that’s really one centralized place where I can actually see, audit, and adjust the way that I'm presenting my identity to other people.

That’s a really great example of how these new technologies are changing the way we interact with products every day.
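To make the "one centralized place to see and audit" point a bit more concrete, the sketch below shows the kind of claims an OpenID Connect ID token carries. The token is built locally here purely for illustration; real tokens are signed JWTs issued by the provider, and a relying party must verify the signature before trusting the claims.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, the way JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Illustrative ID-token payload: who issued it, who it is about, which client
# it was minted for, and when it expires. All values are made up.
claims = {
    "iss": "https://idp.example.com",   # issuer (the identity provider)
    "sub": "248289761001",              # stable subject identifier for the user
    "aud": "my-client-id",              # the relying party the token is for
    "iat": int(time.time()),            # issued-at
    "exp": int(time.time()) + 3600,     # expiry
    "email": "bjensen@example.com",
}

header = {"alg": "none", "typ": "JWT"}  # real tokens use a signing algorithm such as RS256
unsigned_jwt = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}."

# A relying party splits on '.', base64url-decodes the middle segment,
# and (only after signature verification) reads the claims.
payload_segment = unsigned_jwt.split(".")[1]
padded = payload_segment + "=" * (-len(payload_segment) % 4)
print(json.loads(base64.urlsafe_b64decode(padded)))
```

Because every relying party sees the same small, standard claim set, the provider's consent and audit screens become that single place to review what has been shared.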

Standardized approach

Grizzle: At SailPoint, the company that I work for, we have a client, a large chip maker, who has seen the identity problem and really been bitten by it within their enterprise. They have somewhere around 3,500 systems that have to be able to talk to each other, exchange identity data, and things like that.

The issue is that every time they acquire a new company or bring a new group into the fold, that company has its own set of systems that speak their own language, and it takes forever to get them integrated into their IT organization there.

So they've said that they're not going to support every app that these people bring into the IT infrastructure. They're going with SCIM, and they're saying that for all the apps that come in, if they speak SCIM, then they'll take ownership of those and pull them into their environment. It should just plug in nice and easy. They're doing it purely from a resourcing perspective. They can't keep up with the amount of change to their IT infrastructure and keep everything automated.


Gardner: I want to quickly look at the Cloud Identity Summit that’s coming up. It sounds like a lot of these issues are going to be top of mind there. We're going to hear a lot of back and forth and progress made.

Does this strike you, Bradford, as a tipping point of some sort, that this event will really start to solidify thinking and get people motivated? How do you view the impact of this summit on cloud identity?

Stephens: At CIS, we're going to see a lot of talk about real-world implementation of these standards. In fact, I'm running the Enterprise API track and I'll be giving a talk on end-to-end authentication using JAuth, OAuth, and OpenID Connect. This year is the year that we show that it's possible. Next year, we'll be hearing a lot more about people using it in production.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.


Tags:  API  Bradford Stephens  BriefingsDirect  Cloud Identity Summit  Dana Gardner  Identity management  Interarbor Solutions  OAuth  OpenID Connect  Ping Identity  Single sign-on 

 

How Capgemini's UK financial services unit helps clients manage risk using big data analysis

Posted By Dana L Gardner, Thursday, June 26, 2014

When Capgemini's business information management (BIM) practices unit needed to provide big data capabilities to its insurance company customers, it needed to deliver the right information to businesses much faster from the very bottom up.

That means an improved technical design and an architectural way of delivering information through business intelligence (BI) and analytics. The ability to bring together structured and unstructured data -- and to slice and dice that data rapidly, not only deploying it but also executing quickly for organizations -- was critical for Capgemini.

And that's because Capgemini's Financial Services Global Business Unit, based in the United Kingdom, must drive better value through its principal-level and senior-level consultants as they work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with many of those COO- and CFO-level customers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Our next innovation case study interview therefore highlights how Capgemini is using big data and analysis to help its organization clients better manage risk.

To learn first-hand how big data and analysis help its Global 500 clients draw the most pressing insights from huge data volumes, BriefingsDirect interviewed Ernie Martinez, Business Information Management Head at the Capgemini Financial Services Global Business Unit in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Risk has always been with us. But is there anything new, pressing, or different about the types of risks that your clients are trying to reduce and understand?

Martinez

Martinez: I don't think it's as much about what's new within the risk world, as much as it's about the time it takes to provision the data so companies can make the right decisions faster, therefore limiting the amount of risk they may take on in issuing policies or taking on policies with new clients.

Gardner: In addition to the risk issue, of course, there is competition. The speed of business is picking up, and we’re still seeing difficult economic climates in many markets. How do you step into this environment and find a technology that can improve things? What have you found?

Martinez: There is the technology aspect of delivering the right information to business faster. There is also the business-driven way of delivering that information faster to business.

Bottom up

The BIM practice is a global practice. We’re ranked in the upper right-hand quadrant by Gartner as one of the best BIM practices out there, with about 7,000 BIM resources worldwide.

Our focus is on driving better value to the customer. So we have principal-level and senior-level consultants who work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with a lot of those COO- and CFO-level customers.

We then drive more business into the technical design and architectural way of delivering information in business intelligence (BI) and analytics. Once we define what the road to good looks like for an organization -- when you talk about integrating information across the enterprise -- it's about what that path to good looks like and what key initiatives an organization must undertake to get there.

This is where our technical design, business analysis, and data analysis consultants fit in. They’re actually going in to work with the business to define what they need to see out of their information to help them make better decisions.

Gardner: Of course, the very basis of this is to identify the information, find the information, and put the information in a format that can be analyzed. Then, do the analysis, speed this all up, and manage it at scale and at the lowest possible cost. It’s a piece of cake, right? Tell us about the process you go through and how you decide what solutions to use and where the best bang for the buck comes from?

Martinez: Our approach is to take that senior-level expertise in big data and analytics, bring that into our practice, put that together with our business needs across financial services, insurance, and capital markets, and begin to define valid use cases that solve real business problems out there.

We’re a consulting organization, and I expect our teams to be able to be subject matter experts on what's happening in the space and also have a good handle on what the business problems are that our customers are facing. If that’s true, then we should be able to outline some valid use cases that are going to solve some specific problems for business customers out there.

In doing so, we’ll define that use case. We’ll do the research to validate that indeed it is a business problem that's real. Then we’ll build the business case that outlines that if we do build this piece of intellectual property (IP), we believe we can go out and proactively affect the marketplace and help customers out there. This is exactly what we did with HP and the HAVEn platform.

Why Capgemini and our BIM practices jumped in with a partnership with HP and Vertica in the HAVEn platform is really about the ability to deliver the right information to business faster from the bottom up. That means the infrastructure and the middleware by which we serve that data to business. From the top down, we work with business in a more iterative fashion in delivering value quickly out of the data that they are trying to harvest.

Wide applicability

Gardner: So we’re talking about a situation where you want to have wide applicability of the technology across many aspects of what you are doing, that make sense economically, but of course it also has to be the right tool for the job, that's to go deep and wide. You’re in a proof-of-concept (POC) stage. How did you come to that? What were some of the chief requirements you had for doing this at that right balance of deep and wide?

Martinez: We, as an organization, believe that our goal as BI and analytics professionals is to deliver the right information faster to the business. In doing so, you look at the technologies out there that are positioned to do that. You look at the business partners that have that mentality to actually execute in that manner. And then you look at an organization, like ours, whose sole purpose is to mobilize quickly and deliver value to the customer.

I think it was a natural fit. When you look at HP Vertica in the HAVEn platform, the ability to integrate social media data through Autonomy and then of course through Vertica and Hadoop -- the integration of the entire architecture -- gives us the ability to do many things.

But number one, it's the ability to bring in structured and unstructured data, and be able to slice and dice that data in a rapid fashion; not only deploy it, but also execute rapidly for organizations out there.


Over the course of the last six months of 2013, that conversation began to blossom into a relationship. We all work together as a team, and we think we can mobilize not just the application or the solution that we’re thinking about, but the entire infrastructure, and deliver it to our customers quickly. That's where we’re at.

What that means is that once we partnered and got the go ahead with HP Vertica to move forward with the POC, we mobilized a solution in less than 45 days, which I think shows the value of the relationship from the HP side as well as from Capgemini.

Gardner: Down the road, after some period of implementation, there are general concerns about scale when you’re dealing with big data. Because you’re near the beginning of this, how do you feel about the ability for the platform to work to whatever degree you may need?

Martinez: Absolutely no concern at all. Being here at HP Discover has certainly solidified in my mind that we’re betting on the right horse with their ability to scale. If you heard some of the announcements coming out, they’re talking about the ability to take on big data. They’re using Vertica and the HAVEn network.

There’s absolutely zero question in my mind that organizations out there can leverage this platform and grow with it over time. Also, it gives us the ability to be able to do some things that we couldn’t do a few years back.

Business value

Gardner: Ernie, let's get back to the business value here. Perhaps you can identify some of the types of companies that you think would be in the best position to use this. How will this hit the road? What are the sweet spots in the market -- the applications you think are most urgent and the right fit for this?

Martinez: When you talk about the largest insurers around the world, whether from Zurich to Farmers in the US to Liberty Mutual, you name it, these are some of our friendly customers that we are talking to that are providing feedback to us on this solution.

We’ll incorporate that feedback. We’ll then take that to some targeted customers in North America, UK, and across Europe, that are primed and in need of a solution that will give them the ability to not only assess risk more effectively, but reduce the time to be able to make these type of decisions.

Reducing the time to provision data reduces costs by integrating data across multiple sources, whether it be customer sentiment from the Internet, from Twitter and other areas, to what they are doing around their current policies. It allows them to identify customers that they might want to go after. It will increase their market share and reduce their costs. It gives them the ability to do many more things than they were able to do in the past.


Gardner: And Capgemini is in the position of mastering this platform and being able to extend the value of that platform across multiple clients and business units. Therefore, that reduces the total cost of that technology, but at the same time, you’re going to have access to data across industries, and perhaps across boundaries that individual organizations might not be able to attain.

So there's a value-add here in terms of your penetration into the industry and then being able to come up with the inferences. Tell me a little bit about how the access-to-data benefit works for you?

Martinez: If you take a look at the POC, or the use case that the POC was built on, it was built on commercial insurance risk assessment. If you take a look at the underlying architecture around commercial insurance risk, our goal was to build an architecture that would serve the use case that HP bought into, but at the same time, flatten out that data model and architecture to also bring in better customer analytics for commercial insurance risk.

So we’ve flattened out that model and we’ve built the architecture so we could go after additional business, instead of more clients, across not just commercial insurance, but also general insurance. Then, you start building in the customer analytics capability within that underlying architecture and it gives us the ability to go from the insurance market over to the financial services market, as well as into the capital markets area.

Gardner: All the data in one place makes a big difference.

Martinez: It makes a huge difference, absolutely.

Future plans

Gardner: Tell us a bit about the future. We’ve talked about a couple of aspects of the HAVEn suite. Autonomy, Vertica, and Hadoop seem to be on everyone's horizon at some point or another due to scale and efficiencies. Have you already been using Hadoop, or how do you expect to get there?

Martinez: We haven’t used Hadoop, but certainly, with its capability, we plan to. I’ve done a number of different strategies and roadmaps in engaging with larger organizations, from American Express to the largest retailer in the world. In every case, they have a lot of issues around how they’re processing the massive amounts of data that are coming into their organization.

When you look at the extract, transform, load (ETL) processes by which they are taking data from systems of record, trying to massage that data and move it into their large databases, they are having issues around load and meeting load windows.

The HAVEn platform, in itself, gives us the ability to leverage Hadoop, maybe take out some of that processing pre-ETL, and then, before we go into the Vertica environment, take out some of that load and make Vertica even more efficient than it is today, which is one of its biggest selling points. It certainly is in our plans.
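As a very simplified sketch of the "take the load out before Vertica" idea Martinez describes, the Python below pre-aggregates raw events into an hourly summary and then prints the kind of bulk-load statement Vertica accepts. The table name, file paths, and the raw events themselves are invented for illustration; a real pipeline would do this shaping step in Hadoop at far larger scale, and the COPY statement is shown only as illustrative Vertica syntax.

```python
import csv
from collections import Counter
from datetime import datetime

# Invented raw events -- in practice these would be files landing in Hadoop.
raw_events = [
    {"ts": "2014-06-26T10:02:11", "policy_type": "commercial", "event": "quote"},
    {"ts": "2014-06-26T10:17:45", "policy_type": "commercial", "event": "claim"},
    {"ts": "2014-06-26T11:03:02", "policy_type": "general",    "event": "quote"},
]

# Pre-aggregate to (hour, policy_type, event) counts so the warehouse loads a
# much smaller, already-shaped data set instead of every raw record.
counts = Counter()
for e in raw_events:
    hour = datetime.strptime(e["ts"], "%Y-%m-%dT%H:%M:%S").strftime("%Y-%m-%d %H:00")
    counts[(hour, e["policy_type"], e["event"])] += 1

with open("hourly_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for (hour, policy_type, event), n in sorted(counts.items()):
        writer.writerow([hour, policy_type, event, n])

# Illustrative Vertica bulk load of the summarized file (schema and table are placeholders).
copy_stmt = (
    "COPY risk.hourly_summary (event_hour, policy_type, event, event_count) "
    "FROM LOCAL 'hourly_summary.csv' DELIMITER ',' DIRECT;"
)
print(copy_stmt)
```

Shrinking the load this way is what shortens the load windows Martinez mentions: the analytic database spends its time answering queries rather than ingesting raw detail it will only aggregate anyway.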


Gardner: Another announcement here at Discover has been around converged infrastructure, where they’re trying to make the hardware-software efficiency and integration factor come to bear on some of these big-data issues. Have you thought about the deployment platform as well as the software platform?

Martinez: You bet. At the beginning of this interview, we talked about the ability to deliver the right information faster to business. This is a culture that organizations absolutely have to adopt if they are going to be able to manage the amount of data at the speed at which that data is coming to their organizations. To be able to have a partner like HP who is talking about the convergence of software and infrastructure all at the same time to help companies manage this better, is one of the biggest reasons why we're here.

We, as a consulting organization, can provide the consulting services and solutions that are going to help deliver the right information, but without that infrastructure, without that ability to be able to integrate faster and then be able to analyze what's happening out there, it’s a moot point. This is where this partnership is blossoming for us.

Gardner: Before we sign off, Ernie, now that you have gone through this understanding and have developed some insights into the available technologies and made some choices, is there any food for thought for others who might just be beginning to examine how to enter big data, how to create a common platform across multiple types of business activities? What did you not think of before that you wish you had known?

Lessons learned

Martinez: If I look back at lessons learned over the last 60 to 90 days within this process, it’s one thing to say that you're mobilizing the team right from the bottom up, meaning from the infrastructure and the partnership with HP, as well as from the top down, with your business needs -- finding the right business requirements and then actually building to that solution.

In most cases, we’re dealing with individuals. While we might talk about an entrepreneurial way of delivering solutions into the marketplace, we need to challenge ourselves, and all of the resources that we bring into the organization, to actually have that mentality.

What I’ve learned is that while we have some very good tactical individuals, having that entrepreneurial way of thinking and actually delivering information that way is a different mindset altogether. It's about mentoring the resources we currently have, bringing in talent that has more of an entrepreneurial way of delivering, and building go-to-market solutions within our organization.

I didn’t really think about the impact of our current resources and how it would affect them. We were a little slow as we started the POC. Granted, we did this in 45 days, so that’s the perfectionist coming out in me, but I’d say it did highlight a couple of areas within our own team that we can improve on.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Capgemini  Dana Gardner  data analytics  Ernie Martinez  Hadoop  HAVEn  HP  HP Vertica  HPDiscover  Interarbor Solutions 

 

The Open Group Amsterdam panel delves into how to best gain business value from Open Platform 3.0

Posted By Dana L Gardner, Tuesday, June 24, 2014
Updated: Tuesday, June 24, 2014

The next BriefingsDirect panel discussion defines new business values from the massive Open Platform 3.0 shift that combines the impacts and benefits of big data, cloud, Internet of Things, mobile, and social.

 

Our discussion comes to you from The Open Group Conference held on May 13, 2014 in Amsterdam, where the focus was on enabling boundaryless information flow.

 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

 

To learn more about making Open Platform 3.0 a business benefit in an architected fashion, please join moderator Stuart Boardman, a Senior Business Consultant at KPN and Open Platform 3.0 Forum co-chairman; Dr. Chris Harding, Director for Interoperability at The Open Group, and Open Platform 3.0 Forum Director; Lydia Duijvestijn, Executive Architect at IBM Global Business Services in The Netherlands; Andy Jones, Technical Director for EMEA at SOA Software; TJ Virdi, Computing Architect in the Systems Architecture Group at Boeing and also a co-chair of the Open Platform 3.0 Forum; Louis Dietvorst, Enterprise Architect at Enexis in The Netherlands; Sjoerd Hulzinga, Charter Lead at KPN Consulting, and Frans van der Reep, Professor at the Inholland University of Applied Sciences.

 

Here are some excerpts:

 

Boardman: Welcome to the session about obtaining value from Open Platform 3.0, and how we're actually going to get value out of the things that we want to implement from big data, social, and the Internet-of-Things, etc., in collaboration with each other. 

 

Boardman

We're going to start off with Chris Harding, who is going to give us a brief explanation of what the platform is, what we mean by it, what we've produced so far, and where we're trying to go with it. 

 

He'll be followed by Lydia Duijvestijn, who will give us a presentation about the importance of non-functional requirements (NFRs). If we talk about getting business value, those are absolutely central. Then, we're going to go over to a panel discussion with additional guests. 

 

Without further ado, here's Chris Harding, who will give you an introduction to Open Platform 3.0. 

 

Purpose of architecture

 

Harding: Hello, everybody. It's a great pleasure to be here in Amsterdam. I was out in the city by the canals this morning. The sunshine was out, and it was like moving through a set of picture postcards. 

 

Harding

It's a great city. As you walk through, you see the canals, the great buildings, the houses to the sides, and you see the cargo hoists up in the eaves of those buildings. That reminds you that the purpose of the arrangement was not to give pleasure to tourists, but because Amsterdam is a great trading city, that is a very efficient way of getting goods distributed throughout the city. 

 

That's perhaps a reminder to us that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well. 

 

Probably when those canals were first thought of, it was not obvious that this was the right thing to do for Amsterdam. Certainly it would not be obvious that this was the right layout for that canal network, and that is the exciting stage that we're at with Open Platform 3.0 right now.

 

We have developed a statement, a number of use cases. We started off with the idea that we were going to define a platform to enable enterprises to get value from new technologies such as cloud computing, social computing, mobile computing, big data, the Internet-of-Things, and perhaps others.

 

We developed a set of business use cases to show how people are using and wanting to use those technologies. We developed an Open Group business scenario to capture the business requirements. That then leads to the next step. All these things sound wonderful, all these new technologies sound wonderful, but what is Open Platform 3.0? 

 

Jones

Though we don't have the complete description of it yet, it is beginning to take shape. That's what I am hoping to share with you in this presentation, our current thoughts on  it. 

 

Looking historically, the first platform, you could say, was operating systems -- the Unix operating system. The reason why The Open Group, X/Open in those days, got involved was because we had companies complaining, "We are locked into a proprietary operating system or proprietary operating systems. We want applications portability." The value delivered through a common application environment, which was what The Open Group specified for Unix, was to prevent vendor lock-in. 

 

The second platform is the World Wide Web. That delivers a common services environment, for services either through accessing web pages for your browser or for web services where programs similarly can retrieve or input information from or to the web service. 

 

The benefit that that has delivered is universal deployment and access. Pretty much anyone or any company anywhere can create a services-based solution and deploy it on the web, and everyone anywhere can access that solution. That was the second platform. 

 

Common environment

 

The way Open Platform 3.0 is developing is as a common architecture environment, a common environment in which enterprises can do architecture, not as a replacement for TOGAF. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. 

 

Open Platform 3.0 is more about what kind of architecture you will create, and by the definition of a common environment for doing this, the big business benefit that will be delivered will be integrated solutions. 

 

Yes, you can develop a solution, anyone can develop a solution, based on services accessible over the World Wide Web, but will those solutions work together out of the box? Not usually. Very rarely. 

There is an increasing need, which we have come upon in looking at the Open Platform 3.0 technologies. People want to use these technologies together. There are solutions developed for those technologies independently of each other that need to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently. That's what I am going to talk about.

 

The Open Group has recently published its first thoughts on Open Platform 3.0, that's the White Paper. I will be saying what’s in that White Paper, what the platform will do -- and because this is just the first rough picture of what Open Platform 3.0 could be like -- how we're going to complete the definition. Then, I will wrap up with a few conclusions. 

 

So what is in the current White Paper? Well, what we see as being eventually in the Open Platform 3.0 standards are a number of things. You could say that a lot of these are common architecture artifacts that can be used in solution development, and that's why I'm talking about a common architecture environment.

 

The statement of need, objectives, and principles is not an architecture artifact, of course; it's why we're doing it.

 

Dietvorst

Definition of key terms: clearly you have to share an understanding of the key terms if you're going to develop common solutions or integrable solutions. 

 

Stakeholders and their concerns: an important feature of an architecture development. An understanding of the stakeholders and their concerns is something that we need in the standard. 

 

A capabilities map that shows what the products and services do that are in the platform. 

 

And basic models that show how those platform components work with each other and with other products and services. 

 

Explanation: this is an important point and one that we haven’t gotten to yet, but we need to explain how those models can be combined to realize solutions. 

 

Standards and guidelines

 

Finally, it's not enough to just have those models; there needs to be the standards and guidelines that govern how the products and services interoperate. These are not standards that The Open Group is likely to produce. They will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, probably in some cases, coordinate with the appropriate bodies to see that they are developed.

 

van der Reep

What we have in the White Paper is an initial statement of needs, objectives, and principles; definitions of some key terms; our first-pass list of stakeholders and their concerns; and maybe half a dozen basic models. These are in an analysis of the use cases, the business use cases, for Open Platform 3.0 that were developed earlier. 

 

These are just starting points, and it's incomplete. Each of those sections is incomplete in itself, and of course we don't have the complete set of sections. It's all subject to change. 

 

This is one of the basic models that we identified in the snapshot. It's the Mobile Connected Device Model and it comes up quite often. And you can see, that stack on the left is a mobile device, it has a user, and it has a platform, which would probably be Android or iOS, quite likely. And it has infrastructure that supports the platform. It’s connected to the World Wide Web, because that’s part of the definition of mobile computing. 

 

On the right, you see -- and this is a frequently encountered pattern -- that you don't just use your mobile phone for running an app. Maybe you connect it to a printer. Maybe you connect it to your headphones. Maybe you connect it to somebody's payment terminal. You might connect it to various things. You might do it through USB. You might do it through Bluetooth. You might do it by near field communication (NFC).

 

But you're connecting to some device, and that device is being operated possibly by yourself, if it was headphones; and possibly by another organization if, for example, it was a payment terminal and the user of the mobile device has a business relationship with the operator of the connected device.

 

That’s the basic model. It's one of the basic models that came up in the analysis of use cases, which is captured in the White Paper. As you can see, it's fundamental to mobile computing and also somewhat connected to the Internet-of-Things.
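One way to read that description is as a handful of related entities. The dataclass sketch below is only an editorial paraphrase of the model as Harding explains it, not anything taken from the White Paper itself; all field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Connection(Enum):
    USB = "usb"
    BLUETOOTH = "bluetooth"
    NFC = "nfc"

@dataclass
class MobileDevice:
    user: str                    # the person operating the device
    platform: str                # e.g. "Android" or "iOS"
    infrastructure: str          # what supports the platform
    web_connected: bool = True   # web connectivity is part of the definition of mobile computing

@dataclass
class ConnectedDevice:
    kind: str                    # printer, headphones, payment terminal, ...
    operator: str                # yourself, or another organization
    connection: Connection
    business_relationship: Optional[str] = None  # e.g. the user's relationship with the terminal operator

# Example: a phone paying at a merchant's terminal over NFC.
phone = MobileDevice(user="customer", platform="Android", infrastructure="mobile network")
terminal = ConnectedDevice(kind="payment terminal", operator="merchant",
                           connection=Connection.NFC,
                           business_relationship="customer-of-merchant")
```

Capturing the model this explicitly is what makes it reusable across use cases: two independently developed solutions that agree on these entities are much easier to integrate.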

 

That's the kind of thing that's in the current White Paper -- a specific example of the models it contains. Let’s move on to what the platform is actually going to do.

 

There are three slides in this section. This slide is probably familiar to people who have watched presentations on Open Platform 3.0 previously. It captures our understanding of the need to obtain information from these new technologies, the social media, the mobile devices, sensors, and so on, the need to process that information, maybe on the cloud, and to manage it, stewardship, query and search, all those things. 

 

Ultimately, and this is where you get the business value, it delivers it in a form where there is analysis and reasoning, which enables enterprises to take business decisions based on that information.

 

So that’s our original picture of what Open Platform 3.0 will do. 

 

IT as broker

 

This next picture captures a requirement that we picked up in the development of the business scenario. A gentleman from Shell gave an excellent presentation this morning. One of the things you may have picked up from him was that the IT department is becoming a broker.

 

Traditionally, you would have had the business use in the business departments and pretty much everything else on that slide in the IT department, but two things are changing. One, the business users are getting smarter, more able to use technology; and two, they want to use technology either themselves or to have business technologists closely working with them.

 

Systems provisioning and management is often going out to cloud service providers, and the programming, integration, and helpdesk is going to brokers, who may be independent cloud brokers. This is the IT department in a broker role, you might say. 

 

But the business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, it's not a good defense to say, "Well, our broker did it that way." You are responsible. 

 

Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved. And that’s a way of using IT that Open Platform 3.0 must and will accommodate. 

 

Finally, I mentioned the integration of independently developed solutions. This next slide captures how that can be achieved. Both of these, by the way, are from the analysis of business use cases. 

 

You'll also notice they are done in ArchiMate, and I'll give ArchiMate a little plug at this point, because we have found it very useful in doing this analysis.

 

But the point is that if those solutions share a common model, then it's much easier to integrate them. That's why we're looking for Open Platform 3.0 to define the common models that you need to access the technologies in question.

 

It will also have common artifacts, such as architectural principles, stakeholders, definitions, descriptions, and so on. If the independently developed architectures use those, it will mean that they can be integrated more easily.

 

So how are we going to complete the definition of Open Platform 3.0? This slide comes from our business use cases’ White Paper and it shows the 22 use cases we published. We've added one or two to them since the publication in a whole range of areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on. Those use cases touch on a wide variety of areas.

 

You can see that we've started an analysis of those use cases. This is an ArchiMate picture showing how our first business use case, The Mobile Smart Store, could be realized. 

 

Business layer

 

And as you look at that, you see common models. If you notice, that is pretty much the same as the TOGAF Technical Reference Model (TRM) from the year dot. We've added a business layer. I guess that shows that we have come architecturally a little way in that direction since the TRM was defined. 

 

But you also see that the same model actually appears in the same use case in a different place, and it appears all over the business use cases.

 

But you can also see there that the Mobile Connected Device Model has appeared in this use case and is appearing in other use cases. So as we analyze those use cases, we're finding common models that can be identified, as well as common principles, common stakeholders, and so on. 

 

So we have a development cycle, whereby the use cases provide an understanding. We'll be looking not only at the ones we have developed, but also at things like the healthcare presentation that we heard this morning. That is really a use case for Open Platform 3.0 just as much as any of the ones that we have looked at. We'll be doing an analysis of those use cases and the specification and we'll be iterating through that.  

 

The White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately The Open Platform 3.0 standard, and no doubt, more than one version of that standard.

 

In conclusion, Open Platform 3.0 provides a common environment for architecture development. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially new technologies. 

 

Cognitive computing, no doubt, has been suggested as another technology that Open Platform 3.0 might, in due course, accommodate. What would that lead to? That would lead to additional use cases and further analysis, which would no doubt identify some basic models for cognitive computing, which would be added to the platform.

 

Open Platform 3.0 enables enterprise IT to be user-driven. This is really the revolution on that slide that showed the IT department becoming a broker, and devolvement of IT to cloud suppliers and so on. That's giving users the ability to drive IT directly themselves, and the platform will enable that. 

 

It will deliver the ability to integrate solutions that have been independently developed, with independently developed architectures, and to do that within a business ecosystem, because businesses typically exist within one or more business ecosystems. 

 

Those ecosystems are dynamic. Partners join, partners leave, and businesses cannot necessarily standardize the whole architecture across the ecosystem. It would be nice to do so, but by the time you finish the job, the business opportunity would be gone. 

 

So integration of independently developed architectures is crucial to the world of business ecosystems and to delivering value within them.

 

Iterative process

 

The platform will deliver that and is being developed through an iterative process of understanding the content, analyzing the use cases, and documenting the common features, as I have explained.

 

The development is being done by The Open Platform 3.0 Forum, which is made up of representatives of Open Group members. They are defining the platform. And the Forum is not only defining the platform; it's also working on standards and guides in the technology areas.

 

For example, we have reformed a group to develop a White Paper on big data. If you want to learn about that, Ken Street, who is one of the co-chairs, is in this conference. And we also have cloud projects and other projects.

 

But not only are we doing the development within the Forum, we welcome input and comments from other individuals within and outside The Open Group and from other industry bodies. That’s part of the purpose of publishing the White Paper and giving this presentation to obtain that input and comment. 

 

If you need further information, here's where you can download the White Paper from. You have to give your name and email address and have an Open Group ID and then it's free to download. 

 

If you are looking for deeper information on what the Forum is doing, the Forum Plato page, which is the next URL, is the place to find it. Nonmembers get some information there; Forum members can log in and get more information on our work in progress. 

 

If your organization is not a member of The Open Group, you can find out about Open Group membership from that URL. So thank you very much for your attention.

 

Boardman: Next is Lydia Duijvestijn, who is one of these people who, years ago when I first got involved in this business, we used to call Technical Architects, when the term meant something. The Technical Architect was the person who made sure that the system actually did what the business needed it to do, that it performed, that it was reliable, and that it was trustworthy. 

 

That's one of her preoccupations. Lydia is going to give us a short presentation about some ideas that she is developing and is going to contribute to The Open Platform 3.0. 

 

Quality of service

 

Duijvestijn: Like Stuart said, I'm an architect by profession, as well as your conventional performance engineer. I lead a worldwide community within IBM for performance and competency. I've been working for a couple of years with the Dutch Research Institute on projects around quality of service. That basically is my focus area within the business. I work for Global Services within IBM.

 

Duijvestijn

What I want to achieve with this presentation is for you to get a better awareness of what non-functional requirements, non-functional characteristics, or quality-of-service characteristics are, and why they won't just appear out of the blue when the new world of Platform 3.0 comes along. They are getting more and more important.

 

I will zoom in very briefly on three categories: performance and scalability, availability and business continuity, and security and privacy. I'm not going to talk in detail about these topics. I could do that for hours, but we don’t have the time.

 

Then, I'll briefly start the discussion on how that reflects into Platform 3.0. The goal is that when we're here next year at the same time, maybe we would have formed a stream around it and we would have many more ideas, but now, it's just in the beginning.

 

This is a recap, basically, of what a non-functional requirement is. We have to start the presentation with that, because maybe not everybody knows this. They basically are qualities or constraints that must be satisfied by the IT system. But normally, they're not the highest priority. Normally, it's functionality first and then the rest. We find out about the rest later, when the thing is going into production, and then it's too late.

 

So what sorts of non-functionals do we have? We have run-time non-functionals, things that can be observed at run-time, such as performance and availability. We also have non-run-time non-functionals, things that cannot be observed at run-time, such as maintainability, but they are all very important for the system.

 

Then, we have constraints, limitations that you have to be aware of. It looks like in the new world, there are no limitations, cloud is endless, but in fact it's not true. 

 

Non-functionals are fairly often seen as a risk. If you don't pay attention to them, very nasty things can happen. You could lose business. You could lose image. And many other things could happen to you. It's not seen as something positive to work on; it's seen as a risk if you don’t do it, but it's a significant risk.

 

We've seen occasions where a system was developed that was really doing what it should do in terms of functionality. Then, it was rolled into production, all these different users came along, and the website completely collapsed. The company was in the newspapers, and it was a very bad place to be in. 

 

As an example, I took this picture in Badaling Station, near the Great Wall. I use this in my performance class. This depicts a mismatch between the workload pattern and the available capacity. 

 

What happens here is that you take the train in the morning and walk over to Great Wall. Then you've seen it, you're completely fed up with it, and you want to go back, but you have to wait until 3 o’clock for the first train. The Chinese people are very patient people. So they accept that. In the Netherlands people would start shouting and screaming, asking for better.

 

Basic mismatch

 

This is an example from real life, where you can have a very dissatisfied user because there was a mismatch between the workload, the arrival pattern, and available capacity. 

 

But it can get much worse. Here we have listed a number of newspaper quotes resulting from security incidents. This is something that really bothers companies. This is also non-functional. It's really very important, especially when we go toward always on, always accessible, anytime, anywhere. This is really a big issue.

 

There are many, many non-functional aspects, as you can see. This guy is not making sense out of it. He doesn’t know how to balance it, because it's not as if you can have them all. If you put too much focus on one, it could be bad for the other. So you really have to balance and prioritize. 

 

Not all non-functionals are equally important. We picked three of them for our conference in February: performance, availability and security. I now want to talk about performance.  

 

Everybody recognizes this picture. This was Usain Bolt winning his 100 meters in London. Why did I put this up? Because it very clearly shows what it's all about in performance. There are three attributes that are important.

 

You have the response time -- basically the 100-meter time from start to finish.

 

You have the throughput, which is the number of items that can be processed within a given time; if this is an eight-lane track, you can have only eight runners at the same time. And the capacity is basically the fact that this is an eight-lane track. The three are all dependent on each other. It's very simple, but you have to be aware of all of them when you start designing your system. So this is performance.
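The three attributes Duijvestijn names are tied together by Little's Law (concurrency = throughput x response time). The sketch below simply applies that relationship to her eight-lane-track analogy; the 10-second run time is an assumed figure for illustration.

```python
def throughput(concurrency: float, response_time_s: float) -> float:
    """Little's Law rearranged: throughput = concurrency / response time."""
    return concurrency / response_time_s

# Eight lanes (capacity caps concurrency at 8 runners at once) and an assumed
# ~10 s per 100 m run (the response time).
lanes = 8
run_time_s = 10.0

x = throughput(lanes, run_time_s)            # runners finishing per second
print(f"{x:.2f} runners/second, {x * 3600:.0f} runners/hour")

# Halve the response time or add lanes and the achievable throughput changes
# accordingly -- which is why the three attributes have to be designed together.
```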

 

Now, let’s go to availability. That is really a very big point today, because with the coming of the Internet in the '90s, availability really became important. We saw that when companies started opening up their mainframes for the Internet: they weren't designed to be open all the time; they were designed around scheduled downtime. Companies such as eBay, Amazon, and Google are setting the standard.

 

We come to a company, and they ask us for our performance engineering. We ask them what their non-functional requirements are. They tell us that it has to be as fast as Google.

 

Well, you're not doing the same thing as Google; you are doing something completely different. Your infrastructure doesn’t look as commodity as Google's does. So how are you going to achieve that? But that is the perception. That is what they want. They see that coming their way.

 

Big challenge

 

They're using mobile devices, and they want the same in the company. That is the standard, and disaster recovery is slowly going away; recovery time and recovery point objectives (RTO/RPO) are going to zero. It's really a challenge. It's a big challenge.

 

The future is never-down, technology-independent service, and it's very important for customer satisfaction. This is a big thing.

 

Now, a little bit about security incidents. I'm not a security specialist. This was prepared by one of my colleagues. Her presentation shows that nothing is secure, nothing, and you have all these incidents. This comes from a report that tracks over several months what sort of incidents are happening. When you see this, you really get frightened. 

 

Is there a secure site? Maybe, they say, but in fact, no, nothing is secure. This is also very important, especially nowadays. We're sharing more and more personal information over the net. It's really important to think about this. 

 

What does this have to do with Platform 3.0? I think I answered it already, but let's make it a little bit more specific. Open Platform 3.0 has a number of constituents, and Chris has introduced that to you. 

 

I want to highlight the following clouds, the ones with the big letters in it. There is Internet-of-Things, social, mobile, cloud, big data, but let’s talk about this and briefly try to figure out what it means in terms of non-functionals. 

 

In the Internet of Things, we have all these devices and sensors creating huge amounts of data, collected by very many different devices all over the place.

 

If this is about healthcare, you can understand that privacy must be ensured. So security and privacy are very important in that respect. And it doesn’t come for free. We have to design it into the systems.

 

Now, big data. We have the four Vs there: Volume, Variety, Velocity, and Veracity. That already suggests a high focus on non-functionals -- volume and velocity point to performance, veracity points to security -- and also availability, because you need this information instantaneously. When decisions have to be made based on it, it has to be there.

 

So non-functionals are really important for big data. We wrote a white paper about this, and it's very highly rated. 

 

Cloud has the specific capability of handling multi-tenant environments. So we have to make sure that the information of one tenant doesn't end up in another tenant’s environment. That's a very important security problem again. There are different workloads coming in parallel, because all these tenants have their own very specific types of workloads. So we have to handle and balance that. That’s a performance problem.

 

Non-functional aspects

 

Again, there are a lot of non-functional aspects. For mobile and social, the issue is that you have to be always on, always there, accessible from anywhere. In social especially, you want to share your photos and your personal data with your friends. So it's security and privacy again.

 

It's actually very important in Platform 3.0 and it doesn’t come for free. We have to design it into our model. 

 

That's basically my presentation. I hope that you enjoyed it and that it has made you aware of this important problem. I hope that, in the next year, we can start really thinking about how to incorporate this in Platform 3.0. 

Boardman: Let me introduce the panelists: Andy Jones of SOA Software, TJ Virdi from Boeing, Louis Dietvorst from Enexis, Sjoerd Hulzinga from KPN, and Frans van der Reep from Inholland University. 

 

We want the panel to think about what they've just heard and what they would like Platform 3.0 to do next. What is actually going to be the most important, the most useful, for them -- which is not necessarily what we have thought of.

 

Jones: The subject of interoperability, the semantic layer, is going to be a permanent and long-running problem. We're seeing some industries -- for example, clinical trials data -- where they can see movement in that area. Some financial services businesses are trying to abstract their information models, but without semantic alignment, the vision of the platform is going to be difficult to achieve.

 

Dietvorst: For my vision of Platform 3.0 and what it should support, I am very much in favor of giving the consumer or the asking party the lead -- empower them. If you develop this kind of platform thinking, you should do it with your stakeholders and not for your stakeholders. And I wonder how we can engage those kinds of stakeholders so that they become co-creators. I don’t know the answer.

 

Male Speaker: Neither do I, but I feel that what The Open Group should be doing next on the platform is, just as my neighbor said, keep the business perspective, the user perspective, continuously in your focus, because basically that’s the only reason you're doing it. 

 

In the presentation just now from Lydia about NFRs, you need to keep in mind that one of the most difficult, but also most important, parts of the model ought to be security and the blind spots around it. I don’t disagree that they are NFRs. They are probably the most important requirements. It’s where you start. That would be my idea of what to do next.

 

Not platform, but ecosystem

 

Male Speaker: Three remarks. First, I have the impression this is not a platform, but an ecosystem. So one should change the wording; that's number one. You should correct the wording.

 

Second, I would stress the business case. Why should I buy this? What problem does it solve? I don’t know yet.

 

The third point: as The Open Group, I would welcome a lobby to make IT vendors, in a formal sense, responsible for product reliability, like other industries -- cars, for example. That would do a lot for the security problem the last lady talked about. IT vendors are not reliable. They are not responsible. That should change in order to be a grown-up industry.

 

Virdi: I agree with what’s been said, but I will categorize what I am looking for, from a Boeing perspective, into three elements of what the platform should be doing: how enterprises could create new business opportunities, how they can optimize their current business processes, and how they can optimize the operational aspects.

 

So if there is a way to expedite these by having some standardized way to do things, Open Platform 3.0 would be a great forum to do that. 

 

Boardman: Okay, thanks. Louis made the point that we need to go to the stakeholders and find out what they want. Of course, we would love it if everybody in the world were a member of The Open Group, but we realize that that isn’t going to be the case tomorrow -- perhaps the day after, who knows. In the meantime, we're very interested in getting the perspectives of a wider audience.

 

So if you have things you would like to contribute, things you would like to challenge us with, questions, more about understanding, but particularly if you have ideas to contribute, you should feel free to do that. Get in touch probably via Chris, but you could also get in touch with either TJ or me as co-chairs, and put in your ideas. Anybody who contributes anything will be recognized. That was a reasonable statement, wasn’t it Chris? You're official Open Group? 

 

Is there anybody down there who has a question for this panel? Raise your hand.

 

Duijvestijn: Your remark was that IT vendors are not reliable, but I think you have to distinguish the layers of the stack. In the bottom layers, in the infrastructure, there is a lot of reliability. Everything is very much known and has been developed for a long time.

 

If you look at the Gartner reports about incidents in performance and availability, what you see is that most of these happen because of process problems and application problems. That is where the focus has to be. Regarding the availability of applications, nobody ever publishes their book rate.

 

Boardman: Would anybody like to react to that?

 

Male Speaker: I totally agree with what Lydia was just saying. As soon as you go up in the stack, that’s where the variation starts. That’s where we need to make sure that we provide some kind of capability to manage that easily, so the business can use that kind of expedited way to provide business solutions. That’s what we're actually targeting.

 

The lower layers of the stack are already commoditized. So we're just trying to see how high we can go and standardize those things.

 

Two discussions

 

Male Speaker: I think there are two discussions mixed together: one is about the reliability of the total [IT process], and the other is about where the fault lies in a [specific IT stack]. Those are two different discussions.

 

I totally agree that IT, or at least IT suppliers, need to focus more on the reliability of the service as a whole. The customers aren't interested in where in the stack the problem is -- whether it's in the platform or in the presentation layer is a non-issue. The issue is that it should be reliable as a whole, and I totally agree that IT has a long way to go in that department.  

Boardman: I'm going to move on to another question, because an interesting question came up on the Tweets. The question is: "Do you think that Open Platform 3.0 will change how enterprises will work, creating new line of business applications? What impact do you see?" An interesting question. Would anybody like to endeavor to answer that?

 

Male Speaker: That's an excellent question actually. When creating new lines of business applications, what we're really looking for is semantic interoperability. How can you bridge the gap between social media and business information, so you can utilize what's happening in social media? Can you migrate that into a business context and make knowledge or information transfer more agile? 

 

For example, in the morning we were talking about HL7 as being very heavyweight for healthcare systems. There may need to be some kind of easy way to transform and share information, those kinds of things. If we provide those kinds of capabilities in the platform, that will make new line-of-business applications easier to build, and it will have an impact on current systems as well. 

 

Jones: We are seeing a trend towards line of business apps being composed from micro-apps. So there's less ownership of their own resources. And with new functionality being more focused on a particular application area, there's less utility bundling. 

 

It also leads on to the question of what happens to the existing line-of-business apps. How will they exist in an enterprise that is trying to go for a Platform 3.0 kind of strategy? Lydia's point about NFRs and their importance brings to light the question of applications that don't meet the NFRs appropriate to the new world, and how you retrofit and constrain their behavior so that they do play well in that kind of architecture. This is an interesting problem for most enterprises. 

Boardman: There's another completely different granularity question here. Is there a concept of small virtualization, a virtual machine on a watch or phone? 

 

Male Speaker: On phones and the like, we have to create a compartmentalized area, which is kind of like a sandbox. So you can consider that a virtualized area, where you do things and then tear them down afterward. 

 

It's not the same as virtualization, but it's creating a sandbox in smart devices, where enterprises could utilize some of their functionality without mingling it with what is called personal device data. Those things are actually part of the concept, and could be utilized in that way. 

 

Architectural framework

 

Question: My question about virtualization is linked to whether this is just an architectural framework. Because when I hear the word platform, it's something I try to build something on, and I don’t think this is something I build on. If you can, comment on the validity of the use of the word platform here. 

 

Male Speaker: I don't care that much what it is called. If I can use it in whatever I am doing and it produces a positive outcome for me, I'm okay with it. In my presentation, I talked about the Internet-of-Things, or the Internet of Everything, or the Internet of Everywhere, or the Net of Things, or the Internet of People. Whatever you want to call it, just name it, as long as you can identify the object that's important to you. That's okay with me. The same thing goes for Platform 3.0 or whatever.

 

I'm happy with whatever you want to call it. Those kinds of discussions don't really contribute to the value that you want to produce with this effort. So I am happy with anything. You don't agree?

 

Male Speaker: A large part of architecture is about having a clear understanding of terms and what they mean.

 

Male Speaker: Let me augment what was just said, and I think Dr. Harding was also alluding to this. We are at the stage where we're defining what Platform 3.0 is. One thing for sure is that we're targeting how you can build that architectural environment. 

 

Whether it will include frameworks or anything else is still to be determined. What we're really trying to do is provide some kind of capabilities that help enterprises build their business solutions faster. Whether it's a platform in the pure sense of the word is still to be determined. 

 

Boardman: The Internet-of-Things is still a very fuzzy definition. Here we're also looking at fuzzy definitions, and it's something that we constantly get asked questions about. What do we mean by Platform 3.0? 

 

The reason this question is important, and I also think Sjoerd's answer to it is important, is because there are two aspects to the problem: what things do we need to tie down and define, because we are architects, and what things can we simply live with? As long as I know that his fish is my bicycle, I'm okay. 

 

It's one of the things we're working on. One of the challenges we have in the Forum is what exactly are we going to try and tie down in the definition and what not? Sorry, I had to slip that one in. 

 

I wanted to ask about trust, and how important you see the issue of trust. My attention was drawn to this because I just saw a post that the European Court of Justice has announced that Google has to make it possible for any person or organization who asks for it to have Google erase all information that Google has stored anywhere about them.

 

I wonder whether these kinds of trust issues are going to become critical for the success of this kind of ecosystem, because whether we call it a platform or not, it is an ecosystem.

 

Trust is important

 

Male Speaker: I'll try to start an answer. Trust has been a very important part ever since the Internet became the backbone of all of those processes, all of those systems, and those data exchanges. The trouble is that it's very easy to compromise that trust, as we have seen with the NSA revelations exposed by Snowden. So yes, trust ought to be a part of it, but trust is probably pretty fragile the way we're approaching it right now. 

 

Do I have a solution to that problem? No, I don't. Maybe it will come in this new ecosystem. I don't see it explicitly being addressed, but I am assuming that, between all those little clouds, there ought to be some kind of a trust relationship. That's my start of an answer.

 

Jones: Trust is going to be one of those permanently difficult questions. In historical times, maybe the types of organizations that were highest in trust ratings would have been perhaps democratic governments and possibly banks, neither of which have been doing particularly well in the last five years in that area. 

 

It’s going to be an ethical question for organizations who are gathering and holding data on behalf of their consumers. We know that if you put a set of terms and conditions in front of your consumers, they will probably click on "agree" without reading it. So you have to decide what trust you're going to ask for and what trust you think you can deliver on. 

 

Data ownership and data usage is going to be quite complex. For example, in clinical trials data, you have a set of data, which can be identified against the named individual. That sounds quite fine, but you can then make that set of data so it’s anonymized and is known to relate to a single individual, but can no longer identify who. Is that as private? 

 

That data can then be summarized across groups of individuals to create an ensemble dataset. At what level of privacy are we then? It quickly goes beyond the scope of reason and understanding of the consumers themselves. So the responsibility for ethical behavior appears to lie with the experts, which is always quite a dangerous place.
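
To make the privacy levels Jones describes concrete, here is a minimal Python sketch. It is purely illustrative: the record fields, the salt, and the helper names are invented, and a hash-based token is really pseudonymization rather than full anonymization, but it shows the same data moving from an identified record, to one that relates to a single unnamed person, to an aggregated ensemble.

import hashlib
from statistics import mean

# Illustrative only: a toy record, not a real clinical-trial schema.
record = {"name": "Jane Doe", "age": 54, "response_score": 7.2}

def pseudonymize(rec, salt="trial-42"):
    """Level 2: still one row per person, but the direct identifier is replaced."""
    token = hashlib.sha256((salt + rec["name"]).encode()).hexdigest()[:12]
    return {"subject_id": token, "age": rec["age"], "response_score": rec["response_score"]}

def aggregate(recs):
    """Level 3: an ensemble summary; no row maps back to an individual."""
    return {"n": len(recs), "mean_response": round(mean(r["response_score"] for r in recs), 2)}

cohort = [record, {"name": "John Roe", "age": 61, "response_score": 5.8}]
print(pseudonymize(record))
print(aggregate(cohort))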

 

Male Speaker: We probably all agree that trust management is a key aspect when we are converging different solutions from so many partners and suppliers. When we're talking about Internet of data, Internet-of-Things, social, and mobile, no one organization would be providing all the solutions from scratch. 

 

So we may be utilizing stuff from different organizations or different organizational boundaries. Extending the organizational boundaries requires a very strong trust relationship, and it is very significant when you are trying to do that.

 

Boardman: There was a question that went through a little while ago. I'm noticing some of these questions are more questions to The Open Group than to our panel, but one I felt I could maybe turn around. The question was: "What kind of guidelines is the Forum thinking of providing?"

 

What I'd like to do is turn that around to the panel and ask: what do you think it would be useful for us to produce? What would you like a guideline on? There will be lots of things where you would think you don't need that, you'll figure it out for yourself. But what would actually be useful to you if we were to produce some guidelines or something that could be accepted as a standard? 

 

Does it work?

 

 

Male Speaker: Just go to a number of companies out there and test whether it works. 

 

Male Speaker: In terms of guidelines, semantic interoperability was mentioned very well. How do you exchange information between different participants in an ecosystem, or between things built on a platform? 

 

The other thing is how you can standardize things that are yet to be standardized. There's unstructured data, and there are things that need to be interrogated through that unstructured data. What are the guiding principles and guidelines that would do those things? So maybe in those areas Platform 3.0, with the participation of these Forum members, can advance and work on it. 

 

Jones: I think contract, composition, and accumulation. If an application is delivering service to its end users by combining dozens of complementary services, each of which has a separate contract, what contract can it then offer to its end user?

 

Boardman: Does the platform plan to define guidelines and directions to define application programming interfaces (APIs) and data models or specific domains? Also, how are you integrating with major industry reference models? 

 

Just for information, some of this relates to other parts of The Open Group's work around industry domain reference models and that kind of thing. But in general, one of the things we've said from the Platform, from the Forum, is that as much as possible, we want to collate what is out there in terms of standards, both in APIs, data models, open data, etc.

 

We're desperate not to go and reproduce anybody else’s work. So we are looking to see what’s out there, so the guideline would, as far as possible, help to understand what was available in which domain, whether that was a functional domain, technical domain, or whatever. I just thought I would answer those because we can’t really ask the panel that.

 

We said that the session would be about dealing with realizing business value, and we've talked around issues related to that, depending on your own personal take. But I'd like to ask the members of the panel, and I'd like all of you to try and come up with an answer to it: What do you see are the things that are critical to being able to deliver business value in this kind of ecosystem?

 

I keep saying ecosystem, not to be nice to Frans, I am never nice to Frans, but because I think that that captures what we are talking about better. So do you want to start TJ? What are you looking for in terms of value? 

 

Virdi: No single organization is able to tap into all the advancement that's happening in technologies, processes, and other areas quickly enough for the business to utilize those things. The expectation that businesses provide new solutions, real-time information exchange, and all those things is the norm now. 

 

We can provide some of those as a baseline, as foundational aspects, for businesses to realize the new things we see in social media and other places, where information is exchanged very quickly and the payloads are very small.

 

So keeping the integrity of information, and sharing the information with the right people at the right time and in the right venue, is really the key, and that's where we can provide those kinds of enabling capabilities.

 

Ease of change

 

Jones: In Lydia’s presentation, at the end, she added the ease of use requirement as the 401st. I think the 402nd is ease of change and the speed of change. Business value pretty much relies on dynamism, and it will become even more so. Platforms have to be architected in a way that they are sufficiently understood that they can change quickly, but predictably, maintaining the NFRs. 

 

Dietvorst: One of the reasons why I would want to adopt this new ecosystem is that it gives me enough confidence that it is a reliable product. What we know from the energy-system innovations we've done over the last three or four years is that the way you enable and empower communities is to let them build up the trust themselves, locally, like you and your neighbor, or people who are close in proximity. At that scale, it's very easy to build trust. 

 

Some call it social evidence. I know you, you know me, so I trust you. You are my neighbor and together we build a community. But the wider this distance is, the less easy it is to trust each other. That's something you need to build into the whole concept. How do you get the trust if it is a global concept? It seems hardly possible.

 

van der Reep: This ecosystem, or whatever you're going to call it, needs to handle change, the rate of change. "Change is life" is a well-known saying, but lightning-fast change is a fact of life right now, with things like social and mobile specifically. 

 

One Twitter storm and the world has a very different view of your company, of your business. Literally, it can happen in minutes. This development ought to address that, and also provide the relevant hooks, if you will, for businesses to deal with that. So the rate of change is what I would like to see addressed in Platform 3.0, the ecosystem. 

 

Male Speaker: It should be cheap and reliable, it should allow for change, for example Cognition-as-a-Service, and it should hide complexity for those "stupid businesspeople" and make it simple. 

 

Boardman: I want to pick up on something that Frans just said because it connects to a question I was going to ask anyway. People sometimes ask us why the particular five technologies that we have named in the Forum: cloud, big-data analysis, social, mobile, and the Internet-of-Things? It's a good question, because it's fundamental to our ideas in the Forum that it's not just about those five things. Other things can come along and be adopted. 

 

One of the things that we had played with at the beginning and decided not to include, just on the basis of a feeling about lack of maturity, was cognitive computing. Then, here comes Frans and just mentions cognitive things. 

 

I want to ask the panel: "Do you have a view on cognitive computing? Where is it? When we can expect it to be something we could incorporate? Is it something that should be built into the platform, or is it maybe just tangential to the platform?" Any thoughts? 

 

Male Speaker: I did a speech on this last week. In order to create meaningful customer interaction, in what we used to call the call center or whatever, that is where the cognition comes in. That's a very big market, and there's no reason not to include it in the lower levels of the platform and to make it part of the cloud. 

 

We already have lots of examples in the Netherlands of ICT devices recognizing emotions from speech. By recognizing emotion, you can optimize the matching of the company with the customer, and you can hide complexity. I think there's a big market for that. 

 

What the business wants

 

Virdi: We need to look at it in the context of what the business wants to do with it. It could enable things that I consider proprietary, which may not be part of the platform for others to utilize. So we have to balance what enabling capabilities we can provide as a foundation for everyone to utilize against what companies build on top of it for their own value. We probably have to do a little further assessment on that.

 

Male Speaker: I'd like to follow up on this notion of cognitive computing, the notion that maybe objects are self-aware, as opposed to being dumb -- self-aware being an object, a sensor that’s aware of its neighbor. When a neighbor goes away, it can find other neighbors. Quite simple as opposed to a bar code. 

 

We see that all the time. We have kids who are civil engineers, and they pour these sensors into concrete all the time. In terms of cost, in terms of being able to have the discussion, it's something that's in front of us all the time. So at this point, should we at least think about the binary aspect of having self-aware sensors as opposed to dumb sensors?

 

Male Speaker: From an aviation perspective, there are some areas where dumb devices will be there, as well as active devices. There are passive sensor devices that you interrogate only when you request it, and there are devices that are active, constantly sending sensor messages. Both are there for the business to utilize in creating new business solutions. 

 

Both of them are going to be there, and it depends on what the business needs are to support those things. We could probably provide some ways to standardize some of those, and some other specifications. For example, ATA, for aviation, is doing that already. Also, in healthcare, there's HL7, looking at smart sensor devices exchanging information as well. So some work is already happening in the industry. 

 

There are so many business solutions that have already been built on those, though maybe they're a little bit more proprietary. So a platform could provide a standard base to exchange that information. Some of it may relate to guidelines on how you exchange information with those active and passive sensor devices.
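
The passive-versus-active distinction can be sketched generically. The Python below is only an illustration under assumed names; it does not follow ATA or HL7 conventions. A passive sensor answers only when polled, while an active sensor pushes readings onto a queue on its own schedule.

import queue, random, threading, time

class PassiveSensor:
    """Passive device: holds a value and answers only when interrogated."""
    def read(self):
        return {"type": "temperature", "value": round(random.uniform(10, 20), 1)}

class ActiveSensor(threading.Thread):
    """Active device: pushes readings to a queue on its own schedule."""
    def __init__(self, out_queue, interval=1.0):
        super().__init__(daemon=True)
        self.out_queue, self.interval = out_queue, interval
    def run(self):
        while True:
            self.out_queue.put({"type": "vibration", "value": round(random.random(), 3)})
            time.sleep(self.interval)

events = queue.Queue()
ActiveSensor(events).start()
passive = PassiveSensor()

for _ in range(3):
    print("polled:", passive.read())        # pull model: we ask the device
    print("pushed:", events.get(timeout=5)) # push model: the device tells us
    time.sleep(1)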

 

Jones: I'm certainly all in favor of devices in the field being able to tell you what they're doing and how they think they're feeling. I have an interest in complex consumer devices in retail and other field locations, especially self-service kiosks, and in that field quite a lot of effort has been spent trying to infer the states of devices by their behavior, rather than just having them tell you what's going on, which should be so much easier. 

 

Male Speaker: Of course, it depends on where the boundary is between aware and not aware. If there is a thermometer in the field and it sends data that it's 15 degrees centigrade, for example, do I really want to know whether it thinks it's chilly or not? I'm not really sure about it. 

 

I'd have to think about it a long time to get a clear answer on whether there's a benefit in self-aware devices in those kinds of applications. I can understand that there will be an advantage in self-aware sensor devices, but I struggle a little to see any pattern or similarities in those circumstances. 

 

I could come up with use cases, but I don’t think it's very easy to come up with a certain set of rules that leads to the determination whether or not a self-aware device is applicable in that particular situation. It's a good question. I think it deserves some more thought, but I can't come up with a better answer than that right now.

 

Case studies

 

Skilton: I just wanted to add to the embedded question, because I thought it was a very good one. Three case studies happened to me recently. I was doing some work with Rolls-Royce and MH370, the flight that went down. One of the key things about that flight was that the engines had telemetry built in. TJ, you're more qualified to talk about this than I am, but essentially there was information embedded in the telemetry of the technology of the plane. 

 

As we know from the mass media that reported on it, they were able to analyze from some of the data what was potentially going on in the flight. Clearly, it was the satellite data that was used to project that it was going south, rather than north. 

 

So one of the lessons there was that smart information built into the object was of value. Clearly, there was a lesson learned there. 

 

With Coca-Cola, for example, what's very interesting in retail is that a lot of the shops now have embedded sensors in the cooler systems or in products that are in the warehouse or in stock. Now, you're getting that kind of intelligence over RFID coming back into the supply chain to do backfilling, reordering, and stuff like that. So all of this, I see, is smart. 

 

Another one is image recognition when you go into a car park. You have your face scanned, whether you want it or not. Potentially, they can do advertising in context. These are all smart feedback loops that are going on in these ecosystems and are happening right now. 

 

There are real equations of value in doing that. I was just looking at the Open Automotive Alliance. We've done some work with them around connected-car forecasts. Embedded technology in the dashboard is going to be coming in the next three to five years with BMW, Jaguar Land Rover, and Volvo. All the major car players are doing this right now. 

 

So Open Platform 3.0 for me is riding that wave of understanding where the intelligence and the feedback mechanisms work within each of the supply chains, within each of the contexts, either in the plane, in the shop, or whatever, starting to get intelligence built in. 

 

We talk about big data and small data at the university where I work. At the moment, we're moving from a big-data era, which is about analytics, static analysis, and analyzing the process in situ. Most likely it's the Amazon sort of purchasing recommendations or the advertisements you see in your browser today. 

 

We're moving to a small-data era, where you have data very much in the context of what's going on at that time. I would expect this with embedded technologies. The feedback loops are going to happen within each of the traditional supply chains and will start to build that strength.

 

The issue for The Open Group is to capture the standards of interoperability and connectivity, much like what Boeing is already leading with the automotive and airline sectors. It's about riding that wave, because the value of bringing that feedback into context, the small-data context, is where the future lies. 

 

Infrastructure needed

 

Male Speaker: I totally agree. Not only are the devices or individual components getting smarter, but that requires infrastructure to be there to utilize that sensing information in a proper way. From the Platform 3.0 guidelines or specifications perspective, determining how you can utilize devices that are already smart alongside others that are still considered legacy, and how you can bridge that gap, would be a good thing to do.

Boardman: Would anyone like to add anything, closing remarks?

 

Jones: Everybody's perspective and everybody's context is going to be slightly different. We talked about whether it's a platform or a framework. In the end there will be a universal Platform 3.0 built, but everybody will still have a different view and a different perspective on what it does and what it means to them. 

 

Male Speaker: My suggestion would be that, if you're going to continue with this ecosystem, try to build it up locally, in a locally controlled environment, where you can experiment and see what happens. Do it in many places at the same time around the world, and let the results be the proof of the pudding. 

 

Male Speaker: Whatever you are going to call it, keep the 3.0, that sounds snappy, but just get the beneficiaries in, get the businesses in, and get the users in.

 

Male Speaker: The more open it is, the more of a commodity it will be. That means that no company can profit from it. In the end, human interaction and stewardship will enter the market. If you come to London City Airport and you find your way onto the Tube, there is a human being there who helps you into the system. That becomes very important as well. I think you need to do both: stewardship and these kinds of ecosystems that spread complexity. 

 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

 


Tags:  big data  BriefingsDirect  Dana Gardner  Interarbor Solutions  Internet of things  mobile computing  mobile devices  Platform 3.0  social media  The Open Group  The Open Group Conference 

 

Big data meets the supply chain — SAP’s Supplier InfoNet and Ariba Network combine to predict supplier risk

Posted By Dana L Gardner, Wednesday, June 18, 2014

The next BriefingsDirect case study interview explores how improved visibility analytics and predictive responses are improving supply-chain management. We’ll now learn how SAP’s Supplier InfoNet, coupled with the Ariba Network, allows for new levels of transparency in predictive analytics that reduce risk in supplier relationships.

 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

 

BriefingsDirect had an opportunity to uncover more about how the intelligent supply chain is evolving at the recent 2014 Ariba LIVE Conference in Las Vegas, when we spoke to David Charpie, Vice President of Supplier InfoNet at SAP, and Sundar Kamakshisundaram, Senior Director of Solutions Marketing at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

 

Here are some excerpts:

 

Gardner: We’ve brought two things together here, SAP’s Supplier InfoNet and Ariba Network. What is it about these two that gives us the ability to analyze or predict, and therefore reduce, risk?

 

Charpie: To be able to predict and understand risk, you have to have two major components together. One of them is actually understanding this multi-tiered supply chain. Who is doing business with whom, all the way down the line, from the customer to the raw material in a manufacturing sense? To do that you need to be able to bring together a very large graph, if you will, of how all these companies are inter-linked.

 

Charpie

And that is ultimately what the Ariba Network brings to bear. With over 1.5 million companies that are inter-linked and transacting with each other, we can really see what those supply chains look like.
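
One way to picture that graph is as a simple adjacency map of buyer-to-supplier links that can be walked tier by tier. The Python sketch below is hypothetical: the company names and the dictionary layout are invented for illustration and are not Ariba's actual data model.

from collections import deque

# Hypothetical buyer->supplier edges; not the Ariba Network's actual schema.
supply_graph = {
    "BuyerCo": ["TruckingCo", "ElectronicsDistributor"],
    "TruckingCo": ["FuelSupplier"],
    "ElectronicsDistributor": ["ChipMaker", "BoardMaker"],
    "ChipMaker": ["RawSiliconCo"],
}

def sub_tier_suppliers(graph, company):
    """Walk every tier below a company: tier 2, tier 3, ... tier N."""
    seen, frontier = set(), deque(graph.get(company, []))
    while frontier:
        supplier = frontier.popleft()
        if supplier in seen:
            continue
        seen.add(supplier)
        frontier.extend(graph.get(supplier, []))
    return seen

print(sub_tier_suppliers(supply_graph, "BuyerCo"))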

 

The second piece of it is to bring together, as Sundar talked about, lots of information of all kinds to be able to understand what’s happening at any point within that map. The kinds of information you need to understand are sometimes as simple as who is the company, what do they make, where are they located, what kind of political, geopolitical  issues are they dealing with?

What we find is that suppliers don’t behave the same for everybody.

 

The more complex issues are things around precisely what exact product are they making with what kind of requirements, in terms of performance, and how they’re actually doing that on a customer-by-customer basis. What we find is that suppliers don’t behave the same for everybody.

 

So InfoNet and the network have come together to bring those two perspectives, all the data about how companies perform and what they are about with this interconnectedness of how companies work with each other. That really brings us to the full breadth of being able to address this issue about risk.

Gardner: Sundar, we have a depth of transactional history. We have data, we have relationships, and now we’re applying that to how supply chains actually behave and operate. How does this translate into actual information? How does the data go from your systems to someone who is trying to manage their business process?

 

Kamakshisundaram

Kamakshisundaram: A very good question. If you take a step back and understand the different data points you need to analyze to predict risk, they fall into two different buckets. The first bucket is around the financial metrics that you typically get from any of the big content providers you have in place. We can understand how the supplier is performing, based on current data, and exactly what they’re doing financially, if they’re a public company.

 

The second aspect, through the help of Ariba Network or Supplier InfoNet, is the ability to understand the operational and the transactional relationship a supplier has in place to predict how the supplier is going to behave six-to-eight months from now.

 

For example, you may be a large retailer or a consumer packaged goods (CPG) organization, maybe working with a very large trucking company. This particular trucking company may be doing really well and they may have great historical financial information, which basically puts them in a very good shape.

 

Financial viability

 

But if only one-third of the business is from retail and CPG, and the remaining two-thirds comes from some of the challenging industries, all of a sudden the operational and financial viability of the transportation supply chain may not look good. Though the carrier's historical financials may be in good shape, you can't really predict whether the supplier will have the working capital, in terms of cash available, to run the business and maintain the operation in a sustainable manner.

 

How do Ariba, the Ariba Network, and InfoNet help? By taking all the information across this multitude of variables, not only the financial metrics but also the operational metrics, and modeling the supply chain.

 

You don't limit yourself to the first tier or second tier, but go all the way through the multi-tier supply chain, and also the interactions that some of these suppliers may have with their customers. It will help you understand whether this particular supplier will be able to supply the right product and get it to your docks at the right time.

 

Without having this inter-correlation of network data well laid out in a multi-tier supply chain, it would have been almost impossible to predict what is going to happen in this particular supply-chain example.

 

Gardner: What sort of trends or competitive pressures are making companies seek better ways to identify, acquire, or manage information and data to have a better handle on their supply chains?

 

Kamakshisundaram: The pressures are multifaceted. To start with, many organizations are faced with globalization pressure. Finding the right suppliers who can actually supply both the product and service at the right time is a second challenge. And the third challenge many companies grapple with right now is the ability to balance savings and cost reductions with risk mitigation.

 

These two opposing variables have to be in check in order to drive sustainable savings from the bottom line. These challenges, coupled with the supply-chain disruptions, are making it difficult not only to find suppliers, but also to get the right product at the right time.

 

Gardner: When we talk about risk in a supply-chain environment what are we really talking about? Risk can be a number of things in a number of different directions.

 

Many variables

 

Kamakshisundaram: Risk, at a very high level, is composed of many different variables. Many of us understand that risk is a function of, number one, the supply. If you don’t have the right supplier, if you don’t have the right product at the right time, you have risk.

 

And, there is the complexity involved in finding the suppliers to address needs in different parts of the world. You may have a supplier in North America, but if you really want to expand your market share in the Far East, especially in China, you need to have the right supply chain to do that.

 

Companies traditionally have looked at historical information to predict risk. And this is no longer enough because more and more, supply chains are becoming complex. Supply chains are affected by the number of globalized variables including the ability to have suppliers in different parts of the world, and also other challenges which will make risk more difficult to predict in the long run.

 

Gardner: Where do you see the pressures to change or improve how supply-chain issues are dealt with, and how do you also define the risks that are something to avoid in supply-chain management?

 

Charpie: When we think about risk we’re really thinking about it from two dimensions. One of them is environmental risk. That is, what are all the factors outside of the company that are impacting performance?

 

That can be as varied as wars, on one hand, right down to natural disasters and other political types of events that can also cause them to be disrupted in terms of managing their supply base and keeping the kind of cost structure they are looking for.

 

The other kind are more inherent operational types of risks. These are the things like on-time performance risk, as Sundar was referring to. What do we have in terms of quality? What do we have in terms of product and deliverables, and do they meet the needs of the customer?

 

As we look at these two kinds of risks, we've seen increasing amounts of disruption, because we're in a time where supply chains are getting much longer, leaner, and more complex to manage. As a result, over 40 percent of interruptions right now are caused by disruptions downstream in the supply chain, at tier two, tier three, all the way to tier N.

 

So now we need a different way of managing suppliers than we had in the past. Just working with them and talking to them about how they do things and what they do isn't enough. We need to understand how they're actually managing their suppliers, and so on, down the line.

 

Predicting risk

These are models that behave more like the human brain than like some of the statistical math we learned when we were back in high school

 

Gardner: So, David, it sounds to me as if an algorithm or a scorecard generates this analysis. Is that the right way to look at this, or is it just making the data available for other people to reach conclusions that then allow them to reduce their risk?

 

Charpie: There absolutely is an algorithmic component to this. In fact, what we do in Supplier InfoNet and with the Ariba Network is to run machine-learning models. These are models that behave more like the human brain than like some of the statistical math we learned when we were back in high school and college.

 

What it looks for is patterns of behavior, and as Sundar said, we’re looking at how a company has performed in the past with all of their customers. How is that changing? What other variables are changing at the same time or what kinds of events are going on that may be influencing them?

 

We talked about environmental risk a bit ago. We capture information from about 160,000 newswire sources on a daily basis and, on an automated basis, are able to extract what that article is about, who it’s about, and what the impact on supply chain could be.

 

By integrating that with the transactional history of the Ariba Network and by integrating that with all the linkage on who does business with whom, we can start to see a pattern of behavior. That pattern of behavior can then help us understand what’s likely to happen moving forward.
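
As a rough illustration of that extraction step, the toy Python tagger below uses keyword rules to guess the event type, the companies involved, and whether a headline is supply-chain relevant. The keyword table, function name, and supplier list are invented; the production pipeline described in the interview is far more sophisticated.

# Toy tagger: keyword rules stand in for the real extraction pipeline,
# which the interview only describes at a high level.
RISK_EVENTS = {
    "flood": "natural_disaster",
    "strike": "labor_disruption",
    "bankruptcy": "financial_distress",
}

def tag_article(headline, known_suppliers):
    """Guess what the article is about, who it is about, and the likely impact."""
    text = headline.lower()
    event = next((label for kw, label in RISK_EVENTS.items() if kw in text), None)
    companies = [s for s in known_suppliers if s.lower() in text]
    return {"event": event, "companies": companies,
            "supply_chain_impact": bool(event and companies)}

print(tag_article("Flood halts production at ChipMaker plant",
                  ["ChipMaker", "TruckingCo"]))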

 

To make it a little more concrete, let’s take Sundar’s example of a company having financial trouble. If I take a company, for example, under $100 million, what we have found is that if we see a company that begins to deliver late, within three months of that begins to have quality problems, and within two months or less begins to have cash-flow problems and can’t pay their bills on time, we may be seeing the beginning of a company that’s about to have a financial disaster.

 

Interestingly, what we find is that the pattern that really means something comes after those three events. If they begin paying their bills on time all of a sudden, that's the worst indicator there possibly could be. It's very counterintuitive, but the models tell us that when that happens, we're on the verge of someone who will go bankrupt within two to three months of that time frame.
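
A hedged way to picture that kind of pattern is a rule that scans a supplier's event history for the sequence Charpie describes. The Python below is a simplified stand-in with invented event names and time windows; the real system uses machine-learning models rather than a hard-coded rule.

# Simplified, rule-based illustration of one sequence: late deliveries,
# then quality problems within ~3 months, then missed payments within
# ~2 months, then a sudden return to on-time payment -- the
# counterintuitive final warning sign.
def months_of(events, kind):
    return [m for m, e in events if e == kind]

def bankruptcy_warning(events):
    """events: list of (month_index, event_name) tuples for one supplier."""
    late = months_of(events, "late_delivery")
    quality = months_of(events, "quality_problem")
    missed = months_of(events, "missed_payment")
    ontime = months_of(events, "on_time_payment")
    for m1 in late:
        m2 = next((m for m in quality if 0 < m - m1 <= 3), None)
        if m2 is None:
            continue
        m3 = next((m for m in missed if 0 < m - m2 <= 2), None)
        if m3 is None:
            continue
        if any(m > m3 for m in ontime):
            return True  # pattern complete: elevated bankruptcy risk
    return False

history = [(1, "late_delivery"), (3, "quality_problem"),
           (4, "missed_payment"), (6, "on_time_payment")]
print(bankruptcy_warning(history))  # True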

 

Delivery model

 

Gardner: Now I can see why this wasn’t something readily available until fairly recently. We needed to have a cloud infrastructure delivery model. We needed to have the data available and accessible. And then we needed to have a big data capability to drive real-time analysis across multiple tiers on a global scale.

 

So here we are, Ariba LIVE 2014. What are we going to hear? When can people start to actually use this? Where are we on the timeline for delivering this really compelling value?

 

Kamakshisundaram: Both Supplier InfoNet and Ariba Network are available today for customers, so they can continue to leverage these solutions. With the help of SAP's innovation team, we're planning to bring in additional solutions that not only help customers look at real-time risk modeling, but also offer more predictive analytical capabilities.

They can identify the suppliers they want to track to as many as the entire supply base.

 

Charpie: In terms of the business benefits in what we are offering, the features that really bring to life this notion of integrating the Ariba Network with InfoNet are, first and foremost, an ability to push alerts to our customers on a proactive basis to let them know when something is happening within their supply chain and could be impacting them in any way whatsoever.

 

That is, they can set their own levels. They can set what interests them. They can identify the suppliers they want to track, up to and including the entire supply base. We will track those on an automated basis and give them updates to keep them abreast of what's happening.

 

Second, we're also going to give them the ability to monitor the entire supply base, from a heat-map perspective, to strategically see the hot pockets -- by industry, by spend, or by geography -- that they need to pay particular attention to.
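
A heat map of that sort boils down to aggregating supplier risk along a chosen dimension. The sketch below is illustrative only, with invented supplier records and a spend-weighted average as the "heat" metric; the actual InfoNet scoring is not described in that detail here.

from collections import defaultdict

# Hypothetical supplier records; field names are illustrative only.
suppliers = [
    {"name": "ChipMaker", "industry": "electronics", "region": "APAC", "spend": 4.0, "risk": 0.8},
    {"name": "BoardMaker", "industry": "electronics", "region": "EMEA", "spend": 1.5, "risk": 0.3},
    {"name": "TruckingCo", "industry": "logistics", "region": "NA", "spend": 2.0, "risk": 0.6},
]

def heat_map(records, dimension):
    """Spend-weighted average risk per bucket -- the 'hot pockets' view."""
    spend, weighted = defaultdict(float), defaultdict(float)
    for r in records:
        spend[r[dimension]] += r["spend"]
        weighted[r[dimension]] += r["spend"] * r["risk"]
    return {bucket: round(weighted[bucket] / spend[bucket], 2) for bucket in spend}

print(heat_map(suppliers, "industry"))
print(heat_map(suppliers, "region"))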

 

Third, we're also going to bring them this automated capability to look at these 160,000 newswire sources and tell them which newswires they need to pay attention to, so they can determine what kinds of actions they can take from those, based on the activity that they see.

 

We’re also going to bring those predictions to them. We have the ability now to look at and predict performance and disruption and deliver those also as alerts, as well as deeper analytics. By leveraging the power of HANA, we’re able to bring real-time analysis to the customer.

 

They have those tools today, and so it’d be creating a totally personalized experience, where they can look at big data, look at it the way they want to, look at it the way that they believe risk should be measured and monitored, and be able to use that information right there and then for themselves.

 

Sharing environment

 

Last, they also have the ability to do this in an environment where they can share with each other, with their suppliers, and with others in the network, if they choose. What I mean by that is the model that we have used within Supplier InfoNet is very much like you see in Facebook.

 

When you have a supplier and you would like to see more of their supply base, you request access, much like friending someone on Facebook. They will open up that portion -- some, little, or none -- of their supply base that they would like you to have access to. Once you have that, you can get alerts on them, you can manage them, and you can get input on them as well.

 

So there’s an ability for the community to work together, and that’s really the key piece that we see in the future, and it’s going to continue to expand and grow as we take InfoNet and the Network out to the market.
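
The sharing model can be illustrated with a small permission structure: a request records interest, a grant opens some, all, or none of the supply base, and visibility checks return only what was granted. The class and method names below are invented for illustration and are not Supplier InfoNet's actual schema.

# Illustrative permission model only; not Supplier InfoNet's implementation.
class SupplyBaseSharing:
    def __init__(self, own_suppliers):
        self.own_suppliers = set(own_suppliers)
        self.grants = {}  # requester -> set of suppliers they may see

    def request_visibility(self, requester):
        """Like a friend request: recorded, but nothing is shared yet."""
        self.grants.setdefault(requester, set())

    def grant(self, requester, suppliers="all"):
        allowed = self.own_suppliers if suppliers == "all" else set(suppliers)
        self.grants[requester] = allowed & self.own_suppliers

    def visible_to(self, requester):
        return self.grants.get(requester, set())

distributor = SupplyBaseSharing(["ChipMaker", "BoardMaker", "RawSiliconCo"])
distributor.request_visibility("AeroBuyer")
distributor.grant("AeroBuyer", ["ChipMaker"])      # share some, not all
print(distributor.visible_to("AeroBuyer"))          # {'ChipMaker'}
print(distributor.visible_to("UnknownCompetitor"))  # set()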

Focusing on a certain industry and having the suppliers only in that particular industry will give you only a portion of that information to understand and predict risk.

 

Kamakshisundaram: If you take a step back, you can see why companies haven’t been able to do something like this in the past. There were analytical models available. There were tools and technologies available, but in order to build a model that will help customers identify a multi-tier supply chain risk, you need a community of suppliers who are able to participate and provide information which will continue to help understand where the risk points are.

 

As David mentioned: where is your heat map, and what does it say? It also points to how you not only collect the information, but what kind of mitigating processes you have to put in place to address those risks.

 

In certain industries, we see certain trends, whether it’s automotive or aerospace. A lot of the suppliers that are critical in these industries are cross-industry. Focusing on a certain industry and having the suppliers only in that particular industry will give you only a portion of that information to understand and predict risk.

 

And this is where a community where participants actively share information and insights for the greater good helps. And this is exactly what we’re trying to do with the Ariba Network and Supplier InfoNet.

 

Gardner: I’m trying to help our listeners solidify their thinking of how this would work in a practical sense in the real world. David, do you have any use-case scenarios that come to mind that would demonstrate the impact and the importance and reinforce this notion that you can’t do this without the community involvement?

 

Case study

 

Charpie: Let’s start with a case study. I’m going to talk about one of our customers that is a relatively small electronics distributor.

 

They signed on to use InfoNet and the Ariba Network to better understand what was happening down the multiple tiers of their supply chain. They wanted to make sure that they could deliver to their ultimate customers, a set of aerospace and defense contractors. They knew what they needed, when they needed it, and the quality that was required.

 

To manage that and find out what was going to happen, they loaded up Supplier InfoNet, began to get the alerts, and began to react to them. They found very quickly that they were able to find savings in three different areas that ultimately they could pass on to their customers through lower prices.

 

One of them was that they were able to reduce the amount of time their folks would spend just firefighting the risks that would come up when they didn’t have information ahead of time. That saved about 20 percent on an annual basis.

They needed an independent third party doing it, and SAP and Ariba are a trusted source for doing that.

 

Second, they also found that they were able to reduce the amount of inventory obsolescence by almost 15 percent on an annual basis as a result of that.

 

And third, they found that they were avoiding shortages that historically cut their revenues by about 5 percent due to the fact that previously they couldn’t deliver on product that was demanded often on short notice. With the InfoNet all of these benefits were realized for them and became practical to achieve.

 

Their own perspective on this, relative to the second part of your question, was they couldn’t do this on their own and that no one else could. As they like to say, I certainly wouldn’t share my supply base with my competitor. The idea is that we can take those in aggregate, anonymize them, and make sure the information is cleansed in such a way that no one can know who the contributing folks are.

 

The fact that they ultimately have control of what people see and what they don’t allows them to have an environment where they feel like they can trust it and act on it, and ultimately, they can. As a result, they’re able to take advantage of that in a way that no one could on their own.

 

We’ve even had a few of the aerospace and defense folks who tried to build this on their own. All of them ultimately came back because they said they couldn’t get the benchmark data and the aggregate community data. They needed an independent third party doing it, and SAP and Ariba are a trusted source for doing that.

 

Gardner: For those folks here at Ariba LIVE who are familiar with one or other of these services and programs or maybe not using either one, how do they start? They’re saying, “This is a very compelling value in the supply chain, taking advantage of these big-data capabilities, recognizing that third party role that we can’t do on our own.” How do they get going on this?

 

Two paths

 

Charpie: There are two paths you can take. One of them is that you can certainly call us. We would be more than happy to sit down and go through this and look at what your opportunities are by examining your supply base with you.

 

Second is to look at this a bit on your own and be reflective. We often take customers through a process where we sit down and look at the supply risk and disruption they've had in the past and, based on that, categorize those into the types of disruptions they've seen. What is based on quality? What is based on sub-tier issues? What is based on environmental things like natural disasters? Then, we group them.

 

Then we say: let's reflect. If you had known these problems were going to happen, as Sundar said, three, six, or eight months ahead, could you have done something that would have impacted the business, saved money, driven more revenue, whatever the outcome may be?

 

If the answer to those questions is yes, then we’ll take those particular cases where the impact is understood and where an early warning system would have made a difference financially. We’ll analyze what that really looks like and what the data tells us. And if we can find a pattern within that data, then we know going in that you're going to be successful with the Network and with InfoNet before you ever start.

We would be more than happy to sit down and go through this and look at what your opportunities are by examining your supply base with you.

 

Gardner: This also strikes me as something that doesn’t fall necessarily into a traditional bucket, as to who would go after these services and gain value from them. That is to say, this goes beyond procurement and just operations, and it enters well into governance, risk, and compliance (GRC).

 

Who should be looking at this in a large organization or how many different types of groups or constituencies in a large organization should be thinking about this unique service?

 

Kamakshisundaram: We have found that it depends on the vertical and the industry. Typically, it all starts with procurement, trying to make sure they can assure supply and get the right suppliers.

 

Very quickly, procurement also continues to work with supply chain. So you have procurement, supply chain, and depending on how the organization is set up, you also have finance involved, because you need all these three areas to come together.

 

This is one of the projects where you need complete collaboration and trust within the internal procurement organization, supply chain/operations organization, and finance organization.

 

As David mentioned, when we talk to aerospace, as well as automotive or even heavy industrial or machinery companies, some of these organizations already are working together. If you really think about how product development is done, procurement participates at the start of the black-box process, where they actually are part and parcel of the process. You also have finance involved.

 

Assurance of supply

 

To really understand and manage risk in your supply chain, especially for components that go into your end-level product, which makes up significant revenue for your organization, Supplier Management continues all the way through, even after you actually have assurance of supply.

 

The second type of customers we have worked with are in the business services/financial/insurance companies, where the whole notion around compliance and risk falls under a chief risk officer or under the risk management umbrella within the financial organization.

 

Again, here in this particular case, it's not just the finance organization that's responsible for predicting, monitoring, and managing risk. In fact, finance organizations work collaboratively with the procurement organization to understand who their key suppliers are, collect all the information required to accurately model and predict risk, so that they can execute and mitigate risk.

 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  BriefingsDirect  Dana Gardner  David Charpie  InfoNet  Interarbor Solutions  Kamakshisundaram  risk  SAP  supply chain management 

 

Latest ServiceNow update makes turning any awkward process into a managed service available to more workers

Posted By Dana L Gardner, Tuesday, June 17, 2014

IT service management (ITSM) has long been a huge benefit to complex and exception-rich IT operations by helping to standardize, automate and apply a common system-of-record approach to tasks, incidents, assets, and workflows.

 

ServiceNow has been growing rapidly as a software-as-a-service (SaaS) provider of ITSM, but clearly sees a larger opportunity — making service creation, use, and management a benefit to nearly all workers for any number of business processes.

 

It’s one of those rare instances where IT has been more mature and methodological in solving complexity than many other traditional business functions. Indeed, siloed and disjointed "productivity applications" that require lots of manual effort have been a driver to bring service orientation to the average business process.

Traditional applications in any business setting can soon reach their point of inflexibility and break down, and therefore don't scale.

 

Just as in IT operations and performance monitoring, traditional applications in any business setting can soon reach their point of inflexibility and break down, and therefore don't scale. Despite human productivity efforts — via shuffling emails, spreadsheets, phone calls, sticky pads and text messages — processes bog down. Exceptions are boondoggles. Tasks go wanting. Customers can sense it all through lackluster overall performance.

 

So ServiceNow this week launched its Eureka version of its online service management suite with new features aimed at letting non-technical folks build custom applications and process flows, just like the technical folks in IT have been doing for years. Think of it as loosely coupled interactions that span many apps and processes for the rest of us.

 

Available globally

 

Now available globally, the fifth major release of ServiceNow includes more than 100 changes and new modules, and has a new user interface (UI) that allows more visualization and drag-and-drop authoring and is more "mobile friendly," says Dave Wright, Chief Strategy Officer at ServiceNow, based in Santa Clara, CA.

 

“Enterprise users just can’t process work fast enough,” says Wright. “So our Service Creator uses a catalog and a new UI to allow workers to design services without IT.”

 

IT does, however, get the opportunity to vet and manage these services, and can decide what gets into the service catalog or not. Those of us who have been banging the SOA drum for years well predicted this level of user-driven services and self-service business process management.

 

I, for one, am very keen to see how well enterprises pick up on this, especially as the cloud-deployed nature of ServiceNow can allow for extended enterprise process enablement and even a federated approach to service catalogs. Not only are internal processes hard to scale, but those work flows and processes that include multiple companies and providers are also a huge sticking point.

Systems integrators and consultancies may not like it as much, but the time has come for an organic means of automating tasks and complexity that most power users can leverage and innovate on.

 

Systems integrators and consultancies may not like it as much, but the time has come for an organic means of automating tasks and complexity that most power users can leverage and innovate on.

 

With this new release, it’s clear that ServiceNow has a dual strategy. One, it’s expanding its offerings to core IT operators, along the traditional capabilities of application lifecycle management, IT operations management, IT service management, project management, and change management. And there are many features in the new release to target this core IT user.

 

Additionally, ServiceNow has its sights on a potentially much larger market, the Enterprise Service Management (ESM) space. This is where today's release is more wholly focused: think visualization, task boards, a more social way of working, and use of HTML5 for the services interface, giving the cloud-delivered features native support and adaptability across devices. There is also a full iOS client on the App Store.

 

Indeed, this shift to ESM is driving the ServiceNow roadmap. I attended last month’s Knowledge 14 conference in Las Vegas, and came away thinking that this level of services management could be a sticky on-ramp to a cloud relationship for enterprises. Other cloud on-ramps include public cloud infrastructure as a service (IaaS), hybrid cloud platforms and management, business SaaS apps like Salesforce and Workday, and data lifecycle and analytics services. [Disclosure: ServiceNow paid my travel expenses to the user conference.]

 

Common data model

 

But as a cloud service, ServiceNow, if it attracts a large clientele outside of IT, could prove sticky too. That’s because all the mappings and interactions for more business processes would be within its suite — with the common data model shared by the entire ServiceNow application portfolio.

 

The underlying portfolio of third-party business apps and data are still important, of course, but the ways that enterprises operate at the process level — the very rules of work across apps, data and organizations — could be a productivity enhancement offer too good to refuse if they solve some major complexity problems.

 

Strategically, the cloud provider that owns the process solution also owns the relationship with the manager corps at companies. And if the same cloud owns the relationship with IT processes, via the same common data model, well, then that's where a deep, abiding, and lasting cloud business could long dwell. Oh, and it's all paid for on an as-needed, per-user, OpEx basis.
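
To see why a common data model is sticky, consider that ServiceNow exposes its tables through a single REST Table API, so an IT incident and a business-built custom record are created the same way. The sketch below assumes a hypothetical instance URL, credentials, and custom table name ("u_facilities_request"); only the general /api/now/table/{table} call shape reflects the documented API.

import requests

# Hypothetical instance and credentials; the custom table and its fields are
# illustrative. The point is that every application -- an IT incident or a
# business-built facilities request -- is reached through the same data model.
INSTANCE = "https://example.service-now.com"
AUTH = ("api.user", "secret")
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

def create_record(table, payload):
    url = f"{INSTANCE}/api/now/table/{table}"
    resp = requests.post(url, auth=AUTH, headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

# Same call shape whether the record is an IT incident...
incident = create_record("incident", {"short_description": "Email outage in Building 7"})
# ...or a request in a custom, business-built application table.
facilities = create_record("u_facilities_request", {"short_description": "Broken desk lamp, floor 3"})
print(incident["sys_id"], facilities["sys_id"])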

As a cloud service, ServiceNow, if it attracts a large clientele outside of IT, could prove sticky too. 

 

Specifically, the new ServiceNow capabilities include:

  • Service Creator -- a new feature that allows non-technical business users to create service-oriented applications faster than ever before
  • Form Designer -- a new feature that enables rapid creation and modification of forms with visual drag-and-drop controls
  • Facilities Service Automation -- a new application that routes requests to the appropriate facilities specialists and displays incidents on floor-plan visualizations
  • Visual Task Boards -- a new feature to organize services and other tasks using kanban-inspired boards that foster collaboration and increase productivity
  • Demand Management -- a new application that consolidates strategic requests from the business to IT and automates the steps in the investment decision process
  • CIO Roadmap -- a new timeline visualization feature that displays prioritized investment decisions across business functions
  • Event Management -- a new application that collects and transforms infrastructure events from third-party monitoring tools into meaningful alerts that trigger service workflows (a generic sketch of this event-to-alert pattern follows the list)
  • Configuration Automation -- an application that controls and governs infrastructure configuration changes, enhanced to work in environments managed with Chef data center automation
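
As flagged in the Event Management item above, here is a generic sketch of the event-to-alert pattern: deduplicate noisy monitoring events and promote only those that repeat often or are severe enough. The thresholds, field names, and function are invented for illustration and do not represent ServiceNow's implementation.

from collections import Counter

SEVERITY = {"info": 0, "warning": 1, "critical": 2}

def events_to_alerts(raw_events, min_severity="warning", repeat_threshold=3):
    """Deduplicate noisy monitoring events and keep only actionable alerts."""
    counts = Counter((e["source"], e["metric"]) for e in raw_events)
    alerts, seen = [], set()
    for e in raw_events:
        key = (e["source"], e["metric"])
        noisy_enough = counts[key] >= repeat_threshold
        severe_enough = SEVERITY[e["severity"]] >= SEVERITY[min_severity]
        if key not in seen and (noisy_enough or severe_enough):
            seen.add(key)
            alerts.append({"source": e["source"], "metric": e["metric"],
                           "severity": e["severity"], "occurrences": counts[key]})
    return alerts

raw = [{"source": "web01", "metric": "cpu", "severity": "warning"}] * 4 + \
      [{"source": "db01", "metric": "disk", "severity": "critical"}]
print(events_to_alerts(raw))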

For more, see a blog post on today's news from Wright.


Tags:  BriefingsDirect  cloud computing  Dana Gardner  Dave Wright  Interarbor Solutions  ITSM  SaaS  ServiceNow 

 