Posted By Dana L Gardner
As a provider of both application development management and infrastructure outsourcing, Denmark-based NNIT needed a better way to track, manage and govern the more than 10,000 services across its global data centers.
Beginning in 2010, the journey to better overall services automation paved the way to far stronger cloud services delivery, too. NNIT uses HP Cloud Service Automation (CSA) to improve its deployment of IT applications and data, and to provide higher overall service-delivery speed and efficiency.
To learn more about how services standardization leads to improved cloud automation, BriefingsDirect spoke with Jesper Bagh, IT Architect and cloud expert at NNIT, based in Copenhagen. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Tell us about your company and what you do. Then, we’ll get into some of the services delivery problems and solutions that you've been tasked with resolving.
Bagh: NNIT is a service provider located in Denmark. We have offices around the world -- in China, the Philippines, the Czech Republic, and the United States. We have 2,200 employees globally, and we're a subsidiary of Novo Nordisk, the pharmaceutical company.
My responsibility is to ensure that the company's business goals can be translated into functional requirements, and that those functional requirements become projects the organization can deliver.
We’re a wall-to-wall, full-service provider. So we provide both application development management and infrastructure outsourcing. Cloud is just one aspect that we’re delivering services on. We started off by doing service-portfolio management and cataloging of our services, trying to standardize the services that we have on the shelf ready for our customers.
That allowed us to then put offerings into a cloud, and to show the benefits of standardizing services, doing cloud well, and focusing on dedicated customers. We still have customers using our facility management who are not able to leverage cloud services because of compliance or regulatory demands.
We have more than 10,000 services in our data centers. We're now trying to broaden the capabilities of cloud delivery to the rest of the infrastructure so that we gain a more competitive edge. We're able to deliver better quality, and the end users -- at the end of the day -- get their services faster.
Back in the good old days, developers were in one silo and operations were in another silo. Now, we see a mix of resources, both in operations and in development.
We embarked on CSA together with HP back in 2010. Back then, CSA consisted of many different software applications. It wasn't really complete software back then. Now, it’s a full suite of software.
It has helped us to show to our internal groups -- and our customers -- that we have services in the cloud. For us it has been a tremendous journey to show that you can deliver these services fully automatically, and by running them well, we can gain great efficiency.
Gardner: How has this benefited your speed-to-value when it comes to new applications?
Bagh: The adoption of automation is an ongoing journey. I imagine other companies have also had the opportunity of adopting a new breed of software, and a new life in automation and orchestration. What we see is that the traditional operations divisions now suddenly find developers trying to comprehend what they mean, and trying to work with them to deliver operations automatically.
Back in the good old days, developers were in one silo, and operations were in another silo. Now, we see a mix of resources -- both in operations and in development. So the organizational change management derived from automation projects is key. We started up, when we did service cataloging and service portfolio management, by doing organizational change to see if this could fit into our vision.
Gardner: Now, a lot of people these days like to measure things. It’s a very data-driven era. Have you been able to develop any metrics of how your service automation and cloud-infrastructure developments have shown results, whether it’s productivity benefits or speeds and feeds? Have you measured this as a time-to-value or a time-to-delivery benefit? What have you come up with?
Bagh: As part of the cloud project, we did two things. We did infrastructure as a service (IaaS), but we also did a value-add on IaaS. We were able to deliver qualified IaaS to the life-science industry, fully compliant. On traditional infrastructure, that alone would have taken us weeks or months to deliver servers, because of all the process work involved. With CSA and the GxP Cloud, we were able to deliver the same server within a matter of hours. So that's a measurable efficiency that is highly recognized.
Gardner: For other organizations that are also grappling with these issues and trying to go over organization and silo boundaries for improvement in collaboration, do you have any words of advice? Now that you've been doing this for some time and at that key architect level, which I think is really important, what thoughts do you have that you could share with others, lessons learned perhaps?
Bagh: The lesson learned is that having senior management focus on the entire process is key. Getting the organization to recognize the change is a matter of change management. So communication is key. Standardization before automation is key.
You need to start out by standardizing your services, doing the real architectural work, identifying which components you have and which components you don't have, and matching them up. It's like assembling all the Lego blocks in order to build the house. That's key. The parallel I always use is that my job as an IT architect is no different from that of an architect building a house.
The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers.
Gardner: Looking to the future, are there other aspects of service delivery, perhaps ways in which you could gather insights into what's happening across your infrastructure and the results, that end users are seeing through the applications? Do you have any thoughts about where the next steps might be?
Bagh: The next step for us is to be more transparent to our customers. So the vision is now we can deliver services fully automatically. We can run them semi-automatically. We will still do funny stuff from time to time that you need to keep your eyes on. But in order for us to show the value, we need to report on it.
The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers. We have a policy called Open and Honest Value-Adding. From that, we want to show our customers that if we can deliver a service fully automatically and standardized, they know what they get because they see it in a catalog. Then, we should be able to report on it live for the users.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
Cloud Service Automation
Posted By Dana L Gardner, Wednesday, July 30, 2014
Over the past five years, the impetus for cloud adoption has been primarily about advancing the IT infrastructure-as-a-service (IaaS) fabric or utility model, and increasingly seeking both applications and discrete IT workload support services from Internet-based providers.
But as adoption of these models has unfolded, it's become clear that the impacts and implications of cloud commerce are much broader, benefiting the business as a whole as an innovation engine -- even across whole industries.
Recent research shows us that business leaders are now eager to move beyond cost and efficiency gains from cloud to reap far greater rewards, to in essence rewrite the rules of commerce.
Our latest BriefingsDirect discussion therefore explores the expanding impact that cloud computing is having as a strategic business revolution -- and not just as an IT efficiency shift. Join a panel of experts and practitioners of cloud to unpack how modern enterprises have a unique opportunity to gain powerful new means to greater business outcomes.
Our panelists are: Ed Cone, the Managing Editor of Thought Leadership at Oxford Economics; Ralf Steinbach, Director of Global Software Architecture at Groupe Danone, the French food multinational based in Paris; Bryan Acker, Culture Change Ambassador for the TELUS Transformation Office at TELUS, the Canadian telecommunications firm; and Tim Minahan, Chief Marketing Officer for SAP Cloud and Line of Business Solutions. The panel is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: What has the research at Oxford Economics been telling you about how cloud is reshaping businesses?
Cone: We did a survey for SAP last year, and that became the basis for this program. We went out to 200 executives around the world and asked them, "What are you doing in the cloud? Are you still looking at it for just process speed, efficiency, and cost cutting?"
The numbers that came back were really strong in terms of actually being a part of the business function. Beyond those basics, cloud is very much part of the daily reality of companies today.
We saw that the leading expectation for cloud to deliver significant improvement was in productivity, innovation, and revenue generation. So obviously process, speed, efficiency, and cost cutting are still very important to business, but people are looking to cloud for new lines of business, entering new markets, and developing new products.
In this program, what we did was take that information and go out to executives for live interviews to dive deep into how cloud has become the new engine of business, how these expectations are being met at companies around the world.
Gardner: Are businesses doing this intentionally, or are they basically being forced by what's happening around them?
Minahan: Increasingly, as was just indicated, businesses are moving beyond the IT efficiencies and the total cost of ownership (TCO) benefits of the cloud, and the cloud certainly offers benefits in those areas.
But really what's driving adoption, what's moving us to this tipping point, is that now, by some estimates, 75 percent of all new investments are going into the cloud or hybrid models. Increasingly, businesses are viewing the cloud as a platform for innovation and entirely new engagement models with their customers, their employees, their suppliers and partners, and in some cases, to create entirely new business models.
Just think about what cloud has done for our personal lives. Who would have thought, a few years ago, that Apple would be used to run your home? This is the Apple Home concept that allows you to monitor and manage all of your devices -- your air-conditioning, your alarm, music, and television -- remotely through the cloud.
There are even quasi-business B2B and B2C models around crowdsourcing and crowdfunding from folks like Kickstarter, or payment offerings like Square. These are entirely new engagement models, new business models, built on the back of this emergence of cloud, mobile, and social capabilities.
Gardner: Right, and it seems that one of these benefits is that we can cross boundaries of time, space, and geography very easily, almost transparently, and that requires new thinking in order to take advantage of it.
Bryan, at TELUS, as Culture Change Ambassador, are you part of the process for helping people think differently and therefore be able to exploit what cloud enables?
Flexible work schedule
Acker: One hundred percent. It's actually a great segue, because at TELUS we have a flexible work arrangement, where we want 70 percent of our employees to be working either from home or remotely. What that means is we have to have the tools and the culture in place that people understand, that they can access data and relevant information, wherever they are.
It doesn't matter if they're at home, like I am today, on the road, or at a client site, they need to be able to get the information to provide the best customer experience and provide the right answer at the right time.
So by switching to some of these great tools -- because collaboration is part of TELUS's cultural DNA -- we've actually been able to tear down silos we didn't even know we were creating.
We were trying to provide all the tools, but now people have an end-to-end view of every record for customers, as well as employees and the collaboration involving courses and learning opportunities. They have access to everything when they need it and they can take ownership of the customer experience or even their own career, which is fantastic for us.
Gardner: Ralf, at Danone, as Director of Global Software Architecture, you clearly have your feet on the IT path and you've seen how things have evolved. Do you see the shift to cloud as a modest evolution, or is this something that changes the game?
Steinbach: We've been looking at cloud for quite some time now. We've started several projects in the cloud, mainly in two areas. One involves the supporting functions of our business, which are HR, travel expenses, and mail. There, we see a huge advantage in using standardized services in the cloud.
In these functions we do not need any specifics. The cloud comes standard, and you cannot change it the way you can with on-premises SAP systems. You can't adapt the code. But that is one area where we think there's value in using cloud applications.
The other area where we really see the cloud as valued is in our digital marketing initiatives. There, we really need the flexibility of the cloud. Digital marketing is changing every day. There's a lot of innovation there and there the cloud gives us flexibility in terms of resources that we need to support that. And, the innovation cycles of our providers are much faster than they would be on premises. These are the two main areas where we use the cloud today.
Cone: Ralf, it was interesting to me, when I was reading through the transcript of your interview and working on the case studies we did, that it is even changing business models. It's allowing Danone to go straight to the consumer, where previously your customer had been the retailer. Cloud in new geographic markets is letting you reach straight to the end user, the end buyer.
Steinbach: That's what I meant when I talked about digital marketing. Today, all consumer packaged goods companies like Danone are looking at connecting to their consumers and not to the retailers, as in the past. We're really focusing on the end consumer, and the cloud offers us new possibilities to do that, whether via mobile applications, websites, and so on.
One thing that's important is the flexibility of the systems, because we don't know how many consumers we'll address. It could be a few, but it could be over a million. So we need a flexible architecture, and on-premises we could not manage that.
Gardner: The concept of speed seems to come up more and more. We're talking about speed of innovation, agility, direct lines of communication to customers and, of course, also supply-chain direct communication speed as well. How prominent did you see speed and the need for speed in business in your recent research?
We're really focusing on the end consumer, and the cloud offers us new possibilities to do that.
Cone: Well, speed was important -- and it's speed across different dimensions. It's speed to enter a new market or it's speed to collaborate within your own company, within your own organization.
This idea of taking IT and pushing it out to the people, to the customer, and really to the line of business allows them to have intimate contact and to move quickly, but also to break down these barriers of geography.
We did a case study with another large company, Hero, a large maker of motorcycles and two-wheeled vehicles in India. What they're doing with cloud-enabled, customer-facing technology is moving their service operation outside of dealerships into the countryside, out across India. They go to parks and set up what they call service camps.
There, the speed element is the speed and the convenience with which you are able to get your bike serviced, and that's having a large measurable impact on their business. So it is speed, but it is speed across multiple dimensions.
Minahan: At the core, the cloud is really all about unlocking new innovations, providing agility in the business, allowing companies to be able to adapt their processes very, very quickly, and even create entirely new engagement models, and that's what we are seeing.
It is not just the cloud, though. This convergence of cloud, big data, analytics, mobile and social, and business networks really ushers in ultimately a new paradigm for business computing, one where applications are no longer just built for enterprise compliance or to be the system of record. Instead, they're really designed to engage and empower the individual user.
It's one that ushers in a new era of innovation for the business, where we can enable new engagement models with customers, employees, suppliers, and other partners.
We've heard some great examples here, but some others were very similar to the experience that Danone has seen. T-Mobile is leveraging the cloud not to replace its traditional systems of records, but to extend them with the cloud, to create a new model for social care, helping monitor conversations on its brand, and engage customer issues across multiple channels.
This convergence of cloud, big data, analytics, mobile and social, and business networks really ushers in ultimately a new paradigm for business computing.
So not just their traditional support channels, but Twitter and Facebook, where these conversations are happening -- and it has really empowered them to deliver what has become a phenomenal "Cinderella, worst-to-first" story for customer support and satisfaction.
Now, they're seeing first time resolution rates that have gone from the low teens to greater than 94 percent. Obviously, that has a massive impact on customer satisfaction and renewals and is all powered by not throwing out the systems that they've used so long, but by extending them with the cloud to achieve new innovations and then drive new engagement models.
Gardner: Tim, another factor here, in a sense, levels the playing field. When you move to the cloud, small-to-medium-sized enterprises (SMBs) can enjoy the same benefit that you just described for example from T-Mobile. Are you at SAP seeing any movement in terms of the size or type of organizations that can exploit these new benefits?
Minahan: What's interesting, Dana, is that you and I have been around this industry for quite some time, and the original thought was that the cloud was the great democratizer of computing power.
It allowed SMBs to get the same level of applications and infrastructure support that their larger competitors have had for years. That's certainly true, but large enterprises have been aggressively adopting it at an equal pace with SMBs.
All sizes of companies
The cloud is being used not only to accelerate process efficiency and productivity, but to unlock innovations for companies of all sizes. Large enterprises like UPS, Deutsche Bank, and Danone are using cloud-based business applications. In the case of UPS and Deutsche Bank, they're using business networks to extend their traditional supply chain and financial systems to collaborate better with their suppliers, bankers, and other partners.
It's being used by small upstarts as well. These are companies that we talked about in the past like Mediafly, a mobile marketing start-up. It's using dynamic discounting solutions in the cloud to get paid faster, fund development of new features, and take on new business.
There's Sage Health Solutions, a company started by two stay-at-home moms in South Africa that has grown from zero to a multi-million-dollar operation, all powered by leveraging the cloud to enable new business models.
Cone: To follow on what Tim said about the broad gamut of usage across company sizes, and his earlier mention of mobile: what we saw in our survey is that mobile is of great importance to companies as a way of reaching their customers, and for internal productivity as well. But reaching customers is actually the higher priority, and that comes down to the old adage: You have to fish where the fish are.
The cloud is being used not only to accelerate process efficiency and productivity, but to unlock innovations for companies of all sizes.
Look at what Danone is doing when they're setting up direct-to-customer technologies and marketing. They're going into markets where people don't necessarily have laptops or landlines. They're leapfrogging that to a world where people have mobile devices.
So if you have mobile customers, and as Tim said, think of the consumer experience, that is how we all live our lives now. No matter what size your company is, you have to reach your customers the way your customer lives now -- and that is mobile.
Gardner: Tell us a little bit about your research, how you have gone about it, and how that new level of pervasive collaboration was demonstrated in your findings.
Cone: In terms of the research, as I said, we went out to 200 execs around the world and asked them a series of questions about what their investment plans were. It was baseline survey information. What are you doing in the cloud, how much of it are you doing, and what are the key benefits that you're getting?
Then, as we went deeper in this phase of the project, we found that collaboration has different meanings. It can be collaboration within the company. It can be with partners, which cloud platforms allow you to do more easily. It's also this key relationship, a key area of collaboration between IT and the business.
What we see in this research is that IT is increasingly seen as a partner for the business, as a way of driving revenue via the cloud. Across the four regions that we surveyed -- North America, Latin America, EMEA, and APAC -- a very high percentage of companies said that IT is emerging as a valued partner of the business, not just a support function. I think that's a key collaborative relationship that I'm sure our guests are seeing in their own companies.
Gardner: Just to be clear, Ed, this is ongoing research. You're already back in the field and you'll be updating some of these findings soon?
We're really interested to see how people are doing compared to the targets they set and what their new targets are.
Cone: Yes, we're really excited about that, Dana. We did this survey last year for SAP. Then, about a year later, we used those numbers and did these in-depth research interviews to look at the use of the cloud to drive business. This summer, we're re-fielding the survey to see how things have changed, and how the view of the future has changed.
We ask a lot of questions about where they are now, and where they think they'll be in three years. We're really interested to see how people are doing compared to the targets they set and what their new targets are. So we will have some fresh numbers and fresh reports to talk to you about by Q3 or Q4.
Gardner: Let us look into those actual examples now and go back to Bryan at TELUS.
Acker: I have a tangible example that might help express the value of collaboration at TELUS and something that people don't think about, and that is safety.
We have a lot of field technicians who are in remote areas but have mobile access. A perfect example: a technician may run into a situation where he's a little unsure of what to do, and it's potentially unsafe.
Because of the mobile access and the cloud, we've enabled them to quickly record a video, upload it directly to our SAP Jam system, which is our collaborative tool suite that we use, and share it with a collection of other technicians, not just the person they can call.
What happens then is that people can say, "This is unsafe; you need to do X, Y, and Z." We can even push required training to them, so they can be sure they're making the right decision. All of a sudden, that becomes a safer situation, and the technician is not putting themselves at risk. This is really important, because people don't think of those real, tangible examples. They often feel that they're just sharing information back and forth.
But in terms of what we are doing and where we are going, I sit in HR, and we're trying to improve the business process. We now have all of our information in one system of record, with an integrated learning management system (LMS) and the ability to analyze talent, so we make the correct hires.
We now trust the information implicitly and we're able to make the correct decision, whether it means customer information, recruiting choices, hiring choices, or performance choices.
Now, we're in a situation where we're only going to maximize and try to leverage the cloud for even more innovation, because now people are singing from the same choir sheet, so to speak.
We now trust the information implicitly and we're able to make the correct decision, whether it means customer information, recruiting choices, hiring choices, or performance choices.
We have access to the same system of record -- a single version of the truth -- and that's the first time we've had that. Now, recruiting can talk to learning, who can talk to performance, who can talk to technicians, and we know they all get a consistent version of the truth. That is really important for us.
Gardner: Those are some excellent examples of how mobile enhances cloud. That extends the value of mobile. That brings in collaboration and, at the same time, creates data and analysis benefits that can then be fed back into that process.
So there really is a cyclical adoption value here. I'd like to go back to the cultural part of this. Bryan, how do you make sure that that adoption cycle doesn't spin out of control? Is there a lack of governance? Do you feel like you can control what goes on, or are we perhaps in the period of creative chaos that we should let spin off on its own in any way?
Acker: That’s a great question, and I'm not sure if TELUS handles this in a unique way, but we definitely had a very detailed plan. The first thing we did was have collaboration as one of our valued attributes or one of our leadership competencies. People are expected to collaborate, and their performance review is dependent on that.
What that means is we can provide tools to facilitate collaboration. It doesn't matter if you're collaborating through a phone call, through a water-cooler chat, or through technology. Our employees are expected to collaborate. They know that it's part of their performance cycle and it's targeted toward their achievements for the year. We trust them to do the right thing.
We actually encourage a little bit of freedom. We want to push the boundaries. Our governance is not so tight that they are afraid to comment incorrectly or afraid to ask a tough question.
Flattening the hierarchy
What we're seeing now is individual team members challenging leadership on specific questions, and we're having honest and frank discussions that push the organization forward and make us make the correct choice every time, which is really encouraging. Now, we're really flattening our hierarchy, and the cloud is enabling us to do that.
Gardner: That sounds like a very powerful engine of innovation, allowing that freedom, but then having it be controlled, managed, and understood at the same time. That’s amazing. Ed, do you have any reactions to what Bryan just said about how innovation is manifesting itself newly there at TELUS?
Cone: When we spoke to TELUS, I was interested in that cultural aspect of it. I'm sure the guys on the call would disagree with me on a technical level, but we like to say that technology is easy, and culture is hard. The technology works -- you implement it and figure it out -- but getting people to change is really difficult.
The example that we use in the case study, SAP on TELUS, was about changing culture through gamification, allowing people to learn via an online cloud-based virtual game. It was this massive effort and it engaged a huge number of employees across this large company.
It really shifted the employee culture, and that had an impact on customer service and therefore on business performance.
It really shifted the employee culture, and that had an impact on customer service and therefore on business performance. It’s a way that the cloud is moving mountains and it’s addressing the hard thing to change, which is human behavior and attitudes.
Minahan: We talk about the convergence of these different technologies -- cloud, social, and mobile -- all the time, but alongside that technology convergence, there is massive change going on in the workforce and in what constitutes the workforce.
Bryan talked about how there is a leveling of the organization, doing away with the traditional hierarchical command and control, where information is isolated in the hands of a few and new, eager employees don't get access to solving some of the tough problems. All of that is being flattened and accelerated, powered by cloud and social collaboration tools.
Also, we're seeing a shift in what constitutes the workforce, and a major shift in how companies view it. Contingent and statement-of-work (SOW) workers -- basically non-payroll employees -- now represent a third of the typical workforce. In the next few years, this will grow to more than half.
It’s already occurring in certain industries, like pharmaceuticals, mining, retail, and oil and gas. It's changing how folks view the workforce. They're moving from a functional management of someone -- this is their job; this is what they do -- to managing pools of talent or skills that can be rapidly deployed to address a given problem or develop a new innovative product or service.
These pools of talent will include both people on your payroll and off your payroll. Tracking, managing, organizing, and engaging these pools of talent is only possible through the cloud and through mobile, where multiple parties from multiple organizations could view, access, collaborate, and share knowledge and experiences running on a shared-technology platform.
Customer is evolving
Acker: That extends quite naturally to the customer. The customer is evolving faster than almost anything and they expect 24x7 access to support. They expect authentic responses and they now have access to just as much information as the customer service agent.
Without mobile, if you can't connect with those customers and be factual, you're in trouble. Your customers are going to reply in social-media channels and in public forums, and you're going to lose business and you're going to lose trust with your existing customers as well.
Minahan: I fully agree. The only addition to that is that they also expect to be able to engage you through any channel -- whether it's their mobile phone, their laptop, or in some cases directly face to face, on the phone, or in a retail outlet -- and have the same consistent experience, without needing to reintroduce who they are and what their problem is as they move from channel to channel.
Gardner: Clearly we're seeing how things that just weren’t possible before the cloud are having pervasive impacts on businesses. Let’s look at a new business example, again with Danone. Ralf, tell us a little bit about how cloud has had strategic implications for you. You have many brands, many lines of business. How is cloud allowing Danone to function better as a whole?
The cloud is definitely the best option for us to start these new businesses and connect to all consumers.
Steinbach: We have a strategy around digital marketing and, as you know, we're operating in almost every country in the world. Even though we're a big company, locally, we're sometimes quite small. We're trying to build up new markets in emerging countries with very small investments in the beginning. There, the cloud is definitely the best option for us to start these new businesses and connect to all consumers.
Money matters, even for a big company like Danone. That’s very important for us. If you look at Africa, there are completely different business models that we need to address.
People in Africa pay with their mobile phones. Some sell yogurt on a bicycle. Women pick up some yogurt in the morning and then they sell them on the road. We need to do businesses with these people as well. Obviously, an enterprise resource planning (ERP) system isn't able to do that, but the cloud is a much better adapted platform to do this sort of business.
Gardner: The C-suite likes to look at numbers. How do we measure innovation?
Cone: We're doing some research on another program right now on that very topic for a non-SAP program. That is showing us that metrics for success on basic things like key performance indicators (KPIs) for progress of migration into the cloud are lacking at a lot of companies. Basic return on investment (ROI) numbers are lacking at a lot of companies.
We're really old school. To go back to your definition of what a business is, we think it’s an organization that’s set up to make money for shareholders and deliver value for stakeholders. By those measures, at least by dotted line, the key metrics are these: What is your financial performance? Are you entering, as we mentioned before, new markets and creating new products?
So the metrics we're seeing that are cloud specific aren't universal yet. In a broader sense, as cloud becomes an everyday set of tools, the point of those tools is to make the business run better, and we are seeing a correlation between effective use of the cloud and business performance.
There are entirely new engagement models and business models that the companies hadn’t even thought of before.
Minahan: What the cloud, mobile, and social bring to bear, in addition to new collaboration models, is that they generate an unbelievable amount of new information, oftentimes in an unstructured way. There's a need to aggregate that information and analyze it in new ways to detect and predict, to do propensity modeling on your customers, your supply chain, and your employees. That progression and development are extremely powerful.
I think we’ve just scratched the surface. As an industry, we provided the channels through which to collaborate, as we heard today. There are entirely new engagement models and business models that the companies hadn’t even thought of before. Once you have that information, once you have that connectivity, once you have that collaboration, you can begin to investigate through trial and error.
To answer your question about measurement on this, yes, we need measurement of the business process and the business outcome. Let’s not forget why companies adopt technology. It’s not just for technology sake. It’s to effect the change. It’s to effect more efficiency, greater productivity, and new engagement capabilities.
Measuring the business benefit is what we're seeing and what we’re advising our customers to do, rather than just tracking whether we're adopting more cloud in our infrastructure portfolios.
The focus today is largely driven by the fact that the lines of business are now more engaged in the buying decision and in shaping what they want from a technology standpoint to help them enable their business process. So the metrics have shifted from one of speeds and feeds and users to one of business outcomes.
Gardner: Bryan at TELUS in Toronto, you're closely associated with human resources productivity and the softer metrics of employee involvement and dedication, that sort of thing. Are there ways you can think of in which cloud adoption and innovation, as we’ve been describing, have had unintended consequences for employee empowerment or that innovation equation? How do you view measuring the success of cloud adoption?
Simplifying the process
Acker: We measure our customers' success by their likelihood to recommend. Will a TELUS customer recommend our services and products to friends, family, and peers?
We measure internal success by our employee engagement metric. If the customers are satisfied and the employees are engaged and fulfilled at work, that means that we're probably moving in the right direction. We can kind of reverse engineer to see what changes are helping us. That allows us to take our information and innovation from the cloud and inspire better behaviors and better process.
We can say, "You know what, in this pocket we’ve analyzed that our customers are likely to recommend it higher than anywhere else in Canada. What are they doing?" We can look back through the information shared on the cloud and see the great customer success stories or the great team building that’s driving engagement through the roof.
We can say, "This is the process we have to replicate and spread throughout all of our centers." Then, we can tweak it for cultural specifics. But because of that, we can use the cloud to inspire better behavior, not just say that we had 40,000 users and 2,000 hits on this blog post. We're really trying to get away from the quantitative and get into the qualitative to drive change throughout the organization.
Gardner: What comes next? Where do you see the impacts of cloud adoption in your business over the next couple of years?
Steinbach: There are still some challenges in front of us. One of the challenges is China. China is one of the biggest markets, but cloud services are not always available or they're very slow. If your cloud solution is hosted outside of China, there's a big problem. These are probably technical challenges, but we have to find solutions with our partners there, so that they can establish their services in China.
That’s one of the challenges. The other is that the cloud might change the role of IT in our organization. In the past we owned the systems and the applications. Today, the business can basically buy cloud services with a credit card. So you could imagine that they won’t need us anymore in the future, but that's not true.
As an IT organization, we probably have to find our role inside the organization, moving from just providing solutions or hardware to being an ambassador for the business and helping them make the right decisions. Problems will remain, such as the integration between different applications. It doesn’t get easier in the cloud, so that’s where I see the challenge.
And last but not least, it's about security. We take that really seriously. If we store data, whether it's our employees' or our consumers', we have to make sure that our cloud providers have the same standards of security and that there are no leaks. That’s very, very important for us. And there are legal aspects as well.
We've just started. There are still a lot of things to do in the next few years, but we're definitely going on with our strategy toward the cloud and toward mobile. And, at the end of the day, it all fits together. I think it was said before that it's not only cloud; it's also big data, collaboration, and mobile. You have to see the whole thing as one package of opportunities.
Gardner: What do you think might be some of the impacts a few years from now that we're only just starting to realize?
Acker: On a more positive note, which is just the other side of the coin, obviously the challenges are there, but we're actually just starting to be able to experience the fact that innovation at TELUS is moving faster than it used to. We're no longer dependent on the speed at which our pre-assigned resources can make change and develop new products.
IT can now look at it from a more strategic point of view, which is great. Now, we're maximizing quarterly releases from systems that are leveraging the input from multiple companies around the world, not just how fast our learning team can develop something or how fast our IT team can build new functionality into our products.
We're no longer limited by the resources, and innovation is flying forward. That, for us, is the biggest unexpected gain. We're seeing all this technology that used to take months or years to change now on a quarterly release schedule. This is fantastic. Even within a year of being on our cloud-computing system, we're so happy, and that is inspiring to people. They're maximizing that and trying to push the organization forward as well. So, that’s a real big benefit.
Gardner: Tim, do you have any thoughts about where this can lead us in the next few years that we haven’t yet hit upon, things you're just starting to see the first real glimmers of?
I think the biggest thing is that the cloud is going to unlock new business models and new organization models.
Minahan: A lot of it has been touched on here. We're seeing a massive shift in what the role of IT is, moving from one of deploying technology and integrating things to really becoming business process experts.
We talked a bit about the amount of data and the insights that are now available to help you better understand and predict the appetites of your customers, to help you even determine when your machines might fail and when it's time to reorder or schedule a service repair.
I think the biggest thing is that the cloud is going to unlock new business models and new organization models. We talked a bit about TELUS and their work patterns, in which most of the workers are remote and how they are engaging the field service technicians in the field.
We talked about the growing contingent workforce and how the cloud is enabling folks to collaborate with, onboard, and skill up those non-payroll employees much more quickly. We're going to see new virtual enterprises. We're talking about borderless enterprises that allow you to organize not just pools of talent, but entire value chains, and to collaborate in a much more transparent way.
We mentioned Apple Home before. You're beginning to see it with 3D printers. It's this whole idea where more and more companies become digital businesses. This isn’t just about omni-channel commerce providing a single customer experience across multiple channels.
It's actually about moving more and more of what you deliver, the solutions and the products you formerly delivered physically, to digital bits that can be tested, experienced, and downloaded all online.
All of this is being empowered by this massive convergence of cloud, mobility, social and business networks, and big data.
What comes next
Cone: To follow on what Tim said about the borderless enterprise, when we ask people what’s in the cloud now and what’s going to be substantially cloud based in three years, three of the highest-growth areas were innovation in R&D, supply chain, and HR. All of those go straight to this idea that boundaryless digital enterprises are emerging and that the cloud will be the underpinning of these enterprises.
We're working with Tim right now on a big global study about the workforce. When I talk about culture and the way companies function internally, a year ago, when we started this research, HR was the least likely function of the ones we queried to be in the cloud, and it's going to have massive growth in the next couple of years.
These stories start to converge of boundaryless and culture, all coming together via the cloud.
These stories start to converge of boundaryless and culture, all coming together via the cloud. That’s the segue to say that we're really excited to see how these numbers look when we refield this survey this summer, because that progress is snowballing and accelerating beyond even what people thought it was the last time we asked them.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP.
Posted By Dana L Gardner,
Wednesday, July 23, 2014
| Comments (0)
Three years ago, Systems Mechanics Limited used relational databases to assemble and analyze some 20 different data sources in near real-time. But most relational database appliances used 1980s technical approaches, and the ability to connect more data and manage more events capped out. The runway for their business expansion had simply ended.
So Systems Mechanics looked for a platform that scales well and provides real-time data analysis, too. At the volumes and price they needed, HP Vertica has since scaled without limit ... an endless runway.
To learn more about how Systems Mechanics improved how their products best deliver business intelligence (BI), analytics streaming, and data analysis, BriefingsDirect spoke with Andy Stubley, Vice President of Sales and Marketing at Systems Mechanics, based in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: You've been doing a lot with data analysis at Systems Mechanics, and monetizing that in some very compelling ways.
Stubley: Yes, indeed. Systems Mechanics is principally a consultancy and a software developer. We’ve been working in the telco space for the last 10-15 years. We also have a history in retail and financial services.
The focus we've had recently and the products we’ve developed into our Zen family are based on big data, particularly in telcos, as they evolve from principally old analog conversations into devices where people have smartphone applications -- and data becomes ever more important.
All that data and all those people connected to the network cause a lot more events that need to be managed, and that data is both a cost to the business and an opportunity to optimize the business. So we have a cost reduction we apply and a revenue upside we apply as well.
Gardner: What’s a typical way telcos use Zen, and that analysis?
Stubley: Let’s take a scenario where you’re looking in network and you can’t make a phone call. Two major systems are catching that information. One is a fault-management system that’s telling you there is a fault on the network and it reports that back to the telecom itself.
The second one is the performance-management system. That doesn’t flag faults as such, but it tells you when thresholds are being breached, which may have an impact on performance. Either of those can have an impact on your customer, and from a customer’s perspective, you might also be having a problem with the network that isn’t reported by either system.
We’re finding that social media is getting a bigger play in this space. Why is that? Particularly among younger users of consumer mobile telcos, if they can’t get a signal or they can’t make a phone call, they get onto social media and they trash the brand.
They’re making noise. A trend is combining fault management and performance management, which are logical partners with social media. All of a sudden, rather than having a couple of systems, you have three.
In our world, we can put 25 or 30 different data sources on to a single Zen platform. In fact, there is no theoretical limit to the number we could, but 20 to 30 is quite typical now. That enables us to manage all the different network elements, different types of mobile technologies, LTE, 3G, and 2G. It could be Ericsson, Nokia, Huawei, ZTE, or Alcatel-Lucent. There is an amazing range of equipment, all currently managed through separate entities. We’re offering a platform to pull it all together in one unit.
The other way I tend to look at it is that we’re trying to make telcos work the way a human does. Humans are the best decision-making platforms in the world, and we can probably still claim that. As humans, we have conscious and unconscious processes running. We don’t think about breathing or pumping blood around our bodies, but it’s happening all the time.
We use a solution with visualization, because in the world of big data, you can’t understand data in numbers.
We have senses that are pulling in massive amounts of information from the outside world. You’re listening to me now, and you’re probably doing a bunch of other things as well, tapping away on a table perhaps. Your senses are taking in information as you see, hear, feel, touch, and taste.
Those all contain information that’s coming into the body, but most of the activity is subconscious. That’s the Zen goal in the world of big data, and what we’re delivering in a number of places: making as many actions as possible in a telco or network environment happen in that automatic, subconscious state.
Suppose I have a problem on the network. I relate it back to the people who need to know, but I don’t require human intervention. We’re looking at a position where human intervention is reserved for spotting patterns in that information and deciding what can be done intellectually to make the business better.
That probably speaks to another point here. We use a solution with visualization, because in the world of big data, you can’t understand data in numbers. Your human brain isn’t capable of processing enough, but it is capable of identifying patterns of pictures, and that’s where we go with our visualization technology.
Gather and use data
We have a customer who is one of the largest telcos in EMEA. They’re taking in 90,000 alarms a day from the network, across their subsidiary companies, all into one environment. But 90,000 alarms needing manual intervention is a very big number.
Using the Zen technology, we’ve been able to reduce that to 10,000 alarms. We’ve effectively taken 90 percent of the manual processing out of that environment. Now, 10,000 is still a lot of alarms to deal with, but it’s a lot less frightening than 90,000, and that’s a real impact in human terms.
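The transcript doesn't describe Zen's internals, but one common way to cut an alarm count by an order of magnitude is rule-based suppression of duplicate alarms. Here is a minimal sketch of that idea, assuming hypothetical alarm fields (`element`, `type`, `time`); the real system's schema and rules would differ.

```python
from datetime import datetime, timedelta

def suppress_duplicates(alarms, window=timedelta(minutes=5)):
    """Collapse repeated alarms of the same type from the same network
    element that arrive within `window` of the first kept occurrence."""
    last_kept = {}   # (element, alarm_type) -> time of first alarm kept
    kept = []
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        key = (alarm["element"], alarm["type"])
        first = last_kept.get(key)
        if first is None or alarm["time"] - first > window:
            kept.append(alarm)
            last_kept[key] = alarm["time"]
    return kept

t0 = datetime(2014, 1, 1, 9, 0)
raw = [
    {"element": "cell-42", "type": "link-down", "time": t0},
    {"element": "cell-42", "type": "link-down", "time": t0 + timedelta(minutes=1)},
    {"element": "cell-42", "type": "link-down", "time": t0 + timedelta(minutes=2)},
    {"element": "cell-7",  "type": "threshold", "time": t0 + timedelta(minutes=3)},
]
print(len(suppress_duplicates(raw)))  # 2: one link-down kept, one threshold
```

In practice this is only one rule among many; root-cause correlation (suppressing alarms that are side effects of a known parent fault) does much of the remaining reduction.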
Gardner: Now that we understand what you do, let’s get into how you do it. What’s beneath the covers in your Zen system that allows you to confidently say you can take any volume of data you want?
If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.
Stubley: Fundamentally, that comes down to the architecture we built for Zen. The first element is our data-integration layer. We have a technology that we developed over the last 10 years specifically to capture data in telco networks. It’s real-time and rugged and it can deal with any volume. That enables us to take anything from the network and push it into our real-time database, which is HP’s Vertica solution, part of the HP HAVEn family.
Vertica's role is basically to record any amount of data in real time and scale automatically on the HP hardware platform we also use. If we need more processing power, we can add more servers to scale transparently. That enables us to take in any amount of data, which we can then process.
We have two processing layers. Referring to our earlier discussion about conscious and subconscious activity, our conscious activity is visualizing that data, and that’s done with Tableau.
We have a number of Tableau reports and dashboards with each of our product solutions. That enables us to envision what’s happening and allows the organization, the guys running the network, and the guys looking at different elements in the data to make their own decisions and identify what they might do.
We also have a streaming analytics engine that listens to the data as it comes into the system before it goes to Vertica. If we spot the patterns we’ve identified earlier “subconsciously,” we’ll then act on that data, which may be reducing an alarm count. It may be "actioning" something.
It may be sending someone an email. It may be creating a trouble ticket on a different system. Those all happen transparently and automatically. Four layers simplify the solution: data capture and integration, the real-time database, visualization, and automatic analytics.
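The four-layer flow described above can be sketched in a few lines. This is a toy illustration, not Zen's actual code: the event fields, the rule, and the in-memory `warehouse` standing in for the Vertica store are all invented for the example. The key point it shows is that streaming rules act on events *before* they land in storage, while everything is still persisted for later visualization.

```python
# Layer 1: capture -- normalize raw network events into records.
def capture(raw_feed):
    return [{"source": src, "metric": m, "value": v} for src, m, v in raw_feed]

# Layer 4 (streaming analytics): "subconscious" pattern -> automatic action.
RULES = [
    (lambda e: e["metric"] == "error_rate" and e["value"] > 0.1,
     lambda e: actions.append("ticket:" + e["source"])),
]

actions = []    # automatic actions taken (tickets, emails, alarm reductions)
warehouse = []  # stand-in for the real-time column store (Layer 2)

def ingest(raw_feed):
    for event in capture(raw_feed):
        for matches, act in RULES:
            if matches(event):
                act(event)          # act as the data streams in, before storage
        warehouse.append(event)     # persist for visualization (Layer 3)

ingest([("cell-9", "error_rate", 0.25), ("cell-9", "latency_ms", 40)])
print(actions)          # ['ticket:cell-9']
print(len(warehouse))   # 2
```

The design choice being illustrated is latency: acting in the stream means a ticket can be raised in milliseconds, while the stored copy still supports the slower, "conscious" pattern analysis done through dashboards.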
Developing high value
Gardner: When you have the confidence to scale your underlying architecture and infrastructure, and when you are able to visualize and develop high value for a vertical industry like telco, this allows you to expand into more lines of business, in terms of products and services, and also into more verticals. Where have you taken this with the Zen family, and where do you take it now in terms of your market opportunity?
Stubley: We focus on mobile telcos. That’s our heritage. We can take any data source from a telco, but we can actually take any data source from anywhere, in any platform and any company. That ranges from binary to HTML. You name it, and if you’ve got data, we could load it.
That means we can build our processing accordingly. We position what we call solution packs; a solution pack is a connector to the outside world, to the network, and it grabs the data. We’ve got an element of data modeling there, so we can load the data into Vertica. Then, we have pre-built reports in Tableau that allow us to interrogate the data automatically. That’s at a component level.
Once you have a number of components, we can then look horizontally across those different items and at how their behaviors interact with each other. In pure telco terms, we would be looking at different network devices and the end-to-end performance of the network, but the same would apply to a fraud scenario, or to someone running cable TV.
The very highest level is finding what problem you’re going to solve and then using the data to solve it.
So multi-play players are interesting because they want to monitor what’s happening with TV as well, and that fits in exactly the same category. Realistically, anybody with high-volume, real-time data can benefit from Vertica.
Another interesting play in this scenario is social gaming and online advertising. They all have similar data characteristics, very high volume and fixed data that needs to be analyzed and processed automatically.
Gardner: How long have you been using Vertica, and what is it that drove you to using it vis-à-vis alternatives?
Stubley: As far as the Zen family goes, we have used other technologies in the past, other relational databases, but we’ve used Vertica now for more than two-and-a-half years. We were looking for a platform that can scale and would give us real-time data. At the volumes we were looking at nothing could compete with Vertica at a sensible price. You can build yourself any solid solution with enough money, but we haven’t got too many customers who are prepared to make that investment.
So Vertica fits in with the technology of the 21st century. A lot of relational database appliances use 1980s thought processes. What’s happened with processing in the last few years is that nobody shares memory anymore, and our environment requires a non-shared-memory solution. Vertica has been built on that basis. It scales without limit.
One of the areas we’re looking at that I mentioned earlier was social media. Social media is a very natural play for Hadoop, and Hadoop is clearly a very cost-effective platform for vast volumes of data at real-time data load, but very slow to analyze.
So the combination of a high-volume, low-cost platform for the bulk of the data and a very high-performing real-time analytics engine is very compelling. The challenge is going to be moving the data between the two environments. That isn’t going to go away. It’s not simple, and there are a number of approaches; HP Vertica is taking some.
There is Flex Zone, and there are any number of other players in that space. The reality is that you probably reach an environment where people are parallel loading the Hadoop and the Vertica. That’s what we probably plan to do. That gives you much more resilience. So for a lot of the data we’re putting into our system, we’re actually planning to put the raw data files into Hadoop, so we can reload them as necessary to improve the resilience of the overall system, too.
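The parallel-loading plan described above, keeping the raw files in Hadoop while the parsed rows go to Vertica, can be sketched as follows. Both stores are faked with in-memory stand-ins here (a dict for the archive, a list for the analytics rows), and the one-line CSV format is invented for the example; the point is the replay path that the raw copy makes possible.

```python
archive = {}         # filename -> raw text   (the Hadoop role: cheap, raw, durable)
analytics_rows = []  # parsed rows            (the Vertica role: fast, structured)

def parallel_load(filename, raw_text):
    """Load the same feed into both stores in parallel."""
    archive[filename] = raw_text            # keep the raw file for resilience
    for line in raw_text.strip().splitlines():
        source, value = line.split(",")
        analytics_rows.append((source, float(value)))

def replay(filename):
    """Rebuild the analytics store from the archived raw file,
    e.g. after a schema change or a failed load."""
    analytics_rows.clear()
    parallel_load(filename, archive[filename])

parallel_load("events.csv", "cell-1,0.5\ncell-2,0.9\n")
print(len(analytics_rows))  # 2
```

The design rationale matches the transcript: rather than shuttling data between Hadoop and Vertica after the fact, loading both at ingest time sidesteps the hard cross-store movement problem and makes the raw archive a cheap insurance policy.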
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
Posted By Dana L Gardner,
Tuesday, July 15, 2014
| Comments (0)
An expected deluge of data and information about patients, providers, outcomes, and needed efficiencies is pushing the healthcare industry to rapid change. But more than dealing with just the volume of data is required. Interoperability, security and the ability to adapt rapidly to the lessons in the data are all essential.
The means of enabling Boundaryless Information Flow, Open Platform 3.0 adaptation, and security for the healthcare industry are then, not surprisingly, headline topics for The Open Group’s upcoming event, Enabling Boundaryless Information Flow on July 21 and 22 in Boston.
And Boston is a hotbed of innovation and adaptation in how technology, enterprise architecture, and open standards can improve communication and collaboration among healthcare-ecosystem players.
In preparation for the conference, BriefingsDirect had the opportunity to interview Jason Lee, the new Healthcare and Security Forums Director at The Open Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: I'm looking forward to the Boston conference next week and want to remind our listeners and readers that it's not too late to sign up to attend. You can learn more at www.opengroup.org.
Let’s start by talking about the relationship between Boundaryless Information Flow, which is a major theme of the conference, and healthcare. Healthcare perhaps is the killer application for Boundaryless Information Flow.
Lee: Interesting, I haven’t heard it referred to that way, but healthcare is 17 percent of the US economy. It's upwards of $3 trillion. The costs of healthcare are a problem, not just in the United States, but all over the world, and there are a great number of inefficiencies in the way we practice healthcare.
We don’t necessarily intend to be inefficient, but there are so many places and people involved in healthcare, it's very difficult to get them to speak the same language. It's almost as if you're in a large house with lots of different rooms, and every room you walk into they speak a different language. To get information to flow from one room to the other requires some active efforts, and that’s what we're undertaking here at The Open Group.
Gardner: What is it about the current collaboration approaches that don’t work? Obviously, healthcare has been around for a long time and there have been different players involved. What are the hurdles? What prevents a nice, seamless, easy flow and collaboration in information that creates better outcomes? What’s the holdup?
Lee: There are many ways to answer that question, because there are many barriers. Perhaps the simplest is the transformation of healthcare from a paper-based industry to a digital industry. Everyone has walked into a medical office, looked behind the people at the front desk, and seen file upon file and row upon row of folders, information that’s kept in a written format.
When there's been movement toward digitizing that information, not everyone has used the same system. It's almost like trains running on different gauge track. Obviously if the track going east to west is a different gauge than going north to south, then trains aren’t going to be able to travel on those same tracks. In the same way, healthcare information does not flow easily from one office to another or from one provider to another.
Gardner: So not only do we have disparate strategies for collecting and communicating health data, but we're also seeing much larger amounts of data coming from a variety of new and different places. Some of them now even involve sensors inside of patients themselves or devices that people will wear. So is the data deluge, the volume, also an issue here?
Lee: Certainly. I heard recently that an integrated health plan, which has multiple hospitals involved, contains more elements of data than the Library of Congress. As information is collected at multiple points in time, over a relatively short period of time, you really do have a data deluge. Figuring out how to find your way through all the data and look at the most relevant [information] for the patient is a great challenge.
Gardner: I suppose the bad news is that there is this deluge of data, but it’s also good news, because more data means more opportunity for analysis, a better ability to predict and determine best practices, and also provide overall lower costs with better patient care.
We, like others, put a great deal of effort into describing the problems, but figuring out how to bring IT technologies to bear on business problems.
So it seems like the stakes are rather high here to get this right, to not just crumble under a volume or an avalanche of data, but to master it, because it's perhaps the future. The solution is somewhere in there, too.
Lee: No question about it. At The Open Group, our focus is on solutions. We, like others, put a great deal of effort into describing the problems, but our emphasis is on figuring out how to bring IT technologies to bear on business problems, how to encourage different parts of organizations to speak to one another, and across organizations, to speak the same language and operate using common standards. That’s really what we're all about.
And it is, in a large sense, part of the process of helping to bring healthcare into the 21st Century. A number of industries are a couple of decades ahead of healthcare in the way they use large datasets -- big data, some people refer to it as. I'm talking about companies like big department stores and large online retailers. They really have stepped up to the plate and are using that deluge of data in ways that are very beneficial to them -- and healthcare can do the same. We're just not quite at the same level of evolution.
Gardner: And to your point, the stakes are so much higher. Retail is, of course, a big deal in the economy, but as you pointed out, healthcare is such a much larger segment. So just making modest improvements in communication, collaboration, or data analysis can reap huge rewards.
Lee: Absolutely true. There is the cost side of things, but there is also the quality side. So there are many ways in which healthcare can improve through standardization and coordinated development, using modern technology that cannot just reduce cost, but improve quality at the same time.
Gardner: I'd like to get into a few of the hotter trends. But before we do, it seems that The Open Group has recognized the importance here by devoting the entire second day of their conference in Boston, that will be on July 22, to healthcare.
Maybe you could provide us a brief overview of what participants, and even those who come in online and view recorded sessions of the conference at http://new.livestream.com/opengroup should expect? What’s going to go on July 22?
Lee: We have a packed day. We're very excited to have Dr. Joe Kvedar, a physician at Partners HealthCare and Founding Director of the Center for Connected Health, as our first plenary speaker. The title of his presentation is “Making Health Additive.”
It will become an area where standards development and The Open Group can be very helpful.
Dr. Kvedar is a widely respected expert on mobile health, which is currently the Healthcare Forum’s top work priority. As mobile medical devices become ever more available and diversified, they will enable consumers to know more about their own health and wellness.
A great deal of potentially useful health data will be generated. How this information can be used -- not just by consumers but also by the healthcare establishment that takes care of them as patients -- will become a question of increasing importance. It will become an area where standards development and The Open Group can be very helpful.
Our second plenary speaker, Proteus Duxbury, Chief Technology Officer at Connect for Health Colorado, will discuss a major feature of the Affordable Care Act -- the health insurance exchanges -- which are designed to bring health insurance to tens of millions of people who previously did not have access to it.
He is going to talk about how enterprise architecture -- which is really about getting to solutions by helping the IT folks talk to the business folks and vice versa -- has helped the State of Colorado develop their health insurance exchange.
After the plenaries, we will break up into three tracks, one of which is healthcare-focused. In this track there will be three presentations, all of which discuss how enterprise architecture and the approach to Boundaryless Information Flow can help healthcare and healthcare decision-makers become more effective and efficient.
One presentation will focus on the transformation of care delivery at the Visiting Nurse Service of New York. Another will address stewarding healthcare transformation using enterprise architecture, focusing on one of our platinum members, Oracle, and a company called Intelligent Medical Objects, and how they're working together in a productive way, bringing IT and healthcare decision-making together.
Then, the final presentation in this track will focus on the development of an enterprise architecture-based solution at an insurance company. The payers, or the insurers -- the big companies that are responsible for paying bills and collecting premiums -- have a very important role in the healthcare system that extends beyond administration of benefits. Yet, payers are not always recognized for their key responsibilities and capabilities in the area of clinical improvements and cost improvements.
With the increase in payer data brought on in large part by the adoption of a new coding system -- the ICD-10 -- which will come online this year, there will be a huge amount of additional data, including clinical data, that becomes available. At The Open Group, we consider payers -- health insurance companies (some of which are integrated with providers) -- as very important stakeholders in the big picture.
In the afternoon, we're going to switch gears a bit and have a speaker talk about the challenges, the barriers, the "pain points" in introducing new technology into healthcare systems. The focus will return to remote or mobile medical devices and the predictable but challenging barriers to getting newly generated health information to flow to doctors' offices and into patients' electronic health records and hospitals' data-keeping and data-sharing systems.
We'll have a panel of experts that responds to these pain points, these challenges, and then we'll draw heavily from the audience, who we believe will be very, very helpful, because they bring a great deal of expertise in guiding us in our work. So we're very much looking forward to the afternoon as well.
Gardner: I'd also like to remind our readers and listeners that they can take part in this by attending the conference, and there is information about that at the opengroup.org website.
It's really interesting. A couple of these different plenaries and discussions in the afternoon come back to this user-generated data. Jason, we really seem to be on the cusp of a whole new level of information that people will be able to generate about themselves through their lifestyles and new connected devices.
We hear from folks like Apple, Samsung, Google, and Microsoft. They're all pulling together information and making it easier for people to monitor not only their exercise, but also their diet, and maybe even to start using sensors to keep track of blood-sugar levels, for example.
In fact, a new Flurry Analytics survey showed a 62 percent increase in the use of health and fitness applications over the last six months on popular mobile devices. This compares to a 33 percent increase for other applications in general. So the use of health and fitness applications is growing about 87 percent faster.
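For readers who want to check that comparison, the arithmetic is straightforward: 62 percent growth against a 33 percent baseline works out to roughly 87-88 percent faster growth. A quick sketch:

```python
# Growth rates as cited from the Flurry Analytics survey.
health_fitness_growth = 62  # percent, health and fitness apps
general_app_growth = 33     # percent, all other apps

# How much faster did health and fitness apps grow, relative to the rest?
relative_uptick = (health_fitness_growth / general_app_growth - 1) * 100
print(f"{relative_uptick:.1f}% faster")  # about 87.9% -- the "87 percent" cited
```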
Tell me a little bit how you see this factoring in. Is this a mixed blessing? Will so much data generated from people in addition to the electronic medical records, for example, be a bad thing? Is this going to be a garbage in, garbage out, or is this something that could potentially be a game changer in terms of how people react to their own data -- and then bring more data into the interactions they have with healthcare providers?
Challenge to predict
Lee: It's always a challenge to predict what the market is going to do, but that's a remarkable statistic you cited. My prediction is that the increased volume of person-generated data from mobile health devices is going to be a game changer. This view also reflects how the Healthcare Forum members (which include members from Capgemini, Philips, IBM, Oracle, and HP) view the future.
The commercial demand for mobile medical devices -- things that can be worn, embedded, or swallowed, as in pills, as you mentioned -- keeps growing. The software and applications that will be developed for use with these devices are going to grow by leaps and bounds.
As you say, there are big players getting involved. Already some of the pedometer-type devices that measure the number of steps taken in a day have captured the interest of many, many people. Even David Sedaris, serious guy that he is, was writing about it recently in The New Yorker.
What we will find is that many of the health indicators that we used to have to go to the doctor or nurse or lab to get information on will become available to us through these remote devices.
There will be a question, of course, as to the reliability and validity of the information -- your point about garbage in, garbage out -- but I think standards development will help here. This, again, is where The Open Group comes in. We might also see the FDA exercising its role in ensuring safety, along with other organizations, in determining which devices are reliable.
The Open Group is working in the area of mobile data and the information systems developed around it, and their ability to (a) talk to one another, and (b) talk to the data devices and infrastructure used in doctors' offices and hospitals. This is called interoperability, and it's certainly lacking in this country.
There are already problems around interoperability and connectivity of information in the healthcare establishment as it is now. When patients and consumers start collecting their own data, and the patient is put at the center of the nexus of healthcare, then the question becomes how does that information that patients collect get back to the doctor/clinician in ways in which the data can be trusted and where the data are helpful?
After all, if a patient is wearing a medical device, there is the opportunity to collect data, about blood-sugar level let's say, throughout the day. And this is really taking healthcare outside of the four walls of the clinic and bringing information to bear that can be very, very useful to clinicians and beneficial to patients.
In short, the rapid market dynamic in mobile medical devices, and in the software and hardware that facilitates interoperability, calls for standards-based solutions that reduce costs and improve quality, all while putting the patient at the center. This is The Open Group's Healthcare Forum's sweet spot.
Gardner: It seems to me a real potential game changer as well, one in which something like Boundaryless Information Flow and standards will play an essential role, because one of the big question marks with many of the ailments in modern society has to do with lifestyle and behavior.
So often, the providers of care only really have the patient's responses to questions. But imagine having a trove of data at their disposal -- a 360-degree view of the patient -- to further the cause of understanding what's really going on, on a day-to-day basis.
But then, it's also a two-way street: being able to deliver, perhaps in an automated fashion, reinforcements, incentives, and information back to the patient in real time about behavior and lifestyle. So it strikes me as something quite promising, and I look forward to hearing more about it at the Boston conference.
Any other thoughts on this issue about patient flow of data, not just among and between providers and payers, for example, or providers in an ecosystem of care, but with the patient as the center of it all, as you said?
Lee: As more mobile medical devices come to the market, we'll find that consumers own multiple types of devices, at least some of which collect multiple types of data. So even for the patient, being at the center of their own healthcare information collection, there can be barriers to having one device talk to another. If a patient wants to keep their own personal health record, there may be difficulties in bringing all that information into one place.
So interoperability -- the need for standards, guidelines, and voluntary consensus among stakeholders about how information is represented -- becomes an issue not just between patients and their providers, but for individual consumers as well.
Gardner: And also the cloud providers. There will be a variety of large organizations with cloud-modeled services, and they are going to need to be, in some fashion, brought together, so that a complete 360-degree view of the patient is available when needed. It's going to be an interesting time.
Of course, we've also looked at many other industries and tried to create a cloud synergy, a cloud-of-clouds approach to data and also to transactions. So it's interesting how much of what's going on across industries is common, but it strikes me that, again, the scale and impact of the healthcare industry makes it a leader now, and perhaps a driver for some of these long-overdue structured and standardized activities.
Lee: It could become a leader. There is no question about it. Moreover, there is a lot healthcare can learn from other industries -- from mistakes they have made, from lessons they have learned, from best practices they have developed (on both the content and process side). And there are issues, around security in particular, where healthcare will be at the leading edge in trying to figure out how much is enough, how much is too much, and what kinds of solutions work.
There's a great future ahead here. It's not going to be without bumps in the road, but organizations like The Open Group are designed and experienced to help multiple stakeholders come together and have the conversations that they need to have in order to push forward and solve some of these problems.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.
You may also be interested in:
The Open Group
The Open Group Conference
Posted By Dana L Gardner,
Monday, July 14, 2014
When Swedish communications services provider TDC needed network infrastructure improvements across its disparate networks in several Nordic countries, it required both simplicity in execution and agility in performance.
Our next innovation case study interview therefore highlights how TDC in Stockholm found ways to better determine the root causes of network disruptions, and to conduct deep inspection of traffic to best manage its service-level agreements (SLAs).
BriefingsDirect had an opportunity to learn first-hand how more than 50,000 devices can be monitored and managed across a state-of-the-art network when we interviewed Lars Niklasson, a Senior Consultant at TDC. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: You have a number of main businesses in your organization. There’s TDC Solutions and mobile. There’s even television and some other hosting. Explain for us how large your organization is.
Niklasson: TDC is an operator in the Nordic region, with a network covering Norway, Sweden, Finland, and Denmark. In Sweden, we're also an integrator and have quite a big consulting role. There, we're around 800 people, and the whole TDC group is almost 10,000 people.
Gardner: So it’s obviously a very significant network to support this business and deliver the telecommunication services. Maybe you could define your network for us.
Niklasson: It's quite big, over 50,000 devices, and everything is monitored of course. It’s a state-of-the-art network.
Gardner: When you have so many devices to track, so many types of layers of activity and levels of network operations, how do you approach keeping track of that and making sure that you’re not only performing well, but performing efficiently?
Niklasson: Many years ago, we implemented HP Network Node Manager (NNM) and we have several network operating centers in all countries using NNM. When HP released different smart plug-ins, we started to implement those too for the different areas that they support, such as quality assurance, traffic, and so on.
Gardner: So you’ve been using HP for your network management and HP Network Management Center for some time, and it has of course evolved over the years. What are some of the chief attributes that you like or requirements that you have for network operations, and why has the HP product been so strong for you?
Quick and easy
Niklasson: One thing is that it has to be quick and easy to manage. We have lots of changes all the time, especially in Sweden, when a customer comes. And in Sweden, we’re monitoring end customers’ networks.
It's also very important to be able to integrate it with the other systems that we have. So we can, for example, tell which service-level agreement (SLA) a particular device has and things like that. NNM makes this quite efficient.
Gardner: One of the things that I’ve heard people struggle with is the amount of data that’s generated from networks that then they need to be able to sift through and discover anomalies. Is there something about visualization or other ways of digesting so much data that appeals to you?
Niklasson: NNM is quite good at finding the root cause. You don’t get very many incidents when something happens. If I look back at other products and older versions, there were lots and lots of incidents and alarms. Now, I find it quite easy to manage and configure NNM so it's monitoring the correct things and listening to the correct traps and so on.
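The root-cause suppression Niklasson describes can be illustrated with a toy model (this is a sketch of the general technique only, not how HP NNM is actually implemented): given the network topology and the set of unreachable devices, raise an incident only for devices whose upstream path is still healthy, and suppress everything downstream of a failure as a symptom.

```python
# Toy root-cause analysis: suppress alarms for devices that are only
# unreachable because a device upstream of them is down.
# (Illustrative example topology -- not NNM's actual algorithm.)

# upstream[x] = the device through which we reach x (None for the core).
upstream = {
    "core": None,
    "router1": "core",
    "switch1": "router1",
    "server1": "switch1",
    "server2": "switch1",
}

def root_causes(down: set) -> set:
    """Return only the devices that are down while their upstream is up."""
    return {
        dev for dev in down
        if upstream[dev] is None or upstream[dev] not in down
    }

# router1 fails, so everything behind it also stops responding:
down = {"router1", "switch1", "server1", "server2"}
print(root_causes(down))  # {'router1'} -- one incident raised, not four
```

This is why, as Niklasson notes, a well-configured root-cause engine produces one actionable incident where older tools produced a flood of alarms.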
Gardner: TDC uses network management capabilities and also sells them, and provides them with its telecom services. How have you experienced their use in the field? Do any of your customers manage their own networks, and how has this been for your consumers of network services?
Niklasson: We’re also an HP partner in selling NNM to end customers. Part of my work is helping customers implement this in their own environment. Sometimes a customer doesn’t want to do that. They buy the service from us, and we monitor the network. It’s for different reasons. One could be security, and they don’t allow us to access the network remotely. They prefer to have it in-house, and I help them with these projects.
Gardner: Lars, looking to the future, are there any particular types of technology improvements that you would like to see or have you heard about some of the roadmaps that HP has for the whole Network Management Center Suite? What interests you in terms of what's next?
Niklasson: I would say two things. One is application visibility in the network. We have some of that with the traffic plug-in, but it's still NetFlow-based, so I'm interested in seeing deeper inspection of the traffic. The other is more visibility into the virtual environments that we have.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
You may also be interested in:
Network node management
Posted By Dana L Gardner,
Wednesday, July 09, 2014
As smartphones have become de rigueur in the global digital economy, users want them to do more work, and businesses want them to be more productive for their employees -- as well as powerful added channels to consumers.
But neither businesses nor mobile-service providers have a cross-domain architecture that supports all the new requirements for a secure digital economy, one that allows safe commerce, data sharing and user privacy.
So how do we blaze a better path to a secure mobile future? How do we make today’s ubiquitous mobile devices as low risk as they are indispensable?
BriefingsDirect recently posed these and other questions to a panel of experts on mobile security: Paul Madsen, Principal Technical Architect in the Office of the CTO at Ping Identity; Michael Barrett, President of the FIDO (Fast Identity Online) Alliance; and Mark Diodati, a Technical Director in the Office of the CTO at Ping Identity. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: We're approaching the Cloud Identity Summit 2014 (CIS) in Monterey, Calif., on July 19, and we still find that the digital economy is not reaching its full potential. We're still dealing with ongoing challenges to trust, security, and governance across mobile devices and networks.
Even though people have been using mobile devices for decades -- and in some markets around the world they're the primary tool for accessing the Internet -- why are we still having problems? Why is this so difficult to solve?
Diodati: There are so many puzzle pieces to make the digital economy fully efficient. A couple of challenges come to mind. One is the distribution of identity. In prior years, the enterprise did a decent job -- not an amazing job, but a decent job -- of identifying users, authenticating them, and figuring out what they have access to.
Once you move out into a broader digital economy, you start talking about off-premises architectures and the expansion of user constituencies. There is a close relationship with your partners, employees, and your contractors. But relationships can be more distant, like with your customers.
Additionally, there are issues with emerging security threats. In many cases, there are fraudsters with malware being very successful at taking people’s identities and stealing money from them.
Mobility can do a couple of things for us. In the old days, if you wanted more identity assurance to access important applications, you paid more in cost and usability problems. Specialized hardware was used to raise assurance. Now, the smartphone is really a portable biometric device that users carry without us asking them to. We can raise assurance levels without a draconian increase in cost and usability problems.
We’re not out of the woods yet. One of the challenges is nailing down the basic administrative processes to bind user identities to mobile devices. That challenge is part cultural and part technology. [See more on a new vision for identity.]
Gardner: So it seems that we have a larger set of variables -- the end users we authenticate are no longer captive on a network. As you mentioned, the mobile device, the smartphone, can be biometric and can be an even better authenticator than we've had in the past. We might actually be in a better position in a couple of years. Is there a transition now afoot such that we might come out better on the other end?
Madsen: The opportunities are clear. As Mark indicated, the phone -- not just because of its technical features, but because of the relatively tight binding that users feel for it -- makes a really strong authentication factor.
It's the old trope of something you have, something you know, and something you are. The phone is something you already have, from the user's point of view. It's not an additional hardware token or USB token that we're asking employees to carry. It's something they want to carry, particularly if it's a BYOD phone.
So phones, because they're connected mobile computers, make a really strong second-factor authentication, and we're seeing that more and more. As I said, it’s one that users are happy using because of the relationship they already have with their phones, for all the other reasons. [See more on identity standards and APIs.]
Gardner: It certainly seems to make sense that you would authenticate into your work environment through your phone. You might authenticate in the airport to check in with your phone and you might use it for other sorts of commerce. It seems that we have the idea, but we need to get there somehow.
What’s architecturally missing for us to make this transition of the phone as the primary way in which people are identified session by session, place by place? Michael, any thoughts about that?
Barrett: There are a couple of things. One, in today's world, we don't yet have open standards that help drive cross-platform authentication, and we don't have the right architecture for it. In today's world, if you're using a phone with a virtual keyboard, you're forced to type a dreadful, tiny password on that keyboard -- and, by the way, you can't actually read what you just typed. That's a pretty miserable user experience, which we alluded to earlier.
But also, it’s a very ugly. It’s a mainframe-centric architecture. The notion that the authentication credentials are shared secrets that you know and that are stored on some central server is a very, very 1960s approach to the world. My own belief is that, in fact, we have to move towards a much more device-centric authentication model, where the remote server actually doesn’t know your authentication credentials. Again, that comes back to both architecture and standards.
My own view is that if we put those in place, the world will change. Many of us remember the happy days of the late '80s and early '90s when offices were getting wired up, and we had client-server applications everywhere. Then, HTML and HTTP came along, and the world changed. We're looking at the same kind of change, driven by the right set of appropriately designed open standards.
Gardner: So standards, behavior, and technology make for an interesting adoption path, sometimes a chicken and the egg relationship. Tell me about FIDO and perhaps any thoughts about how we make this transition and adoption happen sooner rather than later?
Barrett: I gave a little hint. FIDO is an open-standards organization really aiming to develop a set of technical standards to enable device-centric authentication that is easier for end users to use. As an ex-CTO, I can tell you the experience when you try to give them stronger authenticators that are harder for them to use. They won’t voluntarily use them.
We have to do better than we're doing today in terms of the ease of use of authentication. We also have to come up with authentication that is stronger for the relying parties, because that's the other face of this particular coin. In today's world, passwords and PINs work very badly for end users. They actually work brilliantly for criminals.
So I'm kind of old school on this. I tend to think that security controls should be there to make life better for relying parties and users and not for criminals. Unfortunately, in today’s world, they're kind of inverted.
So FIDO is simply an open-standards organization that is building and defining those classes of standards and, through our member companies, is promulgating deployment of those standards.
Madsen: I think FIDO is important. Beyond the fact that it's a standard is the pattern it's normalizing. The pattern is one where the user logically authenticates to their phone, whether with a fingerprint or a PIN, but that authentication is local. Then, leveraging the phone's capabilities -- storage, crypto, connectivity, etc. -- the phone authenticates to the server. It's that pattern of a local authentication followed by a server authentication that I think we're going to see over and over.
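The pattern Madsen describes -- unlock a device-held private key locally, then answer a server challenge with it -- can be sketched as a simple challenge-response flow. This is an illustrative simplification using Python's `cryptography` package, not the actual FIDO protocol, which also covers registration, attestation, and signature counters:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- enrollment (once) ---------------------------------------------------
# The key pair lives on the phone; the server only ever sees the public
# key, so there is no shared secret to steal from the server.
device_key = Ed25519PrivateKey.generate()
server_registered_pubkey = device_key.public_key()

# --- authentication (each login) -----------------------------------------
# Step 1: local authentication (fingerprint or PIN) unlocks the device key.
# Modeled here as a boolean; on a real phone this gates access to the key.
local_auth_ok = True

# Step 2: the server sends a fresh random challenge; the phone signs it.
challenge = os.urandom(32)
if local_auth_ok:
    signature = device_key.sign(challenge)

# The server verifies the signature with the registered public key.
try:
    server_registered_pubkey.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Note how this mirrors Barrett's earlier point: the server stores no password or shared secret, only a public key that is useless to an attacker on its own.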
Gardner: Thank you, Paul. It seems to me that most people are onboard with this. I know that, as a user, I'm happy to have the device authenticate. I think developers would love to have this authentication move to a context on a network or with other variables brought to bear. They can create whole new richer services when they have a context for participation. It seems to me the enterprises are onboard too. So there's a lot of potential momentum around this. What does it take now to move the needle forward? What should we expect to hear at CIS?
Diodati: There are two dimensions to moving the needle forward: avoiding the failures of prior mobile authentication systems, and ensuring that modern authentication systems support critical applications. Both are crucial to the success of any authentication system, including FIDO.
At CIS, we have an in-depth, three-hour FIDO workshop and many mobile authentication sessions.
There are a couple of things that I like about FIDO. First, it can use the biometric capabilities of the device. Many smart phones have an accelerometer, a camera, and a microphone. We can get a really good initial authentication. Also, FIDO leverages public-key technology, which overcomes some of the concerns we have around other kinds of technologies, particularly one-time passwords.
Madsen: To that last point, Mark, I think FIDO and SAML, or more recent federation protocols, complement each other wonderfully. FIDO is a great authentication technology, and federation historically has not addressed that. Federation didn't claim to answer that issue, but if you put the two together, you get a very strong initial authentication. Then, you're able to broadcast it out to the applications that you want to access. That's a strong combination.
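The combination Madsen describes boils down to the identity provider issuing a short-lived signed assertion after one strong authentication, which each application then verifies instead of re-authenticating the user. A stripped-down sketch of that idea follows; it is a stand-in for SAML or OpenID Connect, with an invented signing key and illustrative field names:

```python
import base64, hashlib, hmac, json, time

# Shared signing key stands in for the IdP's certificate in this sketch;
# real SAML assertions are verified with the IdP's public key.
IDP_KEY = b"idp-signing-key"

def issue_assertion(user: str, ttl: int = 300) -> str:
    """Identity provider: attest that `user` just authenticated strongly."""
    claim = json.dumps({"sub": user, "exp": time.time() + ttl}).encode()
    sig = hmac.new(IDP_KEY, claim, hashlib.sha256).digest()
    return base64.b64encode(claim).decode() + "." + base64.b64encode(sig).decode()

def verify_assertion(token: str):
    """Relying application: accept the IdP's word instead of re-authenticating."""
    claim_b64, sig_b64 = token.split(".")
    claim = base64.b64decode(claim_b64)
    expected = hmac.new(IDP_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(sig_b64), expected):
        return None  # forged or altered assertion
    data = json.loads(claim)
    if data["exp"] < time.time():
        return None  # stale assertion
    return data["sub"]

# One strong FIDO-style login at the IdP...
token = issue_assertion("alice")
# ...then any number of applications accept the same assertion:
print(verify_assertion(token))  # alice
```

The strong initial authentication happens once; federation then carries that fact to every application, which is exactly the pairing Madsen calls "a strong combination."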
Barrett: One of the things that we haven't really mentioned here -- and Paul just hinted at it -- is the relationship between single sign-on and authentication. When you talk to many organizations, they look at those as two sides of the same coin. So the more application ubiquity you can get -- the more applications you can sign the user on to with less interaction -- the better.
Gardner: Before we go a little bit deeper into what’s coming up, let’s take another pause and look back. There have been some attempts to solve these problems. Many, I suppose, have been from a perspective of a particular vendor or a type of device or platform or, in an enterprise sense, using what they already know or have.
We've had containerization and virtualization on the mobile tier. It is, in a sense, going back to the past, where you go right to the server and very little is done on the device other than the connection. App wrapping would fall under that as well, I suppose. What have been the pros and cons, and why isn't containerization enough to solve this problem? Let's start with Michael.
Barrett: If you look back historically, what we've tended to see is a lot of attempts that are truly proprietary in nature. Again, my own philosophy on this is that proprietary technology is really great for many things, but there are certain domains that simply need a strong standards-based backplane.
There really hasn't been an attempt at this for some years. Pretty much, we have to go back to X.509 to see the last major standards-based push at solving authentication. But X.509 came with a whole bunch of baggage, as well as architectural assumptions around a very disconnected world view that is kind of antithetical to where we are today, where we have a very largely connected world view.
I tend to think of it through that particular set of lenses, which is that the standards attempts in this area are old, and many of the approaches that have been tried over the last decade have been proprietary.
For example, on my old team at PayPal, I had a small group of folks who surveyed security vendors. I remember asking them to tell me how many authentication vendors there were, and to plot that for me by year.
Growing number of vendors
They sighed heavily, because their database wasn’t organized that way, but then came back a couple of weeks later. Essentially they said that in 2007, it was 30-odd vendors, and it has been going up by about a dozen a year, plus or minus some, ever since, and we're now comfortably at more than 100.
Any market that has 100 vendors, none of whose products interoperate with each other, is a failing market, because none of those vendors, bar only a couple, can claim very large market share. This is just a market where we haven't seen the right kind of approaches deployed and, as a result, we're stuck where we are today without doing something different.
Gardner: Paul, any thoughts on containerization, pros and cons?
Madsen: I think of phones as having two almost completely orthogonal aspects. First is how you can leverage the phone to authenticate the user. Whether it's FIDO or something proprietary, there's value in that.
Secondly is the phone as an application platform, a means to access potentially sensitive applications. What mobile applications introduce that’s somewhat novel is the idea of pulling down that sensitive business data to the device, where it can be more easily lost or stolen, given the mobility and the size of those devices.
The challenge for the enterprise is, if you want to enable your employees with devices, or enable them to bring their own in, how do you protect that data? It's increasingly recognized just how hard that challenge is.
The challenge is not only protecting the data, but keeping the usage of the phone separate. IT, arguably and justifiably, wants to protect the business data on it, but the employee, particularly in a BYOD case, wants to keep their use of the phone isolated and private.
So containerization or dual-persona systems attempt to slice and dice the phone into two or more pieces. What is missing from those models -- and it's changing -- is a recognition that, by definition, this is an identity problem. You have two identities -- the business user and the personal user -- who want to use the same device, and you want to compartmentalize those two identities, for both security and privacy reasons.
Identity standards and technologies could play a real role in keeping those pieces separate. The employee might use Box for the business usage, but might also use it for personal usage. That’s an identity problem, and identity will keep those two applications and their usages separate.
Diodati: To build on that a little bit, if you take a look at the history of containerization, there were some technical problems and some usability problems. There was a lack of usability that drove an acceptance problem within a lot of enterprises. That’s changing over time.
To talk about what Michael was talking about in terms of the failure of other standardized approaches to authentication, you could look back at OATH, which is maybe the last big industry push, 2004-2005, to try to come up with a standard approach, and it failed on interoperability. OATH was a one-time password, multi-vendor capability. But in the end, you really couldn’t mix and match devices. Interoperability is going to be a big, big criterion for acceptance of FIDO. [See more on identity standards and APIs.]
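For context, OATH's core deliverable, the HOTP one-time-password algorithm (RFC 4226), was itself fully specified; the interoperability failures described here were largely about device provisioning and back-end integration rather than the math. As a rough illustration of what the standard did pin down, here is a minimal HOTP sketch in Python, using the RFC's own published test secret:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the moving counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test secret "12345678901234567890", counters 0 and 1
print(hotp(b"12345678901234567890", 0))  # -> 755224
print(hotp(b"12345678901234567890", 1))  # -> 287082
```

Because every vendor computed the same codes, the algorithm was interoperable; it was everything around it, token enrollment and seed exchange, that wasn't.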
Mobile device management
Gardner: Another thing out there in the market now, and it has gotten quite a bit of attention from enterprises as they are trying to work through this, is mobile device management (MDM). Do you have any thoughts, Mark, on why that has not necessarily worked out or won’t work out? What are the pros and cons of MDM?
Diodati: Most organizations of a certain size are going to need an enterprise mobility management solution. There is a whole lot that happens behind the scenes in terms of binding the user's identity, perhaps putting a certificate on the phone.
Michael talked about X.509. That appears to be the lowest common denominator for authentication from a mobile device today, but that can change over time. We need ways to be able to authenticate users, perhaps issue them certificates on the phone, so that we can do things like IPSec.
Also, we may be required to give some users access to offline secured data. That’s a combination of apps and enterprise mobility management (EMM) technology. In a lot of cases, there's an EMM gateway that can really help with giving offline secure access to things that might be stored on network file shares or in SharePoint, for example.
If there's been a stumbling block with EMM, it's just been the heterogeneity of the devices, which makes it a challenge to implement a common set of policies.
But the technology of EMM also had to mature. We started with BlackBerry Enterprise Server, which did a pretty good job in a homogeneous world, but maybe didn't address everybody’s needs. The AirWatches and the MobileIrons of the world have had to deal with heterogeneity and increased functionality.
Madsen: The fundamental issue with MDM is, as the name suggests, that you're trying to manage the device, as opposed to applications or data on the device. That worked okay when the enterprise was providing employees with their BlackBerry, but it's hard to reconcile in the BYOD world, where users are bringing in their own iPhones or Androids. In their mind, they have a completely justified right to use that phone for personal applications and usage.
So some of the mechanisms of MDM remain relevant, being able to wipe data off the phone, for example, but the device is no longer the appropriate granularity. It's some portion of the device that the enterprise is authoritative over.
Gardner: It seems to me, though, that we keep coming back to several key concepts: authentication and identity, and then, of course, a standardization approach that ameliorates those interoperability and heterogeneity issues. [See more on a new vision for identity.]
So let’s look at identity and authentication. Some people make them interchangeable. How should we best understand them as being distinct? What’s the relationship between them and why are they so essential for us to move to a new architecture for solving these issues? Let’s start with you, Michael.
Identity is center
Barrett: I was thinking about this earlier. I remember having some arguments with Phil Becker back in the early 2000s when I was running the Liberty Alliance, which was the standards organization that came up with SAML 2.0. Phil coined that phrase, "Identity is center," and he used to argue that essentially everything fell under identity.
What I thought back then, and still largely do, is that identity is a broad and complex domain. In a sense, as we've let it grow today, they're not the same thing. Authentication is definitely a sub-domain of security, along with a whole number of others. We talked about containerization earlier, which is a kind of security-isolation technique in many regards. But I am not sure that identity and authentication are exactly in the same dimension.
In fact, the way I would describe it is that if we talk about something like the levels-of-assurance model, that's one we're all fairly familiar with in the identity sense. Today, if you look at that, it’s got authentication and identity-verification concepts bound together.
In fact, I suspect that in the coming year or two, we're probably going to have to decouple those and say that it’s not really a linear, one-dimensional thing, with level one, level two, level three, and level four. Rather it's a kind of two-dimensional matrix, where we have identity-verification concepts on one side and then authentication comes from the other. Today, we've collapsed them together, and I am not sure we have actually done anybody any favors by doing that.
Definitely, they're closely related. You can look at some of the difficulties that we've had with identity over the last decade and say that it’s because we actually ignored the authentication aspect. But I'm not sure they're the same thing intrinsically.
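The decoupling Barrett describes can be made concrete with a small sketch: a resource states requirements on two axes, identity verification and authentication strength, and a user must clear both bars independently. (NIST SP 800-63-3 later formalized this split as separate Identity Assurance and Authenticator Assurance Levels.) The level values and requirements below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assurance:
    ial: int  # identity-verification level: how well we proofed who this is
    aal: int  # authentication level: how strongly they proved it just now

def meets(user: Assurance, required: Assurance) -> bool:
    """Two-dimensional check: both axes must independently clear the bar."""
    return user.ial >= required.ial and user.aal >= required.aal

payroll = Assurance(ial=2, aal=2)  # hypothetical requirement for a sensitive app

print(meets(Assurance(ial=2, aal=1), payroll))  # well-proofed user, weak login -> False
print(meets(Assurance(ial=2, aal=2), payroll))  # -> True
```

A single collapsed "level 2" score could not express that first case, which is exactly the point of keeping the dimensions apart.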
Gardner: Interesting. I've heard people say that any high-level approach to mobile device security has to be about identity. How else could it possibly work? Authentication has to be part of that, but identity seems to be getting more traction as a way to solve these issues across all the other variables, to adjust accordingly over time, and even to automate by policy.
Mark, how do you see identity and authentication? How important is identity as a new vision for solving these problems?
Diodati: You would have to put security at the top, and identity would be a subset of things that happen within security. Identity includes authorization -- determining if the user is authorized to access the data. It also includes provisioning. How do we manipulate user identities within critical systems -- there is never one big identity in the sky. Identity includes authentication and a couple of other things.
To answer the second part of your question, Dana, in the role of identity and trying to solve these problems, we in the identity community have missed some opportunities in the past to talk about identity as the great enabler.
With mobile devices, we want to have the ability to enforce basic security controls, but it’s really about identity. Identity can enable so many great things to happen, not only just for enterprises, but within the digital economy at large. There's a lot of opportunity if we can orient identity as an enabler.
Authentication and identity
Madsen: I just think authentication is something we have to do to get to identity. If there were no bad people in the world and if people didn’t lie, we wouldn’t need authentication.
We would all have a single identifier, we would present ourselves, and nobody else would lay claim to that identifier. There would be no need for strong authentication. But we don’t live there. Identity is fundamental, and authentication is how we lay claim to a particular identity.
Diodati: You can build the world's best authorization policies. But they are completely worthless unless you've done the authentication right, because you have zero confidence that the users are who they say they are.
Gardner: So, I assume that multifactor authentication also is in the subset. It’s just a way of doing it better or more broadly, with more variables and devices that can be brought to bear. Is that correct?
Diodati: The definition of multifactor has evolved over time, too. In the past, we talked about “strong authentication.” What we meant was “two-factor authentication,” and that is really changing, particularly when you look at some of the emerging technologies like FIDO.
If you look at the broader trends around adaptive authentication, the relationship to the user or the consumer is more distant. We have to apply a set of adaptive techniques to get better identity assurance about the user.
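Adaptive techniques of the kind Diodati mentions typically combine contextual signals into a risk score that decides whether to allow the request, step up authentication, or deny it outright. A toy sketch follows; the signal names, weights, and thresholds are all invented for illustration, and real systems use far richer models:

```python
# Invented signal weights: each risky signal present adds to the score.
WEIGHTS = {"new_device": 40, "unusual_geo": 30, "tor_exit_node": 50, "off_hours": 10}

def risk_score(signals: dict) -> int:
    """Sum the weights of whichever risky signals are present."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def decide(signals: dict) -> str:
    """Map the score to an outcome: allow, step up (e.g. prompt the phone), or deny."""
    score = risk_score(signals)
    if score >= 70:
        return "deny"
    if score >= 30:
        return "step-up"
    return "allow"

print(decide({}))                                           # -> allow
print(decide({"new_device": True}))                         # -> step-up
print(decide({"new_device": True, "tor_exit_node": True}))  # -> deny
```

The point is that identity assurance becomes a continuous judgment rather than a single yes/no at login.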
Gardner: I'm just going to make a broad assumption here that the authentication part of this does get solved, that multifactor authentication, adaptive, using devices that people are familiar with, that they are comfortable doing, even continuing to use many of the passwords, single sign-on, all that gets somehow rationalized.
Then, we're elevated to this notion of identity. How do we then manage that identity across these domains? Is there a central repository? Is there a federation? How would a standard come to bear on that major problem of the federation issue, control, and management and updating and so forth? Let’s go back to Michael on that.
Barrett: I tend to start from a couple of different perspectives on this. One is that we do have to fix the authentication standards problem, and that's essentially what FIDO is trying to do.
So, if you accept that FIDO solves authentication, what you are left with is an evolution of a set of standards that, over the last dozen years or so, starting with SAML 2.0, but then going on up through the more recent things like OpenID Connect and OAuth 2.0, and so on, gives you a robust backplane for building whatever business arrangement is appropriate, given the problem you are trying to solve.
I chose the word "business" quite consciously in there, because it’s fair to say that there are certain classes of models that have stalled out commercially for a whole bunch of reasons, particularly around the dreaded L-word, i.e., liability.
We tried to build things that were too complicated. We would just describe this grand, long-term vision of what the universe looked like. Andrew Nash is very fond of saying that we can describe this rich ecosystem of identity-enabled services and so on, but you can’t get there from here, which is the punch line of a rather old joke.
Gardner: Mark, we understand that identity is taking on a whole new level of importance. Are there examples we can look to that illustrate how an identity-centric approach helps with security, governance, and manageability for mobile-tier activities, and even helps developers bring new application programming interfaces (APIs) into play, with context for commerce and location? These are things we haven’t even really scratched the surface of yet.
Help me understand, through an example rather than telling, how identity fits into this and what we might expect identity to do if all these things can be managed, standards, and so forth.
Diodati: Identity is pretty broad when you take a look at the different disciplines that might be at play. Let’s see if we can pick out a few.
We have spoken about authentication a lot. Emerging standards like FIDO are important, so that we can support applications that require higher assurance levels with less cost and usability problems.
A difficult trend to ignore is the API-first development modality. We're talking about things like OAuth and OpenID Connect. Both of those are very important, critical standards when we start talking about the use of API-based and even non-API, HTTP-based interactions.
OpenID Connect, in particular, gives users some ability to decide where they want to authenticate and then get access to the data they need. The challenge is that the mobile app is interacting on behalf of a user. How do you actually apply things like adaptive techniques to an API session to raise identity-assurance levels? Given that OpenID Connect was just ratified earlier this year, we're still in the early stages of how that’s going to play out.
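At the wire level, the OpenID Connect interaction Diodati describes begins as an ordinary OAuth 2.0 authorization request that adds the `openid` scope and a nonce. A sketch of constructing one follows; the endpoint, client ID, and redirect URI are invented placeholder values, not a real deployment:

```python
from urllib.parse import urlencode

# All endpoint and client values below are invented for illustration.
AUTHZ_ENDPOINT = "https://idp.example.com/authorize"

params = {
    "response_type": "code",              # authorization-code flow
    "client_id": "my-mobile-app",
    "redirect_uri": "com.example.app:/callback",
    "scope": "openid profile",            # the "openid" scope is what makes this OIDC
    "state": "af0ifjsldkj",               # anti-CSRF value echoed back by the IdP
    "nonce": "n-0S6_WzA2Mj",              # binds the issued ID token to this request
}

login_url = f"{AUTHZ_ENDPOINT}?{urlencode(params)}"
print(login_url)
```

The user authenticates at the identity provider of their choosing, and the app exchanges the returned code for an ID token asserting who they are.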
Gardner: Michael, any thoughts on examples, use cases, a vision for how this should work in the not too distant future?
Barrett: I'm a great believer in open standards, as I think I have shown throughout the course of this discussion. I think that OpenID Connect, in particular, and the fact that we now have that standard ratified, [is useful]. I do believe that the standards, to a very large extent, allow the creation of deployments that will address those use-cases that have been really quite difficult [without these standards in place].
Ahead of demand
The problem that you want to avoid, of course, is that you don’t want a standard to show up too far ahead of the demand. Otherwise, what you wind up with is just some interesting specification that never gets implemented, and nobody ever bothers deploying any of the implementations of it.
So, I believe in just-in-time standards development. As an industry, identity has matured a lot over the last dozen years. When SAML 2.0 came along, with Shibboleth, it was a very federation-centric world, addressing a very small class of use cases. Now, we have a more robust set of standards. What’s going to be really interesting to see is how those new standards get used to address use cases that the previous standards really couldn’t.
I'm a bit of a believer in sort of Darwinian evolution on this stuff and that, in fact, it’s hard to predict the future now. Niels Bohr famously said, "Prediction is hard, especially when it involves the future.” There is a great deal of truth to that.
Gardner: Hopefully we will get some clear insights at the Cloud Identity Summit this month, July 19, and there will be more information to be had there.
I also wonder whether we're almost past the point now when we talk about mobile security, cloud security, data-center security. Are we going to get past that, or is this going to become more of a fabric of security that the standards help to define and then the implementations make concrete? Before we sign off, Mark, any last thoughts about moving beyond segments of security into a more pervasive concept of security?
Diodati: We're already starting to see that, where people are moving toward software as a service (SaaS) and away from on-premises applications. Why? A couple of reasons. The revenue and expense model lines up really well with what they're doing; they pay as they grow. There's no big bang of initial investment. Also, SaaS is turnkey, which means that much of the security lifting is done by the vendor.
That's also certainly true with infrastructure as a service (IaaS), if you look at things like Amazon Web Services (AWS). While IaaS is more complicated than SaaS, it is likewise a way to converge security functions within the cloud.
Cloud Identity Summit
Posted By Dana L Gardner, Monday, July 07, 2014 | Comments (0)
A stubborn speed bump continues to hobble the digital economy. We're referring to the outdated use of passwords and limited identity-management solutions that hamper getting all of our devices, cloud services, enterprise applications, and needed data to work together in anything approaching harmony.
The past three years have seen a huge uptick in the number and types of mobile devices, online services, and media. Yet, we're seemingly stuck with 20-year-old authentication and identity-management mechanisms -- mostly based on passwords.
The resulting chasm between what we have and what we need for access control and governance spells ongoing security lapses, privacy worries, and a detrimental lack of interoperability among cross-domain cloud services. So, while a new generation of standards and technologies has emerged, a new vision is also required to move beyond the precarious passel of passwords that each of us seems to use all the time.
The fast approaching Cloud Identity Summit 2014 this July gives us a chance to recheck some identity-management premises -- and perhaps step beyond the conventional to a more functional mobile future. To help us define these new best ways to manage identities and access control in the cloud and mobile era, please join me in welcoming our guest, Andre Durand, CEO of Ping Identity. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: The Cloud Identity Summit is coming up, and at the same time, we're finding that this digital economy is not really reaching its potential. There seems to be an ongoing challenge as we get more devices, more varieties of services, and a greater need for cross-domain interaction. It’s almost as if we're stymied. So why is this problem so intractable? Why are we still dealing with passwords and outdated authentication?
Durand: Believe it or not, you have to go back 30 years to when the problem originated, when the Internet was actually born. Vint Cerf, one of the founders and creators of the Internet, was interviewed by a reporter two or three years back. He was asked if he could go back 30 years, when he was creating the Internet, what would he do differently? And he thought about it for a minute and said, "I would have tackled the identity problem."
He continued, "We never expected the Internet to become the Internet. We were simply trying to route packets between two trusted computers through a standardized networking protocol. We knew that the second we started networking computers, you needed to know who the user was that was making the request, but we also knew that it was a complicated problem." So, in essence, they punted.
Roll forward 30 years, and the bulk of the security industry and the challenges we now face in identity management at scale, Internet or cloud scale, all result from not having tackled identity 30 years ago. Every application, every device, every network that touches the Internet has to ask you who you are. The easiest way to do that is via user name and password, because there was no concept of who the user was on the network at a more fundamental universal layer.
So all this password proliferation comes as a result of the fact that identity is not infrastructure today in the Internet, and it's a hard problem to retrofit the Internet for a more universal notion of who you are, after 30 years of proliferating these identity silos.
Internet of things
Gardner: It certainly seems like it’s time, because we're not only dealing with people and devices. We're now going into the Internet of Things, including sensors. We have multiple networks and more and more application programming interfaces (APIs) and software-as-a-service (SaaS) applications and services coming online. It seems like we have to move pretty quickly. [See more on identity standards and APIs.]
Durand: We do. The shift that began to exacerbate, or at least highlight, the underlying problem of identity started with cloud and SaaS adoption, somewhere around the 2007-2008 time frame. With that, some of the applications moved outside of the data center. Then, starting around 2010 or 2011, when we started to really get into the smartphone era, the user followed the smartphone off the corporate network and the corporate-issued computer and onto AT&T’s network.
So you have the application outside of the data center. You have the user off the network. The entire notion of how to protect users and data broke. It used to be that you put your user on your network with a company-issued computer accessing software in the data center. It was all behind the firewall.
Those two shifts changed where the assets were, the applications, data, and the user. The paradigm of security and how to manage the user and what they have access to also had to shift and it just brought to light the larger problem in identity.
Gardner: And the stakes here are fairly high. We're looking at a tremendously inefficient healthcare system here in the United States, for example. One of the ways that could be ameliorated and productivity could be increased is for more interactions across boundaries, more standards applied to how very sensitive data can be shared. If we can solve this problem, it seems to me there is really a flood of improvement in productivity to come behind it.
Durand: It's enormous and fundamental. Someone shared with me several years ago a simple concept that captures the essence of how much friction we have in the system today in and around identity and users in their browsers going places. The comment was simply this: In your browser you're no longer limited to one domain. You're moving between different applications, different websites, different companies, and different partners with every single click.
What we need is the ability for your identity to follow your browser session, as you're moving between all these security domains, and not have to re-authenticate yourself every single time you click and are off to a new part of the Internet.
We need that whether that means employees sitting at their desktop on a corporate network, opening their browser and going to Salesforce.com, Office 365, Gmail, or Box, or whether it means a partner going into another partner’s application, say to manage inventory as part of their supply chain.
We have to have an ability for the identity to follow the user, and fundamentally that represents this next-gen notion of identity.
Gardner: I want to go back to that next-gen identity definition in a moment, but I notice you didn't mention authenticate through biometrics to a phone or to a PC. You're talking, I think, at a higher abstraction, aren’t you? At software or even the services level for this identity. Or did I read it wrong?
Durand: No, you read it absolutely correctly. I was definitely speaking at 100,000 feet there. Part of the solution that I see playing out is stronger authentication to fewer places, say stronger authentication to your corporate network or to your corporate identity. Then, it's a seamless ability to access all the corporate resources, whether they're proprietary business applications in the data center or applications in the public or even the private cloud.
So, stronger user authentication is likely through the mobile phone, since the phones have become such a phenomenal platform for authentication. Then, once you authenticate to that phone, there will be a seamless ability to access everything, irrespective of where it resides.
Gardner: Then, when you elevate to that degree, it allows for more policy-driven and intelligence-driven automated and standardized approaches that more and more participants and processes can then adopt and implement. Is that correct?
Durand: That’s exactly correct. We had a notion of who was accessing what, the policy, governance, and the audit trail inside of the enterprise, and that was through the '80s, '90s, and the early 2000s. There was a lot of identity management infrastructure that was built to do exactly that within the enterprise.
Gardner: With directories.
Durand: Right, directories and all the identity management, Web access management, identity-management provisioning software, and all the governance software that came after that. I refer to all of those systems as Identity and Access Management 1.0.
It was all designed to manage this, as long as all the applications, user, and data were behind the firewall on the company network. Then, the data and the users moved, and now even the business applications are moving outside the data center to the public and private cloud.
We now live in this much more federated scenario, and there is a new generation of identity management that we have to install to enable the security, auditability, and governance of that new highly distributed or federated scenario.
Gardner: Andre, let’s go back to that "next-generation level" of identity management. What did you mean by that?
Durand: There are a few tenets that fall into the next-generation category. For me, businesses are no longer a silo. Businesses today are fundamentally federated. They're integrating with their supply chain. They're engaging with social identities hitting their consumer and customer portals. They're integrating with their clients and allowing their clients to gain easier access to their systems. Their employees are going out to the cloud.
All of these are scenarios where the IT infrastructure in the business itself is fundamentally integrated with its customers, partners, and clients. So that would be the first tenet. They're no longer a silo.
The second thing is that in order to achieve the scale of security around identity management in this new world, we can no longer install proprietary identity and access management software. Every interface for how security and identity is managed in this federated world needs to be standardized.
So we need open identity standards such as SAML, OAuth, and OpenID Connect in order to scale these use cases between companies. It’s not dissimilar to the era of email before we had Internet email and the SMTP standard.
Companies had email, but it was enterprise email. It wouldn’t communicate with other companies' proprietary email. Then, we standardized email through SMTP and instantly we had Internet-scaled email.
I predict that the same thing is occurring, and will occur, with identity. We'll standardize all of these cases to open identity standards and that will allow us to scale the identity use cases into this federated world.
The third tenet is that, for many years, we really focused on the browser and web infrastructure. But now, you have users on mobile devices and applications accessing APIs. As many transactions, if not more, now occur through the API and mobile channel as through the web.
So whatever infrastructure we develop needs to normalize the API and mobile access the same way that it does the web access. You don’t want two infrastructures for those two different channels of communication. Those are some of the big tenets of this new world that define an architecture for next-gen identity that’s very different from everything that came before it.
Gardner: To your last tenet, how do we start to combine without gaps and without security issues the ability to exercise a federated authentication and identity management capability for the web activities, as well as for those specific APIs and specific mobile apps and platforms?
Durand: I’ll give you a Ping product-specific example, because it’s for exactly this reason that we chose the path we did for this new product. We have a product called PingAccess, which is a next-gen access-control product that provides web access management for web browsers and users of web applications. It also provides API access management when companies want to expose their APIs to developers for mobile applications and to other web services.
Prior to PingAccess allowing you to enable policy for both the API channel and the web channel in a single product, those two realms typically were served by independent products. You'd buy one product to protect your APIs, and you’d buy another product to do your web-access management.
Now with this next-gen product, PingAccess, you can do both with the same product. It’s based upon OAuth, an emerging standard for identity security for web services, and it’s based upon OpenID Connect, which is a new standard for single sign-on and authentication and authorization in the web tier. [See more on identity standards and APIs.]
We built the product to cross the chasm, between API and web, and also built it based upon open standards, so we could really scale the use cases.
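The "one policy layer for two channels" idea can be sketched as a tiny dispatcher that normalizes a browser session cookie and an API Bearer token into the same identity before a single shared policy check. This is an illustrative sketch only, not PingAccess's actual design; all names and data are invented:

```python
SESSIONS = {"sess-123": "alice"}   # web channel: session cookie -> user
TOKENS = {"tok-abc": "alice"}      # API channel: OAuth bearer token -> user
ALLOWED = {("alice", "/payroll")}  # one policy table shared by both channels

def identify(request: dict):
    """Normalize either channel to a user identity (or None if unauthenticated)."""
    auth = request.get("authorization", "")
    if auth.startswith("Bearer "):
        return TOKENS.get(auth[len("Bearer "):])
    return SESSIONS.get(request.get("cookie"))

def authorize(request: dict, path: str) -> bool:
    """Apply the same policy regardless of which channel the request arrived on."""
    user = identify(request)
    return user is not None and (user, path) in ALLOWED

print(authorize({"cookie": "sess-123"}, "/payroll"))               # web -> True
print(authorize({"authorization": "Bearer tok-abc"}, "/payroll"))  # API -> True
print(authorize({"authorization": "Bearer nope"}, "/payroll"))     # -> False
```

Once identity is normalized up front, the policy layer never has to know whether the caller was a browser or a mobile app hitting an API.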
Gardner: Whenever you bring out the words "new" and "standard," you'll get folks who might say, "Well, I'm going to stick with the tried and true." Is there any sense of the level of security, privacy control management, and governance control with these new approaches, as you describe them, that would rebut that instinct to stick with what you have?
Durand: As far as the instinct to stick with what you have, keep in mind that the alternative is proprietary, and there is nothing about proprietary that necessarily means you have better control or more privacy.
The standards are really defining secure mechanisms to pursue a use case between two different entities. You want a common interface, a common language to communicate. A tremendous amount of work goes into it by the entire industry to make sure that those standards are secure and privacy enabling.
I'd argue that it's more secure and privacy enabling than the one-off proprietary systems and/or the homegrown systems that many companies developed in the absence of these open standards.
Gardner: Of course, with standards, it's often a larger community, where people can have feedback and inputs to have those standards evolve. That can be a very powerful force when it comes to making sure that things remain stable and safe. Any thoughts about the community approach to this and where these standards are being managed?
Durand: A number of the standards are being managed now by the Internet Engineering Task Force (IETF), and as you know, they're well-regarded, well-known, and certainly well-recognized for their community involvement and having a cycle of improvement that deals with threats, as they emerge, as the community sees them, as a mechanism to improve the standards over time to close those security issues.
Gardner: Going back to the Cloud Identity Summit 2014, is this a coming-out party of sorts for this vision of yours? How do you view the timing right now? Are we at a tipping point, and how important is it to get the word out properly and effectively?
Durand: This is our fifth annual Cloud Identity Summit. We've been working toward this combination of where identity and the cloud and mobile ultimately intersect. All of the trends that I described earlier today -- cloud adoption, mobile adoption, moving the application and the user and the device off the network -- is driving more and more awareness towards a new approach to identity management that is disruptive and fundamentally different than the traditional way of managing identity.
On the cusp
We're right on the cusp, where the adoption across both cloud and mobile is irrefutable. Many companies now are going all-in, making cloud-first and mobile-first the posture for adoption across those two dimensions.
So it is at a tipping point. It's the last nail in the coffin for enterprises to get them to realize that they're now in a new landscape and need to reassess their strategies for identity, when the business applications, the ones that did not convert to SaaS, move to Amazon Web Services, Equinix, or to Rackspace and the private-cloud providers.
That, all of a sudden, would be the last shift where applications have left the data center and all of the old paradigms for managing identity will now need to be re-evaluated from the ground up. That’s just about to happen.
Gardner: Another part of this, of course, is the users themselves. If we can do away with passwords, that by itself might encourage a lot of organic adoption and calls for this sort of capability. Any sense of what we can do in terms of behavior at the user level, and what would incentivize users to knock on the door of their developers or IT organization and ask for the sort of capability and vision that we described?
Durand: Now you're highlighting my kick-off speech at PingCon, which is Ping’s Customer and Partner Conference the day after the Cloud Identity Summit. We acquired a company and a technology last year in mobile authentication to make your mobile phone the second factor, strong authentication for corporations, effectively replacing the one-time tokens that have been issued by traditional vendors for strong authentication.
It’s an application you load on your smartphone, and it lets you simply swipe across the screen to authenticate when requested. We'll be demonstrating the mobile phone as a second-factor authentication. What I mean there is that you would type in your username and password and then be asked to swipe the phone, just to verify your identity before getting into the company.
We'll also demonstrate how you can use the phone as a single-factor authentication. As an example, let’s say I want to go to some cloud service, Dropbox, Box, or Salesforce. Before that, I'm asked to authenticate to the company. I'd get a notification on my phone that simply says, "Swipe." I do the swipe, it already knows who I am, and it just takes me directly to the cloud. That user experience is phenomenal.
When you experience an ability to get to the cloud, authenticating to the corporation first, and simply swipe with your mobile phone, it just changes how we think about authentication and how we think about the utility of having a smartphone with us all the time.
Gardner: This aligns really well, and the timing is awesome for what both Google with Android and Apple with iOS are doing in terms of being able to move from screen to screen seamlessly. Is that something that’s built in this as well?
If I authenticate through my mobile phone, but then I end up working through a PC, a laptop, or any number of other interfaces, is this something that carries through, so that I'm authenticated throughout my activity?
Durand: That's the entire vision of identity federation. Authenticate once, strongly to the network, and have an ability to go everywhere you want -- data center, private cloud, public SaaS applications, native mobile applications -- and never have to re-authenticate.
Gardner: Sounds good to me, Andre. I'm all for it. Before we sign off, do we have an example? It's been an interesting vision, and we've talked about the what and the how, but is there a way to illustrate what you get, and how it works in practice, when this works well, perhaps in an enterprise, perhaps across boundaries?
Durand: There are three primary use cases in our business for next-generation identity, and we break them up into workforce, partner, and customer identity use cases. I'll give you quick examples of all three.
In the workforce use case, what we see most is a desire for enterprises to enable single sign-on to the corporation, to the corporate network, or the corporate active directory, and then single-click access to all the applications, whether they're in the cloud or in the data center. It presents employees in the workforce with a nice menu of all their application options. They authenticate once to see that menu and then, when they click, they can go anywhere without having to re-authenticate.
That's primarily the workforce use case. It gives IT the ability to control which applications employees access in the cloud and what they can do there, and to keep an audit trail of it, with full control over employees' use of cloud applications. The next-gen solutions that we provide accommodate that use case.
The second use case is what we call a customer portal or a customer experience use case. This is a scenario where customers are hitting a customer portal. Many of the major banks in the US, and even around the world, use Ping to secure their customer websites. When you log into your bank to do online banking, you're logging into the bank, but then, when you click on any number of the links, whether to order checks or for check fulfillment, you go out to Harland Clarke or to Wealth Management.
That goes to a separate application. That banking application is actually a collection of many applications, some run by partners, some run by different divisions of the bank. The seamless customer experience, where the user never sees another login or registration screen, is all secured through Ping infrastructure. That’s the second use case.
The third use case is what we call a traditional supply chain or partner use case. The world's largest retailer is our customer. They have some 100,000 suppliers that access inventory applications to manage inventory at all the warehouses and distribution centers.
Prior to having Ping technology, they would have to maintain the username and password of the employees of all those 100,000 suppliers. With our technology they allow single sign-on to that application, so they no longer have to manage who is an employee of all of those suppliers. They've off-loaded the identity management back to the partner by enabling single sign-on.
About 50 of the Fortune 100 are Ping customers. They include Best Buy, where you don’t have to log in to go to the Reward Zone. You're actually going through Ping.
If you're a Comcast customer and you log into comcast.net and click on any one of the content links or email, that customer experience is secured through Ping. If you log into Marriott, you're going through Ping. The list goes on and on.
In the future
Gardner: This all comes to a head as we're approaching the July Cloud Identity Summit 2014 in Monterey, Calif., which should provide an excellent forum for keeping the transition from passwords to a federated, network-based intelligent capability on track.
Before we sign off, any idea of where we'll be a year from now? Is this a stake in the ground for the future, or something that we can extend our vision toward in terms of what might come next, if we make some strides and a lot of what we've been talking about today sees significant uptake and use?
Durand: We're right on the cusp of the smartphone becoming a platform for strong, multi-factor authentication. I expect that adoption to be fairly quick, and you're going to see enterprises adopting stronger, smartphone-based authentication en masse.
Gardner: I suppose that is an accelerant to the bring-your-own-device (BYOD) trend. Is that how you see it as well?
Durand: It’s a little bit orthogonal to BYOD. The fact that corporations have to deal with that phenomenon brings its own IT headaches, but also its own opportunities in terms of the reality of where people want to get work done.
But the fact that we can assume that all of the devices out there now are essentially smartphone platforms, very powerful computers with lots of capabilities, is going to allow the enterprises now to leverage that device for really strong multi-factor authentication to know who the user is that’s making that request, irrespective of where they are -- if they're on the network, off the network, on a company-issued computer or on their BYOD.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.
Posted By Dana L Gardner,
Wednesday, July 02, 2014
The advent of the application programming interface (API) economy has forced a huge, pressing need for organizations to both seek openness and improve security for accessing mobile applications, data, and services anytime, anywhere, and from any device.
Awash in inadequate passwords and battling subsequent security breaches, business and end-users alike are calling for improved identity management and federation technologies. They want workable standards to better chart the waters of identity management and federation, while preserving the need for enterprise-caliber risk remediation and security.
Meanwhile, the mobile tier is becoming an integration point for scads of cloud services and APIs, yet unauthorized access to data remains common. Mobile applications are not yet fully secure, and identity control that meets audit requirements is hard to come by. And so developers are scrambling to find the platforms and tools to help them manage identity and security, too.
Clearly, the game has changed for creating new and attractive mobile processes, yet the same old requirements remain wanting around security, management, interoperability, and openness.
BriefingsDirect assembled a panel of experts to explore how to fix these pressing needs: Bradford Stephens, Developer and Platforms Evangelist in the CTO’s Office at Ping Identity; Ross Garrett, Senior Director of Product Marketing at Axway; and Kelly Grizzle, Principal Software Engineer at SailPoint Technologies. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: We are approaching the Cloud Identity Summit 2014 (CIS), which is coming up on July 19 in Monterey, Calif. There's a lot of frustration around delivering identity services that meet the needs of both developers and enterprise operators. So let’s talk a little bit about what’s going on with APIs and identity.
What are the trends in the market that keep this problem pressing? Why is it so difficult to solve?
Stephens: Well, as soon as we've settled on a standard, the way we interact with computers changes. It wasn’t that long ago that if you had Active Directory and SAML and you hand-wrote security endpoints of model security products, you were pretty much covered.
But in the last three or four years, we've gone to a world where mobile is more important than web. Distributed systems are more important than big iron. And we communicate with APIs instead of channels and SDKs, and that requires a whole new way of thinking about the problem.
Garrett: Ultimately, APIs are becoming the communication framework, the fabric, in which all of the products that we touch today talk to each other. That, by extension, presents a new identity challenge. That’s a big reason why we've seen some friction and schizophrenia around the types of identity technologies that are available to us.
So we see waves of different technologies come and go, depending on what is the flavor of the month. That has caused some frustration for developers, and will definitely come up during our Cloud Identity Summit in a couple of weeks.
Grizzle: APIs are becoming exponentially more important in the identity world now. As Bradford alluded to, the landscape is changing. There are mobile devices as well as software-as-a-service (SaaS) providers out there who are popping up new services all the time. The common thread between all of them is the need to be able to manage identities. They need to be able to manage the security within their system. It makes total sense to have a common way to do this.
APIs are key for all the different devices and ways that we connect to these service providers. Becoming standards based is extremely important, just to be able to keep up with the adoption of all these new service providers coming on board.
Gardner: As we describe this as the API economy, I suppose it’s just as much a marketplace and therefore, as we have seen in other markets, people strive for predominance. There's jockeying going on. Bradford, is this a matter of an architectural shift? Is this a matter of standards? Or is this a matter of de-facto standards? Or perhaps all of the above?
Stephens: It’s getting complex quickly. I think we're settling on standards, like it or not, mostly for the better. I see most people settling on at least OAuth 2.0 for standard tokens, and OpenID Connect for authentication information, but I think that’s about as far as we get.
There's a lot of struggle with established vendors vying to implement these protocols. They try to bridge the gap between the old world of, say, SAML and Active Directory, and the new world of SCIM, OAuth, and OpenID Connect. The standards are pretty settled, at least for the next two years, but the tools, how we implement them, and how much work it takes developers to implement them, are going to change a lot, and hopefully for the better.
Garrett: We have identified a number of new standards that are bridging this new world of API-oriented connectivity. Learning from the past of SAML and legacy, single sign-on infrastructure, we definitely need some good technology choices.
The standards seem to be leading the way. But by the same token, we should keep a close eye on how fast the market is moving relative to the standards. We've all seen things like OAuth progress more slowly than some of the implementations out there. This means the ratification of the standard was happening after many providers had actually implemented it. It's the same for OpenID Connect.
We are in line there, but the actual standardization process doesn’t always keep up with where the market wants to be.
Gardner: We've seen this play out before: standards can lag. Getting consensus, developing the documentation and details, and getting committees to sign off can take time, and markets move at their own velocity. Many times in the past, organizations have hedged their bets by adopting multiple standards or tracking multiple ways of doing things, which requires federation and integration.
Kelly, are there big tradeoffs with standards and APIs? How do we mitigate the risk and protect ourselves by both adhering to standards, but also being agile in the market?
Grizzle: That’s kind of tricky. You're right in that standards tend to lag. That’s just part and parcel of the standardization process. It’s like trying to pass a bill through Congress. It can go slow.
Something that we've seen some of these standards do right, from OAuth and from the SCIM perspective, is that both of those have started their early work with a very loose standardization process, going through not one of the big standards bodies, but something that can be a little bit more nimble. That’s how the SCIM 1.0 and 1.1 specs came out, and they came out in a reasonable time frame to get people moving on it.
Now that things have moved to the Internet Engineering Task Force (IETF), development has slowed down a little bit, but people have something to work with and are able to keep up with the changes going on there.
I don’t know that people necessarily need to adopt multiple standards to hedge their bets, but by taking what’s already there and keeping a pulse on the things that are going to change, as well as the standard being forward-thinking enough to allow some extensibility within it, service providers and clients, in the long run, are going to be in a pretty good spot.
Gardner: We've talked a few technical terms so far, and just for the benefit of our audience, I'd like to do a quick primer, perhaps with you Bradford. To start: OAuth, this is with the IETF now. Could you just quickly tell the audience what OAuth is, what it’s doing, and why it’s important when we talk about API, security and mobile?
Stephens: OAuth is the foundation protocol for authorization when it comes to APIs for web applications. OAuth 2 is much more flexible than OAuth 1.
Basically, it allows applications to ask for access to stuff. It seems very vague, but it’s really powerful once you start getting the right tokens for your workflows. And it provides the same foundation for everything else we do for identity and APIs.
The best example I can think of is when you log into Facebook, and Facebook asks whether you really want this app to see your birthday, all your friends’ information, and everything else. Being able to communicate all that over OAuth 2.0 is a lot easier than it was with OAuth 1.0 a few years ago.
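To make the delegation idea concrete, here is a minimal sketch of how an application builds the browser redirect that starts an OAuth 2.0 authorization-code flow, the step behind consent screens like the Facebook prompt just described. The endpoint, client ID, and redirect URI are made-up placeholders, not any particular provider's values:

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- every real provider publishes its own.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"

def build_authorize_url(client_id, redirect_uri, scopes):
    """Build the redirect that starts an OAuth 2.0 authorization-code
    flow; the user lands on a consent screen to approve the request."""
    params = {
        "response_type": "code",      # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),    # what the app is asking to see
        "state": "xyz123",            # CSRF guard; normally random
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

url = build_authorize_url("my-app", "https://app.example.com/cb",
                          ["openid", "profile"])
print(url)
```

After the user consents, the provider redirects back with a short-lived code that the app exchanges for tokens; that second, server-to-server step is omitted here.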
Gardner: How about OpenID Connect. This is with the OpenID Foundation. How does that relate, and what is it?
Stephens: If OAuth is the medium, OpenID Connect can be described as the content of the message. It’s not the message itself.
When you access an API and you authenticate, you choose a scope, and one of the most common scopes is OpenID Profile. This OpenID Profile will just have things like your username, maybe your address, various other pieces of identity information, and it describes who the "you" is, who you are.
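As an illustration of what that profile content looks like on the wire, the sketch below encodes and then decodes a made-up, unsigned ID-token payload. Real OpenID Connect tokens are signed JWTs whose signatures must be verified before the claims are trusted; the issuer and user data here are purely hypothetical:

```python
import base64
import json

def b64url_decode(segment):
    # JWT segments use unpadded base64url; restore the padding first.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Invented claims for illustration only.
claims = {"iss": "https://idp.example.com", "sub": "user-42",
          "name": "Ada Lovelace", "email": "ada@example.com"}

# Encode the claims the way a JWT payload segment is encoded...
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()

# ...then decode it back, as a client inspecting the token would.
decoded = json.loads(b64url_decode(payload))
print(decoded["sub"], decoded["name"])
```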
Gardner: And SCIM, you mentioned that Kelly, and I know you have been involved with it. So why don’t you take the primer for SCIM, and I believe it’s Simple Cloud Identity Management?
Grizzle: That's the historical name for it, Simple Cloud Identity Management. When we took the standard to the IETF, we realized that the problems that we were solving were a little bit broader than just the cloud and within the cloud. So the acronym now stands for the System for Cross-domain Identity Management.
That’s kind of a mouthful, but the concept is pretty simple. SCIM is really just an API and a schema that allows you to manage identities and identity-related information. And by manage them, I mean to create identities in systems to delete them, update them, change the entitlements and the group memberships, and things like that.
Gardner: From your perspective, Kelly, what is the relationship then between OAuth and SCIM?
Grizzle: OAuth, as Bradford mentioned, is primarily geared toward authorization, and answers the question, "Can Bob access this top-secret document?" SCIM is really not in the authorization and authentication business at all. SCIM is about managing identities.
OAuth assumes that an identity is already present. SCIM is able to create that identity. You can create the user "Bob." You can say that Bob should not have access to that top-secret document. Then, if you catch Bob doing some illicit activity, you can quickly disable his account through a SCIM call. So they fit together very nicely.
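A rough sketch of what those SCIM calls look like on the wire, following the SCIM 2.0 request shapes (RFC 7644). The user data is invented, and the endpoint paths in the comments are the standard SCIM resource routes, not any specific provider's URLs:

```python
import json

# Schema URNs defined by SCIM 2.0 (RFC 7643 / RFC 7644).
USER_SCHEMA = "urn:ietf:params:scim:schemas:core:2.0:User"
PATCH_SCHEMA = "urn:ietf:params:scim:api:messages:2.0:PatchOp"

# Body for POST /Users -- create the identity "bob".
create_user = {
    "schemas": [USER_SCHEMA],
    "userName": "bob",
    "name": {"givenName": "Bob", "familyName": "Smith"},
    "active": True,
}

# Body for PATCH /Users/{id} -- disable Bob's account in one call,
# e.g. after catching that illicit activity.
disable_user = {
    "schemas": [PATCH_SCHEMA],
    "Operations": [
        {"op": "replace", "path": "active", "value": False},
    ],
}

print(json.dumps(create_user, indent=2))
```

Deleting the identity outright would be a plain DELETE on the same /Users/{id} resource; flipping `active` to false is the lighter-weight disable described above.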
Gardner: In the real world, developers like to be able to use APIs, but they might not be familiar with all the details that we've just gone through on some of these standards and security approaches.
How do we make this palatable to developers? How do we make this something that they can implement without necessarily getting into the nitty-gritty? Are there some approaches to making this a bit easier to consume as a developer?
Stephens: As a developer who's relatively new to this field -- I worked in databases for three years -- I've had personal experience of how hard it is to wrap your head around all these standards and flows. The best thing we can do is have tool providers give developers tools in their native language, or in the way developers work with things.
This needs well-documented, interactive APIs -- things like Swagger -- and lots of real-world code examples. Once you've actually done the process of authentication through OAuth, getting a JSON Web Token, and getting an OpenID Connect profile, it’s really simple to see how it all works together, if you do it all through a SaaS platform that handles all the nitty-gritty, like user creation and all that.
If you have to roll your own, though, there's not a lot of information out there beyond white papers and blog posts. It’s just a nightmare. I tried to roll my own. You should never roll your own.
So having SaaS platforms do all this, instead of just handing developers documents, means that developers can focus on building their applications, and simply understand that they have a token that carries the information they need, without worrying about the mechanics of OAuth and OpenID Connect.
I don’t really care how it all works together; I just know that I have this token and it has the information I need. And it’s really liberating, once you finally get there.
So I guess the best thing we can do is provide really great tools that solve the identity-management problems.
Tools: a key point
Garrett: Tools, that’s the key point here. Whether we like it or not, developers tend to be kind of lazy sometimes and they certainly don’t have the time or the energy to understand every facet of the OAuth specification. So providing tools that can wrap that up and make it as easy to implement as possible is really the only way that we get to really secure mobile applications or any API interaction. Because without a deep understanding of how this stuff works, you can make pretty fundamental errors.
Having said that, at least we've started to take steps in the right direction with the standards. OAuth is built at least with the idea of mobile access in mind. It’s leveraging REST and JSON types, rather than SOAP and XML types, which are really way too heavyweight for mobile applications.
So the standards, in their own right, have taken us in the right direction, but we absolutely need tools to make it easy for developers.
Grizzle: Tools are of the utmost importance, and some of the identity providers and people with skin in the game, so to speak, are helping to create these tools and to open-source them, so that they can be used by other people.
Another thing that Ross touched on was keeping the simplicity in the spec. These things that we're addressing -- authorization, authentication, and managing identities -- are not extremely simple concepts always. So in the standards that are being created, finding the right balance of complexity versus completeness and flexibility is a tough line to walk.
With SCIM, as you said, the first initial of the acronym used to stand for Simple. It’s still a guiding principle that we use to try to keep these interactions as simple as possible. SCIM uses REST and JSON, just like some of these other standards. Developers are familiar with that. Putting the burden on the right parties for implementation is very important, too. To make it easy on clients, the ones who are going to be implementing these a lot, is pretty important.
Gardner: Do these standards do more than help the API economy settle out and mature? Cloud providers or SaaS providers want to provide APIs and they want the mobile apps to consume them. By the same token, the enterprises want to share data and want data to get out to those mobile tiers. So is there a data-management or brokering benefit that goes along with this? Are we killing multiple birds with one set of standards?
Garrett: The real issue here, when we think about the new types of products and services that the API economy is helping us deliver, is around privacy and ultimately customer confidence. Putting the user in control of who gets to access which parts of my identity profile, or how contextual information about me can perhaps make identity decisions easier, allows us to lock down, or better understand, these privacy concerns that the world has.
Identity isn’t the most glamorous thing to talk about -- except when it all goes wrong -- and some huge leak makes the news headlines, or some other security breach has lost credit-card numbers or people’s usernames and passwords.
Hand in hand
In terms of how identity services are developing alongside the API economy, the two go hand in hand. Unless people are absolutely certain about how their information is being used, they will simply choose not to use these services. That’s why all the work that the API-management vendors and the identity-management vendors are doing to bring the two together is so important.
Gardner: You mentioned that identity might not be sexy or top of mind, but how else can you manage all these variables on an automated or policy-driven basis? When we move to the mobile tier, we're dealing with multiple networks. We're dealing with multiple services ... cloud, SaaS, and APIs. And then we're linking this back to enterprise applications. How other than identity can this possibly be managed?
Stephens: Identity is often thought of as usernames and passwords, but it’s evolving really quickly to be so much more. This is something I harp on a lot, but it’s quickly getting to the point where who we are online is more important than who we are in real life. How I identify myself online is more important than the driver's license I carry in my wallet.
As you know, your driver’s license is like a real-life token of information that describes what you're allowed to do in your life. That’s part of your identity. Anybody who has lost their license knows that, without that, there's not a whole lot you can do.
Bringing that analogy back to the Internet: what you're able to access, and what access you're able to give other people or other applications to change important things, like your Facebook posts or your tweets, or to go through your email and help categorize it, is important. All these little tasks that help define how you live are part of your identity. And it’s important that developers understand that, because any connected application is going to have to have a deep sense of identity.
Gardner: Let me pose the same question, but in a different way. When you do this well, when you can manage identity, when you can take advantage of these new standards that extend into mobile requirements and architectures, with the API economy in mind, what do you get? What does it endow you with? What can you do that perhaps you couldn’t do if you were stuck in some older architectures or thinking?
Grizzle: Identity is key to everything we do. Like Bradford was just saying, the things that you do online are built on the trust that you have with who is doing them. There are very few services out there where you want completely anonymous access. Almost every service that you use is tied to an identity.
So it’s of paramount importance to get a common language between these. If we don’t move to standards here, it's just going to be a major cost problem, because there are a ton of different providers and clients out there.
If every provider tries to roll their own identity infrastructure, without relying on standards, then, as a client, if I need to talk to two different identity providers, I need to write to two different APIs. It’s just an explosive problem, with the amount that everything is connected these days.
So it’s key. I can’t see how the system will stand up and move forward efficiently without these common pieces in place.
Gardner: Do we have any examples along these same lines of what do you get when you do this well and appropriately based on what you all think is the right approach and direction? We've been talking at a fairly abstract level, but it really helps solidify people’s thinking and understanding when they can look at a use-case, a named situation or an application.
Stephens: If you want a good example of how OAuth delegation works, building a Facebook app or just working on Facebook app documentation is pretty straightforward. It gives you a good idea of what it means to delegate certain authorization.
Likewise, Google is very good. It’s very integrated with OAuth and OpenID Connect when it comes to building things on Google App Engine.
So if you want to secure an API that you built with Google Cloud on Google App Engine, which is trivial to do, Google Cloud Endpoints provides a really good example. In fact, there's a button in their examples labeled "Use OAuth," and that OAuth flow carries the OpenID Connect profile, so it's a pretty easy way to go about it.
Garrett: I'll just take a simple consumer example, and we've touched on this already. In the past, every individual service or product offered only its own identity solution, so I had to create a new identity profile for every product or service I used. This has been the case for a long time on the consumer web, and in the enterprise setting as well.
So we have to be able to solve that problem and offer a way to reuse existing identities. It involves taking technologies like OpenID Connect, which is totally hidden from the end user, and simply saying that you can use an existing identity, such as your LinkedIn or Facebook credentials, to access some new product. That takes a lot of the burden away from the consumer. Ultimately, it provides a better security model end to end.
What these new identity service providers have been offering has, behind the scenes, been making your lives more secure. Even though some people might shy away from using their Facebook identity across multiple applications, in many ways it's actually better to do so, because that's one centralized place where I can see, audit, and adjust the way that I'm presenting my identity to other people.
That’s a really great example of how these new technologies are changing the way we interact with products everyday.
Grizzle: At SailPoint, the company that I work for, we have a client, a large chip maker, who has seen the identity problem and really been bitten by it within their enterprise. They have somewhere around 3,500 systems that have to be able to talk to each other, exchange identity data, and things like that.
The issue is that every time they acquire a new company or bring a new group into the fold, that company has its own set of systems that speak their own language, and it takes forever to get them integrated into their IT organization there.
So they've said that they're not going to support every app that these people bring into the IT infrastructure. They're going with SCIM: if the apps that come in speak SCIM, they'll take ownership of them and pull them into their environment, and they should just plug in nice and easy. They're doing it from a resourcing perspective; they can't keep up with the amount of change to their IT infrastructure and keep everything automated.
Gardner: I want to quickly look at the Cloud Identity Summit that’s coming up. It sounds like a lot of these issues are going to be top of mind there. We're going to hear a lot of back and forth and progress made.
Does this strike you, Bradford, as a tipping point of some sort, that this event will really start to solidify thinking and get people motivated? How do you view the impact of this summit on cloud identity?
Stephens: At CIS, we're going to see a lot of talk about real-world implementation of these standards. In fact, I'm running the Enterprise API track and I'll be giving a talk on end-to-end authentication using JAuth, OAuth, and OpenID Connect. This year is the year that we show that it's possible. Next year, we'll be hearing a lot more about people using it in production.
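In an OpenID Connect flow like the one Stephens describes, the identity provider hands the application an ID token, which is a JWT: three base64url-encoded segments (header, claims payload, signature). A hedged Python sketch of reading the claims out of such a token follows; the token here is fabricated for illustration, and real code must verify the signature before trusting any claim:

```python
import base64, json

def b64url_decode(segment):
    # JWT segments are base64url without padding; restore it before decoding.
    segment += "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment)

def jwt_claims(token):
    # A JWT is header.payload.signature; the claims live in the middle part.
    # NOTE: production code must verify the signature before trusting claims.
    header_b64, payload_b64, _signature = token.split(".")
    return json.loads(b64url_decode(payload_b64))

# Fabricated, unsigned token for illustration only.
claims = {"iss": "https://idp.example.com", "sub": "user-42", "aud": "my-app"}
token = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "",
])
print(jwt_claims(token)["sub"])  # → user-42
```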
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.
You may also be interested in:
Cloud Identity Summit
Posted By Dana L Gardner,
Thursday, June 26, 2014
| Comments (0)
When Capgemini's business information management (BIM) practices unit needed to provide big data capabilities to its insurance company customers, it needed to deliver the right information to businesses much faster from the very bottom up.
That means an improved technical design and an architectural way of delivering information through business intelligence (BI) and analytics. The ability to bring together structured and unstructured data -- and be able to slice and dice that data in a rapid fashion; not only deploy it, but also execute rapidly for organizations out there -- was critical for Capgemini.
And that's because Capgemini's Financial Services Global Business Unit, based in the United Kingdom, must drive better value to its principal-level and senior-level consultants as they work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive a strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with a lot of those COO- and CFO-level customers.
Our next innovation case study interview therefore highlights how Capgemini is using big data and analysis to help its organization clients better manage risk.
BriefingsDirect had an opportunity to learn first-hand how big data and analysis help its Global 500 clients identify the most pressing analysis from huge data volumes. To do so, we interviewed Ernie Martinez, Business Information Management Head at the Capgemini Financial Services Global Business Unit in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Risk has always been with us. But is there anything new, pressing, or different about the types of risks that your clients are trying to reduce and understand?
Martinez: I don't think it's as much about what's new within the risk world, as much as it's about the time it takes to provision the data so companies can make the right decisions faster, therefore limiting the amount of risk they may take on in issuing policies or taking on policies with new clients.
Gardner: In addition to the risk issue, of course, there is competition. The speed of business is picking up, and we’re still seeing difficult economic climates in many markets. How do you step into this environment and find a technology that can improve things? What have you found?
Martinez: There is the technology aspect of delivering the right information to business faster. There is also the business-driven way of delivering that information faster to business.
The BIM practice is a global practice. We’re ranked in the upper right-hand quadrant by Gartner as one of the best BIM practices out there, with about 7,000 BIM resources worldwide.
Our focus is on driving better value to the customer. So we have principal-level and senior-level consultants that work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive a strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with a lot of those COO- and CFO-level customers.
We then drive more business into the technical design and architectural way of delivering information in business intelligence (BI) and analytics. Once we define what the road to good looks like for an organization, when you talk about integrating information across the enterprise, it's about what that path to good looks like and what key initiatives an organization must undertake to get there.
This is where our technical design, business analysis, and data analysis consultants fit in. They’re actually going in to work with business to define what they need to see out of their information to help them make better decisions.
Gardner: Of course, the very basis of this is to identify the information, find the information, and put the information in a format that can be analyzed. Then, do the analysis, speed this all up, and manage it at scale and at the lowest possible cost. It’s a piece of cake, right? Tell us about the process you go through, how you decide what solutions to use, and where the best bang for the buck comes from.
Martinez: Our approach is to take that senior-level expertise in big data and analytics, bring that into our practice, put that together with our business needs across financial services, insurance, and capital markets, and begin to define valid use cases that solve real business problems out there.
We’re a consulting organization, and I expect our teams to be able to be subject matter experts on what's happening in the space and also have a good handle on what the business problems are that our customers are facing. If that’s true, then we should be able to outline some valid use cases that are going to solve some specific problems for business customers out there.
In doing so, we’ll define that use case. We’ll do the research to validate that indeed it is a business problem that's real. Then we’ll build the business case that outlines that if we do build this piece of intellectual property (IP), we believe we can go out and proactively affect the marketplace and help customers out there. This is exactly what we did with HP and the HAVEn platform.
Why Capgemini and our BIM practices jumped in with a partnership with HP and Vertica in the HAVEn platform is really about the ability to deliver the right information to business faster from the bottom up. That means the infrastructure and the middleware by which we serve that data to business. From the top down, we work with business in a more iterative fashion in delivering value quickly out of the data that they are trying to harvest.
Gardner: So we’re talking about a situation where you want to have wide applicability of the technology across many aspects of what you are doing, that make sense economically, but of course it also has to be the right tool for the job, that's to go deep and wide. You’re in a proof-of-concept (POC) stage. How did you come to that? What were some of the chief requirements you had for doing this at that right balance of deep and wide?
Martinez: We, as an organization, believe that our goal as BI and analytics professionals is to deliver the right information faster to business. In doing so, you look at the technologies that are out there that are positioned to do that. You look at the business partners that have that mentality to actually execute in that manner. And then you look at the organization, like ours, whose sole purpose is to mobilize quickly and deliver value to customer.
I think it was a natural fit. When you look at HP Vertica in the HAVEn platform, the ability to integrate social media data through Autonomy and then of course through Vertica and Hadoop -- the integration of the entire architecture -- gives us the ability to do many things.
But number one, it's the ability to bring in structured and unstructured data, and be able to slice and dice that data in a rapid fashion; not only deploy it, but also execute rapidly for organizations out there.
Over the course of the last six months of 2013, that conversation began to blossom into a relationship. We all work together as a team and we think we can mobilize not just the application or the solution that we’re thinking about, but the entire infrastructure, and deliver it to our customers quickly. That's where we’re at.
What that means is that once we partnered and got the go ahead with HP Vertica to move forward with the POC, we mobilized a solution in less than 45 days, which I think shows the value of the relationship from the HP side as well as from Capgemini.
Gardner: Down the road, after some period of implementation, there are general concerns about scale when you’re dealing with big data. Because you’re near the beginning of this, how do you feel about the ability for the platform to work to whatever degree you may need?
Martinez: Absolutely no concern at all. Being here at HP Discover has certainly solidified in my mind that we’re betting on the right horse with their ability to scale. If you heard some of the announcements coming out, they’re talking about the ability to take on big data. They’re using Vertica and the HAVEn network.
There’s absolutely zero question in my mind that organizations out there can leverage this platform and grow with it over time. Also, it gives us the ability to be able to do some things that we couldn’t do a few years back.
Gardner: Ernie, let's get back to the business value here. Perhaps you can identify some of the types of companies that you think would be in the best position to use this. How will this hit the road? What are the sweet spots in the market, the applications you think are the most urgent and the right fit for this?
Martinez: When you talk about the largest insurers around the world, whether from Zurich to Farmers in the US to Liberty Mutual, you name it, these are some of our friendly customers that we are talking to that are providing feedback to us on this solution.
We’ll incorporate that feedback. We’ll then take that to some targeted customers in North America, UK, and across Europe, that are primed and in need of a solution that will give them the ability to not only assess risk more effectively, but reduce the time to be able to make these type of decisions.
Reducing the time to provision data reduces costs by integrating data across multiple sources, whether it be customer sentiment from the Internet, from Twitter and other areas, to what they are doing around their current policies. It allows them to identify customers that they might want to go after. It will increase their market share and reduce their costs. It gives them the ability to do many more things than they were able to do in the past.
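The cross-source integration Martinez describes, combining structured policy data with sentiment signals mined from sources like Twitter, can be pictured as a simple keyed join followed by a filter. This toy Python sketch (every field name, threshold, and record is invented for illustration) shows the shape of that idea:

```python
# Toy sketch: join structured policy records with sentiment scores
# derived from unstructured sources, then flag attractive customers.
policies = [
    {"customer": "acme", "premium": 120000, "claims_ratio": 0.4},
    {"customer": "globex", "premium": 95000, "claims_ratio": 0.9},
]
sentiment = {"acme": 0.8, "globex": -0.2}  # e.g. mined from social media

def prospects(policies, sentiment, min_sentiment=0.5, max_claims=0.6):
    # A customer is worth pursuing if public sentiment is positive
    # and the historical claims ratio is acceptably low.
    return [
        p["customer"]
        for p in policies
        if sentiment.get(p["customer"], 0.0) >= min_sentiment
        and p["claims_ratio"] <= max_claims
    ]

print(prospects(policies, sentiment))  # → ['acme']
```

At warehouse scale the join and filter would run inside the analytics platform rather than in application code, but the business logic is the same.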
Gardner: And Capgemini is in the position of mastering this platform and being able to extend the value of that platform across multiple clients and business units. Therefore, that reduces the total cost of that technology, but at the same time, you’re going to have access to data across industries, and perhaps across boundaries that individual organizations might not be able to attain.
So there's a value-add here in terms of your penetration into the industry and then being able to come up with the inferences. Tell me a little bit about how the access-to-data benefit works for you?
Martinez: If you take a look at the POC, or the use case that the POC was built on, it was built on a commercial insurance risk assessment. If you take a look at the underlying architecture around commercial insurance risk, our goal was to be able to build an architecture that will serve the use case that HP bought into, but at the same time, flatten out that data model and that architecture to also bring in better customer analytics for commercial insurance risk.
So we’ve flattened out that model and we’ve built the architecture so we could go after additional business, and more clients, across not just commercial insurance, but also general insurance. Then, you start building in the customer analytics capability within that underlying architecture and it gives us the ability to go from the insurance market over to the financial services market, as well as into the capital markets area.
Gardner: All the data in one place makes a big difference.
Martinez: It makes a huge difference, absolutely.
Gardner: Tell us a bit about the future. We’ve talked about a couple of aspects of the HAVEn suite. Autonomy, Vertica, and Hadoop seem to be on everyone's horizon at some point or another due to scale and efficiencies. Have you already been using Hadoop, or how do you expect to get there?
Martinez: We haven’t used Hadoop, but certainly, with its capability, we plan to. I’ve done a number of different strategies and roadmaps in engaging with larger organizations, from American Express to the largest retailer in the world. In every case, they have a lot of issues around how they’re processing the massive amounts of data that are coming into their organization.
When you look at the extract, transform, load (ETL) processes by which they are taking data from systems of record, trying to massage that data and move it into their large databases, they are having issues around load and meeting load windows.
The HAVEn platform, in itself, gives us the ability to leverage Hadoop, maybe take out some of that processing pre-ETL, and then, before we go into the Vertica environment, be able to take out some of that load and make the Vertica even more efficient than it is today, which is one of the biggest selling points of Vertica. It certainly is in our plans.
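The load-window relief Martinez is after comes from collapsing raw source records into aggregates upstream of the warehouse, so far fewer rows hit the load. In Hadoop that pre-ETL step would run as a map-reduce job; this small Python sketch (all records invented) shows the same reduction in memory to make the idea concrete:

```python
from collections import defaultdict

# Collapse raw events into one row per (day, account) so the
# downstream warehouse load handles aggregates, not raw volume.
raw_events = [
    {"day": "2014-06-01", "account": "a1", "amount": 10.0},
    {"day": "2014-06-01", "account": "a1", "amount": 5.0},
    {"day": "2014-06-01", "account": "a2", "amount": 7.5},
]

def daily_totals(events):
    totals = defaultdict(float)
    for e in events:
        totals[(e["day"], e["account"])] += e["amount"]
    # Emit one aggregate row per key instead of one row per raw event.
    return [
        {"day": d, "account": a, "amount": amt}
        for (d, a), amt in sorted(totals.items())
    ]

print(daily_totals(raw_events))
```

Three raw rows become two aggregate rows here; at the data volumes Martinez describes, the same reduction shrinks the load window dramatically.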
Gardner: Another announcement here at Discover has been around converged infrastructure, where they’re trying to make the hardware-software efficiency and integration factor come to bear on some of these big-data issues. Have you thought about the deployment platform as well as the software platform?
Martinez: You bet. At the beginning of this interview, we talked about the ability to deliver the right information faster to business. This is a culture that organizations absolutely have to adopt if they are going to be able to manage the amount of data at the speed at which that data is coming to their organizations. To be able to have a partner like HP who is talking about the convergence of software and infrastructure all at the same time to help companies manage this better, is one of the biggest reasons why we're here.
We, as a consulting organization, can provide the consulting services and solutions that are going to help deliver the right information, but without that infrastructure, without that ability to be able to integrate faster and then be able to analyze what's happening out there, it’s a moot point. This is where this partnership is blossoming for us.
Gardner: Before we sign off, Ernie, now that you have gone through this understanding and have developed some insights into the available technologies and made some choices, is there any food for thought for others who might just be beginning to examine how to enter big data, how to create a common platform across multiple types of business activities? What did you not think of before that you wish you had known?
Martinez: If I look back at lessons learned over the last 60 to 90 days for us within this process, it’s one thing to say that you're mobilizing the team right from the bottom up, meaning from the infrastructure and the partnership with HP, and as well as the top-down with your business needs to finding the right business requirements and then actually building to that solution.
In most cases, we’re dealing with individuals. While we might talk about an entrepreneurial way of delivering solutions into the marketplace, we need to challenge ourselves, and all of the resources that we bring into the organization, to actually have that mentality.
What I’ve learned is that while we have some very good tactical individuals, having that entrepreneurial way of thinking and actually delivering that information is a different mindset altogether. It's about mentoring our resources that we currently have, bringing in that talent that has more of an entrepreneurial way of delivering, and trying to build solutions to go to market into our organization.
I didn’t really think about the impact of our current resources and how it would affect them. We were a little slow as we started the POC. Granted, we did this in 45 days, so that’s the perfectionist coming out in me, but I’d say it did highlight a couple of areas within our own team that we can improve on.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
You may also be interested in:
Posted By Dana L Gardner,
Tuesday, June 24, 2014
Updated: Tuesday, June 24, 2014
| Comments (0)
The next BriefingsDirect panel discussion defines new business values from the massive Open Platform 3.0 shift that combines the impacts and benefits of big data, cloud, Internet of things, mobile and social.
Our discussion comes to you from The Open Group Conference held on May 13, 2014 in Amsterdam, where the focus was on enabling boundaryless information flow.
To learn more about making Open Platform 3.0 a business benefit in an architected fashion, please join moderator Stuart Boardman, a Senior Business Consultant at KPN and Open Platform 3.0 Forum co-chairman; Dr. Chris Harding, Director for Interoperability at The Open Group, and Open Platform 3.0 Forum Director; Lydia Duijvestijn, Executive Architect at IBM Global Business Services in The Netherlands; Andy Jones, Technical Director for EMEA at SOA Software; TJ Virdi, Computing Architect in the Systems Architecture Group at Boeing and also a co-chair of the Open Platform 3.0 Forum; Louis Dietvorst, Enterprise Architect at Enexis in The Netherlands; Sjoerd Hulzinga, Charter Lead at KPN Consulting, and Frans van der Reep, Professor at the Inholland University of Applied Sciences.
Here are some excerpts:
Boardman: Welcome to the session about obtaining value from Open Platform 3.0, and how we're actually going to get value out of the things that we want to implement from big data, social, and the Internet-of-Things, etc., in collaboration with each other.
We're going to start off with Chris Harding, who is going to give us a brief explanation of what the platform is, what we mean by it, what we've produced so far, and where we're trying to go with it.
He'll be followed by Lydia Duijvestijn, who will give us a presentation about the importance of non-functional requirements (NFRs). If we talk about getting business value, those are absolutely central. Then, we're going to go over to a panel discussion with additional guests.
Without further ado, here's Chris Harding, who will give you an introduction to Open Platform 3.0.
Purpose of architecture
Harding: Hello, everybody. It's a great pleasure to be here in Amsterdam. I was out in the city by the canals this morning. The sunshine was out, and it was like moving through a set of picture postcards.
It's a great city. As you walk through, you see the canals, the great buildings, the houses to the sides, and you see the cargo hoists up in the eaves of those buildings. That reminds you that the purpose of the arrangement was not to give pleasure to tourists, but because Amsterdam is a great trading city, that is a very efficient way of getting goods distributed throughout the city.
That's perhaps a reminder to us that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well.
Probably when those canals were first thought of, it was not obvious that this was the right thing to do for Amsterdam. Certainly it would not be obvious that this was the right layout for that canal network, and that is the exciting stage that we're at with Open Platform 3.0 right now.
We have developed a statement, a number of use cases. We started off with the idea that we were going to define a platform to enable enterprises to get value from new technologies such as cloud computing, social computing, mobile computing, big data, the Internet-of-Things, and perhaps others.
We developed a set of business use cases to show how people are using and wanting to use those technologies. We developed an Open Group business scenario to capture the business requirements. That then leads to the next step. All these things sound wonderful, all these new technologies sound wonderful, but what is Open Platform 3.0?
Though we don't have the complete description of it yet, it is beginning to take shape. That's what I am hoping to share with you in this presentation, our current thoughts on it.
Looking historically, the first platform, you could say, was operating systems -- the Unix operating system. The reason why The Open Group, X/Open in those days, got involved was because we had companies complaining, "We are locked into a proprietary operating system or proprietary operating systems. We want applications portability." The value delivered through a common application environment, which was what The Open Group specified for Unix, was to prevent vendor lock-in.
The second platform is the World Wide Web. That delivers a common services environment, for services either through accessing web pages for your browser or for web services where programs similarly can retrieve or input information from or to the web service.
The benefit that that has delivered is universal deployment and access. Pretty much anyone or any company anywhere can create a services-based solution and deploy it on the web, and everyone anywhere can access that solution. That was the second platform.
The way Open Platform 3.0 is developing is as a common architecture environment, a common environment in which enterprises can do architecture, not as a replacement for TOGAF. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0.
Open Platform 3.0 is more about what kind of architecture you will create, and by the definition of a common environment for doing this, the big business benefit that will be delivered will be integrated solutions.
Yes, you can develop a solution, anyone can develop a solution, based on services accessible over the World Wide Web, but will those solutions work together out of the box? Not usually. Very rarely.
There is an increasing need, which we have come upon in looking at the Open Platform 3.0 technologies. People want to use these technologies together. There are solutions developed for those technologies independently of each other that need to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently. That's what I am going to talk about.
The Open Group has recently published its first thoughts on Open Platform 3.0, that's the White Paper. I will be saying what’s in that White Paper, what the platform will do -- and because this is just the first rough picture of what Open Platform 3.0 could be like -- how we're going to complete the definition. Then, I will wrap up with a few conclusions.
So what is in the current White Paper? Well, what we see as being eventually in the Open Platform 3.0 standards are a number of things. You could say that a lot of these are common architecture artifacts that can be used in solution development, and that's why I'm talking about a common architecture environment.
A statement of needs, objectives, and principles: that is not an artifact, of course; it's why we're doing it.
Definition of key terms: clearly you have to share an understanding of the key terms if you're going to develop common solutions or integrable solutions.
Stakeholders and their concerns: an important feature of an architecture development. An understanding of the stakeholders and their concerns is something that we need in the standard.
A capabilities map that shows what the products and services do that are in the platform.
And basic models that show how those platform components work with each other and with other products and services.
Explanation: this is an important point and one that we haven’t gotten to yet, but we need to explain how those models can be combined to realize solutions.
Standards and guidelines
Finally, it's not enough to just have those models; there needs to be the standards and guidelines that govern how the products and services interoperate. These are not standards that The Open Group is likely to produce. They will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, probably in some cases, coordinate with the appropriate bodies to see that they are developed.
What we have in the White Paper is an initial statement of needs, objectives, and principles; definitions of some key terms; our first-pass list of stakeholders and their concerns; and maybe half a dozen basic models. These are in an analysis of the use cases, the business use cases, for Open Platform 3.0 that were developed earlier.
These are just starting points, and it's incomplete. Each of those sections is incomplete in itself, and of course we don't have the complete set of sections. It's all subject to change.
This is one of the basic models that we identified in the snapshot. It's the Mobile Connected Device Model and it comes up quite often. And you can see, that stack on the left is a mobile device, it has a user, and it has a platform, which would quite likely be Android or iOS. And it has infrastructure that supports the platform. It’s connected to the World Wide Web, because that’s part of the definition of mobile computing.
On the right, you see, and this is a frequently encountered pattern, that you don't just use your mobile phone for running an app. Maybe you connect it to a printer. Maybe you connect it to your headphones. Maybe you connect it to somebody's payment terminal. You might connect it to various things. You might do it through a USB. You might do it through Bluetooth. You might do it by near field communication (NFC).
But you're connecting to some device, and that device is being operated possibly by yourself, if it was headphones; and possibly by another organization if, for example, it was a payment terminal and the user of the mobile device has a business relationship with the operator of the connected device.
That’s the basic model. It's one of the basic models that came up in the analysis of use cases, which is captured in the White Paper. As you can see, it's fundamental to mobile computing and also somewhat connected to the Internet-of-Things.
That's the kind of thing that's in the current White Paper, a specific example of all those models in the current White Paper. Let’s move on to what the platform is actually going to do.
There are three slides in this section. This slide is probably familiar to people who have watched presentations on Open Platform 3.0 previously. It captures our understanding of the need to obtain information from these new technologies, the social media, the mobile devices, sensors, and so on, the need to process that information, maybe on the cloud, and to manage it, stewardship, query and search, all those things.
Ultimately, and this is where you get the business value, it delivers it in a form where there is analysis and reasoning, which enables enterprises to take business decisions based on that information.
So that’s our original picture of what Open Platform 3.0 will do.
IT as broker
This next picture captures a requirement that we picked up in the development of the business scenario. A gentleman from Shell gave an excellent presentation this morning. One of the things you may have picked up from him was that the IT department is becoming a broker.
Traditionally, you would have had the business use in the business departments and pretty much everything else on that slide in the IT department, but two things are changing. One, the business users are getting smarter, more able to use technology; and two, they want to use technology either themselves or to have business technologists closely working with them.
Systems provisioning and management is often going out to cloud service providers, and the programming, integration, and helpdesk is going to brokers, who may be independent cloud brokers. This is the IT department in a broker role, you might say.
But the business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, it's not a good defense to say, "Well, our broker did it that way." You are responsible.
Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved. And that’s a way of using IT that Open Platform 3.0 must and will accommodate.
Finally, I mentioned the integration of independently developed solutions. This next slide captures how that can be achieved. Both of these, by the way, are from the analysis of business use cases.
You'll also notice they are done in ArchiMate, and I will give ArchiMate a little plug at this point, because we have found it very useful in doing this analysis.
But the point is that if those solutions share a common model, then it's much easier to integrate them. That's why we're looking for Open Platform 3.0 to define the common models that you need to access the technologies in question.
It will also have common artifacts, such as architectural principles, stakeholders, definitions, descriptions, and so on. If the independently developed architectures use those, it will mean that they can be integrated more easily.
So how are we going to complete the definition of Open Platform 3.0? This slide comes from our business use cases White Paper and it shows the 22 use cases we published. We've added one or two to them since the publication, in a whole range of areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on. Those use cases touch on a wide variety of areas.
You can see that we've started an analysis of those use cases. This is an ArchiMate picture showing how our first business use case, The Mobile Smart Store, could be realized.
And as you look at that, you see common models. If you notice, that is pretty much the same as the TOGAF Technical Reference Model (TRM) from the year dot. We've added a business layer. I guess that shows that we have come architecturally a little way in that direction since the TRM was defined.
But you also see that the same model actually appears in the same use case in a different place, and it appears all over the business use cases.
But you can also see there that the Mobile Connected Device Model has appeared in this use case and is appearing in other use cases. So as we analyze those use cases, we're finding common models that can be identified, as well as common principles, common stakeholders, and so on.
So we have a development cycle, whereby the use cases provide an understanding. We'll be looking not only at the ones we have developed, but also at things like the healthcare presentation that we heard this morning. That is really a use case for Open Platform 3.0 just as much as any of the ones that we have looked at. We'll be doing an analysis of those use cases and the specification and we'll be iterating through that.
The White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately The Open Platform 3.0 standard, and no doubt, more than one version of that standard.
In conclusion, Open Platform 3.0 provides a common environment for architecture development. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially new technologies.
Cognitive computing has been suggested as another technology that Open Platform 3.0 might, in due course, accommodate. What would that lead to? It would lead to additional use cases and further analysis, which would no doubt identify some basic models for cognitive computing to be added to the platform.
Open Platform 3.0 enables enterprise IT to be user-driven. This is really the revolution on that slide that showed the IT department becoming a broker, and devolvement of IT to cloud suppliers and so on. That's giving users the ability to drive IT directly themselves, and the platform will enable that.
It will deliver the ability to integrate solutions that have been independently developed, with independently developed architectures, and to do that within a business ecosystem, because businesses typically exist within one or more business ecosystems.
Those ecosystems are dynamic. Partners join, partners leave, and businesses cannot necessarily standardize the whole architecture across the ecosystem. It would be nice to do so, but by the time you finish the job, the business opportunity would be gone.
So integration of independently developed architectures is crucial to the world of business ecosystems and to delivering value within them.
The platform will deliver that and is being developed through an iterative process of understanding the content, analyzing the use cases, and documenting the common features, as I have explained.
The development is being done by The Open Platform 3.0 Forum, and these are representatives of Open Group members. They are defining the platform. And the forum is not only defining the platform, but it's also working on standards and guides in the technology areas.
For example, we have formed a group to develop a White Paper on big data. If you want to learn about that, Ken Street, who is one of the co-chairs, is at this conference. We also have cloud projects and other projects.
But not only are we doing the development within the Forum, we welcome input and comments from other individuals within and outside The Open Group and from other industry bodies. That’s part of the purpose of publishing the White Paper and giving this presentation to obtain that input and comment.
If you need further information, here's where you can download the White Paper. You have to give your name and email address and have an Open Group ID, and then it's free to download.
If you are looking for deeper information on what the Forum is doing, the Forum's Plato page, at the next URL, is the place to find it. Nonmembers get some information there; Forum members can log in and get more information on our work in progress.
Boardman: Next is Lydia Duijvestijn, who is one of these people who, years ago when I first got involved in this business, we used to call Technical Architects, when the term meant something. The Technical Architect was the person who made sure that the system actually did what the business needed it to do, that it performed, that it was reliable, and that it was trustworthy.
That's one of her preoccupations. Lydia is going to give us a short presentation about some ideas that she is developing and is going to contribute to The Open Platform 3.0.
Quality of service
Duijvestijn: As Stuart said, I am an architect by profession, as well as a conventional performance engineer. I lead a worldwide community within IBM for the performance competency. I've been working for a couple of years with the Dutch Research Institute on projects around quality of service. That is basically my focus area within the business. I work for Global Services within IBM.
What I want to achieve with this presentation is for you to get a better awareness of what non-functional requirements, non-functional characteristics, or quality-of-service characteristics are, and why they won't just appear out of the blue when the new world of Platform 3.0 comes along. They are getting more and more important.
I will zoom in very briefly on three categories: performance and scalability, availability and business continuity, and security and privacy. I'm not going to talk in detail about these topics. I could do that for hours, but we don't have the time.
Then, I'll briefly start the discussion on how that reflects into Platform 3.0. The goal is that when we're here next year at the same time, maybe we would have formed a stream around it and we would have many more ideas, but now, it's just in the beginning.
This is, basically, a recap of non-functional requirements. We have to start the presentation with that, because maybe not everybody knows this. They are basically qualities or constraints that must be satisfied by the IT system. Normally, though, they're not the highest priority. Normally, it's functionality first and then the rest. We find out about the rest later, when the system goes into production, and then it's too late.
So what sorts of non-functionals do we have? We have run-time non-functionals, things that can be observed at run-time, such as performance and availability. We also have non-run-time non-functionals, things that cannot readily be observed or tested at run-time, such as maintainability, but they are all very important for the system.
Then, we have constraints, limitations that you have to be aware of. In the new world, it looks as if there are no limitations and the cloud is endless, but in fact that's not true.
Non-functionals are fairly often seen as a risk. If you don't pay attention to them, very nasty things can happen. You can lose business. You can lose your image. And many other things can happen to you. Working on them is not seen as something positive; it's seen as a risk if you don't, but it's a significant risk.
We've seen occasions where a system was developed that was really doing what it should do in terms of functionality. Then, it was rolled into production, all these different users came along, and the website completely collapsed. The company was in the newspapers, and it was a very bad place to be in.
As an example, I took this picture in Badaling Station, near the Great Wall. I use this in my performance class. This depicts a mismatch between the workload pattern and the available capacity.
What happens here is that you take the train in the morning and walk over to the Great Wall. Then you've seen it, you're completely fed up with it, and you want to go back, but you have to wait until 3 o'clock for the first train. The Chinese are very patient people, so they accept that. In the Netherlands, people would start shouting and screaming, demanding better.
This is an example from real life, where you can have a very dissatisfied user because there was a mismatch between the workload, the arrival pattern, and available capacity.
But it can get much worse. Here we have listed a number of newspaper quotes resulting from security incidents. This is something that really bothers companies, and it is also a non-functional. It's really very important, especially as we move toward always on, always accessible, anytime, anywhere. This is really a big issue.
There are many, many non-functional aspects, as you can see. This guy can't make sense of it. He doesn't know how to balance them, because it's not as if you can have them all. If you put too much focus on one, it can be bad for another. So you really have to balance and prioritize.
Not all non-functionals are equally important. We picked three of them for our conference in February: performance, availability and security. I now want to talk about performance.
Everybody recognizes this picture. This was Usain Bolt winning his 100 meters in London. Why did I put this up? Because it very clearly shows what it's all about in performance. There are three attributes that are important.
You have the response time: basically, the 100-meter time from start to finish.
You have the throughput: the number of items that can be processed within a time limit. If this is an eight-lane track, you can have only eight runners at the same time. And the capacity is basically the fact that this is an eight-lane track. They are all dependent on each other. It's very simple, but you have to be aware of all of them when you start designing your system. So this is performance.
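The three attributes are tied together by a simple relationship often stated as Little's Law: work in progress (capacity in use) equals throughput times response time. A minimal illustrative sketch, with made-up numbers for the sprint analogy:

```python
# Little's Law ties the three performance attributes together:
#   capacity (concurrent work in the system) = throughput x response time
# Rearranged: throughput = capacity / response time.
def throughput(capacity: float, response_time_s: float) -> float:
    """Completions per second for a given concurrency and latency."""
    return capacity / response_time_s

# Eight-lane track, roughly 10 s per 100-meter heat:
# at most 0.8 runner-finishes per second, no matter how long the queue is.
print(throughput(capacity=8, response_time_s=10.0))  # 0.8
```

The same arithmetic says that if response time doubles while capacity stays fixed, throughput halves, which is why all three must be considered together at design time.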
Now, let's go to availability. That is really a very big point today. With the coming of the Internet in the '90s, availability really became important. We saw that when companies started opening up their mainframes to the Internet: they weren't designed to be open all the time; they were built around scheduled downtime. Companies such as eBay, Amazon, and Google are setting the standard.
We come to a company, and they ask us for our performance engineering. We ask them what their non-functional requirements are. They tell us that it has to be as fast as Google.
Well, you're not doing the same thing as Google; you are doing something completely different. Your infrastructure doesn’t look as commodity as Google's does. So how are you going to achieve that? But that is the perception. That is what they want. They see that coming their way.
They're using mobile devices, and they want the same in the company. That is the standard, and disaster recovery is slowly going away: recovery time and recovery point objectives (RTO/RPO) are going to zero. It's really a challenge. It's a big challenge.
The future is never-down technology independence, and it's very important to get customer satisfaction. This is a big thing.
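The "nines" behind such availability targets translate directly into allowed downtime per year; a small sketch of that arithmetic (figures purely illustrative):

```python
def downtime_per_year(availability_pct: float) -> float:
    """Allowed downtime, in hours per year, for a given availability %."""
    return (1.0 - availability_pct / 100.0) * 365 * 24

# "Three nines" still allows almost nine hours down per year;
# "five nines" allows only about five minutes.
for pct in (99.0, 99.9, 99.999):
    print(f"{pct}% available -> {downtime_per_year(pct):.2f} h/year down")
```

As the tolerated window shrinks toward zero, scheduled maintenance windows and slow disaster-recovery procedures stop being an option, which is the "never-down" pressure described above.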
Now, a little bit about security incidents. I'm not a security specialist; this was prepared by one of my colleagues. Her presentation shows that nothing is secure, nothing, and you have all these incidents. This comes from a report that tracked, over several months, what sorts of incidents were happening. When you see this, you really get frightened.
Is there a secure site? Maybe, they say, but in fact, no, nothing is secure. This is also very important, especially nowadays. We're sharing more and more personal information over the net. It's really important to think about this.
What does this have to do with Platform 3.0? I think I answered it already, but let's make it a little bit more specific. Open Platform 3.0 has a number of constituents, and Chris has introduced that to you.
I want to highlight the following clouds, the ones with the big letters in it. There is Internet-of-Things, social, mobile, cloud, big data, but let’s talk about this and briefly try to figure out what it means in terms of non-functionals.
In the Internet-of-Things, we have all these devices and sensors creating huge amounts of data, collected by very many different devices all over the place.
If this is about healthcare, you can understand that privacy must be ensured. So security and privacy are very important in that respect. And it doesn't come for free. We have to design it into the systems.
Now, big data. We have the four Vs there: Volume, Variety, Velocity, and Veracity. That already suggests a high focus on non-functionals: volume implies performance, veracity implies security, velocity implies performance, and availability matters too, because you need this information instantaneously. When decisions have to be made based on it, it has to be there.
So non-functionals are really important for big data. We wrote a white paper about this, and it's very highly rated.
Cloud has the specific characteristic of multi-tenant environments. So we have to make sure that the information of one tenant doesn't end up in another tenant's environment. That's a very important security problem again. There are also different workloads coming in parallel, because all these tenants have very specific types of workloads. We have to handle and balance them. That's a performance problem.
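One common way to enforce the tenant isolation just described is to key every read and write by tenant, so a lookup can never cross tenant boundaries. A minimal sketch, with all names hypothetical:

```python
# Minimal sketch of tenant isolation in a multi-tenant store.
# Every access is scoped by the caller's tenant id, so tenant A's
# lookup can never return tenant B's data. Names are illustrative.
class TenantStore:
    def __init__(self):
        self._data = {}  # maps (tenant_id, key) -> value

    def put(self, tenant_id: str, key: str, value):
        self._data[(tenant_id, key)] = value

    def get(self, tenant_id: str, key: str):
        if (tenant_id, key) not in self._data:
            raise KeyError(f"no such key for tenant {tenant_id}")
        return self._data[(tenant_id, key)]

store = TenantStore()
store.put("tenant-a", "invoice", 100)
store.put("tenant-b", "invoice", 200)
print(store.get("tenant-a", "invoice"))  # 100, never tenant-b's 200
```

Real cloud platforms enforce this at many layers (database, network, hypervisor), but the principle is the same: the tenant identity is part of every data path.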
Again, there are a lot of non-functional aspects. For mobile and social, the issue is that you have to be always on, always there, accessible from anywhere. In social especially, you want to share your photos, your personal data, with your friends. So it's security again.
It's actually very important in Platform 3.0 and it doesn’t come for free. We have to design it into our model.
That's basically my presentation. I hope that you enjoyed it and that it has made you aware of this important problem. I hope that, in the next year, we can start really thinking about how to incorporate this in Platform 3.0.
Boardman: Let me introduce the panelists: Andy Jones of SOA Software, TJ Virdi from Boeing, Louis Dietvorst from Enexis, Sjoerd Hulzinga from KPN, and Frans van der Reep from Inholland University.
We want the panel to think about what they've just heard and what they would like Platform 3.0 to do next. What is actually going to be the most important, the most useful, for them, which is not necessarily the things we have thought of.
Jones: The subject of interoperability, the semantic layer, is going to be a permanent and long-running problem. We're seeing some industries, for example clinical-trials data, where they can see movement in that area. Some financial-services businesses are trying to abstract their information models, but without semantic alignment, the vision of the platform is going to be difficult to achieve.
Dietvorst: In my vision of Platform 3.0 and what it should support, I am very much in favor of giving the consumer, the asking party, the lead: empower them. If you develop this kind of platform thinking, you should do it with your stakeholders, not for your stakeholders. And I wonder how we can engage those stakeholders so that they become co-creators. I don't know the answer.
Male Speaker: Neither do I, but I feel that what The Open Group should do next on the platform is, just as my neighbor said, keep the business perspective, the user perspective, continuously in focus, because basically that's the only reason you're doing it.
In the presentation just now from Lydia about NFRs, you need to keep in mind that one of the most difficult, but also most important, parts of the model ought to be security and the blind spots around it. I don't disagree that they are NFRs, but they are probably the most important requirements. It's where you start. That would be my idea of what to do next.
Not platform, but ecosystem
Male Speaker: Three remarks. First, I have the impression this is not a platform but an ecosystem. So one should change the wording; that's number one. You should correct the wording.
Second, I would stress the business case. Why should I buy this? What problem does it solve? I don't know yet.
The third point: as The Open Group, I would welcome a lobby to make IT vendors product-liable in a formal sense, like other industries -- cars, for example. That would do a lot for the security problem the last lady talked about. IT vendors are not reliable. They are not responsible. That should change in order to be a grown-up industry.
Virdi: I agree with what's been said, but I will categorize what I'm looking for, from a Boeing perspective, on what the platform should be doing into three elements: how enterprises could create new business opportunities, how they can optimize their current business processes, and how they can optimize the operational aspects.
So if there is a way to expedite these by having some standardized way to do things, Open Platform 3.0 would be a great forum to do that.
Boardman: Okay, thanks. Louis made the point that we need to go to the stakeholders and find out what they want. Of course, we would love it if everybody in the world were a member of The Open Group, but we realize that isn't going to be the case tomorrow, or perhaps the day after, who knows. In the meantime, we're very interested in getting the perspectives of a wider audience.
So if you have things you would like to contribute, things you would like to challenge us with, questions, requests for more understanding, and particularly if you have ideas to contribute, feel free to do that. Get in touch, probably via Chris, but you could also get in touch with either TJ or me as co-chairs, and put in your ideas. Anybody who contributes anything will be recognized. That was a reasonable statement, wasn't it, Chris? You're officially Open Group?
Is there anybody down there who has a question for this panel? Raise your hand.
Duijvestijn: Your remark was that IT vendors are not reliable, but I think you have to distinguish the layers of the stack. In the bottom layers, in the infrastructure, there is a lot of reliability. Everything is very well known and has been developed over a long time.
If you look at the Gartner reports about performance and availability incidents, what you see is that most of them happen because of process problems and application problems. That is where the focus has to be. Regarding the availability of applications, nobody ever publishes their book rate.
Boardman: Would anybody like to react to that?
Male Speaker: I totally agree with what Lydia was just saying. As soon as you go up in the stack, that's where the variation starts. That's where we need to make sure that we provide some kind of capabilities to manage that easily, so the business can deliver business solutions in an expedited way. That's where we're actually targeting it.
The lower part of the stack is already commoditized. So we're just trying to see how far up we can go and standardize those things.
Male Speaker: I think there are two discussions mixed together: one is about the reliability of the total [IT process], the other about where the fault lies in a [specific IT stack]. Those are two different discussions.
I totally agree that IT, or at least IT suppliers, need to focus more on the reliability of the service as a whole. The customers aren't interested in where in the stack the problem is. Whether it's in the platform or in the presentation layer is a non-issue. The issue is that it should be reliable as a whole, and I totally agree that IT has a long way to go in that department.
Boardman: I'm going to move on to another question, because an interesting question came up on the Tweets. The question is: "Do you think that Open Platform 3.0 will change how enterprises will work, creating new line of business applications? What impact do you see?" An interesting question. Would anybody like to endeavor to answer that?
Male Speaker: That's an excellent question, actually. When creating new lines of business applications, what we're really looking for is semantic interoperability. How can you bridge the gap between social and business media kinds of information, so you can utilize what's happening in social media? Can you migrate that into a business-media context and make knowledge or information transfer more agile?
For example, in the morning we were talking about HL7 being very heavyweight for healthcare systems. There may need to be some kind of easy way to transform and share information, those kinds of things. If we provide those capabilities in the platform, that will make new line-of-business applications easier to build, and it will have an impact on current systems as well.
Jones: We are seeing a trend towards line of business apps being composed from micro-apps. So there's less ownership of their own resources. And with new functionality being more focused on a particular application area, there's less utility bundling.
It also leads to the question of what happens to the existing line-of-business apps. How will they exist in an enterprise that is trying to pursue a Platform 3.0 kind of strategy? Lydia's point about the importance of NFRs brings to light the question of applications that don't meet the NFRs appropriate to the new world, and how you retrofit and constrain their behavior so that they play well in that kind of architecture. This is an interesting problem for most enterprises.
Boardman: There's another completely different granularity question here. Is there a concept of small virtualization, a virtual machine on a watch or phone?
Male Speaker: On phones and the like, we have to make a compartmentalized area, kind of like a sandbox. So you can consider that a virtualized area, where you would be doing things and then tearing them down.
It's not the same as traditional virtualization, but it's creating a sandbox in smart devices, where enterprises could utilize some of their functionality without mingling it with what are called personal device data. Those things are actually part of the concept and could be utilized in that way.
Question: My question about virtualization is linked to whether this is just an architectural framework. Because when I hear the word platform, it's something I try to build something on, and I don’t think this is something I build on. If you can, comment on the validity of the use of the word platform here.
Male Speaker: I don't care that much what it is called. If I can use it in whatever I am doing and it produces a positive outcome for me, I'm okay with it. I gave my presentation on the Internet-of-Things; call it the Internet of everything, or the everywhere, or the Thing of Net, or the Internet of People. Whatever you want to call it, just name it, as long as you can identify the object that's important to you. That's okay with me. The same goes for Platform 3.0 or whatever.
I'm happy with whatever you want to call it. Those kinds of discussions don't really contribute to the value that you want to produce with this effort. So I am happy with anything. You don't agree?
Male Speaker: A large part of architecture is about having clear understandings and what they mean.
Male Speaker: Let me augment what was just said, and I think Dr. Harding was also alluding to this. It is in the stage where we're defining what Platform 3.0 is. One thing for sure is that we're going to be targeting it as to how you can build that architectural environment.
Whether it may have frameworks or anything is still to be determined. What we're really trying to do is provide some kind of capabilities that would expedite enterprises to build their business solutions on that. Whether it's a pure translation of a platform per se is still to be determined.
Boardman: The Internet-of-Things still has a very fuzzy definition. Here we're also looking at fuzzy definitions, and it's something we constantly get asked questions about. What do we mean by Platform 3.0?
The reason this question is important, and I also think Sjoerd's answer to it is important, is that there are two aspects to the problem: what things do we need to tie down and define, because we are architects, and what things can we simply live with? As long as I know that his fish is my bicycle, I'm okay.
It's one of the things we're working on. One of the challenges we have in the Forum is exactly what we are going to try to tie down in the definition and what not. Sorry, I had to slip that one in.
I wanted to ask about trust: how important do you see the issue of trust? My attention was drawn to this because I just saw a post that the European Court of Justice has ruled that Google must make it possible for any person or organization who asks to have Google erase all information that Google has stored anywhere about them.
I wonder whether these kinds of trust issues are going to become critical for the success of this kind of ecosystem, because whether we call it a platform or not, it is an ecosystem.
Trust is important
Male Speaker: I'll try to start an answer. Trust has been a very important part ever since the Internet became the backbone of all of those processes, all of those systems, and those data exchanges. The trouble is that it's very easy to compromise that trust, as we have seen with the NSA revelations exposed by Snowden. So yes, trust ought to be a part of it, but trust is probably pretty fragile the way we're approaching it right now.
Do I have a solution to that problem? No, I don't. Maybe it will come in this new ecosystem. I don't see it explicitly being addressed, but I am assuming that, between all those little clouds, there ought to be some kind of a trust relationship. That's my start of an answer.
Jones: Trust is going to be one of those permanently difficult questions. In historical times, maybe the organizations with the highest trust ratings would have been democratic governments and possibly banks, neither of which has been doing particularly well in that area over the last five years.
It’s going to be an ethical question for organizations who are gathering and holding data on behalf of their consumers. We know that if you put a set of terms and conditions in front of your consumers, they will probably click on "agree" without reading it. So you have to decide what trust you're going to ask for and what trust you think you can deliver on.
Data ownership and data usage are going to be quite complex. For example, in clinical-trials data, you have a set of data that can be identified against a named individual. That sounds fine, but you can then anonymize that set of data so it is known to relate to a single individual but can no longer identify who. Is that as private?
That data can then be summarized across groups of individuals to create an ensemble dataset. At what level of privacy are we then? It seems to quickly go beyond the reasoning and understanding of the consumers themselves. So the responsibility for ethical behavior appears to lie with the experts, which is always quite a dangerous place.
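The three privacy levels described here (identified, anonymized-per-individual, ensemble) can be sketched in a few lines; the records, field names, and token scheme below are made up purely for illustration:

```python
# Three levels of the clinical-data privacy spectrum described above.
# All records and field names are invented for illustration.
import hashlib

records = [
    {"name": "Alice", "age": 34, "outcome": 1},
    {"name": "Bob",   "age": 51, "outcome": 0},
]

# Level 1: identified -- the raw records above name the individual.

# Level 2: anonymized but still per-individual. Replace the name with
# an opaque token: each row still relates to exactly one person, but
# no longer says who that person is.
def anonymize(rec):
    token = hashlib.sha256(rec["name"].encode()).hexdigest()[:8]
    return {"subject": token, "age": rec["age"], "outcome": rec["outcome"]}

per_individual = [anonymize(r) for r in records]

# Level 3: ensemble -- only group-level summaries survive; no row maps
# back to any single individual at all.
ensemble = {
    "n": len(records),
    "mean_age": sum(r["age"] for r in records) / len(records),
    "success_rate": sum(r["outcome"] for r in records) / len(records),
}
print(ensemble)  # {'n': 2, 'mean_age': 42.5, 'success_rate': 0.5}
```

Each step discards information, but as Jones notes, judging which level a given use actually requires is exactly the ethical call that lands with the experts rather than the consumer.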
Male Speaker: We probably all agree that trust management is a key aspect when we are converging different solutions from so many partners and suppliers. When we're talking about Internet of data, Internet-of-Things, social, and mobile, no one organization would be providing all the solutions from scratch.
So we may be utilizing stuff from different organizations or different organizational boundaries. Extending the organizational boundaries requires a very strong trust relationship, and it is very significant when you are trying to do that.
Boardman: There was a question that went through a little while ago. I'm noticing some of these questions are more questions to The Open Group than to our panel, but one I felt I could maybe turn around. The question was: "What kind of guidelines is the Forum thinking of providing?"
What I'd like to do is turn that around to the panel and ask: what do you think it would be useful for us to produce? What would you like a guideline on? There would be lots of things where you would think you don't need that, you'll figure it out for yourself. But what would actually be useful to you if we were to produce some guidelines or something that could be accepted as a standard?
Does it work?
Male Speaker: Just go to a number of companies out there and test whether it works.
Male Speaker: In terms of guidelines, you put it very well about semantic interoperability: how do you exchange information between different participants in an ecosystem or things built on a platform?
The other thing is how you can standardize things that are yet to be standardized. There's unstructured data, and there are things that need to be interrogated through that unstructured data. What are the guiding principles and guidelines for doing those things? Maybe in those areas Platform 3.0, with the participation of these Forum members, can advance and work on it.
Jones: I think contract composition and accumulation. If an application delivers service to its end users by combining dozens of complementary services, each of which has a separate contract, what contract can it then offer to its end user?
Boardman: Does the platform plan to define guidelines and directions to define application programming interfaces (APIs) and data models or specific domains? Also, how are you integrating with major industry reference models?
Just for information, some of this relates to other parts of The Open Group's work around industry domain reference models and that kind of thing. But in general, one of the things we've said from the Platform, from the Forum, is that as much as possible we want to collate what is out there in terms of standards: APIs, data models, open data, and so on.
We're desperate not to reproduce anybody else's work. So we are looking to see what's out there, so the guideline would, as far as possible, help you understand what was available in which domain, whether that's a functional domain, a technical domain, or whatever. I just thought I would answer those, because we can't really ask the panel that.
We said that the session would be about dealing with realizing business value, and we've talked around issues related to that, depending on your own personal take. But I'd like to ask the members of the panel, and I'd like all of you to try and come up with an answer to it: What do you see are the things that are critical to being able to deliver business value in this kind of ecosystem?
I keep saying ecosystem, not to be nice to Frans, I am never nice to Frans, but because I think that that captures what we are talking about better. So do you want to start TJ? What are you looking for in terms of value?
Virdi: No single organization can tap into all the advancement that's happening in technologies, processes, and other areas that business could utilize, at least not quickly. The expectation that businesses provide new solutions in real time, with instant information exchange, is the norm now.
We can provide some of those as a baseline, as foundational aspects, for businesses to realize the new things we see in social media and elsewhere, where information is exchanged very quickly and the payloads involved are very small.
So keeping the integrity of the information, and sharing it with the right people at the right time and in the right venue, is really the key, if we can provide those kinds of enabling capabilities.
Ease of change
Jones: In Lydia's presentation, at the end, she added the ease-of-use requirement as the 401st. I think the 402nd is ease of change, the speed of change. Business value pretty much relies on dynamism, and it will become even more so. Platforms have to be architected in a way that is sufficiently understood that they can change quickly but predictably, maintaining the NFRs.
Dietvorst: One of the reasons I would adopt this new ecosystem is that it gives me enough confidence that it is a reliable product. What we know from the energy-system innovations we've done over the last three or four years is that the way you enable and empower communities is to let them build up the trust themselves, locally, like you and your neighbor, or people who are close in proximity. Then it's very easy to build trust.
Some call it social evidence: I know you, you know me, so I trust you. You are my neighbor, and together we build a community. But the wider the distance, the less easy it is to trust each other. That's something you need to build into the whole concept. How do you get trust if it is a global concept? It seems hardly possible.
van der Reep: This ecosystem, or whatever you're going to call it, needs to cope with change, with the rate of change. "Change is life" is a well-known saying, but lightning-fast change is the fact of life right now, with things like social and mobile specifically.
One Twitter storm and the world has a very different view of your company, of your business. Literally, it can happen in minutes. This development ought to address that, and also provide the relevant hooks, if you will, for businesses to deal with that. So the rate of change is what I would like to see addressed in Platform 3.0, the ecosystem.
Male Speaker: It should be cheap and reliable, it should allow for change, for example Cognition-as-a-Service, and it should hide complexity for those "stupid businesspeople" and make it simple.
Boardman: I want to pick up on something that Frans just said, because it connects to a question I was going to ask anyway. People sometimes ask us why we named those particular five technologies in the Forum: cloud, big-data analysis, social, mobile, and the Internet of Things. It's a good question, because it's fundamental to our ideas in the Forum that it’s not just about those five things. Other things can come along and be adopted.
One of the things that we had played with at the beginning and decided not to include, just on the basis of a feeling about lack of maturity, was cognitive computing. Then, here comes Frans and just mentions cognitive things.
I want to ask the panel: "Do you have a view on cognitive computing? Where is it? When can we expect it to be something we could incorporate? Is it something that should be built into the platform, or is it maybe just tangential to the platform?" Any thoughts?
Male Speaker: I gave a speech on this last week. In order to create meaningful customer interaction, in what we used to call the call center or whatever, that is where cognition comes in. It's a very big market, and there's no reason not to include it in the lower levels of the platform and move it into the cloud.
We already have lots of examples in the Netherlands of ICT devices recognizing emotions from speech. By recognizing emotion, you can optimize the match between the company and the customer, and you can hide complexity. I think there’s a big market for that.
Virdi: We need to look at it in the context of what business wants to do with it. Some enabling capabilities may be what I would consider proprietary, and may not be part of the platform for others to utilize. So we have to balance which enabling capabilities we can provide as a foundation for everyone to use, and which values companies can build on top of it themselves. We probably have to do a little further assessment of that.
Male Speaker: I'd like to follow up on this notion of cognitive computing, the notion that objects may be self-aware, as opposed to dumb: self-aware meaning an object, a sensor, that’s aware of its neighbor, and when a neighbor goes away, it can find other neighbors. Quite simple, as opposed to a bar code.
We see this all the time. We have kids who are civil engineers, and they pour these sensors into concrete all the time. In terms of cost, and in terms of being able to have the discussion, it's something that’s in front of us all the time. So shouldn't we think about at least the binary distinction between self-aware sensors and dumb sensors?
Male Speaker: From an aviation perspective, there are some areas with passive devices as well as active devices. Passive sensor devices can simply be interrogated on request, while active devices constantly send sensor messages. Businesses use both to create new business solutions.
Both of them are going to be there, and it depends on what the business needs are. We could probably provide ways to standardize some of those, along with other specifications. For example, the ATA is doing that already for aviation. In healthcare, HL7 is also looking at smart sensor devices exchanging information. So some of this work is already happening in the industry.
So many business solutions have already been built on those, though maybe they're a little more proprietary. A platform could provide a standard basis for exchanging that information, perhaps including guidelines on how to exchange information with both active and passive sensor devices.
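The pull-versus-push distinction the speaker draws can be made concrete in a short sketch. This is purely illustrative (not from the discussion, and all class and method names are hypothetical): a passive sensor holds its latest reading and reports only when interrogated, while an active sensor pushes each reading to its subscribers as it happens.

```python
import queue

class PassiveSensor:
    """Holds its latest reading; reports only when polled."""
    def __init__(self):
        self._reading = None

    def update(self, value):
        self._reading = value          # device-side state change only

    def interrogate(self):
        return self._reading           # caller pulls the value on request


class ActiveSensor:
    """Pushes every reading to its subscribers as it happens."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, outbox):
        self._subscribers.append(outbox)

    def update(self, value):
        for outbox in self._subscribers:
            outbox.put(value)          # device pushes the value out


# Same reading, two delivery models.
passive = PassiveSensor()
passive.update(15.0)
print(passive.interrogate())           # pull: 15.0

inbox = queue.Queue()
active = ActiveSensor()
active.subscribe(inbox)
active.update(15.0)
print(inbox.get())                     # push: 15.0
```

The trade-off is the one the panel implies: polling a passive device costs the interrogator a round trip per reading, while an active device needs infrastructure (here, the subscriber queue) standing by to receive its constant stream.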
Jones: I'm certainly all in favor of devices in the field being able to tell you what they're doing and how they think they're feeling. I have an interest in complex consumer devices in retail and other field locations, especially self-service kiosks, and in that field quite a lot of effort has been spent trying to infer the states of devices by their behavior, rather than just having them tell you what's going on, which should be so much easier.
Male Speaker: Of course, it depends on where the boundary is between aware and not aware. If there is a thermometer in the field and it sends data that it's 15 degrees centigrade, for example, do I really want to know whether it thinks it's chilly or not? I'm not really sure.
I'd have to think about it a long time to get a clear answer on whether there's a benefit in self-aware devices in those kinds of applications. I can understand that there will be an advantage in self-aware sensor devices, but I struggle a little to see any pattern or similarities in those circumstances.
I could come up with use cases, but I don’t think it's easy to come up with a set of rules that determines whether or not a self-aware device is applicable in a particular situation. It's a good question. It deserves some more thought, but I can't come up with a better answer than that right now.
Skilton: I just wanted to add to the embedded question, because I thought it was a very good one. Three case studies came up for me recently. I was doing some work with Rolls-Royce around MH370, the flight that went down. One of the key things about that flight was that the engines had telemetry built in. TJ, you're more qualified to talk about this than I am, but essentially there was information embedded in the telemetry technology of the plane.
As we know from the mass media reporting, analysts were able to work out from some of that data what was potentially going on in the flight. Clearly, it was the satellite-link data that was used to project that it was going south rather than north.
So one of the lessons there was that smart information built into the object was of value. Clearly, there was a lesson learned there.
With Coca-Cola, for example, what's very interesting in retail is that a lot of shops now have sensors embedded in the cooler systems or in products in the warehouse or in stock. You're now getting that kind of intelligence coming back over RFID into the supply chain for backfilling, reordering, and so on. All of this I see as smart.
Another one is image recognition when you go into a car park. Your face is scanned, whether you want it or not, and potentially they can serve advertising in context. These are all smart feedback loops going on in these ecosystems right now.
There are real equations of value in doing that. I was just looking at the Open Automotive Alliance; we've done some work with them around connected-car forecasts. Embedded technology in the dashboard is going to be coming in the next three to five years from BMW, Jaguar Land Rover, and Volvo. All the major car players are doing this right now.
So Open Platform 3.0 for me is riding that wave of understanding where the intelligence and the feedback mechanisms work within each of the supply chains, within each of the contexts, either in the plane, in the shop, or whatever, starting to get intelligence built in.
We talk about big data and small data at the university where I work. At the moment, we're in a big-data era of analytics that is static, analyzing the process in situ: typically Amazon-style purchasing recommendations, or the advertisements you see in your browser today.
We're moving to a small-data era, where data is very much in the context of the events happening at that time. I would expect this with embedded technologies: the feedback loops will happen within each of the traditional supply chains and will start to build that strength.
The issue for The Open Group is to capture the standards of interoperability and connectivity, much as Boeing is already leading with the automotive and airline sectors. It's about riding that wave, because the value of bringing feedback into context, the small-data context, is where the future lies.
Male Speaker: I totally agree. Not only are devices and individual components getting smarter, but that requires infrastructure to be there to utilize that sensing information properly. From the Platform 3.0 guidelines or specifications perspective, determining how you can utilize devices that are already smart alongside others still considered legacy, and how you can bridge that gap, would be a good thing to do.
Boardman: Would anyone like to add anything, closing remarks?
Jones: Everybody’s perspective and everybody’s context is going to be slightly different. We talked about whether it's a platform or a framework. In the end there will be a universal Platform 3.0, but everybody will still have a different view and a different perspective of what it does and what it means to them.
Male Speaker: My suggestion would be that, if you're going to continue with this ecosystem, try to build it up locally, in a locally controlled environment, where you can experiment and see what happens. Do it in many places in the world at the same time, and let the results be the proof of the pudding.
Male Speaker: Whatever you are going to call it, keep to 3.0, that sounds snappy, but just get the beneficiaries in, get the businesses in, and get the users in.
Male Speaker: The more open it is, the more of a commodity it will be. That means no company can profit from it alone. In the end, human interaction and stewardship will enter the market. If you come to London City Airport and find your way onto the Tube, there is a human being there who helps you into the system. That becomes very important as well. I think you need to do both: stewardship and these kinds of ecosystems that spread complexity.