Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


Rapid matching of consumer inferences to ads serves up a big data success story

Posted By Dana L Gardner, Tuesday, February 03, 2015

The next BriefingsDirect big data innovation success story uncovers how New York-based adMarketplace, a search syndication advertising network, uses big data to improve its search advertising capabilities.

In part two of our series on adMarketplace, we'll explore how they instantly capture and analyze massive data to allow for efficient real-time bidding for traffic sources for online advertising. And we'll hear how the data-analysis infrastructure also delivers rapid cost-per-click insights to advertisers.

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, BriefingsDirect sat down with Raj Yakkali, Director of Data Infrastructure at adMarketplace, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about adMarketplace. What do you do, and why is big data such a big part of that?

Yakkali: adMarketplace is a leading advertising platform for search intent. We provide advertisers with consumer space where they can project their ads. The benefit of adMarketplace comes into play in that we provide a data platform that can match those ads with the right user intent.


When a user searches for a certain keyword, they're directly telling us what they want to see, and we match that well with our ads. The relationship that we have with our advertisers is that we match them well with exactly what the user is thinking. We also do some predictive analytics on top of what the user is saying. We add that dimension to the user's search and serve ads accordingly.

Gardner: I'm all for getting better ads based on a lot of things I already get. Do you have more than just keywords to draw inference from, and what sort of scale of data are we talking about when it comes to all that inference information about the consumer's intent?

15 dimensions

Yakkali: Keyword search is one side or one dimension of the user search. There are also category campaigns that the advertisers are running. At the same time, there's a geospatial analysis to it as well. There are 15 dimensions that we go through to provide an ad that is perfectly fit for the advertiser and for the consumer to see and take advantage of to meet their needs. With some of the ads, we are trying to serve the user’s requirements and needs.

Gardner: With all these variables, this sounds like you're going to be gathering an awful lot of information. You also need to reply back with your results very fast or you lose the opportunity for that consumer to get the ad and then even click through and make a decision. Tell me about scale and speed.

Yakkali: You're right on with that question. In this business, latency is your enemy. If you look at the metrics, there are almost half a billion requests that we're receiving every day, and we have to match all of those ads with sub-second performance. We have internal proprietary datasets, which we take care of before matching these ads. And there are two platforms that we've built internally.
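[For a rough sense of scale: half a billion requests per day works out to roughly 500,000,000 / 86,400 ≈ 5,800 requests per second on average, with peaks considerably higher -- which is what makes sub-second matching such a demanding requirement.]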


One is called Bid Smart. That performs the analysis between the user intent and the traffic sources that the user search is coming from. At the same time, the price of that ad goes to the publisher. There are the pricing strategies, the traffic sources, and the user intent of the search. All of these things are put together. That predictive analytics system gathers all this information and emits the right ad towards the consumer.
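[The following is a purely illustrative sketch, not adMarketplace's proprietary Bid Smart logic. It shows, in Python, the general shape of a real-time matcher that scores candidate ads across a few of the dimensions described here -- keyword intent, campaign category, traffic-source quality, and price -- and picks the best one. All names, fields, and weights are hypothetical.]

    # Hypothetical, simplified illustration of real-time ad matching.
    # Field names, weights, and scoring are invented for clarity only.
    from dataclasses import dataclass

    @dataclass
    class Ad:
        ad_id: str
        keywords: set        # advertiser-chosen intent keywords
        category: str        # campaign category
        bid_price: float     # cost-per-click the advertiser will pay

    def score(ad, query_terms, category, source_quality):
        """Blend intent match, category match, traffic-source quality, and price."""
        intent_match = len(ad.keywords & query_terms) / max(len(query_terms), 1)
        category_match = 1.0 if ad.category == category else 0.0
        # A real system would weigh some 15 dimensions; these weights are made up.
        return (0.5 * intent_match + 0.2 * category_match
                + 0.2 * source_quality + 0.1 * ad.bid_price)

    def choose_ad(ads, query_terms, category, source_quality):
        return max(ads, key=lambda ad: score(ad, query_terms, category, source_quality))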


On top of it, if you look into the amount of data, those half a billion requests that are coming into our system, it generates around two terabytes per hour. At certain times, we can't store all of it for analytics. There is a lot of data that's not inside the database. Now, with the partnership with Vertica, we’re able to take the dataset, derive analytics about it, and provide our marketers with all that information. Bid Smart is the one that does the pricing and matching.

The other is Advertiser 3D, which provides detailed analytics into all these dimensions and metrics. That provides very good insight. Now, when it comes to the competition, or the opportunity to deliver the right ad at the right time, that's where data workflows make a difference.

We utilize Vertica to directly stream all this click data into it, rather than going into certain other locations and then doing it in a batch format. We directly live-stream that data into Vertica, so that it is readily available for analytics. Our Bid Smart System makes use of that dataset. That's where we get the opportunity to deliver much better ads, with price tags, and the right user intent matched.
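[As a rough illustration of the live-streaming pattern described here -- loading click events into Vertica as they arrive, rather than staging them elsewhere for batch loads -- the following Python sketch micro-batches rows through a COPY ... FROM STDIN statement. It assumes the open-source vertica-python client and a hypothetical clicks table; the connection details and column names are placeholders, not adMarketplace's actual pipeline.]

    # Minimal sketch: push click events into Vertica as they arrive so they are
    # queryable within seconds. Assumes the vertica-python client and a
    # hypothetical "clicks" table; all connection details are placeholders.
    import io
    import vertica_python

    conn_info = {"host": "vertica.example.com", "port": 5433,
                 "user": "loader", "password": "secret", "database": "ads"}

    def load_clicks(events):
        """events: iterable of (click_time, keyword, ad_id, price) tuples."""
        buf = io.StringIO()
        for row in events:
            buf.write("|".join(str(col) for col in row) + "\n")
        buf.seek(0)
        with vertica_python.connect(**conn_info) as conn:
            cur = conn.cursor()
            # COPY ... FROM STDIN streams the buffered rows straight into the table.
            cur.copy("COPY clicks (click_time, keyword, ad_id, price) "
                     "FROM STDIN DELIMITER '|'", buf)
            conn.commit()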

Gardner: It sounds very complex. There's an awful lot going on just to serve up an ad. I suppose people don't appreciate that, but the economics here are very compelling: the more refined and appropriate an ad is, the more likely the consumer is to buy, and fewer resources get wasted in the meantime. Do you have any sense of what the payoff is, in business, financial, or technical terms, when you can really accomplish this goal of targeted advertising?

Conversion rate

Yakkali: Our conversion rate is a major key performance indicator (KPI) when it comes to understanding how well we are doing. When we see higher conversion rates, that gives us the sense that we've done the best job and the user is happy with what they are searching for and what they are getting.

At the same time, the publishers, as well as the advertisers, are happy, because the user comes back to us again and again to get that same, beautiful experience. The advertisers are able to sell more products that meet the needs of the user. And the users are able to get the products that really cater to their needs. We're in the middle of all these things, facilitating for the advertisers, as well as the users and the publishers.

Gardner: I daresay this is the very future of advertising. Now for you to accomplish these goals and create those positive KPIs, are you housing Vertica in your own data center, do you use cloud, hybrid cloud? Given that you have different platforms, different datasets, how do you manage this technically?

Yakkali: On that end, we started with testing cloud two or three years ago, but again, it turned out that because of so many unknowns and troubleshooting, we had to go with our data centers. Now, we host all our systems in our own data centers and we manage it.

We have our own hardware to deal with. Our system is a 24/7 operation, and we have to be able to deliver sub-second latency. Having your own infrastructure, you have a controlled environment where you can tweak and tune your system to get the best performance out of it.


Considering that it is a 24/7 operation, there are few excuses you can get away with for not delivering. For that, we innovate in terms of the data flows and the process of how we ingest the data, how we process the data, how we emit the data, and how we clean up the data when we don't really need it.

All these things have to come together, and it really helps us having that control on all of our infrastructure and all elements in the data pipeline, starting from the user intent and user search, until we provide the data and the results.

Gardner: How long have you been using Vertica in this regard and how did you go about making that decision?

Yakkali: We've been using Vertica for four to five years. Our data pipeline was not on Vertica to start with, but as Vertica came into the picture, we saw the great beauty and the powerful features that it brings to capitalize on our capabilities.

That really helped us. With Vertica in place, we have been migrating our mechanics slowly to use it for the real-time analysis and real-time bidding and all those beautiful features that make us do what we can do better. So it’s been a great partnership with Vertica and we see many more features coming in with the new version. Our Bid Smart mechanism is also improving, and with that, algorithmic capabilities are increasing. So it’s progressing.

Feedback loop

Gardner: Tell us a little bit about where your business is heading. In addition to speed, complexity, and scale, where do you see the ability to create this feedback loop? It's a very rapid feedback loop between a lot of incoming data and an action like serving up an ad. It seems like this could be applied to other marketing or advertising chores, or perhaps even have an ancillary business-development direction. You've got this platform and these data centers. Is there something else that you're gearing up for?

Yakkali: At this point, we're in the business of connecting the advertisers, the publishers, and the users. But relative to what it can accomplish, that business is still untapped. The market has only started down the path toward reaching that potential. If we take a step back and try to understand it: initially, when search started, there was no Google or anything. It was more about curated search.

So the publishers put all this content together and then projected it out to the user. They didn't know what the user wanted. At the same time, when the user looked at this content, they didn't know whether they wanted it or whether it catered to their needs.

Then, Google came along and user search started. What that directly told us was, "I want this piece of information. I want to use this piece of information. And I want to see this ad that is relevant to my needs." That's a very powerful thing. When you hear that, you're able to analyze it and match it properly with the advertisers. But then again, it started to fragment.


Now, it's not only Google. There are Yahoo and Bing, there is mobile, and there are apps. There are many apps in the mobile space, and each one has its own search. So not all searches are going to Google, Yahoo, or Bing. Search is already fragmented.

We tap all those pieces. The market beyond Google and Yahoo-Bing is strong and growing, so there is a lot of market that still needs to be tapped. We come into the picture connecting the advertisers to that untapped marketplace.

We've been improving our internal Bid Smart algorithm that came out in the last year. Then, we also launched Advertiser 3D last year as well. Those two products have been providing tremendous growth in our revenue, and the retention rates have been stellar.

The top 60 percent of Google's top spenders are working with us to complement their business. At the same time, we're also able to provide a 50 percent increase in year-over-year revenues. It's additional revenue for them, and our own revenues are increasing based on that fact.

Gardner: It seems like you have an awful lot of runway ahead of you in terms of where search could be applied, and analytics can be drawn from that to augment these services and explode that market.

Is Vertica being used just for the intercept between the incoming data and the outgoing ad, or are you also analyzing what goes on within this marketplace so that you better appreciate whether you can offer reports, audit trails, and that sort of thing? Is this an inclusive platform, or do you use different analytics platforms for different aspects of what you are doing?

End to end

Yakkali: We do almost everything. It is an end-to-end platform. As part of the business, we look into the operational metrics of the whole thing, starting from the user search until the ad is delivered. From that end, there is always the analytics piece that comes into play, which provides insights to the marketers.

Our market base is filled with very data-savvy marketers, and they look into each and every data dimension to understand their return on investment (ROI). We give them transparency through our Advertiser 3D system, and utilizing that, they're able to navigate through the space and aptly tune their campaigns to get the best out of them and to deliver the best to the customer.

Gardner: Any thoughts about other organizations that are also facing significant challenges around speed and scale, perhaps also with a big runway, in terms of knowing that more and more business could be coming their way, and therefore more data? What would you advise them in terms of data architecture or planning in order to accomplish those goals?

Yakkali: When we look at the industries and the market, the ad industry still is untapped. The healthcare industry is just getting into the business of doing much more with analytics. It’s all about the speed and the latency and the insights as well. One, at the operational level and the other, at the insight level to do more innovation on top of it.


The ability to listen to the customer depends on how fast you can capture all that feedback, and you tighten that loop of feedback so that you're able to do something with it and make a better product out of it.

So it's all about looking very closely at the datasets -- what they mean, what the user is asking us, what they want to see -- and how you are listening to the customer. Those two aspects really make the difference.

You want to listen to the customer -- what they really want. Are you providing it, and are you able to anticipate what they will want tomorrow, for that predictive -- and later, prescriptive -- analytics phase? You're telling them what they need to do even before they tell you.

That's the stage that the market is moving toward. We're not even scratching the surface of prescriptive analytics. The wave has not yet started in that direction. We're still at the predictive analytics phase, and there is still a lot more to do within that space. Getting the foundation stronger, driving toward prescriptive analytics, and listening to your customer are the three aspects that would make any industry. Those three would be the key foundational pieces for innovation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  adMarketplace  big data  BriefingsDirect  Cloud  Dana Gardner  HP  HP Vertica  HPDiscover  Interarbor Solutions  Raj Yakkali 

 

University of New Mexico delivers efficient IT services by centralizing on secure, managed cloud automation

Posted By Dana L Gardner, Wednesday, September 24, 2014

The latest BriefingsDirect discussion focuses on one of the toughest balancing acts in seeking the best of cloud computing benefits. This balance comes from obtaining the proper degree of centralization or "common good" for infrastructure efficiency, while preserving a sufficient culture of decentralization for agility, innovation, and departmental-level control.

The requirement for empowering centralization is nowhere more evident than in a large university setting, where support and consensus must be preserved among such constituencies as faculty, staff, students, and researchers -- across an expansive educational community.

But the typical IT model does not support localized agility when it takes weeks to spin up a server, if online services lack automation, or if manual processes hold back efficient ongoing IT operations. Too much IT infrastructure redundancy also means weak security, high costs, lack of agility, and slow upgrades.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We're joined by an IT executive from the University of New Mexico (UNM) to learn more about moving to a streamlined and automated private cloud model to gain a common good benefit, while maintaining a vibrant and reassured culture of innovation. We're also joined by a VMware executive to learn more about the latest ways to manage cloud architectures and processes to attain the best of cloud efficiencies, while empowering improved services delivery and process agility.

They are: Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque, and Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your IT organization at the university and how you've been able to do change, but at the same time not alienate your users, who are, I imagine, used to having things their way.

Pietrewicz: The University of New Mexico is a highly decentralized organization. In most cases, the departments are responsible for their own IT, and that means they don't have the resources to effectively run IT -- in particular, things like data centers, servers, storage, disaster recovery (DR), and backups.


What we're doing to improve the process is providing infrastructure as a service (IaaS) to those groups so that they don’t have to worry about the heavy lifting of the infrastructure pieces that I mentioned before. They can stay focused on their core mission, whether that’s physics, or psychology, or who knows what.

So we offer IaaS. We're running a VMware stack, and we're also running vCloud Automation Center (vCAC). We've deployed the Self-Service Portal. We give departments, faculty members, or departmental IT folks the ability to go into the portal and deploy their own machines at will.

Then, they are administrators of that machine. They also have additional management features through the vCAC console so that they can effectively do whatever they need to do with the server, but not have to worry about any of the underlying infrastructure.

Gardner: That sounds like the best of both worlds. In a sense, you're a service provider in the organization, getting the benefits of centralization and efficiency, but allowing them to still have a lot of hands-on control, which I assume that they want.

Pietrewicz: Correct. The other part is the agility, the ability for them to be able to react quickly, to consume infrastructure on demand as they need it, and have the benefit of all the things that virtualization brings with redundant infrastructure, lower cost of ownership, and those sorts of things.

New expectations

Milne: It’s an interesting time to be in the IT space, because there's this new set of expectations being imposed on IT by the business to be strategic, to quickly adopt new technology, and boost innovation.


At the same time, IT still has the full set of responsibilities they've always had -- to stay secure, to avoid legacy debt, to drive operational excellence so they maintain uptime, security, and quality of service for transactional systems and business-critical systems.

It’s really an interesting paradox. How do you do these two things that are seemingly mutually exclusive -- go fast, but at the same time, stay in control?

Brian's approach is what I call "push-button IT," where you give folks a button to push and they get what they need when they want it. But if IT controls the button, and controls what happens when the user pushes the button, IT is able to maintain control. It's really the best of both worlds.

Gardner: Brian, tell us a little bit about how long you have been there and what it was like before you began this journey?

Pietrewicz: I've been at UNM for about two-and-a-half years, and I can tell you the number one complaint. We suffer from a lot of the same problems that other large IT shops have, with funding and things like that. But the primary issue when I walked in the door was customers being upset because we didn't have clearly defined services, and we had sold those services to the customers.

We had sold virtual machines (VMs) with database backups, and all kinds of interesting things, with no service-level agreements (SLAs), no processes, nothing wrapped around it. The delivery of these services was completely inconsistent.

So I started out down the new path. The first thing that we did was to make the services more consistent. Just to give you an example, deploying a virtual machine for a customer. The way that it was when I got here was that a ticket came into the service desk. It went to a single technician, and then whichever technician got that ticket figured out their own way of getting that machine deployed.


As the next step in that process, instead of just having it done a different way by whoever received the ticket, we went through and identified all the steps associated. In looking at all of those steps, we identified over 100 manual steps that went through six completely separate groups inside our organization.

Those included operating system, storage, virtualization, security, and networking for firewall changes. In all those various groups that deploy their individual piece of that puzzle, it was being done differently every time. Our deployment times were taking as long as three weeks. You can imagine how painful that is when it takes 20 minutes to spin up a VM -- but it was taking three weeks to deploy it to a customer.

We identified all the steps and defined the process very, very clearly; exactly what it takes to deploy a VM. The interesting thing that came out of that was that it gave us the content necessary to be able to start developing a true service description and an SLA.

Ticketing system

It also made it so that it was consistent. We did a few things after we did the process development. We generated workflows within our ticketing system, so that all that happened was a ticket was put in and then it auto-generated all the necessary tickets to deploy the VM, so it happened in a very consistent way.

That dropped the deployment time from three weeks down to about three days, because it still had to go through certain approval process and things like that with security.

For the next step we said, "Okay, how can we do this better?" We looked at all of those steps that we had put in place and found that they were all repetitive, manual steps that could be easily automated. Enter VMware vCAC.

We took all the steps, after we had them clearly defined, and we automated all the steps that we could. We couldn't automate all of them -- for example, sending information to our billing system to bill the customer back. From vCAC we shoot an email over to our ticketing system, which generates a ticket. Then, the billing information is still entered manually, and we're working on an upgrade to that.
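[A minimal sketch of the kind of glue step described above: a post-provisioning hook that emails the ticketing system so a billing ticket gets opened for the new VM. The SMTP host, addresses, and fields are hypothetical; UNM's actual vCAC-to-ticketing integration is not shown here.]

    # Hypothetical post-provisioning hook: notify the ticketing system so it
    # opens a billing ticket for a newly deployed VM. Hosts and addresses are
    # placeholders, not UNM's real systems.
    import smtplib
    from email.message import EmailMessage

    def notify_ticketing(vm_name, owner, department):
        msg = EmailMessage()
        msg["From"] = "provisioning@example.edu"
        msg["To"] = "servicedesk@example.edu"      # ticketing system inbox
        msg["Subject"] = "New VM deployed: " + vm_name
        msg.set_content(
            "VM: " + vm_name + "\n"
            "Owner: " + owner + "\n"
            "Department: " + department + "\n"
            "Action: create billing record\n"
        )
        with smtplib.SMTP("smtp.example.edu") as smtp:
            smtp.send_message(msg)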

UNM is approximately 45,000 faculty, staff, and students. We have about 100 either departments or affiliates, and today, we're running about 660 VMs for our organization. For central IT, we're between 98 percent and 99 percent virtualized.

When I first got here, the services were not defined and the processes were not defined. Since then, we have clearly defined the processes, narrowed those down into the very specific processes and tasks that had to be done, and then we automated. We're going through the process of automating every step in that process.


Now, we have a thing we call Lobo Cloud -- our mascot is the Lobo. Customers can now go online and deploy a machine within 20 minutes. So basically everything has transformed from an extremely inconsistent service that took as long as three weeks to deploy, to now being the equivalent of going into McDonald's and ordering a Big Mac. It's extremely consistent, and down from three weeks to 20 minutes.

Gardner: I assume Brian that you've adopted some industry-standard methods, perhaps a framework, that gave you some guidance on this. How does your service delivery policy adhere to an industry standard like ITIL?

Pietrewicz: That’s what we use. We follow ITIL and we're at varying levels of maturity with it. ITIL is very challenging to implement, but it's extremely helpful, because it gives you a framework to work within, to start narrowing down these process, defining services, setting SLAs. It gives you a good overarching framework to work within.

The absolute hardest part of all of this is implementing the ITIL framework, identifying your processes, identifying what your service is, and identifying your SLA. Walking through all of that is exponentially harder than putting the technology in place.

Gardner: It seems to me that not only are you going to get faster servers, response times, and automation, but there are some other significant benefits to this approach. I'm thinking about security, disaster recovery (DR), the ability to budget better through an OPEX model, and then ultimately reduce total costs.

Is it too soon, or have you seen some of these other benefits that I typically hear about when people move to a more automated cloud approach? How is that working for you?

Less expensive

Pietrewicz: We don’t really have good statistics on it. For the folks that had machines sitting underneath their desks and in closets before, we don’t have a lot of the statistics to know exactly the cost and the time they were spending on that.

Anybody who works with virtualization quickly learns that once you hit a certain size, it becomes significantly less expensive. You become far more agile and you get a huge number of benefits. Some of them are things that you mentioned -- the deployment time, DR, the ability to automate, taking advantage of economies of scale.

Instead of deploying one $10,000 server per application, you're now loading up 70 machines on a $15,000 server. All of those things come into play. But we really don’t have good statistics, because we didn’t really have any good processes before we started.
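[Worked out, that consolidation is roughly $15,000 / 70 ≈ $215 of server hardware per VM, versus $10,000 per application on dedicated hardware -- on the order of a 45-fold reduction in hardware cost per workload, before counting power, space, and operations.]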

What’s interesting now is that our next step in the process is to automate our billing process. Once we do that, we're going to have everything from our virtual infrastructure deployed into our billing system and either a charge-back or a show-back methodology.


So we'll have complete detailed costs of all of our infrastructure associated with every department and every application that is using our service. We'll be able to really show the total cost of ownership (TCO).
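[A simple, hypothetical illustration of the show-back idea: once each VM is tagged with the department that owns it, a monthly cost roll-up is little more than a grouped sum. The flat rate below is purely illustrative, loosely based on the roughly $1,000-per-VM figure mentioned later in this discussion.]

    # Hypothetical show-back roll-up: sum a flat monthly rate per VM by department.
    # The rate and field names are invented for illustration.
    from collections import defaultdict

    MONTHLY_RATE_PER_VM = 83.0   # ~ $1,000 per VM per year / 12 months

    def showback(vm_inventory):
        """vm_inventory: iterable of dicts like {"name": ..., "department": ...}."""
        totals = defaultdict(float)
        for vm in vm_inventory:
            totals[vm["department"]] += MONTHLY_RATE_PER_VM
        return dict(totals)

    # Example:
    # showback([{"name": "web01", "department": "Physics"},
    #           {"name": "db01", "department": "Physics"}])
    # returns {"Physics": 166.0}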

Milne: Brian, it sounds like you're on a path that a lot of our customers are on. What we see typically is that there is a change in consumption behavior when your customers know that they can get IaaS on demand. They stop hoarding resources. The same kind of tools and processes that can automate the delivery of those services can also automate tearing down those services when they're done.

Virtualization by itself increases capacity utilization quite a bit, but then going to this kind of services delivery, service consumption for infrastructure, actually further increases utilization and drives down over-provisioning.

Adding cost transparency to that service will further change your consumers' behavior. The ability to get it when you need it, and pay only for what you use, drives down the amount of resources that you have to keep in your data center.

Pietrewicz: Absolutely. It’s amazing what happens when you have to pay for something and it’s very visible.

Milne: I always feel that if IT is free that really changes the supply and demand equation, if you study economics. People don’t know what to do with free. They typically take too much.

Economic behavior

Pietrewicz: Right. This really starts driving basic economic and social behavior into the equation in IT. It's a difficult thing for organizations to get their heads around, and they're sort of getting it here at the university. It's not completely in place. The way that we look at it is a "We'll build it, and they'll come" kind of thing.

Most folks have figured out that they can really save that money. Instead of going out and buying a $10,000 server, they can buy a $1,000 VM from us that does the exact same thing. If they don’t want it any more, they can turn it off and not pay any more. All of those things come into play.

Another piece on that is that the university has been experimenting with a thing called responsibility center management (RCM), which is a budgeting process that works toward the bottom line of a particular organization. That means that people have to be transparent and make clear decisions about where they're spending their money. That's also starting to drive adoption.

Ancillary benefits

Gardner: We talked about some of the ancillary benefits of your approach, but there are some direct benefits when you go to a cloud model, which gives you more options. You can have your private cloud. You can look to public cloud and other hosting models, and then you can start to see a path or a vision towards a hybrid cloud environment, where you might actually move workloads around based on the right infrastructure approach for the right job at the right time. Any thoughts about where your cloud goals are vis-à-vis the hybrid potential?

Pietrewicz: We have a few things in play that we're actively working on. Today, we have people using various cloud providers. The interesting part is that they're just paying for it with a credit card out of their department, and the university doesn't have any clear way of knowing exactly what's out there. We don't really have any good security mechanisms in place for determining whether there's any sensitive data being stored out there inadvertently.

We're working with a lot of the cloud providers that we're already spending money with to develop consolidated accounts. One, we can save money through economies of scale. Two, we can get some visibility into what folks are actually using the cloud for. And three, IT would like to act as an adviser, able to point out that, among the various cloud providers out there, this particular provider is good at functionality or that particular provider is good at security.


The first step is to corral the use of public cloud for UNM and create an escorting process to the cloud. The second step is going to be a hybrid cloud that we'll set up from our private cloud here on site. We envision setting up hybrid cloud services with those public cloud providers to be able to move the workloads back and forth when necessary.

The other major benefit that we very much look forward to is being able to do DR in the cloud -- taking advantage of the ability to replicate data and then spin up systems as you need them, rather than having a couple of million dollars in equipment sitting, waiting, and hoping you never use it; equipment that you have to refresh every four years so that you have a viable DR plan.

Gardner: Is vCloud Automation Center something that will be useful in moving to this hybrid model? The one button to push, as it were, on the private cloud, will that become a one button to push in the hybrid model as well?

Pietrewicz: It will. I mentioned those various cloud service providers. Most of them are compatible with the vCloud Connector, so that you can simply just connect up that hybrid cloud service and with a little bit of work, be able to massage your portal.

We can have a menu option of public cloud providers through our portal that they could just select and say that they want to get a vCHS, Amazon, or Terremark, and then potentially move workloads back and forth. So vCAC and vCloud Connector are all at the center of it.

The other interesting piece that we're working on and going to try to figure out as part of this is that we really want to start looking into NSX and/or VIX to be able to provide very clear security boundaries, basically multi-tenancy, and then potentially be able to move those multi-tenant environments back and forth in the cloud or extend them from public to private cloud as well.

Software-defined networking

Gardner: Brian, you mentioned multi-tenancy earlier, and of course, there is a lot going on with software-defined data center, networking, and storage. What is it about it that’s interesting to you and why is this a priority for you, software-defined networking (SDN), for example?

Pietrewicz: SDN is the next step in being able to truly automate your IaaS and your virtual environment. If you want to be able to dynamically deploy systems and have them be in a sandbox that is multi-tenant by customer, you really need to have an SDN-type solution, or at least it's extremely helpful in doing that.

One of the things that we are looking at next is to be able to implement something like NSX, so that we can deploy the equivalent of what’s a virtual wire, a multi-tenant environment, to individual customers, so that they can only see their stuff and can’t see their neighbors and vice versa.

The key is the ability to orchestrate that on demand and not have to deal with the legacy VLAN and firewall kind of issues that you have with the legacy environment.

Gardner: It’s interesting how a lot of these major trends -- service delivery, cloud, private cloud, DR, and SDN -- are interrelated. It’s a complex bundle, but the payoffs, when you do this inclusively, are pretty impressive.


Pietrewicz: Whenever you get to the point of abstracting things to the software level, you provide the ability to automate. When you have the ability to automate, you get tremendous flexibility. That sometimes can be an issue in and of itself, just making decisions on how you want to do something. But along with that flexibility, you get the ability to automate just about anything that you want or need to be able to do.

The second piece to that is that we're really excited about figuring out, when we build the hybrid cloud model, how we might be able to extend those tenants into the cloud, either as active running workloads or in a DR model, so that the multi-tenancy is retained.

Milne: From VMware’s perspective, that kind of network virtualization capability is critical for our hybrid cloud service. It’s that capability that NSX provides that creates that seamless experience from your data center out to the hybrid cloud.

As you said, Brian, that kind of network configuration, allocation, and reallocation of IP addresses, when you are moving things from one data center to another, is not something you want to do on a manual basis. So NSX is a key component of our hybrid cloud vision. It's something that a lot of the other cloud providers just don't have.

Pietrewicz: I see it as the next frontier in IT. I think that when SDN starts taking off, it’s going to be a game changer in ways that we are not even recognizing yet, and that’s one example. Moving a workload from one network to another network is extremely powerful.

Cloud broker

Gardner: Kurt, this sounds as if not only is Brian transitioning into being a service provider to his constituencies, but now he's also becoming a cloud broker. Is this typical of what you're seeing in the market as well?

Milne: It is. Some of our customers will take a step to try to get their arms around shadow IT -- users going around IT -- by offering that provisioning option through the IT portal. So it's like, "You're using Amazon? That's fine. We can help you do that." Putting a button in the service catalog deploys the kind of work that they've been doing in a public cloud like Amazon, but it has to come through IT. Then, IT is aware of it.

There's a saying I like. It's called the "cloud boomerang." A lot of times, IT customers will put things out in the public cloud, but like a boomerang, it seems to always come back. The customer wants to integrate it with an existing system, or they realize that they have to support it up in the cloud. A lot of times, those rogue deployments make their way back to the IT organization. So putting an Amazon service in the vCAC portal and not changing anything else is a nice first step in corralling that.


Pietrewicz: That is exactly what we're seeing. At a university, because there isn’t really governance, it’s more like build a good service and hope they come. We take the approach of trying to enable it. We want to make it very transparent and say that they can use Amazon or vCHS, but there's a better way to do it. If you do it through the portal, you may be able to move those workloads back and forth.

We are actually seeing exactly what you mentioned, Kurt. Folks are reaching the limitations of using some of the cloud providers, because they need to get access to data back here at UNM and are actually doing the boomerang approach. They started out there and now they're migrating their machines into our IaaS so that they can get access to the data that they need.

Gardner: Kurt, we heard some very interesting things at VMworld recently around the cloud-management platform. Why don’t you tell us a little bit about that and how that fits into what we've been discussing in terms of this ongoing maturity and evolution that a large organization like the University of New Mexico is well into?

Milne: We recently announced the vRealize Suite, which is a cloud management platform. So we're moving our product management strategy to a common platform.

Over the years, VMware has either built or acquired quite a few different management products. We've combined those products into a number of suites, like our automation, operations, and our business management suites. Now, we're taking that next step and combining a lot of those capabilities into a single platform.

There are a couple of guiding ideas there. What we see in organizations like Brian's is that the line between the automated provisioning of workloads and the ongoing operations, maintenance, and support of those workloads is really starting to blur.

So you have automation tasks that might happen when you're doing a support call. Maybe you want to provision some more resources, and there are operations tasks like checking system health that you might want to do as a step in an automation routine.

Shared services

Our product strategy change is to move toward a shared-services model, similar to a service-oriented architecture. The different services underlying our management products would be executable through a tool like vCAC, through a command-line interface, or through a REST API. There's a mix-and-match opportunity to execute those services in different ways.
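[To make the "same service, many front ends" idea concrete, here is a hedged sketch of what invoking such a shared provisioning service over REST might look like. The endpoint, payload fields, and token handling are hypothetical placeholders, not the actual vRealize or vCAC API.]

    # Hypothetical REST call to a shared management service -- the same action a
    # portal button or CLI command would trigger. Endpoint and fields are invented.
    import json
    import urllib.request

    def request_provisioning(blueprint, cpu, memory_gb, token):
        payload = json.dumps({"blueprint": blueprint,
                              "cpu": cpu,
                              "memoryGB": memory_gb}).encode()
        req = urllib.request.Request(
            "https://cloud.example.com/api/provisioning/requests",
            data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer " + token},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)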

To build that platform with the shared-services model on top, we need to start re-architecting some of our products on the back end, so that we have a common orchestration engine, a common DR and backup capability, and a common policy engine. You don't want one tool to undo the work that another tool did yesterday. You can't have conflicting robots going out and doing automated tasks.

The general idea is to try to further consolidate these different management functions into a single platform. The overall goal is to try to help organizations maintain control, but then also increase flexibility and speed for their business users.

Gardner: Brian, is that something that you think is going to be on your radar? Is management so distributed now that you're looking for a more consolidated approach that’s inclusive?


Pietrewicz: That would be wonderful. We're doing things many different ways. If you take the example of orchestration, we are using Orchestrator, PowerShell, Perl, and starting to experiment with Puppet.

It would be really good if you could have one standardized way that you approach orchestration, as an example, and how that might tie into all the other pieces for back-end management, rather than handling it several different ways. As Kurt was mentioning, one part starts to step on another part. Having that be consolidated and consistent would be a huge value.

Milne: The other part of the strategy is also to make that work across environments. So the same tools and services would be available if you are provisioning up to Amazon or to your private cloud or hybrid cloud service, and even different hypervisors.

We're fully aware of the heterogeneous nature of the modern data center. So we're shifting to try to create that kind of powerful common management stack with that unified management experience across all of the environment. It’s kind of a nirvana. When we talk to people, they say that’s exactly what they want. So our vision is to kind of march towards delivering on that.

Gardner: Kurt, I am trying to recall from VMworld whether this was offered on-premises, as a service from a cloud, or some combination?

Service offerings

Milne: That's the other interesting part of this. We're starting to go down the path of offering a number of our management products as a service. For example, at VMworld, we announced the availability of a beta for our vCAC product as software as a service (SaaS), so you can, without installing any software, get a service portal, get that workflow and policy engine, and deploy infrastructure services across different environments.

We'll be rolling out betas for our other products in subsequent quarters over the next year or so. Then potentially we could have the SaaS services interact with and combine with the services that are available through the products that are installed on-premise. Our goal is to get these out there and then understand what the best use cases are, but that kind of mix and match is part of the vision.

Gardner: It’s interesting. We might have a reverse boomerang when it comes to the management of all of this. Does that sound appealing Brian? Is that something you would look to as a cloud service, comprehensive management?


Pietrewicz: Absolutely, but it’s largely dependent on return on investment (ROI). It’s that balance of, when you get to a certain level in an IT shop, it’s sometimes cheaper to do things in-house than it is to outsource it, and sometimes not. You have to do the analysis on the ROI on what makes more sense to bring it in or to use a SaaS.

As an example, we completely outsourced all of our email, because it’s a lot of work. It's very simple and easy to do as a SaaS solution, but it’s a lot more work to do in-house. It’s definitely something that we would look into.

Milne: In a mid-sized organization that might have 300 different applications that the IT organization supports, maybe 50 of those are IT tools. Already we've seen progress with companies like ServiceNow that have a SaaS-based service desk. It makes sense to start to turn more of those management products into a SaaS delivery model.

Gardner: Brian, any thoughts about others who are starting to move in your direction -- perhaps their own Lobo Cloud, their own portal rationalizing these services, being able to measure them better? What 20/20 hindsight do you have that you could offer as they go about this? Any lessons learned you could share?

Process orientation

Pietrewicz: The biggest lesson learned, without a doubt, is the focus on the process orientation, the ITIL model. The technology is really not that hard. It’s determining what your service is, what are you trying to deliver, and then how do you build that into a consistently delivered service, complete with SLAs and service descriptions that meet the customer needs. That's the most difficult part.

The technical folks can definitely sling the technology. That doesn’t seem to be that big of a deal. The partners and providers do a very good job of putting together products that make it happen, but the hard part is defining the processes and defining the services and making sure that they are meeting the customer needs.

Gardner: Kurt, any thoughts in reaction to what Brian said in terms of getting started on the right path around cloud rationalization of your IT organization?

Milne: One of the things that I've seen is a lot of organizations go through this process that Brian has described, trying to clearly define their services and figure out which parts of those services they're going to automate.


A lot of organizations start that service definition effort from an inside-out perspective: get a bunch of IT guys together and try to define what you do on a daily basis as a service. That's hard.

The easier approach is just to go talk to your customers and users and ask, "If I were going to give you a button you could click to get what you need, what would you put behind the button?" Then, you define your services more from an outside-in perspective. It seems to be where companies get anyway and you just shortcut a lot of teeth gnashing and internal meetings when you do it that way.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Tags:  BriefingsDirect  Brian Pietrewicz  Cloud  Dana Gardner  HP VirtualSystem for VMware  IaaS  Interarbor Solutions  Kurt Milne  software-defined networking  UNM  virtualization 

 

Managing transformation to Platform 3.0 a major focus of The Open Group Philadelphia conference on July 15

Posted By Dana L Gardner, Sunday, July 07, 2013

Taken as a whole, the converging IT and business mega trends of big data, cloud, mobile and social amount to more than a mere infrastructure or device shift.

Businesses and organizations often embrace some, but not all, of these activities. Their legacy and experience with them individually varies greatly. Each business and vertical industry has its own essential variables. And rarely are the trends embraced in unison, with a plan for how to cross-reference and exploit the others in concert.

Moreover, there are even more elements to the current upheaval: the Internet of things, aka machine-to-machine (M2M), and consumerization of IT (CoIT) implications, as well as the building interest in bring your own device (BYOD). There's clearly a lot of change afoot.

It's no wonder that the coordinated path to so-called Platform 3.0 that includes all these trends and their inter-relatedness is marked by uncertainty -- despite the opportunity for significant disruption.


So how should organizations approach standardization, planning, governance, measurement, and even leadership for the productive adoption of Platform 3.0? The topic was initially outlined in an earlier blog post by Dave Lounsbury, Chief Technical Officer at The Open Group.

These questions will certainly play a big part in the upcoming The Open Group conference beginning July 15 in Philadelphia. While the theme of the conference is Enterprise Transformation, with an emphasis on the finance, government, and healthcare sectors, The Open Group is working with a number of IT experts, analysts, and thought leaders to better understand the opportunities available to businesses, and the steps they need to take to best transform amid the Platform 3.0 uptake.

The Open Group vision of Boundaryless Information Flow™ forms, to me, a large ingredient in helping enterprises take advantage of these convergent technologies. A working group within the consortium will analyze the use of cloud, social, mobile computing, and big data, and describe the business benefits that enterprises can gain from them. The forum will then proceed to describe the new IT platform in the light of this analysis, with an eye to repeatable methods, patterns, and standards.

Registration open

Registration to the conference remains open to attend in person, and many parts of the event will be streamed or available to watch later. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

In a lead-up to the conference, The Open Group also organized a Tweet Jam last month around the hashtags #ogP3 and #ogChat to investigate how the early patterns of Platform 3.0 use and adoption are unfolding. I was happy to be the moderator.

Among the salient take-aways from the various discussions and the online Twitter chat:

  • Speed of technology and business innovation will rapidly change the focus from asset ownership to the usage of services, requiring more agile architecture models to adapt to the rate and impact of such change
  • New value networks will result from the interaction and growth of the "Internet of things" and multiple devices and the expected new connectivity that targets specific vertical industry sector needs
  • Expect exponential growth of data inside and outside organizations, converging with increased end-point usage in mobile devices, coupled with powerful analytics all amid hybrid-cloud-hosted environments
  • Leaders will need to incorporate new sources of data, including social media and sensors in the Internet of Things and rapidly turn the data into usable information through correlation, fusion, analysis and visualization
  • Performance and security implications will develop from cross-technology platforms across more federated environments
  • Social behavior and market channel changes will result in multiple ways to search and select IT and business services, engendering new market drivers and effects

And some Tweets of interest from the chat:

  • Vince Kuraitis ‏@VinceKuraitis -- Great term. RT @NadhanAtHP: @technodad #ogP3 principle of "Infonomics" introduced by @doug_laney #ogChat http://bit.ly/YnxXwe
  • jim_hietala ‏@jim_hietala -- RT @nadhanathp: @VinceKuraitis Agreed.  Introducing new definition for ROI - Return on Information http://bit.ly/VAsuAK  #ogP3 #ogChat
  • E.G.Nadhan ‏@NadhanAtHP -- Boundaryless Information Flow to be introduced into Healthcare @theopengroup conference in July' 13 http://blog.opengroup.org/2013/06/06/driving-boundaryless-information-flow-in-healthcare/ … #ogChat #ogP3
  • E.G.Nadhan ‏@NadhanAtHP -- Say hello to the Data Scientist - Sexiest job in the world of #bigdata in the 21st century http://bit.ly/V62TcG  #ogChat #ogP3
  •  Vince Kuraitis ‏@VinceKuraitis -- Business strategy and IT strategy converge @ Platform 3.0 #ogp3 #ogChat

Again, registration to the conference remains open to attend in person. I hope to see you there. We'll also be conducting some BriefingsDirect podcasts from the conference, so watch for those in future posts.


Tags:  ArchiMate  big data  BriefingsDirect  Cloud  cloud computing  Dana Gardner  mobile  Platform 3.0  social  The Open Group  TOGAF 

 

Why should your business care about Platform 3.0? A Tweet Jam

Posted By Dana L Gardner, Friday, May 31, 2013

On Thursday, June 6, The Open Group will host a "tweet jam" examining Platform 3.0 and why the concept has great implications for businesses.

Over recent years a number of technologies -- cloud, mobile, big data, social -- have emerged and converged to disrupt the way we engage with each other in both our personal and business lives. Most of us are familiar with the buzz words, including "the Internet of things," "machine-to-machine (M2M)," and "consumerization of IT," but what do they mean when they act in concert? How can we treat them as separate? How can we react best?


I was early to recognize this confluence as more than the sum of its parts, back in 2010. Gartner was also early to recognize this convergence of trends, representing a number of architectural shifts, which it called a "Nexus of Forces." This nexus was presented as both an opportunity, in terms of innovation of new IT products and services, and a threat for those who do not keep pace with evolution, rendering current business architectures obsolete.

Understanding opportunities

Rather than tackle this challenge solo, The Open Group is working with a number of IT experts, analysts, and thought leaders to better understand the opportunities available to businesses and the steps they need to take to benefit and prosper from Platform 3.0, not fall behind. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

So please join the burgeoning Platform 3.0 community on Twitter on Thursday, June 6 at 9 a.m. PT/12 p.m. ET/5 p.m. GMT for a tweet jam, moderated by me, Dana Gardner (@Dana_Gardner), BriefingsDirect, that will discuss and debate the issues and implications around Platform 3.0.


Key areas that will be addressed during the discussion include: the specific technical trends (big data, cloud, consumerization of IT, etc.), and ways businesses can use them – and are already using them – to increase their business opportunities.

All are welcome, including The Open Group members and interested participants from all backgrounds, to join the one-hour online chat session and interact with our panel's thought leaders. To access the discussion, please follow the #ogp3 and #ogChat hashtags during the discussion time.


Tags:  ArchiMate  big data  BriefingsDirect  Cloud  cloud computing  Dana Gardner  mobile  Platform 3.0  social  The Open Group  TOGAF 

 

Dutch insurance giant Achmea deploys 'ERP for IT' to reinvent IT processes and boost business performance

Posted By Dana L Gardner, Tuesday, March 19, 2013

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how Achmea Holding, one of the largest providers of financial services and insurance in the Netherlands, has made large strides in running their IT operations like an efficient business itself.

We'll hear how Achmea rearchitected its IT operations to both be more responsive to users and more manageable by the business, all based on clear metrics.

Here to explore these and other enterprise IT performance issues, we're joined by our co-host for this sponsored podcast, Georg Bock, Director of the Customer Success Group at HP Software, and he's based in Germany.

And we also welcome our special guest, Richard Aarnink, leader in the IT Management Domain at Achmea in the Netherlands, to explain how they've succeeded in making IT better governed and agile -- even to attain "enterprise resource planning (ERP) for IT" benefits.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why is running IT more like a business important? Why does this make sense now?

Aarnink: Over the last year, whenever a customer asked us for something, we delivered what was asked. We came to the conclusion that delivering on every request we got was an intensive process, for which we created projects.

It was very difficult to make sure that it was not a one-time hero effort, but that we could deliver to the customer what was asked every time -- on scope, on specs, on budget, and on time. We looked at it and said, "Well, this is actually like running a normal business, so why should we be different? We should be predictable as well."

Gardner: Georg Bock, is this something you are seeing more and more of in the field?

Trend in the market

Bock: Yes, we definitely see this as a trend in the market, specifically with the customers that are a little more mature in their top-down strategic thinking. Let’s face it, running IT like a business is an end-to-end process that requires quite a bit of change across the organization -- not only technology, but also process and organization. Everyone has to work hand in hand to be, at the end of the day, predictable and repeatable in what they're doing, as Richard just explained.

That’s a huge change for most organizations. However, once it has been done and has taken hold in the organization, there's a huge payback. It's not an easy thing to undertake, but it’s inevitable, especially when we look at the new trends around cloud, multi-sourcing, mobility, and so on, which bring new complexity to IT.

You'd better have your bread-and-butter business under control before moving into those areas. That’s also why the timing right now is so important and top of people’s minds.

Gardner: Tell us a bit about Achmea, the size of your organization, and why IT is so fundamentally important to you.

Aarnink: Achmea is a large insurance provider in the Netherlands. We have around eight million customers in the Netherlands and 17,000 employees. We're a very old, cooperative organization, and we have been through lots and lots of mergers and acquisitions in the last 20 years. So we had various IT departments from all of those companies, which we have centralized over the past years.

If you look at insurance, it's really about trust: whenever something happens to a customer, he can rely on the insurer to help him out, and usually that means providing money. IT is necessary to ensure that we can deliver on the promises we made to our customers. So it's not a tangible product that we deliver -- it's essentially money -- and it all depends on IT.

Of the 17,000 employees that we have in the Netherlands, about 1,800 to 2,000 work in the centralized IT department. Over the last year, we changed our target operating model to centralize the technologies in what we call competence centers, within a department we call Solution Development.

We created a new department, IT Operations, and we created business-relationship departments that were merged with the business units asking for, or demanding, functionality from our IT department. We changed our entire operating model to cope with that, but we still have a lot of homegrown applications that we have to deliver on a daily basis.

Changing the department and the organizational structure is one thing, and now we need to change the content and the applications we deliver.

Gardner: How has all this allowed you to better manage all the aspects of IT, and make it align with the business?

Strategy and governance

Aarnink: To answer that question I need to elaborate a little bit on the strategy and governance department, which sits within the IT department. What we centralized there were project portfolio management and project steering, as well as the architectural capabilities.

We make sure that whatever solution we deliver is architected from a single model that we manage centrally. That's a real benefit we gained in centralizing this: making sure that, from both the architecture and project perspectives, we can govern the projects we're going to deliver to our business units.

Bock: Achmea is a leader in that, and the structure that Richard described is essential to being successful. ERP for IT, or running IT as a business -- the fundamental IT processes -- is all about standardization, repeatability, and predictability, especially in situations where you have mergers and acquisitions. It’s always a disruption if you have to bring different IT departments together. If you have a standard that’s easy to replicate, that’s a no-brainer and a winner from a business bottom-line perspective.

In order to achieve that, you have to have a horizontal unit -- a team that can drive standardization across the company. Richard and Achmea are not alone in that. Richard and I have had quite a number of discussions with companies from other industries, and we see that everyone has the same problem. Having those horizontal teams -- primarily enterprise architecture, the chief technology officer (CTO) office, or whatever you like to call those departments -- is definitely a trend in the industry, at least among the mature customers that want to take that perspective and drive it forward.

But as I said, it’s all about standardization. It’s not rocket science from an intellectual perspective, but we have to cut through the political difficulties of driving adoption across the different organizations in the company.

Gardner: What sort of problems or issues did you need to resolve as you worked to change things for the better?

Aarnink: We looked at the entire scope of implementing ERP for IT, and we looked first at IT projects and the portfolio. We found that we still had several departments running their own solutions for managing IT projects and budgets. In the past, we had a mechanism only for controlling the budgets of the different business units, but no centralized view of the IT portfolio as a whole for Achmea.

We started in that area, looking at one system of record for IT projects and portfolio management, so we could steer what we wanted to develop and what we wanted to sunset.

Next, we looked at application portfolio management: the set of applications that we want to use now and in the future, the set of applications that we want to sunset in the next year, and how that relates to the IT projects. That was one big step we made in the last two years. There's still a lot of work to be done in that area, but it was a big topic.

Service management

The second big topic was service management. Due to all the mergers, we still had lots of variation in IT processes. Incident management, for example, was handled in a completely different way in each of the departments we inherited.

We adopted service desks to cater to all those kinds of deviations from the standard ITIL process. We looked at that and said we had to centralize again and become more prescriptive about how these processes will look and how we make sure they're standardized.

That was the second area we looked at. The third area was application quality. How could we make sure that we got a better first-time-right score in delivering IT projects? How could we make sure that there is one system of record for requirements and one system of record for test results and defects? Those are the three areas we invested in during the first phase.

Lots of change going on

Gardner: What have you seen in the market that leads you to believe that ERP for IT is not just a vision but is, in fact, happening, and that we're starting to see tangible benefits?

Bock: Richard very nicely described real, practical results, rather than coming up with a dogmatic, philosophical process in the first place. I think it’s all about practical results, and practical results need to be predictable and repeatable; otherwise it’s always the one-time hero effort that Richard brought up in the beginning, and that’s not scalable at all.

At some point you need process, but you shouldn’t apply it dogmatically. I also hear the Agile-versus-waterfall debate; whatever is applicable to the problem is the right thing to do. Does that rule out process? No, not at all. You just have to live the process in a slightly different way.

Everyone has to get away from their dogmatic positions and look at it in a little more relaxed way. We shouldn’t take ourselves too seriously; when we drive ERP for IT to apply some standard ways of doing things, we just make our lives easier. It has nothing to do with an esoteric vision. It's something very achievable: getting a couple of people to agree on practical ways of getting things done.

Then we can draw the technological consequences from it, rather than the other way around. That's been the problem in IT, from my perspective, for years: technology always came first, and then we looked for the nail to use that hammer on. That’s not the right way to do it.

From my perspective, standardization is simply a necessary conclusion from some of the trial-and-error mistakes that have been made over the last 10-15 years, where people tried to customize the hell out of everything just to be in line with the specificity of how things are being done in their particular company. But nobody asked why it was that way.

Aarnink: I completely agree. We had several discussions about how the incident process is carried out, and it’s the same in every other company as well. Of course there are slight differences, but the fact is that an incident needs to be resolved, and that’s the same in every company.

Best practice

You can easily create a best practice for that, adopt it within your own company, and unburden yourself from having to think through the process, reinvent it, and create your own tool sets and interfaces with external companies. That can all be centralized; it can all be standardized.

It’s not our business to create our own IT tools. Our business is delivering policy-management systems for our core industry, which is insurance. We don’t want to build all the IT we need just to keep the IT running. We want that standardized, so we can concentrate on delivering business value.

Gardner: Now that we've been calling this ERP for IT, I think it’s important to look back at where ERP as a concept came from. Getting more data and insight, analyzing processes, determining the best processes and methods, and then instantiating them repeatably is at the core of ERP. But when we try to do that with IT, how do we measure? What is the data, and what do we analyze?

Richard, at Achmea, are you looking at key performance indicators (KPIs), and are you using project portfolio management maturity models? How are you measuring this so that you can, in fact, do what ERP does best -- make it repeatable and standardized?

Aarnink: If you look at it from the budget perspective, we look at the budgets, the timeframes, and the scope of what we need to deliver, and whether we deliver on time, on budget, and on specs, as I already said. Those are basically the KPIs we look at when we deliver projects.

But if you look at the processes involved when you deliver a project, then you're talking about requirements management. How quickly can you create a set of requirements, and how much can you reuse requirements from the past? Those are the KPIs we look at in the specific processes when you deliver an IT project.

So the IT project is a vehicle that helps you deliver the value you need, and the processes underneath it actually do the work for you. At that level we try to standardize and define KPIs to make sure that we reuse as much as possible, that we deliver quality, and that we have the resources in place that we actually need to deliver that functionality.

You need to look at small steps that can be taken in a couple of months’ time. Draw up a roadmap and enable yourself to deliver value every, let's say, 100 days. Make sure that every time you deliver, it's functionality that’s actually used, and then look at your roadmap and adjust it, so you enable yourself to be agile in that way as well.

The biggest thing you need to do is take small steps. The other thing is to look at your maturity. We did a CMMI test review; we didn't do the entire CMMI accreditation, but only looked at the areas we needed to invest in.

Getting advice

We looked at where we had already standardized and at the areas we needed to address first. That can help you prioritize. Then, of course, look at companies in your network that have already taken some of these steps and make sure you get advice from them as well.

Bock: I absolutely agree with what Richard said. If we're looking for a recipe for success, you have to have a good balance of strategic goals and tactical steps toward those strategic goals. Those tactical steps need to have a clear measure and clear success criteria associated with them. Then you're on a good track.

I just want to come back to the notion of ERP for IT that you alluded to earlier, because that term can actually hurt the discussion quite a bit. If you think about ERP 20 years ago, it was a big animal. And we shouldn’t look at IT nowadays in the same manner as ERP was looked at 20 years ago. We don’t want to reinvent a big animal right now, but we have to have a strategic goal where we look at IT from an end-to-end perspective, and that’s the analogy that we want to draw.

ERP is something that has always been looked at as an end-to-end process, with a clear, common context from an end-to-end perspective -- which is not the case in IT today. We should learn from that analogy, but we shouldn’t try to implement ERP literally for IT, because that would mean taking the whole thing in one step. As Richard just said very nicely, you have to take it in digestible pieces, because there's a lot of technology to deal with. You can't take it all in one shot.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  Achmea Holding  Application lifecycle management  BriefingsDirect  Cloud  Configuration management  Dana Gardner  ERP  Georg Bock  HP  HP DISCOVER  Interarbor Solutions  Richard Aarnink 

 

Latest Jitterbit release further eases application and data integration from among modern sources

Posted By Dana L Gardner, Thursday, January 17, 2013

Data and application integration provider Jitterbit this week released a new version of its solution, Jitterbit 5, designed to be the glue between on-premise, cloud, social, and mobile data.

Jitterbit focuses on simple yet powerful integration technologies that can be quickly and easily deployed to create integrated processes and data views. We've seen a lot of interest in lightweight, low-code integration capabilities as more SaaS and cloud services need to be coordinated. This is now becoming even more pertinent as data must be brought together from a wider variety of sources.

Jitterbit 5 aims to raise the level of simplicity even higher with new features that streamline process integration, said the Oakland, CA-based company. Its wizard-based approach allows non-technical users to design integration projects through a graphical, point-and-click interface. I think enabling more people to tailor and specify integrations can significantly boost innovation and productivity.

In enterprise computing today, there are three main sources of data that must come together to help drive the business forward, according to Jitterbit's thinking. First there's corporate data -- which for years has been the cornerstone of technology strategies -- that sits in databases, data warehouses, enterprise applications, etc. and is typically kept safe and sound on-premise, behind the firewall.

Over the past few years, two other sources of data have emerged as critical for businesses that want to optimize their operations and better serve their customers: data stored in cloud services, and data from a pair of new platforms -- social and mobile. And we'll no doubt see ever larger and more specific data emerge from business and consumer activities in these domains.

These newer sources of data can be located anywhere, and the information they provide comes in a wide variety of formats, making it harder than ever to integrate with structured corporate information using traditional integration technologies.
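As a concrete, deliberately generic illustration of what that hand-coded stitching can look like, here is a minimal Python sketch. The CRM endpoint, field names, and warehouse schema are hypothetical, and this is not Jitterbit's product or API -- it simply shows semi-structured JSON from a cloud service being flattened into an on-premise relational table:

```python
# Generic illustration only -- the endpoint, field names, and database schema
# below are hypothetical, not Jitterbit's or any specific vendor's API.
import json
import sqlite3
import urllib.request

CLOUD_URL = "https://api.example-crm.com/v1/contacts"  # hypothetical cloud service

def fetch_cloud_contacts(url):
    """Pull semi-structured JSON records from a cloud service."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def flatten(contact):
    """Map a nested cloud record onto the flat, structured shape
    the on-premise warehouse table expects."""
    return (
        contact["id"],
        contact.get("name", ""),
        contact.get("address", {}).get("country", "unknown"),
    )

def sync_to_warehouse(contacts, db_path="warehouse.db"):
    """Upsert the flattened records into an on-premise table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS contacts (id TEXT PRIMARY KEY, name TEXT, country TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO contacts VALUES (?, ?, ?)",
        [flatten(c) for c in contacts],
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    sync_to_warehouse(fetch_cloud_contacts(CLOUD_URL))
```

Even a toy pipeline like this has to be rewritten whenever a source, field, or format changes, which is exactly the maintenance burden that wizard-driven, analyst-friendly tooling aims to remove.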

Three pillars

Jitterbit's focus, therefore, is to help enterprises integrate data from all three of these pillars of modern computing. And the means to do it must appeal to the business analysts who best understand the need to have many different types of data readily available and associated with business processes in near real time.

"Vendors have been trying to solve the issue of integration of technology for over 20 years.

The majority of companies come at it with a technical perspective -- they try to solve the problem for the professional developer," says Andrew Leigh, vice president of products with Jitterbit.

"But the problem of integration isn't just a technical issue; it's a business issue. The people who are best at building, managing, and changing integration are the ones that understand it's really a process. We're putting integration back in the hands of the business analysts who really understand the data and processes to make that integration effective."

While Jitterbit features wizards and other simple tools to let non-technical users quickly build the data connections that the business requires, it's important that they work in partnership with IT to ensure the process is governed correctly, says Leigh, who recently joined Jitterbit from Salesforce.com.

"We've built all the knowledge and best practices that the industry has been building up over the last two decades into our solution; now we're focused on the user experience and hiding complexity," says Leigh.

This latest release also features enhanced connectivity to Salesforce, Microsoft Dynamics, and SAP, as well as Twitter and Chatter. The new Instant View and Process Monitor tools provide visibility into the status and results of more complex business-process integrations. And version 5.0 supports large-volume cloud APIs, allowing organizations to rapidly synchronize large volumes of data at higher levels of performance.

Jitterbit's approach also fits into the vision of "integration as a service," which seems a natural development of cloud models. I'd like to see more cloud services providers embed such integration services into their offerings. This is especially important for PaaS to go mainstream.

A video describing the new features in Jitterbit 5.0, available now, can be found here. A free 30-day trial of the product is available here.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)

You may also be interested in:


Tags:  application integration  BriefingsDirect  Cloud  Dana Gardner  Interarbor Solutions  Jitterbit  mobile  social software 

 

Study: Cloud computing becoming pervasive, and IT needs to take control now

Posted By Dana L Gardner, Wednesday, March 21, 2012

Cloud computing may be taking the business world by storm, but its success could mean a "perfect storm" that endangers the role of IT.

As a result, IT needs to step up now and change its approach to cloud services. This includes building trust with the lines of business, beginning to manage public cloud services, and pursuing increased automation for service provisioning and operations.

These are the key findings of a survey commissioned by BMC Software and conducted by Forrester Research. The study, "Delivering on High Cloud Expectations," shows that business units' demand for speed and agility is leading them to circumvent IT and acquire cloud services, more than half of them from what were termed "unmanaged" clouds.

Brian Singer, Lead Solutions Marketing Manager for BMC, said his company commissioned the survey in an effort to confirm what the company was hearing anecdotally from customers. "Cloud and software as a service (SaaS) are in enterprises in a big way," Singer said, "and we wanted to see how IT was dealing with them."

For the study, researchers polled 327 enterprise infrastructure executives and architects in the United States, Europe, and Asia-Pacific. Among the key findings:

  • Today, 58 percent run mission-critical workloads in unmanaged public clouds, regardless of policy. The researchers use "unmanaged" to describe clouds that are managed by the cloud operators, but not by the company buying the service.
  • In the next two years, 79 percent plan to run mission-critical workloads on unmanaged cloud services.
  • Nearly three out of four respondents, 71 percent, thought that IT should be responsible for public cloud services.
  • Seventy-two percent of CIOs believe that the business sees cloud computing as a way to circumvent IT.

Wake-up call

"This is a wake-up call," Singer said. "They know that this is going on and they understand that cloud is a way to go around monolithic IT." According to the survey, 81 percent of respondents said that a comprehensive cloud strategy is a high priority for the next year.

While cost is a major driver in the C-suite, the line-of-business respondents put cost far down their list of priorities. Instead, they are seeking higher availability, faster delivery of services, more agility, and greater options and flexibility.

The researchers suggested a three-pronged approach for IT to get a handle on this:

  • Build trust with the users and create a better user experience -- have an honest conversation about the needs of the business, incorporate business requirements into a cloud strategy, and demonstrate progress toward them.

  • Shift from unmanaged to managed public cloud services. Many cloud vendors allow IT operations to monitor and manage services. This will help mitigate the risk and complexity that unmanaged clouds now introduce.
  • Develop ways to provision and operate internal services so that users get experiences similar to those they get from outside. This requires more automation to rapidly deploy solutions.

The full study results will be announced April 26 at 11 a.m. CST as part of a BMC webinar (registration required).

You may also be interested in:

Tags:  BMC Software  BriefingDirect  Business Service Management  Cloud  cloud computing  data center  Enterprise IT 

 