Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


Microsoft sets stage for an automated hybrid cloud future with Azure Stack Technical Preview

Posted By Dana L Gardner, Monday, February 01, 2016

Last week’s arrival of the Microsoft Azure Stack Technical Preview marks a turning point in the cloud-computing market and forms a leading indicator of how dramatically Microsoft has changed in the past two years.

The cloud turning point comes because the path to hybrid-cloud capabilities and benefits has a powerful new usher, one with the enterprise, developer, and service-provider presence, R&D budget, and competitive imperative to succeed in a market still underserved and nebulous.

Over the past five years, public cloud infrastructure-as-a-service (IaaS) value and utility have matured rapidly around three major players: Amazon Web Services, Google Cloud Platform, and Microsoft Azure. But hybrid-cloud infrastructure standards are still under-developed, with no dominant market driver.

OpenStack, Apache CloudStack, Cloud Foundry, Eucalyptus, vCloud Air — none is dominant, none forms an industry standard with critical mass at either the enterprise or service-provider levels. The best path to a hybrid cloud global standard remains a vision only, fragmented in practice, and lacking a viable commercial beachhead.

Right now, it’s hard for enterprise IT architects to place a major bet on their hybrid cloud strategy. Yet placing major bets on the best path to effective hybrid cloud capabilities is exactly what enterprise IT architects should be doing as soon as possible.

Instead of a clear private-to-public cloud synergy strategy, IT organizations are fretting over whether to take a cloud-first or mobile-first approach to their apps, data, and development. They want to simultaneously modernize legacy apps, rationalize their data, give their organizations DevOps efficiency, find comprehensive platform-as-a-service (PaaS) simplicity, and manage it all securely. They know that hybrid cloud is a big part of all of these, yet they have no clear direction.

API first

The right way to approach the problem, says my friend Chris Haydon, Chief Strategy Officer at SAP Ariba, is to resist cloud-first and mobile-first, and instead take the higher-abstraction API-first approach to as many aspects of IT as possible. He's right, and SAP's own success with cloud models — particularly SaaS and big data as a service — is a firm indicator. [Disclosure: SAP Ariba is a sponsor of my BriefingsDirect podcasts.]

With Microsoft Azure Stack (MAS), the clear direction for the future of cloud is an API-first and highly automated private-cloud platform that has full compatibility with a major public cloud, Microsoft Azure. Like public Azure, private Azure Stack supports workloads from many tools and platforms — including Linux and Docker — and, as a cloud should, fires up hypervisors to run any workload supported on major virtual machines.

Sensing an integrated private cloud platform opportunity big enough to drive a truck through, Microsoft has developed MAS to be highly inclusive, with a unified application model around Azure Resource Manager. Using templates typically found on GitHub, MAS operators can rapidly and simply create powerful private cloud resources to support apps and data. Because the templates are Azure-consistent, they also make it easy to move those workloads to a public cloud. This is not a Windows Server or .NET abstraction; it's a private-cloud abstraction, with an API-first approach to management and assembly of data centers on the fly.
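To make that template-driven, API-first model concrete, here is a minimal sketch of how a deployment might be pushed to an Azure Resource Manager (ARM)-style REST endpoint from Python. The endpoint URL, API version, token, and template contents are illustrative assumptions on my part, not details from Microsoft; in practice an Azure Stack operator would more likely use the portal, PowerShell, or CLI tooling pointed at the local ARM endpoint.

```python
# Hypothetical sketch: submitting an ARM template deployment over the ARM REST API.
# Endpoint, subscription ID, API version, token, and template are placeholders.
import requests

ARM_ENDPOINT = "https://management.local.azurestack.external"   # assumed local MAS endpoint
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"            # placeholder subscription
RESOURCE_GROUP = "demo-rg"
DEPLOYMENT = "demo-deployment"
API_VERSION = "2015-11-01"                                       # assumed ARM API version
TOKEN = "<bearer token from Azure AD or AD FS>"                  # placeholder credential

# A minimal template of the kind typically shared on GitHub: one storage account.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "name": "demostorage001",
        "apiVersion": "2015-06-15",
        "location": "local",
        "properties": {"accountType": "Standard_LRS"},
    }],
}

url = (
    "{}/subscriptions/{}/resourcegroups/{}/providers/Microsoft.Resources/deployments/{}"
    "?api-version={}".format(ARM_ENDPOINT, SUBSCRIPTION, RESOURCE_GROUP, DEPLOYMENT, API_VERSION)
)
body = {"properties": {"mode": "Incremental", "template": template, "parameters": {}}}

# PUT the deployment; ARM validates the template and provisions the resources it describes.
resp = requests.put(url, headers={"Authorization": "Bearer {}".format(TOKEN)}, json=body)
resp.raise_for_status()
print("Provisioning state:", resp.json().get("properties", {}).get("provisioningState"))
```

The Azure-consistency point is that, in principle, the same template and the same call shape work against public Azure simply by swapping the management endpoint and credentials.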

Because MAS is built on software-defined data center (SDDC) principles and technologies, it requires modern and robust, albeit industry-standard, commodity hardware. Converged and hyper-converged infrastructure models therefore work very well for rapidly deploying MAS private clouds on-premises as appliances, racks, and blocks, and they provide the cost and capacity visibility and planning needed to align the hardware side with the API-first model on the software side. Indeed, the API-first and converged-infrastructure models are highly compatible and synergistic.
Hewlett Packard Enterprise (HPE) clearly has this hardware opportunity for enterprise private cloud in mind, recognizes the vision for "composable infrastructure," and is already partnering with Microsoft at this level. [Disclosure: HPE is a sponsor of my BriefingsDirect podcasts.]

Incidentally, the MAS model isn't just great for new and old server-based apps; it's also a way to combine those with desktop virtualization and deliver the full user experience as a service to any desktop or mobile endpoint. And big-data analytics across all databases, app stores, and unstructured data sources can be integrated well into the server and desktop apps cloud.

Dramatic change

And this is why MAS portends the dramatic change that Microsoft has undergone. Certainly MAS and the Azure hybrid cloud roadmap suit Microsoft's installed base and therefore its Windows legacy. There is a compelling path from Windows Server, Microsoft SaaS apps and Exchange, .NET, and Visual Studio to Azure and MAS. There is a way to rationalize all Microsoft and standard data across its entire lifecycle. But there is also a customer-focused requirements list that allows for any client endpoint support, and an open mobile apps development path. There is a path for any enterprise app or database on a hypervisor to and from MAS and Azure. There are attractive markets for ISVs, service providers, and IT integrators and support providers. There is a highly desirable global hardware market around the new on-premises and cloud provider hardware configurations to support SDDC modernization and MAS.

Clearly, Amazon Web Services' stunning public-cloud success has clarified Microsoft's thinking around customer-centric market focus and more open IT systems design. But the on-premises data center, when made efficient via SDDC and new hardware, competes well on price against public cloud over time. And private cloud solves issues of data sovereignty, network latency, control, and security.

But to me, what makes Microsoft Azure Stack such a game-changer is the new path toward an automated hybrid-cloud future, a path that virtually no other vendor or cloud provider is in a better position than Microsoft to execute. Google, Amazon, even IBM, are public-cloud biased. The legacy software IT vendors are on-premises biased. Pretty much only Microsoft is hybrid biased, and its API-first path removes the need for enterprise IT to puzzle over the hybrid cloud boundary. It will become an automated boundary, but only with a truly common hybrid cloud management capability.

When you take the step toward API-first IT and adopt a common hybrid cloud model designed for the IT-as-a-service future, all the major IT constituencies — from dev to ops to CISO to business process architects to integrators to users — focus on the services. Just the services.

Once IT focuses on IT as a service, the target for deployment can be best managed programmatically, based on rules and policies. Eventually, managing the best dynamic mix of on-premises and public-cloud services can be optimized and automated using algorithms, compliance dictates, cost modeling, and experience. This hybridization can be extended down to the microservices and container levels, but only if the platform and services are under a common foundation.
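As a rough, hypothetical illustration of what rules- and policy-based placement could look like, the sketch below encodes a tiny decision function: compliance rules first, then latency, then cost. The workload attributes, thresholds, and cost figures are invented for illustration and do not describe Azure Stack's actual management capabilities.

```python
# Minimal, hypothetical sketch of policy-driven workload placement across a hybrid cloud.
# Attributes, thresholds, and costs are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sovereignty_required: bool   # must the data stay on-premises?
    max_latency_ms: int               # latency budget to end users
    monthly_cost_on_prem: float       # modeled cost if run on the private cloud
    monthly_cost_public: float        # modeled cost if run in the public cloud

def place(w: Workload) -> str:
    """Apply compliance rules first, then latency, then cost."""
    if w.data_sovereignty_required:
        return "private cloud"        # compliance dictates on-premises
    if w.max_latency_ms < 20:
        return "private cloud"        # latency-sensitive workloads stay close to users
    if w.monthly_cost_public < w.monthly_cost_on_prem:
        return "public cloud"         # otherwise optimize on cost
    return "private cloud"

workloads = [
    Workload("payroll-db", True, 50, 900.0, 700.0),
    Workload("web-frontend", False, 100, 1200.0, 800.0),
    Workload("factory-telemetry", False, 10, 400.0, 350.0),
]

for w in workloads:
    print("{:20s} -> {}".format(w.name, place(w)))
```

A real hybrid cloud manager would draw on far richer policy, compliance, and cost models, but the shape of the decision is the point: once the services sit on a common foundation, placement becomes a programmable choice rather than an architectural one.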

In IT, architecture is destiny. And a business supported through the management of services, with the infrastructure abstracted away, is the agile digital business that innovates and analyzes better than the competition.

A common set of API and data services that spans on-premises and public clouds is essential for creating hybrid clouds that support and propel business needs. With the right cloud model in place, IT leaders gain the freedom to acquire, deploy and broker all the services any business needs. IT becomes the source of business value streams, while the hybrid cloud supports that.

API-first private cloud instances on converged infrastructure with automated hybrid cloud services management is the obvious future. Too bad there have been so few clear paths to attainment of this end-game.

It just now looks like Microsoft is poised to get there first and best. Yes, it really is a new Microsoft.

[Disclosure: Microsoft defrayed travel expenses for me to attend a recent Microsoft Azure Stack workshop and industry analyst gathering.]

Tags:  API first  Azure  BriefingsDirect  Dana Gardner  hybrid cloud  Interarbor Solutions  MAS  Microsoft  Microsoft Azure Stack 

 

Procurement in 2016—The supply chain goes digital

Posted By Dana L Gardner, Tuesday, January 19, 2016

The next BriefingsDirect business innovation thought leadership discussion focuses on the heightened role and impact of procurement as a strategic business force.

We'll explore how intelligent procurement is rapidly transforming from an emphasis on cost savings to creating new business value and enabling supplier innovations.

As the so-called digital enterprise adapts to a world of increased collaboration, data access, and business networks, procurement leaders can have a much bigger impact, both inside and outside of their companies.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of procurement as a focal point of integrated business services we’re joined by Kurt Albertson, Principal of Advisory Services at The Hackett Group in Atlanta, and Dr. Marcell Vollmer, Chief Operating Officer at SAP Ariba and former Chief Procurement Officer at SAP. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We're looking at mobile devices being used more and more for business. We have connected business networks. How are these trends impacting procurement, and why is procurement going to have a bigger impact as time goes on?

Vollmer: I see a couple of disruptive trends, which are very important and are directly impacting procurement.

We see how smartphones and tablets have changed the way we work on a daily basis, not to forget big data, the Internet of Things (IoT), and Industry 4.0. So, there are a lot of technology trends out there that are very important.

On the other side, we also see completely new business models taking off. Uber is the largest taxi company without owning a single cab. Airbnb is basically the same, the largest accommodation provider, but not owning a single bed. We see also companies like WhatsApp, Skype, and WeChat. They don't own the infrastructure anymore, like what we know from the past.

I could mention a couple more, like Alibaba. Everybody knows it was the highest IPO in history, with an initial market capitalization of around $230 billion, and they don't even have an inventory. What we're seeing are fundamental changes, the technology on one side and then the new business models.

We now see the impact here for procurement. When business models are changing, procurement also needs to change. Companies intend to simplify the way they do business today.

Complex processes

We see a lot of complex processes. We have a lot of complex business models. Today it needs to be "Apple easy" and "Google fast." This is simply what millennials expect in the market.

But also, we see that procurement, as a function, is itself transforming beyond a pure service function. And this is definitely one trend. We see a different strategic impact. What is asked of procurement from the lines of business is more important and is on the agenda for the procurement function.

Let me add one last topic, the evolution of the Chief Procurement Officer (CPO) role. Given the different trends in the market, and the different requirements those trends create for procurement, the role of procurement, as well as the CPO role, will definitely change in the 21st century.

I believe that the CPO role might evolve and might be a Chief Collaboration Officer role. Or, in the future, as we see the focus is more and more on the business value, a Chief Value Officer role might be the next big step.

Gardner: Kurt, we're hearing a lot from Marcell about virtual enterprises. When we say that a major retailer doesn’t have an inventory, or that a hotel rooms coordinator doesn’t have any beds, we're really now talking about relationships. We're talking about knowledge rather than physical goods. Does that map in some way to the new role of the CPO? How has the virtual enterprise impacted the procurement process?

Albertson: Marcell brought up some great points. Hackett is a quantitative-based organization. Let me share with you some of the insights from a very recent Key Issues Study that we did for 2016. This is a study we do each year, looking forward across the market. We're usually talking with the head of procurement about where the focus is, what’s the priority, what’s going to have the biggest impact on success, and what capabilities they're building out.

Let me start at a high level. A lot of things that Marcell talked about in terms of elevating procurement’s role, and more collaboration and driving more value, we saw it quite strongly in 2015 -- and we see it quite strongly in 2016.

In 2015, when we did our Key Issues Study, the number one objective of the procurement executive was to elevate the role of procurement to what we called a trusted adviser, and certainly you've heard that term before.

We actually put a very solid definition around it, but achieving the role of a trusted adviser, in itself, is not the end-game. It does allow you to do other things, like reduce costs, tap suppliers for innovation, and become more agile as an organization, which was in the top five procurement objectives as well.

Trusted advisor

So when we look at this concept of the trusted adviser role of procurement, just as Marcell said, it's about a lot of the procurement executives across multiple industries who are asking, "How do we change the perception of procurement within the eyes of the stakeholders, so that we can do more higher value type activities?"

For example, if you're focusing on cost, we talk a lot about the quantity of spend influence, versus the quality of spend influence. In fact, in our forum in October, we had a very good discussion on that with our client base.

We used to measure success of the procurement organization by cost savings, but one of the key metrics a lot of our clients would look at is percent of spend influenced by procurement. We have a formal definition around that, but when you ask people, you'll get a different definition from them in terms of how they define spend influence.

What we've realized is that world-class organizations are in the 95 percent range and 90 percent plus on the indirect side. Non world-class procurement organizations are lagging, in the 70 percent range in terms of influence. Where do we go from here? It has to be about the quality of the spend influence.

And what our data shows very clearly is that world-class organizations are involved during the requirements and planning stages with their internal stakeholders much more often than non-world-class organizations. The latter are usually involved either once the supplier has been identified, or for the most part, once requirements are identified and the stakeholder already knows what they want.

In both cases, you're influencing. But in the world-class case, you're doing a much better job of quality of influence, and you can open up tremendous amounts of value. It changes the discussion with your internal stakeholders from, "We're here to go out and competitively bid and help you get the best price," to, "Let’s have a conversation with what you're trying to achieve and, with the knowledge, relationships, and tool sets that we have around the supply markets and managing those supply markets, let us help you get more value in terms of what you are trying to achieve."

We've asked some organizations how we become a trusted adviser, and we've built some frameworks around that. One of the key things is exactly what you just talked about. In fact, we did a forward-looking, 10-year-out procurement 2025 vision piece of research that we published a few months ago, and big data and analytics were key components of that.

When we look at big data, like a lot of the things Marcell already talked about, most procurement groups aren’t very good at doing basic spend analytics, even with all the great solutions and processes that are out there. Still, when we look out in the market, there are a lot of companies that don't have line-item-level detail, or they don't have 90 percent or 95 percent-plus data quality with respect to spend analytics.

We need to move way beyond that for procurement to really elevate its role within the organization. We need to be looking at all of the big data that’s out there in the supply networks, across these supply networks, and across a lot of other sources of information. You have PDAs and all kinds of information.

We need to be constructively pulling that information together in a way that then allows us to marry it up with our internal information, do more analysis with that, synthesize that data, and then turn it over and provide it to our internal stakeholders in a way that's meaningful and insightful for them, so that they can then see how their businesses are going to be impacted by a lot of the trends out in the supply markets.

Transformational impact

This year, we asked a question that I thought was interesting. We asked which trends will have the greatest transformational impact on the way procurement performs its job over the next decade. I was shocked. Three out of the top five have to do with technology: predictive analytics and forecasting tools, cloud computing, and mobility; the other two are the global economy and the millennial workforce.

Mobility, predictive analytics, forecasting, and cloud computing are in the top five, along with global economy and the millennial workforce, two other major topics that were in our forward-looking procurement 2025 paper.

When we look at the trend that’s going to have the greatest transformational impact, it's predictive analytics and forecasting tools in terms of how procurement performs its job over the next 10 years. That’s big.

Consider the fact that we aren’t very good at doing the basics around spend analytics right now. We're saying that we need to get a lot better to be able to predict what’s going to happen in the future in terms of budgets, based on what we expect to happen in supply markets and economies.

We need to put in the hands of our stakeholders tool sets that they can then use to look at their business objectives and understand what’s happening in the supply market and how that might impact it in two to three years. That way, when you look at some of the industries out there, when your revenue gets cut in more than half almost within a year, you have a plan in place that you can then go execute on to take out cost in a strategic way as opposed to just taking a broad axe and trying to take out that cost.

Vollmer: I couldn't agree more with what Kurt said about the importance of the top priorities today. It's very important also to ask what you want to do with the data. First of all, you need technology. You need to get access to all the different sources of information that you have in a company.

We see today how difficult it is. I could echo what Kurt said about the challenges. A lot of procurement functions aren't even capable of getting the basic data to drive procurement, to do spend analytics, and then to see that it really links this to supply-chain data. In the future this will definitely change.

Good time to purchase

When you think about what you can do with the data through predictive analytics, you can say, "This is a good time to buy, based on the cycle we've seen over this time frame." That gives you a good moment to make a purchase decision and go to the market.

And what do you need to do that? You need the right tools, spend-visibility tools, and access to the data to drive end-to-end transparency on all the data that you have, for the entire source-to-pay process.
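As a toy illustration of the kind of cycle-based buying signal Vollmer describes, the sketch below uses made-up monthly price history for one commodity and flags months whose historical average falls below the overall average as candidate buying windows. It is purely illustrative and not a description of any SAP Ariba capability.

```python
# Toy sketch: flag candidate buying windows from historical price cycles.
# The price data and the "below overall average" rule are invented for illustration.
from collections import defaultdict
from statistics import mean

# (year, month, unit_price) observations for one commodity -- made-up numbers.
history = [
    (2014, 1, 102), (2014, 4, 96), (2014, 7, 91), (2014, 10, 99),
    (2015, 1, 104), (2015, 4, 95), (2015, 7, 89), (2015, 10, 101),
]

prices_by_month = defaultdict(list)
for _year, month, price in history:
    prices_by_month[month].append(price)

overall_avg = mean(price for _, _, price in history)

for month in sorted(prices_by_month):
    month_avg = mean(prices_by_month[month])
    signal = "good time to buy" if month_avg < overall_avg else "hold"
    print("month {:2d}: avg {:6.1f} vs overall {:6.1f} -> {}".format(
        month, month_avg, overall_avg, signal))
```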

Gardner: Another thing that we're expecting to see more of in 2016 is collaboration between procurement inside an organization and suppliers -- finding new ideas for how to do things, whether it’s marketing or product design.

Kurt, do you have any data that supports this idea that this is not just a transaction, that there is, in fact, collaboration between partners, and that that can have quite an impact on the role and value that the procurement officer and their charges bring back to their companies?

Albertson: Let me tie it into the conversation that we've been having. We just talked about a lot of data and analytics and putting that in the hands of procurement folks, so that they can then go and have conversations and really be advisers in terms of helping enable business strategies, as opposed to just looking at historical spend cost analysis, for example. That helps procurement category managers raise their game and really be perceived as adding more value, becoming this trusted adviser.

Hackett Group works with hundreds of Global 1000 organizations, and probably still one of the most common discussions we have, even in the on-site training support that we do, is around strategic category management. It's switching the game from strategic sourcing, which we view as a step-by-step process that ends in a competitive bid, with aggregation of spend and the award of a contract, to a more formal category-management framework.

That provides a whole set of broader value levers that you can pull to drive value, including supplier relationship management (SRM), which includes working with suppliers to innovate, and impacting a much broader set of value objectives that our stakeholders have, including spend cost reduction, but not only cost reduction.

We see such a level of interest in category management today. In our Key Issues Study in 2016, when we look at the capability building that organizations are rolling out, we've been seeing this shift from strategic sourcing to category management.

Strategic sourcing as a capability was always number one. It still is, but now number two is this category management framework. Think of those two as bookends, with category management being a much more mature framework than just strategic sourcing.

Category management

Some 80 percent of companies said category management is a key capability that they need to use to drive procurement’s objectives, and that’s because they're impacting a broader set of value objectives.

Now, the value levers they're pulling are around innovation and SRM. In fact, if you look at our 2016 Key Issues Study again, tapping supplier innovation is actually a little bit further on down the list, somewhere around 10.

When we look at all the things that are there, it's actually ninth on the list, with 55 percent of procurement executives saying it's of critical or major importance for us.

The interesting thing, though, is that if you compare where that is in 2015 versus 2016, in 2016 it moves nearly into the top three in terms of where significantly more focus is being placed as a key capability. SRM has been a hot topic for our clients for a long time, but this tells us that it's getting more and more important.

We're seeing a lot of organizations still with very informal SRM, supply innovation frameworks, in place. It’s done within the organization, but it’s done haphazardly by individuals within the business and by key stakeholders. A lot of times, that activity isn't necessarily aligned with where it can drive the most value.

When we work with a company, it's quite common for them to say, "These are our top five suppliers that we want to innovate with." And you ask, "If innovation is your objective, either to drive cost reduction or to help improve the market effectiveness of your products or services and drive greater revenue, whatever the reason you are doing that, are these suppliers going to get you there?"

Probably 7 out of 10 times, people come back to us and say that they picked these suppliers because they were the largest spend impact suppliers. But when you start talking about supplier innovation, they freely admit that there's no way that supplier is going to engage with them in any kind of innovation.

We have to rethink how we look at our supply base and really understand where those suppliers are that can truly move the needle on supplier innovation and engage them through a category-management framework that pulls the value lever of SRM and then track the benefits associated with that.

And as I said, looking at our 2016 Key Issues Study, supplier innovation was the fastest-growing focus objective that we saw when we asked the procurement executives.

Gardner: Marcell, back to you. It sounds as if the idea of picking a supplier is not just a cost equation, but that there is a qualitative part to that. How would you automate and scale that in a large organization? It sounds to me like you need a business network of some sort where organizations can lay out much more freely what it is that they're providing as a service, and then making those services actually hook up -- a collaboration function.

Is that something you're seeing at Ariba as well -- that the business network is helping procurement move from a transaction cost equation to a much richer set of services?

Key role

Vollmer: Business networks play a key role in our business strategy, but also in how we help companies simplify their complexity.

When you reach out to a marketplace, you're looking for things. You're probably also starting discussions and getting additional information. You're not necessarily looking for paint in the automotive industry or the color of a car. Why not get an already painted car as a service at the end?

This is a very simple example, but now think about when you go to the next level on how to evolve and have a technology partnership, where you reach out to suppliers, looking for new suppliers, by getting more and more information and also asking others who have probably already done similar things.

When you do this on a network, you probably get responses from suppliers you wouldn't even have thought of as having capabilities like that. This is a process that, in the future, will continue to successfully aid the transformation to a more value-focused procurement function, and simplicity is definitely key.

You need to run simple. You need to focus on your business, and you need to get rid of the complexity. You can’t have all the information and do everything on your own. You need to focus on your core competencies and help the business in getting whatever they need to be successful, from the suppliers out in the market to ensure you get the best price for the desired quality, and ensure on-time deliveries.

The magic triangle of procurement is not a big secret in the procurement world. Everybody knows that it's not possible to optimize everything. Therefore, you need to find the right mix. You also need to be agile and to work with suppliers in a different way, not only focusing on the price, which a lot of operational, technical procurement functions are used to. You need to focus on what you really want to achieve as a business outcome.

On a network you can get help from suppliers, from the collaboration side also, in finding the right ones to drive business value for your organization.

Gardner: Another major area where we're expecting significant change in 2016 is around the use of procurement as a vehicle for risk reduction. So having this visibility using networks -- elevating the use of data analysis, everything we have talked about, in addition to cost-efficiencies, in addition to bringing innovation to play between suppliers and consumers at the industrial scale -- it seems to me that we're getting insight deeply into supply chains and are therefore able to head off a variety of risks. These risks can be around security, around the ability to keep supply chains healthy and functioning, and even unknown factors could arise that would damage an entire company's reputation.

Kurt, do you have some data, some findings that would illustrate or reinforce this idea that procurement as a function, and CPOs in particular, can play a much greater role in the ability to detect risk and prevent bad things from happening to companies?

Supply continuity risk

Albertson: Again, I'll go back to the 2016 Key Issues Study and talk about objectives. Reducing supply continuity risk is actually number six on the list, and it’s a long list, and that’s pretty important.

A little bit further down, we see things like regulatory noncompliance risk, which is certainly core. It's certainly more aligned with certain industries than others. So just from our perspective, we see this as certainly number six on the list of procurement 2016 objectives, and the question is what we do about it.

There's another objective that I talked about earlier, which is to improve agility. It's actually number four on the list for procurement 2016 objectives.

I look at risk management and procurement agility going hand in hand. The way data helps support that is by getting access to better information, really understanding where those risks are, and then being able to quickly respond and hopefully mitigate those risks. Ideally, we want to mitigate risks and we want to be able to tap the suppliers themselves and the supply network to do it.

In fact, we attacked this idea of supply risk management in our 2025 procurement study. It's really about going beyond just looking at a particular supplier and looking at all the suppliers that are out there in the network, their suppliers, their suppliers' suppliers, and so on.

But then, it's also tapping all the other partners that are participating in those networks, and using them to help support your understanding and proactively identifying where risk might be occurring, so that you can take action against it.

It’s one of the key cornerstones of our 2025 research. It's about tapping supplier networks and pulling information from those networks and other external sources, pulling that information into some type of solution that can help you manage and analyze that information, and then presenting that to your internal stakeholders in a manner that helps them manage risk better.

And certainly, an organization like SAP Ariba is in a good position to do that. That’s obviously one of the major barriers with this big-data equation. How do we manage and analyze all this data? How do we make sense of it? That's where we see a lot of our clients struggling today.

We have had some examples of clients that have built out an SRM group inside their procurement organization as a center-of-excellence capability purely to pull this information that resides out in the market, whether it’s supplier market intelligence or information flowing from networks and other network partners. Marrying that information with their internal objectives and plans, and then synthesizing that information, lets them put that information in the hands of category managers.

Category managers can then sit down with business leaders and have fact-based opinions about what’s going to happen in those markets from a risk perspective. We could be talking about continuity of supply, pricing risks and the impact on profitability, or what have you. Whatever those risks are, you're able to use that information. It goes back to elevating the roles of trusted advisor. The more information and insight you can put into their hands the better.

The indirect side

Obviously, when we look at some of the supply networks, there's a lot of information that can be gleaned out there. Think about different buyers that are working with certain suppliers in getting information to them on supply risk performance. To be frank, a lot of organizations still don’t do a great job on the indirect side.

There are opportunities, and we're already seeing it in some of these markets, for supply networks to start with the supplier-performance piece of this, tap the network community to provide insight into it, and get help from a risk perspective that can be used to identify where opportunities to manage risk better might occur.

But there are a lot of other sources of information, and it's really up to procurement to try to figure this out with all the sources of big data. Whether it's sensor data, social data, transactional data, operational data, partner data, machine-to-machine (M2M) data, or cloud-services data, there's a lot of information. We have a model that looks at this through three levels of analytics.

The first level of the model is just for recording things and generating reports. The second level is that you're understanding and generating information that then can be used for analytics. Third, you're actually anticipating. You have intelligence and you're moving towards more real-time analytics so that you can be quicker in responding to potential risk.

I mentioned this idea of agility as being key on the procurement executive’s list. Agility can be in many things, but one of the things that it means with respect to risk is that you can’t avoid every risk event. Some risk events are going to happen. There's nothing you're going to do about them, but you can proactively make plans for when those risk events do occur, so that you have a well thought-out plan based on analytics to execute in order to minimize the impact of that risk.

Time and time again, when we look at case studies and at the research that's out there, those organizations that are much more agile in responding to the risks they can't avoid minimize the impact of those risks significantly compared to others.

Gardner: As we look ahead to 2016, we're certainly seeing a lot on the plate for the procurement organization. It looks like they're facing a lot more technology issues, they're facing change of culture, they're thinking about being a networked organization. Marcell, how do you recommend that procurement professionals prepare themselves? What would you recommend that they do in order to meet these challenges in 2016? How can they be ready for such a vast amount of change?

Vollmer: Procurement organizations need to ensure that they really help the business as much as possible, and also evolve to the next level for their own procurement functions. Number one is that procurement functions need to see that they have the right organizational setup in place. That setup needs to fit the overall organizational and line-of-business structure that a company has.

The second component, which I think is very important, is to have an end-to-end focus on the process side. Source-to-pay is a clearly defined term, but it's a little bit different in all companies. When you really want to optimize, when you really want to streamline your process, you want to use business networks and strategic sourcing tools, as well as run transactions at a highly automated level to leverage the automation potential of purchase-order and invoice automation, for example.

One defined process

Then, you need to ensure that you have one defined process, and you need to have the supporting systems covering all the different parts of the process. This needs to be highly integrated, as well as integrated into your entire IT landscape.

Finally, you also need to consider change management. This is a most important component: it is how you help the buyers in your organization transform and evolve to the next level, into a more strategic procurement function.

As Kurt said about the data, if you don't have some basic data, you're very far away from driving predictive analytics and prescriptive guidance. Therefore, you need to ensure that you also invest in your talent and that you drive the change-management side.

These are the three components that I would see in 2016. This sounds easy, but I've talked to a lot of CPOs. This journey might take a couple of years, but procurement doesn't have a lot of time. We need to see now in procurement that we define the right measures, the right actions, to ensure that we can help the business and also create value.

As was already mentioned, this needs to go beyond just creating procurement savings. I believe that this concept is here to stay in the future. I think the value is what counts, what you can create.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Tags:  BriefingsDirect  business network  Dana Gardner  Interarbor Solutions  Kurt Albertson  Marcell Vollmer  procurement  SAP Ariba  The Hackett Group 

 

The Open Group president, Steve Nunn, on the inaugural TOGAF User Group and new role of EA in business transformation

Posted By Dana L Gardner, Monday, January 18, 2016
Updated: Monday, January 18, 2016

The next BriefingsDirect thought leadership interview explores a new user group being formed around TOGAF, The Open Group standard, and how this group will further foster the practical use of TOGAF for effective and practical business transformation.

The discussion, which comes in conjunction with The Open Group San Francisco 2016 event on January 25, sets the stage for the next chapter in enterprise architecture (EA) for digital business success.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about the grassroots ecosystem building around transformational EA, we're joined by the President and CEO of The Open Group, Steve Nunn. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we get to the TOGAF User Group news, let’s relate what’s changed in the business world and why EA and frameworks and standards like TOGAF are more practical and more powerful than ever.

Nunn: One of the keys, Dana, is that we're seeing EA increasingly used as a tool in business transformation. Whereas in the past, maybe in the early adoptions of TOGAF and implementations of TOGAF, it was more about redesigning EA, redesigning systems inside an organization more generally. Nowadays, with the need to transform businesses for the digital world, EA has another more immediate and more obvious appeal.

It’s really around an enablement tool for companies and organizations to transform their businesses for the digital world, specifically the worlds of the Internet of Things (IoT), big data, social, mobile, all of those things which we at The Open Group lump into something we call Open Platform 3.0, but it really is affecting the business place at large and the markets that our member organizations are part of.

Gardner: TOGAF has been around for quite a while. How old is TOGAF now?

Nunn: The first version of TOGAF was published in 1993, so it's been quite some time. For a little while, we published a version every year. Once we got to Version 7.0, the refreshes and the new versions came a bit slower after that.

We're now at Version 9.1, and there is a new version being worked on. The key for TOGAF is that we introduced a certification program around it, both for tools that help people implement TOGAF and for the practitioners, the individuals who are actually using it. We did that with Version 8.0, and then we moved to what we consider, and the marketplace certainly considers, to be an improved version with TOGAF 9.0, making it an exam-based certification. It has proved to be very popular indeed, with more than 50,000 certified individuals under that program to date.

Gardner: Now the IT world, the business world, many things about these worlds have changed since 1993. Something that comes to mind, of course, is the need to not just think about architecture within your organization, but how that relates across boundaries of many organizations.

I sometimes tease friends who are Star Trek fans that we have gone from regular chess to 3-D chess, and that’s a leap in complexity. How does this need to better manage Boundaryless Information Flow make EA and standards like TOGAF so important now?

Common vocabulary

Nunn: With the type of change that you talked about and the level of complexity, what standards like TOGAF and others bring is commonality and ability to make architecting organizations a little bit easier; to give it all a bit more structure. One of the things that we hear is most valuable about TOGAF, in particular, is the common vocabulary that it gives to those involved in a business transformation, which obviously involves multiple parts of an organization and multiple partners in a group of organizations, for example.

So, it’s not just for enterprise architects. We're hearing increasingly about a level of training and introductory use of TOGAF at all levels of an organization as a means of communicating and having a common set of terminology. So everyone has the same expectation about what particular terms mean. With added complexity, we need things to help us work through that and divide up the complexity into different layers that we can tackle. EA and TOGAF, in particular, are proving very popular for tackling those levels of complexity.

Gardner: So in the next chapter, these things continue to evolve, react to the market, and adjust. We're hearing that there is news at the event, the January 25 event in San Francisco, around this new user group. Tell me why we're instituting a user group associated with TOGAF at this point?

Nunn: It's going to be the first meeting of a TOGAF User Group, and it's something we have been thinking about for some time, but the time seems to be now. I've alluded to the level of popularity of TOGAF, but it really is becoming very widely used. What users of TOGAF are looking for is how to better use it in their day jobs. How can they make it effective? How can they learn from what others have done, both good and bad, the things to try and the things not to try, or rather, the things that worked and the things that didn't work? That isn't something that we've necessarily offered, apart from a few conference sessions at previous events.

So this really is about getting a broader community around TOGAF, and not just those members of the Architecture Forum, which is our particular forum that advances the TOGAF standard. It's really to engage the wider community, both those who are certified and those who aren't certified, as a way of learning how to make better and more effective use of TOGAF. There are a lot of possibilities for what we might do at the meeting, and a lot of it will depend on what those who attend would like to cover.

Gardner: Now, to be clear, any standard has a fairly rigorous process by which the standard is amended, changed, or evolves over time. But we're talking about something separate from that. We're talking about perhaps more organic information flow, sharing, bringing points into that standard’s process. Maybe you could clarify the separation, the difference, the relationship between a standard’s adoption and a user group's input.

Nunn: That’s the key point, Dana. The standard will get evolved by the members of The Open Group, specifically the members of The Open Group Architecture Forum. They are the ones who have evolved it this far and are very actively working on a future version. So they will be the ones who will ultimately get to propose what goes in and ultimately vote on what goes in.

Where the role of the user community, both members and non-members -- but specifically the opportunity for non-members -- comes in is being able to give their input, to put forward ideas about areas where maybe TOGAF might be strengthened or improved in some way. Nobody pretends it's perfect as you use it. It has evolved over time and it will evolve in the future. But hearing from those who actually use TOGAF day to day, we might get, certainly from The Open Group point of view, some new perspectives, and those perspectives will then get passed on through us to the members of the Architecture Forum.

Many of those we expect to attend the event anyway. They might hear it for the first time, but certainly we would spend part of the meeting looking at what that input might be, so that we have something to pass on to them for consideration in the standard.

This is the first time we've offered nonmembers a real opportunity, not necessarily to decide what goes into the standard, but certainly a greater degree of influence.

It's somewhat of a throwback to the days where user groups were very powerful in what came out of vendor organizations. I do hope that this will be something that will enable everyone to get the benefit of a better overall standard.

Past user groups

Gardner: I certainly remember, Steve, the days when vendors would quake in their boots when user meetings and groups came up, because they had such influence and impact. They both benefited each other. The vendors really benefited by hearing from the user groups and the user groups benefited by the standards that could come forth and vendor cooperation that they basically demanded.

I recall, at the last Open Group event, the synergy discussions around Zachman, and other EA frameworks. Do you expect that some of these user group activities that you're putting forth will allow some of that cross pollination, if you will, people who might be using other EA tools and want to bring more cooperation and collaboration across them?

Nunn: I would certainly expect that to happen. Our position at The Open Group, and we've said it consistently over the years, is that it’s not "TOGAF or," it’s "TOGAF and." The reality is that  most organizations, the vast majority, are not just going to take TOGAF and let it be everything they use in implementing their EAs.

So the other frameworks are certainly relevant. I expect there to be some interest in tools, as well as frameworks. We hear that quite a lot, suggestions of what good tools are for people at different stages of maturity and their implementation of the EA. So, I expect a lot of discussion about the other thoughts or the other tools in the toolbox of an EA to come up here.

Gardner: So user groups serve to bring more of an ecosystem approach, voices from disparate parties coming together; that sounds very powerful. Now this is happening on January 25. This is a free first meeting. Is that correct? And being in San Francisco, of course, it's within a couple of hours' drive of a lot of influential users, start-ups, the VC community, vendors, and service providers. Tell us a little bit about why people who are within quick access to the Bay Area might consider coming to this on January 25.

Nunn: That’s another reason, the location of our next event. We were first thinking this is the right time to do a first TOGAF User Group, because you see there are a lot of users of TOGAF in the area or within a few hours of it. What people would get out of it is the chance to hear a bit more about how TOGAF is used by others, case studies, what’s worked, what hasn’t worked, the opportunity to talk directly with people, whether it’s through networking or actually in the sessions in the user group meeting.

We're trying not to put too much rigid structure around those particular sessions, because otherwise we won't be able to get the most benefit out of them. So it's really what they want to get out of it that will probably be achievable. The point of view of The Open Group is that it's about getting that broader perspective for the attendees, learning useful tips and tricks, learning from the experience of others, and learning a bit more about The Open Group and how TOGAF has evolved.

This is a key point. TOGAF is so widely used now, and globally. We have quite a few members in The Open Group: more than 350 organizations participate in some way in the Architecture Forum, and more in The Open Group as a whole.

But there's obviously a much wider community of those who are using it. Hearing more about how it has developed, what the processes are inside The Open Group, might make them feel good about the future of something that they clearly have some investment in. Hopefully, it might even persuade a few of those organizations to join and influence from the inside.

Gardner: Now, there's more information about the user group at www.opengroup.org. You're meeting on January 25 at 9:30 a.m. Pacific Time at the Marriott Union Square right in the heart of San Francisco. But this is happening in association with a larger event. So tell us about the total event that's happening between January 25 and 28.

Quarterly events

Nunn: This is part of one of our quarterly events that we've been running for a lot of years now. They generally take the form of plenary sessions that are open to anyone, and also member meetings, where the members of the various Open Group forums get together to progress the work that they do virtually. But it's to really knuckle down and progress some of it face-to-face, which, as we all know, is generally a very productive way of working.

Apart from the TOGAF User Group, we have on the agenda sessions on the Digital Business Strategy and Customer Experience, which is an activity that's being driven inside our Open Platform 3.0 Forum, as a membership activity, but this is really to open that up to a wide audience at the conference. So, we'll have people talking about that.

Open Platform 3.0 is where the convergence of technologies like cloud, social computing, mobile computing, big data, and IoT all come together. As we see it, our goal is for our members to create an Open Platform 3.0 Standard, which is basically a standard for a digital platform, so that enterprises can more easily use these technologies and get the benefit of them. There will be quite a bit of focus on Open Platform 3.0.

The other big thing that is proving very popular for us, which will be featured at the conference is the Open Group IT4IT Reference Architecture, and there is a membership activity, the IT4IT Forum. They're working on standards. We published the first version of that reference architecture at our last quarterly conference, which was in Edinburgh in October last year.

There has been a lot of interest in it, and it's really a standard for running the business of IT. Oftentimes, IT is just seen as doing its own thing and not really part of the business. But the reality nowadays is that whoever is running the IT, be it the CIO or whatever other individual, to be successful they have to not just run IT as a business, with the usual business principles of return on investment, etc., but they have to be seen to be doing so. This is a reference architecture that's not specific to any industry and that provides a guide for how to go about doing that.

We're quite excited about it. There has been a lot of interest in it so far, and we are working on a certification program for IT4IT that we will be launching later this year, hopefully at our next quarterly event in London in April.

Gardner: I'll just remind our listeners and readers that we're going to be doing some separate discussions and sharing with them on the IT4IT Reference Architecture. So please look for that coming up.

Getting back to the event, Steve, I've attended many of these over the years and I find a lot of the discussions around security, around specific markets like healthcare and government really powerful and interesting. Is there anything in particular about this conference that you're particularly interested in or looking forward to?

Nunn: The ones I've already spoken to are the ones that I'm personally most looking forward to. We'll be having sessions on healthcare and security, as you say.

In the security area it’s worth calling out that one of the suggestions that we've had about TOGAF -- I won’t call it criticism, but one of the suggestions for future versions -- is that TOGAF is a bit light on security. It could do with beefing up that particular area.

The approach that we've taken this time, which people attending the conference will hear about, is that we have actually got the security experts to say what we need to cover in TOGAF, in the next version of TOGAF from a security point of view. Rather than having the architects include what they know about security, we have some heavyweight security folks in there, working with the Architecture Forum, to really beef up the security aspect. We'll hear a bit more about that.

Customer experience

Gardner: I also see that customer experience, which is closely aligned with user experience, is a big part of the event this year. That’s such a key topic these days for me, because it sort of forms a culmination of Platform 3.0. When you can pull together big data, hybrid cloud architectures, mobile enablement and reach, you can start to really do some fantastic new things that just really couldn’t have been done before when it comes to that user experience, real-time adaptation to user behaviors, bringing that inference back into a cloud or a back-end architecture, and then bringing back some sort of predictive or actionable result.

Please flesh out a bit more for us about how this user experience and customer experience is such a key part of the output, the benefit, the value, and the business transformation that we get from all these technical issues that we've discussed; this is sort of a business issue.

Nunn: You're absolutely right. It’s when we start providing a better experience for the customers overall and they can get more out of what the organizations are offering that everybody wins.

What we're trying to do from the organizational side is focus on what you can do to look at it from the customers' point of view, meet their expectations, and start to evolve from there.

The group that we have working on this inside The Open Group is coming at it from the point of view that some of these new technologies are actually very scary for organizations, because they are forced to transform. The expectations of customers now are completely different. They expect to be able to get things on their cellphones or their tablets, or whatever device they might be using. That's quite a big shift for a lot of organizations, and that's not even getting into some of the areas of IoT, which promises to be huge.

To me, it’s interesting from the point of view that it’s pretty business-driven. The technologies are there to be taken advantage of or to actually be very disruptive. So the business needs to know at a fairly early stage what those customer expectations are and take advantage of the new technologies that are there. That’s the angle that we are coming from inside The Open Group on that.

Some of the main participants in that group are actually coming from the telco world, where things have obviously changed enormously over the last few years. So that one is going to move quite quickly.

Gardner: It certainly seems that the ability to have a boundaryless architecture is essential to that customer experience benefit. You seem to be in the right place at the right time for that.

But the event in San Francisco also forms a milestone for you, Steve. You're now in your first full event as President and CEO of The Open Group, having taken over from Allen Brown last Fall. Tell us a little bit about your earlier roles within the standards organization and a bit more about yourself perhaps for those folks who are not yet familiar with you?

Quite different

Nunn: Yes, it will be quite different this time around. I've been with The Open Group for 22 years now. I was originally hired as General Counsel, and then fairly quickly moved on to Vice President, Corporate and Legal, and Chief Operating Officer under Allen Brown as CEO. Allen was CEO for 17 years, and I was with him all of that time. It's going to be quite different to have somebody else running the events, but I'm very much looking forward to it.

From my point of view, it’s a great honor to be leading The Open Group and its members into our next phase of evolution. The events that we hold are one small part of it, but they're a very important part, particularly these quarterly ones. It’s where a lot of our customers and members come together in one place, and as we have heard, there will be some folks who may not have been involved with one of our events before through the user group, so it’s pretty exciting.

I'm looking forward to building on the very solid foundation that we have and some of the great work activities that we mainly have ongoing inside The Open Group.

Don't expect great change from The Open Group, just more of the same good stuff that we've been working on, while recognizing that things are obviously changing very rapidly around us and that we need to be able to provide value in that fast-changing world, which we are very confident we can.

Gardner: As an observer of the market, but also of The Open Group, I'm glad to hear that you're continuing on your course, because the world owes you in many ways. Things you were talking about 5 or 10 years ago have become essential. You were spot on in how you saw the world changing for IT and its influence on business, and vice versa.

More than ever, it seems that IT and EA are destiny for businesses. So I'm glad to hear that you're taking the long view, and the future seems very bright for your organization as the tools, approaches, mentality, and philosophy that you have been espousing become essential to doing some of the things we have been discussing, like Platform 3.0, customer experience, and IoT.

In closing, let's remind our audience that you can register for the event at The Open Group website, www.opengroup.org. The first day, January 25, includes the free, inaugural TOGAF user group meeting, and it all happens at the Marriott Union Square in San Francisco, along with the general conference, which runs from January 25 to 28.

Any last thoughts Steve, as we close out, in terms of where people should expect The Open Group to go, or how they can become perhaps involved in ways that they hadn’t considered before?

Good introduction

Nunn: Attending one of our events is a really good introduction to what goes on in The Open Group. For those who haven’t attended one previously, you might be pleasantly surprised.

If I had to pick one thing, I would say it's the breadth of activities at these events. It's very easy for an organization like The Open Group to be known for one thing or a very small number of things, whether it's UNIX originally or EA more recently, but there really is a lot going on beyond those.

Getting exposure to that at an event such as this, particularly in a location as important to the industry and as beautiful as San Francisco, is a great opportunity. So if you're on the fence about going, jump over the fence and try us out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Tags:  BriefingsDirect  Dana Gardner  enterprise architecture  Interarbor Solutions  Steve Nunn  The Open Group  The Open Group Event  TOGAF  TOGAF User Group 


Learn how SKYPAD and HPE Vertica enable luxury brands to gain rapid insight into consumer trends

Posted By Dana L Gardner, Thursday, January 14, 2016

The next BriefingsDirect big-data use case leadership discussion explores how retail luxury goods market analysis provider Sky I.T. Group has upped its game to provide more buyer behavior analysis faster -- and with more user depth.

Learn how Sky I.T. changed its data analysis platform infrastructure to Hewlett Packard Enterprise (HPE) Vertica -- and why that has helped solve its challenges around data variety, velocity, and volume and make better insights available across the luxury retail marketplace.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To share how retail intelligence just got a whole lot smarter, we welcome Jay Hakami, President; Dane Adcock, Vice President of Business Development, and Stephen Czetty, Vice President and Chief Technology Officer, all at Sky I.T. Group in New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's driving the need for greater and better big-data analysis for luxury retailers? Why do they need to know more, better, faster?

Adcock: Well, customers have more choices. As a result, businesses need to be more agile and responsive and fill the customer's needs more completely or lose the business. That's driving the entire industry into practices that mean shorter times from design to shelf in order to be more responsive.

It has created a great deal of gross margin pressure, because there's simply more competition and more selections that a consumer can make with their dollar today.

Gardner: Is there anything specific to the retail process around luxury goods that is even more pressing when it comes to this additional speed?

Adcock: Yes. The downside to making mistakes in terms of designing a product and allocating it in the right amounts to locations at the store level carries a much greater penalty, because it has to be liquidated. There's not a chance to simply cut back on the supply chain side, and so margins are more at risk in terms of making the mistake.

Ten years ago, from a fashion perspective, it was about optimizing the return and focusing on winners. Today, you also have to plan to manage and optimize the margins on your losers as well. So, it's a total package.

Gardner: So, clearly, the more you know about what those users are doing or what they have done is going to be essential. It seems to me, though, that we're talking about a market-wide look rather than just one store, one retailer, or one brand.

How does that work, Jay? How do we get to the point where we've been able to gather information at a fairly comprehensive level, rather than cherry-picking or maybe getting a non-representative look based on only one organization’s view into the market?

Hakami: With SKYPAD, what we're doing is collecting data from the supplier and from the wholesaler, as well as from their retail stores, their wholesale business, and their dot-com, meaning the whole omnichannel. When we collect that data, we cleanse it to make sure it's meaningful to the user.

Now, we're dealing with a connected world where the retailer, wholesalers, and suppliers have to talk to one another and plan together for the buying season. So the partnerships and the insight that they get into product performance are extremely important, as Dane mentioned, in terms of the gross margin and in terms of the sell-through information. SKYPAD basically provides that intelligence, that insight, into this retail/wholesale world.

Gardner: Isn’t this also a case where people are opening up their information and making it available for the benefit of a community or recognizing that the more data and the more analysis that’s available, the better it is for all the participants, even if there's an element of competition at some point?

Hakami: That's correct. The retail business likes to share the information with their suppliers, but they're not sharing it across all the suppliers. They're sharing it with each individual supplier. Then, you have the market research companies who come in and give you aggregation of trends and so on. But the retailers are interested in sell-through. They're interested in telling X supplier, "This is how your products are performing in my stores."

If they're not performing, then there's going to be a mark down. There's going to be less of a margin for you and for us. So, there's a very strong interest between the retailer and a specific supplier to improve the performance of the product and the sell-through of those products on the floor.

Gardner: Before we learn more about the data science and dealing with the technology and business case issues, tell us a little bit more about Sky I.T. Group, how you came about, and what you're doing with SKYPAD to solve some of these issues across this entire supply chain and retail market spot.

Complex history

Hakami: I'll take the beginning. I'll give you a little bit of the history, Dana, and then maybe Dane and Stephen can jump in and tell you what we are doing today, which is extremely complex and interesting at the same time.

We started with SKYPAD about eight years ago. We found a pain point within our customers where they were dealing with so many retailers, as well as their own retail stores, and not getting the information that they needed to make sound business decisions on a timely basis.

We started with one customer, which was Theory. We came to them and we said, "We can give you a solution where we're going to take some data from your retailers, from your retail stores, from your dot-com, and bring it all into one dashboard, so you can actually see what’s selling and what’s not selling."

Fast forward, we've been able to take not only EDI transactions, but also retail portals. We're taking information from any format you can imagine -- from Excel, PDF, merchant spreadsheets -- bringing that wealth of data into our data warehouse, cleansing it, and then populating the dashboard.

So today, SKYPAD is giving a wealth of information to the users by the sheer fact that they don’t have to go out by retailer and get the information. That’s what we do, and we give them, on a Monday morning, the information they need to make decisions.

Dane, can you elaborate more on this as well?

Adcock: This process has evolved from a time when EDI was easy, because it was structured, but it was also limited in the number of metrics that were provided by the mainstream. As these business intelligence (BI) tools have become more popular, the distribution of data coming from the retailers has gotten more ubiquitous and broader in terms of the metrics.

But the challenge has moved from reporting to identification of all these data sources and communication methodologies and different formats. These can change from week to week, because they're being launched by individuals, rather than systems, in terms of Excel spreadsheets and PDF files. Sometimes, they come from multiple sources from the same retailer.

One of our accounts would like to see all of their data together, so they can see trends across categories and different geographies and markets. The challenge is to bring all those data sources together and align them to their own item master file, rather than the retailer’s item master file, and then be able to understand trends, which accounts are generating the most profits, and what strategies are the most profitable.
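To make that alignment concrete, here is a minimal sketch of mapping retailer-reported items back to a client's own item master so sell-through can be compared across retailers. The mapping table, field names, and sample rows are hypothetical and invented purely for illustration.

```python
# Hypothetical illustration: align retailer-reported rows to a client's item master.
# All identifiers and sample rows are invented for this sketch.

# Mapping from (retailer, retailer_item_code) to the client's own style number.
ITEM_MASTER_MAP = {
    ("retailer_a", "A-1001"): "STYLE-778",
    ("retailer_b", "887-XL"): "STYLE-778",   # same style, coded differently by another retailer
}

def align_row(retailer, row):
    """Return the row keyed by the client's style number, or flag it for review."""
    style = ITEM_MASTER_MAP.get((retailer, row["item_code"]))
    if style is None:
        return {"status": "unmapped", "retailer": retailer, **row}
    return {"status": "aligned", "style": style, "units_sold": row["units_sold"],
            "store": row["store"], "week": row["week"]}

print(align_row("retailer_a", {"item_code": "A-1001", "units_sold": 12,
                               "store": "SF-01", "week": "2016-W01"}))
```

Once every retailer feed is expressed against the same style numbers, trends across accounts, categories, and geographies can be compared directly.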

It's been a shifting model from the challenge of reporting all this data together, to data collection. And there's a lot more of it today, because more retailers report at the UPC level, size level, and the store level. They're broadcasting some of this data by day. The data pours in, and the quicker they can make a decision, the more money they can make. So, there's a lot of pressure to turn it around.

Gardner: When you're putting out those reports on Monday morning, do you get queries back? Is this a sort of a conversation, if you will, where not only are you presenting your findings, but people have specific questions about specific things? Do you allow for them to do that, and is the data therefore something that’s subject to query?

Subject to queries

Adcock: It’s subject to queries in the sense that they're able to do their own discovery within the data. In other words, we put it in a BI tool, it’s on the web, and they're doing their own analysis. They're probing to see what their best styles are. They're trying to understand how colors are moving, and they're looking to see where they're low on stock, where they may be able to backfill in the marketplace, and trying to understand what attributes are really driving sales.

But of course, they always have questions about the completeness of the data. When things don't look correct, they have questions about it. That drives us to do analysis on the fly, on demand, and deliver responses such as, "All your stores are there, all of your locations, everything looks normal." Or perhaps there seem to be some flaws or things in the data that don't actually look correct.

Not only do we need to organize it and provide it to them so that they can do their own broad, flexible analysis, but they're coming back to us with questions about how their data was audited. And they're looking for us to do the analysis on the spot and provide them with satisfactory answers.

Gardner: Stephen Czetty, we've heard about the use case, the business case, and how this data challenge has grown in terms of variety as well as volume. What do you need to bring to the table from the data architecture to sustain this growth and provide for the agility that these market decision-makers are demanding?

Czetty: We started out with an abacus, in a sense, but today we collect information from thousands of sources literally every single week. Close to 9,000 files will come across to us, and we'll process them correctly and sort them out, figuring out what client they belong to and so forth, but the challenge is forever growing.

We needed to go from older technology to newer technology, because our volumes of data are increasing while the amount of time we have to take the data in stays static.

So we're quite aware that we have a time limit. We found that HPE Vertica is a platform that lets us collect the data into a coherent structure very rapidly, compared with our legacy systems.

It allows us to treat the data in a truly vertical way, although that has nothing to do with the application or the database itself. In the past we had to deal with each client separately. Now we can deal with each retailer separately and just collect their data for every single client that we have. That makes our processes much more pipelined and far faster in performance.

The secret sauce behind that is the ability in our Vertica environment to rapidly sort out the data -- where it belongs, who it belongs to -- calculate it out correctly, put it into the database tables that we need to, and then serve it back to the front end that we're using to represent it.

That's why we've shifted from a traditional database model to a Vertica-type model. It's 100 percent SQL for us, so it looks the same for everybody who is querying it, but under the covers we get tremendous performance and compression and lots of cost savings.

Gardner: For some organizations that are dealing with the different sources and  different types of data, cleansing is one problem. Then, the ability to warehouse that and make it available for queries is a separate problem. You've been able to tackle those both at the same time with the same platform. Is that right?

Proprietary parsers

Czetty: That's correct. We get the data, and we have proprietary parsers for every single data type that we get. There are a couple of hundred of them at this point. But all of that data, after parsing, goes into Vertica. From there, we can very rapidly figure out what is going where and what is not going anywhere, because it’s incomplete or it’s not ours, which happens, or it’s not relevant to our processes, which happens.

We can sort out what we've collected very rapidly and then integrate it with the information we already have, or insert new information if it's brand-new. Prior to this, we'd been doing much of this by hand, and that's no longer effective with our number of clients growing.
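As a rough illustration of that dispatch-and-merge pattern, the sketch below routes an incoming file to a parser based on its format, drops records that are incomplete or not relevant, and then either updates an existing record or inserts a new one. The parser names and record shape are hypothetical; the actual parsers described above are proprietary and far more involved.

```python
import csv
import io

# Hypothetical parsers keyed by file format. Real feeds also include EDI,
# PDF, merchant spreadsheets, retail portals, and so on.
def parse_csv(raw_bytes):
    return list(csv.DictReader(io.StringIO(raw_bytes.decode("utf-8"))))

PARSERS = {".csv": parse_csv}  # .xlsx, .pdf, .edi parsers would be registered here

def ingest(filename, raw_bytes, warehouse):
    """Parse a file, drop records we can't use, and merge the rest."""
    ext = filename[filename.rfind("."):].lower()
    parser = PARSERS.get(ext)
    if parser is None:
        return {"skipped": filename, "reason": "no parser for format"}
    loaded, rejected = 0, 0
    for record in parser(raw_bytes):
        if not record.get("retailer") or not record.get("item_code"):
            rejected += 1            # incomplete, or not ours
            continue
        key = (record["retailer"], record["item_code"], record["week"])
        warehouse[key] = record      # upsert: update if present, insert if new
        loaded += 1
    return {"file": filename, "loaded": loaded, "rejected": rejected}

warehouse = {}
sample = b"retailer,item_code,week,units_sold\nretailer_a,A-1001,2016-W01,12\n"
print(ingest("monday_feed.csv", sample, warehouse))
```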

Gardner: I'd like to hear more about what your actual deployment is, but before we do that, let’s go back to the business case. Dane and Jay, when HPE Vertica came online, when Steve was able to give you some of these more pronounced capabilities, how did that translate into a benefit for your business? How did you bring that out to the market, and what's been the response?

Hakami: I think the first response was "wow." And I think the second response was, "Wow, how can we do this fast and move quickly to this platform?"

Let me give you some examples. When Steve did the proof of concept (POC) with the folks from HPE, we were very impressed with the statistics we had seen. In other words, going from a processing time of eight or nine hours to minutes was a huge advantage that we saw from the business side, showing our customers that we can load data much faster.

The ability to use less hardware and infrastructure as a result of the architecture of Vertica allowed us to reduce, and to continue to reduce, the cost of infrastructure. These two are the major benefits that I've seen in the evolution of us moving from our legacy to Vertica.

From the business perspective, if we're able to deliver faster and more reliably to the customer, we accomplished one of the major goals that we set for ourselves with SKYPAD.

Adcock: Let me add something there. Jay is exactly right. The real impact, as it translates into the business, is that we have to stop collecting data at a certain point in the morning and start processing it in order to make our service-level agreements (SLAs) on reporting for our clients, because that's when they start their analysis. The retail data comes in staggered over the morning, and it may not all be in by the time we need to shut that collection off.

One of the things that moving to Vertica has allowed us to do is to cut that time off later, and when we cut it off later, we have more data, as a rule, for a customer earlier in the morning to do their analysis. They don’t have to wait until the afternoon. That’s a big benefit. They get a much better view of their business.

Driving more metrics

The other thing it has enabled us to do is drive more metrics into the database and do some processing in the database, rather than in the user tool, which makes the user tool faster and provides more value.

For example, for a metric like age on the floor, we can do the calculation in the background, in the database, and it doesn't impede the response in the front-end engine. We get more metrics calculated in the database rather than in our user tool, and it becomes more flexible and more valuable.
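As a hedged sketch of what pushing such a metric into the database might look like, the snippet below computes a days-on-floor figure at load time rather than in the reporting tool, using the vertica_python driver. The connection details, table, and column names are hypothetical and are not Sky I.T.'s actual schema.

```python
# Hypothetical sketch: compute an "age on the floor" metric inside Vertica at
# load time instead of in the BI front end. Table and column names are invented.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "etl_user", "password": "***", "database": "skypad_demo"}

AGE_ON_FLOOR_SQL = """
    UPDATE weekly_sell_through
       SET age_on_floor_days = DATEDIFF('day', first_receipt_date, CURRENT_DATE)
     WHERE age_on_floor_days IS NULL;
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(AGE_ON_FLOOR_SQL)   # the metric is stored, so the BI tool only reads it
    conn.commit()
finally:
    conn.close()
```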

Gardner: So not only are you doing what you used to do faster, better, cheaper, but you're able to now do things you couldn't have done before in terms of your quality of data and analysis. Is there anything else that is of a business nature that you're able to do vis-à-vis analytics that just wasn't possible before, and might, in fact, be equivalent of a new product line or a new service for you?

Czetty: In the old model, when we got a new client, we had to essentially recreate the processes that we'd built for other clients to match that new client, because we were collecting that data just for that client at that moment.

So 99 percent of it is the same as any other client, but one percent is always different, and it had to be built out. On-boarding a client, as we call it, took us a considerable amount of time -- we are talking weeks.

In the current model, where we're centered on retailers, the only thing that will take us a long time to do in this particular situation is if there's a new retailer that we've never collected data from. We have to understand their methodology of delivery, how it comes, how complex it is and so forth, and then create the logic to load that into the database correctly to match up with what we are collecting for others.

In this scenario, since we've got so many clients, very few new retailers show up; typically it's just a new client on an existing retail chain. So our on-boarding is simplified, because if we're getting Nordstrom's data for client A, we're getting the same exact data for clients B, C, D, E, and F.

Now, it comes through a single funnel and it's the Nordstrom funnel. It’s just a lot easier to deal with, and on-boarding comes naturally.

Hakami: In addition to that, since we're adding more significant clients, the ability to increase variety, velocity, and volume is very important to us. We couldn't scale without having Vertica as a foundation for us. We'd be standing still, rather than moving forward and being innovative, if we stayed where we were. So this is a monumental change and a very instrumental change for us going forward.

Gardner: Steve, tell us about your actual deployment. Is this a single tenant environment? Are you on a single database? What’s your server or data center environment? What's been the impact of that on your storage and compression and costs associated with some of the ancillary issues?

Multi-tenant environment

Czetty: To begin with, we're coming from a multi-tenant environment. Every client had its own private database in the past, because in IBM DB2, we couldn't add all these clients into one database and get the job done. There was not enough horsepower to do the queries and the loads.

We ran a number of databases on a farm of servers, on Rackspace as our hosting system. When we brought in Vertica, we put up a minimal configuration with three nodes, and we're still living with that minimal configuration with three nodes.

We haven't exhausted our capacity on the license by any means whatsoever in loading up this data. The compression is obscenely high for us, because at the end of the day, our data absolutely lends itself to being compressed.

Everything repeats over and over again every single week. In the world of Vertica, that means it only appears once wherever it lives in the database, and the rest of it is magic. Not to get too far into the technology underneath it, but from our perspective, it's just very effective in that scenario.
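A toy illustration of why that repetition compresses so well: a column-oriented store can keep each distinct value once and reference it by position, roughly like the dictionary encoding sketched below. This is a simplification for intuition only, not how Vertica is actually implemented.

```python
# Toy dictionary encoding of a highly repetitive column, for intuition only.
column = ["Nordstrom", "Nordstrom", "Nordstrom", "Saks", "Nordstrom", "Saks"]

# Keep each distinct value once, then store small integer references to it.
dictionary = sorted(set(column))
index = {value: i for i, value in enumerate(dictionary)}
encoded = [index[value] for value in column]

print(dictionary)  # ['Nordstrom', 'Saks']
print(encoded)     # [0, 0, 0, 1, 0, 1]  -- the repeats become cheap integers
```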

Also in our IBM DB2 world, we're using quite costly large SAN configurations with lots of spindles, so that we can have the data distributed all across the spindles for performance on DB2, and that does improve the performance of that product.

However, in HPE Vertica, we have 600 GB drives and we can just pop more in if we need to expand our capacity. With the three nodes, we've had zero problems with performance. It hasn't been an issue at all. We're just looking back and saying that we wish we had this a little sooner.

Vertica came in and did the install for us initially. Then, we ended up taking those servers down and reinstalling it ourselves. With a little information from the guide, we were able to do it. We wanted to learn it for ourselves. That took us probably a day and a half to two days, as opposed to Vertica doing it in two hours. But other than that, everything is just fine. We’ve had a little training, we’ve gone to the Vertica event to learn how other people are dealing with things, and it's been quite a bit of fun.

Now there is a lot of work we have to do at the back end to transform our processes to this new methodology. There are some restrictions on how we can do things, updates and so forth. So, we had to reengineer that into this new technology, but other than that, no changes. The biggest change is that we went vertical on the retail silos. That's just a big win for us.

Gardner: As you know, HPE Vertica is cloud-ready. Is there any benefit to that further down the road where maybe it’s around issues of a spike demand in holiday season, for example, or for backup recovery or business continuity? Any thoughts about where you might leverage that cloud readiness in the future?

Dedicated servers

Czetty: We're already sort of in the cloud with the use of dedicated servers, but in our business, the volume increases in the stores around holidays is not doubling the volume. It’s adding 10 percent, 15 percent, maybe 20 percent of the volume for the holiday season. It hasn’t been that big a problem in DB2. So, it’s certainly not going to be a problem in Vertica.

We've looked at virtualization in the cloud, but with the size of the hardware that we actually want to run, we want to take advantage of the speed and the memory and everything else. We put up pretty robust servers ourselves, and it turns out that in secure cloud environments like we're using right now at Rackspace, it's simply less expensive to do it as dedicated equipment. Spinning up another dedicated node at Rackspace takes about the same time as setting up and configuring a virtual system, a day or so; they can give us another node just like this on our rack.

We looked at the cloud financially every single time that somebody came around and said there was a better cloud deal, but so far, owning it seems to be a better financial approach.

Gardner: Before we close out, looking to the future, I suppose the retailers are only going to face more competition. They're going to be getting more demand from their end users or customers for a better user experience and for information.

We're going to see more mobile devices that will be used in a dot-com world or even a retail world. We are going to start to see geolocation data brought to bear. We're going to expect the Internet of Things (IoT) to kick in at some point where there might be more sensors involved either in a retail environment or across the supply chain.

Clearly, there's going to be more demand for more data doing more things faster. Do you feel like you're in a good position to do that? Where do you see your next challenges from the data-architecture perspective?

Czetty: Not to disparage the luxury industry too much, but at this point, they're not on the bleeding edge on the data collection and analysis side, although they are on the bleeding edge on social media and so forth. We've anticipated that. We've got some clients who are collecting information about their web activities, and we have done analysis to identify customers who present different personas through the different methods they use to contact the company.

We're dabbling in that area, and that's going to grow as the interfaces become more tablet- and phone-oriented. A lot of sales are potentially going to go through social media, and not just the official websites, in the future.

We'll be capturing that information as well. We’ve got some experience with that kind of data that we’ve done in the past. So, this is something I'm looking forward to getting more of, but as of today, we’re only doing it for a few clients.

Well positioned

Hakami: In terms of planning, we're very well-positioned as a hub between the wholesaler and the retailer, the wholesaler and their own retail stores, as well as the wholesaler and their dot-coms. One of the things that we are looking into, and this is going to probably get more oxygen next year, is also taking a look at the relationships and the data between the retailer and the consumer.

As you mentioned, this is a growing area, and the retailers are looking to capture more of the consumer information so they can target-market to them, not based on segment but based on individual preferences. This is again a huge amount of data that needs to be cleansed, populated, and then presented to the CMOs of companies to be able to sell more, market more, and be in front of their customers much more than ever before.

Gardner: That's a big trend that we are seeing in many different sectors of the economy -- that drive for personalization -- and it really is these data technologies that allow it to happen.

Any other thoughts about where the intersection of computer science capabilities and market intelligence demands are coming together in new and interesting ways?

Adcock: I'm excited about the whole approach to leveraging some predictive capabilities alongside the great inventory of data that we've put together for our clients. It's not just about creating better forecasts of demand, but optimizing different metrics, using this data to understand when product should be marked down, what types of attributes of products seem to be favored by different locations of stores that are obviously alike in terms of their shopper profiles, and bringing together better allocations and quantities in breadth and depth of products to individual locations to drive better, higher percentage of full-price selling and fewer markdowns for our clients.

So it’s a predictive side, rather than discovery using a BI tool.

Czetty: Just to add to that, there's the margin. When we talked to CEOs and CFOs five or six years ago and told them we could improve business by two, three, or four percent, they were laughing at us, saying it was meaningless to them. Now, three, four, or five percent, even in the luxury market, is a huge improvement to business. The companies like Michael Kors, Tory Burch, Marc Jacobs, Giorgio Armani, and Prada are all looking for those margins.

So, how do we become more efficient with product assortment, how do we become more efficient with distributing all of these products to different sales channels, and then how do we increase our margins? How do we avoid over-manufacturing, not making those blue shirts for Florida, where they are not selling, and making them for Detroit, where they're selling like hotcakes?

These are the things that customers are looking at, and they must have the tool or tools in place to manage their merchandising and, by doing so, become a lot more agile and a lot more profitable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Tags:  big data  BriefingsDirect  Dana Gardner  Dane Adcock  data analysis  Hewlett Packard Enterprise  HPE  HPE Discover  Interarbor Solutions  Jay Hakami  Sky I.T.  Stephen Czetty 


Is 2016 the year that accounts payable becomes strategic?

Posted By Dana L Gardner, Monday, January 11, 2016

The next BriefingsDirect business innovation thought leadership discussion focuses on the changing role and impact of accounts payable (AP) as a strategic business force.

We’ll explore how intelligent AP is rapidly transforming by better managing exceptions, adopting fuller automation, and implementing end-to-end processes that leverage connected business networks.

As the so-called digital enterprise adapts to a world of increased collaboration, digital transactions, and e-payables management, AP needs to adapt in 2016, too.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of AP as a focal point of automated business services we are joined by Andrew Bartolini, Chief Research Officer at Ardent Partners in Boston, and Drew Hofler, Senior Director of Marketing at SAP Ariba. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Drew, let’s look at the arrival of 2016. We have more things going on digitally, we have a need to improve efficiency, and AP has been shifting -- but how will 2016 make a difference? What should we expect in terms of AP elevating its role in the enterprise?

Hofler: AP is one of those areas that everybody looks at, first and foremost, as a cost center. So when AP looks at what it can do better, it has typically thought about efficiency and cost savings first. That's the plus side of a cost center: saving money by spending less.

But what we've been seeing happen over the last year or so, and what will accelerate in 2016, is that AP is moving from just a cost-saving and efficiency focus to value creation. AP sits at the hub of the three critical elements of working capital -- inventory, receivables, and payables -- and squarely on that last one.

And AP has influence over that, which affects the company's working capital. AP has become very important for companies: by creating efficiencies in the invoice process, it opens up opportunities, and it can affect a company's working capital for the positive going forward. That's going to grow as AP moves beyond the automation that is the foundation and starts seeing the opportunities that come out of it.

Gardner: Andrew, do you see AP also as a digital hub, growing in its role and influence and being able to increase its value beyond cost efficiency into these other higher innovation levels or strategic levels of benefit?

Tracking trends

Bartolini: Yes, absolutely. I've been researching and working in this space for 17 years, doing significant market research over the last 11 years. So I've been tracking the trends and the ebbs and flows of relative interest and investment in AP.

What we've seen in 2015 in some of our most recent research is that there has been a broader focus or a shift away from viewing the AP opportunity as an efficiency one or solely an efficiency one. Let’s automate. Let’s reduce our costs in processing invoices. Let’s reduce our costs in payments.

But what we saw this year for the first time in our research was that the top area of focus, the top business pressure that’s driving investments in AP transformation was the need to get better visibility into all the valuable information that comes across the AP departments or through the AP operation, both on the invoice and the payment side.

That begins to change the conversation. We've talked about the evolution of AP from a strictly back-office, siloed department to an increasing point of collaboration with procurement across the purchase-to-pay (P2P) process, and with treasury from a cash-management perspective. Now, we see it becoming a true intelligence hub, and that's where we've seen some momentum. There's a lot of wind in the sails for AP, really pushing that forward in 2016 and beyond.

Gardner: Andrew, what's driving this? Is it the technology that's now making that data available?

Bartolini: There are a couple of factors underlying this movement. The first is taking the broader perspective within business as a whole. Businesses can no longer allow distinct business functions to operate within silos. They need everybody on the same team, rowing in the same direction. That has forced greater collaboration.

That’s something that we've seen more broadly between procurement and finance over the past couple of years, specifically with the role of the CPO and the CFO. A majority of organizations see a very strong level of collaboration within those two job roles and within their departments as a whole.

That has opened up larger opportunities for AP, which is a more tactical function as it relates to procurement, but by bringing the two groups together, you now have shared resources and shared focus on improving the entire source-to-settle process.

That relationship has driven greater interest, because the opportunities are fantastic for procurement to leverage the value of a more efficient AP process and to be able to see the information that’s there.

As Drew mentioned, by becoming more efficient on the front end of the AP process, organizations are doing a better job in reducing the amount of paper that’s coming in through the front door. They're processing their invoices faster. That's opening up opportunities on the back-end, on the payment side.

So, you have a confluence of those factors and you see newer solutions in the marketplace as well that are really changing the view that AP departments have of what defines a transformation. They're thinking more holistically across the entirety of the AP process, from invoice receipt, all the way through payment and settlement.

Allowing for variables

Gardner: Drew, it seems that over history, once a contract is closed the terms remain fairly rigid, and then there is a simple fulfillment aspect to it. But it sounds like -- as we get more visibility, as we get digitized, and we can automate -- we can handle exceptions better and allow for more variables.

I've heard of instances where the terms can be adjusted, where market forces provide ways in which a deal gets amended on an ongoing basis, whether in payment terms or perhaps in other ancillary issues. Is that what we're seeing, that digital transformation is giving us more opportunity to be flexible, and is that then elevating the role of the AP organization?

Hofler: You make a couple of good points there, and it really springs from what Andrew just said about not having to silo or not staying in that siloed place where AP and procurement are separate or the processes are separate, because what companies have realized, particularly as the digital age has made it possible, is that the procure-to-pay process, the source-to-settle process, is a fundamentally connected one.

Over the years they've operated very disconnectedly, with hand-offs, where procurement does its thing, writes a contract and then hands it off once the purchase order (PO) goes out the door, and then AP takes up the process from there. But in that, there are a lot of disconnects.

When you're able to bring networked systems together to bring visibility across that entire process, now you have the AP group acting in a more strategic manner to deliver value by acting as the value-capture group.

For example, prior to this age that we live in now, a contract would be written; it would have specific terms for specific items and specific prices for specific SKUs, and maybe some volume discounts. AP had no idea about any of that, because these contracts would get signed and then put in a file cabinet or stuck in a PDF file somewhere. So AP went off the invoice that came in.

This is how an entire post-audit recovery industry came about: going in after the fact to try to claw back overpayments, because AP had no visibility into what procurement did.

By bringing these together in a system, on a network, you're able to automatically capture those savings, because AP now has visibility into what's happening inside of that contract, and can ensure on an automated basis that they are paying the right amount. So, it becomes not just a buy-right thing from the procurement side, but a pay-right thing as well, a buy- and pay-right tied together.
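A minimal sketch of that pay-right check, assuming contract prices are visible to AP on the same network: each invoice line is compared against the contracted price before payment, and mismatches are flagged instead of being clawed back after the fact. All supplier names, SKUs, and amounts here are hypothetical.

```python
# Hypothetical pay-right check: compare invoice lines against contracted prices.
CONTRACT_PRICES = {           # (supplier, SKU) -> contracted unit price
    ("acme_supplies", "SKU-100"): 4.25,
    ("acme_supplies", "SKU-200"): 19.90,
}

def check_invoice(supplier, lines, tolerance=0.01):
    """Return exceptions for any line priced above the contract."""
    exceptions = []
    for line in lines:
        contracted = CONTRACT_PRICES.get((supplier, line["sku"]))
        if contracted is None:
            exceptions.append({"sku": line["sku"], "reason": "no contract price on file"})
        elif line["unit_price"] > contracted + tolerance:
            overbilled = (line["unit_price"] - contracted) * line["qty"]
            exceptions.append({"sku": line["sku"], "reason": "over contract price",
                               "overbilled": round(overbilled, 2)})
    return exceptions

invoice = [{"sku": "SKU-100", "unit_price": 4.75, "qty": 100},
           {"sku": "SKU-200", "unit_price": 19.90, "qty": 10}]
print(check_invoice("acme_supplies", invoice))
# [{'sku': 'SKU-100', 'reason': 'over contract price', 'overbilled': 50.0}]
```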

But that's your point about terms. Yes, you have certain terms tied into that contract, but again, that's set at the beginning of a relationship with a supplier. There are lots of opportunities that come up when everybody has visibility into what's going on, into an early-approved invoice for example.

Opportunities for collaboration

There are lots of opportunities that arise for collaboration where maybe the situation has changed a little bit. Maybe a supplier, instead of being paid in 45 days, now would very much like to be paid in five days, because they have payroll ahead or they have an equipment purchase to make, and they want to accelerate their cash flow.

In a disconnected world, you can't account for that. But in a networked world, where there is visibility, I like to say that it's the confluence of visibility, opportunity, and capability where all parties have visibility into the opportunity created by efficiencies with that earlier approved invoice. Then, there's the capability inside the system to simply click a button and accelerate that cash flow or modify those terms on that contract, in those payment terms.
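One common way such an acceleration gets priced is a prorated early-payment discount, sketched below. The 2 percent baseline rate and the field names are assumptions for illustration only, not SAP Ariba terms or anything stated in this discussion.

```python
# Hypothetical prorated early-payment discount: the earlier the payment, the
# larger the share of the baseline discount the buyer earns.
def early_payment_discount(invoice_amount, standard_days, pay_on_day,
                           baseline_rate=0.02):
    """Prorate a baseline discount by how many days the payment is accelerated."""
    days_accelerated = max(standard_days - pay_on_day, 0)
    discount = invoice_amount * baseline_rate * (days_accelerated / standard_days)
    return round(discount, 2)

# A supplier asks to be paid on day 5 instead of day 45 on a $100,000 invoice.
print(early_payment_discount(100_000, standard_days=45, pay_on_day=5))  # 1777.78
```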

So P2P is a linked value chain, and the digital technology of today can bring those links together so that there are no barriers to the information flow, and that creates all sorts of opportunities for all parties involved.

Gardner: Andrew, a common denominator here is visibility; it's what allows a lot of these efficiencies and innovations to occur. Where does that visibility come from, where does the data get generated, how is it shared, and how do we further reduce the silos through the free flow of data, analysis, and information?

Bartolini: Visibility at the core starts with automation tools that automate processes. If we're looking at the P2P process, you're looking at an eProcurement system. You can go back to where it starts, from sourcing and contracting. If you have contract visibility or at least visibility into your header-level information, you begin to have an understanding of what, in fact, the relationship is and what relationships you have as an organization, who are your preferred suppliers, who are your strategic suppliers.

As you start to drill down, you may have the capability to capture things like payment terms and service-level agreements (SLAs). That information begins to provide a more robust view of the relationship, which can then be managed more strategically from a procurement perspective, and it really sets up the operational procurement side.

If you have an eProcurement system, you're able to generate purchase orders against those contracts and you're ensuring that before the purchase order is even sent to the supplier, the pricing and the terms are correct.

That cascades over onto the AP automation side. We use the term "ePayables" very broadly to describe AP automation solutions. When you have an eProcurement and an ePayables solution connecting, you begin to have greater visibility within the enterprise for the entirety of the relationship and the entirety of the transaction.

On the flip side is the value proposition for suppliers, who really view their customer relationship as a single one. What often happens is that they end up with multiple relationships within that customer that really aren't needed: they negotiate a contract, they have their internal customer, they're dealing with a procurement department, and then they're trying to figure out who they're dealing with on the AP side.

When you’ve got visibility that can be shared with trading partners, you get extraordinarily greater value out of the entire thing, and you streamline relationships and you're able to focus on the more important aspects of those relationships. But to the original question, visibility starts and ends with technology.

Centralizing procurement

Gardner: We're also seeing the trend of larger organizations centralizing procurement, sometimes placing it, if it's a global organization, in another country instead of having it in multiple countries or multiple markets. It becomes consolidated and automated. How does that fit in, Drew?

Hofler: We see definitely a move toward a shared service or a global process ownership type of thing, where they want to take the variability out of the different geographies or different business units doing what is essentially a standardized process, or they want to make that standardized.

We definitely see the movement in that, and it's both a business desire and goal to remove the variability, but it's something that's enabled by the technology that we have today in business networks, in centralized systems, that can tie all of this together. Now you have business units operating across the world, but tapping into all of that information and getting all the invoices to come into one place through a network. Those business units can see that, and they have controlled access to the information they need inside of those systems as well.

As for the ability to connect the data to everybody, to turn that data from mere information into intelligence, and to get it in front of the right people at the right time and in the right process, business networks really help to drive that. Having that centralized network hub, where everybody can connect at the point of the process that they need, really helps drive and enable the movement toward shared-service and centralized AP and procurement.

Bartolini: Anyone would be hard-pressed to make a case that you should have a decentralized AP operation. That doesn't mean that you can't have staff that are geographically dispersed, but there's no reason why that should exist.

On the procurement side, if you're sourcing globally, you can have different centers of excellence. Again, you want to have a more centralized view into visibility and to be utilizing the same systems and processes. On the AP side, centralization also helps from the standpoint that you begin to get a better sense of what resources are being applied in the AP process today. It also becomes easier to centralize or to gain budgets for investment in tools that can drive efficiency, visibility, and all the things we've just been talking about.

Gardner: Another thread that I'm hearing in our conversation is that technology needs to be exploited, visibility gained, and automation made possible. Then, centralization can become a huge benefit from all of that. But none of this is possible if we don't go all digital, if we don't get off of manual processes and off of paper. What do you think is going to be the ratio, if you will, of the paper approach that's left? Are we finally going to pull the last paper invoice out, or the last manual payment? Where are we, Drew, when it comes to making that full transition to digital? It seems to me an overwhelmingly beneficial direction.

Still using paper

Hofler: I've been in the payment space for about 20 years and the payable space for the last 10, and in payment, there have been predictions in that space that we would get rid of the paper check completely. Gosh, for the last 20 years everybody is saying it's going to happen, but it hasn't. It's still about 50 percent paper checks.

So I'm not going to make a prediction that paper is going to go away, but most definitely, companies need to deal with and move toward electronic data. Even if it's paper based, a lot of companies are moving toward getting the data in electronically, but a lot of them say, "Well, I get my paper scanned, I've sent it to a scanning service or whatever, and I get it in PDF or electronic data form."

That's fine, and that's one step along the process, but companies are realizing that there's a limitation in that. When you do that, you're simply getting the data that was on that paper source document faster. If that paper source document data is garbage, and that's what creates exceptions, then you're just getting the exceptions quicker. That doesn't really help the process; it doesn't solve the true issue, which is making sure you're not only getting the data faster, but also getting it in clean and getting it in better.

This is where companies need to move toward full electronic invoicing, where the invoice starts its life electronic, so that a supplier can submit it and have it run through business rules electronically before it even gets to AP. Those rules can identify the exceptions and turn the invoice around to the supplier to correct, all in a very quick and automated fashion, so that by the time AP gets it, it's 98 percent exception-free, or straight-through processing.
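As a hedged sketch of those front-door business rules, the validator below rejects an electronic invoice back to the supplier before it ever reaches AP. The specific rules and field names are hypothetical examples of the kind of checks a network might run, not an actual SAP Ariba rule set.

```python
# Hypothetical front-door business rules for an electronic invoice. A real network
# would run many more checks (tax, currency, PO match, duplicate detection, ...).
def validate_invoice(invoice, open_pos):
    errors = []
    if not invoice.get("po_number"):
        errors.append("missing PO number")
    elif invoice["po_number"] not in open_pos:
        errors.append("PO number not found or already closed")
    line_total = sum(l["qty"] * l["unit_price"] for l in invoice.get("lines", []))
    if round(line_total, 2) != round(invoice.get("total", 0), 2):
        errors.append("header total does not match line items")
    if errors:
        return {"status": "rejected_to_supplier", "errors": errors}
    return {"status": "clean", "ready_for_ap": True}

invoice = {"po_number": "PO-5512", "total": 1200.00,
           "lines": [{"qty": 10, "unit_price": 100.00}]}
print(validate_invoice(invoice, open_pos={"PO-5512"}))
# {'status': 'rejected_to_supplier', 'errors': ['header total does not match line items']}
```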

Companies are going to realize that just transforming a paper source document into an electronic form has had value in the past, but its value is quickly running out, and they need to move toward true electronic.

How far are we going to get along that path? Well, that's a big prediction to make, but I think we'll move a long way down it. Companies definitely need to recognize, and are starting to recognize, that they need to deal with native electronic data in order to truly gain value, efficiency, and intelligence, and to be able to leverage that into other opportunities.

Gardner: We mentioned exception handling, exception management, making that easier, better, faster. It strikes me that exception management is really a means to a greater end, and the greater end is general flexibility -- even looking at things as markets, as auctions, where there's variability and a fit-for-purpose kind of mentality can come in.

So am I off in some pie-in-the-sky direction, Andrew? Or when we think about the ability to do exception management, are we really opening up the opportunity to do even more interesting, innovative things with business transactions?

Reduction of exceptions

Bartolini: No, I don't think it's pie in the sky. In one of our recent surveys of about 200 AP, finance, and P2P professionals, we asked: what's the number one game changer that will get your AP operation to the next level of performance? And the answer that came in loud and clear was the reduction of exceptions and the ability to perform root-cause analysis in a much more significant way.

So it's a fundamental problem, and the opportunity is a broad one. About two-thirds of organizations feel that if they could handle this issue better, if they could reduce that number, they would be operating at a significantly higher level.

We haven't really talked much about the suppliers in this equation, but a lot of the business focus and a lot of the themes in our research this year and into 2016 have been on agility and the need for organizations to become more adept at and responsive to market shifts and changes.

Part of that is getting better alignment with the strategic suppliers that are going to drive more value and that are having a greater impact on the company's own products and services and ultimately their results.

So, look at something like exceptions, which are problematic for both sides of the trading-partner equation. When you start to reduce those, you start to eliminate a lot of the friction that is built in, certainly around the manual P2P process, but that can exist even in an automated environment. When that noise in the relationship is reduced, it allows organizations to focus on goals and objectives and to invest more in the strategic elements of the relationship.

Gardner: Drew, anything to add to that, particularly when you consider that the pool of suppliers is, in a sense, growing, when we look at contingent workers and at different types of suppliers, such as smaller firms, perhaps located at a much greater geographic distance than in the past? We have more open markets as a result of connected business networks. How do you see that panning out in 2016?

Hofler: Yeah, there's definitely growth in that. There's a pretty good stat showing that a much larger portion of a company's workforce is not bound to that company; it's temporary, it's a contingent workforce, it's services from contractors that aren't necessarily tied to them.

The need to handle that, particularly the churn that happens with that, the broader number of contractors that you might have with that, the variability in the services that are asked for, that are needed, all of this adds layers of complexity, potentially, to AP, and to procurement as well. We're focused more on AP here, but it adds layers of complexity in managing that and approving that, and as a result, can add a significant number of exceptions.

So, while you're operating your business in a way that is a little more fitting in today's world, you're also adding a lot of complexity and exceptions to the process, unless you've got a way to automatically define the invoice and identify the exceptions, so that these various suppliers, who are much smaller and geographically dispersed, can submit online or electronically in a way that's standardized, even across this large group.

Catching exceptions

Exceptions can be caught right away. Take field services, for example. If there's a service sheet form that was put out by procurement to hire somebody to go fix an oil well, and they get out to the oil well and there’s more to be done than what was on that sheet, they have to get approval for that. Having the ability to get that approval online, automatically, through a mobile device, have it tied directly into the invoice, and have the invoice close against it eliminates all those potential breakpoints of finally getting that invoice in and getting the exceptions dealt with and approved.

Exceptions to me aren't just a matter of, "Gosh, they're hard to deal with," something we want to get rid of. Exceptions are simply the barrier to the opportunity that comes when you can get that invoice moved through and approved right away -- not necessarily a matter of paying the invoice faster from the payer’s perspective, but of having it approved and ready to go right away, so that you have options, and so that the supplier has options, potentially for cash flow and things like that.

Exceptions become something that we have to eliminate in order to get to that opportunity, but without a platform to do that -- to your point about the dispersed workforce and the increasing use of contractors -- they can make it even harder than it has been.

Gardner: When we look at the payoffs from doing things better using AP intelligence and technology, we are not just looking at efficiency for its own sake. I think you're opening up more opportunity, as you put it, to the larger business.

If procurement and accounts payable can adjust and react rapidly to complexity, to exceptions, to new ways of doing business -- this is a powerful tool to the business at large.

If procurement and accounts payable can adjust and react rapidly to complexity, to exceptions, to new ways of doing business -- this is a powerful tool to the business at large. They can go at markets differently. They can acquire goods and services across a wider portfolio of choices, a wider marketplace, and therefore be able to perhaps get things easier, faster, cheaper.

Let’s look at this idea of intangible payoffs that elevate the value of AP to being a sophisticated, intelligent operation. Let's start with Andrew. What are some of the intangibles -- if we do all of the above well -- how does this empower the organization in ways that we haven't seen before?

Bartolini: That’s a great question, and it gets back to the point I was just making about agility. We're operating in an age of innovation, where globalization, the level of competition, and the speed of business in general have compressed the time frames in which organizations must react -- everything is happening at a much faster pace.

You can see that in areas like consumer electronics; in all industries, product lifecycles are shortening, so the windows of opportunity to maximize sales and revenue in the marketplace are much shorter as well.

Things are happening at a much faster clip and in tighter time frames. This has created a much greater reliance upon your suppliers and upon your supply chain. And so having visibility across the P2P process, across the source-to-settle process, and having much tighter relationships with your strategic suppliers ultimately positions the organization to become much more agile and much more competitive. And that's the value dividend that's created from a more streamlined P2P process.

It’s being able to more fully optimize the relationships that you have with your suppliers, and it's being able to make decisions and shifts in a much faster way than in the past, and that's not just from the sourcing side, that carries all the way through to the payment side as well.

Business agility

Gardner: Drew, when we think about the strategic role of AP -- of providing business agility -- you can’t get more strategic than that.

Hofler: No, that's right. AP in particular can become the source of much of the strategic intelligence that companies need. They can't just see themselves as processing paper or as a back-office cost center, but as the ones that can -- through their use of and investment in systems and networks -- capture the data in invoices, for example, and feed that data into the sourcing cycle at the beginning, so it becomes a virtuous circle.

They can create the opportunity for the company to meet some of its strategic goals around working capital. AP -- with its ability to tie into what procurement has done before it, automate the process, and get things approved nimbly and ready to go -- creates opportunity for treasury as well, so now you have a third party in there.

The treasurer is very concerned about the outstanding payment liabilities. Does he know what they are? Often, in today’s world, treasurers can’t see their payables liabilities until they run through the payment cycle and the payments are ready to go out the next day. So they have to move cash around to make sure they have enough to cover those liabilities going out.

Having visibility into what’s going to be paid out 30 days from now -- having that 30 days in advance -- offers the treasurer all sorts of options on how to manage cash among various bank accounts.

It gives the treasurer the opportunity to pay that supplier early, using excess cash that’s sitting in a bank account.

Plus, it gives them the option to do things around their days payable outstanding (DPO): to bring third parties into a business network, to bring in third-party supply chain finance that allows a supplier who might need early-payment liquidity and cash flow to access it from a third party, while the buying organization holds on to its cash, extends its DPO, and improves its working-capital management.

Or it gives the treasurer the opportunity to pay that supplier early, using excess cash that’s sitting in a bank account. Even though the Fed just raised rates in the last day or two, they only raised them a quarter of a percent, so that cash is still not earning very much. But now a treasurer can take that cash and pay a supplier early in exchange for a discount that earns something along the lines of 8-12 percent annualized.
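
To make the arithmetic behind a figure like that concrete, here is a minimal sketch in Python of how an early-payment discount annualizes. The function name and the 1-percent-discount, 30-days-early terms are illustrative assumptions, not figures from this discussion.

```python
# Illustrative sketch (hypothetical terms): annualizing an early-payment discount.
# A modest discount for paying a few weeks early works out to a double-digit
# annualized return on the cash used -- far above short-term deposit rates.

def annualized_discount_rate(discount_pct, days_paid_early):
    """Approximate annualized yield of taking an early-payment discount."""
    return_per_dollar = discount_pct / (100.0 - discount_pct)  # gain per dollar actually paid
    periods_per_year = 365.0 / days_paid_early                 # how often that gain could repeat
    return return_per_dollar * periods_per_year * 100.0        # expressed as a percentage

if __name__ == "__main__":
    # A 1% discount for paying 30 days early is roughly 12.3% annualized.
    print(round(annualized_discount_rate(1.0, 30), 1))
```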

It opens up options, but right at the nexus of all of that opportunity, information, and intelligence sits AP. That’s a very strategic place for AP to be if they can get their hands around that data, create those opportunities, and make it visible to the rest of the business.

Gardner: One last area to get into for 2016. In addition to business agility, one of the top concerns for companies and organizations is risk, security, and dealing with compliance and regulatory issues. Is there something that AP brings to the table when it has elevated itself to the strategic level -- with that visibility, with that data, with the ability to act quickly and to take on exceptions and work through them?

Andrew, we've heard how, on the procurement side, examining the supply chain, knowing that supply chain, being able to head off interruptions or other issues, and having a business-continuity mindset are important. Does that translate over to AP, and why and how does AP have a larger role in issues around continuity?

Risk mitigation

Bartolini: From a risk-mitigation standpoint, when you have greater assurance that invoices are matched to the PO, to the orders that have been generated, and to what has been delivered, and when you have a clear view into how that payment is made into the supplier’s account, you're reducing the opportunities for fraud, which can exist in any type of environment, manual or fully automated. One of the largest risk-mitigation opportunities for AP is really at the transaction level.
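
As a rough illustration of that transaction-level check -- a sketch, not any vendor's actual logic -- here is a minimal Python example of a three-way match that flags an invoice as an exception when it disagrees with the purchase order or the goods receipt. The field names, tolerance, and figures are assumptions for illustration only.

```python
# Illustrative three-way match: invoice vs. purchase order vs. goods receipt.
# Field names, tolerance, and sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class Doc:
    po_number: str
    quantity: int
    unit_price: float

def three_way_match(po: Doc, receipt: Doc, invoice: Doc, price_tolerance: float = 0.01) -> list:
    """Return a list of exception reasons; an empty list means the invoice clears."""
    exceptions = []
    if not (po.po_number == receipt.po_number == invoice.po_number):
        exceptions.append("PO number mismatch")
    if invoice.quantity > receipt.quantity:
        exceptions.append("billed quantity exceeds quantity received")
    if abs(invoice.unit_price - po.unit_price) > price_tolerance:
        exceptions.append("unit price differs from purchase order")
    return exceptions

# An invoice billing 12 units against a receipt of 10 is flagged immediately.
po = Doc("PO-1001", 10, 25.00)
receipt = Doc("PO-1001", 10, 25.00)
invoice = Doc("PO-1001", 12, 25.00)
print(three_way_match(po, receipt, invoice))  # ['billed quantity exceeds quantity received']
```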

When you start to cascade the visibility that AP generates out into the larger organization, you can start to do some predictive analysis from the procurement side to better understand potential issues that suppliers may be facing.

Also from a treasury standpoint, when you have visibility into the huge amount of money that is being paid out by AP, you have a better sense of your company’s liquidity, your cash positions, and what you need to do to ensure that you maintain that liquidity.

Looking at the supplier’s side, when you're processing invoices more quickly and have the opportunity to make payments early, there are opportunities for larger companies to step in and help some of their struggling suppliers, whether by paying their invoices early or through some other mechanism. It starts with visibility, and from that visibility you have a better ability to make smarter decisions and to anticipate potential issues.

They may have had an otherwise healthy business, but not sufficient cash flow to maintain operations, and that hurt buying organizations who depend on them.

Gardner: Last word to you, Drew, on this issue of risk reduction, continuity, and using intelligence to head off disruption or fraud. How do you see that panning out in 2016?

Hofler: I think AP does play a large role in that. Andrew touched on some of that.

One of the key areas, if you think about the supply chain from the procurement side, is that the financial supply chain is just as important as the physical supply chain when it comes to risk. As we learned -- and people have had it deep in their bones since 2008 and 2009, when liquidity became a very big issue -- there was liquidity risk in supply chains from suppliers who couldn’t access cash flow or didn’t have sufficient cash flow. They may have had an otherwise healthy business, but not sufficient cash flow to maintain operations, and that hurt buying organizations who depend on them.

By being able to approve invoices very quickly and to offer your suppliers, through a single portal and a single network, access to cash -- either from the buying company using its own cash or by bringing in third-party financing -- you are essentially able to eliminate or greatly mitigate liquidity risk in your supply chain.

But there are other areas of risk, too. Anytime you're talking about AP, you're talking about -- as Andrew said, and he said it the right way -- massive amounts of money that AP is paying out. That’s their job.

In order to do that, they have to capture, manage, and maintain bank account information from their suppliers so they can pay electronically. We're always trying to get away from paper checks, because paper checks, we know, are rife with fraud, horribly opaque, and very slow -- but electronic payments require capturing bank account information. And that’s not a core competency of most AP departments.

Network power

But AP departments can tap into the power of network ecosystems that bring in third parties whose core competency that very much is, eliminating the need to ever even see a supplier’s bank account information.

Some forward-looking AP departments are looking at how they can divest themselves of what is not their core competency, and around risk mitigation and payment, one of those things is getting rid of having to touch bank account information.

Beyond that, when we talk about compliance, AP sits right in the middle of it -- whether that's VAT compliance in Europe, archival compliance, or SOX compliance here in the US. Having all of the data electronic, having an auditable trail, knowing exactly where every piece of data and every dollar or euro spent has been and where it went along the way, and having that trail automatically captured and archived goes a long way toward compliance.

AP is the one that sits right there to be able to capture that and provide that.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

Tags:  accounts payable  Andrew Bartolini  BriefingsDirect  business network  Dana Gardner  Drew Hofler  Interarbor Solutions  procurement  SAP Ariba 


Redmonk analysts on best navigating the tricky path to DevOps adoption

Posted By Dana L Gardner, Friday, January 08, 2016

The next BriefingsDirect analyst thought leadership discussion explores the pitfalls and payoffs of DevOps adoption -- with an emphasis on the developer perspective.

We're joined by two prominent IT industry analysts, the founders of RedMonk, to unpack the often tricky path to DevOps and to explore how enterprises can find ways to make pan-IT collaboration a rule, not an exception.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

With that, please join me in welcoming James Governor, Founder and Principal Analyst at RedMonk, based in London, and Stephen O'Grady, also Founder and Principal Analyst at RedMonk, based in Portland, Maine. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gentlemen, let’s look at DevOps through a bit of a different lens. Often, it’s thought of as a philosophy. It’s talked about as a way of improving the speed, performance, and quality of applications, but ultimately, this is a behavior and culture discussion -- and the behavior and culture of developers is an important part of making DevOps successful.

What do developers think of DevOps? Is this seen as a positive thing or a threat? Do they have a singular sense of it, or is it perhaps all over the map?

O’Grady: The overwhelming assessment from developers is positive, simply because -- if you look at the tasks for a lot of developers today -- it’s going to involve operational tasks.

In other words, if you're working, for example, on public-cloud platforms, some degree of what you're doing as a developer is operational in nature, and vice versa, once you get to the operational side. A lot of the operational side has now been automated in ways that look very much like what we used to expect from development. 

So there is very naturally a convergence between development and operations that developers are embracing.

Driven by developers

Governor: I think developers have driven the change. We've seen this in a number of areas, whether it’s data management or databases, where the developers said, "We're not going to wait for the DBA anymore. We're going to do NoSQL. We're just going to choose a different store. And we're not going to just use Oracle." We've seen this in different parts of IT.

The bottom line is that waterfall wasn’t working. It wasn’t leading to the results it should have, and developers were taking some of the heat from that. So engineers and developers began to build out what has now become DevOps. A lot of them were cloud natives and thought they knew best, and in some cases, they actually did some really good work.

Partly enabled by cloud computing, DevOps has made a lot of sense, because you're able to touch everything in a way that you weren’t able to on-premises. It has been a developer-led phenomenon. It would be surprising if developers were feeling threatened by it.

Gardner: Enterprises, the traditional Global 2000 variety, see what happens at startups and recognize that they need to get on that same speed or agility, and oftentimes those startups are developer-centric and culturally driven by developers.

If the developers are, in fact, the tip of the arrow for DevOps, what is it that the operations people should keep in mind? What advice would you give to the operations side of the house for them to be better partners with their developer core?

Governor: The bottom line is that it’s coming. This is not an option. An organization could say we have this way of doing ops and we will continue doing that. That’s fine, but to your point about business disruption, we don’t have time to wait. We do need to be delivering more products to market faster, particularly in the digital sphere, and the threat posture and the opportunity posture have changed very dramatically in the past three years.

It's the idea that Hilton International or Marriott would now be worrying about Airbnb -- they weren’t thinking like that before -- or that transport companies around the world are asking what the impact of Uber is.

We've all heard that software is eating the world, but what that basically says is that the threats are real. We used to be in an environment where, if you were a bank, you just looked at your four peer banks and thought that as long as they don’t have too much of an advantage, we're okay. Now they're saying that we're a bank and we're competing with Google and Facebook.

Actually, the premium placed on stability is a little bit lower than it was. I had a very interesting conversation with a retailer recently about the different goals that organizations have. It was very interesting to me that he said that, on the first day they launched a new mobile app, it fell over -- and they were all high-fiving and fist-pumping, because it meant they had driven so much traffic that the app fell over, and it was simply something they needed to remediate.

That is not how IT normally thinks. Frankly, the business has not told IT that it wants it to think that way either, but things have changed. The concern for new experiences and new digital products is now higher than the obsession with stability. It is a different world. It is a cultural shift.

Differentiator

Gardner: Whether you're a bank or you're making farm equipment, your software is the biggest differentiator you can bring to the market. Stephen, any thoughts about what operations should keep in mind as they become more intertwined with the developer organization?

O'Grady: The biggest thing for me is a variety of macro shifts in the market, things like the availability of open-source software and public cloud. It used to be that IT could control the developer population. In other words, they were essentially the arbiter of what went to production and what actually got produced. If you're a developer and you have a good idea, but you don’t have any hardware or infrastructure, then you're out of luck.

These days, that’s changed, and we see it organizationally, where developers go to operations and say they need infrastructure, and operations says six months. The developers say, "To hell with six months. I'm going to go to Amazon, and I'll have something up in 90 seconds." The notion that's most important for operations is that they're going to have to work with developer populations because, one way or another, developers are going to get what they want.

Gardner: When we think about the supplier, the vendor, side of things, almost every vendor I've talked to in the last two or three months has raised the question of DevOps. It has become top of mind for them. Yet, if you were to ask an organization how you install DevOps, how you buy DevOps, what box it comes in -- none of those questions is relevant, because it’s not a product.

If you're in ops and you are not currently looking at tools like Chef, Puppet, Ansible, or SaltStack, you're doing yourself a disservice. They're powerful tools in the arsenal.

How do the vendors grease the skids toward adoption, if you will? What do you think needs to happen from those tools, platforms and technologies?

Governor: It’s very easy to say that DevOps is not a product, and that’s true. On the other hand, there are some underlying technologies that have driven this, particularly in automation and the notion of configuration as code.

If you're in ops and you are not currently looking at tools like Chef, Puppet, Ansible, or SaltStack, you're doing yourself a disservice. They're powerful tools in the arsenal. 

One of the things to understand is that in the world of open source, it's perhaps going to be packaged by a more traditional vendor. Certainly, one of the things is rethinking how you do automation. I would counsel anyone in IT ops to at least have a team starting to look at that, perhaps for some new development that you're doing.

It’s easy to say that everything is a massive transformation, because then it’s just a big services opportunity and there's lots of hand waving. But at the end of the day, DevOps has been a tools-driven phenomenon. It’s about being closer to the metal, having better observability, and having a better sense of how the application is working.
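
To make the configuration-as-code idea concrete, here is a minimal, tool-agnostic sketch in Python of the idempotent pattern behind tools like Chef, Puppet, Ansible, and SaltStack: desired state is declared as data, and a run only changes what has drifted from that state. The package names and file paths are hypothetical, and this is not any of those tools' actual syntax.

```python
# A minimal, illustrative sketch of "configuration as code": declare desired
# state, compare it with what's on the machine, and act only on the difference.
# (Real tools such as Chef, Puppet, Ansible, or SaltStack do far more; the
# package names and file paths here are hypothetical.)
import os

DESIRED_STATE = {
    "packages": ["nginx"],                                  # packages that must be installed
    "files": {"/etc/myapp/app.conf": "debug=false\n"},      # files with required content
}

def installed_packages() -> set:
    # Stand-in for the real package-manager query on a given platform.
    return set()

def apply(state: dict) -> None:
    for pkg in state["packages"]:
        if pkg not in installed_packages():
            print(f"would install {pkg}")                   # idempotent: skipped if already present
    for path, content in state["files"].items():
        current = open(path).read() if os.path.exists(path) else None
        if current != content:
            print(f"would rewrite {path}")                  # only touched when content drifts

if __name__ == "__main__":
    apply(DESIRED_STATE)   # running this twice produces no further changes
```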

One of the key things is the change in responsibility. We've lived in an environment where we remember the blame game and lots of finger pointing. If you look at Netflix, that doesn’t happen. The developer who breaks the build fixes it.

There are some significant changes in culture, but there are some tools that organizations should be looking at.

What can they do?

O’Grady: If we're talking from a vendor perspective, vendors can talk to their customers about the cultural and organizational change that’s necessary to achieve the results they want, but they can't actually effect that change. In other words, what we're talking about, rather, is what they can do.

The most important thing that vendors who play in this and related spaces can do is understand that it’s a continuum. From the first code that gets written, to check-in, to build, to being deployed on to infrastructure that’s configured using automated software, it’s a very, very long chain, a very long sequence of events.

Understanding, from a vendor perspective, where you fit into that lifecycle and which other pieces you have to integrate with -- and, from a customer perspective, all the different pieces they are going to be using -- is critical.

In other words, if you're focused on a particular functional capability and that's the only thing that you are aware of and that’s the only thing that you tackle, you're doing your customer a disservice. There are too many moving pieces for any one vendor to tackle them all. So it’s going to be critically important that you're partner-friendly, project-friendly and so on and integrate well and play nicely with others.

Governor: But also, don’t let a crisis go to waste. IT ops has budget, but they're also always getting a kick in the teeth. Anything that goes wrong is their fault, even if it’s someone else's. The simple fact is that we're in an environment where organizations, as I've said, are thinking that the threat and opportunity posture has changed. It's time to invest in this.

A good example of this would be that we always talk about standardization, but then nobody wants to give us the budget to do that. One of the things that we've tended to see in these web-native companies and how they manage operations and so on is that they've done an awful lot of standardization on what the infrastructure looks like. So there is an opportunity here. It’s a threat and an opportunity as well.

Gardner: I've been speaking with a few users, and there are a couple of rationales from them on what accelerates DevOps adoption. One of them is security and compliance, because the operations people can get more information back to the developers. Developers can insist that security gets baked in early and often.

The other one is user experience. The operations side of the house now has the data from the user, especially when we talk about mobile apps and smaller customer-facing applications and web apps. All that data is now being gathered. What happens with the application can be given back to development very quickly. So there is a feedback loop that's compressed.

What do you think needs to happen in order for the incentives to quicken the adoption of DevOps from the perspective of security, user experience, and feedback loops of shared data?

Ongoing challenge

Governor: That’s such a good question. It’s going to remain an ongoing challenge. The simple fact is that, as I said about the retailer and the mobile app, different parts of the business have different goals. Finance doesn't have the same goals as sales, and sales does not have the same goals as marketing, in fact.

Within IT, there are different groups that have had very different key performance indicators (KPIs), and that’s part of the discussion. You're absolutely right to bring that up: what are the metrics we should be using, what are the new metrics for success? Is it the number of new products, or the number of changes to our application code, that we can roll out?

We're all incredibly impressed by Etsy and Netflix because they can make so many changes per day to their production environments. Not everybody wants to do that, but the question is what these KPIs should be.

It might be, as Stephen mentioned, that if previously we were waiting six months to get access to servers and storage, and we get that down to a minute or so, it’s pretty obvious that that’s a substantive step forward.

The big one for me is user experience, and that to me is where a lot of the DevOps movement has come from.

You're absolutely right to say that it is about the data. When we began on this transition around agile and so on, there was a notion that those guys don’t care about data, they don’t care about compliance. The opposite is true, and there has been a real focus on data to enable the developer to do better work.

In terms of this shift that we're seeing, there's an interesting model that, funnily enough, HPE has begun talking about, which is "shifting left." What they mean by that is taking that testing earlier into the process.

We had been in an environment where a developer would hand off to someone else, who would hand off to someone else, at every step of the way. The notion that testing happens early and happens often is super important in this regard.
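
As a small illustration of what shifting testing left can look like in practice -- a hedged sketch, not HPE's definition -- here is a unit test a developer runs locally and a CI job reruns on every commit, so defects surface minutes after the code is written rather than weeks later at a hand-off. The function and test names are hypothetical, and a test runner such as pytest is assumed.

```python
# pricing.py -- hypothetical application code
def apply_discount(total: float, pct: float) -> float:
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return round(total * (1 - pct / 100), 2)

# test_pricing.py -- run on every commit in CI, long before any QA hand-off
import pytest

def test_apply_discount_basic():
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
```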

Gardner: Continuous delivery and service virtualization are really taking off now. I just want to give Stephen an opportunity to address this alignment of interests, security, user experience, and shared data, and thoughts about how organizations should encourage adoption using these aligned interests.

User experience

O’Grady: I can’t speak to the security angle as much. In other words, there are aspects to that, particularly when we think about configuration management and the things that you can do via automation and so on.

The big one for me is user experience, and that to me is where a lot of the DevOps movement has come from. What we've found out is that if you want to deliver an ideal experience via an application to 100 people or 1,000 people, that’s not terribly difficult, and what you are using infrastructure-wise to address that is also not a sort of huge challenge.

On the flip side, when you start talking about millions, tens of millions, potentially hundreds of millions of users, you have a completely different set of challenges. What we've seen from that is that the infrastructure necessary to deliver quality experiences -- whether you're Netflix, Facebook, or Google, or even just a large bank -- is a brand-new challenge.

But security is definitely an elephant stomping around the room. There's no question. The feedback loop around DevOps has not been as fixated on security as it might be.

Then, when you get into not just delivering a quality experience through a browser, but delivering it through a mobile application, this encourages and, in fact, necessitates a series of architectural changes to scale out and all these other sort of wonderful things that we talk about.

Then, if we're dealing with tens of thousands or hundreds of thousands of machines, instead of a handful of very, very large ones, we need a different approach, and that different approach in many respects is going to be DevOps. It’s going to be taking a very hands-on developer approach to traditional operational tasks.

Governor: But security is definitely an elephant stomping around the room. There's no question. The feedback loop around DevOps has not been as fixated on security as it might be.

Quite frankly, developers are about getting things done, and this is the constant challenge with ops, security, and so on. Look at Docker. Developers absolutely love it, but it didn’t start from a position of "how do we make this the most secure model you could ever have for application deployment."

There are some weird people who started to use the word DevOps(Sec), but there are a lot of unicorns and rainbows and there is going to be a mess that needs clearing up. Security is something that we generally don’t do that well.

On the other hand, as I said, we're less concerned with stability, and on the security side it sometimes seems the same. Look at privacy. We all gave up on that, right?

Gardner: I suppose. Let’s not give up on security though.

Governor: Well, those things go together.

Gardner: They do.

Need to step up

Governor: Certainly, the organizations that would claim to be really good at security are the ones that have been leaving all of their customers' details on a USB stick or on a laptop. The security industry has not done itself many favors. They need to step up as much as developers do.

Gardner: As we close out, maybe we can develop some suggestions for organizations on how to create a culture for DevOps or put in place the means for DevOps. Again, speaking with a number of users recently, automation and orchestration come to mind. Having those in place means being able to scale, to provide data back, to monitor, to take a big-data perspective across systems on pan-IT data, and to measure the user experience. Any other thoughts about what an organization should put in place to foster a DevOps mentality?

Governor: There are a couple of things. One thing you didn’t mention is pager duty. It's a fact that somebody is going to get called out to fix the thing, and it’s about individuals taking responsibility. With that responsibility, give them a higher salary. That’s an interesting challenge for IT, because they're always told, here are a bunch of tools that enable the Type As to get stuff done.

What’s important is to just get out and start spending time reading the stuff that the web companies are doing and sharing.

As to your point about whether this is a cultural shift or a product shift, the functional areas you mentioned are absolutely right, but as to the culture, what’s important is to just get out and start spending time reading the stuff that the web companies are doing and sharing.

If you look at Etsy or Netflix, they're not keeping this close to their chest. Netflix, in fact, has provided the tools it uses to improve stability through Chaos Monkey. So there's much more sharing, there's much more there, and the natural thing would be to go to your developer events. They're the people building out this new culture. Embed yourself in this developer aesthetic, where GitHub talks about “optimizing for developer joy." Etsy is about “Engineering Happiness.”

Gardner: Stephen, what should be in place in organizations to foster better DevOps adoption?

O’Grady: It’s an interesting question. The thing that comes to mind for me is a great story from Adrian Cockcroft, who used to be with Netflix. We've talked about him a couple of times. He's now with Battery Ventures, and he gives a very interesting talk, where he goes out and talks to executives and senior technology executives from all of these Fortune 500 companies.

One of the things he gets asked over and over and over is, "Where do you find engineers like the ones that work at Netflix? Where do we find these people that can do this sort of miraculous DevOps work?" And his response is, "We hired them from you."

The singular lesson that I would tell all the organizations is that somewhere in your organization probably are the people who know how this stuff works and want to get it done. A lot of times, it’s basically just empowering them, getting out of the way and letting the stuff happen, as opposed to trying to put the brakes on all the time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Tags:  BriefingsDirect  Dana Gardner  DevOps  Hewlett Packard Enterprise  HPE  HPE Discover  Interarbor Solutions  James Governor  RedMonk  Stephen O'Grady 


How INOVVO delivers big data network analysis for greater mobile user loyalty

Posted By Dana L Gardner, Monday, December 21, 2015

The next BriefingsDirect big-data case study discussion examines how INOVVO delivers impactful network analytical services for mobile operators to help them engender improved end-user loyalty.

We'll see how advanced analytics, drawing on multiple data sources, enables INOVVO’s mobile carrier customers to provide mobile users with faster, more reliable, and relevant services.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how INOVVO uses big data to make major impacts on mobile services, please join me in welcoming Joseph Khalil, President and CEO of INOVVO in Reston, Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: User experience and quality of service are so essential nowadays. What has been the challenge for you to gain an integrated and comprehensive view of subscribers and networks that they're on in order to uphold that expectation for user experience and quality?

Khalil: As you mentioned in your intro, we cater to the mobile telco industry. Our customers are mobile operators who have customers in North America, Europe, and the Asia-Pacific region. There are a lot of privacy concerns when you start talking about customer data, and we're very sensitive to that.

The challenge is to handle the tremendous volume of data generated by the wireless networks and still adhere to all privacy guidelines. This means we have to deploy our solutions within the firewalls of network operators. This is a big-data solution, and as you know, big data requires a lot of hardware and a big infrastructure.

So our challenge is how we can deploy big data with a small hardware footprint and high storage capacity and performance. That’s what we’ve been working on over the last few years. We have a very compelling offer that we've been delivering to our customers for the past five years. We're leveraging HPE Vertica for our storage technology, and it has allowed us to meet very stringent deployment requirements. HPE has been and still is a great technology partner for us.

Gardner: Tell us a little bit more about how you do that in terms of gathering that data, making sure that you adhere to privacy concerns, and at the same time, because velocity, as we know, is so important, quickly deliver analytics back. How does that work?

User experience

Khalil: We deal with a large number of records that are generated daily within the network. This is data coming from deep packet inspection probes. Almost every operator we talk to has them deployed, because they want to understand the user experience on their networks.

These probes capture large volumes of clickstream data and relay it to us in near real-time fashion. This is the velocity component. We leverage open-source technologies, adapted to our needs, that allow us to deal with the influx of streaming data.

We're now in discussion with HPE about their Kafka offering, which deals with streaming data and scalability issues and seems to complement our current solution and enhance our ability to deal with the velocity and volume issues. Then, our challenge is not just dealing with the data velocity, but also how to access the data and render reports in a few seconds.
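
As a rough sketch of what that kind of streaming ingestion can look like -- an illustration, not INOVVO's actual pipeline -- here is a minimal Python example using the open-source kafka-python client to consume clickstream records from a topic and micro-batch them for bulk loading into an analytics store. The topic name, broker address, and batch size are assumptions.

```python
# Illustrative only: consume clickstream events from a stream and micro-batch
# them for bulk loading into an analytical store. Topic/broker names are
# hypothetical; error handling and schema validation are omitted for brevity.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "clickstream-events",                      # hypothetical topic name
    bootstrap_servers="broker.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

BATCH_SIZE = 10_000
batch = []

def bulk_load(rows):
    # Stand-in for a COPY/bulk insert into the analytics database.
    print(f"loading {len(rows)} rows")

for record in consumer:                        # blocks, yielding records as they arrive
    batch.append(record.value)
    if len(batch) >= BATCH_SIZE:
        bulk_load(batch)
        batch.clear()
```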

One of our offerings is a care product that’s used by care organizations. They want to know what their customers did in the last hour on the network. So there's a near real-time urgency to have this data streamed, loaded, processed, and available for reporting. That’s what our platform offers.

Gardner: Joseph, given that you're global in nature and that there are so many distribution points for the gathering of data, do you bring this all into a single data center? Do you use cloud or other on-demand elements? How do you manage the centralization of that data?

Our customers can go and see the performance of everything that’s happened on the network for the last 13 months.

Khalil: We don’t have cloud deployments to date, even though our technology allows for it. We could deploy our software in the cloud, but again, due to privacy concerns with customers' data, we end up deploying our solutions in-network within the operators’ firewalls.

One of the big advantages of our solution is that we can choose to host it locally on customers’ premises. We typically store data for up to 13 months. So our customers can go and see the performance of everything that’s happened on the network for the last 13 months.

We store the data at different levels -- hourly, daily, weekly, monthly -- but to answer your question, we deploy on-site, and that’s where all the data is centralized.
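
As a simplified illustration of that kind of multi-level storage -- not INOVVO's actual schema -- the sketch below uses pandas to roll raw usage events up into hourly and daily aggregates per subscriber, so reports read small pre-summarized tables instead of raw detail. The column names and values are hypothetical.

```python
# Illustrative multi-granularity rollup of raw usage events; columns are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-12-01 10:05", "2015-12-01 10:40", "2015-12-02 09:15"]),
    "subscriber_id": ["A", "A", "B"],
    "bytes_used": [1_200, 3_400, 800],
}).set_index("timestamp")

# Pre-aggregate once at load time; weekly and monthly rollups follow the same pattern.
hourly = events.groupby("subscriber_id").resample("H")["bytes_used"].sum()
daily = events.groupby("subscriber_id").resample("D")["bytes_used"].sum()
print(daily)
```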

Gardner: Let’s look at why this is so important to your customer, the mobile carrier, the mobile operator. What is it that helps their business and benefits their business by having this data and having that speed of analysis?

Customer care

Khalil: Our customer care module, the Subscriber Analytix Care, is used by care agents. These are the individuals that respond to 611 calls from customers complaining about issues with their devices, coverage, or whatever the case may be.

When they're on the phone with a customer and they put in a phone number to investigate, they want to be able to get the report to render in under five seconds. They don’t want to have the customer waiting while the tool is churning trying to retrieve the care dashboard. They want to hit "go," and have the information come on their screen. They want to be able to quickly determine if there's an issue or not. Is there a network issue, is it a device issue, whatever the case may be?

So we give them that speed and simplicity, because the data we are collecting is very complex, and we take all the complexity away. We have our own proprietary data analysis and modeling techniques, and it happens on-the-fly as the data is going through the system. So when the care agent loads that screen, it’s right there at a glance. They can quickly determine what the case may be that’s impacting the customer.

Our care module has been demonstrated to reduce the average call handle time, the time care personnel spend with the customer on the phone.

Our care module has been demonstrated to reduce the average call handle time, the time care personnel spend with the customer on the phone. For big operators, you could imagine how many calls they get every day. Shaving a few minutes off each call can amount to a lot of savings in terms of dollars for them.

Gardner: So in a sense, there’s a force multiplier in having this analysis. Not only do you head off problems and fix them before they become evident -- which means a better user experience, happier customers who stay on the network -- but when there are problems, you can empower the people solving them, the ones dealing with that customer directly, to have the right information in hand.

Khalil: Exactly. They have everything. We give them all the tools that are available to them to quickly determine on the fly how to resolve the issue that the customer is having. That’s why speed is very important for a module like care.

For our marketing module, speed is important, but not as critical as for care, because now you don’t have a customer waiting on the line while you run your report to see how subscribers are using the network or their devices. We still produce reports fairly quickly, in a few seconds, which is also what the platform can offer for marketing.

Gardner: So those are some of the immediate and tactical benefits, but I should think that, over time, as you aggregate this data, there is a strategic benefit, where you can predict what demands are going to be on your networks and/or what services will be more in demand than others, perhaps market by market, region by region. How does that work? How do you provide that strategic level of analysis as well?

Khalil: This is on the marketing side of our platform, Subscriber Analytix Marketing. It's used by the CMO organizations, by marketing analysts, to understand how subscribers are using the services. For example, an operator will have different rate plans or tariff plans. They have different devices, tablets, different offerings, different applications that they're promoting.

How are customers using all these services? Before the advent of deep packet inspection probes and before the advent of big data, operators were blind to how customers were using the services offered by the network. Traditional tools couldn’t get anywhere near handling the amount of data that’s generated by the services.

Specific needs

Today, we can look at this data and synthesize it for them, so they can easily look at it and slice and dice it along many dimensions, such as age, gender, device type, location, and time -- you name it. Marketing analysts can then use these dimensions to ask very detailed questions about usage on the network. Based on that, they can target specific customers with specific offers that match their specific needs.
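
To illustrate what slicing and dicing along dimensions can look like in practice -- a simplified stand-in, not the Subscriber Analytix data model -- here is a short pandas sketch that summarizes usage by age band and device type. All column names and values are hypothetical.

```python
# Illustrative dimensional analysis: total video minutes by age band and device type.
# Columns and values are hypothetical stand-ins for synthesized subscriber data.
import pandas as pd

usage = pd.DataFrame({
    "device_type": ["tablet", "phone", "phone", "tablet"],
    "age_band": ["18-24", "18-24", "35-44", "35-44"],
    "video_minutes": [320, 150, 90, 210],
})

summary = usage.pivot_table(index="age_band", columns="device_type",
                            values="video_minutes", aggfunc="sum")
print(summary)
```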

Gardner: Of course, in a highly competitive environment, where there are multiple carriers vying for that mobile account, the one that’s first to market with those programs can have a significant advantage.

Khalil: Exactly. Operators are competing now based on the services they offer and their related costs. Back 10-15 years ago, radio coverage footprint and voice plans were the driving factors. Today, it's the data services offered and their associated rate plans.

Gardner: Joseph, let’s learn a little bit more about INOVVO. You recently completed purchase of comScore’s wireless solutions division. Tell us a bit about how you’ve grown as a company, both organically and through acquisition, and maybe the breadth of your services beyond what we've already described?

Our tool allows them to anticipate when existing network elements exhaust their current capacity.

Khalil: INOVVO is a new company. We started in May 2015, but the business is very mature. My senior managers and I have been in this business since 2005. We started the Subscriber Analytix product line back in 2005. Then, comScore acquired us in 2010, and we stayed with them for about 5 years, until this past May.

At that time, comScore decided that they wanted to focus more on their core business and they decided to divest the Subscriber Analytix group. My senior management and I executed a management buyout, and that’s how we started INOVVO.

However, comScore is still a key partner for us. A key component of our product is a dictionary for categorizing and classifying websites, devices, and mobile apps. That’s produced by comScore, and comScore is known in this industry as the gold standard for these types of categorizations.

We have exclusive licensing rights to use the dictionary in our platform. So we have a very close partnership with comScore. Today, as far as the services that INOVVO offers, we have a Subscriber Analytix product line, which is for care, marketing, and network.

We talked about care and marketing; we also have a network module. This is for engineers and network planners. We help engineers understand the utilization of their network elements and help them plan and forecast what the utilization is going to be in the near future, given current trends, so they can stay ahead of the curve. Our tool allows them to anticipate when existing network elements will exhaust their current capacity.
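
As a simple illustration of that kind of trend-based forecasting -- a sketch, not INOVVO's actual models -- the snippet below fits a linear trend to recent utilization samples and estimates when a network element would cross a capacity threshold. The figures and the threshold are hypothetical.

```python
# Illustrative capacity forecast: fit a linear trend to weekly utilization and
# estimate when an element crosses its capacity threshold. Figures are hypothetical.
import numpy as np

weeks = np.arange(8)                                        # past eight weeks
utilization = np.array([52, 55, 57, 61, 63, 66, 70, 72])    # percent of capacity

slope, intercept = np.polyfit(weeks, utilization, 1)        # simple linear trend
CAPACITY_THRESHOLD = 90.0                                   # percent

if slope > 0:
    weeks_until_full = (CAPACITY_THRESHOLD - intercept) / slope - weeks[-1]
    print(f"~{weeks_until_full:.1f} weeks until the element reaches {CAPACITY_THRESHOLD}%")
else:
    print("utilization is flat or declining; no exhaustion forecast")
```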

Gardner: And given that platform and technology providers like HPE are enabling you to handle streaming real-time highly voluminous amounts of data, where do you see your services going next?

It appears to me that more than just mobile devices will be on these networks. Perhaps we're moving towards the Internet of Things (IoT). We're looking more towards people replacing other networks with their mobile network for entertainment and other aspects of their personal and business lives. At that packet level, where you examine this traffic, it seems to me that you can offer more services to more people in the fairly near future.

Two paths

Khalil: IoT is big and it’s showing up on everybody’s radar. We have two paths that we're pursuing on our roadmap. There is the technology component, and that’s why HPE is a key partner for us. We believe in all their big data components that they offer. And the other component for us is the data-science component and data analysis.

The innovation is going to be in the type of modeling techniques that are going to be used to help, in our case, our customers, the mobile operators. Moving down the road, there could be other beneficiaries of that data, for example companies that are deploying the sensors that are generating the data.

I'm sure they want some feedback on all that data that their sensors are generating. We have all the building blocks now to keep expanding what we have and start getting into those advanced analytics, advanced methodologies, and predictive modeling. These are the areas, and this is where we see really our core expertise, because we understand this data.

Today you see a lot of platforms showing up that say, “Give me your data and I'll show you nice looking reports.” But there is a key component that is missing and that is the domain expertise in understanding the data. This is our core expertise.

My advice is that it’s a new field and you need to consider not just the Hadoop storage layer but the other analytical layers that complement it.

Gardner: Before we finish up, I'd like to ask you about lessons learned that you might share with others. For organizations grappling with the need for near real-time analytics on massive amounts of data -- maybe on a network, maybe in a different environment -- do you have any 20/20 hindsight to offer on how to make the best use of big data and monetize it?

Khalil: There is a lot of confusion in the industry today about big data. What is big data, and what do I need for big data? You hear terms like Hadoop: "I have deployed a Hadoop cluster, so I have solved my big data needs." You ask people what their big-data strategy is, and they say they have deployed Hadoop. Well, then, what are you doing with Hadoop? How are you accessing the data? How are you reporting on the data?

My advice is that it’s a new field and you need to consider not just the Hadoop storage layer but the other analytical layers that complement it. Everybody is excited about big data. Everybody wants to have a strategy for using big data, and there are multiple components to it. We offer a key component. We don't pitch ourselves to our customers and say, "We are your big data solution for everything you have."

There is an underlying framework that they have to deploy, and Hadoop is one option. Then comes our piece. It sits on top of the data-hosting infrastructure and feeds from all the different data types, because in our industry, typical operators have hundreds, if not thousands, of data silos in their organizations.

So you need a framework to host the various data sources, and Hadoop could be one of them. Then, you need a higher-level reporting layer, an analytical layer, that can start combining these data silos, making sense of them, and bringing value to the organization. So it's a complete strategy for how to handle big data.

Gardner: And that analytics layer that's what HPE Vertica is doing for you.

Key component

Khalil: Exactly. HPE is a key component of what we do in our analytical layer. There are misconceptions. When we go talk to our customers, they say, "Oh, you're using your Vertica platform to replicate our big data store," and we say that we're not. The big data store is a lower level, and we're an analytical layer. We're not going to keep everything. We're going to look at all your data, throw away a lot of it, keep just what you really need, and then synthesize it to be modeled and reported on.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Tags:  big data  BriefingsDirect  Dana Gardner  data analysis  Hewlett Packard Enterprise  HP Vertica  HPE  INOVVO  Interarbor Solutions  Joseph Khalil  mobile computing 


DevOps by design--A practical guide to effectively ushering DevOps into any organization

Posted By Dana L Gardner, Thursday, December 17, 2015
Updated: Thursday, December 17, 2015

The next BriefingsDirect DevOps innovation case study highlights how Cognizant Infrastructure Services has worked with a large telecommunications and Internet services company to make DevOps benefits a practical reality.

We'll learn important ways to successfully usher DevOps into any large, complex enterprise IT environment, and we'll hear best practices on making DevOps a multi-generational accelerant to broader business goals -- such as adapting to the Internet of Things (IoT) requirements, advancing mobile development, and allowing for successful cloud computing adoption.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To provide a practical guide to effectively ushering DevOps into any organization, we're joined by Sachin Ohal, Manager Consulting at Cognizant Infrastructure Services in Philadelphia, and Todd DeCapua, Chief Technology Evangelist at HPE Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When we talk about DevOps in a large environment, what are the barriers that we're facing these days? It's a complex undertaking, but what are the things we need to be thinking about in terms of making DevOps a beneficial reality?

Ohal: Fundamentally, industries operate in many different models, which often amount to a sending-and-receiving mode rather than a communicating mode.

So either one team is sending to the other team or one organization is sending to the other team. When we come up with a model like DevOps, the IT team starts DevOps without selecting an area where DevOps needs to start, or where a team needs to take a lead to start DevOps in the organization.

Companies are trying to enhance their IT infrastructure. They want to enforce DevOps. On the other hand, when they all start communicating, they're getting lost. This has become a fundamental problem in implementing DevOps.

Gardner: You've been working with a number of companies in bringing DevOps best practices into play. What are some of the bedrock foundation steps companies should take? Is there a common theme, or does it vary from company to company?

Ohal: DevOps is a kind of domain that varies inside a company. We can't compare company to company. It varies company to company, domain to domain, organization to organization, because here we're talking about developing a culture. When we talk about developing a culture, a thought process, understanding those thought processes plays a key role.

And if we fundamentally talk about an application development organization, testing organization, or the IT ops organization, they have their own key performance indicators (KPIs), their own thought process, and their own goals defined.

Many times, we observe that within the IT organization, development, testing, and operations have different goals, objectives, and KPIs. They never cross-functionally define business needs. They mostly define technology in organization-specific terms. As an example, a functional tester doesn’t know how developers are communicating with each other, or with the security team about security-related issues. An operations engineer has an uptime KPI, but he really doesn’t know the various application modules he's supporting.

Suddenly, by enforcing DevOps, we're telling the whole organization to begin communicating, start intersecting, start having cross-communication. So this has become a key problem in 21st-century infrastructure, application, testing, and overall DevOps framework implementation. Communication and understanding have become key challenges for organizations.

Gardner: Before we get into the specific use case scenario and case study, what is the relationship between Cognizant and HPE? You're a services provider; they're a technology provider. How does it work?

Strong partner

Ohal: We're a strong partner with HPE. Cognizant is a consulting company, a talent company. On the other hand, HPE is an enterprise-scale product delivery company. There is a very nice synergy between Cognizant and HPE.

When we go to market, we assess the situation, ask HPE to come on-premises to work with us, have a handshake, form a high-performance team, and deliver an enterprise solution to Cognizant's and HPE's customers.

Gardner: Todd, given the challenges of bringing DevOps to bear in many organizations, the fact that it varies from company to company really sounds like a team sport, not something one can do completely alone. It's an ecosystem play. Is that right?

DeCapua: It absolutely is. When I think about this ecosystem, there are three players. You have your customer first, but then you have an organization like HPE that provides enterprise products and capabilities, and then other partners like Cognizant that can bring in the talent to be able to put it all together.

As we think about this transition and about the challenges that our number one player, the customer, has, there are foundational pieces to think about -- things like time-to-market being a challenge, brand value being a challenge, and, of course, revenue being another challenge.

As we were saying earlier, what are the fundamental challenges that our customer -- again, as a team sport -- is facing? We see that this is different for every one of our customers, so we start with those fundamentals: what are the challenges?

Understanding that helps with, "We need to make a change. We need to influence the culture. We need to do all these pieces." Before we jump right into that technical solution, let’s sit down as the teams together, with a customer, with someone like HPE, with someone like Cognizant, and really understand what our challenges are.

Gardner: Let's drill down a bit into a specific scenario. Sachin, for this large telecommunications, media, and Internet services company, tell us what their goals were and why they were pursuing DevOps and practical improvement in their test/deploy synergy.

Ohal: When we talk about telco, pharma or retail customers, they fundamentally come up with many upstream/downstream revenue-oriented, customer service, workbench platforms -- and it's very hard to establish a synergy between all the platforms, and to make them understand what their end goal is.

Obviously the end goal is customer service, but to achieve that goal you have to go through so many processes, so many handshakes on a business level, on a technology level, on a customer-service level, and even internal customer service level.

Always supporting

In today's world, we are IT for IT. None of the organizations inside a company works as an independent IT group. They work as IT for IT; they are always supporting either the business or an internal IT group.

Even with this synergy established, with this core value established, we come across many people who don't understand the communication, and the right tools are not in place. Once we overcome the tools and the communication process, the major question is how to put that process in place end-to-end in the IT organization.

That, again, becomes a key challenge, because it's not easy to get something new adopted. As Todd said, we're talking about Agile development and mobile. Your IT organization becomes your business, and you're asking it to inject something new with no proven result. It's like testing a new drug in an assay. That's exactly the feeling any IT executive has: "Why am I supposed to be injecting this thing?"

Do I get value out of it or don't I? There is no benchmark available in the industry showing that people succeed in a certain domain or a certain area; there are always only bits and pieces. This is a key challenge we observe across the industry -- a lack of adaptiveness to a new technology or a new process. We're still seeing that.

I have a couple of customers who say, "Oh, I run Windows 2000 Server. I run Windows 98. I have no idea how many Java libraries my application is using." They are also unable to explain why they still have so many.

It's similar on the testing side. Somebody says, "I use version 9 of a load-testing solution" -- a version that even HPE itself retired three or four years ago.

Then, if you come to the operations organization, people say, "I use a very old server." What does it mean? It means that business is just getting IT services. They have to understand that this service needs to be enhanced so that the business will be enhanced.

Technology enhancement doesn't mean that my data center is flooded with some new technology. Technology enhancement means that my entire end-to-end landscape is upgraded with new technology that supports the next generation -- but I'm still struggling with legacy. These are the key challenges we observe in the market.

Gardner: But specifically with this use case, how did you overcome them? Did you enter into the test environment and explain to them how they should do things differently, leverage their data in different ways, or did you go to the developers first? Is there a pattern to how you begin the process of providing a better DevOps outcome?

End-to-end process

Ohal: First of all, we had to define an end-to-end delivery process and then we had to identify end-to-end business value out of that delivery process.

Once we identified the business value, we drew a line between the various organizations so they could understand that they were not cutting across each other but running in parallel. It's a thin line, though, and where it falls will definitely vary from domain to domain.

In a multi-generational business plan, when we talk about drawing this thin line, there is no template that tells us exactly how to draw it -- in the IT organization, in the business organization, or inside IT between the testing and development organizations.

DevOps can be started in any landscape. We may start with a testing organization and then we decide to pull it into the development and IT organization.

In some cases we may start with a development organization, and then testing and operational organizations come into place. Some businesses start DevOps, and they say that they want to do things the way they want.

If you ask me about a specific case study, rather than giving one narrow answer, I'd say the answer covers a wide area -- I don't want to take our audience in the wrong direction. One customer started in testing, so we started in testing. Another started in development, so we started in development.

You can start anywhere, but before starting, just stay back, decide where you want to start, why you want to start, how you want to start, and get the right folks and the right tools in the picture.

Gardner: Given that there is a general pattern, but also deep specifics, could you walk us through the general methodology you've been describing?

Ohal: At one point in time, most of the companies our listeners work for were startups. They started the company around a product or a service and they struggled to find a market.

Then they established themselves as product companies. When I say product, it doesn't have to mean a physical product; it might be a service delivered as a product. Then they pursued mergers and acquisitions and enhanced their portfolios in the market -- the exercises the industry fundamentally goes through.

Service companies

Now, more big companies are transforming themselves into services companies. They want to make sure that their existing customers and their new customers get the same value, because the challenges remain even while they add new customers. Are my existing customers still with me? Are they happy and satisfied, and are they willing to continue doing business with my company?

Are they getting equivalent service to what we have committed to them? Are they getting my new technology and business value out of those services?

This creates a lot of pressure on IT and business executives. In mobile computing and cloud computing, suddenly some companies are trying to transform themselves into cloud companies from service companies. There is a lot of pressure on their IT organization to go toward cloud.

They're starting by using cloud web services and cloud authentication at an IT level. We're not talking about the larger landscape yet, but they're trying it. Basically, this is the transformation from startup to product, product to services, and then services to cloud. That is your multi-generational vision, expressed in your multi-generational business plan, because your people change, your IT changes, your technology changes, your business models keep changing, your customers change, your revenue changes, and the mode of revenue changes.

Consider the example of eBay and Google. Not so long ago, they didn't exist. We never even imagined that these companies would be leading on Wall Street, providing so much employment, or building such a large consumer base.

As a consulting company, Cognizant observes those market trends very quickly. We see the changes in the market, we assist customers, and we bring our own internal teams that understand all of this -- yet the customer's multi-generational vision remains the same.

To run this vision I have a strategic business objective, a strategic business unit. How will this unit communicate with the strategic business objective? That's where your IT plays a key role. Information technology becomes a key strategic business unit in your organization that is driving this whole task force.

While driving this task force, if you haven't defined your DevOps within a multi-generational business plan, your focus ends up IT-centric. The moment technology changes, you're in trouble. The moment the process changes -- and the moment you think about crossing domains in your company -- you're in trouble.

As an example, a telco is going cross-domain with a retailer. Then, pharma is going cross-domain with the telco. Do you want to spend double on your IT or your business, or do you want to shut down the existing project and fund a new project?

There are so many questions that come into the picture when we talk about an IT-centric DevOps organization. But when we have a business-centric DevOps initiative, we accommodate all of those views, and accordingly, IT aligns with your business and helps you run it.

Gardner: So business agility is really the payoff, Todd?

Looking at disruptions

DeCapua: Yes. Dana and Sachin, as we look at this challenge and wrapping this around the use case that Cognizant has -- not only the one customer that we are talking about, but really all of them -- and thinking through this multi-generational business plan using DevOps, there are some real fundamentals to think about. But there are disruptions in the world today, and maybe starting there helps to illustrate a little bit better why this concept of a multi-generational business plan is so important.

Consider Uber, Yelp, or Netflix. Each one of them is in a different stage of a multi-generational business plan, but as to the foundational element that Sachin has been explaining -- where some organizations today are stuck in a legacy technology or IT organization -- it really starts at the fundamental level of understanding: What are our strategic business objectives?

Then look at whether there's a strategic business unit and where that's focused. Then, build up from there to where you have technology that lives on top of that.

What’s fun for me is when I look at Uber, Yelp, or Netflix, knowing they are all different, but some of them do have a product and some of them don’t. Some of them are an IT organization that has a services layer that connects all of these pieces together.

So whether it's a large telecom or an Internet provider, there are products, but there has really been a focus on services.

What can help is that this organizational, multi-generational vision is going to live through the iterations that every organization goes through. I hate to keep pounding on these three examples, but I think they're great in ways that help illustrate this.

We all remember when something like Uber came in as a startup and was not really well understood. Then you look again, and it has become productized. It's probably safe to say it's available in most cities that I travel to.

Then you move into something more like a product -- look at Yelp. That is definitely a mainstream product, and it definitely has a lot of users today. Then you move into the service area, and as something matures into a service, it becomes adopted by the majority of its target users.

The fourth I would like to call on is cloud. As you move to something like cloud, that's where Netflix becomes a perfect example. It's all cloud-based. I'm a subscriber, and I know that I can have streaming video on any device, anywhere in the world, at any time, on Netflix, delivered from the cloud.

So these four generational business plan stages that we're talking about -- startup, product, service, and cloud -- carry that underlying vision, all supported by information technology, a defined strategic business objective, and a focused strategic business unit.

That's really important to understand when I look at somebody like Cognizant as a partner and at the approach they have used with several of their customers.

Gardner: For organizations reading this or listening in that are interested in getting to that multi-generational benefit -- where their investments in IT pay off dividends for quite some time, particularly in their ability to adapt to change rapidly -- any good starting points? Are there proof of concept (POC) places where you start? I know it’s boiling the ocean in some ways, but there must be some good principles to get going with.

Sensing inside

Ohal: Definitely there are. In this 21st-century IT business environment, first you have to sense everything inside your business, rather than sensing only the outside market. Sense your whole business thoroughly, in real time. What is it doing?

You have to analyze your business model. Does my business model fit these four fundamental stages? Where am I right now -- on the startup side, product side, service side, or cloud -- and where do I want to go? You have to define that, and then, based on that, adopt DevOps. You have to be sure where you are adopting your DevOps.

I was on product and I'm going to services, so I need DevOps to fit here. Or I have a well-matured product and I want to go to the cloud -- where am I going? Or I'm on the cloud now and I want more and more refined services for my customers.

Find that scale and define it, rather than getting many IT groups together for a brainstorming session about where everyone is supposed to stand. What is your business vision? What is your customer value? Those values really drive your business, and to drive that business, use DevOps.

It's not just about getting continuous delivery or continuous integration in place. Two IT executives may be telling each other, "You're doing a great handshake in my organization," while the business says, "I don't want that handshake. I want the uptime."

There are so many different aspects and views. Todd mentioned his examples, but if you check other examples as well, they're very focused on their multi-generational business plan, and if you want to succeed, you have to be focused on those aspects as well.

Gardner: Anything else to add, Todd?

DeCapua: As far as getting started and what works and where you go, there are a number of different ways that we've worked with our customers to get started.

One approach that I have seen proven is to start with something that has been neglected. For example, there's the maintenance backlog -- items that over six months, a year, or sometimes even two years have just been neglected. If you really want to find some quick value, maybe it's pulling that maintenance backlog out, prioritizing it with your customer, understanding what's still important and what's no longer important, and shortening it down to a target list.

Then identify that if we focus a few resources on a few of those high-priority items that would otherwise continue to be neglected, and start to adopt some of these practices and capabilities, we can immediately show value to that business owner -- because we applied a few resources and a little bit of time and went after the highest-priority items that otherwise would have been neglected.

The second piece that comes in is this analysis capability. How are you tracking the results? What are those metrics that you're using to show back to the business that they have their multi-generational plan and strategy laid out, but how is it that they are incrementally showing this value as they're delivering over and over again?

But start small. Maybe go after that neglected maintenance backlog as a really easy target, and then show incremental value over time through the sensing that Sachin mentioned. Also be able to analyze and predict those results, and then adapt over time with speed and accuracy.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Tags:  BriefingsDirect  Cognizant Infrastructure Services  Dana Gardner  DevOps  Hewlett Packard Enterprise  HPE  Interarbor Solutions  Sachin Ohal  Todd DeCapua 

Need for fast analytics in healthcare spurs Sogeti converged BI solutions partnership model

Posted By Dana L Gardner, Tuesday, December 08, 2015

The next BriefingsDirect big-data solution discussion explores how a triumvirate of big-data players are delivering a rapid and efficient analysis capability across disparate data types for the healthcare industry.

We'll learn how the drive for better patient outcomes amid economic efficiency imperatives has created a demand for a new type of big-data implementation model. This solutions approach -- with the support from Hewlett Packard Enterprise, Microsoft, and Sogeti -- leverages a nimble big-data platform, converged solutions, hybrid cloud, and deep vertical industry expertise.

The result is innovative and game-changing insights across healthcare ecosystems of providers, patients, and payers. The ramp-up to these novel and accessible insights is rapid, and the cost-per-analysis value is very impressive.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy.

Here to share the story on how the Data-Driven Decisions for Healthcare initiative arose and why it portends more similar vertical industry focused solutions, we're joined by Bob LeRoy, Vice President in the Global Microsoft Practice and Manager of the HPE Alliance at Sogeti USA. He's based in Cincinnati. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why the drive for a new model for big data analytics in healthcare? What are some of the drivers, some of the trends, that have made this necessary now?

LeRoy: Everybody is probably very familiar with the Affordable Care Act (ACA), also known as ObamaCare. It has put a lot of changes in place for the healthcare industry, primarily around cost containment. Beyond that, the industry itself understands that it needs to improve the quality of care that it delivers to patients. That's around outcomes -- how can we affect the care and the wellness of individuals.

So it's around cost and the quality of care, but it's also about how the industry itself is changing -- both how providers are now doing more with payments and how classic payers are doing more to actually provide care themselves. The lines between payer and provider are blurring.

Some of these people are actually becoming what we call accountable care organizations (ACOs). We see a new one of these ACOs come up each week, where they are both payer and provider.

Gardner: Not only do we have a dynamic economic landscape, but the ability to identify what works and what doesn't work can really be important, especially when dealing with multiple players and multiple data types. This is really not just knowing your own data; this is knowing data across organizational boundaries.

LeRoy: Exactly. And there are a lot of different data models that exist. When you look at things like big data and the volume of data that exists out in the field, you can put that data to use to understand who your critical patients are and how that affects your operations.

Gardner:  Why do we look to a triangulated solution between players like Hewlett Packard Enterprise, Microsoft, and Sogeti? What is it about the problem that you're trying to solve that has led it to a partnership type of solution?

Long-term partner

LeRoy: Sogeti, a wholly-owned subsidiary of the Capgemini Group, has been a long-term partner with Microsoft. The tools that Microsoft provides are one of the strengths of Sogeti. We've been working with HPE now for almost two years, and it's a great triangulation between the three companies. Microsoft provides the software, HPE provides the hardware, and Sogeti provides the services to deliver innovative solutions to customers and do it in a rapid way. What you're getting is best in class in all three of those categories -- the software, the hardware, and the services.

Gardner: There's another angle to this, too, and it’s about the cloud delivery model. How does that factor into this? When we talked about hardware, it sounds like there's an on-premises aspect to it, but how does the cloud play a role?

LeRoy: Everybody wants to hear about the cloud, and certainly it’s important in this space, too, because of the type of data that we're collecting. You could consider social data or data from third party software-as-a-service (SaaS) applications, and that data can exist everywhere.

You have your on-premises data and you have your off-premises data. The tools that we're using, in this case from HPE and Microsoft, really lend themselves well to developing a converged environment that delivers best in class across those different environments. They're secure, delivered quickly, and they provide the information and the insights that hospitals and insurance companies really need.

Gardner: So we have a converged solution set from HPE. We have various clouds that we can leverage. We have great software from Microsoft. Tell us a little about Sogeti and what you're bringing to the table. What is it that you've been doing in healthcare that helps solidify this solution and the rapid analysis requirements?

LeRoy: This is one of the things that Sogeti brings to the table. Sogeti is part of the Capgemini Group, a global organization with 150,000 employees, and Sogeti is one of the five strategic business units of the group. Sogeti's strength is that we're really focused on the technology and the implementation of technology, and we are focused on several different verticals, healthcare being one of them.

We have experts on the technology stacks, but we also have experts in healthcare itself. We have people we've pulled from the healthcare industry and taught what we do in the IT world, so they can help us focus best practices and technologies on solving real healthcare organizational problems -- so we can get to the quality of care and the cost reduction that the ACA is really looking for. That's a real strength that's going to add significant value to healthcare organizations.

Gardner: It's very important to see that one size does not fit all when it comes to these systems. Industry verticalization is required, and you're embarking on a retail equivalent of this model; manufacturing and other sectors might come along as well.

Let's look at why this approach to this problem is so innovative. What have been some of the problems that have held back the ability of large and even mid-sized organizations in the healthcare vertical industry from getting these insights? What are some of the hurdles that they've had to overcome and that perhaps beg for a new and different model and a new approach?

Complexity of data

LeRoy: There are a couple of factors. For sure, it's the complexity of the data itself. The data is distributed over a wide variety of systems, so it's hard to get a full picture of a patient or a certain care program, because the systems are spread out all over the place. When data reaches you from so many systems in so many different ways, you get part of the data, not the full picture. We call that poor data quality, and it makes it hard for somebody doing analysis to really understand and gain insight from the data.

Of course, there's also the existing structure that’s in place within organizations. They've been around for a long time. People are sometimes resistant to change. Take all of those things together and you end up with a slow response time to delivering the data that they're looking for.

Access to the data becomes very complex or difficult for an end-user or a business analyst. The cost of changing those structures can be pretty expensive. If you look at all those things together, it really slows down an organization’s ability to understand the data that they've got to gain insights about their business.

Gardner: Just a few years ago, when we used to refer to data warehouses, it was a fairly large undertaking. It would take months to put these into place, it required a data center or some sort of leasing arrangement, and of course a significant amount of upfront cost. How has this new model addressed those cost and ramp-up time issues?

LeRoy: Microsoft’s model that they have put in place to support their Analytics Platform System (APS) allows them to license their tools at a lower price. The other thing that's really made a difference is the way HPE has put together their ConvergedSystem that allows us to tie these hybrid environments together to aggregate the data in a very simple solution that provides a lot of performance.

If I have to look at unstructured data and structured data, I often need two different systems. HPE is providing a box that’s going to allow me to put both into a single environment. So that’s going to reduce my cost a lot.

They have also delivered it as an appliance, so I don't need to spend a lot of time buying, provisioning, or configuring servers, setting up software, and all those things. I can just order this ConvergedSystem from HPE, put it in my data center, and I'm almost ready to go. That's the second thing that really helps save a lot of time.

The third one is that at Sogeti Services, we have some intellectual property (IP) to help the data integration from these different systems and the aggregation of the data. We've put together some software and some accelerators to help make that integration go faster.

The last piece of that is a data model that structures all this data into a single view that makes it easier for the business people to analyze and understand what they have. Usually, it would take you literally years to come up with these data models. Sogeti has put all the time into it, created these models, and made it something that we can deliver to a customer much faster, because we've already done it. All we have to do is install it in your environment.

It's those three things together -- the software pricing from Microsoft, the appliance model from HPE, and the IP and the accelerators that Sogeti has.

Consumer's view

Gardner: Bob, let’s look at this now through the lens of that consumer, the user. It wasn’t that long ago where most of the people doing analytics were perhaps wearing white lab coats, very accomplished in their particular query languages and technologies. But part of the thinking now for big data is to get it into the hands of more people.

What is it that your model, this triumvirate of organizations coming together for a solution approach, does in terms of making this data more available? What are the outputs, who can query it, and how has that had an impact in the marketplace?

LeRoy: We've been trying to get this to end users for 30 years. I've been trying to put reports in the hands of users and let them do their own analysis, and every time I get to a point where I think this is the answer -- the users are going to be able to do their own reports, which frees up guys in the IT world like me to go off and do other things -- it doesn't quite work out.

This time, though, it's really interesting. I think we have it. We give users direct access to the data, using the tools that they already know. I'm not going to create and introduce a new tool to them. We're using tools that are very similar to Excel, pointing to a data source that's already well organized for them, with data they're already familiar with.

So if they're using Microsoft Excel-like tools, they can build the Power Pivots and pivot tables they've already been doing, but which used to be offline. Now I can give them direct access to real-time data.

Instead of waiting until noon to get reports out, they can go and look online and get the data much sooner, so we can accelerate their access time to it, but deliver it in a format that they're comfortable with. That makes it easier for them to do the analysis and gain their insights without the IT people having to hold their hands.

Gardner: Perhaps we have some examples that we can look to that would illustrate some of this. You mentioned social media, the cloud-based content or data. How has that come to bear on some of these ways that your users are delivering value in terms of better healthcare analytics?

LeRoy: The best example I have is the ability to bring in data that's not in a structured format. We often think of external data, but sometimes it's internal data, too -- maybe x-rays, or people doing queries on the Internet. I can take all of that unstructured data and correlate it to my internal electronic medical records or the health information systems that I have on-premises.

If I'm looking at Google searches, and people are looking for keywords such as "stress," "heart attacks," or "cardiac care," I can map when people are running those kinds of queries by region. I can tie that back to my systems and ask what the behavior or the traffic patterns look like within my facility at those same times. I can target certain areas -- maybe change my staffing model if there is a big jump in searches, run a campaign to ask people to come in for a screening, or encourage people to get to their primary-care physicians.

There are a lot of things we can do with the data by looking just at the patterns. It will help us narrow down the areas of our coverage that we need to work with, what geographic areas I need to work on, and how I manage the operations of the organization, just by looking at the different types of data that we have and tying them together. This is something that we couldn't do before, and it’s very exciting to see that we're able to gain such insights and be able to take action against those insights.
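
For illustration, here is a minimal sketch of the kind of correlation described above, assuming two hypothetical CSV extracts; the file names and column names are placeholders for illustration, not part of the Sogeti solution.

import pandas as pd

# Hypothetical extracts; file and column names are illustrative assumptions only.
searches = pd.read_csv("search_volume.csv")    # columns: region, week, searches
visits = pd.read_csv("facility_visits.csv")    # columns: region, week, visits

# Line up external search interest with internal facility traffic
merged = searches.merge(visits, on=["region", "week"])

# Correlation between search volume and facility visits, per region
corr_by_region = (
    merged.groupby("region")
          .apply(lambda g: g["searches"].corr(g["visits"]))
          .sort_values(ascending=False)
)
print(corr_by_region)

Regions where that correlation runs strong would be the candidates for the staffing or screening-campaign adjustments LeRoy mentions.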

Applying data science

Gardner: I can see now why you're calling it the Data Driven Decisions for Healthcare, because you're really applying data science to areas that would probably never have been considered for it before. People might use intuition or anecdote or deliver evidence that was perhaps not all that accurate. Maybe you could just illustrate a little bit more ways in which you're using data science and very powerful systems to gather insights into areas that we just never thought to apply such high-powered tools to before.

LeRoy: Let’s go back to the beginning when we talked about how we change the quality of care that we are providing. Today, doctors collect diagnosis codes for just about every procedure that we have done. We don’t really look and see how many times those same procedures are repeated or which doctors are performing which procedures. Let’s look at the patients, too, and which patients are getting those procedures. So we can tie those diagnosis codes in a lot of different ways.

The one that I probably like best is knowing which doctors perform those procedures only once per patient and get the best results from the treatments they perform. Now, if I'm a hospital, I know which doctors perform which procedures best, and I can direct the patients who need those procedures to the doctors who provide the best care.

And the reverse of that might be that if a doctor doesn't perform that procedure well, let's avoid sending him those kinds of patients. Now my quality of care goes up, the patient has a better experience, and we're going to do it at a lower cost because we're only doing it once.
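
As a rough sketch of that repeat-procedure analysis, assuming a hypothetical claims extract whose column names are placeholders for illustration only:

import pandas as pd

# Hypothetical claims data; columns are illustrative assumptions.
claims = pd.read_csv("procedures.csv")  # doctor_id, patient_id, procedure_code, outcome_score

# How many times each doctor performed the same procedure on the same patient
per_case = (
    claims.groupby(["doctor_id", "procedure_code", "patient_id"])
          .agg(times_performed=("procedure_code", "size"),
               outcome=("outcome_score", "mean"))
          .reset_index()
)

# Per doctor and procedure: average repeats per patient and average outcome
by_doctor = (
    per_case.groupby(["doctor_id", "procedure_code"])
            .agg(avg_repeats=("times_performed", "mean"),
                 avg_outcome=("outcome", "mean"))
            .reset_index()
)

# Doctors who tend to perform a procedure once per patient, with the best outcomes, rank first
print(by_doctor.sort_values(["avg_repeats", "avg_outcome"], ascending=[True, False]).head(10))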

Gardner: Let's dive into this solution a bit, because I'm intrigued that this model -- bringing together a converged-infrastructure provider, a software provider, and field expertise that crosses the chasm between technology capability and vertical-industry knowledge -- works. So let's dig in a little bit. The Microsoft APS: tell us a little about what it includes and why it's powerful and applicable in this situation.

LeRoy: The APS is a solution that combines unstructured data and structured data into a single environment and it allows the IT guys to run classic SQL queries against both.

On one side, we have what used to be called Parallel Data Warehouse. It's a really fast version of SQL Server -- massively parallel processing that can run queries super fast. That's the important part: I have structured data that I can get to very quickly.

The other half of it is HDInsight, Microsoft's distribution of Hadoop. Hadoop handles the unstructured data. In between these two sits PolyBase, so I can query the two together and join structured and unstructured data.

Then, since Microsoft created the APS specification, HPE implemented it in a box they call the ConvergedSystem 300, and Sogeti has built our IP on top of that. We can consume data from all these different areas, put it into the APS, and deliver that data to an end user through a simple interface like Excel, Power BI, or another visualization tool.
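
To make the PolyBase idea concrete, here is a minimal sketch of a single query joining a relational table with an external, Hadoop-backed table, issued from Python with pyodbc. The connection string, table names, and columns are assumptions for illustration, not the actual solution's schema.

import pyodbc

# Hypothetical APS/SQL Server endpoint; connection details are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=aps.example.local;DATABASE=Care;UID=analyst;PWD=secret"
)

# dbo.Encounters is an ordinary relational table; dbo.WebSearchLogs is assumed to be
# an external table defined over the Hadoop region, so one query can join both.
query = """
SELECT e.Region,
       COUNT(DISTINCT e.PatientID) AS Patients,
       COUNT(s.QueryText)          AS CardiacSearches
FROM dbo.Encounters e
JOIN dbo.WebSearchLogs s
  ON e.Region = s.Region AND e.VisitWeek = s.SearchWeek
WHERE s.QueryText LIKE '%cardiac%'
GROUP BY e.Region
ORDER BY CardiacSearches DESC;
"""

cursor = conn.cursor()
for row in cursor.execute(query):
    print(row.Region, row.Patients, row.CardiacSearches)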

Significant scale

Gardner: Just to be clear for our audience, sometimes people hear "appliance" and don't necessarily think big scale, but the HPE ConvergedSystem 300 for the Microsoft APS is quite significant, with server, storage, and networking technologies and large amounts of data -- up to 6 petabytes. So we're talking about fairly significant amounts of data here, not small fry.

LeRoy: And they put everything into that one rack. We think of appliance as something like a toaster that we plug in. That’s pretty close to where they are, not exactly, but you drop this big rack into your data center, give it an IP address, give it some power, and now you can start to take existing data and put it in there. It runs extremely well because they've incorporated the networking and the computing platforms and the storage all within a single environment, which is really effective.

Gardner: Of course, one of the big initiatives at Microsoft has been cloud with Azure. Is there a way in which the HPE Converged Infrastructure in a data center can be used in conjunction with a cloud service like Azure -- or another public cloud, infrastructure-as-a-service (IaaS) cloud, or even a data warehousing cloud service -- to accelerate delivery and/or make it more inclusive of more types of data in more places? How does the public cloud fit into this?

LeRoy: You can distribute the solution across that space. In fact, we take advantage of the cloud delivery as a model. We use a tool called Power BI from Microsoft that allows you to do visualizations.

The system from HPE is a hybrid solution. So we can distribute it. Some of it can be in the cloud and some of it can be on-prem. It really depends on what your needs are and how your different systems are already configured. It’s entirely flexible. We can put all of it on-prem, in a single rack or a single appliance or we can distribute it out to the cloud.

One of the great things about the solution that Microsoft and HPE put together is it’s very much a converged system that allows us to bridge on-prem and the cloud together.

Gardner: And of course, Bob, those end users that are doing those queries, that are getting insights, they probably don’t care where it's coming from as long as they can access it, it works quickly, and the costs are manageable.

LeRoy: Exactly.

Gardner: Tell me a little bit about where we take this model next -- clearly healthcare, big demand, huge opportunity to improve productivity through insights, improve outcomes, while also cutting costs.

You also have a retail solution approach in that market, in that vertical. How does that work? Is that already available? Tell us a little bit about why the retail was the next one you went to and where it might go next in terms of industries?

Four major verticals

LeRoy: Sogeti is focused on four major verticals: healthcare, retail, manufacturing, and life sciences. So we are kind of going across where we have expertise.

The healthcare solution has been out now for nine months or so. We see retail in a different place. There are point solutions where people have solved part of this equation, but they haven't really dug deep into understanding how to get it from end to end, which is something Sogeti has now done. From the point a person walks into a store, we would be alerted through all of the analytics that we have that the person arrived, and we can take action on that.

We do what we can to increase our traffic and our sales with individuals and then aggregate all of that data. You're looking at things like customers, inventory, or sales across an organization. That end-to-end piece is something that I think is very unique within the retail space.

After that, we're going to go to manufacturing. Everybody likes to talk about the Internet of Things (IoT) today. We're looking at some very specific use cases on how we can impact manufacturing so IoT can help us predict failures right on a manufacturing line. Or if we have maybe heavy equipment out on a job site, in a mine, or something like that, we could better predict when equipment needs to be serviced, so we can maximize the manufacturing process time.

Gardner: Any last thoughts in terms of how people who are interested in this can acquire it? Is this something that is being sold jointly through these three organizations, through Sogeti directly? How is this going to market in terms of how healthcare organizations can either both learn more and/or even experiment with it?

LeRoy: The best way is to search for us online. It's mostly being driven by Sogeti and HPE. Most healthcare providers that are also heavy HPE users may be aware of it already, and talking to an HPE rep or a Sogeti rep is certainly the easiest path forward.

We have a number of videos out on YouTube. If you search for Sogeti Labs and Data Driven Decisions, you will certainly find my name and a short video that shows it. And of course sales reps and customers are welcome to contact me or anybody from Sogeti or HPE.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Hewlett Packard Enterprise.

Tags:  big data  Bob LeRoy  BriefingsDirect  Dana Gardner  data analytics  Hewlett Packard Enterprise  HPE  Interarbor Solutions  Microsoft  Sogeti 

HPE's composable infrastructure sets stage for hybrid market brokering role

Posted By Dana L Gardner, Tuesday, December 08, 2015

Making a global splash at its first major event since becoming its own company, Hewlett Packard Enterprise (HPE) last week positioned itself as a new kind of market maker in enterprise infrastructure, cloud, and business transformation technology.

By emphasizing choice and adaptation in hybrid and composable IT infrastructure, HPE is betting that global businesses will be seeking, over the long term, a balanced and trusted partner -- rather than a single destination or a fleeting, prescribed cloud model.

HPE is also betting that a competitive and still-undefined smorgasbord of cloud, mobile, data, and API service providers will vie to gain the attention of enterprises across both vertical industries and global regions. HPE can exploit these dynamic markets -- rather than be restrained by them -- by becoming a powerful advocate for enterprises sorting out the complexity of transformation across hybrid, mobile, security, and data analysis shifts.

"The most powerful weapons of competition are now software, data, and algorithms," said Peter Ryan, HPE Senior Vice President and Managing Director for EMEA. "Time to value is your biggest enemy and your biggest opportunity."

HPE led off its announcements at HPE Discover in London with a new product designed to run both traditional and cloud-native applications for organizations seeking the benefits of running a "composable" hybrid infrastructure. [Disclosure: HPE is a sponsor of BriefingsDirect podcasts.]

Based on new architecture, HPE Synergy leverages fluid resource pools, software-defined intelligence, and a unified API to provide the foundation for organizations to continually optimize the right mix of traditional IT and private cloud resources. HPE also announced new partnerships with Microsoft around cloud computing and Zerto for disaster recovery.

HPE Synergy leverages a new architectural approach called Composable Infrastructure, hailed as HPE's biggest debut in a decade. In addition to nourishing dynamic IT service markets and fostering choice, HPE is emphasizing the need to move beyond manual processes for making disparate hybrid services operate well together.

The next step for businesses is to "automate and orchestrate across all of enterprise IT," said Antonio Neri, HPE Executive Vice President and General Manager of the company's Enterprise Group, to the 17,000 attendees.

"Market data clearly shows that a hybrid combination of traditional IT and private clouds will dominate the market over the next five years," said Neri. "With HPE Synergy, IT can deliver infrastructure as code and give businesses a cloud experience in their data center."

Composable choice for all apps

Composable Infrastructure via unified APIs allows IT to converge and virtualize assets while leveraging hybrid models, he said. Both developers and IT operators need to access all their resources rapidly and quickly automate their use.

HPE is striving to strike the right balance between the ability to use hybrid models and access legacy resources, while recognizing that the market will continue to rapidly advance and differ widely from region to region. It's a wise brokering role to assume, given the level of confusion and concern among IT leaders.

"What's the right formula for services at the right price with the right SLAs? It's still a work in progress," I told Trevor Jones at SearchCloudComputing at TechTarget just after the conference.

Indeed, HPE will offer a cloud brokerage service for hybrid IT management. HPE Helion Managed Cloud Broker leverages existing HPE orchestration, automation, and operations software, and adds a self-service portal, monitoring dashboards, and reports to better support on-premises offerings from VMware as well as public cloud and PaaS offerings from Microsoft, Amazon, and others. The service will be available in early 2016.

"Cloud brokers can pick and choose the right requirements at the right price for their customers, so there will be a market for those services," I told TechTarget. "I look at it like the systems integrator of cloud computing."

And brokers factor such variables as jurisdiction, industry vertical, workload type, and mobile devices into cloud and hybrid choice decisions. Rather than dictate to enterprise architects what "parts" or services to use, HPE is focusing on the management and repeatability of the services that specific application sets require -- even as that changes over time.

For example, as the interest in software containers grows, HPE will automate their use. New HPE ContainerOS solves two major problems with containers -- security and manageability, said HPE CTO Martin Fink. "Ops can now fall in love with containers just as much as developers," he told the conference audience, adding that virtual machines alone are "highly inefficient."

IoT gets a new edge

In yet another IT area that enterprises need to quickly adjust to, the Internet of Things (IoT), HPE has developed a flexible solution approach. HPE Edgeline servers, part of an Intel partnership, sit at the edge of networks.

"What will make IoT work for business is not devices. It's infrastructure you build to support it," said Robert Youngjohns, Executive Vice President and General Manager, HPE Enterprise Group.
Microsoft partnership

HPE and Microsoft announced new innovation in hybrid cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft's hybrid-cloud offerings.

The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models, and accelerate their business.

As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.

To simplify the delivery of infrastructure to developers, HPE Synergy, for example, has a powerful unified API and a growing ecosystem of partners like Arista, Capgemini, Chef, Docker, Microsoft, NVIDIA, and VMware. The unified API provides a single interface to discover, search, provision, update, and diagnose the Composable Infrastructure required to test, develop, and run code. With a single line of code, HPE's innovative Composable API can fully describe and provision the infrastructure that is required for applications, eliminating weeks of time-consuming scripting.
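
As a hedged sketch of what infrastructure-as-code against a unified REST API can look like, the snippet below describes a server profile as data and posts it in one call. The endpoint path, payload fields, and authentication are illustrative assumptions, not the documented HPE Synergy API.

import requests

# Illustrative only: endpoint, header, and payload shape are assumptions,
# not the documented Composable/unified API.
API = "https://composable.example.com/rest"
HEADERS = {"Auth": "session-token-placeholder", "Content-Type": "application/json"}

profile = {
    "name": "web-tier-node-01",
    "template": "web-tier-template",   # template describes firmware, BIOS, connections, storage
    "enclosureBay": 3,
    "network": {"vlan": 120, "bandwidthGbps": 10},
}

# One call describes and requests the infrastructure an application needs;
# the platform's software-defined intelligence handles the provisioning work.
resp = requests.post(f"{API}/server-profiles", json=profile, headers=HEADERS)
resp.raise_for_status()
print("Provisioning task:", resp.json().get("taskUri", "<pending>"))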

HPE and Microsoft are also introducing the first hyper-converged system with true hybrid-cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers' data centers, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight.

Building on the success of HPE Quality Center and HPE LoadRunner on the Azure Marketplace, HPE and Microsoft will work together to make select HPE industry-leading application lifecycle management, big-data, and security software products available on the Azure Public Cloud.

HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile, more secure hybrid cloud that integrates with Azure.

Disaster recovery with Zerto

Zerto, a disaster recovery provider for virtualized and cloud environments, has achieved gold partnership status with HPE.

The first deliverable out of the partnership is the Zerto Automated Failover Testing Pack. This is the first of several packs that will simplify BC/DR automation using HPE Operations Orchestration (HPE OO) as the master orchestrator. The new automated failover testing capabilities for HPE OO increase IT data center time savings while improving overall disaster recovery testing compliance.

While the Zerto Automated Failover Testing Pack automatically runs failover tests in full virtual-machine environments, other automated processes eliminate the need to cross-check multi-department failover success, thereby increasing efficiency and productivity for IT teams.

With Zerto Automated Failover Testing Pack, users now simply schedule the failover test in HPE OO. The test runs autonomously and sends a report showing it was a successful test. Failover tests can now run nightly versus annually, providing compliance coverage for customers operating in highly regulated industries such as financial services and healthcare.

With HPE recognizing that global businesses are seeking a long-term, balanced, and trusted partner -- rather than a single destination or a fleeting, prescribed cloud model -- the 75-year-old company has elevated itself above the cloud fray.

"Real transformation is hard, but it can have amazing benefits," HPE CEO Meg Whitman told the conference.

Tags:  Antonio Neri  BriefingsDirect  cloud computing  Dana Gardner  disaster recovery  HPE  HPE Discover  HPE Synergy  hybrid cloud  Interarbor Solutions  Meg Whitman  Microsoft  Zerto 
