Many of the latest technologies -- such as Internet of Things (IoT) platforms, big data analytics, and cloud computing -- are making data-driven, efficiency-focused digital transformation more powerful. But cities and urban government agencies face unique obstacles in exploiting these advances to improve municipal services. Challenges range from a lack of common data-sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public-sector budgets.
The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specification and purchasing approaches have solved many similar issues in other domains.
BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.
The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?
Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens -- and money is just a means to achieving that end.
This is bound to affect the ways one approaches and solves problems. Still, a lot of the underlying issues are the same ones corporate enterprises face.
Bottom-up blueprint approach
Brancato: Within big companies we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. The CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to shape strategy. What the organization then does flows top-down.
In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.
There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.
Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.
This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help.
Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state, and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.
These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.
Sunderland: I agree with everything that’s been said so far. In the particular case of New York City -- and with a lot of cities in the US -- cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set.
The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive.
Create demand to drive demand
Greater continuity can nonetheless come from creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy, which was that you should implement change that can't be undone.
If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need.
In like fashion, you have to come up with a killer app that habituates the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract -- without somehow having demand developed by the user base -- is pretty difficult.
By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector.
The public sector has to focus on providing products and tools that generate demand, and keep it growing, in order to create the political impetus to deliver yet more services.
Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications?
Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.
This is a challenge for any Enterprise Architecture (EA) effort because you can't work top-down; you can't simply specify an architecture roadmap. You have to pick projects that are convenient to do and that fit into your larger picture, and so on.
It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.
Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives?
Framework for a common foundation
Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously -- and I suspect that this is actually a problem that cities share with corporate enterprises.
Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains -- and they need to share data. What’s needed is some kind of framework to allow data sharing to happen.
This framework has to be at two levels. It has to be at a policy level -- and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.
Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things?
Saha: Yes, definitely. Having common data standards across different agencies, and a framework to support interoperability between agencies, is a first step. But as Dr. Lisdorf mentioned, it's not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.
One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.
Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.
The fundamental principle here for the citizens of the city is that there is no wrong door: A citizen can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority -- not a specific agency or department of the city.
Gardner: Don Brancato, if citizens in their private lives can order almost anything from an e-commerce cloud and have it show up in two days, there might be higher expectations for better city services.
Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?
Public- and private-sector parity
Brancato: You are exactly right, Dana. That's what's driven the do-it-yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can't I go to my office or to the city and get a service?
This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to express their displeasure with their vote, or by moving somewhere else -- and then they are no longer working or living there.
Traceability is also important. If I use some service, it should be traceable to some city strategy and to the data that goes with it. The traceability model, in its abstract form, is the idea that if I collect data, it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.
So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.
Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public -- and if we know a common data framework is important -- why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?
Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.
In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn't exist [in the public sector]. In New York City, you have 100 different entities, of which 50 to 60 are agencies providing services. They have built vast legacy IT systems that don't interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.
Adopting standards and frameworks is one approach; the idea is that you will then grow from there. Creating a law that tries to implement uniformity -- the way an Amazon or Facebook can -- would be doomed to failure, because nobody could actually afford to implement it.
Since you can’t do top-down solutions -- even if you pass a law -- the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up -- as opposed to the top-down.
Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?
Not something to legislate
Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful.
The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”
That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management.
It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.
Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?
Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.
Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure (OPEX) budget and also make capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that poses an added challenge?
How to pay for IT?
Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us -- and the vendors -- into a world of interoperability and a common data approach.
Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.
Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector.
Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.
What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud approach because capital expenditures disappear when you buy services, instead of licenses, on the data center servers that you procure and own.
This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.
The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.
There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.
So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.
Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?
Cities in the cloud, but services grounded
Saha: Across the world, many governments -- not just local governments but even state and central governments -- are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.
It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.
Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.
Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?
Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.
One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.
There is still a perception that the cloud is something dangerous or not safe. Another view is that the cloud is a lot safer in terms of having resilient solutions and keeping data safe.
This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.
Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.
Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or, looking at DoDAF in the defense sector, when they had similar problems, are there solutions there to learn from?
Smart city success strategies
Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities.
One of the reasons smart city projects don’t succeed is that they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling change management across the whole ecosystem.
At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.
Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.
Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?
Collective, crystal-clear standards
Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.
Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.
Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?
Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, which continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DoDAF, MODAF, and TOGAF.
We have a really nice collection of patterns that recognize that data is the mechanism that ties it all together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.
The notion of data catalogs, which are the children of these service catalogs, provides the proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and then are traceable up to the strategy.
I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.
Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?
Use-case consumer value
Sunderland: I like the idea of reference architectures attached to use cases because -- for better or worse -- when folks engage around these issues -- even in large entities like New York City -- they are going to be engaging for specific needs.
Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.
Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.
When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?
Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad hoc basis. One important way to get standardization enforced -- or at least considered for new implementations -- is to integrate the effort as a necessary step in the established procurement and security governance processes.
Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.
This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.
Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?
Prioritize policy and people
Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.
Development of policy around data sharing -- and specifically around personal data sharing -- should come first, because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something that cities are particularly concerned with because they hold a lot of data about their citizens.
Gardner: Dr. Saha, same question to you.
Saha: I look at it in two ways. One is for cities to adopt smart city approaches by identifying very-high-demand use cases that pertain to the environment, mobility, the economy, or health -- or whatever the priority is for that city.
Identifying such high-demand use cases is important because the impact is directly seen by the people. The benefits of having a smarter city need to be visible to the people using those services; that is number one.
The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.
It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.
Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?
City smarts and safety first
Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.
In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.
In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.
Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such traceability and a model link those initiatives and strategies through to the service -- all the way down to the data, so that eventually data can be tied back to the roles.
For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. This extends to the notion that capabilities are delivered in the city by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traceable to these sensors.
Gardner: Don Sunderland, what have you seen that works, and what should we be doing more of?
Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it. It’s there on many levels, and a kind of guerrilla tactic would be to tap into that demand by creating location-aware mobile apps that are freely available to citizens.
The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are -- whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.
When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach to a location-aware app made available to the citizens is a way to drive even more demand from even more people.
The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.
We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks -- from use of illegal and unethical labor practices to hidden environmental malpractices.
Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation, are our panelists, Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft, and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?
Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.
First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good.
Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society -- and not just generating dividends and capital growth for shareholders.
Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent -- particularly Millennials and Generation Z -- and they are more likely to get better investment rates as well.
Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?
Better business decisions with AI
Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.
We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings.
These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.
What has been missing was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business.
Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba.
Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.
McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand -- not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency in Supply Chains Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements.
But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.
We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.
At Verisk Maplecroft, we came into business about 12 years ago, which was really interesting because the company grew out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand, and to make them simple, easy, and intuitive.
They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we’re up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks -- like politics, economics, environmental, and social risks. But under each of those risk themes are specific issues that we look at. So, if we’re talking about social risk, we’re looking at diversity and labor, and then under each of those issues we go a step further, to the indicators -- it’s all that data matrix coming together that tells the actionable story.
Some companies still just want to check a [compliance] box. Other companies want to dig deeper -- but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips.
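The layered structure McVeigh describes -- indicators rolling up into issues, and issues into themes -- might be sketched as a simple hierarchical rollup. The averaging scheme, scale, and example scores here are assumptions for illustration, not Verisk Maplecroft's actual methodology:

```python
# Illustrative rollup of a risk-index hierarchy: indicator scores aggregate
# into issue scores, and issue scores into a theme score. The three-level
# nesting mirrors the themes -> issues -> indicators structure described.

def mean(xs):
    return sum(xs) / len(xs)

def roll_up(themes):
    """Score each theme from its issues, each issue from its indicators."""
    return {
        theme: mean([mean(indicators.values()) for indicators in issues.values()])
        for theme, issues in themes.items()
    }

# Hypothetical scores on an assumed 0-10 scale (10 = lowest risk).
social = {
    "social": {
        "diversity": {"board-diversity": 6.0, "pay-equity": 4.0},
        "labor":     {"child-labor": 8.0, "working-hours": 6.0},
    }
}

print(roll_up(social))  # {'social': 6.0}
```

The rollup is what makes the index "simple, easy, and intuitive" at the top while keeping the indicator-level detail available for companies that want to dig deeper.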
Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.
Rakowski: EcoVadis conducts business sustainability ratings, and the way that works in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”
They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business.
What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out what we call a right-sized questionnaire to the supplier, and the supplier responds to material questions based on what kind of goods or services they provide, what geography they operate in, and what size of business they are.
Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score.
And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices.
Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own -- and SAP Ariba is now aggregating the aggregators.
How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from?
Timely insights, best business decisions
Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the aspects of the procurement process our customers care about, in support of what they are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.
The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there’s risk, there is opportunity.
So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.
We're looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there is risk there is opportunity.
Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?
Harris: If we look at risk aspects such as natural disasters, nothing demands a timelier reaction. When our data sources alert us to an earthquake, for example, we’re able to very quickly ascertain who the affected suppliers are, and where their distribution centers and factories are.
When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt.
Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them?
AI risk assessment reaps rewards
Harris: If you look at AI, this is a great technology, and what we’re trying to do is really simplify that process for our customers to figure out how they can take action on the information we’re providing. So rather than them having to be experts in risk analysis and doing all this analysis themselves, AI allows us to surface those risks through the technology -- through our procurement suite, for example -- to impact the decisions they’re making.
For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time.
A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas.
Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?
Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.
We collect a great amount of documentation from the suppliers we’re evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information our analysts are looking for, compressing what used to be about a six- or seven-hour evaluation for each supplier down to three or four hours. So that’s essentially allowing us to double our workforce of analysts in a heartbeat.
AI is helping us scan through the documentation more quickly. That way we can find the relevant information that our analysts are looking for, allowing us to double our workforce of analysts.
The other thing it’s doing is helping scan through material news feeds. We’re collecting more than 2,500 news sources, covering all kinds of reports -- from China Labor Watch to OSHA. These technologies help us scan through those reports for material information, and then put it in front of our analysts. It helps them surface, in real time, the news that we’re sure at that point is material.
And that way we’re combining AI with real human analysis and validation to make sure that what we’re serving is accurate and relevant.
Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.
We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article.
By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”
And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly.
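As a rough illustration of the pipeline Harris describes, a toy version might match an article against tracked suppliers, tag a risk type, and emit a score for the category manager. A production system would rely on trained entity-resolution and text-classification models; the keyword rules, risk terms, and scoring below are assumptions for this sketch only:

```python
# Toy stand-in for an ML news-risk pipeline: resolve the supplier mentioned
# in an article, tag the dominant risk type, and produce a 0-100 score.

RISK_TERMS = {
    "financial": ["contract reduction", "downsize", "bankruptcy"],
    "environmental": ["spill", "toxic", "violation"],
}

def assess_article(text, tracked_suppliers):
    text_lower = text.lower()
    # Naive "entity resolution": first tracked supplier named in the text.
    supplier = next((s for s in tracked_suppliers if s.lower() in text_lower), None)
    if supplier is None:
        return None  # not about a supplier we track
    # Count term hits per risk type and keep the strongest signal.
    hits = {rtype: sum(term in text_lower for term in terms)
            for rtype, terms in RISK_TERMS.items()}
    risk_type = max(hits, key=hits.get)
    return {"supplier": supplier,
            "risk_type": risk_type,
            "score": min(100, 40 * hits[risk_type])}

article = "Acme Widgets announced a contract reduction and plans to downsize."
print(assess_article(article, ["Acme Widgets", "Globex"]))
# {'supplier': 'Acme Widgets', 'risk_type': 'financial', 'score': 80}
```

The real value Harris points to is the same shape: reading at machine scale, then routing a scored, supplier-linked alert to the right category manager.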
Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem?
The AI-aggregation advantage
McVeigh: As an aggregator of data, it’s basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.
We look at many indices. We actually just revamped our social indices a couple of years ago.
Before that you had a human who was sitting there, maybe they were having a bad day and they just sort of checked the box. But now we have the capabilities to validate that data against true sources.
Just as Emily mentioned, we were able to reduce our human-rights analyst team significantly and the number of individuals that it took to create an index and allow them to go out and begin to work on additional types of projects for our customers. This helped our customers to be able to utilize the data that’s being automated and generated for them.
We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it.
Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?
Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.
We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. We have a number of common customers working with us as what we call design partners. There’s no better customer, ultimately, than a customer already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame.
And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly.
What it means is that users can get that aggregated view, that high-level view across all of these different risk types and metrics, in one place. However, if ultimately they want to get to the nth degree of detail, they have the ability to click through and go naturally into the solutions from our partners here as well, to drill right down to that level of detail. The aim here is to get them that high-level view to help them with their overall assessments of these suppliers.
Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?
Customers that have an active subscription to our combined solutions are then able to benefit from the integration and see all that data within SAP Ariba's supplier risk solutions directly.
Harris: Yes, and that’s a great question. We already addressed that in our solutions today. We cover risk across more than 200 types, and we have categorized those into four primary risk categories. The way the risk exposure score works is that, for any of the feeding attributes that go into that calculation, the customer gets to decide how they want to weight them.
If I have more bias toward that kind of financial risk aspects, or if I have more of the bias toward ESG metrics, for example, then I can weigh that part of the score, the algorithm, appropriately.
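A minimal sketch of that customer-weighted exposure score might look like the following. The category names, example scores, and normalization step are illustrative assumptions, not SAP Ariba's actual algorithm:

```python
# Sketch of a customer-weighted risk exposure score: per-category scores
# feed one exposure number, and each customer supplies their own relative
# weights for the categories they care most about.

def risk_exposure(category_scores, weights):
    """Weighted 0-100 exposure score. Weights are normalized, so any
    relative weighting the customer chooses is valid."""
    total = sum(weights.get(cat, 0) for cat in category_scores)
    if total == 0:
        raise ValueError("at least one nonzero weight is required")
    return sum(score * weights.get(cat, 0)
               for cat, score in category_scores.items()) / total

scores = {"financial": 80, "esg": 40, "cyber": 20, "compliance": 60}

# A buyer biased toward financial risk vs. one biased toward ESG metrics:
print(risk_exposure(scores, {"financial": 3, "esg": 1, "cyber": 1, "compliance": 1}))
print(risk_exposure(scores, {"financial": 1, "esg": 3, "cyber": 1, "compliance": 1}))
```

The same underlying scores yield different exposure numbers per customer, which is exactly the "weigh that part of the score appropriately" behavior Harris describes.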
Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well -- or not so well.
Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency -- how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?
McVeigh: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.
How do you put a tangible dollar value on something that didn’t occur? What we can look at is taking data acquired over the past few years and understanding that, as we see risk reduction over time, we can source more suppliers, add diversity to the supply chain, or even minimize the supply chain, depending on how a company wants to move forward in its risk landscape and supply diversification program. It’s giving them the power to make those decisions faster and more actionable.
And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”
And other companies are beginning to see that it’s not being pushed down on them anymore by these large retailers and large organizations. It’s a choice they have to make to do better business. They are also realizing that there’s a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process.
We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it -- but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.
Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen?
Pay attention, be prepared
Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that.
From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great.
They can use their company’s resources to bet on supply-chain actors that are doing the right thing -- that are paying living wages, that are not overworking their employees, that are not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are really a must, regardless of regulations.
Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.
And so, it’s really those individuals that are willing to stand up, take a stand and think about how they are going to put in place a program that will really drive this culture into the business, and educate the business. Even if you’re starting from a very little group that’s dedicated to it, you can find a way to make it grow within a culture. I think it’s critical.
Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?
Synergistic risk management
Harris: Organizationally, there tend to be a couple of different teams inside of business that manage risks. So, on the one hand there can be the kind of governance risk and compliance team. On the other hand, they can be the corporate social responsibility team.
I think first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across those teams. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, but also ensure that the business is doing the right thing in the first place, which can help safeguard the business from unforeseen risks.
I think getting the organizational model right, and also thinking about how they can best begin to map out their supply chains are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who are the players or supply-chain actors in that supply chain? It’s pretty easy to determine now who are the tier-one suppliers, but who are the suppliers to the suppliers -- and who are the suppliers to the suppliers to the suppliers?
We’ve yet to actually build a better technology that can figure that out easily. We’re working on it; stay posted. But trying to compile that information upfront is great, because once you get that mapping done, our software and our partner software from EcoVadis and Verisk Maplecroft is here to surface those kinds of risks inside and across that entire supply chain.
You may also be interested in:
- Envisioning procurement technology and techniques in 2025: The future looks bright
- Diversity spend: When doing good leads to doing well
- Bridging the education divide – How business networks level the playing field for those most in need
- SAP Ariba and MercadoLibre to consumerize business commerce in Latin America
- How a large Missouri medical center developed an agile healthcare infrastructure security strategy
- How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cures Process and IT ills
- How a Florida school district tames the wild west of education security at scale and on budget
- How SAP Ariba became a first-mover as Blockchain comes to B2B procurement
- Awesome Procurement —Survey shows how business networks fuel innovation and business transformation
- Seven secrets to highly effective procurement: How business networks fuel innovation and transformation
The next BriefingsDirect IT financing and technology acquisition strategies interview examines how Nokia is refactoring the video delivery business. Learn both about new video delivery architectures and the creative ways media companies are paying for the technology that supports them.
Here to describe new models of Internet Protocol (IP) video and time-managed IT financing is Paul Larbey, Head of the Video Business Unit at Nokia, based in Cambridge, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: It seems that the video-delivery business is in upheaval. How are video delivery trends coming together to make it necessary for rethinking architectures? How are pricing models and business models changing, too?
Larbey: We sit here in 2017, but let’s look back 10 years to 2007. There were a couple key events in 2007 that dramatically shaped how we all consume video today and how, as a company, we use technology to go to market.
It’s been 10 years since the creation of the Apple iPhone. The iPhone sparked whole new device-types, moving eventually into the iPad. Not only that, Apple underneath developed a lot of technology in terms of how you stream video, how you protect video over IP, and the technology underneath that, which we still use today. Not only did they create a new device-type and avenue for us to watch video, they also created new underlying protocols.
It was also 10 years ago that Netflix began to first offer a video streaming service. So if you look back, I see one year in which how we all consume our video today was dramatically changed by a couple of events.
If we fast-forward and look to where that goes in the future, there are two trends we see today that will create challenges tomorrow. First, video has become truly mobile. Today, when we talk about mobile video, most people mean watching films on an iPad or an iPhone -- not on a big TV screen.
The future is personalized
When you can take your video with you, you want to take all your content with you. You can’t do that today. That has to happen in the future. When you are on an airplane, you can’t take your content with you. You need connectivity to extend so that you can take your content with you no matter where you are.
Take the simple example of a driverless car. Today, you are driving along, watching the satellite-navigation feed, watching the traffic, and keeping the kids quiet in the back. When driverless cars come, what are you going to be doing? You are still going to be keeping the kids quiet, but there is a void, a space that needs to be filled with activity, and clearly extending the content into the car is the natural next step.
And the final challenge is around personalization. TV will become a lot more personalized. Today we all get the same user experience. If we are all on the same service provider, it looks the same -- it’s the same color, it’s the same grid. There is no reason why that should all be the same. There is no reason why my kids shouldn’t have a different user interface.
There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch.
The user interface presented to me in the morning may be different than the user interface presented to me in the evening. There is no reason why I should have 10 pages of channels that I have to go through to find something that I want to watch. Why aren’t all those channels specifically curated for me? That’s what we mean by personalization. So if you put those all together and extrapolate those 10 years into the future, then 2027 will be a very different place for video.
Gardner: It sounds like a few things need to change between the original content’s location and those mobile screens and those customized user scenarios you just described. What underlying architecture needs to change in order to get us to 2027 safely?
Larbey: It’s a journey; this is not a step-change. This is something that’s going to happen gradually.
But if you step back and look at the fundamental changes -- all video will be streamed. Today, the majority of what we view is via broadcasting, from cable TV, or from a satellite. It’s a signal that’s going to everybody at the same time.
If you think about the mobile video concept, if you think about personalization, that is not going to be the case. Today we watch a portion of our video streamed over IP. In the future, it will all be streamed over IP.
And that clearly creates challenges for operators in terms of how to architect the network, how to optimize the delivery, and how to recreate that broadcast experience using streaming video. This is where a lot of our innovation is focused today.
Gardner: You also mentioned in the case of an airplane, where it's not just streaming but also bringing a video object down to the device. What will be different in terms of the boundary between the stream and a download?
IT’s all about intelligence
Larbey: It’s all about intelligence. Firstly, connectivity has to extend and become really ubiquitous via technology such as 5G. The increase in fiber technology will dramatically enable truly ubiquitous connectivity, which we don’t really have today. That will resolve some of the problems, but not all.
But because television will be personalized, the network will know what's in my schedule. If I have an upcoming flight, machine learning can automatically predict what I'm going to do and make sure it suggests the right content in context. It may download the content because it knows I am going to be sitting on a flight for the next 12 hours.
Gardner: We are putting intelligence into the network to be beneficial to the user experience. But it sounds like it’s also going to give you the opportunity to be more efficient, with just-in-time utilization -- minimal viable streaming, if you will.
How does the network becoming more intelligent also benefit the carriers, the deliverers of the content, and even the content creators and owners? There must be an increased benefit for them on utility as well as in the user experience?
Larbey: Absolutely. We think everything moves into the network, and the intelligence becomes the network. So what does that do immediately? That means the operators don’t have to buy set-top boxes. They are expensive. They are very costly to maintain. They stay in the network a long time. They can have a much lighter client capability, which basically just renders the user interface.
The first obvious example of all this, that we are heavily focused on, is the storage. So taking the hard drive out of the set-top box and putting that data back into the network. Some huge deployments are going on at the moment in collaboration with Hewlett Packard Enterprise (HPE) using the HPE Apollo platform to deploy high-density storage systems that remove the need to ship a set-top box with a hard drive in it.
Now, what are the advantages of that? Everybody thinks of cost first: you've taken the hard drive out and put the storage in the network, and that's clearly one element. But actually, if you talk to any operator, their biggest cause of subscriber churn is when somebody's set-top box fails and they lose their personalized recordings.
The personal connection you had with your service isn’t there any longer. It’s a lot easier to then look at competing services. So if that content is in the network, then clearly you don’t have that churn issue. Not only can you access your content from any mobile device, it’s protected and it will always be with you.
Taking the CDN private
Gardner: For the past few decades, part of the solution to this problem was to employ a content delivery network (CDN) and use that in a variety of ways. It started with web pages and the downloading of flat graphic files. Now that's extended into all sorts of objects and content. Are we going to do away with the CDN? Are we going to refactor it, is it going to evolve? How does that pan out over the next decade?
Larbey: The CDN will still exist. That still becomes the key way of optimizing video delivery -- but it changes. If you go back 10 years, the only CDNs available were CDNs on the Internet. It was a shared service; you bought capacity on the shared service.
Even today that's how a lot of video from the content owners and broadcasters is streamed. For the past seven years, we have been taking that technology and deploying it in private networks -- with both telcos and cable operators -- so they can have their own private CDNs, and there are a lot of advantages to having your own private CDN.
You get complete control of the roadmap. You can start to introduce advanced features such as targeted ad insertion, blackout, and features like that to generate more revenue. You have complete control over the quality of experience, which you don't if you outsource to a shared service.
What we’re seeing now is both the programmers and broadcasters taking an interest in that private CDN because they want the control. Video is their business, so the quality they deliver is even more important to them. We’re seeing a lot of the programmers and broadcasters starting to look at adopting the private CDN model as well.
The challenge is how do you build that? You have to build for peak. Peak is generally driven by live sporting events and one-off news events. So that leaves you with a lot of capacity that’s sitting idle a lot of the time. With cloud and orchestration, we have solved that technically -- we can add servers in very quickly, we can take them out very quickly, react to the traffic demands and we can technically move things around.
But the commercial model has lagged behind. So we have been working with HPE Financial Services to understand how we can innovate on that commercial model as well and get that flexibility -- not just from an IT perspective, but also from a commercial perspective.
Gardner: Tell me about Private CDN technology. Is that a Nokia product? Tell us about your business unit and the commercial models.
Larbey: We are a business unit that helps anyone who delivers content over IP. Anyone who has content -- be that broadcasters or programmers -- pays the operators to stream that content over IP and to launch new services. We have a product focused on video networking: how to optimize video, how it's delivered, how it's streamed, and how it's personalized.
It can be a private CDN product, which we have deployed for the last seven years, and we have a cloud digital video recorder (DVR) product, which is all about moving the storage capacity into the network. We also have a systems integration part, which brings a lot of technology together and allows operators to combine vendors and partners from the ecosystem into a complete end-to-end solution.
Gardner: With HPE being a major supplier for a lot of the hardware and infrastructure, how does the new cost model change from the old model of pay up-front?
Flexible financial formats
Larbey: I would not classify HPE as a supplier; I think they are our partner. We work very closely together. We use HPE ProLiant DL380 Gen9 Servers, the HPE Apollo platform, and the HPE Moonshot platform, which are, as you know, world-leading compute-storage platforms that deliver these services cost-effectively. We have had a long-term technical relationship.
We are now moving toward how we advance the commercial relationship. We are working with the HPE Financial Services team to look at how we can get additional flexibility. There are a lot of pay-as-you-go-type financial IT models that have been in existence for some time -- but these don’t necessarily work for my applications from a financial perspective.
In the private CDN and the video applications, our goal is to use 100 percent of the storage all of the time to maximize the cache hit-rate. With the traditional IT payment model for storage, my application fundamentally breaks that. So having a partner like HPE that was flexible and could understand the application is really important.
We also needed flexibility of compute scaling. We needed to be able to deploy for the peak, but not pay for that peak at all times. That’s easy from the software technology side, but we needed it from the commercial side as well.
And thirdly, we have been trying to enter a new market focused on the programmers and broadcasters, which is not our traditional segment. We have been deploying our CDN to the largest telcos and cable operators in the world, but the programmers and broadcasters segment is used to buying a service from the Internet; they work in a different way and have different requirements.
So we needed a financial model that allowed us to address that, but also a partner who would take some of the risk, too, because we didn’t know if it was going to be successful. Thankfully it has, and we have grown incredibly well, but it was a risk at the start. Finding a partner like HPE Financial Services who could share some of that risk was really important.
Gardner: These video delivery organizations are increasingly operating on a subscription basis, so they would like to have their costs incurred on a similar basis -- it all makes sense across the services ecosystem.
Larbey: Yes, absolutely. That is becoming more and more important. Think back to the very first Internet video you watched -- a cat falling off a chair on YouTube. It didn't matter if it buffered; that wasn't relevant. Now, our tolerance for buffering just doesn't exist anymore, and we demand and expect the highest-quality video.
If TV in 2027 is going to be purely IP, then clearly that has to deliver exactly the same quality of experience as the broadcasting technologies. And that creates challenges. The biggest obvious example is if you go to any IP TV operator and look at their streamed video channel that is live versus the one on broadcast, there is a big delay.
So there is a lag between the live event and what you are seeing on your IP stream, which is 30 to 40 seconds. If you are in an apartment block, watching a live sporting event, and your neighbor sees it 30 to 40 seconds before you, that creates a big issue. A lot of the innovations we’re now doing with streaming technologies are to deliver that same broadcast experience.
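For context on where that 30-to-40-second lag comes from: in segmented streaming protocols such as HLS, a player buffers several whole segments before it starts playing, so the live-edge delay scales with segment length. A rough back-of-the-envelope sketch (the segment durations, buffer depths, and encode delay below are illustrative assumptions, not figures from the discussion):

```python
# Rough model of live-edge lag in segmented streaming (e.g., HLS).
# Segment duration, buffer depth, and encode delay are illustrative assumptions.

def stream_latency(segment_seconds, buffered_segments, encode_seconds=2.0):
    """Approximate lag behind live: encoding delay plus the player's buffer.

    A player typically waits for several full segments before starting
    playback, so latency scales with segment_seconds * buffered_segments.
    """
    return encode_seconds + segment_seconds * buffered_segments

# Classic configuration: 10-second segments, 3-segment startup buffer.
legacy = stream_latency(10.0, 3)      # ~32 seconds behind live
# Low-latency configuration: 2-second segments, 2-segment buffer.
low_latency = stream_latency(2.0, 2)  # ~6 seconds behind live

print(legacy, low_latency)
```

Shrinking segments and buffers, as low-latency streaming profiles do, is one of the levers for closing the gap with broadcast that Larbey describes.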
Gardner: We now also have to think about 4K resolution, the intelligent edge, no latency, and all with managed costs. Fortunately at this time HPE is also working on a lot of edge technologies, like Edgeline and Universal IoT, and so forth. There’s a lot more technology being driven to the edge for storage, for large memory processing, and so forth. How are these advances affecting your organization?
Optimal edge: functionality and storage
Larbey: There are two elements. The compute at the edge is absolutely critical. We are going to move all the intelligence into the network, and clearly you need to reduce the latency and be able to scale that functionality. That functionality used to be scaled across millions of households; now it has to be done in the network. The only way you can effectively build the network to handle that scale is to put as much functionality as you can at the edge of the network.
The HPE platforms allow you to deploy that compute and storage deep into the network, and they are absolutely critical for our success. We will run our CDN, our ad insertion, and all that capability as deeply into the network as an operator wants to go -- and certainly the deeper, the better.
The other thing we try to optimize all of the time is storage. One of the challenges with network-based recording -- especially in the US, due to content-use compliance regulations -- is that you have to store a copy per user. If, for example, both of us record the same program, there are two versions of that program in the cloud. That's clearly very inefficient.
The question is how to optimize that, and also how to support the just-in-time transcoding techniques that have been talked about for some time. Those would create the right bitrate and quality on the fly, so you don't have to store all the different formats, dramatically reducing storage costs.
The challenge has always been the central processing units (CPUs) needed to do that, and that's where HPE and the Moonshot platform, which has great compute density, come in. We have the Intel media library for doing the transcoding. It's a really nice platform. But we still wanted to get even more out of it, so at our Bell Labs research facility we developed a capability called skim storage, which, for a slight increase in storage, allows us to double the number of transcodes we can do on a single CPU.
That approach takes a really, really efficient hardware platform with nice technology and doubles the density we can get from it -- and that’s a big change for the business case.
Gardner: It's astonishing to think that that much encoding would need to happen on the fly for a mass market; that's a tremendous, intense compute requirement.
Larbey: Absolutely, and you have to be intelligent about it. At the end of the day, human behavior works in our favor. If you look at most programs that people record, if they do not watch within the first seven days, they are probably not going to watch that recording. That content in particular then can be optimized from a storage perspective. You still need the ability to recreate it on the fly, but it improves the scale model.
Gardner: So the more intelligent you can be about users' behavior and usage patterns, the more efficient you can be. Intelligence seems to be the real key here.
Larbey: Yes, even within the CDN itself today we have a number of algorithms that predict content popularity. We want to maximize the disk usage. We want the popular content on the disk, so what's the point of deleting a piece of popular content just because a piece of long-tail content has been requested? We run a lot of algorithms that try to predict content popularity so that we can make sure we are optimizing the hardware platform accordingly.
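A minimal sketch of what such a popularity-aware cache admission policy might look like -- this illustrates the general idea, not Nokia's actual algorithm; the simple request-count scoring is an assumption:

```python
# Sketch of a popularity-aware CDN cache, in the spirit of "don't evict
# popular content just because long-tail content was requested". The
# request-count scoring is an illustrative assumption.

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = {}   # content_id -> total request count
        self.cached = set()  # content currently held on disk

    def request(self, content_id):
        """Record a request; return True if served from cache."""
        self.requests[content_id] = self.requests.get(content_id, 0) + 1
        if content_id in self.cached:
            return True
        if len(self.cached) < self.capacity:
            self.cached.add(content_id)
        else:
            # Admit only if this item is now more popular than the
            # least-popular item on disk; otherwise leave the cache alone.
            victim = min(self.cached, key=lambda c: self.requests[c])
            if self.requests[content_id] > self.requests[victim]:
                self.cached.remove(victim)
                self.cached.add(content_id)
        return False

cache = PopularityCache(capacity=2)
for _ in range(5):
    cache.request("popular-show")
cache.request("also-popular")
cache.request("long-tail-movie")        # a one-off request...
print("popular-show" in cache.cached)   # ...does not evict the hit content
```

The key behavior is the admission check: a single long-tail request never displaces content with a stronger request history, which keeps the disk serving the popular catalog.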
Gardner: Perhaps we can deepen our knowledge of all this through some examples. Do you have some examples that demonstrate how your clients and customers are taking these new technologies and making better business decisions that help them with their cost structure -- but also deliver a far better user experience?
Larbey: One of our largest customers is Liberty Global, with a large number of cable operators in a variety of countries across Europe. They were enhancing an IP service. They started with an Internet-based CDN and that’s how they were delivering their service. But recognizing the importance of gaining more control over costs and the quality experience, they wanted to take that in-house and put the content on a private CDN.
We worked with them to deliver that technology. One of the things that they noticed very quickly, which I don't think they were expecting, was a dramatic reduction in the number of people calling in to complain because the stream had stopped or buffered. They enjoyed a big decrease in call-center calls as soon as they switched on our new CDN technology, which is quite an interesting use-case benefit.
We do a lot with Sky in the UK, which was also looking to migrate away from an Internet-based CDN service into something in-house so they could take more control over it and improve the users’ quality of experience.
One of our customers in Canada, TELUS, reached cost payback less than 12 months after deploying a private CDN, in terms of both the network savings and the Internet CDN cost savings.
Gardner: Before we close out, perhaps a look to the future and thinking about some of the requirements on business models as we leverage edge intelligence. What about personalization services, or even inserting ads in different ways? Can there be more of a two-way relationship, or a one-to-one interaction with the end consumers? What are the increased benefits from that high-performing, high-efficiency edge architecture?
VR vision and beyond
Larbey: All of that generates more traffic -- moving from standard-definition to high-definition to 4K, to beyond 4K -- it all generates more network traffic. You then take into account a 360-degree-video capability and virtual reality (VR) services, which is a focus for Nokia with our Ozo camera, and it’s clear that the data is just going to explode.
So being able to optimize, and continue to optimize, through new codec and streaming technologies -- to constrain the growth of video demands on the network -- is essential; otherwise the traffic would just explode.
There is lot of innovation going on to optimize the content experience. People may not want to watch all their TV through VR headsets. That may not become the way you want to watch the latest episode of Game of Thrones. However, maybe there will be a uniquely created piece of content that’s an add-on in 360, and the real serious fans can go and look for it. I think we will see new types of content being created to address these different use-cases.
Learn here why a new era of responsive and dynamic IT infrastructure is demanded, and how one high-performance digital manufacturing leader aims to get there sooner rather than later.
Here to describe how an entertainment industry innovator leads the charge for bleeding-edge IT-as-a-service capabilities is Jeff Wike, CTO of DreamWorks Animation in Glendale, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Tell us why the older way of doing IT infrastructure and hosting apps and data just doesn't cut it anymore. What has made that run out of gas?
Wike: You have to continue to improve things. We are in a world where technology is advancing at an unbelievable pace. The amount of data, the capability of the hardware, and the intelligence of the infrastructure are all growing. In order for any business to stay ahead of the curve -- to really drive value into the business -- it has to continue to innovate.
Gardner: IT has become more pervasive in what we do. I have heard you all refer to yourselves as digital manufacturing. Are the demands of your industry also a factor in making it difficult for IT to keep up?
Wike: When I say we are a digital manufacturer, it's because we are a place that manufactures content, whether it's animated films or TV shows; that content is all made on the computer. An artist sits in front of a workstation or a monitor and is basically building these digital assets, which we put through simulations and rendering so that in the end they come together to produce a movie.
That's all about manufacturing, and we actually have a pipeline, but it's really like an assembly line. I was looking at a slide today about Henry Ford coming up with the first assembly line; it's exactly what we are doing, except instead of adding a car part, we are adding a character, we're adding hair to a character, we're adding clothes, we're adding an environment, and we're putting things into that environment.
We are manufacturing that image, that story, in a linear way, but also in an iterative way. We are constantly adding more details as we embark on that process of three to four years to create one animated film.
Gardner: Well, it also seems that we are now taking that analogy of the manufacturing assembly line to a higher plane, because you want to have an assembly line that doesn't just make cars -- it can make cars and trains and submarines and helicopters -- but you don't have to change the assembly line; you have to adjust it and utilize it properly.
So it seems to me that we are at perhaps a cusp in IT where the agility of the infrastructure and its responsiveness to your workloads and demands is better than ever.
Greater creativity, increased efficiency
Wike: That's true. If you think about this animation process, or any digital manufacturing process, one issue that you have to account for is legacy workflows, legacy software, and legacy data formats -- all these things are inhibitors to innovation. There are a lot of tools involved; we actually write our own software, and we're very involved in computer science projects at the studio.
We’ll ask ourselves, “How do you innovate? How can you change your environment to be able to move forward and innovate and still carry around some of those legacy systems?”
And one of the things we've done over the past couple of years is start to re-architect all of our software tools to take advantage of massive multi-core processing and give artists interactivity in their creative process. It's about iterations: how many things can I show a director, and how quickly can I create the scene to get it approved so that I can hand it off to the next person? There are two things that you get out of that.
One, you can explore more and add more creativity. Two, you can drive efficiency, because it's all about how much time, how many people are working on a particular project, and how long it takes -- all of which drives up the costs. So you now have these choices where you can add more creativity or -- because of the compute infrastructure -- you can drive efficiency into the operation.
So where does the infrastructure fit into that, given that we talk about tools and the ability to make those tools quicker, faster, more real-time? We conducted a project to create a middleware layer between the running applications and the hardware so that we could start to do data abstraction. That lets us be more flexible about where the data is, where the processing is, and what the systems underneath it all are. Until we could separate the applications through that layer, we weren't really able to do anything down at the core.
Core flexibility, fast
Now that we have done that, we are attacking the core. We are looking at our ability to replace it with new compute and to add new templates with all the security built in -- we want that in our infrastructure. We want to be able to change how we are using that infrastructure -- examine usage patterns, the workflows -- and be able to optimize.
Before, if we wanted to do a new project, we'd say, "Well, we know that this project takes x amount of infrastructure. So if we want to add a project, we need 2x," and that makes a lot of sense. So we would build to peak. If in the last six months of a show we are going to need 30,000 cores to be able to finish it, we say, "Well, we better have 30,000 cores available, even though there might be times when we are only using 12,000 cores." So we were buying to peak, and that's wasteful.
What we wanted was to be able to take advantage of those valleys, if you will, as an opportunity -- the opportunity to do other types of projects. But because our infrastructure was so homogeneous, we really didn't have the ability to do a different type of project. We could create another movie if it was very much the same as a previous film from an infrastructure-usage standpoint.
By now having composable, or software-defined infrastructure, and being able to understand what the requirements are for those particular projects, we can recompose our infrastructure -- parts of it or all of it -- and we can vary that. We can horizontally scale and redefine it to get maximum use of our infrastructure -- and do it quickly.
Gardner: It sounds like you have an assembly line that’s very agile, able to do different things without ripping and replacing the whole thing. It also sounds like you gain infrastructure agility to allow your business leaders to make decisions such as bringing in new types of businesses. And in IT, you will be responsive, able to put in the apps, manage those peaks and troughs.
Does having that agility not only give you the ability to make more and better movies with higher utilization, but also gives perhaps more wings to your leaders to go and find the right business models for the future?
Wike: That’s absolutely true. We certainly don't want to ever have a reason to turn down some exciting project because our digital infrastructure can’t support it. I would feel really bad if that were the case.
In fact, that was the case at one time, way back when we produced Spirit: Stallion of the Cimarron. Because it was such a big movie from a consumer products standpoint, we were asked to make another movie for direct-to-video. But we couldn't do it; we just didn’t have the capacity, so we had to just say, “No.” We turned away a project because we weren’t capable of doing it. The time it would take us to spin up a project like that would have been six months.
The world is great for us today, because people want content -- they want to consume it on their phone, on their laptop, on the side of buildings and in theaters. People are looking for more content everywhere.
Yet projects for varied content platforms require different amounts of compute and infrastructure, so we want to be able to create content quickly and avoid building to peak, which is too expensive. We want to be able to be flexible with infrastructure in order to take advantage of those opportunities.
Gardner: How is the agility in your infrastructure helping you reach the right creative balance? I suppose it’s similar to what we did 30 years ago with simultaneous engineering, where we would design a physical product for manufacturing, knowing that if it didn't work on the factory floor, then what's the point of the design? Are we doing that with digital manufacturing now?
Artifact analytics improve usage, rendering
Wike: It’s interesting that you mention that. We always look at budgets, and budgets can be money budgets, it can be rendering budgets, it can be storage budgets, and networking -- I mean all of those things are commodities that are required to create a project.
Artists, managers, production managers, directors, and producers are all really good at managing those projects if they understand what the commodity is. Years ago we used to complain about disk space: “You guys are using too much disk space.” And our production department would say, “Well, give me a tool to help me manage my disk space, and then I can clean it up. Don’t just tell me it's too much.”
One of the initiatives that we have incorporated in recent years is data analytics. We re-architected our software and decided we would re-instrument everything, so we started collecting artifacts about rendering and usage. Every night we run every digital asset that has been created through our rendering, and we also collect analytics about it. We now collect 1.2 billion artifacts a night.
And we correlate that information to a specific asset, such as a character, basket, or chair -- whatever it is that I am rendering -- as well as where it’s located, which shot it’s in, which sequence it’s in, and which characters are connected to it. So, when an artist wants to render a particular shot, we know what digital resources are required to be able to do that.
One of the things that’s wasteful of digital resources is either having a job that doesn't fit the allocation that you assign to it, or not knowing when a job is complete. Some of these rendering jobs and simulations will take hours and hours -- it could take 10 hours to run.
At what point is it stuck? At what point do you kill that job and restart it because something got wedged and it was a dependency? And you don't really know, you are just watching it run. Do I pull the plug now? Is it two minutes away from finishing, or is it never going to finish?
Just the facts
Before, an artist would go in every night and conduct a test render. And they would say, “I think this is going to take this much memory, and I think it's going to take this long.” And then we would add a margin of error, because people are not great judges, as opposed to a computer. This is where we talk about going from feeling to facts.
So now we don't have artists do that anymore, because we are collecting all that information every night. We have machine learning that then goes in and determines requirements. Even though a certain shot has never been run before, it is very similar to another previous shot, and so we can predict what it is going to need to run.
Now, if a job is stuck, we can kill it with confidence. By doing that machine learning and taking the guesswork out of the allocation of resources, we were able to save 15 percent of our render time, which is huge.
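As a rough illustration of the idea -- predicting a new shot's requirements from similar past shots -- here is a k-nearest-neighbor sketch. The features, sample numbers, and the 1.5x kill threshold are assumptions for demonstration, not DreamWorks' actual model:

```python
# Illustrative sketch of predicting a new shot's render needs from the most
# similar past shots. Features, history values, and the kill margin are
# assumptions for demonstration only.
import math

history = [
    # (shot features: asset_count, frame_count, sim_layers) -> (GB, hours)
    ((120, 96, 2), (14.0, 3.5)),
    ((300, 120, 4), (32.0, 8.0)),
    ((80, 48, 1), (9.0, 1.5)),
]

def predict(features, k=2):
    """k-nearest-neighbor estimate of memory (GB) and runtime (hours)."""
    by_distance = sorted(
        history,
        key=lambda rec: math.dist(features, rec[0]),
    )
    nearest = [resources for _, resources in by_distance[:k]]
    mem = sum(m for m, _ in nearest) / k
    hrs = sum(h for _, h in nearest) / k
    return mem, hrs

mem, hrs = predict((110, 90, 2))
# Allocate with a margin, and kill the job if it runs well past the
# prediction, since by then it has likely wedged.
kill_after = hrs * 1.5
print(round(mem, 1), round(hrs, 2), round(kill_after, 2))
```

The payoff is exactly the one Wike describes: with a data-driven estimate in hand, a stuck job can be killed with confidence instead of watched indefinitely.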
I recently listened to a gentleman talk about what a difference a 1 percent improvement would make. So 15 percent is huge: that's 15 percent less money you have to spend, 15 percent faster time for a director to be able to see something, and 15 percent more iterations. That was really huge for us.
Gardner: It sounds like you are in the digital manufacturing equivalent of working smarter and not harder. With more intelligence, you can free up the art, because you have nailed the science when it comes to creating something.
Creative intelligence at the edge
Wike: It's interesting; we talk about intelligence at the edge and the Internet of Things (IoT), and that sort of thing. In my world, the edge is actually an artist. If we can take intelligence about their work, the computational requirements that they have, and if we can push that data -- that intelligence -- to an artist, then they are actually really, really good at managing their own work.
It's only a problem when they don't have any idea that six months from now it's going to cause a huge increase in memory usage or render time. When they don't know that, it's hard for them to self-manage. But now we have artists who can access Tableau reports every day and see exactly what the memory usage or compute usage of any of the assets they've created is, and they can correct it immediately.
Megamind, a film DreamWorks Animation released several years ago, was made before we had the data analytics in place, and the studio encountered massive rendering spikes on certain shots. We really didn't understand why.
After the movie was complete, when we could go back and analyze printouts of logs, we determined that these peaks in rendering resources were caused by the main character's watch. Whenever the watch was in a frame, the render times went up. We looked at the models, and well-intentioned artists had modeled every gear of the watch; it was just a huge, heavy asset to render.
By then it was too late to do anything about it. But if an artist were to create that watch today, they would quickly find out that they had really over-modeled it, and we would go in and reduce that asset down, because it's really not a key element of the story. They can do that today, which is really great.
Gardner: I am a big fan of animated films, and I am so happy that my kids take me to see them because I enjoy them as much as they do. When you mention an artist at the edge, it seems to me it’s more like an army at the edge, because I wait through the end of the movie, and I look at the credits scroll -- hundreds and hundreds of people at work putting this together.
So you are dealing with not just one artist making a decision, you have an army of people. It's astounding that you can bring this level of data-driven efficiency to it.
Movie-making’s mobile workforce
Wike: It becomes so much more important, too, as we become a more mobile workforce.
Now it becomes imperative to obtain information about what those artists are doing so that they can collaborate, and to know what value we are really getting from it. So much information is available now; if you capture it, you can learn so much about the creative process and use it to drive efficiency and value into the entire business.
Gardner: Before we close out, maybe a look into the crystal ball. With things like auto-scaling and composable infrastructure, where do we go next with computing infrastructure? As you say, it's now all these great screens in people's hands, handling high-definition, all the networks are able to deliver that, clearly almost an unlimited opportunity to bring entertainment to people. What can you now do with the flexible, efficient, optimized infrastructure? What should we expect?
Wike: There's an explosion in content and an explosion in delivery platforms. We are exploring all kinds of different mediums; there’s really no limit to where and how one can create great imagery. The ability to do that -- to not have to say “No” to any project that comes along -- is going to be a great asset.
We always say that we don't know in the future how audiences are going to consume our content. We just know that we want to be able to supply that content and ensure that it’s the highest quality that we can deliver to audiences worldwide.
Gardner: It sounds like you feel confident that the infrastructure you have in place is going to be able to accommodate whatever those demands are. The art and the economics are the variables, but the infrastructure is not.
Wike: Having a software-defined environment is essential. I came from the software side; I started as a programmer, so I am coming back into my element. I really believe that now that you can compose infrastructure, you can change things with software on demand, without having people go in and rewire or re-stack. And with machine learning, we’re able to learn what those demands are.
I want the computers to actually optimize and compose themselves so that I can rest knowing that my infrastructure is changing, scaling, and flexing in order to meet the demands of whatever we throw at it.
The next BriefingsDirect digital business insights panel discussion focuses on the evolving role of women in business leadership. We’ll explore how pervasive business networks are changing relationships, business leadership requirements, and opportunities for women.
To learn more about the transformation of talent management strategies as a result of digital business and innovation, please join me in welcoming our guests, Alicia Tillman, Chief Marketing Officer at SAP Ariba, and Lisa Skeete Tatum, Co-founder and CEO of Landit in New York. The panel was recorded in association with the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Alicia, looking at a confluence of trends, we have the rise of business networks and we have an advancing number of women in business leadership roles. Do they have anything to do with one another? What's the relationship?
Tillman: It is certainly safe to say that there is a relationship between the two. Networks historically connected businesses mostly from a transactional standpoint. But networks today are so much more about connecting people. And not only connecting them in a business context, but also from a relationship standpoint as well.
As much networking and influence happens in a digital network as from meeting somebody at an event, conference, or forum. Networking has really taken off in recent years as a way to connect quickly and broadly -- across geographies and industries. There is nothing that brings you speed like a network, and that’s why there is such a strong correlation between how digital networking has taken off -- and what a true technical network platform can allow.
Gardner: When people first hear “business networks,” they might think about transactions and applications talking to applications. But, as you say, this has become much broader in the last few years; business networks are really about social interactions, collaboration, and even joining companies culturally.
How has that been going? Has this been something that’s been powerful and beneficial to companies?
Tillman: It’s incredibly powerful and beneficial. If you think about how buying habits are these days, buyers are very particular about the goods that they are interested in, and, frankly, the people that they source from.
If I look at my buying population at SAP Ariba in particular, there is a tremendous movement toward sustainability and fair-trade types of responsibilities: wanting to source goods from minority-owned businesses, wanting to source only organic or fair-trade products, and wanting to partner only with organizations whose supply chains distribute products from locations in the world where working conditions are safe and employees are paid fairly.
A network allows for that; the SAP Ariba Network certainly allows for that, as we can match suppliers directly with what those incredibly diverse buyer needs are in today’s environment.
Gardner: Lisa, we just heard from Alicia about how it's more important that companies have a relationship with one another and that they actually look for culture and character in new ways. Tell us about Landit, and how you're viewing this idea of business networks changing the way people relate to their companies and even each other?
Skeete Tatum: Our goal at Landit is to democratize career success for women around the globe. We have created a technology platform that not only increases the success and engagement of women in the workplace, but it also enables companies in this new environment to attract, develop, and retain high-potential diverse talent.
We do that by providing each woman with a personalized playbook, in the spirit of one-size-fits-one. That empowers them with access to the tools, the resources, the know-how, and, yes, the human connections that they need to more successfully navigate their paths.
It’s really in response to the millions of women who will find themselves at an inflection point; whether they are in a company that they love but are just trying to figure out how to more successfully navigate there, or they may be feeling a little stuck and are not sure how to get out. The challenge is: “I am motivated, I have the skills, I just don’t know where to start.”
We have really focused on knitting what we believe are those key elements together -- leveraged by technology that actually guides them. But we find that companies in this new environment are often overwhelmed and trying to figure out a way to manage this new diverse workforce in this era of connectedness. So we give them a turnkey, one-size-fits-one solution, too.
As Alicia mentioned, in this next stage of collaborative business, there are really two things. One, we are more networked and more visible than ever before, which is great, because it’s created more opportunities and flexibility than we have seen -- not to mention more access. However, those opportunities are highly dependent on how someone showcases their value, their contribution, and their credibility, which makes it even more important to cultivate both your brand and your network. Success goes beyond individual capability to the sponsorship and support of a strong network.
The second thing I would say, that Alicia also mentioned, is that today’s business environment -- which is more global, more diverse in its tapestry -- requires businesses to create an environment where everyone feels valued. People need to feel like they can bring the full measure of their talent and passion to the workplace. Companies want amazing talent to find a place at their company.
Gardner: If I’m at a company looking to be more diverse, how would I use Landit to accomplish that? Also, if I were an individual looking to get into the type of company that I want to be involved with, how would I use Landit?
Connecting supply and demand for talent
Skeete Tatum: As an individual, when you come on to Landit, we actually give you one of the key ingredients for success. Because we often don’t know what we don’t know, we knit together the first step, of “Where do I fit?” If you are not in a place that fits with your values, it’s not sustainable.
So we help you figure out what is it that fits with “all of me,” and we then connect you to those opportunities. Many times with diversity programs, they are focused just on the intake, which is just one component. But you want people to thrive when they get there.
And so, whether it is building your personal brand, or building your board of advisors, or continuing your skill development in a personalized, relevant way -- or access to coaching, because many of us don’t have that unless we are in the C-suite or on the way there -- we are able to knit that together in a way that is relevant and right-sized for the individual.
For the company, we give them a turnkey solution to invest in a scalable way, to touch more lives across their company, particularly in a more global environment. Rather than having to place multiple bets, they place one bet with Landit. We leverage that one-size-fits-one capability with things that we all know are keys to success. We are then able to have them deliver that again, whether it is to the newly minted managers or people they have just acquired or maybe they are leaders that they want to continue to invest in. We enable them to do that in a measurable way, so that they can see the engagement and the success and the productivity.
Gardner: Alicia, I know that SAP Ariba is already working to provide services to those organizations that are trying to create diversity and inclusion within their supply chains. How do you see Landit fitting into the business network that SAP Ariba is building around diversity?
Tillman: First, the SAP Ariba Network is the largest business-to-business (B2B) network on the planet. We connect more than 2.5 million companies that transact over $1 trillion in commerce annually. As you can imagine, there is incredible diversity in the buying requirements that exist amongst those companies that are located in all parts of the world and work in virtually every industry.
One of the things that we offer as an organization is a Discovery tool. When you have a network that is so large, it can be difficult and a bit daunting for a buyer to find the supplier that meets their business requirements, and for a supplier to find their ideal buyer. So our SAP Ariba Discovery application is a matching service, if you will, that enables a buyer to list their requirements and then let the tool work for them, matching them to the suppliers that best meet those requirements, whatever they may be.
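In spirit, that kind of matching can be sketched as a filter over declared supplier attributes. This is a toy illustration, not the actual SAP Ariba Discovery algorithm; the supplier names and attributes here are invented:

```python
def match_suppliers(requirements, suppliers):
    """Return the suppliers whose declared attributes cover every buyer
    requirement -- a toy stand-in for a network matching service."""
    needed = set(requirements)
    return [name for name, attrs in suppliers.items() if needed <= set(attrs)]

# Hypothetical supplier catalog: name -> declared attributes
suppliers = {
    "AcmeOrganics": {"organic", "fair-trade", "minority-owned"},
    "BulkGoodsCo": {"low-cost"},
    "GreenTrade": {"organic", "fair-trade"},
}

# A buyer listing fair-trade and organic requirements gets two matches
print(match_suppliers(["organic", "fair-trade"], suppliers))
```

A production service would also rank matches and handle partial fits, but the core idea is the same: the buyer states requirements once, and the network does the search.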
I’m very proud to have Lisa present at our Women in Leadership Forum at SAP Ariba LIVE 2017. I am showcasing Lisa not only because of her entrepreneurial spirit and the success that she’s had in her career -- which I think will be very inspirational and motivational to women who are looking to continue to develop their careers -- but because she has also created a powerful platform with Landit. For women, it provides a digital environment that allows them to harness precisely what is important to them when it comes to career development, and then offers the coaching in the Landit environment to enable that.
Landit also offers companies an ability to support their goals around gender diversity. They can look at the Landit platform and source talent that is not only very focused on careers -- but also supports a company in their diversity goals. It’s a tremendous capability that’s necessary and needed in today’s environment.
Gardner: Lisa, what has changed in the past several years that has prompted this changed workforce? We have talked about the business network as an enabler, and we have talked about social networks connecting people. But what's going to be different about the workforce going forward?
Collaborative visibility via networking
Skeete Tatum: There are three main things. First, there is a recognition that diversity is not a “nice to have” but a “must-have” from a competitive standpoint, to acquire the best ideas and gain a better return on capital. So it’s a business imperative to invest in and value diversity within one's workforce. Second, businesses are continuing to shift toward matching opportunities with the people who are best able to do the job, but in a less-biased way. Third, business-as-usual isn’t going to work in this new reality of career management.
It’s no longer one-directional or bi-directional, with just the manager or the employee; it’s much more collaborative and driven by the individual. All of these things mean there is much more opportunity, much more freedom. But how do you anchor that with a process, a framework, and the connectivity that enables someone to more successfully navigate the new environment and new opportunities? How do you leverage and build your network? Everyone knows they need to do it, but many people don’t know how. And when your brand is even more important, and visibility is more important, how do you develop and communicate your accomplishments and your value? It is the confluence of those things coming together that creates this new world order.
Gardner: Alicia, one of the biggest challenges for most businesses is getting the skills that they need in a timely fashion. How do we get past the difficulty of best matching hiring? How do we use business networks to help solve that?
Tillman: This is the beauty of technology. Technology is an enabler in business to form relationships more quickly, and to transact more quickly. Similarly, technology also provides a network to help you grow from a development standpoint. Lisa’s organization, Landit, is one example of that.
Within SAP Ariba we are very focused on closing the gap in gaining the skills that are necessary to be successful in today’s business environment. I look at the offering of SAP SuccessFactors, which is focused on empowering the human capital management (HCM) organization to lead performance management and career development. And SAP Fieldglass helps companies find and source the right temporary labor to service their most pressing projects. Combine all that with a business network, and there is no better place in today’s environment to find something you need -- and find it quickly.
But it all comes down to the individual’s desire to want to grow their skills, or find new skills, to get out of their comfort zone and try something new. I don’t believe there is a shortage of tools or applications to help enable that growth and talent. It comes down to the individual’s desire to want to grab it and go after it.
Maximize your potential with technology
Skeete Tatum: I couldn’t agree more. The technology and the network are what create the opportunity. In the past, there may have been a skills gap, but you have to be able to label it, you have to be able to identify it in a way that is relevant to the individual. As Alicia said, there are many opportunities out there for development, but how do you parse that down and deliver it to the individual in a way that is relevant -- and that’s actionable? That’s where a network comes in and where the power of one can be leveraged in a scalable way.
Now is probably one of the best times to invest in and have an individual grow to reach their full potential. The desire to meet their goals can be leveraged by technology in a very personal way.
Gardner: As we have been hearing here at SAP Ariba LIVE 2017, more and more technologies along the lines of artificial intelligence (AI) and machine learning (ML) -- which take advantage of all the data, analyze it, and make it actionable -- can now be brought to bear on this set of issues of matching workforce requirements with skill sets.
Where should we expect to see these technologies reduce the complexity and help companies identify the right workforce, and the workforce identify the right companies?
Skeete Tatum: Having the data and being able to quantify and qualify it gives you the power to set a path forward. The beauty is that it actually enables everyone to have the opportunity to contribute, the opportunity to grow, and to create a path and a sense of belonging by having a way to get there. From our perspective, it is that empowerment and that ownership -- but with the support of the network from the overall organization -- that enables someone to move forward. And it enables the organization to be more successful and more embracing of this new workforce, this diverse talent.
Tillman: Individuals should feel more empowered today than ever before to really take their career development to unprecedented levels. There are so many technologies, so many applications out there to help coach you on every level. It’s up to the individual to truly harness what is standing in front of them and go after it.
Gardner: Lisa, what should you be thinking about from a personal branding perspective when it comes to making the best use of tools like Landit and business networks?
Skeete Tatum: The first thing is that people actually have to think of themselves as a brand, as opposed to thinking that they are bragging or being boastful. The most important brand you have is the brand of you.
Second, people have to realize that this notion of building your brand is something that you nurture and it develops over time. What we believe is important is that we have to make it tangible, we have to make it actionable, and we have to make it bite-size, otherwise it seems overwhelming.
So we have defined what we believe are the 12 key elements of a successful brand: Have you been visible? Do you have a strategic plan for yourself? Are you seeking feedback? Do you have a regular cadence of interaction with your network? And so on. Knowing what to do, how to do it, at what cadence, and at what level is what enables someone to move forward. And in today’s environment, again, it’s even more important.
Pique their curiosity by promoting your own
Tillman: Employers want to be sure that they are attracting candidates and employing candidates that are really invested in their own development. An employer operates in the best interest of the employee in terms of helping to enable tools and allow for that development to occur. At the same time, where candidates can really differentiate themselves in today’s work environment is when they are sitting across the table and they are in that interview. It's really important for a candidate to talk about his or her own development and what are they doing to constantly learn and support their curiosity.
Employers want curious people. They want those that are taking advantage of development and tools and learning, and these are the things that I think set people apart from one another: when they know that individually they are going to go after learning opportunities and push themselves out of their comfort zone to take themselves -- and ultimately the companies that employ them -- to the next level.
Gardner: Before we close out, let’s take a peek into the crystal ball. What, Alicia, would be your top two predictions given that we are just on sort of an inflection point with this new network, with this new workforce and the networking effect for it?
Tillman: First, technology is only going to continue to improve. Networks have historically enabled buyers and sellers to come together and transact to build their organizations and support growth, but networks are taking on a different form.
Technology is going to continue to enable priorities professionally and priorities personally. Technology is going to become a leading enabler of a person’s professional development.
Second, individuals are going to set themselves apart from others by their desire and their hunger to really grab hold of that technology. When you think about decision-making among companies in terms of candidates they hire and candidates they don’t, employers are going to report back and say, “One of the leading reasons why I selected one candidate over another is because of their desire to learn and their desire to grab hold of technologies and networks that were standing in front of them to bring their careers to an unprecedented level.”
Gardner: Lisa, what are your top two predictions for the new workforce and particularly for diversity playing a bigger role?
Skeete Tatum: Technology is the ultimate leveler of the playing field. It enables companies as well as the individual to make decisions based on things that matter. That is what enables people to bring their full selves, the full measure of their talent, to the workplace.
In terms of networks in particular, they have always been a key element to success but now they are even more important. It actually poses a special challenge for diverse talent. They are often not part of the network, and they may have competing personal responsibilities that make the investment of the time and the frequency in those relationships a challenge.
Sometimes there is a discomfort with how to do it. We believe that through technology people will have to get comfortable with being uncomfortable. They need to learn not only how to codify their network, but also how to reach the right person at the right cadence -- and that know-how, that guidance, can be delivered through technology.
The next BriefingsDirect digital business insights interview explores the successful habits, practices, and culture that define highly effective procurement organizations.
We'll uncover unique new research that identifies and measures how innovative companies have optimized their practices to overcome the many challenges facing business-to-business (B2B) commerce.
To learn more about the traits and best practices of the most successful procurement organizations, please join Kay Ree Lee, Director of Business Analytics and Insights at SAP Ariba. The interview was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Procurement is more complex than ever, supply chains stretch around the globe, regulation is on the rise, and risk is heightened across many fronts. Despite these challenges, innovative companies have figured out how to succeed, and you have uncovered some of their secrets through your Annual Benchmarking Survey. Tell us about your research and your findings.
Lee: Every year we conduct a large benchmark program for our customers that combines a traditional survey with data from the procurement applications, as well as from the business network.
This past year, more than 200 customers participated, covering more than $400 billion in spend. We analyzed the quantitative and qualitative responses of the survey and identified the intersection between those responses for top performers compared to average performers. This has allowed us to draw correlations between what top performers did well and the practices that drove those achievements.
Gardner: What’s changed from the past, what are you seeing as long-term trends?
Lee: There are three things that are quite different from when we last talked about this a year ago.
The number one trend that we see is that digital procurement is gaining momentum quickly. A lot of organizations are now offering self-service tools to their internal stakeholders. These self-service tools enable the user to evaluate and compare item specifications and purchase items in an electronic marketplace, which allows them to operate 24x7, around-the-clock. They are also utilizing digital networks to reach and collaborate with others on a larger scale.
The second trend that we see is that while risk management is generally acknowledged as important and critical, for the average company a large proportion of spend is not managed. Our benchmark data indicates that an average company manages 68% of its spend, which leaves 32% unmanaged. If this spend is not managed, the average company is also probably not managing the associated risk. So, what happens when something unexpected occurs within that unmanaged spend?
The third trend that we see is related to compliance management. We see compliance management as a way for organizations to deliver savings to the bottom line. Capturing savings through sourcing and negotiation is a good start, but at the end of the day, eliminating loopholes through a focus on implementation and compliance management is how organizations deliver and realize negotiated savings.
Gardner: You have uncovered some essential secrets -- or the secret sauce -- behind procurement success in a digital economy. Please describe those.
Five elements driving procurement processes
Lee: From the data, we identified five key takeaways. First, we see that procurement organizations continue to expand their sphere of influence to greater depth and quality within their organizations. This is important because it shows that the procurement organization and the work that procurement professionals are involved in matters and is appreciated within the organization.
The second takeaway is that – while cost reduction savings is near and dear to the heart of most procurement professionals -- leading organizations are focused on capturing value beyond basic cost reduction. They are focused on capturing value in other areas and tracking that value better.
The third takeaway is that digital procurement is firing on all cylinders and is front and center in people's minds. This was reflected in the transactional data that we extracted.
The fourth takeaway is related to risk management. We see this becoming a key focus area in its own right, rather than just tracking news related to your suppliers.
The fifth takeaway is that compliance management -- closing the purchasing loopholes -- is what will help procurement deliver bottom-line savings.
Gardner: What are some of the best practices that are driving procurement organizations to have a strategic impact at their companies, culturally?
Lee: To have a strategic impact in the business, procurement needs to be proactive in engaging the business. They should have a mentality of helping the business solve business problems as opposed to asking stakeholders to follow a prescribed procurement process. Playing a strategic role is a key practice that drives impact.
They should also focus on broadening the value proposition of procurement. We see leading organizations placing emphasis on contributing to revenue growth, or increasing their involvement in product development, or co-innovation that contributes to a more efficient and effective process.
Another practice that drives strategic impact is the ability to utilize and adopt technology to your advantage through the use of digital networks, system controls to direct compliance, automation through workflow, et cetera.
These are examples of practices and focus areas that are becoming more important to organizations.
Using technology to track technology usage
Gardner: In many cases, we see the use of technology having a virtuous adoption cycle in procurement. So the more technology used, the better they become at it, and the more technology can be exploited, and so on. Where are we seeing that? How are leading organizations becoming highly technical to gain an advantage?
Lee: Companies that adopt new technology capabilities are able to elevate their performance and differentiate themselves through their capabilities. This is also just a start. Procurement organizations are pivoting towards advanced and futuristic concepts, and leaving behind the single-minded focus on cost reduction and cost efficiency.
Digital procurement utilizing electronic marketplaces, virtual catalogs, gaining visibility into the lifecycle of purchase transactions, predictive risk management, and utilizing large volumes of data to improve decision-making – these are key capabilities that benefit the bold and the future-minded. This enables the transformation of procurement, and forms new roles and requirements for the future procurement organization.
Gardner: We are also seeing more analytics become available as we have more data-driven and digital processes. Is there any indication from your research that procurement people are adopting data-scientist ways of thinking? How are they using analysis more now that the data and analysis are available through the technology?
Lee: You are right. The users of procurement data want insights. We are working with a couple of organizations on co-innovation projects. These organizations actively research, analyze, and use their data to answer questions such as:
- How does an organization validate that the prices they are paying are competitive in the marketplace?
- After an organization conducts a sourcing event and implements the categories, how do they actually validate that the price paid is what was negotiated?
- How do we categorize spend accurately, particularly if a majority of spend is services spend where the descriptions are non-standard?
- Are we using the right contracts with the right pricing?
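The second question in the list -- validating that the price paid is what was negotiated -- can be sketched as a comparison of invoice lines against contracted prices. The field names and the 2% tolerance here are assumptions for illustration, not a specific SAP Ariba feature:

```python
def price_variances(invoices, contracts, tolerance=0.02):
    """Flag invoice lines whose unit price exceeds the contracted price
    by more than `tolerance` (2% by default)."""
    flagged = []
    for inv in invoices:
        contracted = contracts.get(inv["item"])
        if contracted is None:
            continue  # no contract on file to compare against
        if inv["unit_price"] > contracted * (1 + tolerance):
            flagged.append((inv["item"], inv["unit_price"], contracted))
    return flagged

# Hypothetical contracted prices and invoice lines
contracts = {"laptop": 800.0, "toner": 30.0}
invoices = [
    {"item": "laptop", "unit_price": 812.0},  # within the 2% tolerance
    {"item": "toner", "unit_price": 36.0},    # 20% over contract
]
print(price_variances(invoices, contracts))
```

Run over every transaction rather than a sample, a check like this is how negotiated savings are confirmed to actually reach the bottom line.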
As you can imagine, when people enter transactions in a system, not all of it is contract-based or catalog-based. There is still a lot of free-form text. But if you extract all of that data, cleanse it, mine it, and make sense out of it, you can then make informed business decisions and create valuable insights. This goes back to the managing compliance practice we talked about earlier.
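A minimal sketch of that categorization step -- cleansing free-form text and mapping it to spend categories by keyword -- might look like the following. The categories and keywords are invented, and a real system would use richer classification than simple keyword matching:

```python
def categorize_spend(description, rules):
    """Assign a spend category by keyword match over free-form text;
    fall back to 'Uncategorized' when nothing matches."""
    text = description.lower()  # basic cleansing: normalize case
    for category, keywords in rules.items():
        if any(kw in text for kw in keywords):
            return category
    return "Uncategorized"

# Hypothetical category rules
rules = {
    "IT Services": ["consulting", "software", "license"],
    "Facilities": ["janitorial", "cleaning", "hvac"],
}

print(categorize_spend("Annual SOFTWARE license renewal", rules))
print(categorize_spend("Misc. site expenses", rules))
```

Even a crude pass like this surfaces how much spend lands in "Uncategorized," which is exactly the unmanaged portion discussed earlier.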
They are also looking to answer questions like, how do we scale supplier risk management to manage all of our suppliers systematically, as opposed to just managing the top-tier suppliers?
These two organizations are taking data analysis further, creating advantages that begin to imbue excellence into modern procurement and across all of their operations.
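One of the questions raised above -- validating that the price paid matches what was negotiated -- can be sketched as a simple reconciliation over transaction data. The field names, contract prices, and tolerance threshold here are hypothetical, chosen only to show the shape of the check.

```python
# Hypothetical sketch: flag invoice lines whose unit price exceeds the
# negotiated contract price by more than a tolerance.
NEGOTIATED = {("SUP-001", "laptop"): 950.00, ("SUP-002", "toner"): 42.00}

def price_variances(invoice_lines, tolerance=0.02):
    """Return lines paid more than `tolerance` above the negotiated price."""
    flagged = []
    for line in invoice_lines:
        key = (line["supplier"], line["item"])
        negotiated = NEGOTIATED.get(key)
        if negotiated is None:
            continue  # no contract price on file for this item
        overage = (line["unit_price"] - negotiated) / negotiated
        if overage > tolerance:
            flagged.append({**line, "overage_pct": round(overage * 100, 1)})
    return flagged

lines = [
    {"supplier": "SUP-001", "item": "laptop", "unit_price": 1020.00, "qty": 10},
    {"supplier": "SUP-002", "item": "toner", "unit_price": 42.50, "qty": 100},
]
print(price_variances(lines))  # only the laptop line is flagged (~7.4% over)
```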
Gardner: Kay Ree, now that you have been tracking this Benchmark Survey for a few years, and looking at this year's results, what would you recommend that people do based on your findings?
Future focus: Cost-reduction savings and beyond
Lee: There are several recommendations that we have. One is that procurement should continue to expand their span of influence across the organization. There are different ways to do this but it starts with an understanding of the stakeholder requirements.
The second is about capturing value beyond cost-reduction savings. From a savings perspective, the recommendation we have is to continue to track sourcing savings -- because cost-reduction savings are important. But there are other measures of value to track beyond cost savings. That includes things like contribution to revenue, involvement in product development, et cetera.
The third recommendation relates to adopting digital procurement by embracing technology. For example, SAP Ariba has recently introduced some innovations. I think the user really has an advantage in terms of going out there, evaluating what is out there, trying it out, and then seeing what works for them and their organization.
As organizations expand their footprint globally, the fourth recommendation focuses on transaction efficiency. The way procurement can support organizations operating globally is by offering self-service technology so that they can do more with less. With self-service technology, no one in procurement needs to be there to help a user buy. The user goes on the procurement system and creates transactions while their counterparts in other parts of the world may be offline.
The fifth recommendation is related to risk management. When a lot of organizations say "risk management," they are really only tracking news related to their suppliers. But risk management includes things like predictive analytics, predictive risk measures beyond your strategic suppliers, and looking deeper into supply chains and across all your vendors. If you can measure risk for your suppliers, why not make it systematic? We now have the ability to manage a larger volume of suppliers -- to in fact manage all of them. The ones that bubble to the top, the riskiest ones, are the ones you create contingency plans for. That helps organizations really prepare to respond to disruptions in their business.
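The idea of scoring every supplier systematically, rather than just the strategic tier, can be sketched as a weighted composite score over normalized risk factors. The factors, weights, and supplier data below are assumptions for illustration, not a prescribed model.

```python
# Hypothetical composite risk score: weighted sum of normalized factors
# (each 0-1, higher = riskier). Weights are illustrative only.
WEIGHTS = {"financial": 0.4, "geographic": 0.3, "single_source": 0.3}

def risk_score(supplier):
    return sum(WEIGHTS[f] * supplier[f] for f in WEIGHTS)

def riskiest(suppliers, top_n=2):
    """Rank ALL suppliers by composite risk; the top_n bubble up for
    contingency planning."""
    return sorted(suppliers, key=risk_score, reverse=True)[:top_n]

suppliers = [
    {"name": "Acme Metals", "financial": 0.8, "geographic": 0.6, "single_source": 1.0},
    {"name": "Globex Parts", "financial": 0.2, "geographic": 0.3, "single_source": 0.0},
    {"name": "Initech Labs", "financial": 0.5, "geographic": 0.9, "single_source": 0.5},
]
for s in riskiest(suppliers):
    print(s["name"], round(risk_score(s), 2))
```

Because the scoring is just arithmetic over supplier attributes, it scales to the full vendor base rather than only the top-tier suppliers, which is the point being made here.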
The last recommendation is around compliance management, which includes internal and external compliance. So, internal adherence to procurement policies and procedures, and then also external following of governmental regulations. This helps the organization close all the loopholes and ensure that sourcing savings get to the bottom line.
Be a leader, not a laggard
Gardner: When we examine and benchmark companies through this data, we identify leaders, and perhaps laggards -- and there is a delta between them. In trying to encourage laggards to transform -- to be more digital, to take upon themselves these recommendations that you have -- how can we entice them? What do you get when you are a leader? What defines the business value that you can deliver when you are taking advantage of these technologies, following these best practices?
Lee: Leading organizations see higher cost-reduction savings, greater process-efficiency savings, and better collaboration internally and externally. These benefits should speak for themselves and entice both the average organizations and the laggards to strive for improvement and transformation.
From a numbers perspective, top performers achieve 9.7% savings as a percentage of sourced spend. This translates to approximately $20M higher savings per $1B in spend compared to the average organization.
We talked about compliance management earlier. A 5% increase in compliance increases realized savings by $4.4M per $1B in spend. These are real, hard-dollar savings that top performers are able to achieve.
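The quoted figures can be sanity-checked with simple arithmetic. Note that the ~7.7% average savings rate below is inferred from the two stated numbers, not a figure from the survey itself.

```python
# Sanity check of the benchmark figures, per $1B of sourced spend.
spend = 1_000_000_000          # $1B sourced spend
top_rate = 0.097               # top performers: 9.7% savings

top_savings = spend * top_rate           # = $97M
avg_savings = top_savings - 20_000_000   # top performers save ~$20M more
avg_rate = avg_savings / spend           # implied (not stated) average rate

print(f"top performers: ${top_savings / 1e6:.0f}M")
print(f"implied average rate: {avg_rate:.1%}")   # about 7.7%
```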
In addition, top performers are able to attract a talent pool that will help the procurement organization perform even better. If you look at some of the procurement research, industry analysts and leaders are predicting that there may be a talent shortage in procurement. But, as a top performer, if you go out and recruit, it is easier to entice talent to the organization. People want to do cool things and they want to use new technology in their roles.
Gardner: Wrapping up, we are seeing some new and compelling technologies here at Ariba LIVE 2017 -- more use of artificial intelligence (AI), and increased use of predictive tools brought into context so that they can be of value to procurement during the lifecycle of a process.
As we think about the future, and more of these technologies become available, what is it that companies should be doing now to put themselves in the best position to take advantage of all of that?
Lee: It's important to be curious about the technology available in the market, and perhaps to structure the organization so that there is a team within procurement that continuously evaluates the different procurement technologies from different vendors. That team can then make decisions on what best fits the organization.
You need people who can look ahead, gather the requirements, understand the architecture, and evaluate what's out there and what would make sense for the organization in the future. This is a complex role. He or she has to understand the current architecture of the business and the requirements from the stakeholders, and then evaluate what technology is available. They must then determine whether it will assist the organization in the future, and whether adopting these solutions provides a return on investment and ongoing payback.
So I think being curious, understanding the business really well, and then wearing a technology hat to understand what's out there are key. You can then be helpful to the organization and envision how adopting these newer technologies will play out.
You may also be interested in:
- Experts define new ways to manage supply chain risk in a digital economy
- How SAP Ariba became a first-mover as Blockchain comes to B2B procurement
- How AI, IoT and Blockchain will shake up procurement and supply chains
- Why effective IoT adoption is a team sport
- Diversity Spend: When Doing Good Leads to Doing Well
- Seven secrets to highly effective procurement: How business networks fuel innovation and transformation
- Meet George Jetson – your new AI-empowered chief procurement officer
- SAP Ariba's chief strategy officer on the digitization of business and future of technology
- Business in the Cloud: How Efficient Networks Help the Smallest Companies Do Brisk Business with the Largest
- A Hit with Consumers, Digital Payments Now Catching On Across the Business World Too
- How new technology trends disrupt the very nature of business
- Winning the B2B Commerce Game: What Sales Organizations Should Do Differently