
Legacy IT evolves: How cloud choices like Microsoft Azure can conquer the VMware Tax


For thousands of companies, evaluating their cloud choices also shapes how they can conquer the “VMware tax” by moving beyond a traditional server-virtualization legacy.

Consumption model: De-horning a couple of major IT dilemmas


By: Stephen Mease, Supplier Business Executive – HPE Pointnext Services, Tech Data

The expression “horns of a dilemma” has always struck me as a perfect description of the terrible, nasty feeling you have when faced with a seemingly no-win situation. It definitely applies to the way many enterprises now feel when they need to somehow deal with the rapid changes occurring in both technology and the marketplace. Questions that illustrate these dilemmas are how to:

  • Implement IoT without adding staff or skills
  • Rely on proven solutions
  • Keep control
  • Scale and innovate
  • Pay as you go
  • Align investments with returns
  • Minimize security exposure

Here are a couple of pointed (ouch) examples:

 

Too much? Not enough?  

When it comes to acquiring those solutions, the dilemma for servers and storage focuses on capacity. Because realities are changing so quickly, capacity planning becomes a definite challenge. This creates a situation where it’s possible to purchase too little or too much capacity.

Purchase too little and you risk not being able to meet business demands down the road. According to 451 Research, 50% of enterprises have suffered downtime as a result of poor capacity planning. Remember, adding capacity often requires going through a lengthy purchasing process. What happens if demand drops again in response to another change once you have added the new capacity? Ouch. 

The same 451 Research report states that enterprises over-provision compute capacity by an average of 59% and storage capacity by an average of 48%. That’s a clear waste of money that could be better used to meet other business requirements.
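To put those over-provisioning percentages in concrete terms, here is a minimal sketch of the arithmetic: if you buy 59% more capacity than you use, the idle share of the spend is the overage divided by the total. The function name and the framing are illustrative, not part of the 451 Research report.

```python
def wasted_fraction(overprovision_rate: float) -> float:
    """Fraction of spend sitting idle when purchased capacity exceeds
    actual demand by `overprovision_rate` (e.g. 0.59 for 59%)."""
    return overprovision_rate / (1.0 + overprovision_rate)

compute_waste = wasted_fraction(0.59)  # compute over-provisioned by 59%
storage_waste = wasted_fraction(0.48)  # storage over-provisioned by 48%
print(f"Idle compute spend: {compute_waste:.0%}, idle storage spend: {storage_waste:.0%}")
```

In other words, roughly a third of the compute and storage budget is paying for capacity that never gets used.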

 

Spend, spend, spend?

Every one of the rapid changes seems to generate one or more new technology solutions. The great thing is that most solutions go beyond simply helping you deal with change. They actually enhance your ability to succeed and compete. But here’s the dilemma: can you really afford to acquire every new solution that becomes available? On the other hand, can you really afford not to?

If you go the spend route, you end up with the latest and best tools for analytics, cloud, IoT, mobility, and much more.  But you’d end up with a massive hole in your budget, because the changes are never-ending. 

If you resist the temptation to splurge on IT, you gradually (sometimes not so gradually) fall behind the competition and can no longer deliver what customers expect. And nobody can afford to have that happen, as so many major retail stores have recently discovered.  

 

How an IT consumption model de-horns the dilemma  

IT consumption models turn the no-win situations I describe above into clear winners.

In terms of acquisition, these models allow you to acquire new technology with OPEX vs. CAPEX funds. They also eliminate the lengthy process associated with purchasing additional technology by making more capacity instantly available as part of the consumption agreement.

There’s also no fear of choosing too much or too little capacity. These consumption models let you provision for what you need and yet pay only for what you actually use over and above a basic charge.
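The “pay only for what you actually use over and above a basic charge” mechanic above can be sketched as a simple metered-billing calculation. The rates, units, and function name here are illustrative assumptions, not actual HPE GreenLake pricing.

```python
def monthly_invoice(used_tb: float, reserved_tb: float,
                    base_charge: float, rate_per_tb: float) -> float:
    """Fixed base charge covering a reserved buffer, plus a metered fee
    for any usage above that buffer."""
    overage = max(0.0, used_tb - reserved_tb)
    return base_charge + overage * rate_per_tb

# Under the reserved buffer: only the base charge applies.
print(monthly_invoice(used_tb=80, reserved_tb=100, base_charge=5000, rate_per_tb=40))   # 5000.0
# Over the buffer: pay the base charge plus 30 TB of metered overage.
print(monthly_invoice(used_tb=130, reserved_tb=100, base_charge=5000, rate_per_tb=40))  # 6200.0
```

The point of the model is visible in the two calls: a drop in demand never strands purchased capital, and a spike never triggers a lengthy purchasing cycle.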

No waste, no worry, no horns, no dilemmas.  Just a flexible, sensible way to acquire and consume the technology you need to thrive in the face of rapidly-changing realities. In other words, accelerated outcomes on your terms. 

 

Take a swim in the HPE GreenLake waters


Hewlett Packard Enterprise offers three ways to make the IT consumption model yours with its GreenLake approach:

 

  1. HPE GreenLake suite of on-premises, consumption-based solutions for your top workloads—big data, backup, database platform, SAP HANA, and edge computing—delivered on your terms.  
  2. HPE GreenLake Flex Capacity for an enterprise-wide or larger-scale consumption-based solution  
  3. HPE GreenLake Standard Packages - small-scale point solutions for individual technologies like 3PAR storage and converged systems, as well as for specific use cases like VDI and composable infrastructure

Choose the approach or combination of approaches that best meets your operation and its requirements.

 

More information

Complete information on the HPE consumption models is available on the HPE GreenLake website. An excellent video on this site provides a quick overview.

If you are currently a Tech Data HPE Channel Partner, please visit https://www.themaxmind.com/hpepointnext for HPE Pointnext on-demand training videos and resources.     

Learn how to simplify your business at HPE Discover Las Vegas


sansan_strozier  2 weeks ago

Join us at HPE Discover to learn more about solutions built on secure HPE ProLiant Gen10 platforms—and how they can simplify your business affordably.

Heading to HPE Discover? Please join Tim Peters, Vice President and General Manager, SMB Segment and ProLiant Tower Servers; Joaquin Ochoa, Owner, J-Tech; and Ruth Wrestling, Owner/Director, Faith Lutheran Church and School, as they discuss new and improved ways to help small businesses succeed.


Led by Peters and these two business owners, the discussion will focus on new, lower-cost solutions tailored to meet business needs. Whether you are looking for unified threat management (UTM), virtualization, storage and backup, or a multi-function business solution, we’ve put together offers you won’t be able to pass up. Built on the industry’s most secure HPE ProLiant Gen10 platforms, the solutions pull together best-in-breed server, storage, network, management and software products in a way that makes them affordable to even the most demanding business.

Register today for HPE Discover Session B5146

  • Location: Sands conference center
  • Room: Titian 2306
  • Date: 06/19/2018
  • Time: 10:30 AM

Check out the Agenda Builder to make the most of your time at Discover.

More to explore at HPE Discover

In the Technology Showcase:
Futures including new ProLiant Gen10 servers and solutions for SMBs

In the Solutions Showcase:
Unified Threat Management demo and HPE ProLiant ML350, ML110 and MicroServer platforms

Learn more now about entry-level servers and solutions.

And hope to see you soon at HPE Discover in Las Vegas, June 19-21.

What Microsoft means by “Intelligent Communications”


RICHARD SMITH
 30-May-2018


Microsoft launched its vision for Intelligent Communications in September 2017 and celebrated the first anniversary of Microsoft Teams six months later, in March of this year. This bold vision moves beyond traditional unified communications to a world where Microsoft seeks to enable people to complete tasks more efficiently with minimal context switching, participate in more productive meetings that cover the entire meeting lifecycle, and better manage their everyday communications overload.

To facilitate this, Microsoft is bringing the real-time communication capabilities of Skype for Business into Teams and delivering a single hub for teamwork which will have built-in, fully integrated voice and video calling as well as ad-hoc and scheduled online meetings. Microsoft is tightly weaving communications into the applications people use to collaborate every day, alongside AI, Microsoft Graph, LinkedIn and other data and cognitive services, and by doing so will enable this vision of "Intelligent Communications".  

Microsoft Teams remains central to this vision, becoming the primary client for intelligent communications and collaboration delivered as a cloud service in Office 365, replacing Skype for Business Online and the Skype for Business client. In fact, many groups inside Microsoft have already made this switch and use Teams as their primary communications client.

So just how quickly might this happen for the rest of us? This really depends on your business needs and the current and pending feature capabilities in Teams. The Skype for Business to Microsoft Teams Capabilities Roadmap provides the latest updates for customers to review and determine if their business needs will be met with a move to Teams now, or if they should wait. For those customers not yet ready to adopt Teams or run in the cloud only, the new Skype for Business Server 2019 is due to be released towards the end of 2018, enabling on-premises deployments to continue.

For customers that are in the process of moving to Skype for Business Online, the guidance from Microsoft is still to evaluate the roadmap and Teams' current capabilities and if it meets the business need, adjust strategy and move to Teams. If not, continue with Skype for Business Online and consider running Teams independently or in parallel.  

Whatever course of action you take, the underlying need to ensure your ongoing ability to plan, build, deploy, manage and optimize your Unified Communications platforms and architectures remains critical; and that is where Prognosis can help. More than 1,000 organizations in over 60 countries—including some of the world’s largest banks, airlines and telecommunication companies—rely on IR Prognosis to provide business-critical insights and ensure continuity-critical systems deliver high availability and performance for millions of their customers across the globe. For more information check out our Microsoft (and multi-vendor) offering here.

In Celebration of Female Engineers and Innovators



sgraye 
Talent Acquisition and Attraction
 ‎04-04-2018 06:00 AM

NCWIT Group Photo 2015

Technology innovation is the bedrock of Hewlett Packard Enterprise (HPE), and our employees are the engine that fuels it. Over the past three years, we have been on a journey to rapidly transform the company to better align with changing technology trends and evolving customer needs. A critical element of this transformation has been the re-ignition of our innovation engine. Every HPE innovation comes from a team of individuals, each contributing their unique perspective, knowledge and experience to advance the way the world works and lives. The full power of our people is driving HPE’s success. A focus on Inclusion and Diversity helps drive new business, fuel innovation, and attract and retain the best employees.

Our culture supports and inspires women in technical roles through the stages of their careers and lives as we continuously push the boundaries of technology to deliver life-enriching innovations that impact our customers, partners and the world.

NCWIT awards with Pat Russo and Janice Zdankus

HPE has a long history of supporting and partnering with organizations that create, celebrate and support female innovators.  A big part of HPE’s impact in the industry starts with supporting and growing technology and STEM interest in young women from early in their education, up and throughout their employment. One partner we amplify impact with is the National Center for Women & Information Technology (NCWIT). NCWIT is dedicated to increasing the meaningful participation of young women in computing and technology careers.  NCWIT offers a number of programs that complement and supplement HPE’s efforts in this critical area.

HPE has provided leadership and financial support, as an NCWIT investment partner, for the NCWIT Collegiate Award, an honor that annually recognizes undergraduate and graduate women’s technical contributions to projects that demonstrate a high level of innovation and potential impact:

  • Since 2015, 47 college women have been recognized with the Collegiate Award.
  • Each recipient receives a trip to the NCWIT Summit with a private networking reception.
  • Winners receive $10,000 in cash and an engraved award; honorable mentions receive $2,500 in cash and a certificate.
  • Winners and honorable mentions hail from across the country, representing 30 different colleges and universities.
  • HPE employees have participated in the review and selection of award winners.

 The Collegiate Award is a component of the NCWIT Aspirations in Computing (AiC) program, which provides technical girls and women with ongoing engagement, visibility, and encouragement for their computing-related interests and achievements from high school through college and into the workforce, and it is making a difference:

  • The AiC Community, the largest network of its kind, includes more than 10,000 technical women.
  • Recipients of the Award for AiC, another component of the comprehensive AiC program, consistently report greater confidence, awareness of computing fields, motivation to persist, as well as less anxiety and uncertainty about computing skills when asked to describe the impact of the award.
  • Furthermore, 91 percent of past Award for AiC winners report a major or minor in a STEM field while in college — 77 percent in computer science or engineering.
  • NCWIT Collegiate Award recipients often receive national media attention for their mobile applications, devices, visualization tools, and other innovations.

 HPE Vice President of Quality Janice Zdankus serves on the NCWIT Board of Directors and is a visible example of leadership that is making a difference, making participation “real” and inspiring participation and outreach amongst employees. Janice provides input to NCWIT’s strategy and program directions, and brings ideas and best practices back to HPE. Janice will join NCWIT in honoring the 2018 Collegiate Award and Pioneer in Tech Award recipients at the 2018 NCWIT Summit. The NCWIT Summit is the world’s largest annual convening of change leaders focused on significantly improving diversity and inclusion in computing. Educators, entrepreneurs, corporate executives, and social scientists (both men and women) from across industries and disciplines participate in this one-of-a-kind opportunity.

NCWIT Tucson - 2017

During celebrative times like Women’s History Month, elevating the technical women of today who are making history, is just as critical as paying tribute to previous generations of women whose technical contributions have proved invaluable to society and have inspired young women. This support and recognition is important to lift meaningful participation of all women — at the intersections of race, ethnicity, class, age, sexual orientation, and disability status — in the influential field of computing, particularly in terms of innovation and development.

HPE 2018 Women's Network North Texas International Women's Day Celebration

 

About the Author

Jill Sweeney, Senior Manager

Jill Sweeney leads technical Knowledge management for volume servers, composable systems, high performance computing and Artificial Intelligence (AI) at Hewlett Packard Enterprise.  Jill and her team are transforming the experiences customers and partners have with HPE's products, solutions and support information to foster positive customer business outcomes.

No stranger to change management and transformation, Jill has held technology focused and marketing roles at HPE including launching both the Internet of Things (IoT) and mobility go-to-market programs as well as managing global brand programs for Hewlett Packard's Starbucks Alliance and employee communication engagement. 

Prior to the HP/Compaq merger, Jill drove alliances for a Compaq-owned start-up, B2E Solutions. Jill is a champion for Inclusion and Diversity as well as STEM careers. She actively supports HPE Code Wars and university recruiting.

This year, Jill has taken on a new challenge: addressing the societal problem of human trafficking. She is working with a local organization to give female victims of human trafficking career coaching and referrals to coding camps to break the economic cycle, supporting dignity and sharing hope.

An inspirational and motivational speaker, Jill has recently given industry keynotes on topics including IoT trends, diversity, employee engagement and work-life transformation. Jill has served on the AnitaB.org Partner Forum to select technical topics and source industry-leading speakers for the Grace Hopper Celebration panel submissions.

Follow Jill on Twitter @JillW_Sweeney and on LinkedIn www.linkedin.com/in/jillsweeney.

Featured articles:

How new tools help any business build ethical and sustainable supply chains

The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.

We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks -- from use of illegal and unethical labor practices to hidden environmental malpractices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation, are our panelists, Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft, and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?

Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.


First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good. 

Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society -- and not just generating dividends and capital growth for shareholders. 

Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent -- particularly Millennials and Generation Z -- and they are more likely to get better investment rates as well. 

Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?

Better business decisions with AI

Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.

We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings. 

These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.

What has been missing was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business.

Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba. 

Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.

McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand -- not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements. 


But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.

We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.

At Verisk Maplecroft, we came to business about 12 years ago, which was really interesting because it came out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand and to make it simple, easy, and intuitive. 

They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we’re up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks -- like politics, economics, environmental, and social risks. But under each of those risk’s themes are specific issues that we look at. So, if we’re talking about social risk, we’re looking at diversity and labor, and then under each of those risk issues we go a step further, and it’s the indicators -- it’s all that data matrix that comes together that tell the actionable story. 
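The pillar → issue → indicator hierarchy McVeigh describes can be pictured as a nested roll-up. The indicator names, values, and simple averaging below are hypothetical, chosen only to illustrate the structure; they are not Verisk Maplecroft's actual indices or methodology.

```python
# Toy risk-index hierarchy: pillars contain issues, issues contain
# indicators (scored here 0 = low risk .. 10 = severe risk).
indices = {
    "social": {
        "labor": {"child_labor": 7.0, "forced_labor": 6.0},
        "diversity": {"gender_gap": 4.0},
    },
    "environmental": {
        "climate": {"flood_exposure": 8.0, "drought_exposure": 5.0},
    },
}

def issue_score(indicators: dict) -> float:
    """Roll indicator values up into one issue-level score."""
    return sum(indicators.values()) / len(indicators)

def pillar_score(issues: dict) -> float:
    """Roll issue-level scores up into one pillar-level score."""
    return sum(issue_score(ind) for ind in issues.values()) / len(issues)

for pillar, issues in indices.items():
    print(pillar, round(pillar_score(issues), 2))
```

The actionable story McVeigh mentions lives at the bottom of this tree: the indicators explain why a pillar scores the way it does, while the top-level number lets a buyer quickly segment a large supplier base.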

Some companies still just want to check a [compliance] box. Other companies want to dig deeper -- but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips. 

Global compliance

Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.

Rakowski: EcoVadis conducts business sustainability ratings, and the way that we’re using the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”


They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business. 

What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out a questionnaire to the supplier, what we call a right-sized questionnaire, the supplier responds to material questions based on what kind of goods or services they provide, what geography they are in, and what size of business they are in. 

Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score. 

And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices. 
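A 0 to 100 rating built from the four themes Rakowski names could be sketched as a weighted average. The weights and inputs below are invented for illustration; EcoVadis's actual scoring methodology is its own and is not reproduced here.

```python
# Hypothetical theme weights for a 0-100 supplier sustainability rating.
THEMES = {
    "environment": 0.3,
    "labor_human_rights": 0.3,
    "ethics": 0.2,
    "sustainable_procurement": 0.2,
}

def supplier_score(theme_scores: dict) -> float:
    """Weighted average of per-theme scores, each already on a 0-100 scale."""
    return sum(THEMES[theme] * score for theme, score in theme_scores.items())

score = supplier_score({
    "environment": 70,
    "labor_human_rights": 60,
    "ethics": 80,
    "sustainable_procurement": 50,
})
print(round(score, 1))  # 65.0
```

A "right-sized questionnaire" would, in this sketch, simply change which themes and weights apply to a given supplier's size, geography, and category.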

Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own -- and SAP Ariba is now aggregating the aggregators.

How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from? 

Timely insights, best business decisions

Harris: Other than some of the technological challenges of aggregating this data across different providers is the need for linking it to the aspects of the procurement process in support of what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions. 

The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there’s risk, there is opportunity.

So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.

We're looking at not just trying to support risk through that source-to-settlement process -- trying to surface those risk insights -- but also understanding that where there is risk there is opportunity.

Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?

Harris: If we look at some risk aspects such as natural disasters, you can’t react timelier than to a natural disaster. So, the way we can alert from our data sources on earthquakes, for example, we’re able to very quickly ascertain whom the suppliers are, where their distribution centers are, and where that supplier’s distribution centers and factories are.

When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt. 

Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them? 

AI risk assessment reaps rewards

Harris: If you look at AI, this is a great technology, and what we trying to do is really simplify that process for our customers to figure out how they can take action on the information we’re providing. So rather them having to be experts in risk analysis and doing all this analysis themselves, AI allows us to surface those risks through the technology -- through our procurement suite, for example -- to impact the decisions they’re making. 

For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time. 

A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas. 
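The award-time gate Harris describes amounts to a threshold check surfaced at the moment of decision. This is a minimal sketch of that pattern only; the thresholds, outcome labels, and function are hypothetical, not SAP Ariba's actual logic.

```python
def award_decision(risk_score: float,
                   warn_at: float = 60.0,
                   block_at: float = 80.0) -> str:
    """Gate an RFP award on the supplier's surfaced risk score (0-100).

    Below warn_at: proceed. Between warn_at and block_at: postpone for
    further investigation. At or above block_at: do not award.
    """
    if risk_score >= block_at:
        return "do-not-award"
    if risk_score >= warn_at:
        return "postpone-for-investigation"
    return "award"

print(award_decision(45))  # award
print(award_decision(72))  # postpone-for-investigation
print(award_decision(85))  # do-not-award
```

In practice the score feeding this check would itself vary by category, geography, and distribution region, as the paragraph above notes.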

Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?

Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.

We collect a great amount of documentation from the suppliers we’re evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information that our analysts are looking for, compressing the evaluation time for each supplier from about six or seven hours down to three or four. So that’s essentially allowing us to double our workforce of analysts in a heartbeat.

AI is helping us scan through the documentation more quickly. That way we can find the relevant information that our analysts are looking for, allowing us to double our workforce of analysts.

The other thing it’s doing is helping scan through material news feeds, so we’re collecting more than 2,500 news sources from around all kinds of reports, from China Labor Watch or OSHA. These technologies help us scan through those reports from material information, and then puts that in front of our analysts. It helps them then to surface that real-time news that we’re for sure at that point is material. 

And that way we’re combining AI with real human analysis and validation to make sure that what we’re serving is accurate and relevant. 

Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.

We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article. 

By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”

And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly. 
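A minimal sketch of the kind of pipeline Harris describes -- reading an article, matching it against known risk types, and producing a score -- might look like the following. The risk taxonomy, keyword lists, and function names here are purely illustrative assumptions; SAP Ariba's actual models and 200-plus risk types are not public.

```python
# Hypothetical sketch: classify a news article into risk types and
# produce a crude score. Keywords and categories are invented for
# illustration -- not SAP Ariba's actual taxonomy or ML model.

RISK_KEYWORDS = {
    "financial": ["contracts reduction", "downsize", "bankruptcy", "lost business"],
    "environmental": ["toxic", "spill", "emissions violation"],
    "labor": ["overworked", "child labor", "unpaid wages"],
}

def classify_article(text: str) -> dict:
    """Return a 0..1 risk score per category based on keyword hits."""
    text = text.lower()
    scores = {}
    for risk_type, keywords in RISK_KEYWORDS.items():
        hits = sum(1 for kw in keywords if kw in text)
        if hits:
            scores[risk_type] = hits / len(keywords)
    return scores

article = "Acme Corp announced a contracts reduction and may downsize staff."
print(classify_article(article))  # {'financial': 0.5}
```

A production system would replace the keyword matching with trained entity-resolution and text-classification models, but the flow -- ingest, match to a tracked supplier, score, surface to a category manager -- is the same shape.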

Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem? 

The AI-aggregation advantage

McVeigh: As an aggregator of data, it’s basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.

We look at many indices. We actually just revamped our social indices a couple of years ago.

Before that, you had a human sitting there -- maybe having a bad day -- who just sort of checked the box. But now we have the capability to validate that data against true sources. 

Just as Emily mentioned, we were able to significantly reduce the number of human-rights analysts it takes to create an index, and allow them to go out and begin to work on additional types of projects for our customers. This helped our customers utilize the data that’s being automated and generated for them. 

We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it. 

Gardner: Let’s get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this, how people intercept with it. It sounds great in theory, but how does this actually work in practice?

Partnership prowess

Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.

We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. We have a number of common customers working with us as what we call design partners. There’s no better customer, ultimately, than a customer already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame. 

And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly. 

What it means is that users can get that aggregated view, that high-level view, across all of these different risk types and metrics in one place. However, if they ultimately want to get to the nth degree of detail, they have the ability to click through into the solutions from our partners as well, to drill right down to that level of detail. The aim here is to give them that high-level view to help with their overall assessments of these suppliers. 

Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?

Customers that have an active subscription to our combined solutions are then able to benefit from the integration and see all that data within SAP Ariba's supplier risk solutions directly.

Harris: Yes, and that’s a great question. We already address that in our solutions today. We cover more than 200 risk types, and we categorize those into four primary risk categories. The way the risk exposure score works is that the customer gets to decide how to weight each of the attributes that feed into the calculation. 

If I have more of a bias toward financial risk aspects, or more of a bias toward ESG metrics, for example, then I can weight that part of the score, the algorithm, appropriately.
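In the spirit of what Harris describes, a configurable risk-exposure score can be sketched as a weighted average over per-category scores, with the buyer choosing the weights. The category names, the 0-100 scale, and the averaging formula are assumptions for illustration, not SAP Ariba's published algorithm.

```python
# Hypothetical sketch of a customer-weighted risk-exposure score.
# Categories, scale (0-100), and weighting scheme are illustrative.

def risk_exposure(category_scores: dict, weights: dict) -> float:
    """Weighted average of per-category risk scores (0-100 each)."""
    total_weight = sum(weights.values())
    return sum(category_scores[c] * weights[c] for c in weights) / total_weight

scores = {"financial": 80, "esg": 40, "operational": 55, "legal": 30}

# A buyer biased toward financial risk weights that category 3x...
financial_bias = risk_exposure(scores, {"financial": 3, "esg": 1, "operational": 1, "legal": 1})
# ...versus a buyer biased toward ESG metrics.
esg_bias = risk_exposure(scores, {"financial": 1, "esg": 3, "operational": 1, "legal": 1})
print(financial_bias, esg_bias)
```

With the same underlying scores, the financial-biased weighting yields a higher exposure number than the ESG-biased one, which is exactly the dial-tuning behavior Gardner asks about.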

Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well -- or not so well.

Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency -- how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?

Nice-to-have ROI

Rakowski: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.

How do you put a tangible dollar value on something that didn’t occur? What we can look at is taking data acquired over the past few years and understanding that, as we see risk reduction over time, we can begin to source from more suppliers, add diversity to the supply chain, or even minimize the supply chain, depending on how we want to move forward in the risk landscape and supplier diversification program. It gives companies the power to make those decisions faster and make them more actionable. 

And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”

And other companies are beginning to see that it’s not being pushed down on them anymore from these large retailers and large organizations. It’s a choice they make to do better business. They are also realizing that there’s a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business processes. 

We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it -- but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.

Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen? 

Pay attention, be prepared

Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that. 

From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great. 

They can use their company’s resources to bet on supply-chain actors that are doing the right thing: paying living wages, not overworking their employees, not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are a must, regardless of regulations.

Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.

And so, it’s really those individuals that are willing to stand up, take a stand and think about how they are going to put in place a program that will really drive this culture into the business, and educate the business. Even if you’re starting from a very little group that’s dedicated to it, you can find a way to make it grow within a culture. I think it’s critical.

Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?

Synergistic risk management

Harris: Organizationally, there tend to be a couple of different teams inside a business that manage risk. On the one hand, there is the governance, risk, and compliance team; on the other, the corporate social responsibility team. 

I think first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across those teams. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, but also ensure that the business is doing the right thing in the first place, which can help safeguard the business from unforeseen risks.

I think getting the organizational model right, and thinking about how best to map out their supply chains, are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who the players or actors in that supply chain are. It’s pretty easy to determine who the tier-one suppliers are, but who are the suppliers to the suppliers -- and who are the suppliers to those suppliers?

We’ve yet to build a technology that can figure that out easily. We’re working on it; stay posted. But trying to compile that information upfront is great, because once that mapping is done, our software and our partner software from EcoVadis and Verisk Maplecroft is here to surface those kinds of risks inside and across that entire supply chain.
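The multi-tier mapping problem Harris describes is, at heart, a graph traversal: given known direct supplier links, walk outward to surface tier-2 and tier-3 actors. Here is a small sketch of that idea; the company names and relationships are invented, and real supply-chain discovery is far harder because the edges themselves are unknown.

```python
# Sketch of tiered supplier discovery as breadth-first search over a
# known supplier graph. All companies and links here are fictional.
from collections import deque

SUPPLIES_TO = {
    "acme": ["bolt_co", "wire_inc"],
    "bolt_co": ["ore_ltd"],
    "wire_inc": ["ore_ltd", "rubber_sa"],
}

def suppliers_by_tier(company: str, max_tier: int = 3) -> dict:
    """Group a company's suppliers by tier (1 = direct, 2 = their suppliers...)."""
    tiers, seen = {}, {company}
    frontier = deque([(company, 0)])
    while frontier:
        node, tier = frontier.popleft()
        if tier == max_tier:
            continue
        for supplier in SUPPLIES_TO.get(node, []):
            if supplier not in seen:  # count each actor at its nearest tier
                seen.add(supplier)
                tiers.setdefault(tier + 1, []).append(supplier)
                frontier.append((supplier, tier + 1))
    return tiers

print(suppliers_by_tier("acme"))
# {1: ['bolt_co', 'wire_inc'], 2: ['ore_ltd', 'rubber_sa']}
```

Note that `ore_ltd` appears only once even though two tier-1 suppliers use it -- deduplicating shared sub-suppliers is one reason aggregated risk views are valuable.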

Listen to the podcastFind it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

Panel explores new ways to solve the complexity of hybrid cloud monitoring

The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.

We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.

Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.

We will also hear about new solutions and explore examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Listen to the podcastFind it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, and he is based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, and he is based in Grenoble, France; and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing -- and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.

Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.

They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.

Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.

The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure.

In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing -- but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs -- never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations decisions and adjustments must be handled in a much different way.

Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.

Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.

Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured -- and at compelling prices.

Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.

Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.

IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.

We are on the cusp of being able to take advantage of ML to tackle the complexity of multi-cloud deployments and keep business services safe.  

Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information. 

Ian, please tell us more about the survey findings.

IT leaders respond to their needs 

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.

Bromehead

More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.

The lion’s share, or nearly two-thirds, of this population that we surveyed are using between two to five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].

What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments -- whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.

What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.

The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.

The business impacts range from resources tied up collaborating with the cloud vendor to solve these issues, to the extra time required to resolve issues impacting service level agreements (SLAs) and contractual agreements, to prolonged downtime.

What we regularly see is that the adoption of cloud often translates into a loss of transparency into what’s deployed, the health of what’s deployed, and how that can impact the business. This insight strongly shapes our investment and some of the solutions we will talk about. The primary concern is visibility into what’s being deployed -- and what depends on internal, on-premises systems as well as private and public cloud instances.

People need to see what is impacting the delivery of services as a provider, and whether that’s due to issues with local or remote resources, or the connectivity between them. It’s compounded by the fact that people are interconnecting services from multiple cloud providers, as we just saw in the survey. So the weak part could be anywhere -- any one of those links. The ability to know where those issues are is not coming fast enough for many people, with some 96 percent indicating that issues are being resolved too slowly.

How to gain better visibility?

What are the key challenges that need to be addressed when monitoring hybrid IT environments? People have challenges with discovering, understanding, and visualizing what has actually been deployed, and how it impacts the end-to-end business.

They have limited access to the cloud infrastructure, along with inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what’s happening.

It shows some of the real challenges people are facing. As the world shifts to being more dependent on the services it consumes, traditional methods are not properly adapted to the new environment. Newer solutions are needed -- new ways of gaining visibility and of measuring availability and performance.

I think what’s interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to properly understand how service delivery might be impacting their own businesses. You might say that IT is actually flying blind in the clouds, as it were.

The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts. 

So one of our next questions was: across the different monitoring types, what’s needed for the hybrid IT environment? What should people be focusing on? Security and infrastructure monitoring, end-user experience monitoring, service delivery monitoring, and cloud costs all ranked highly among what people believe they need to monitor. Whether you are a provider or a consumer -- and most people end up being both -- monitoring is really key.

People say they really need to span infrastructure monitoring, metrics monitoring, end-user experience, and security and compliance. But even that’s not enough, because to properly govern service delivery you also have to keep an eye on the costs -- the cost of what’s being deployed -- and how you can optimize resources against those costs. You need that analysis whether you are the consumer or the provider.

The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data to find root causes, even from a predictive perspective. Clearly, people expect scalability, and they expect to be able to use a big data platform.

Consumers of cloud services should be measuring what they are receiving, and be capable of seeing what’s impacting service delivery. No one is really so naive as to say that the infrastructure is somebody else’s problem. When it’s part of the service -- impacting the service you are paying for and delivering to your business users -- then you had better have the means to see where the weak links are. That should be the minimum to seek, but you still need the means to prove to your providers when they’re underperforming and to renegotiate what you pay for.

Ultimately, when you are sticking such composite services together, IT needs to become more of a service broker. We should be able to govern the aspects of detecting when the service is degrading. 

So when the service degrades, workers’ productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.

So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics 

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.

Gary Brandt: Thanks, Dana. As the survey results outlined and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can’t look at dashboards or use traditional tools and make sense of the data efficiently, nor take the necessary actions in a timely manner.

Brandt

So how do we deal with all of this? It’s where advanced analytics via ML really brings value. What’s needed is a set of automated capabilities such as those described in Gartner’s definition of AIOps. These include traditional and streaming data management, and ingestion of logs, wire metrics, and documents from many different types of sources in these complex hybrid environments.

Dealing with all of this -- when you are not quite sure where to look and have all this information coming in -- requires advanced analytics and clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is where Gartner is trying to guide the market and show where the industry is moving. The key capabilities they speak about are analytics that allow for prediction, that find anomalies in vast amounts of data, and that then pinpoint the root cause -- or at least eliminate the noise so you can focus on those areas.

We are making this Gartner report available for a limited time. What we have also found is that people don’t have the time, or often the skill set, to deal with these activities; they need to focus on the business users and the issues that come up in these hybrid environments. The AIOps capabilities that Gartner speaks about are great.

But without the automation to drive the activities or responses that need to occur, there is a missing piece. Looking at our survey results, it was clear that upward of 90 percent of respondents consider automation highly critical. You need to see which event or metric trend impacts a business service, and whether that service pertains to a local, on-premises solution or a remote solution in a cloud somewhere.

Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated -- and, more importantly, kept up to date. In these complex environments, things are changing rapidly.

Sense and significance of all that data? 

Micro Focus’ approach uses analytics to make sense of this vast amount of data coming in from hybrid environments and to drive automation. The automation of discovery, monitoring, and service analytics is really critical -- and must be applied across hybrid IT, against your resources, mapping them to the services you define.

That is the vast amount of data we just described. It comes in the form of logs, events, and metrics generated from many different sources in a hybrid environment, across cloud and on-premises. You have to use analytics, as Gartner describes, to make sense of it, and we do that in a variety of ways, using ML to learn the behavior of your environment in this hybrid world.

And we need to be able to surface the most significant data -- the significant information in your messages -- to help find the needle in the haystack. When you are trying to solve problems, we have analytics capabilities that provide predictive learning to operators, giving them the chance to anticipate and remediate issues before they disrupt the services in a company’s environment.

When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt. 

And then we take this further, because we have the analytics capability described by Gartner and others. We couple that with the ability to execute different types of automation, letting the operations team spend more time on what’s really impacting the business and get to issues quicker, rather than searching and sorting through that vast amount of data.
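The "learn normal behavior, then flag deviations" idea behind this kind of operations analytics can be illustrated with a simple statistical baseline. This is a deliberately minimal sketch -- a trailing mean/standard-deviation check -- and not Micro Focus's actual patented algorithms, which are far more sophisticated.

```python
# Illustrative anomaly detection on a metric stream: flag points that
# deviate more than `threshold` standard deviations from the trailing
# window. Not the actual OpsBridge/AIOps algorithm -- a teaching sketch.
from statistics import mean, stdev

def find_anomalies(series, window=5, threshold=3.0):
    """Return indices of points far outside their trailing baseline."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

cpu_percent = [41, 43, 40, 42, 44, 42, 41, 97, 43, 42]  # spike at index 7
print(find_anomalies(cpu_percent))  # [7]
```

Real AIOps systems add seasonality, multi-metric correlation, and topology context on top of this basic "learned baseline" notion, which is what lets them point at a probable root cause rather than just a noisy metric.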

And we built this on different platforms. One of the key things that’s critical in a hybrid environment is having a common, efficient way to collect and store information, and then using that data to provide access to different functionality in your system. We do that in the form of microservices in this complex environment.

We like to refer to this as autonomous operations, and it’s part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail  

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.

We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.

Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.

Burose

All core use cases for Operations Bridge tie it to the underpinning of the Vertica big data analytics platform. We’re consolidating all the different types of data that we are getting; whether business transactions, IT infrastructure, application infrastructure, or business services data -- all of that is actually moved into a single data repository and then reduced in order to basically understand what the original root cause is.

And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.

This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.

As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We provide the ability to understand that connectivity through a dynamically updated model, and to show how these services interconnect -- independent of the technology, whether deployed in a public cloud, a private cloud, or even classical, non-cloud infrastructure. Customers can then understand how everything is connected, and use the toolset -- a modern HTML5-based interface -- to navigate through it all and look at all the data in one place.

They are able to consolidate more than 250 different technologies and information into a single place: their log files, the events, metrics, topology -- everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.

Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether private cloud, your own stack implementations, a hyperconverged (HCI) stack, like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.

We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.

Once we integrate those toolsets -- native Amazon tools or Docker tools, for example -- we integrate them tightly, so you can include these tools and automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community -- from the customers. That feedback helped earn us a Readers’ Choice Award from Cloud Computing Insider in 2017, ahead of the competition.

This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on that insight.

Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks -- or potentially kick off future provisioning tasks to solve whatever problems you can imagine. You can use this not only to identify the root cause, but also to automatically kick off a workflow that addresses the specific problem.
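The event-to-workflow pattern described here can be sketched in a few lines. This is an illustrative model only -- the event fields, workflow names, and `handle_event` function are invented for the example and are not the OpsBridge API:

```python
# Hypothetical sketch: a root-cause event from the monitoring layer is
# mapped to an automated remediation workflow; events with no matching
# workflow are flagged for manual follow-up instead.

WORKFLOWS = {
    "disk_full": "cleanup_temp_files",
    "service_down": "restart_service",
}

def handle_event(event):
    """Return the remediation action for a root-cause event."""
    workflow = WORKFLOWS.get(event["cause"])
    if workflow is None:
        # No automated workflow applies; hand off to an operator.
        return {"action": "manual", "event": event["id"]}
    return {"action": "run_workflow", "workflow": workflow, "event": event["id"]}

result = handle_event({"id": "ev-42", "cause": "disk_full"})
# -> {'action': 'run_workflow', 'workflow': 'cleanup_temp_files', 'event': 'ev-42'}
```

The key design point is the fallback branch: automation handles the known cases, while unknown root causes still surface to a person, mirroring the manual path described next.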

If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.

Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.

Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data -- not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?

And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone -- wherever you’d like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, producing graphical, very appealing dashboards for your different stakeholders.

Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.

David Herrera: Banco Sabadell is the fourth largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn’t have any kind of visibility into what was happening in the cloud.

Herrera

We are working with private and public clouds, and it’s quite difficult to correlate the information across events and incidents. We need to aggregate this information in just one dashboard. And for that, OpsBridge is a perfect solution for us.

We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.

The main benefit is that we have a detailed view of what is happening in the cloud. In the dashboard we are able to show availability and the number of resources we are using -- almost in real time. We are also able to show the cost of every resource in real time, and we can even project the cost of those items.

The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.

[And that’s for] every single item that we have in the cloud now, even across the private and public cloud. The bank has invested a lot of money in this solution and we need to show them that it’s really a good choice in economical terms to migrate several systems to the cloud, and this tool will help us with this.
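The cost projection mentioned above could be approximated with a simple run-rate model. This is a hedged sketch under a naive linear assumption -- the function name and the linear model are invented for illustration, and real cloud cost analytics are considerably more sophisticated:

```python
def project_monthly_cost(spend_to_date, day_of_month, days_in_month):
    """Naively project month-end cost from spend so far, assuming
    spending continues at the same average daily rate."""
    if day_of_month <= 0:
        raise ValueError("day_of_month must be positive")
    daily_rate = spend_to_date / day_of_month
    return daily_rate * days_in_month

# 10 days into a 30-day month with $1,500 spent -> $4,500 projected
projection = project_monthly_cost(1500.0, 10, 30)
```

Even a crude projection like this is enough to flag a resource that is trending over budget early in the month, which is the kind of signal the business dashboard described here surfaces.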

Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing, because they will be able to see all the information and also request information that we haven’t yet gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.

We were able to build a new monitoring system from scratch that doesn’t exist on the market. Now we are able to aggregate a lot of detailed information from different clouds.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Micro Focus.

You may also be interested in:

How HudsonAlpha transforms hybrid cloud complexity into an IT force multiplier

The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments.

We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?

Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.

Mullican

We support hundreds of research programs with data and trend analysis. Our infrastructure requires quickly iterating to identify the approaches that are both cost-effective and the best fit for the needs of our users.

Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?

Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT is a great way to develop new approaches and attain breakthroughs.

Gardner: You want to give people enough leash where they can go and roam and experiment, but perhaps not so much that you don’t know where they are, what they are doing.

Software-defined everything 

Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.

Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize -- yet allow for experimentation, too?

Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.


Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.

Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.

Gardner: What about the IT requirements? How do you fulfill that mandate using technology?

Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers. 

Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?

Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.

Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?

Agnostic arrangements of clouds

Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.

There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind -- so you don’t lock into any one solution necessarily -- then you can optimize and put everything together.
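The "write the code with that in mind" approach Mullican describes can be sketched as a thin storage interface with interchangeable backends. The class and method names below are invented for illustration -- this is a minimal sketch of the lock-in-avoidance pattern, not any actual HudsonAlpha code:

```python
# Two backends expose the same put/get interface, so pipeline code
# never names a specific provider and backends can be swapped freely.

class LocalStorage:
    """Stand-in for on-premises object storage."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class CloudStorage:
    """Stand-in for an S3/GCS/Azure Blob adapter with the same interface."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

def archive(results, backend):
    # The caller depends only on the interface, not the provider.
    for key, data in results.items():
        backend.put(key, data)

store = LocalStorage()
archive({"run-1": b"genome-data"}, store)
```

Because `archive` depends only on the shared interface, the same pipeline can target the local datacenter or any cloud by passing a different backend, which is the essence of keeping the arrangement agnostic.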

Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.

Cloud cost control, granted

Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud -- from storage to compute -- and that cost is covered by the grant.

So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.


Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?

Variety is the spice of IT

Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.

The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access -- through a common API -- that infrastructure. So it’s very exciting.

Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?

Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully -- and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.

Gardner: What might this capability bring to you and your organization?

Cross-training the clouds

Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds -- AWS, Google, and Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.

For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.

But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That is not a level of expertise in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline -- and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.

Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.


Mullican: Right, and that will be an ongoing effort to always optimize cost -- but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.

Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that you have the flexibility to choose an algorithmic approach to business, if you will.

Mullican: Right. The research that we do at HudsonAlpha saves lives and the utmost importance is to be able to conduct that research at the fastest speed.

Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.

The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct the research at the fastest speed.

As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a "whole greater than the sum of the parts" effect?

Mullican: HPE Synergy and composable infrastructure are something very near and dear to me. I have a lot of hours invested in HPE Synergy Image Streamer and in customizing open-source applications on Image Streamer -- open-source operating systems and applications.

The ability to utilize that in the mix that I have architected natively with OneSphere -- in addition to the public clouds -- is very powerful, and I am excited to see where that goes.

Gardner: Any words of wisdom for others who may not yet have gone down this road? What do you advise others to consider as they are seeking to better compose, automate, and optimize their infrastructure?

Get adept at DevOps

Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.

As far as putting an emphasis on automation -- and being able to provision infrastructure in the datacenter and the cloud through automated APIs -- a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. Once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.

It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources -- that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.

That’s the partnership that companies really need to foster -- and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.

On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.

Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want, to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure, they are just choosing the proper workflow and pipeline for future projects.

We want to provide a simple interface to the researchers. They don't have to think about all the work that went into the infrastructure.

Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?

HCI advances across the globe

Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinct department, but in some companies that’s not the case.

And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.

Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.

Is there something about having that visibility into hybrid IT that extends into hybrid data environments?

Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.

So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for example, in the cloud.
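The data-locality routing described here -- keeping restricted data on-premises while letting shareable data use cloud object storage -- can be sketched as a simple policy function. The tags and endpoint URLs below are invented for illustration; this is a hedged sketch of the idea, not the OneSphere implementation:

```python
# Hypothetical policy: datasets tagged as restricted must stay in the
# local datacenter; everything else may use cloud object storage for
# worldwide collaboration. Endpoints are placeholders.

ENDPOINTS = {
    "on_prem": "https://objects.datacenter.local",
    "cloud": "https://s3.amazonaws.com/research-bucket",
}

def pick_endpoint(dataset):
    """Choose a storage endpoint based on the dataset's locality tag."""
    if dataset.get("restricted", False):
        return ENDPOINTS["on_prem"]
    return ENDPOINTS["cloud"]

endpoint = pick_endpoint({"name": "cohort-7", "restricted": True})
# -> the on-premises endpoint
```

Encoding the rule once, in a policy layer, means researchers never decide data placement case by case -- the same concern raised for governance and compliance teams below.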


Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.

Hybrid hard work pays off

Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.

HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

South African insurer King Price gives developers the royal treatment as HCI meets big data

The next BriefingsDirect developer productivity insights interview explores how a South African insurance innovator has built a modern hyperconverged infrastructure (HCI) IT environment that replicates databases so fast that developers can test and re-test to their hearts’ content.

We’ll now learn how King Price in Pretoria also gained data efficiencies and heightened disaster recovery benefits from their expanding HCI-enabled architecture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us explore the myriad benefits of a data transfer intensive environment is Jacobus Steyn, Operations Manager at King Price in Pretoria, South Africa. The discussion is moderated by  Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What have been the top trends driving your interest in modernizing your data replication capabilities?

Steyn: One of the challenges we had was that the business was really flying blind. We had to create a platform and the ability to get data out of the production environment as quickly as possible, to allow the business to make informed decisions -- literally in almost real time.

Gardner: What were some of the impediments to moving data and creating these new environments for your developers and your operators?


Steyn: We literally had to copy databases across the network and onto new environments, and that was very time consuming. It literally took us two to three days to get a new environment up and running for the developers. You would think that this would be easy -- like replication. It proved to be quite a challenge for us because there are vast amounts of data. But the whole HCI approach just eliminated all of those challenges.

Gardner: One of the benefits of going at the infrastructure level for such a solution is that you not only solve one problem -- you probably solve multiple ones; things like replication and deduplication become integrated into the environment. What were some of the extended benefits you got when you went to a hyperconverged environment?

Time, Storage Savings 

Steyn: Deduplication was definitely one of our bigger gains. We have had six to eight development teams, and I literally had an identical copy of our production environment for each of them that they used for testing, user acceptance testing (UAT), and things like that.

Steyn

At any point in time, we had at least 10 copies of our production environment all over the place. And if you don’t dedupe at that level, you need vast amounts of storage. So that really was a concern for us in terms of storage.

Gardner: Of course, business agility often hinges on your developers’ productivity. When you can tell your developers, “Go ahead, spin up; do what you want,” that can be a great productivity benefit.

Steyn: We literally had daily fights between the IT operations and infrastructure guys and the developers, because the developers needed resources and we just couldn’t provide them. And it was not because we didn’t have the resources at hand; it was the time it took to spin them up, to get the guys to configure their environments, and things like that.

It was literally a three- to four-day exercise to get an environment up and running. For those guys who are trying to push the agile development methodology, in a two-week sprint, you can’t afford to lose two or three days.

Gardner: You don’t want to be in a scrum where they are saying, “You have to wait three or four days.” It doesn’t work.

Steyn: No, it doesn’t, definitely not.

Gardner: Tell us about King Price. What is your organization like for those who are not familiar with it?

As your vehicle depreciates, so does your monthly insurance premium. That has been our biggest selling point.  

Steyn: King Price initially started off as a short-term insurance company about five years ago in Pretoria. We have a unique, one-of-a-kind business model. The short of it is that as your vehicle’s value depreciates, so does your monthly insurance premium. That has been our biggest selling point.

We see ourselves as disruptive. But there are also a lot of other things disrupting the short-term insurance industry in South Africa -- things like Uber and self-driving cars. These are definitely a threat in the long term for us.

It’s also a very competitive industry in South Africa. So we have been rapidly launching new businesses. We launched commercial insurance recently. We launched cyber insurance. So we are really adopting new business ventures.


Gardner: And, of course, in any competitive business environment, your margins are thin; you have to do things efficiently. Were there any other economic benefits to adopting a hyperconverged environment, other than developer productivity?

Steyn: On the data center itself, the amount of floor space that you need, the footprint, is much less with hyperconverged. It eliminates a lot of requirements in terms of networking, switching, and storage. The ease of deployment in and of itself makes it a lot simpler.

On the business side, we gained the ability to have more data at-hand for the guys in the analytics environment and the ratings environment. They can make much more informed decisions, literally on the fly, if they need to gear-up for a call center, or to take on a new marketing strategy, or something like that.

Gardner: It’s not difficult to rationalize the investment to go to hyperconverged.

Worth the HCI Investment

Steyn: No, it was actually quite easy. I can’t imagine life or IT without the investment that we’ve made. I can’t see how we could have moved forward without it.

Gardner: Give our audience a sense of the scale of your development organization. How many developers do you have? How many teams? What numbers of builds do you have going on at any given time?

Steyn: It’s about 50 developers, or six to eight teams, depending on the scale of the projects they are working on. Each development team is focused on a specific unit within the business. They do two-week sprints, and some of the releases are quite big.

It means getting the product out to the market as quickly as possible, to bring new functionality to the business. We can’t afford to have a piece of product stuck in a development hold for six to eight weeks because, by that time, you are too late.

Gardner: Let’s drill down into the actual hyperconverged infrastructure you have in place. What did you look at? How did you make a decision? What did you end up doing? 

Steyn: We had initially invested in Hewlett Packard Enterprise (HPE) SimpliVity 3400 cubes for our development space, and we thought that would pretty much meet our needs. Prior to that, we had invested in traditional blades and storage infrastructure. We were thinking that we would stay with that for the production environment, and the SimpliVity systems would be used for just the development environments.

The gains we saw were just so big ... Now we have the entire environment running on SimpliVity cubes.  

But the gains we saw in the development environment were just so big that we very quickly made a decision to get additional cubes and deploy them as the production environment, too. And it just grew from there. So we now have the entire environment running on SimpliVity cubes.

We still have some traditional storage that we use for archiving purposes, but other than that, it’s 100 percent HPE SimpliVity.

Gardner: What storage environment do you associate with that to get the best benefits?

Keep Storage Simple

Steyn: We are currently using the HPE 3PAR storage, and it’s working quite well. We have some production environments running there; a lot of archiving uses for that. It’s still very complementary to our environment.

Gardner: A lot of organizations will start with HCI in something like development, move it toward production, but then they also extend it into things like data warehouses, supporting their data infrastructure and analytics infrastructure. Has that been the case at King Price?

Steyn: Yes, definitely. We initially began with the development environment, and we thought that was going to be it. We very soon adopted HCI into the production environments. And it was at that point where we literally had an entire cube dedicated to the enterprise data warehouse guys. Those are the teams running all of the modeling, pricing structures, and things like that. HCI is proving to be very helpful for them as well, because those guys demand extreme data performance -- it’s scary.


Gardner: I have also seen organizations on a slippery slope, that once they have a certain critical mass of HCI, they begin thinking about an entire software-defined data center (SDDC). They gain the opportunity to entirely mirror data centers for disaster recovery, and for fast backup and recovery security and risk avoidance benefits. Are you moving along that path as well?

Steyn: That’s a project that we launched just a few months ago. We are redesigning our entire infrastructure. We are going to build in the ease of failover, the WAN optimization, and the compression. It just makes a lot more sense to just build a second active data center. So that’s what we are busy doing now, and we are going to deploy the next-generation technology in that data center.

Gardner: Is there any point in time where you are going to be experimenting more with cloud, multi-cloud, and then dealing with a hybrid IT environment where you are going to want to manage all of that? We’ve recently heard news from HPE about OneSphere. Any thoughts about how that might relate to your organization?

Cloud Common Sense

Steyn: Yes, in our engagement with Microsoft, for example, in terms of licensing of products, this is definitely something we have been talking about. Solutions like HPE OneSphere are definitely going to make a lot of sense in our environment.

There are a lot of workloads that we can just pass onto the cloud that we don’t need to have on-premises, at least on a permanent basis. Even the guys from our enterprise data warehouse, there are a lot of jobs that every now and then they can just pass off to the cloud. Something like HPE OneSphere is definitely going to make that a lot easier for us. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Containers, microservices, and HCI help governments in Norway provide safer public data sharing

The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.

We’ll now learn how Norway’s 18 counties are gaining a common shared pool for data on young people’s health and other sensitive information, thanks to the streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for the FINT project in Norway. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving interest in having a common platform for public information in your country?

Sjovatsen: We need interactions between the government and the community to be more efficient. So, we needed to build the infrastructure that supports automatic solutions for citizens. That’s the main driver.

Gardner: What problems do you need to overcome in order to create a more common approach?

Common API at the core

Sjovatsen: One of the biggest issues is that [our users] buy business applications -- such as human resources systems for school administrators -- and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes -- that’s where the problem is. And that’s what the FINT project is all about.


[Due to application heterogeneity] we then need to have developers create application programming interfaces (APIs), and it costs a lot of money, and it is of variable quality. What we’re doing now is creating a common API that’s horizontal -- spanning all of those business applications. It gives us the ability to use our data much more efficiently.
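The horizontal API Sjovatsen describes can be sketched as a thin adapter layer that normalizes data from heterogeneous vendor applications into one schema. This is a minimal illustration only; the field names and vendor payloads below are hypothetical and not FINT’s actual data model:

```python
# Sketch: one horizontal API normalizing data from heterogeneous vendor apps.
# Vendor payloads and field names are hypothetical illustrations.

def from_vendor_a(record: dict) -> dict:
    """Vendor A exposes 'fullName' and 'orgUnit'."""
    return {"name": record["fullName"], "school": record["orgUnit"]}

def from_vendor_b(record: dict) -> dict:
    """Vendor B exposes Norwegian field names, 'navn' and 'skole'."""
    return {"name": record["navn"], "school": record["skole"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def get_student(source: str, record: dict) -> dict:
    """Single entry point: callers see one schema regardless of vendor."""
    return ADAPTERS[source](record)

print(get_student("vendor_a", {"fullName": "Kari", "orgUnit": "Oslo VGS"}))
print(get_student("vendor_b", {"navn": "Ola", "skole": "Bergen VGS"}))
```

The point is that consumers code against the common schema once, instead of paying for a bespoke API per business application.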

Gardner: Please describe for us what the FINT project is and why this is so important for public health.

Sjovatsen: It’s all about taking the power back, regarding the information we’ve handed the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get ahold of all the information, describe it, define it, and then make it available via APIs -- both for public use and also for internal use.

Gardner: What sort of information are we dealing with here? Why is it important for the general public health? 

Sjovatsen: It’s all kinds of information. For example, it’s school information, such as how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that we can build new innovations from businesses that want to create new and better solutions for us.

Learn More About

HPE Pointnext Services

Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?

Agility, scalability, and speed

Sjovatsen: We needed something that was agile so that we can roll out updates continuously. We also needed a way to roll back quickly if something fails.

The reason we are running this on one of the county council’s datacenters is we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.

Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach? 

Sjovatsen: We are in the late stage of testing, and we’re going into production in early 2018. At the moment, we’re looking into using HPE SimpliVity.

Container comfort

Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?

Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application, and we are able to test the code before it goes into production. That’s one of the main drivers.

The second reason is that it’s easy to roll out and it’s easy to roll back. We also have developers in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It’s not much work if they need to install on another computer to get a working environment running.

Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?


Sjovatsen: The container approach is very close to the DevOps environment, so the time from code to production is very small compared to what we did before, when we had some operations guys installing the stuff on servers. Now, we have a very rapid way to go from code to production.

Gardner: With the success of the FINT Project, would you consider extending this to other types of data and applications in other public sector activities or processes? If your success here continues, is this a model that you think has extensibility into other public sector applications?

Unlocking the potential

Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that’s the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway, and we will add value to those using our approach in the next year or so.

Gardner: That could have some very beneficial impacts, very good payoffs.

Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have made an API that extends over the locks on many doors. So, we can now have one API to open multiple locking systems. That’s another way to use this approach.


Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?

Sjovatsen: I definitely recommend it -- it’s simple and agile. The main thing with containers is to separate the storage from the applications. That’s probably what we worked on the most to make it scalable. We wrote the application so it’s scalable, and we separated the data from the presentation layer.
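The separation Sjovatsen recommends -- stateless application code with storage kept external -- is what lets container replicas scale out freely. A minimal sketch of the pattern (illustrative names only, not the FINT codebase):

```python
# Sketch: a stateless service with its storage injected from outside, so any
# number of container replicas can serve requests against the same backing
# store. A plain dict stands in for a real external database.

class GradeService:
    def __init__(self, store):
        self.store = store  # external storage, e.g. a database client

    def record_grade(self, student: str, grade: str) -> None:
        self.store.setdefault(student, []).append(grade)

    def grades_for(self, student: str) -> list:
        return self.store.get(student, [])

shared_store = {}  # stands in for the shared external data layer
replica_1 = GradeService(shared_store)  # e.g. container instance 1
replica_2 = GradeService(shared_store)  # e.g. container instance 2

replica_1.record_grade("Kari", "A")
print(replica_2.grades_for("Kari"))  # any replica sees the same data
```

Because no replica holds state of its own, containers can be added, removed, or rolled back without losing data.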

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks

The next BriefingsDirect agile data center architecture interview explores how an Ericsson and Hewlett Packard Enterprise (HPE) partnership establishes a mobile telecommunications stack that accelerates data services adoption in rapidly advancing economies. 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll now learn how this mobile business support infrastructure possesses a low-maintenance common core -- yet remains easily customizable for regional deployments just about anywhere. 

Here to help us define the unique challenges of enabling mobile telecommunications operators in countries such as Bangladesh and Uzbekistan, we are joined by Mario Agati, Program Director at Ericsson, based in Amsterdam, and Chris James-Killer, Sales Director for HPE. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions

Here are some excerpts:

Gardner: What are the unique challenges that mobile telecommunications operators face when they go to countries like Bangladesh?


Agati: First of all, these are countries with a very low level of revenue per user (RPU). That means for them cost efficiency is a must. All of the solutions that are going to be implemented in those countries should be, as much as possible, focused on cost efficiency, reusability, and industrialization. That’s one of the main reasons for this program. We are addressing those types of needs -- of high-level industrialization and reusability across countries where cost-efficiency is king.

Gardner: In such markets, the technology needs to be as integrated as possible because some skill sets can be hard to come by. What are some of the stack requirements from the infrastructure side to make it less complex?

James-Killer: These can be very challenging countries, and it’s key to do the pre-work as systematically as you can. So, we work very closely with the architects at Ericsson to ensure that we have something that’s repeatable, that’s standardized and delivers a platform that can be rolled out readily in these locations. 

Even countries such as Algeria are very difficult to get goods into, and so we have to work with customs, we have to work with goods transfer people; we have to work on local currency issues. It’s a big deal.

Learn More About the

HPE and Ericsson Alliance

Gardner: In a partnership like this between such major organizations as Ericsson and HPE, how do you fit together? Who does what in this partnership?

Agati: At Ericsson, we are the prime integrator responsible for running the overall digital transformation. This is for a global operator that is presently in multiple countries. It shows the complexity of such deals.

We are responsible for delivering a new, fully digital business support system (BSS). This is core for all of the telco services. It includes all of the business management solutions -- from the customer-facing front end, to billing, to charging, and the services provisioning.

In order to cope with this level of complexity, we at Ericsson rely on a number of partners that are helping us where we don’t have our own solutions. And, in this case, HPE is our selected partner for all of the infrastructure components. That’s how the partnership was born.

Gardner: From the HPE side, what are the challenges in bringing a data center environment to far-flung parts of the world? Is this something that you can do on a regional basis, with a single data center architecture, or do you have to be discrete to each market?

Your country, your data center

James-Killer: It is more bespoke than we would like. It’s not as easy as just sending one standard shipping container to each country. Each country has its own dynamic, its own specific users. 

The other item worth mentioning is that each country needs its own data center environment. We can’t share them across countries, even if the countries are right next to each other, because there are laws that dictate this separation in the telecommunications world. 


So there are unique attributes for each country. We work with Ericsson very closely to make sure that we remove as many itemized things as we can. Obviously, we have the technology platform standardized. And then we work out what’s additionally required in each country. Some countries require more of something and some countries require less. We make sure it’s all done ahead of time. Then it comes down to efficient and timely shipping, and working with local partners for installation.

Gardner: What is the actual architecture in terms of products? Is this heavily hyper-converged infrastructure (HCI)-oriented, and software-defined? What are the key ingredients that allow you to meet your requirements?

James-Killer: The next iterations of this will become a lot more advanced. It will leverage a composable infrastructure approach to standardize resources and ensure they are available to support required workloads. This will reduce overall cost, reduce complexity, and make the infrastructure more adaptable to the end customers’ business needs and how they change over time. Our HPE Synergy solution is a critical component of this infrastructure foundation. 

At the moment we have to rely on what’s been standardized as a platform for supporting this BSS portfolio.


We have worked with Ericsson for a long time on this. This platform has been established for years and years. So it is not necessarily on the latest technology; the latest is being tested right now. For example, the Ericsson Karlskrona BSS team in Sweden is currently testing HPE Synergy. But, as we speak, the current platform is HPE Gen9, so it’s ProLiant servers. HPE Aruba is involved; a lot of heavy-duty storage is involved as well.

But it’s a good, standardized, virtualized environment to run this all in a failsafe way. That’s really the most critical thing. Instead of being the most advanced, we just know that it will work. And Ericsson needs to know that it will work because this platform is critical to the end-users and how they operate within each country.

Gardner: These so-called IT frontiers countries -- in such areas as Southeast Asia, Oceania, the Middle East, Eastern Europe, and the Indian subcontinent -- have a high stake in the success of mobile telecommunications. They want their economies to grow. Having a strong mobile communications and data communications infrastructure is essential to that. How do we ensure the agility and speed? How are you working together to make this happen fast?

Architect globally, customize locally

Agati: This comes back to the industrialization aspect. By being able to define a group-wide solution that is replicable in each of these countries, you are automatically providing a de facto solution in countries where it would be very difficult to develop locally. They obtain a complex, state-of-the-art core telco BSS solution. Thanks to this group initiative, we are able to define a strong set of capabilities and functions, an architecture that is common to all of the countries. 

That becomes a big accelerator because the solution comes pre-integrated, pre-defined, and is just ready to be customized for whatever remains to be done locally. There are always aspects of the regulations that need to be taken care of locally. But you can start from a predefined asset that is already covering some 80 percent of your needs.


In a relatively short time, in those countries, they obtain a state-of-the-art, brand-new, digital BSS solution that otherwise would have required a local and heavy transformation program -- with all of the complexity and disadvantages of that.

Gardner: And there’s a strong economic incentive to keep the total cost of IT for these BSS deployments at a low percentage of the carriers’ revenue.

Shared risk, shared reward

Agati: Yes. The whole idea of the digital transformation is to address different types of needs from the operator’s perspective. Cost efficiency is probably the biggest driver because it’s the one where the shareholders immediately recognize the value. There are other rationales for digital transformation, such as relating to the flexibility in the offering of new services and of embracing new business models related to improved customer experiences. 

On the topic of cost efficiency, we have created with a global operator an innovative revenue-share deal. From our side, we commit to providing them a solution that enables them a certain level of operational cost reduction. 

The current industry average cost of IT is 5 to 6 percent of total mobile carrier revenue. Now, thanks to the efficiency that we are creating from the industrialization and re-use across the entire operator’s group, we are committed to bringing the operational cost down to the level of around 2 percent. In exchange, we will receive a certain percentage of the operator’s revenue back. 
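As a rough illustration of the economics Agati describes -- the percentages come from the text above, but the revenue figure is hypothetical:

```python
# Rough illustration of the IT-cost reduction described in the interview.
# The annual revenue figure is hypothetical; the percentages are from the text.
annual_revenue = 1_000_000_000  # hypothetical: $1B annual carrier revenue

typical_it_cost = 0.055 * annual_revenue  # industry average: 5-6% of revenue
target_it_cost = 0.02 * annual_revenue    # Ericsson's committed level: ~2%

savings = typical_it_cost - target_it_cost
print(f"Annual IT savings: ${savings:,.0f}")  # $35,000,000 at the midpoint
```

At that scale, cutting IT cost from the 5.5 percent midpoint to 2 percent frees up roughly 3.5 percent of revenue each year, which is why Ericsson can afford to take a revenue share in exchange.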

That is for us, of course, a bold move. I need to say this clearly, because we are betting on our capability of not only providing a simple solution, but also providing actual shareholder value, because that’s the game we are playing in now.


We are risking our own money on it at the end of the game. So that's what makes the big difference in this deal against any other deal that I have seen in my career -- and in any other deal that I have seen in this industry. There is probably no one that is really taking on such a huge challenge.

Gardner: It's very interesting that we are seeing shared risks, but then also shared rewards. It's a whole different way of being in an ecosystem, being in a partnership, and investing in big-stakes infrastructure projects.

Agati: Yes. 

Gardner: There has been recent activity for your solutions in Bangladesh. Can you describe what's been happening there, and why that is illustrative of the value from this approach?

Bangladesh blueprint

Agati: Bangladesh is one of the countries in the pipeline, but it is not yet one of the most active. We are still working on the first implementation of this new stack. That will be the one that sets the parameters and becomes the template for all the others to come.

The logic of the transformation program is to identify a good market where we can challenge ourselves and deliver the first complete solution, and then reuse that solution for all of the others. This is what is happening now; we’re in the advanced stages of this pilot project.

Gardner: Yes, thank you. I was more referring to Bangladesh as an example of how unique and different each market can be. In this case, people often don't have personal identification; therefore, one needs to use a fingerprint biometric approach in the street to sell a SIM to get them up and running, for example. Any insight on that, Chris?


James-Killer: It speaks to the importance of the work that Ericsson is doing in these countries. We have seen in Africa and in parts of the Middle East how important telecommunications is to an individual. It's a real quality of life issue. We take it for granted in Sweden; we certainly take advantage of it in my home country of Australia. But in some of these countries you are actually making a genuine difference.

These people need to be connected and haven’t been connected before. And you can see what has happened politically when the people have been exposed to this kind of technology. So it's admirable, I believe, what Ericsson is doing, particularly commercially, and the way that they are doing it. 

It also speaks to Ericsson's success and the continued excitement around LTE and 4G in these markets; not actually 5G yet. When you visit Ericsson's website or go to Ericsson’s shows, there's a lot of talk about autonomous vehicles and working with Volvo and working with Scania, and the potential of 5G for smart cities initiatives. But some of the best work that Ericsson does is in building out the 4G networks in some of these frontier countries.

Agati: If I can add one thing. You mentioned how specific requirements are coming from such countries as Bangladesh, where we have the specific issue related to identity management. This is one of the big challenges we are now facing, of gaining the proper balance between coping with different local needs, such as different regulations, different habits, different cultures -- but at the same time also industrializing the means, making them repeatable and making that as simple as possible and as consistent as possible across all of these countries. 

There is a continuous battle between the attempt to simplify and the reality check on what does not always allow simplification and industrialization. That is the daily battle that we are waging: What do you need, and what don’t you need? Asking, “What is the business value behind a specific capability? What is the reasoning behind why you really need this instead of that?”


At the end of the game, this is the bet that we are making together with our customers -- that there is a path to where you can actually find the right way to simplification. Ericsson has recently been launching our new brand and it is about this quest for making it easier. That's exactly our challenge. We want to be the champion of simplicity and this project is the cornerstone of going in that direction.

Gardner: And only a global integrator with many years of experience in many markets can attain that proper combination of simplicity and customization.

Agati: Yes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

In Celebration of Female Engineers and Innovators


Technology innovation is the bedrock of Hewlett Packard Enterprise (HPE) and our employees are the engine that fuels it. Over the past three years, we have been on a journey to rapidly transform the company to better align with changing technology trends and evolving customer needs. A critical element of this transformation has been the re-ignition of our innovation engine. Every HPE innovation comes from a team of individuals, each contributing their unique perspective, knowledge and experience to advance the way the world works and lives. The full power of our people is driving HPE’s success. A focus on Inclusion and Diversity helps to drive new business, fuel innovation, and attract and retain the best employees.

From bottleneck to powerhouse: Leverage in-memory computing and SAP HANA to empower retail business


Slow growth and increasing consumer demands challenge the retail market. Learn how to tackle these challenges with accelerated data insights, thanks to in-memory computing and SAP HANA.

 “How do we avoid the waste and financial loss of overstocking products? When do we need to adjust prices? Faster data generation with the HPE solution supports better business decisions.”

These are the words of Alan Jensen, CIO, Dansk Supermarked Group, one of hundreds of retailers leveraging the power of in-memory computing, SAP HANA and HPE solutions to thrive in a hyper-competitive industry.

Making Artificial Intelligence Enterprise-Ready: HPE Unveils New AI Solutions


Robots, driverless cars, chatbots, face and voice recognition. All are applications of artificial intelligence and machine learning that are sweeping just about every industry these days. The possibilities are truly exciting. AI can amplify human capabilities and turn exponentially growing data into insight, action, and value, helping companies develop innovative customer experiences and a new competitive edge.

As an enterprise leader, you may be wondering how to apply artificial intelligence in your own environment to your unique business needs. After all, AI initiatives, like any enterprise investment, demand business-aligned use cases. Understanding what challenges to address and proving success with a focused first project are the keys to achieving business value.

Pay-as-you-go IT models provide cost and operations advantages for Northrop Grumman

The next BriefingsDirect IT business model innovation interview explores how pay-as-you-go models have emerged as a new way to align information technology (IT) needs with business imperatives.

We’ll now learn how global aerospace and defense integrator Northrop Grumman has sought a revolution in business model transformation in how it acquires and manages IT.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore how cloud computing-like consumption models can be applied more broadly is Ron Foudray, Vice President, Business Development for Technology Services at Northrop Grumman. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

AI-driven management with HPE InfoSight


Yesterday I had a demo showing 3PAR on HPE InfoSight with cross-stack analytics, a new feature we announced recently. Today I have another demo of another new InfoSight feature: artificial intelligence (AI)-driven management. Competitors are struggling to keep up with the benefits that HPE InfoSight provides customers, and AI-driven management widens that gap even further. Before I jump into the demo, here's a bit of background.

5 Steps to Better Identity and Access Management for Hybrid IT


In their quest for faster time-to-value and an optimized digital supply chain, businesses are increasingly turning to hybrid IT’s blend of public and private cloud solutions with traditional on-prem gear and composable infrastructures. But they’re hitting a speedbump on the way. Identity and access management (IAM) systems, long recognized as a core component of IT security strategy, are showing signs of strain in a hybrid world.

Users perceive today’s IAM controls as overly complex, slowing down access to the tools and data they need for their work and, as a result, making them less productive. They often have to juggle multiple sets of identity factors and credentials.

Want to drive innovation and profitability? Create a great employee experience.


Few of us need reminding these days that we’re living in a hyper-competitive business environment. Technology is severely disrupting businesses of all sizes and all sectors, from financial services to entertainment to retail to automotive. Any edge that a company can get over its competitors is worth investigating. As the quest for new ways to drive innovation and creativity intensifies, companies may be overlooking an important source of those competitive advantages: the quality of the experience they provide to their workforce.