Posted By Dana L Gardner, 6 hours ago
BARCELONA — HP, taking a new leap in its marathon to remake itself,
has further assembled, refined and delivered the IT infrastructure,
big data and cloud means by which other large enterprises can
effectively remake themselves.
This week here at the HP Discover 2013 conference, HP — despite the gulf of 70 years, but originating in the same Silicon Valley byways
— has found a kindred spirit in … Facebook. The social media
juggernaut, also based in Palo Alto, is often pointed to with both envy
and amazement at its new-found and secretive feats of IT data center
scale, reach, efficiency and adaptability. It’s a mantle of
technological respect that HP itself once long held.
So for Facebook’s CIO, Tim Campos, to get on stage in Europe and declare that "A partner like HP Vertica thinks like we do” and is a "key part” of Facebook’s big data capabilities, is one of the best endorsements, err … "likes,” that any modern IT infrastructure vendor could hope for. With Facebook’s data growing by 500 terabytes a day, this is quite a coup for HP’s analytics platform, which is part of its HAVEn initiative.
I fully expected HP to shout it all day from the verdant ancient
hilltops with echoes through the crooked 12th century streets and
across the packed soccer stadium in this beautiful Mediterranean port city: "Facebook runs on HP Vertica."
However odd it is that the very newest of California IT gold rush denizens
rubs off its glow on the very oldest, HP has nonetheless quickly made
itself a key supplier of some of the most important technologies of the
present corporate era: cloud computing and big data processing.
And while the punditry often fixates on the vital signs
— or lack thereof — in the Windows PC business, HP is rightfully and
successfully chasing the bigger long-term vendor opportunity: the
all-purpose software-defined yet hardware-optimized data center.
News you can use
The announcements here at Discover show how HP is advancing these core technologies that
will prove indispensable in helping enterprises and service providers
alike to master their data centers, exploit big data, expand mobile, and prepare for cloud adoption.
Among the innovations and updates announced at the conference were HP’s ConvergedSystem,
Converged Cloud System, new Cloud Service Automation, a Hybrid Cloud
Management platform, and Propel for acquiring and using new
applications. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
HP Vertica also scored a major goal with the announcement of an innovative collaboration with Conservation International (CI)
— a leading non-governmental organization dedicated to protecting
nature for people — to dramatically improve the accuracy and speed of
analysis of data collection in environmental science.
The initiative, called HP Earth Insights, uses the Vertica platform to
deliver near-real-time analytics and is already yielding new information
that indicates a decline in a significant percentage of species
monitored. The project serves as an early warning system for
conservation efforts, enabling proactive responses to environmental threats.
HP Earth Insights applies big data technology to the
ecological research being conducted across 16 tropical forests around
the world by CI, the Smithsonian Institution, and the Wildlife Conservation Society,
as part of the Tropical Ecology Assessment and Monitoring (TEAM)
Network. Data and analysis from HP Earth Insights will be shared with
protected area managers to develop policies regarding hunting and other
causes of species loss in these ecosystems.
The new HP ConvergedSystem
products deliver a total systems experience that simplifies IT,
enabling clients to go from order to operations in as few as 20 days.
With quick deployment, intuitive management, and system-level support,
IT organizations can shift their focus from systems integration to
delivering the applications that power their business.
HP ConvergedSystem, a new product line engineered from the ground up, was built using HP Converged Infrastructure’s servers, storage, networking, software and services.
HP ConvergedSystem products also come with a unified support model from HP Proactive Care,
providing clients with a single point of accountability for all system
components, including partner software. HP also offers consulting
capabilities to plan, design and integrate HP ConvergedSystem offerings
into broader cloud, big-data, and virtualization solutions, while mapping physical and virtual workloads onto clients’ new HP ConvergedSystem.
As enterprises embrace new delivery models, one of the biggest decisions
chief information officers (CIOs) need to make on their cloud journey
is determining where applications or workloads should live -- on
traditional IT or in the cloud. Often, applications will continue to
live across multiple environments, and hybrid delivery becomes an
imperative. Solutions announced this week at HP Discover in Barcelona
build on this strategy, including the introduction of the
next-generation HP CloudSystem, HP’s flagship offering for building and managing private clouds.
It includes a new consumer-inspired user interface, simplified management
tools, and an improved deployment process that enable customers to set
up and deploy a complete private cloud environment in just hours,
compared to weeks for other private cloud solutions. As the foundation
of a hybrid cloud solution, HP CloudSystem bursts to multiple public
cloud platforms, including three new ones: Microsoft Windows Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company.
HP CloudSystem integrates OpenStack-based
HP Cloud OS technology, providing customers a hardened, tested
OpenStack distribution that is easy to install and manage. The
next-generation CloudSystem also incorporates the new HP Hybrid Cloud Management Platform,
a comprehensive management solution that enables enterprises and
service providers to deliver secure cloud services across public or
private clouds, as well as traditional IT.
The Hybrid Cloud Management platform integrates HP Cloud Service Automation (CSA) version 4.0 and includes native support for both HP CloudOS with OpenStack and the open-source
standard TOSCA (Topology and Orchestration Specification for Cloud
Applications), enabling easier application portability and management of
hybrid and heterogeneous IT environments.
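For readers unfamiliar with TOSCA, a service is described declaratively as a topology of typed nodes that a manager such as CSA can then deploy across environments. As a rough illustration only — the node names and property values below are invented, and HP’s actual templates are not shown in the announcement — a minimal topology in TOSCA’s Simple Profile YAML rendering looks like this:

```yaml
# Illustrative TOSCA fragment only -- node names and values are invented,
# not taken from HP's Cloud Service Automation templates.
tosca_definitions_version: tosca_simple_yaml_1_0

description: Minimal two-tier topology a hybrid-cloud manager could deploy

topology_template:
  node_templates:
    web_server:
      type: tosca.nodes.WebServer
      requirements:
        - host: app_host        # web tier must be hosted on the compute node
    app_host:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

Because the template names abstract node types rather than vendor APIs, the same description can in principle be deployed to a private CloudSystem pool or a public provider, which is the portability point being made above.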
HP is also adding a new hybrid-design capability to its global professional services capabilities. HP Hybrid Cloud Design Professional Services
offer a highly modular design approach to help organizations architect
a cloud solution that aligns with their technical, organizational, and
business needs. The new consulting services help customers evolve
their cloud strategy and vision into a solution based on delivering
business outcomes and ready for implementation.
Also expanding is the Virtual Private Cloud (VPC)
offering, which helps customers take advantage of the economics of a
public cloud with the security and control of a private cloud solution.
The new HP VPC Portfolio provides a range of VPC solutions, from
standardized self service to a customized, fully managed service model.
HP VPC solutions deliver the security and control of a private cloud
in a flexible, cost-effective, multitenant cloud environment. The
latest version of HP Managed VPC now allows customers to choose among
virtual or physical configurations, multiple tiers of storage,
hypervisors, and network connectivity types.
As part of its overall hybrid delivery, HP offers HP Flexible Capacity (FC), an on-premises solution of enterprise-grade infrastructure with cloud economics, including pay-for-use and instant scalability
of server, storage, networking and software capacity. This allows for
treating cloud costs as operating expenses, rather than capital
expenses. It also supports existing customer third-party equipment for a
true heterogeneous environment. HP CloudSystem also can burst to HP
FC infrastructure for customers needing the on-demand capacity without
the data ever leaving the premises.
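The opex-versus-capex point behind pay-for-use can be sketched with toy arithmetic. All figures below are invented for illustration; they are not HP Flexible Capacity pricing:

```python
# Toy comparison (all figures hypothetical) of buying capacity up front
# versus paying only for what is actually used each month.

def capex_cost(servers: int, price_per_server: int) -> int:
    """One-time purchase sized for peak demand."""
    return servers * price_per_server

def payg_cost(monthly_usage: list, rate_per_server_month: int) -> int:
    """Pay only for servers actually used in each month."""
    return sum(used * rate_per_server_month for used in monthly_usage)

# Peak demand is 100 servers, but real usage varies across the year.
usage = [40, 45, 50, 60, 100, 100, 60, 50, 45, 40, 40, 45]
print(capex_cost(100, 5000))   # 500000 paid up front as capital expense
print(payg_cost(usage, 200))   # 135000 spread across the year as opex
```

The gap between the two numbers is the idle capacity a capex buyer pays for; a pay-for-use model bills only the consumed months, which is why the text frames it as cloud economics on-premises.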
HP is also offering HP Propel,
a cloud-based service solution that enables IT organizations to
deliver self-service capabilities to end users, with an eye toward
improved service delivery, quicker time to value, and lower costs.
Available on both desktop and mobile
platforms, the free version of HP Propel includes a standard service
catalog; the HP Propel Knowledge Management solution, which accelerates
user self-service with immediate access to information needed; and IT
news feeds delivered via RSS.
The premium version extends those
capabilities with an enhanced catalog; advanced authentication
methods, such as single sign-on; and access to the extended Propel
Knowledge Management solution. Clients also can integrate their
on-premises service management solutions through the HP-hosted Propel portal.
Propel is built on an open and extensible
service exchange, with the ability to add catalogs and services as
clients’ demands evolve. To further simplify configuration,
administration and maintenance of the solution, HP and its worldwide
partners provide comprehensive and strategic assessment and implementation services.
HP Propel will be available in the Americas and Europe, the Middle East and
Africa in January and in Asia Pacific and Japan in March. Additional
information is available at www.hp.com/go/propel.
HP further announced Converged Storage
innovations that restore business productivity at record speed, reduce
All Flash Array costs significantly while increasing performance and quality-of-service (QoS) capabilities, and expand agility by enabling cloud storage application mobility.
Additions to the HP Converged Storage portfolio include the next generation of HP StoreOnce Backup and HP StoreAll Archive systems to reduce risk as well as enhancements to HP 3PAR StoreServ Storage
with cost-reduced flash technology, performance improvements, and
software QoS enhancements to meet the needs of IT-as-a-service environments.
In other HP news, HP has announced a new lineup of
servers designed to save space and reduce cost, but more importantly
cut energy usage. The Moonshot servers are small servers that can be packed into dense arrays and aid with heavy workload computing.
These web servers are designed and tailored for specific workloads to
deliver optimum performance. These low power servers share management,
power, cooling, networking, and storage. This architecture is key to
achieving 8x efficiency at scale and enabling a 3x faster innovation
cycle. The power-saving feature addresses the problem caused by power
consumption in cloud operations.
Posted By Dana L Gardner, Monday, December 09, 2013
The next BriefingsDirect innovator interview targets how the recent and rapid evolution of mobile and client management requirements have caused considerable complexity and confusion.
We’ll examine how incomplete solutions and a lack of a clear pan-client
strategy have hampered the move to broader mobile support at
enterprises and mid-market companies alike. This state of muddled
direction has put IT in a bind, while frustrating users who are eager to
gain greater productivity and flexibility in their work habits, and device choice.
To share his insights on how to better prepare for a mobile-enablement
future that quickly complements other IT imperatives such as cloud, big data, and even more efficient data centers, we’re joined by Tom Kendra, Vice President and General Manager, Systems Management at Dell Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Dell is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Kendra: There is an enormous amount of conversation now in this mobility area and it’s moving very, very rapidly. This is an evolving space. There are a lot of moving parts, and hopefully, in the next few minutes, we’ll be able to dive into some of those.
Gardner: People have been dealing with a fast-moving client environment for
decades. Things have always changed rapidly with the client. We went
through the Web transition and client-server. We’ve seen all kinds of different ways of getting apps to devices. What’s different about the mobile and BYOD challenges today?
Speed and agility
Kendra: Our industry is characterized by speed and agility. Right now, the big
drivers causing the acceleration can be put into three categories: the
amount and type of data that’s available, all the different ways and
devices for accessing this data, as well as the evolving preferences and
policies for dictating who, what, and how data is shared.
For example, training videos, charts and graphs
versus just text, and the ability to combine these assets and deliver
them in a way that allows a front-line salesperson, a service desk staffer or anyone else in the corporate ecosystem to satisfy customer requests much more efficiently and rapidly.
The second area is the number of devices we need to support. You touched
on this earlier. In yesterday’s world -- and yesterday was a very short
time ago -- mobility was all around the PC. Then, it was around a
corporate-issued device, most likely a business phone. Now, all of a
sudden, there are many, many, many more devices that corporations are
issuing as well as devices people are bringing into their work
environment at a rapid pace.
We’ve moved from laptops to smartphones that were corporate-issued to tablets. Soon, we’ll get more and more wearables in the environment and machine-to-machine
communications will become more prevalent. All of these essentially
create unprecedented opportunities, yet also complicate the problem.
The third area that’s driving change at a much higher velocity is the
ever-evolving attitude about work and work-life balance. And, along with
that ... privacy. Employees want to use what they’re comfortable using
at work and they want to make sure their information and privacy
rights are understood and protected. These three items are really
driving the acceleration.
Gardner: And the response to this complexity so far, Tom, has been some suite, some mobile device management (MDM)
approaches, trying to have multiple paths to these devices and
supporting multiple types of infrastructure behind that. Why have these
not yet reached a point where enterprises are comfortable? Why have we
not yet solved the problem of how to do this well?
Kendra: When you think about all the different requirements, you realize there
are many ways to achieve the objectives. You might postulate that, in
certain industries, there are regulatory requirements that somewhat
dictate a solution. So a lot of organizations in those industries move
down one path. In industries where you don’t have quite the same
regulatory environment, you might have more flexibility to choose yet another approach.
The range of available options is wide, and many
organizations have experimented with numerous approaches. Now, we’ve
gotten to the point where we have the unique opportunity -- today and
over the next couple of years -- to think about how we consolidate
these approaches into a more integrated, holistic mobility solution that
elevates data security and mobile workforce productivity.
None of them are inherently good or bad. They all serve a purpose. We have
to ask, "How do I preserve the uniqueness of what those different
approaches offer, while bringing together the similarities?”
How can you take advantage of similarities, such as the definition of
roles or which roles within the organization have access to what types
of data? The commonalities may be contextual in the sense that I’m
going to provide this kind of data access if you are in these kinds of
locations on these kinds of devices. Those things we could probably
pull together and manage in a more efficient way.
But we still
want to give companies the flexibility to determine what it means to
support different form factors, which means you need to understand the
characteristics of a wearable device versus a smartphone or an iPad.
You also need to understand the different use cases that are most
prevalent in my organization. If I’m a factory worker, for example, it
may be better to have a wearable in the future, rather than a tablet.
In the medical field, however, tablets are probably preferred over
wearables because of the need to enter, modify and view electronic medical records. So there are different tradeoffs, and we want to be able to support all of them.
Gardner: Looking again at the historical perspective, in the past when IT was
faced with a complexity -- too many moving parts, too many variables --
they could walk in and say, "Here’s the solution. This is the box
we’ve put around it. You have to use it this way. That may cause you
some frustration, but it will solve the bigger problem.” And they could
get away with that.
Today, that’s really no longer the case. There’s shadow IT. There’s consumerization of IT.
There are people using cloud services on their own volition without
even going through any of the lines of business. It's right down to the
individual user. How does IT now find a way to get some control, get
the needed enterprise requirements met, but recognize that their
ability to dictate terms is less than it used to be?
Kendra: You’re bringing up a very big issue. Companies today are getting a lot of
pressure from individuals bringing in their own technology. One of the
case studies you and I have been following for many months is Green Clinic Health System, a physician-owned community healthcare organization in Louisiana. As you know, Jason Thomas,
the CIO and IT Director, has been very open about discussing their
progress -- and the many challenges -- encountered on their BYOD journey.
As part of Green Clinic’s goal to ensure excellent
patient care, the 50 physicians started bringing in different
technologies, including tablets and smartphones, and then asked IT to
support them. This is a great example of what happens when major
organizational stakeholders -- Green Clinic’s physicians, in this case
-- make technology selections to deliver better service. With Green
Clinic, this meant giving doctors and clinicians anytime, anywhere
access to highly sensitive patient information on any
Internet-connected device without compromising security or HIPAA compliance requirements.
In other kinds of businesses, similar selection processes are underway as line-of-business
owners are coming forward to request that different employees or
organizational groups have access to information from a multitude of
devices. Now, IT has to figure out how to put the security in place to
make sure corporate information is protected while still providing the
flexibility for users to do their jobs using preferred devices.
Shadow IT often emerges in scenarios where IT puts too many restrictions on
device choice, which leads line-of-business owners and their
constituents to seek workarounds. As we all know, this can open the door
to all sorts of security risks. When we think about the Green Clinic
example, you can see that Jason Thomas strives to be as flexible as
possible in supporting preferred devices while taking all the necessary
precautions to protect patient privacy and HIPAA regulations.
Gardner: When we think about how IT needs to approach this differently --
perhaps embracing and extending what's going on, while also being
mindful of those important compliance risk and governance issues --
we’re seeing a similar shift from the IT vendors.
There’s such a large opportunity in the market for mobile, for the
modern data center, for the management of the data and the apps out to
these devices, that we are seeing vendor models shifting, and we’re
seeing acquisitions happening. What’s different this time from the past?
Kendra: The industry has to move from a position of providing a series of point-solutions to guiding and leading with a strategy
for pulling all these things together. Again, it comes down to giving
companies a plan for the future that keeps pace with their emerging
requirements, accommodates existing skill sets and grows with them as
mobility becomes more ingrained in their ways of doing business. That’s
the game -- and that’s the hard part.
The types of solutions Dell
is bringing to the market embrace what’s needed today while being
flexible enough to accommodate future applications and evolving data requirements.
The goal is to leverage customers’ existing
investments in their current infrastructures and find ways to build and
expand on those with foundational elements that can scale easily as
needs dictate. You can imagine a scenario in which an IT shop is not
going to have the resources, especially in the mid-market, to embrace
multiple ways of managing, securing, granting access, or all of these capabilities.
That’s why I think this is easily going to be a three- to five-year
affair. Perhaps it will be longer, because we’re not just talking about
plopping in a mobile device management capability. We’re really
talking about rethinking processes, business models, productivity, and
how you acquire working skills. We’re no longer just doing word
processing instead of using typewriters. We’re not just repaving cow
paths. We’re charting something quite new.
Gardner: There is that
interrelationship between the technology capabilities and the work. I
think that’s something that hasn’t been thought out. Companies were
perhaps thinking, "We'll just add mobile devices onto the roster of
things that we support.” But that’s probably not enough. How does the
vision from that aspect work, when you try to do both a technology
shift and a business transformation?
Kendra: You used the term "plop in an MDM solution.” It's important to
understand that the efforts and the initiatives that have taken place
have all been really valuable. We’ve learned a lot. The issue is, as
you are talking about, how to evolve this strategy and why.
What’s important is having an understanding of the business transformation
that takes place when you put all these elements together—it’s much
more far-reaching than simply "plopping” in a point solution for a specific need.
In yesterday's world, I might have had the
right or ability to wipe entire devices. Let’s look at the
corporate-issued device scenario. The company owns the device and
therefore owns the data that resides or is accessed on that device.
Wiping the device would be entirely within my domain or purview. But in
a BYOD environment, I’m not going to be able to wipe a device. So, I
have to think about things much differently than I did before.
As companies evolve their own mobility strategies,
it’s important to leverage their learnings, while remaining focused on
enhancing their users’ experiences and not sacrificing them. That’s why
some of the research we’ve done suggests there is a very high
reconsideration rate in terms of people and their current mobility strategies.
They’ve tried various approaches and point solutions
and some worked out, but others have found these solutions lacking,
which has caused gaps in usability, user adoption, and manageability.
Our goal is to address and close those gaps.
Gardner: Let's get to what needs to happen. It seems to me that containerization
has come to the fore, a way of accessing different types of
applications, acquiring those applications perhaps on the fly, rather
than rolled out for the entire populace of the workforce over time.
Tell us a little bit more about how you see this working better, moving
toward a more supported, agile, business-friendly and
user-productivity vision or future for mobility.
Kendra: Giving users the ability to acquire applications on the fly is hugely
important as users, based on their roles, need to have access to
applications and data, and they need to have it served up in a very
easy, user-friendly manner.
The crucial considerations here are
role-based, potentially even location-based. Do I really want to allow
the same kinds of access to information if I’m in a coffee house in
China as I do if I am in my own office? Does data need to be resident
on the device once I’m offline? Those are the kinds of considerations
we need to think about.
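Those role- and location-based considerations boil down to a policy lookup: given who you are and where you are connecting from, which actions are allowed, and may data stay resident on the device? A minimal Python sketch (the roles, zones, and rules here are hypothetical examples, not Dell's product API) might look like this:

```python
# Illustrative sketch only: a role- and location-aware access check of the
# kind described above. Roles, zones, and rules are hypothetical.

RULES = {
    # (role, network_zone) -> set of permitted actions
    ("sales", "corporate"):      {"view", "edit", "download"},
    ("sales", "public"):         {"view"},   # e.g. coffee-house Wi-Fi abroad
    ("contractor", "corporate"): {"view"},
}

def permitted_actions(role: str, network_zone: str) -> set:
    """Return the actions this role may perform from this location."""
    return RULES.get((role, network_zone), set())  # default deny

def may_store_offline(role: str, network_zone: str) -> bool:
    """Data stays resident on the device only where 'download' is allowed."""
    return "download" in permitted_actions(role, network_zone)
```

So a salesperson in the office could view, edit, and cache data offline, while the same salesperson on public Wi-Fi would be limited to viewing, with nothing left resident on the device.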
The capability needed to ensure a seamless offline experience is where the issue of
containerization arises. There are capabilities that enable users to
view and access information in a secure manner when they’re connected to
an Internet-enabled device.
But what happens when those same
users are offline? Secure container-based workspaces allow me to take
documents, data or other corporate information from that online
experience and have it accessible whether I’m on a plane, in a tunnel
or outside a wi-fi area.
The container provides a protected place
to store, view, manage and use that data. If I need to wipe it later
on, I can just wipe the information stored in the container, not the
entire device, which likely will have personal information and other
unrelated data. With the secure digital workspace, it’s easy to
restrict how corporate information is used, and policies can be readily
established to govern which data can go outside the container or be
used by other applications.
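The selective-wipe idea can be sketched in a few lines of Python. This is a toy model of the concept, not a real MDM or Dell API: corporate data sits in its own container, so a wipe clears only that container and personal data survives.

```python
# Toy model (hypothetical, not a real MDM API) of container-based
# selective wipe: corporate data lives in a dedicated workspace.

class Device:
    def __init__(self):
        self.personal = {"photos": ["beach.jpg"], "contacts": ["mom"]}
        self.container = {}          # secure corporate workspace

    def sync_corporate(self, docs: dict):
        """Cache corporate documents in the container for offline use."""
        self.container.update(docs)

    def selective_wipe(self):
        """Remove only the corporate container; personal data is untouched."""
        self.container.clear()

d = Device()
d.sync_corporate({"q3_forecast.xlsx": b"..."})
d.selective_wipe()
assert d.container == {} and d.personal["contacts"] == ["mom"]
```

A full-device wipe, by contrast, would erase the `personal` data too, which is exactly what a BYOD owner will not accept.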
The industry is clearly moving in this direction, and it’s critical that we make it work across corporate applications.
Gardner: If I hear you correctly, Tom, it sounds as if we’re going to be able
to bring down the right container, for the right device, at the right
time, for the right process and/or data or application activity. That’s
putting more onus on the data center, but that’s probably a good thing.
That gives IT the control that they want and need.
It seems to me that, when you have that flexibility on the device and you
can manage sessions and roles and permissions, this can be a cost and
productivity benefit to the operators of that data center. They can
start to do better data management, dedupe, reduce their storage costs,
and do backup and recovery with more of a holistic, agile or strategic
approach. They can also meter out the resources they need to support
these workloads with much greater efficiency, predict those workloads,
and then react to them very swiftly.
We’ve talked so far about
how difficult and tough this is. It sounds like if you crack this
nut properly, not only do you get that benefit of the user experience
and the mobility factor, but you can also do quite a bit of a good IT
blocking and tackling on the backend. Am I reading that correctly or am
I overstating that?
Kendra: I think
you’re absolutely on the money. Take us as individuals. You may have a
corporate-issued laptop. You might have a corporate-issued phone. You
also may have an iPad, a Dell tablet,
or another type of tablet at home. For me, it’s important to know what
Tom Kendra has access to across all of those devices in a very simple manner.
I don’t want to set up a different approach based on
each individual device. I want to set up a way of viewing my data,
based on my role, permissions and work needs. Heretofore, it's been
largely device-centric and management-centric, as opposed to user productivity role-centric.
The Dell position -- and where we see the industry going -- is
consolidating much of the management and security around those devices
in a holistic manner, so I can focus on what the individual needs. In
doing so, it’s much easier to serve the appropriate data access in a
fairly seamless manner. This approach rings true with many of our
customers who want to spend more resources on driving their businesses
and facilitating increased user productivity and fewer resources on
managing a myriad of multiple systems.
Gardner: By bringing the point of management -- the point of power, the point
of control and enablement -- back into the data center, you’re also
able to link up to your legacy assets much more easily than if you had
to somehow retrofit those legacy assets out to a specific device
platform or a device's format.
Kendra: You’re hitting on the importance of flexibility. Earlier, we said the
user experience is a major driver along with ensuring flexibility for
both the employee and IT. Reducing risk exposure is another crucial
driver and by taking a more holistic approach to mobility enablement, we
can address policy enforcement based on roles across all those
devices. Not only does this lower exposure to risk, it elevates data
security since you’re addressing it from the user point of view instead
of trying to sync up three or four different devices with multiple policies.
Gardner: And if I am
thinking at that data center level, it will give me choices on where
and how I create that data center, where I locate it, how I produce it,
and how I host it. It opens up a lot more opportunity for utilizing
public cloud services, or a combination that best suits my needs and
that can shift and adapt over time.
Kendra: It really does come down to freedom of choice, doesn’t it? The freedom
to use whatever device in whichever data center combination that makes
the most sense for the business is really what everyone is striving
for. Many of Dell’s customers are moving toward environments where they
are taking both on-premise and off-premise compute resources. They
think about applications as, "I can serve them up from inside my
company or I can serve them up from outside my company.”
The issue comes down to the fact that I want to integrate wherever
possible. I want to serve up the data and the applications when needed
and how needed, and I want to make sure that I have the appropriate
management and security controls over those things.
Gardner: Okay, I think I have the vision much more clearly now. I expect we’re
going to be hearing more from Dell Software on ways to execute toward
that vision. But before we move on to some examples of how this works in
practice, why Dell? What is it about Dell now that you think puts you
all in a position to deliver the means to accomplish this vision?
Kendra: Dell has relationships with millions of customers around the world.
We’re a very trusted brand, and companies are interested in what Dell
has to say. People are interested in where Dell is going. If you think
about the PC market, for example, Dell has about an 11.9 percent
worldwide market share. There are hundreds and hundreds of millions of
PCs used in the world today. I believe there were approximately 82
million PCs sold during the third quarter of 2013.
The point here
is that we have a natural entrée into this discussion and the
discussion goes like this: Dell has been a trusted supplier of hardware
and we’ve played an important role in helping you drive your business,
increase productivity and enable your people to do more, which has
produced some amazing business results. As you move into thinking about
the management of additional capabilities around mobile, Dell has
hardware and software that you should consider.
Once we’re in the conversation, we can highlight Dell’s world-class
technologies, including end-user computing, servers, storage,
networking, security, data protection, software, and services.
As a trusted brand with world-class technologies and proven solutions,
Dell is ideally suited to help bring together the devices and underlying
security, encryption, and management technologies required to deliver a
unified mobile enablement solution. We can pull it all together and
deliver it to the mid-market probably better than anyone else.
The Dell advantages are numerous. In our announcements over the next
few months, you’ll see how we’re bringing these capabilities together
and making it easier for our customers to acquire and use them at a
lower cost and faster time to value.
One of the things that I'd like to do, Tom, is not just to tell how
things are, but to show. Do we have some examples of organizations --
you already mentioned one with the Green Clinic -- that have bitten the
bullet and recognized the strategic approach, the flexibility on the
client, leveraging containerization, retaining control and governance,
risk, and compliance requirements through IT, but giving those end-users
the power they want? What's it like when this actually works?
When it actually works, it's a beautiful thing. Let’s start there. We
work with customers around the world and, as you can imagine, given
people's desire for their own privacy, a lot of them don't want their
names used. But we’re working with a major North American bank that has
the problems that we have been discussing.
They have 20,000-plus corporate-owned smartphones, growing to some 35,000
in the next year. They have more than a thousand iPads in place,
growing rapidly. They have a desktop virtualization (VDI) solution, but the VDI solution, as we spoke about earlier, really doesn't support the offline experience that they need.
They are trying to leverage an 850-person IT department that has worldwide
responsibilities, all the things that we spoke about earlier. And they
use technology from companies that haven’t evolved as quickly as they
should have. So they're wondering whether those companies are going to
be around in the future.
This is the classic case of, "I have a
lot of technology deployed. I need to move to a container solution to
support both online and offline experiences, and my IT budget is being
squeezed.” So how do you do this? It goes back to the things we talked about earlier. First, I need to leverage what I have. Second, I need to
pick solutions that can support multiple environments rather than a
point solution for each environment. Third, I need to think about the
future, and in this case, that entails a rapid explosion of mobile devices. I need to mobilize rapidly without compromising security
or the user experience. The concept of an integrated suite of policy
and management capabilities is going to be extremely important to my
organization going forward.
Dell is approaching this enterprise mobility manager market with an
aggressive perspective, recognizing a big opportunity in the market and
an opportunity that they are uniquely positioned to pursue. The emphasis can't be on the client alone, nor just on the data center; it really needs to be a bridging type of value-add these
days. Can you tease us a little bit about some upcoming news? What
should we expect next?
Kendra: The solutions we announced in April
essentially laid out our vision of Dell’s evolving mobility
strategies. We talked about the need to consolidate mobility management
systems and streamline enablement. We focused on the importance of
leveraging world-class security, including secure remote access and
encryption. And the market has responded well to Dell's point of view.
As we move forward, we have the opportunity to get much more prescriptive
in describing our unified approach that consolidates the capabilities
organizations need to ensure secure control over their corporate data
while still ensuring an excellent user experience.
You'll see more from us detailing how those integrated solutions come together to
deliver fast time to value. You'll also see different delivery
vehicles, giving our customers the flexibility to choose from on-premises, software-as-a-service (SaaS), or cloud-based approaches. You'll see additional device support, and you'll see containerization.
We plan to leverage our advantages, our best-in-class capabilities around security, encryption, and device management, and this common-functionality approach, in upcoming announcements.
As we take the analyst community through our end-to-end mobile/BYOD
enablement plans, we’ve gotten high marks for our approach and
direction. Our discussions involving Dell’s broad OS support, embedded
security, unified management and proven customer relationship all have
been well received.
Our next step is to make sure that, as we
announce and deliver in the coming months, customers absolutely
understand what we have and where we're going. We think they're going to be
very excited about it. We think we're in the sweet spot of the
mid-market and the upper mid-market in terms of what solutions they need
to meet their mobile enablement objectives.
We also believe
we can provide a unique point-of-view and compelling technology
roadmaps for those very large customers who may have a longer journey
in their deployments or rollout.
We're very excited about what
we're doing. The specifics of what we're doing play out in early
December, January, and beyond. You'll see a rolling thunder of
announcements from Dell, much like we did in April. We’ll lay out the
solutions. We’ll talk about how these products come together.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.
Posted By Dana L Gardner,
Sunday, December 08, 2013
| Comments (0)
Creating big-data capabilities and becoming a data-driven organization are near the top of surveys of the most pressing business imperatives as we approach 2014.
These business-intelligence (BI) trends require better access and automation across data flows from a variety of sources and formats, and from many business applications.
The next BriefingsDirect panel discussion then
focuses on ways that enterprises are effectively harvesting data in all
its forms, and creating integration that fosters better use of big
data throughout the business process lifecycle.
Here now to share
their insights into using data strategically by exploiting all of the
data from all of the applications across business ecosystems, we’re
joined by Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice, based in Austin, Texas; Rick Percuoco, Senior Vice President of Research and Development at Trillium Software in Bedford, Mass.; and Betsy Bilhorn, Vice President of Product Management at Scribe Software in Manchester, NH.
The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Scribe Software is a sponsor of BriefingsDirect podcasts.]
Here are some edited excerpts:
Gardner: Big-data analytics platforms have become much more capable, but we
still come back to the same problem of getting to the data, putting it
in a format that can be used, directing it, managing that flow,
automating it, and then, of course, dealing with the compliance,
governance, risk, and security issues.
Is that the correct read
on this, that we've been able to move quite well in terms of the
analytics engine capability, but we're still struggling with getting
the fuel to that engine?
I would absolutely agree with that. When we talk about big data, big
analytics and all of that, it's moved much faster than capturing those
data sources. Some of these systems that we want to get the data from
were never built to be open. So there is a lot of work just to get them
out of there.
The other thing a lot of people like to talk about is an application programming interface (API) economy.
"We will have an API and we can get through web services at all this
great stuff," but what we’ve seen in building a platform ourselves and
having that connectivity, is that not all of those APIs are created equal. The vendors who are supplying this data, or these data
services, are kind of shooting themselves in the foot and making it
difficult for the customer to consume them, because the APIs are poorly
written and very hard to understand, or they simply don’t have the
performance to even get the data out of the system.
On top of that,
you have other vendors who have certain types of terms of service,
where they cut off the service or they may charge you for it. So when
they talk about how it's great that they can do all these analytics, in
getting the data in there, there are just so many show stoppers on a
number of fronts. It's very, very challenging.
Gardner: Customer relationship management (CRM),
I imagine, paved the way where we’re trying to get a single view of
the customer across many different types of activity data. But now,
we’re pushing the envelope to a single view of the patient across
multiple healthcare organizations or a single view of a process that
has a cloud part, an on-premises part, and an ecosystem supply-chain part.
It seems as if we’ve moved into more complexity here. Jon Petrucelli, how
are the systems keeping up with these complex demands, expanding
concentric circles of data inclusion, if you will?
Petrucelli: That’s a huge challenge. We see integration as critical to achieving high levels of adoption and return on investment (ROI).
Adoption by the users and then ultimately ROI by the businesses is
important, because integration is like gas in the sports car. Without
the gas, it's not going to go.
What we do for a lot of customers is intentionally build integration using Scribe, because we know that if we can take them down from five different interfaces to one, they get a 360-degree view of the customer that’s calling them or that they’re about to call on.
You want to give them one user experience or one user interface to keep users productive, especially sales reps in the CRM world and customer service reps. You don’t want them all tabbing between a bunch of
different systems. So we bring them into one interface, and with a
platform like Microsoft CRM, they can use their interface of choice.
They can move from a desktop, to a laptop, to a tablet, to a mobile device
and they’re seeing one version of the truth, because they’re all looking through windows into the same realm. And what is tunneled into that realm comes through pipes built on Scribe.
Users are really going to like that. Their adoption is going to be higher and
their productivity is going to be higher. If you can raise the
productivity of the users, you can raise the top line of the company
when you’re talking about a sales organization. So, integration is the key to driving high levels of adoption, ROI, and productivity.
We used to do custom software integration. With a lot of our customers, we see a lot of custom .NET code or other codesets, Java for example, that do the integration. They used to do that, and we still see some bigger organizations that are stuck on that approach. That’s a way to paint yourself into a corner and make yourself captive to some developer.
Percuoco: You do have to watch out for custom APIs. Trillium has a connectivity business as does Scribe.
As long as you stick with industry-standard handshaking methods, like XML or JSON or web services and RESTful
APIs, then usually you can integrate packages fairly smoothly. You
really need to make sure that you're using industry-standard hand-offs
for a lot of the integration methods. You have four or five different
ways to do that, but it’s pretty much the same four or five.
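Those standard hand-offs are easy to sketch. Below is a minimal, hypothetical example of the JSON-over-REST pattern Percuoco describes: composing a query URL and parsing a JSON response body with only the Python standard library. The endpoint URL and field names are invented for illustration and are not any vendor's actual API.

```python
import json
from urllib.parse import urlencode

# Hypothetical REST endpoint; the URL and fields are illustrative only.
BASE_URL = "https://example.com/api/v1/customers"

def build_request_url(base_url, filters):
    """Compose a RESTful query URL from a dict of filter parameters."""
    return base_url + "?" + urlencode(sorted(filters.items()))

def parse_customers(response_body):
    """Parse a JSON response body into plain Python records."""
    payload = json.loads(response_body)
    return [
        {"id": c["id"], "name": c["name"], "segment": c.get("segment", "unknown")}
        for c in payload["customers"]
    ]

# A sample response body, standing in for what the service would return.
sample_response = (
    '{"customers": [{"id": 1, "name": "Acme"},'
    ' {"id": 2, "name": "Globex", "segment": "enterprise"}]}'
)
records = parse_customers(sample_response)
url = build_request_url(BASE_URL, {"segment": "enterprise", "page": 1})
```

Because both sides agree on the hand-off format, any two packages that speak JSON over HTTP can integrate this way without custom glue code.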
We highly recommend that people move away from that and go to a
platform-based middleware application like Scribe. Scribe is our
preferred platform middleware, because that makes it much more
sustainable and changeable as you move forward. Inevitably, in
integration, someone is going to want to change something later on.
If you have a custom code integration, someone has to actually crack open that code, take it offline, make a change, and then re-update the code, and it's all just pure spaghetti code.
With a platform like Scribe, it’s very easy to pick up, and industry-standard training is available online. You’re not held hostage anymore. It’s a graphical user interface (GUI): literally drag-and-drop mappings and interlock points. That’s a really nice capability in their Scribe Online service. Even children can do an integration. It’s like a teaching technique developed at Harvard or MIT for putting puzzle pieces together through integration: if it doesn’t work, the puzzle pieces don’t fit.
They’ve done a really amazing job of making integration for the rest of us, not just for developers. We highly recommend that people take a look at that, because it brings the power back to the business and takes it away from just one developer, a small development shop, or an outsourced developer.
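The drag-and-drop mappings described above reduce, conceptually, to a declarative table of source-to-target field rules that a generic engine applies. Here is a toy sketch of that idea; the field names are invented, and this is not Scribe's actual format.

```python
# Toy declarative field mapping, in the spirit of GUI-built middleware maps.
# Each rule: (source_field, target_field, transform).
MAPPING = [
    ("first_name", "FirstName", str.strip),
    ("last_name",  "LastName",  str.strip),
    ("email",      "Email",     str.lower),
]

def apply_mapping(source_record, mapping):
    """Apply the mapping rules to one source record, producing a target record."""
    target = {}
    for src, dst, transform in mapping:
        if src in source_record:
            target[dst] = transform(source_record[src])
    return target

row = {"first_name": "  Jon ", "last_name": "Petrucelli", "email": "JON@Example.COM"}
mapped = apply_mapping(row, MAPPING)
```

The point of the declarative style is that changing an integration means editing the mapping table, not cracking open and redeploying custom code.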
Gardner: What else has been holding businesses back from gaining access to the most relevant data?
Bilhorn: One is the explosion in the different types and kinds of data. Then, you start mixing that with legacy systems
that have always been somewhat difficult to get to. Bringing those all
together and making sense of that are the two biggest ones. Those have
been around for a long, long time.
That problem is getting
exponentially harder, given the variety of those data sources, and then
all the different ways to get into those. It’s just trying to put all
that together. It just gets worse and worse. When most people look at
it today, it almost seems somewhat insurmountable. Where do you even start?
Petrucelli: We work with a lot of large enterprise, global-type customers. To build on what Betsy said, they have a lot of legacy systems. There's a lot of data captured inside these legacy systems, and those systems were not designed with an open architecture for sharing their data with other systems.
When you’re dealing with modern systems, it's definitely getting easier. When you deal with middleware
software like Scribe, especially with Scribe Online, it gets much
easier. But the biggest thing that we encounter in the field with these
larger companies is just a lack of understanding of the modern
middleware and integration and lack of understanding of what the
business needs. Does it really need real-time integration?
Our customers definitely have a good understanding of what the
business wants and what their customers want, but usually the
evaluator, decision-maker, or architect doesn’t have a strong background
in data integration.
It's really a people issue. It's an educational issue of helping them
understand that this isn't as hard as they think it is. Let's scope it
down. Let's understand what the business really needs. Usually, that
becomes something a lot more realistic, pragmatic, and easier to do than
they originally anticipated going into the project.
In the last
5 to 10 years, we've seen data integration get much easier to do, and a
lot of people just don’t understand that yet. That’s the lack of
understanding and lack of education around data integration and how to
exploit this big-data
proliferation that’s happening. A lot of users don't quite understand
how to do that, and that’s the biggest challenge. It’s the people side
of it. That’s the biggest challenge for us.
Gardner: Rick Percuoco at Trillium, tell us what you are seeing when it comes
to the impetus for doing data integration. Perhaps in the past, folks
saw this as too daunting and complex or involved skill sets that they
didn't have. But it seems now that we have a rationale for wanting to
have a much better handle on as much data as possible. What's driving
the need for this?
Percuoco: Certain companies, by their nature, deal with volume data. Telecom providers and credit card companies are being forced into building these large data repositories, because their current business needs support that anyway. So they’re really at the forefront of most of these trends.
What we have are large data-migration projects. There are disparate
sources within the companies, siloed bits of information that they want
to put into one big-data repository.
Mostly, it's used from an analytics or BI standpoint, because now you have the capability of using big-data SQL
engines to link and join across disparate sources. You can ask
questions and get information, mines of information, that you never could get before.
The aspect of extract, transform, load (ETL) will definitely be affected by the large data volumes, as you can't move the data like you used to in the past. Also, governance
is becoming a stronger force within companies, because as you load
many sources of data into one repository, it’s easier to have some kind
of governance capabilities around that.
Trillium Software has always been a data-quality company. We have a fairly
mature and diverse platform for the data that you push through. For analytics, for risk and compliance, or for anything where you need to use your data to calculate risk ratios or build the models by which you run your business, the quality of your data is very, very important.
If you’re using that data that comes in from multiple channels to make decisions in your business, then obviously data quality, and making that data as accurate as it can be by matching it against structured sources, makes a huge difference in whether you'll be making the right decisions or not.
With the advent of big data and the volume of more and varied unstructured data,
the problem of data quality is on steroids now. You have a quality
issue with your data. If anybody who works in any company is really
honest with themselves and with the company, they see that the integrity
of the data is a huge issue.
As the sources of data become more varied and they come from unstructured data sources like social media,
the quality of the data is even more at risk and in question. There
needs to be some kind of platform that can filter out the chatter in
social media and the things that aren't important from a business perspective.
Gardner: Betsy Bilhorn, tell us about Scribe Software and how it fits with what Trillium and Hitachi Solutions are doing to help data management.
Bilhorn: We look at ourselves as the proverbial PVC pipe, so to speak, to bring
data around to various applications and the business processes and
analytics. Where folks like Hitachi leverage our platform is in being
able to make that process as easy and as painless as possible.
We want people to get value out of their data, increase the pace of their
business, and increase the value that they’re getting out of their
business. That shouldn’t be a multi-year project. It shouldn’t be
something that you’re tearing your hair out over and running screaming
off a bridge.
As easy as possible
The goal here at Scribe is to make that data integration happen and to get that data where it needs to go, to the right person, at the right time, as easily and simply as possible for companies like Hitachi and their customers.
And with Trillium, one of the great things about that partnership is
obviously that there is the problem of garbage in/garbage out. Trillium
provides that platform by which not only can you get your data where
you need it to go, but you can also have it clean and you can have it deduped.
You can have a better quality of data as it's moving around in your
business. When you look at those three aspects together, that’s where
Scribe sits in the middle.
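Garbage in/garbage out and deduping can be made concrete with a small sketch: collapse records that share a normalized matching key, keeping the most complete copy. The crude normalization rule here is a simplistic stand-in for what a real data-quality platform does far more rigorously.

```python
import re

def normalize_key(name, email):
    """Crude matching key: lowercase, strip punctuation and whitespace."""
    return (re.sub(r"[^a-z0-9]", "", name.lower()), email.strip().lower())

def dedupe(records):
    """Keep one record per key, preferring the one with the most filled fields."""
    best = {}
    for rec in records:
        key = normalize_key(rec["name"], rec["email"])
        filled = sum(1 for v in rec.values() if v)
        if key not in best or filled > best[key][0]:
            best[key] = (filled, rec)
    return [rec for _, rec in best.values()]

rows = [
    {"name": "J. Smith", "email": "JS@x.com", "phone": ""},
    {"name": "j smith",  "email": "js@x.com", "phone": "555-0100"},
    {"name": "A. Jones", "email": "aj@x.com", "phone": ""},
]
clean = dedupe(rows)
```

The two "J. Smith" variants collapse to the single record that carries a phone number, so cleaner data flows onward through the pipe.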
Gardner: Let's talk about some examples of how organizations are using these
approaches, tools, methods, and technologies to improve their business
and their data value. I know that you can’t always name these
organizations, but let's hear a few examples of either named or
non-named organizations that are doing this well, doing this correctly,
and what it gets for them.
Petrucelli: One that pops to mind, because I just was recently dealing with them, is the Oklahoma City Thunder
NBA basketball team. I know that they’re not a humongous enterprise
account, but sometimes it's hard for people to understand what's going
on inside an enterprise account.
Most people follow and are aware
of sports. They have an understanding of buying a ticket, being a
season ticket holder, and what those concepts are. So it's a very relatable example.
The Thunder had a problem where they were
using a ticketing system that would sell the tickets, but they had very
little CRM capabilities. All this ticketing was done at the industry
standard for ticketing and that was great, but there was no way to
track, for example, somebody's preferences. You’d have this record of
Jon Petrucelli who buys season tickets and comes to certain games. But
that’s it; that’s all you’d have.
They couldn’t track who my favorite player was, how many kids I have, if I was married, where I live, what my blog is, what my Facebook
profile is. People are very passionate about their sports team. They
want to really be associated with them, and they want to be connected
with those people. And the sports teams really want to do that, too.
So we had a great project, an award-winning project. It's won a Gartner
award and Microsoft awards. We helped the Oklahoma City Thunder to
leverage this great amount of rich interaction data, this transactional
data, the ticketing data about every seat they sat in and every game they attended. That's a cool record, and it might be one line in the database. Around that
record, we’re now able to wrap all the rich information from the
internet. And that customer, that season ticket holder, wants to share
information, so they can have a much more personalized experience.
Without Scribe and without integration, we couldn’t do that. We could easily
deploy Microsoft CRM and integrate it into the ticketing system, so all
this data was in one spot for the users. It was a real true
win-win-win, because not only did the Oklahoma City Thunder have a much
more productive experience, but their season ticket account managers
could now call on someone and could see their preferences. They could
see everything they needed to track about them and see all of their
ticketing history in one place.
And they could see if they’re
attending, if they are not attending, everything about what's going on
with that very high-value customer. So that’s a win for them. They can
deliver personalized service. On the other end of it, you have the
customer, the season ticket holder and they’re paying a lot of money.
For some of them, it’s a lifelong dream to have these tickets or their
family has passed them down. So this is a strong relationship.
In this day and age, people expect a personalized touch and a
personalized experience, and with integration, we were able to deliver
that. With Scribe integrating the ticketing system and putting it all in Microsoft CRM, the data is real-time, accessible, and insightful.
It’s not just data anymore. It's real time
insights coming out of the system. They could deliver a much better
user experience or customer experience, and they have been benchmarked
against the best customer organizations in the world. The Oklahoma City
Thunder are now rated as the top professional sports fan experience.
Of all professional sports, they have the top fan experience -- and
it's directly relatable to the CRM platform and the data being driven
into it through integration.
Percuoco: I’ve seen a couple of pretty interesting use cases. One of them is with one of our technical partnerships. They have a data platform where they use a behavioral account-churn model. It's very interesting
in that they take multiple feeds of different data, like social media
data, call-center data, data that was entered into a blog from a
website. As Jon said, they create a one-customer view of all of those
disparate sources of data, including social media, and then they map behavioral churn models for different vertical industries.
In other words, before someone churns their account or gets rid of their account
within a particular industry -- like insurance, for example -- what
steps do they go through before they churn their account? Do they send
an e-mail to someone? Do they call the call center? Do they send social
media messages? Then, through statistical analysis, they build these
behavioral churn models.
They put data through these models of
transactional data, and when certain accounts or transactional data
fall out at certain parts, they match that against the strategic client
list and then decide what to do at the different phases of the account lifecycle.
I've heard of companies, large companies, saving as
much as $100 million in account churn by basically understanding what
the clients are doing through these behavioral churn models.
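The behavioral churn model described above can be sketched as a logistic score over counts of pre-churn events. The event types, weights, bias, and threshold below are invented for illustration; a real model would fit them through the kind of statistical analysis Percuoco mentions.

```python
import math

# Hand-picked illustrative weights per pre-churn signal; a real model would
# learn these from historical accounts via statistical analysis.
WEIGHTS = {"support_calls": 0.8, "complaint_emails": 1.1, "negative_social_posts": 0.9}
BIAS = -3.0  # baseline: most accounts are not about to churn

def churn_probability(event_counts):
    """Logistic score: weighted sum of behavioral events -> probability."""
    z = BIAS + sum(WEIGHTS.get(evt, 0.0) * n for evt, n in event_counts.items())
    return 1.0 / (1.0 + math.exp(-z))

def at_risk(accounts, threshold=0.5):
    """Flag accounts whose churn probability crosses the threshold."""
    return [acct for acct, events in accounts.items()
            if churn_probability(events) >= threshold]

accounts = {
    "acct-001": {"support_calls": 1, "complaint_emails": 0},
    "acct-002": {"support_calls": 3, "complaint_emails": 2, "negative_social_posts": 1},
}
risky = at_risk(accounts)
```

Accounts that fall out of the model with a high score would then be matched against the strategic client list to decide the next retention step.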
The other most prevalent use case that I've seen with our clients is sentiment analysis. Most people are looking at social media data, seeing what
people are saying about them on social media channels, and then using
all different creative techniques to try and match those social media
personas to client lists within the company to see who is saying what about them.
Sentiment analysis is probably the biggest use case
that I've seen, but the account churn with the behavioral models was
very, very interesting, and the platform was very complex. On top, it had a predictive analytics engine with about 80 different models and graphs, and it also had some data visualization tools. It was very, very easy to create charts and graphs, and it was actually pretty impressive.
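Sentiment analysis at its simplest is a lexicon lookup over message text, which real platforms refine considerably with richer models. A toy sketch, with an invented two-set lexicon:

```python
# Tiny illustrative sentiment lexicon; real platforms use far richer models.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "angry"}

def sentiment_score(text):
    """Return (positive hits - negative hits) over the message's words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def classify(text):
    """Label a message positive, negative, or neutral by its score."""
    score = sentiment_score(text)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = [
    "I love this service, excellent support",
    "terrible experience, I hate the new billing",
]
labels = [classify(p) for p in posts]
```

Matching each scored persona back to a client list is what turns raw social chatter into the who-is-saying-what view described above.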
Gardner: Betsy, do you have any examples that also illustrate what we're talking about when it comes to innovation and value around data gathering, analytics, and business innovation?
Bilhorn: I’m going to do a little bit of a twist here on that problem. We have a recent customer, one of the top LED lighting franchisors in the United States, and they had a bit of a different problem. They have about 150 franchises out there and they are all disconnected.
In the central office, I can't see what my individual franchises are
doing and I can't do any kind of forecasting or business reporting to be
able to look at the health of all my franchises all over the country.
That was the problem.
The second problem was that they had decided to standardize on the NetSuite platform, and they wanted all of their franchises to use it.
Obviously, for the individual franchise owner, NetSuite was a little too
heavy for them and they said overwhelmingly they wanted to have QuickBooks.
The customer came to us and said, "We have a problem here. We can't find
anybody to integrate QuickBooks to our central CRM system and we can't
report. We’re just completely flying blind here. What can you do for us?"
Via integration, we were able to satisfy that customer
requirement. Their franchises can use QuickBooks, which was easy for
them, and then through all of that synchronized information back from
all of these franchises into central CRM, they were able to do all
kinds of analytics and reporting and dashboarding on the health of the franchises.
The other side benefit, which also makes them
very competitive, is that they’re able to add franchises very, very
quickly. They can have their entire IT systems up and running in 30
minutes and it's all integrated. So the franchisee is ready to go. They
have everything there. They can use a system that’s easy for them to
use and this company is able to have them up and running and getting their data immediately.
Consistency and quality
That’s a little bit different. This big data is not social data, but it’s a problem that a lot of businesses face. How do I even get these systems
connected so I can even run my business? This rapid repeatable model for
this particular business is pretty new. In the past, we’ve seen a lot
of people try to wire things up with custom code, or everything is ad hoc. They’re able to stand up full IT systems in 30 minutes, every single time, over and over again, with a high level of consistency and quality.
Gardner: Well, we have to
begin to wrap it up, but I wanted to take a gauge of where we are on
this. It seems to me that we’re just scratching the surface. It’s the
opening innings, if you will.
Will we start getting these data
visualizations down to mobile devices, or have people inputting more
information about themselves, their devices, or the internet of things?
Let's start with you, Jon. Where are we on the trajectory of where this can go?
Petrucelli: We’re working on some projects right now with geolocation,
geofencing, and geosensing. When a user on a mobile device comes within range of a certain store, and they have downloaded the app on their smartphone and opted in, it will serve them up special offers to try to pull them into the store, the same way in which, if you’re walking by a store, somebody might say, "Hey, Jon." They know who I am and know my preferences, and when I come within range, the system knows my location.
It could be somebody who has an affinity card with a certain retailer, or it could be a sports fan in the venue: the organization knows who is in the venue, it knows what their preferences are, and it puts exactly the right offer in front of the right person, at the right time, in the right context, and with the right personalization.
We see some
organizations moving to that level of integration. With all of the
available technology, with the electronic wallets, now with Google Glass,
and with smart watches, there is a lot of space to go. I don’t know if
it's really relevant to this, but there is a lot of space now.
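The come-within-range trigger behind these geolocation offers can be sketched as a haversine distance check against a store's coordinates. The coordinates and offer radius below are invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(user_lat, user_lon, store_lat, store_lon, radius_km=0.2):
    """True if the user is within the store's offer radius."""
    return haversine_km(user_lat, user_lon, store_lat, store_lon) <= radius_km

# Hypothetical store location and two users: one a short walk away, one across town.
store = (30.2672, -97.7431)
nearby_user = (30.2680, -97.7435)
far_user = (30.4000, -97.7431)
```

When `in_geofence` fires for an opted-in app user, the system would look up their preferences and serve the matching offer.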
We’re more on the business-app side of it, and I don’t see that going away.
Integration is really the key to driving high levels of adoption, which drives high levels of productivity, which can drive top-line gains and ultimately a better ROI for the company. That’s how we really look at it. It’s very, very important to be able to deliver that information, at least in a dashboard format or a summary format, on all the mobile devices.
Gardner: What is Scribe Software's vision, and what are the next big challenges that you will be taking your technology to?
Bilhorn: Ideally, what I would like to see, and what I’m hoping for, is that
with mobile and consumerization of IT you’re beginning to see that
business apps act more like consumer apps, having more standard APIs and
forcing better plug and play. This would be great for business. What
we’re trying to do, in absence of that, is create that plug-and-play
environment to, as Jon said, make it so easy a child can do it.
The vision for the future is really flattening that out, but also being able to provide a seamless integration experience between these systems, where at some point you wouldn’t even have to buy middleware as an individual business or a consumer.
The cloud vendors and
legacy vendors could embed integration and then be able to have really a
plug and play so that the individual user could be doing integration
on their own. That’s where we would really like to get to. That’s the
vision and where the platform is going for Scribe.
Posted By Dana L Gardner,
Thursday, December 05, 2013
| Comments (0)
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
The next edition of the HP Discover Podcast Series details how German telco EWE TEL has solved performance complexity across an extended enterprise billing process by using service virtualization.
In doing so, EWE has significantly improved application performance and quality for its end users, while also gaining predictive insights into the behavior of composite application services. The use case will be featured next week at the HP Discover conference in Barcelona.
To learn more about how EWE is leveraging service virtualization technologies and techniques for composite applications, we recently sat down with Bernd Schindelasch, Leader for Quality Management and Testing at EWE TEL based in Oldenburg, Germany. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: Tell us about EWE TEL, what it does, and what you do there.
Schindelasch: EWE is a telecommunications
company. We operate the network for EWE and we provide a large range
of telecommunications services. So we invest a lot of money into
infrastructure and we supply the region with high-speed Internet access.
EWE TEL was founded in 1996, is a fully owned subsidiary of EWE, and
has about 1,400 employees.
Gardner: Your software and IT systems are obviously so important. This is how you interact with your end users, so these applications must be kept running smoothly.
Schindelasch: Indeed. Our IT systems are very important for us to fulfill our
customers’ needs. We have about 40 applications, which are involved in
the process of a customer, starting from customer self-service
application, to the activation component, and the billing system. It’s a
quite complex infrastructure and it’s all based on our IT systems.
We have a special situation here. Because the telecommunications business
is very specialized, we need very customized IT solutions. Often, the
effort to customize standard software is so high that we decided to
develop a lot of our applications on our own.
Developed in house
About half of our applications are developed in house, for example, the
customer self service portal I just mentioned, or our customer care
system or Activation Manager.
We had to find a way to test it. So we created a team to test all those
systems we developed on our own. We recruited personnel from the
operating departments and added IT staff, and we started to certify them
all as testers. We created a whole new team with a common foundation,
and that made it very easy for us to agree on roles, tasks, processes,
and so on, concerning our tests.
Gardner: Tell me about the problem that led you to discover service virtualization as a solution.
Schindelasch: When we created this new team, we faced the problem of testing the
systems end to end. When you have 40 applications and have to test an
end-to-end process over all of those applications, all the contributing
applications have to be available and have to have a certain level of
quality to be useful.
What we encountered was that the
order interface of another service provider was often unavailable and
responses from that system were faulty. So we hadn’t been able to test
our processes end to end.
We once tried to do a load test and, because of the bottleneck at that other interface, it failed and we weren't able to test our own systems. That's the reason we needed a solution to bypass this problem with the other interface. That was the initial initiative that we had.
Gardner: Why weren’t traditional testing or scripting technologies able to help you in this regard?
Schindelasch: We tried it. We developed diverse simulations based on traditional
mockup scripts. These are very useful for developers to do unit
testing, but they weren’t configurable for testers to be used to create
the right situations for positive and negative tests.
And there was a big effort to create these mockups, and sometimes the
effort to create the mockup would have been bigger than the real
development effort. That was the problem we had.
Complex and costly
Gardner: So any simulations you were approaching were going to be very complex and very costly. It didn't really seem to make sense. So what did you do?
Schindelasch: We constantly analyzed the market and searched for products that might be
able to help us with our problem. In 2012, we found such solutions and
finally made a proof of concept (POC) with HP Service Virtualization.
We found that it supported different protocols, all the protocols we
needed, and with a rule set to predict the responses. During the POC we
found that benefits were both for developers and testers. Even our
architects found it to be a good solution. So in the end, we decided to
purchase that software this year.
We implemented service
virtualization in a pilot project and we virtualized even that order
interface we talked about. We had to integrate service virtualization
as a proxy between our customer care system and the order system. The
actual steps you have to take vary with the protocols used, but you have
to put it in between them and let the system work as a proxy. Then, you
have the ability to let it learn.
It sits in the middle, between your systems, and records all messages and their responses. Afterward, you can just replay these message responses, or you can improve the rules manually. For example, you can add data tables, so you can configure the system to work with the actual test data you're using for your test cases, to be able to support positive and negative tests.
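The learn-then-replay flow Schindelasch describes can be sketched in miniature. This is an illustrative Python model, not HP Service Virtualization's actual API: a stub sits in front of a real service, records request/response pairs in "learn" mode, and in "simulate" mode replays them, or manually added data-table rows for negative tests.

```python
class VirtualService:
    """Toy sketch of a service virtualization stub: learn mode forwards
    calls to the real system and records responses; simulate mode answers
    from the recorded rules without touching the real system."""

    def __init__(self, real_service):
        self.real_service = real_service   # callable: request dict -> response dict
        self.rules = {}                    # learned request-key -> response mapping
        self.mode = "learn"

    def _key(self, request):
        # Match on operation and order id; real tools match on configurable fields.
        return (request["operation"], request["order_id"])

    def handle(self, request):
        if self.mode == "learn":
            response = self.real_service(request)        # pass through
            self.rules[self._key(request)] = response    # record for replay
            return response
        # simulate: replay the recorded (or manually edited) response
        return self.rules.get(self._key(request),
                              {"status": "ERROR", "detail": "no rule for request"})

    def add_rule(self, operation, order_id, response):
        """Manually add a row, e.g. from a data table of negative test cases."""
        self.rules[(operation, order_id)] = response


# A stand-in for the flaky order interface of the other service provider.
def real_order_system(req):
    return {"status": "OK", "order_id": req["order_id"]}

vs = VirtualService(real_order_system)
vs.handle({"operation": "create", "order_id": 1})    # learned from the real system
vs.mode = "simulate"                                 # real system no longer needed
vs.add_rule("create", 2, {"status": "REJECTED", "reason": "invalid address"})
```

Once switched to simulate mode, end-to-end tests can run even when the real order interface is down, which is exactly the availability problem described above.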
Gardner: For those folks that aren’t familiar with HP Service Virtualization
for composite applications, how has this developed in terms of its
speed and its cost? What are some of the attributes of it that appeal to you?
Schindelasch: Our main
objective was to find a way to do our end-to-end testing to optimize
it, but we were able to gain more benefits by using service
virtualization. We’ve reduced the effort to create simulations by 80
percent, which is a huge amount, and have been able to virtualize
services that were still under development.
So we have been able
to uncouple the tests of the self service application from a new
technical feasibility check. Therefore, we’ve been able to test earlier
in our processes. That reduced our efforts and cost in development and
testing, and it's the basis for further test automation at low testing cost.
In the end, we've improved quality. It's even better for
our customers, because we’re able to deliver fast and have a better
time to market for new products.
Gardner: What would you like to see next?
Schindelasch: One important thing is that development is shifting to agile
more and more. Therefore, the people using the software have changed.
So we have to have better integration with development tools.
From a virtualization perspective, there will be new protocols, more
complex rules to address every situation you can think of without
complicated scripting or anything like that. I think that’s what’s
coming in the future.
Gardner: Bernd, has the use of HP Service Virtualization allowed you to proceed
toward more agile development and, as well, to start to benefit from DevOps, more tight association and integration between development and deployment and operations?
Schindelasch: We already put it together with our development. I think it's very crucial to cooperate with development and testing, because there wouldn't be a real benefit in virtualizing a service after development has already mocked it up in an old-fashioned way.
We brought them
together. We had the training for a lot of developers. They started to
see the benefits and started to use service virtualization the way the
testers already did.
We’re working together more closely and
earlier in the process. What’s coming in the future is that the
developers will start to use service virtualization for their
continuous integration, because service virtualization has the potential
to change the performance model, so you can let your application
answer slower or faster.
If you put it into fast mode, then you
use it in continuous integration. That’s a really big benefit for the
developers, because their continuous integration will be faster and
therefore they will be able to deploy faster. So for our development,
it’s a real benefit.
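The "performance model" idea, replaying the same virtualized responses slowly for load tests or near-instantly to speed up continuous integration, can be sketched as follows. The class and field names are illustrative assumptions, not the product's interface.

```python
import time

class TimedVirtualService:
    """Hypothetical sketch of a service virtualization performance model:
    the same canned responses are replayed with a configurable delay,
    slow to mimic a loaded backend, near-zero for fast CI runs."""

    def __init__(self, responses, delay_seconds=0.0):
        self.responses = responses            # canned request-key -> response rules
        self.delay_seconds = delay_seconds    # simulated backend latency

    def handle(self, request_key):
        time.sleep(self.delay_seconds)        # apply the performance model
        return self.responses.get(request_key, {"status": "ERROR"})


# Slow profile for load tests, fast profile for continuous integration.
rules = {"check_feasibility": {"status": "OK"}}
slow = TimedVirtualService(rules, delay_seconds=0.5)
fast = TimedVirtualService(rules, delay_seconds=0.0)
```

Switching only the delay, not the rules, is what lets the same virtual service serve both the performance testers and the CI pipeline.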
Gardner: Could you offer some insights to those who are considering the use of service virtualization with composite applications, now that you have been doing it? Are there any lessons learned? Are there any suggestions that you would make for others as they begin to explore new service virtualization projects?
Schindelasch: One thing I've already mentioned is that it's important to work together
with development and testing. To gain maximum benefit from HP Service Virtualization, you have to design your future solutions: Which services do you want to virtualize? Which protocols will you use? Where are the best places to intercept? Do I want to replace real systems or create the whole environment as virtualized? In which way do I want to use the performance model, and so on?
It’s very important to really
understand what your needs are before you start using the tools and
just virtualize everything. It’s easy to virtualize, but there is no
real benefit if you virtualize a lot of things you didn’t really want.
As always, it’s important to think first, design your future solutions,
and then start to do it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
Posted By Dana L Gardner,
Wednesday, December 04, 2013
| Comments (0)
Business trends like bring your own device (BYOD)
are forcing organizations to safely allow access to all kinds of
applications and resources anytime, anywhere, and from any device.
According to research firm MarketsandMarkets, the demand for improved identity and access management (IAM) technology is estimated to grow from more than $5 billion this year to over $10 billion in 2018.
The explosive growth -- a doubling of the market in five years -- will also fuel the move to more pervasive use of identity and access management as a service (IDaaS). The cloud variety of IAM will be driven by the need for pervasive access and management over other cloud, mobile, and BYOD activities, as well as by the consumerization of IT and broader security concerns.
To explore the why and how of IDaaS, BriefingsDirect recently sat down with Paul Trulove, Vice President of Product Marketing at SailPoint Technologies in Austin, Texas, to explore the changing needs for -- and heightened value around -- improved IAM.
We also discover how new IDaaS offerings are helping companies far better protect and secure their information assets. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: SailPoint is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: The word "control" comes up so often when I talk to people about
security and IT management issues, and companies seem to feel that they
are losing control, especially with such trends as BYOD. How do companies regain that control, or do we need to think about this differently?
Trulove: The reality in today's market is that a certain level of control will
always be required. But as we look at the rapid adoption of new
corporate enterprise resources, things like cloud-based applications or
mobile devices where you could access corporate information anywhere in
the world at any time on any device, the reality is that we have to
put a base level of controls in place that allow organizations to
protect the most sensitive assets. But you have to also provide ready
access to the data, so that the organizations can move at the pace of
what the business is demanding today.
Gardner: The expectations of users have changed; they're used to having more of their own freedom. How is that something that we can balance, to allow them to get the best of their opportunity and their productivity benefits, but at the same time allow the enterprise to be as low-risk as possible?
Trulove: That's the area where the organization has to find the right balance for its particular business, one that meets the internal demands, the external regulatory requirements, and the expectations of its customer base. While the productivity aspect can't be ignored, taking a
blind approach to allowing an individual end-user to begin to migrate
structured data out of something like an SAP or other enterprise resource planning (ERP) systems, up to a personal Box.com account is something most organizations are just not going to allow.
The organization has to step back, redefine the different types of policies that they're trying to put in place, and then put the right kind of controls in place to mitigate the risk of inappropriate access to critical enterprise resources and data, while also allowing the end user a little bit more control and a little bit more freedom to do the things that make them most productive.
Uptake in SaaS
Gardner: We've seen a significant uptake in SaaS, certainly at the number of
apps level, communications, and email, but it seems as if some of the
infrastructure services around IAM are lagging. Is there a maturity
issue here, or is it just a natural way that markets evolve? What's the
case in understanding why the applications have gone fast, but we're
now just embarking on IDaaS?
Trulove: We're seeing a common trend in IT if you look back over time, where a
lot of the front-end business applications were the first to move to a
new paradigm. Things like ERP and service resource management
(SRM)-type applications have all migrated fairly quickly.
Over the last decade, we've really seen a lot of the sales management applications, like Salesforce and NetSuite, come on in full force. Now, there are things like Workday
and even some of the work force management becoming very popular.
However, the infrastructure generally lagged for a variety of reasons.
In the IAM space, this is a critical aspect of enterprise security and
risk management as it relates to guarding the critical assets of the
organization. Security practitioners are going to look at new technology
very thoroughly before they begin to move things like IAM out to a new
delivery paradigm such as SaaS.
The other thing is that
organizations right now are still fundamentally protecting internal
applications. So there's less of a need to move your infrastructure out
into the cloud until you begin to change the overall delivery paradigm
for your internal application.
What we're seeing in the market, and definitely from a customer perspective, is that as customers implement more and more of their software out in the cloud, that's a good time for them to begin to explore IDaaS.
Look at some of the statistics being thrown
around. In some cases, we've seen that 80 percent of new software
purchases are being pushed to a SaaS model. Those kinds of companies
are much more likely to embrace moving infrastructure to support that
large cloud investment with fewer applications to be managed back in
the data center.
Gardner: The notion of mobile-first applications
now has picked up in just the last two or three years. I have to
imagine that's another accelerant to looking at IAM differently when
you get to the devices. How does the mobile side of things impact this?
Trulove: Mobile plays a huge part in organizations looking at IDaaS, and the
reason is that you’re moving the device that's interacting with the
identity management service outside the bounds of the firewall and the
network. So, having a point of presence in the cloud gives you a very
easy way to generate all of the content out to the devices that are
being operated outside of the traditional bounds of the IT organization, which generally extended only to the networked PCs, laptops, etc., that are on the network itself.
Moving to IDaaS
Gardner: I'd like to get into what hurdles organizations need to overcome to
move in to IDaaS, but let's define this a little better for folks that
might not be that familiar with it. How does SailPoint define IDaaS? What are we really talking about?
Trulove: SailPoint looks at IDaaS as a set of capabilities across compliance
and governance, access request and provisioning, password management, single sign-on (SSO),
and Web access management that allow for an organization to do
fundamentally the same types of business processes and activities that
they do with internal IAM systems, but delivered from the cloud.
We also believe that it's critical, when you talk about IDaaS, to not only
talk about the cloud applications that are being managed by that
service, but as importantly, the internal applications behind the
firewall that still have to be part of that IAM program.
Gardner: So, this is not just green field. You have to work with what's already
in place, and it has to work pretty much right the first time.
Trulove: Yes, it does. We really caution organizations against looking at cloud
applications in a siloed manner from all the things that they're
traditionally managing in the data center. Bringing up a secondary IAM
system to only focus on your cloud apps, while leaving everything that
is legacy in place, is a very dangerous situation. You lose visibility,
transparency, and that global perspective that most organizations have
struggled to get with the current IAM approaches across all of those
areas that I talked about.
Gardner: So, we recognize that these large trends are forcing a change: users want their freedom, more mobile devices, and more different services from different places, with security as important as ever, if not more so. What is holding organizations back from moving toward IDaaS, given that it can help accommodate this very complex set of requirements?
Trulove: It can. The
number one area, and it's really made up of several different things,
is the data security, data privacy, and data export concerns.
Obviously, the level at which each of those interplay with one another,
in terms of creating concern within a particular organization, has a
lot to do with where the company is physically located. So, we see a
little bit less of the data export concerns with companies here in the
US, but it's a much bigger concern for companies in Europe and Asia in particular.
Data security and privacy are the two that are very
common and are probably at the top of every IT security professional’s
list of reasons why they're not looking at IDaaS.
Gardner: It would seem that just three or four years ago, when we were talking about the advent of cloud services, quite a few people thought that cloud was less secure. But I've certainly been mindful of increased and improved security as a result of cloud, particularly when the cloud organization is much more comprehensive in how it views security.
They're able to implement patches with regularity. In fact, many of them have just better processes than individual enterprises ever could. So, is that the case here as well? Are we dealing with perceptions? Is there a case to be made for IDaaS being, in fact, a much better solution?
IAM as secure
Trulove: Much like organizations have come to recognize the other categories of
SaaS as being secure, the same thing is happening within the context
of IAM. Even a lot of the cloud storage services, like Box.com, are now
signing up large organizations that have significant data security and
privacy concerns. But, they're able to do that in a way and provide
the service in a way where that assurance is in place that they have
control over the environment.
And so, I think the same thing
will happen with identity, and it's one of the areas where SailPoint is
very focused on delivering capabilities and assurances to the customers
that are looking at IDaaS, so that they feel comfortable putting in those kinds of information and operating the different types of IAM components, and get over that fear of the unknown.
One of the biggest benefits of moving from a traditional IAM approach to
something that is delivered as IDaaS is the rapid time to value. It's
also one of the biggest changes that the organization has to be
prepared to make, much like they would have as they move from a Siebel- to a Salesforce-type model back in the day.
Anything delivered as a service needs to be much more about configuration,
versus that customized solution where you attempt to map the product and
technology directly back to existing business processes.
The benefit that they get out of that is a much lower total cost of
ownership (TCO), especially around the deployment aspects of IDaaS.
One of the biggest changes from a business perspective is that the
business has to be ready to make investments in business process
management, and the changes that go along with that, so that they can
accommodate the reality of something that's being delivered as a
service, versus completely tailoring a solution to every aspect of the business.
Gardner: It's interesting that you mentioned business process and business
process management. It seems to me that by elevating to the cloud for a
number of services and then having the access and management controls
follow that path, you’re able to get a great deal of flexibility and
agility in how you define who it is you're working with, for how long, and so on.
It seems to me that you can use policies and create
rules that can be extended far beyond your organization’s boundaries,
defining workgroups, defining access to assets, creating and spinning
up virtualized companies, and then shutting them down when you need.
So, is there a new level of consideration about a boundaryless
organization here as well?
Trulove: There is. One of the things that is going to be very interesting is the
opportunity to essentially bring up multiple IDaaS environments for
different constituents. As an organization, I may have two or three
fundamentally distinct user bases for my IAM services.
I may have an internal population that is made up of employees, and
contractors that essentially work for the organization that need access
to a certain set of systems. So I may bring up a particular environment
to manage those employees that have specific policies and workflows
and controls. Then, I may bring up a separate system that allows for
business partners or individual customers to have access to very
different environments within the context of either cloud or on-prem IT systems.
The advantage is that I can deploy these services
uniquely across those. I can vary the services that are deployed. Maybe
I provide only SSO and basic provisioning services for my external
user populations. But for those internal employees, I not only do that,
but I add access certifications, and segregation of duties (SOD)
policy management. I need to have much better controls over my
internal accounts, because they really do guard the keys to the kingdom
in terms of data and application access.
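The idea of varying the deployed services per user population can be pictured as a simple configuration lookup. This is a hypothetical sketch; the bundle names and their contents are illustrative assumptions, not SailPoint's actual configuration model.

```python
# Hypothetical per-population IDaaS configuration: each user population
# gets its own bundle of identity services, as described above. Internal
# accounts get the stronger controls because they guard the keys to the kingdom.
SERVICE_BUNDLES = {
    "internal": {"sso", "provisioning", "access_certification", "sod_policy"},
    "external": {"sso", "provisioning"},   # partners/customers get a lighter bundle
}

def is_deployed(population, service):
    """Return True if the given identity service is deployed for that population."""
    return service in SERVICE_BUNDLES.get(population, set())
```

The point of the table is that one IDaaS tenant can answer differently for each constituency without running separate products.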
Gardner: We began this conversation talking about balance. It certainly seems to
me that that level of ability, agility, and defining new types of
business benefits far outweighs some of the issues around risk and
security that organizations are bound to have to solve one way or the
other. So, it strikes me as a very compelling and interesting set of
benefits to pursue.
You've delivered the SailPoint IdentityNow suite.
You have a series of capabilities, and there are more to come. As you
were defining and building out this set of services, what were some of
the major requirements that you had, that you needed to check off
before you brought this to market?
Trulove: The number one capability that we really talk to a lot of customers
about is an integrated set of IAM services that span everything from
that compliance and governance to access request provisioning and
password management all the way to access management and SSO.
One of the things that we found as a critical driver for the success of
these types of initiatives within organizations is that they don't
become siloed, and that as you implement a single service, you get to
take advantage of a lot of the work that you've done as you bring on the
second, third, or fourth services.
The other big thing is that
it needs to be ready immediately. Unlike a traditional IAM solution,
where you might have deployment environments to buy and implement
software to purchase and deploy and configure, customers really expect
IDaaS to be ready for them to start implementing the day that they buy.
It's a quick time-to-value, where the organization deploying it
can start immediately. They can get value out of it, not necessarily
on day one, but within weeks, as opposed to months. Those things were
very critical in deploying the service.
The third thing is that
it is ready for enterprise-level requirements. It needs to meet the use
cases that a large enterprise would have across those different
capabilities but, just as important, that it meets the data security,
privacy, and export concerns that a large enterprise would have
relative to beginning to move infrastructure out to the cloud.
And as a cloud service, it needs a very secure way to get back into the enterprise and still manage the on-prem resources that aren't going away anytime soon. On one hand, we would talk to customers about managing things like Google Apps, Salesforce, and Workday. In the same breath, they also talk about still needing to manage the mainframe and the on-premises enterprise ERP system that they have in place.
So being able to span both of those environments to provide that secure
connectivity from the cloud back into the enterprise apps was really a
key design consideration for us as we brought this product to market.
Gardner: It sounds as if it's a hybrid model from the get-go. We hear about public cloud, private cloud, and then hybrid. It sounds as if hybrid is really a starting point and an end point for you right away.
Trulove: It's hybrid only in that it's designed to manage both cloud and
on-prem applications. The service itself all runs in the cloud. All of
the functionality, the data repositories, all of those things are 100
percent deployed as a service within the cloud. The hybrid nature of it
is more around the application that it's designed to manage.
Gardner: You support a hybrid environment, but I see, given what you've just said, that all the stock-in-trade benefits of an as-a-service offering are there: no hardware or software, going from a CAPEX to an OPEX model, and probably far lower cost over time, all built in.
Trulove: Exactly. The deployment model is very much that classic SaaS, a multitenant application where we basically run a single version of the service across all of the different customers that are utilizing it.
So we've put a lot of time, energy, and focus on data protection, so that
everybody’s data is protected uniquely for their organization. But we
get the benefits of that SaaS deployment model where we can push a
single version of the application out for everybody to use when we add a
new service or we add new capabilities to existing services. We take
care of upgrade processes and really give the customers that are
subscribing to the services the option of when and how they want to turn
new things on.
The IdentityNow suite is made up of multiple individual services that can
be deployed distinctly from one another, but all leverage a common
back-end governance foundation and common data repository.
The first service is SSO, and it very much empowers users to sign on to
cloud, mobile, and web applications from a single application platform.
It provides central visibility for end users into all the different
application environments that they may be interacting with on a daily
basis, both from a launch-pad type of an environment, where I can go to a
single dashboard and sign on to any application that I'm authorized to use.
Or I may be using back-end Integrated Windows Authentication,
where as soon as I sign into my desktop at work in the morning, I'm
automatically signed into all my applications as I use them during the
day, and I don’t have to do anything else.
The second service
is around password management. This is enabling that end-user
self-service capability. When end users need to change their password
or, more commonly, reset them because they’ve forgotten them over a long
weekend, they don’t have to call the help desk.
They can go through a process of authenticating through challenge questions or other mechanisms, and then gain access to reset that password. They can even use strong authentication mechanisms, like one-time password tokens that are issued to allow the user to get in and then change the password to something that they will use on an ongoing basis.
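That challenge-question-plus-one-time-token flow can be sketched as follows. This is a minimal illustrative model with invented names, not SailPoint's API; a real service would also hash passwords, expire tokens, and rate-limit attempts.

```python
import secrets

class PasswordResetService:
    """Toy sketch of self-service password reset: authenticate with a
    challenge question, receive a one-time token, use it once to set
    a new password, no help-desk call required."""

    def __init__(self):
        self.users = {}      # username -> {"answer": ..., "password": ...}
        self.pending = {}    # one-time token -> username

    def register(self, username, answer, password):
        self.users[username] = {"answer": answer, "password": password}

    def request_reset(self, username, answer):
        # Step 1: authenticate via the challenge question.
        if self.users.get(username, {}).get("answer") != answer:
            return None
        token = secrets.token_hex(8)      # one-time password token
        self.pending[token] = username
        return token

    def complete_reset(self, token, new_password):
        # Step 2: the token is consumed on use, so it cannot be replayed.
        username = self.pending.pop(token, None)
        if username is None:
            return False
        self.users[username]["password"] = new_password
        return True


svc = PasswordResetService()
svc.register("user1", answer="blue", password="old-secret")
token = svc.request_reset("user1", "blue")   # forgotten over a long weekend
```

Consuming the token on first use is the key design choice: it gives the one-time guarantee that makes the flow safe to expose as self-service.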
The third service is around access certifications, and
this automates that process of allowing organizations to put in place
controls through which managers or other users within the organization
are reviewing who has access to what on a regular basis. It's a very
business-driven process today, where an application owner or business
manager is going to go in, look at the series of accounts and
entitlements that a user has, and fundamentally make a decision whether
that access is correct at a point in time.
One of the key things
that we're providing as part of the access certification service is
the ability to automatically revoke those application accounts that are
no longer required. So there's a direct tie into the provisioning
capabilities of being able to say, Paul doesn’t need access to this
particular active directory group or this particular capability within
the ERP system. I'm going to revoke it. Then, the system will
automatically connect to that application and terminate that account or
disable that account, so the user no longer has access.
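The certification-with-automatic-revocation loop can be sketched like this. The revoke hook stands in for the provisioning connector that terminates or disables the account; all names are illustrative assumptions, not the product's API.

```python
class AccessCertification:
    """Toy sketch of an access certification review: a manager decides,
    per entitlement, whether access is still correct, and anything
    rejected is revoked through a provisioning hook automatically."""

    def __init__(self, revoke_hook):
        self.revoke_hook = revoke_hook    # callable(user, entitlement)

    def review(self, user, entitlements, decisions):
        """decisions maps entitlement -> True (keep) / False (revoke)."""
        kept = []
        for ent in entitlements:
            if decisions.get(ent, False):
                kept.append(ent)
            else:
                # Direct tie into provisioning: terminate or disable the account.
                self.revoke_hook(user, ent)
        return kept


revoked = []
cert = AccessCertification(lambda user, ent: revoked.append((user, ent)))
kept = cert.review("paul",
                   ["erp_reporting", "ad_group_finance"],
                   {"erp_reporting": True, "ad_group_finance": False})
```

Defaulting an unreviewed entitlement to revoke (via `decisions.get(ent, False)`) mirrors the fail-safe posture the passage describes: access survives only when someone affirms it.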
The final two services are around access request and provisioning and
advanced policy and analytics. On the access request and provisioning
side, this is all about streamlining, how users get access. It can be
the automated birthright provisioning of user accounts based on a new employee or contractor joining the organization, reconciling, when a
user moves to a new role, what they should or should not have, or
terminating access on the back end when a user leaves the organization.
All of those capabilities are provided in an automated provisioning model.
Then we have that self-service access request, where a user can come
in on an ad-hoc basis and say, "I'm starting a new project on Monday
and I need some access to support that. I'm going to go in, search for
that access. I'm going to request it." Then, it can go through a
flexible approval model before it actually gets provisioned out into the environment.
The final service around advanced policy
and analytics is a set of deeper capabilities around identifying where
risks lie within the organization, where people might have
inappropriate access around a segregation of duty violation.
It's putting an extra level of control in place, of a detective nature, in terms of what the actual environment is and which conflicting accounts people already have. More importantly, it's
putting preventive controls in place, so that you can attach that to an
access request or provisioning event and determine whether a policy
violation exists before a provisioning action is actually taken.
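A preventive segregation-of-duties check of the kind described can be reduced to testing whether a requested entitlement would complete a "toxic combination" with access the user already holds. The rule set below is invented:

```python
# Hedged sketch of a preventive segregation-of-duties (SoD) check, run
# before a provisioning action is taken. The rule pairs are invented.

SOD_RULES = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"submit_expense", "approve_expense"}),
}

def violates_sod(existing, requested):
    """True if granting `requested` would complete a toxic combination."""
    return any(requested in rule and (rule - {requested}) <= existing
               for rule in SOD_RULES)

# A user who can already create vendors must not also approve payments.
assert violates_sod({"create_vendor"}, "approve_payment")
assert not violates_sod({"create_vendor"}, "submit_expense")
```

Attaching a check like this to the access-request flow is what makes the control preventive rather than merely detective: the violation is caught before the grant, not discovered in a later audit.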
What are your customers finding now that they are gaining as a result
of moving to IDaaS as well, as the opportunity for specific services
within the suite? What do you get when you do this right?
What most customers see, as they begin to deploy IDaaS is the ability
to get value very quickly. Most of our customers are starting with a
single service and they are using that as a launching pad into a broader
deployment over time.
So you could take SSO as a distinct
project. We have customers that are implementing that SSO capability to
get rapid time to value that is very distinct and very visible to the
business and the end users within their organization.
Once they have that deployed and up and running, they're leveraging that to
go back in and add something like password management or access
certification or any combination thereof.
We’re not stipulating
how a customer starts. We're giving them a lot of flexibility to start
with very small distinct projects, get the system up and running
quickly, show demonstrable value to the business, and then continue to
build out over time both the breadth of capabilities that they are
using but also the depth of functionality within each capability.
Mobility is driving a significant increase in customer interest in
IDaaS. The main reason is that mobile devices operate outside of the
corporate network in most cases. If you're on a smartphone and you are
on a 3G, 4G, LTE
type network, you have to have a very secure way to get back into
those enterprise resources to perform particular operations or access
certain kinds of data.
One of the benefits that an IDaaS
service gives you is a point of presence in the cloud that allows the mobile
devices to have something that is very accessible from wherever they
are. Then, there is a direct and very secure connection back into those
on-prem enterprise resources as well as out to the other cloud
applications that you are managing.
The reality in a lot of cases is that, as organizations add those BYOD
type policies and the number of mobile devices that are trying to
access corporate data increase significantly, providing an IAM
infrastructure that is delivered from the cloud is a very convenient
way to help bring a lot of those mobile devices under control across
your compliance, governance, provisioning, and access request type services.
The other big thing we're seeing in addition to
mobile devices is just the adoption of cloud applications. As
organizations go out and acquire multiple cloud applications, having a
point of presence to manage those in the cloud makes a big difference.
In fact, we've seen several deployment projects of something like Workday
actually gated by needing to put in the identity infrastructure before
the business was going to allow their end users to begin to use that
service. So the combination of both mobile and cloud adoption are
driving a renewed focus on IDaaS.
If you look at the road map that we have for the IdentityNow product,
the first three services are available today, and that’s SSO, password
management, and access certification. Those are the key services that
we're seeing businesses drive into the cloud as early adopters. Behind
that, we'll be deploying the access request and provisioning service
and the advanced policy and analytic services in the first half of
next year. Beyond that, what we're really looking at is continued maturation of the
individual services to address a lot of the emerging requirements that
we're seeing from customers, not only across the cloud and mobile
application environments, but as importantly as they begin to deploy the
cloud services and link back to their on-prem identity and access
management infrastructure, as well as the applications that they are
continuing to run and manage from the data center.
So it's more inclusive, and therefore more powerful in terms of
agility, when you can consider all the different aspects that fall
under the umbrella of IAM.
We're also looking at new and innovative ways to reduce the deployment
timeframes, by building a lot of capabilities that are defined out of
the box. These are things like business processes, where there will be
a catalog of the best practices that we see a majority of customers
implement. That has become a drop-down for an admin to go in and pick,
as they are configuring the application.
We'll be investing very
heavily in areas like that, where we can take the learning as we deploy
and build that back in as a set of best practices as a default to
reduce the time required to set up the application and get it deployed
in a particular environment.
Posted By Dana L Gardner,
Tuesday, December 03, 2013
The relationship between enterprise IT and lines of business leadership
has not always been rosy. Sometimes IT holds the upper hand, and
sometimes the business does an end-run around IT to use new tools or
processes. They might even call it innovation.
Today, with the push toward big data and business intelligence (BI),
a new chasm is growing between enterprise IT groups and business
units. But, in this case, it could be disastrous because IT should be a
big part of the big data execution.
The next BriefingsDirect
discussion therefore examines how an ebb and flow between IT
centralization and decentralization that swings in the direction of
business groups, and even shadow IT, now runs the risk of neglecting essential management, security, and scalability requirements.
Indeed, big data and analytics
should actually force more collaboration and lifecycle-based
relationships among and between business and IT groups. For those
organizations -- where innovation is being divorced from IT discipline
-- we'll explore ways that a comprehensive and virtuous adoption of
rigorous and protected data insights can both make the business stronger
and make IT more valued.
To get to the bottom of why, BriefingsDirect recently sat down with John Whittaker, Senior Director of Marketing for Dell Software's Information Management Solutions Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Dell Software is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
John, we seem to go back and forth between resources in organizations
being tightly controlled and governed by IT, and then resources and
control resting largely with the line of business or
even, as I mentioned, with a shadow IT group of some sort. So over the
past 20 or more years, why has this problem been so difficult to
overcome? Why is it persistent? Why do we keep going back and forth?
That’s an interesting question, and I agree. I've been in IT for
longer than 20 years and certainly in your study of history you can see
that this ebb and flow of centralized management to gain some
constraints or some controls in governance and security has been one of
the primary motivators of IT. It’s one of the big benefits they
provide, but in the backdrop, you have lines of business that want to
innovate and want to go in new directions.
We’re entering one of those times right now with big
data and the advent of analytics, and it’s driving lines of business
to push into these new technologies, and maybe in ways that IT isn’t
ready for just yet.
This, as you mentioned, has been going on for
some time. The last iteration where this occurred was back in the ’90s
when e-commerce and the Web captured the imagination of business. We
saw a lot of similarities to what's occurring today.
It ultimately caused some problems back in the ’90s around e-commerce
and leveraging this great new innovation of the Internet, but doing it
in a way that was more decentralized. It was a little bit more of the
Wild West-based approach and ultimately led to some pretty significant
issues that I think we are going to see out of the big data and
analytics push that’s occurring right now.
I suppose to be fair to each constituency here, it’s the job of IT to
be cautious and to try to dot all the i’s and cross the t’s. There were
a lot of people in 1996-97 who didn’t necessarily think the Internet
was going to be that big of a thing; it seemed to have lots of risk
associated with it. So, I suppose, due diligence needed to be brought to
bear. On the other hand, if the businesses didn’t recognize that this could be a
huge opportunity and we needed to take those risks -- create a
website, and enter into a direct dialogue with customers to a new
channel -- they would have missed a big opportunity. So these are sort
of natural roles, but they can’t be too brittle.
You’re absolutely right. At their core, both groups had, and have,
good motivations. IT lives in a world of constraints, of governance,
security, and of needing to deliver something that’s going to be stable,
that’s going to scale, that’s going to be secure, and that’s not going
to break. Those are laudable goals to have in mind. From the line-of-business
perspective, the business wants to innovate and doesn’t want to be
outmoded by its competitors. They rightfully see that all these great
innovations are coming, and analysts, pundits, and experts are talking
about how this is going to make a huge difference for businesses.
And they inevitably want to embrace those, and you have this cognitive
dissonance occurring between the IT goals around constraints and the
desire to keep things running in a clean and efficient manner. IT is
seeing this new technology and saying, "Hold on. We don’t necessarily
want to jump into this. This is going to break our model.”
IT gets to a point where maybe they suggest we shouldn’t do it or we
should push it off for some time. That’s where the chasm between the two
gets started. From the business perspective, the answer "no” is
unacceptable, if they feel that’s what they need to do to achieve
success in business. They own the profit and loss responsibilities.
That’s where these problems come from.
Nobody in either group is
trying to harm the business or anything close to it. They just have
different motivations and perspectives on how to approach something,
and when one gets wildly far apart from the other, that’s where these
problems tend to occur. Again, when these big innovation cycles happen,
you’re more likely to see a lot of these problems start to occur.
I definitely remember back in 1996-1997. We didn’t call it shadow IT at
the time, but you saw IT-like personnel being hired into functional
business areas to institute these new technologies, and that ultimately
led to a pretty serious hangover at the end of that innovation cycle.
Gardner: What’s the risk of ignoring IT, doing an end-run around them, or downplaying the role? What form does it take?
On their own
Whittaker: Ignoring IT can have some pretty serious consequences. It all starts with the fact that, by and large, businesses can embrace these new technologies without the aid of IT. Cloud-based
implementations have made it possible for lines of business to rapidly
deploy some of these new big data technologies, and you have vendors
in some cases telling them they don’t need IT’s help. So it’s not all
that difficult for lines of business to go out on their own and
implement a big data technology.
But they don’t typically have
the discipline to apply across-the-board governance
to their deployment, and that leads to potential issues with
regulatory requirements. It also leads to security issues, and
ultimately can lead to problems where you have seriously bad data
quality. You have data sunk in silos, and maybe the CEO
wants to know how much business we’re doing with x, y, and z. No one
can deliver that, because we call x, y, and z something in one system,
a different name in another system, and a different name in the third
system. Trying to pull that data together becomes really difficult.
When you have lines of business independently operating disparate
solutions, those core governance issues tend to break down.
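The "we call x, y, and z something different in each system" problem can be illustrated with a tiny roll-up over a curated canonical-name mapping. The systems and figures below are invented:

```python
# Invented illustration of the naming problem: the same customer keyed
# three different ways across silos, reconciled through one master mapping.

crm     = {"Acme Corp": 120_000}
billing = {"ACME Corporation": 45_000}

# A canonical-name map, normally curated under IT data governance.
canonical = {"Acme Corp": "ACME", "ACME Corporation": "ACME", "acme": "ACME"}

def rollup(*systems):
    """Sum values per canonical entity across several source systems."""
    totals = {}
    for system in systems:
        for name, value in system.items():
            key = canonical.get(name, name)   # fall back to the raw name
            totals[key] = totals.get(key, 0) + value
    return totals

revenue = rollup(crm, billing)   # {"ACME": 165000}
```

Without the shared mapping, the two revenue figures never combine, which is exactly the "how much business are we doing with x?" question the CEO can't get answered.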
And although they are great at spotting innovation opportunities, line of
business people are not necessarily in the business of building
scalable, secure, stable environments. That’s not the core of, say,
marketing. They need to understand how the technology can be leveraged,
but maintaining and managing it is not core to their charter. It tends
to be ignored.
John, it strikes me that there are some examples within IT that help
understand this potential problem and even grab some remediation, and
that’s in software development. We’ve seen how groups
working without much coordination or shared process insight
have run aground.
For many years, we saw a very high failure
rate among software development projects, but more recently, we’ve seen
improvements -- agile, scrum,
opening up the process, small iterative steps that then revert back to
an opportunity to take stock and know what everyone is doing, checking
in, checking out with centralization -- but without stifling
innovation. Is there really a lesson here in what’s happened within
software development that could be brought to the whole organization?
Absolutely. In fact, within Dell Software itself we embrace agile and
use scrum internally. There are a lot of lessons that can be learned
from the concept of working closely together, iterating rapidly, and
being open to innovation and the idea that changes occur.
Especially in these major innovation cycles, it’s important to go with the flow
and implement some of these new technologies and new capabilities early,
so you can have that brain trust built internally among the broad
team. You don’t want IT to hold the reins entirely, and at the same
time, you don’t want line of business to do it.
We really need to
break that model, that back and forth, centralization-decentralization
swing that keeps occurring. We need to get to a point where we really
are partnering and have good collaboration, where innovation can be
embraced and adopted, and the business can meet its goals. But it has to
be done in a way that IT can implement sound governance and implement
solutions that can scale, are stable, are reliable, and are going to
lead to long-term success.
What’s different this time, John? Are the stakes higher because we’re
talking about data analysis? That’s basically intelligence about what’s
going on within your markets, your organization, your processes, your
supply chain, your ecosystem, all of which could have a huge bearing.
We have the ability now to tackle massive amounts of data
very rapidly, but if we don’t bring this together holistically, it
seems as if there is a larger risk. I’m thinking about a competitive
risk. Others that do this well could enter your market and really disrupt it.
Whittaker: You’re absolutely
right. There’s great potential benefit that organizations receive or
can get out of leveraging big data and analytics, that of being able to
determine predictively what is going to occur in their business and
what are the most efficient routes to market and what areas of
improvements can occur.
The businesses that leverage this are
going to outmode, outperform, and ultimately win in the markets
currently dominated by organizations who aren’t paying attention and
who aren’t implementing solutions today. They’re getting a little bit
ahead of this cycle so that they are ready and are able to be
successful down the road.
We’re really moving into an era where the context of what’s happening is
critically important. A data-driven management model is going to be
embraced and it’s ultimately going to lead to more successful
organizations. Companies and organizations that embrace this today are
going to be the winners tomorrow.
If you’re ignoring this or
putting this off, you’re really taking a tremendous risk, because this
next iteration of innovation that’s occurring around analytics applies
to large data sources. It’s being able to build the correlations and
determine that this is a more efficient approach, or conversely, that
we have a problem with this outlier that’s going to give us issues down
the road. If you’re not doing that as an organization, you
really are running a pretty tremendous risk that somebody else is going
to walk in and be able to make smarter decisions, faster.
At the same time, your customers are gaining insights into how to
procure better. And so any rewards that might be out there, if
you are in a sales role of any kind, become much more apparent.
Whittaker: That’s definitely true as well. The construct and the conversation have really shifted. With the advent of social media
and the pace at which information is shared and opinions are made,
it’s no longer the company that is the primary voice about its products
and its capabilities or its positions and points of view.
Customers more empowered
The company still needs to have those. It needs to get them out. It needs to push them.
But in this new world we live in, the customers are so much more
empowered than they have ever been before, and it should be a good
thing. For companies that are delivering great products and solving real
problems for their customers, this should be great news.
If you’re not listening to what your customers are saying in social media
and if you’re not paying attention to the ongoing story line and
conversation of your firm in the social sphere, you’re really putting
yourself at risk. You’re missing out on a tremendous opportunity to
engage with your customers in a new, interesting, and very useful way.
That’s a lot of what we built. We have a lot of capabilities here at Dell Software around data management, data integration, and data analysis. On the analysis side, we spend a great deal of time with products like Kitenga and our social networking analytics platforms to do that semantic analysis and look into that form of big data.
But big data is more than just social. It’s also sensor data. The
Internet of Things is another area where businesses should be innovating
and organizations should be pushing to take advantage of it. That’s
where line of business should be saying, "We need to get out into this
area, or if we don’t, we’re going to be outmoded by our competitors.”
And IT should be encouraging it. They should be pushing for more
innovation, bringing new ideas, and being a real partner and
collaborator at the table within the business and organization. That’s
the right way to do this.
IT itself should be applying some of these technologies. In fairness
to line of business, there exists a bit of a crisis of confidence in
IT, and there’s really no better way to push back or fight
against that than to be able to run analytics on the solutions you’re
providing. How well is IT performing? Are you benchmarking against past
performance? How do you benchmark against your industry?
There's another component. Big-data analytics can be utilized by IT not just to
deliver capabilities to the organization or push out and help with
connecting to the customer. IT could use big data analytics to improve
its own environment and to answer this crisis of confidence that
exists. You could turn these tools internally and look at rates
of response as compared to your industry, how your network is
performing, how your database is performing, or how the code you write
is performing. Are your developers efficient in building clean code?
Everyone has been watching the major shift in the healthcare environment in
North America. A big component of that probably should have been more
benchmark analysis, analytics on code quality, and things of that
nature. That’s a great current and topical example of how IT should be
utilizing some of these technologies, not just externally, not just
bringing it to line of business, but within its own environment, to
prove that it’s building systems that are going to be scalable, secure, and reliable.
Gardner: What needs to
take place in order for this higher level of coordination and
collaboration to take place? Are there any key steps that you have in
mind for embarking on this?
Four key areas
I think that there are four key areas that need to occur for this
collaboration to happen. Number one, senior executives need to be
aligned to what the organization is trying to achieve. They need to
articulate a common vision that accounts for the shared interest of both
IT and line of business and make it clear that they expect
collaboration. That should come at the top of the organization.
We need to get out of the smoke-stacked, completely siloed,
organizational approaches and get to something that’s far, far more
collaborative, and that needs to come from the top. The current
approach is not acceptable. These groups need to work together. That’s a
key component. If you don’t have buy-in at the top, it makes it really
hard for this collaboration to occur.
Number two, IT needs to
get its house in order. This means many things, but primarily, it means
overcoming the crisis of confidence line of business has in IT by
coming to the table with an approach that works for line of business,
something that business aligns with such that it feels like it has IT
involvement and that they’re buying into the future that the business
wants to head towards. IT needs to show that they have a plan that does
not compromise the innovations that the business needs.
IT absolutely can no longer just say no. That’s not an acceptable
position. Certainly, if you look back, there were IT organizations that
were saying, "No, we’re not going to connect to the Internet. It’s not
secure. The answer is just going to be no.”
That didn’t work out
for them and it’s not going to work out here either. They should be
embracing this shift. We shouldn’t perpetuate this cycle by driving
more shadow IT and ultimately creating more work for IT down the road as
inevitable problems start to emerge.
Number three, clear the air and put the executive
plan in place. Tensions between IT and line of business have gotten to
the point where they can’t be ignored any more. Put the stakeholders
together in a room, air out the difficulties, and move forward with a
clean slate. This is a tremendous opportunity to build a plan that
meets both parties’ needs and allows them to start executing on
something that’s really going to make a huge impact for the business.
For the fourth point, seek solutions that emphasize collaboration between
IT and the business. Many vendors today are encouraging groups to go
rogue and operate in silos, and that’s causing a lot of the problem. At
Dell, we’re much more about pushing a more collaborative approach. We
think IT is terrific, but business has a point. They need innovation
and they need IT to step up. And the business needs to embrace IT.
Instead of conflicting with each other and doing your own thing, back up your
commitment to collaboration and utilize tools that empower it. That’s
where we’re going to win, and that’s how business is going to succeed in
the future.
This isn’t something that the G20, the Fortune 500, or Fortune 2000 alone can benefit from. This goes way down in the hierarchy, in the stack, certainly down to the small- and medium-sized business (SMB)
level. And maybe even lower. If you’re a data-intensive small
business, you probably need to start implementing and taking a look at
big data and what analytics-based approaches and data-driven
decision-making opportunities exist within your organization, or you will be
outmoded by organizations that do embrace that.
More and more, we’re seeing, particularly in the mid-market, the embrace of a
cloud-based approach. It's important to point out that that approach
is fine and terrific. We love the cloud and we’re big proponents of it,
but using a cloud-based solution doesn’t free line of business from
the need to collaborate with IT. It will not eliminate this problem.
We're seeing terrific IT departments and leadership starting to take a
larger role, starting to ultimately become drivers of innovation.
That’s really what we want to see. All businesses want the same thing.
They want to find sustainable competitive advantages. They want to
control spending. They want to reduce risk to the business.
And the most effective and efficient path to achieving all three is getting
IT and the business aligned and allowing that collaboration to occur.
That’s really at the crux of how businesses are going to gain
competitive advantage out of technology in the future.
Embrace new technology
The big points are: embrace the new technology that’s coming out. The
innovation is going to make your business far more successful, and your
organization will prosper from these new innovations that will occur.
Number two, do it in a manner that is collaborative between IT and line of
business. The CIO, the CMO, the CFO, the CEO, the heads of all of the
functional departments, whether you are in sales, marketing, finance,
operations, wherever you are, should be aligning with their IT
counterparts. It's the combined collaborative approach that’s going to
win the day.
And finally, this should really be driven top-down.
Senior executives, this is an opportunity to get everybody on the same
page to go after and leverage a pretty enormous opportunity before it
becomes a huge problem. Let’s get out there right now. We’re still in
the early days, but that doesn’t mean there’s not a lot to be gained.
And ultimately, in the long-term, we’re going to have more successful
organizations able to achieve even greater output through this
collaboration and the leveraging of big data analytics.
Posted By Dana L Gardner,
Monday, November 18, 2013
Now that server hardware decisions are no-brainers (thanks to virtualization and the ubiquity of multi-core 64-bit x86), deciding on the enterprise-wide purchase of tablet computer types will be the biggest hardware choice many IT leaders will make.
So what guides these tablet decisions?
Do the attributes of the mobile device and platform (the nature of the
thing) count most? Or is it more important that it conforms to the
fast-changing needs of the back-end services and cloud ecosystem? Can the tablet be flexible and adaptive, to act really as many client types in one (the nurture)?
Given how the requirements from enterprise to enterprise vary so much, this
is a hugely complex issue. We've seen a rapidly maturing landscape of
new means to the desired enterprise tablet ends in recent years: mobile device management (MDM), containerization and receiver technology flavors, native apps, web-centric apps, recasting virtual desktop infrastructure (VDI).
It is still quite messy, really, despite the fact that this is a
massive global market, the progeny of the PC market of the past 25
years. Some think that bring your own device (BYOD)
will work using these approaches on the user’s choice of tablet. If
so, IT will be left supporting a dozen or more mobile client device
types and/or versions. You and I know that can’t happen. The list of
supported device types needs to be under six, preferably far less,
whether it’s BYOD or quasi-BYOD.
Ticking time bomb
Enterprises must act. Users are buying and making favorites. Mobility
is an imperative. These tablet hardware decisions must be made.
Think of it. You’re an IT leader at a competitive enterprise and rap, rap, rapping on your Windows to get in ASAP are BYOD, mobile apps dev, Android apps, iOS apps, and hybrid-cloud processes.
You have a lot to get done fast amid complex overlaps and
interdependencies from your choices that could haunt you — or bless you
— for years. And, of course, you have a tight budget as you fight to
keep operating costs in check, even as scale requirements keep
growing.
Somewhere in this 3D speed chess match against the future there are actual hardware RFPs.
You will be buying client hardware for the still large (if not
predominant) portion of the workforce that won’t be candidates for BYOD
alone. And a sizable portion of these workers are going to need an
enterprise tablet, perhaps for the first time. They want you to give it
to them. This analysis vortex is where I decided to break from my primary focus on
enterprise software and data-center infrastructure to consider the
implications of the mobile client hardware. My dearly held bias is that
the back-end strategy and procurement decisions count more than at any
time in the last 12 years.
Better not brick
Because at the end of the network hops, there still needs to be a physical
object, on which the user will get and put in the work that matters
most. This object cannot, under any circumstances, become a weak link in
the hard-won ecosystem of services that support, deliver, and gather
the critical apps and data. This productivity symphony you are now
conducting from amid your legacy, modern data center, and cloud/SaaS services must work on every level — right out to those greasy fingertips on the smart tablet glass.
And the endpoint must be as good as the services that stream behind it, yet
not hugely better, not a holy shiny object that tends to diminish the
rest, not just a pricey status symbol — but a workhorse that can be
nurtured and that can adapt as demanded.
So I recently received and evaluated a Lenovo ThinkPad Tablet 2 running Windows 8 as well as an iPad Air running iOS 7. I wanted to get a sense of what the enterprise decisions will be like as enterprises seek the de facto standard mass-deployed tablet for their post-PC workforce. [Disclosure: Intel
sent me, for free, a Lenovo ThinkPad as a trial, and I bought my own
iPad Air. I do not do any business with Apple, Lenovo, or Intel.]
Let’s be clear, I’m a long-time Apple user by choice, but still run one instance of Windows 7
on a PC just in case there are Windows-only apps or games I need or
want access to. This also keeps up my knowledge on Windows in general.
Good enough is plenty
Here's what I found. I personally love the iPad Air, but the Lenovo ThinkPad
Tablet 2 was surprisingly good, certainly good enough for enterprise
uses. I will quibble with the efficacy of the stylus, note that the Google Chrome browser is better on it than Microsoft IE,
that the downloads for both are a pain, and that battery life is a
weakness on Lenovo — but these are not deal breakers and will almost
certainly get better.
What’s key here is that the apps I wanted were easily accessed. There’s a store for that, regardless of the device. Netflix
just runs. The cloud services and my data/profile/preferences were all
gained quickly and easily. Syncing across devices was up and
running quickly. Never having used Windows 8, although familiar with Windows 7,
was not an issue. I picked it up quickly, very quickly.
So the nature of the device is not the major factor, not a point of lock-in, or even a decision guide.
The long-time Windows user, the predominant enterprise worker, will adapt
to an Intel-powered Lenovo device running Windows quite well. And
enterprise IT departments already know the strengths and weaknesses of
Windows, be it 7 or 8, and they know they will have to pay Microsoft its
use taxes for years to come in any event, given their dependence on
Microsoft apps, servers, services and middleware.
But that same enterprise tablet user will take just as well to an Android device, an iOS device (thanks to market penetration of iPod, iTunes and iPhone),
or perhaps a Kindle Fire. Users will have their personal cloud
affiliations, and the services can be brought to any of these devices and
platforms. It can be both a work and a personal device. Or you could easily carry two, especially if the company pays for one of them. As has been stated better elsewhere, these tablets are pretty much the same.
Because of the single-sign-on APIs
from cloud and social media providers, you can now go from tablet to
tablet, find your cloud of choice — be it Google, Apple, Microsoft,
Facebook, Yahoo, or Amazon. You know how you can just rent bicycles in many cities now, ride them, and drop them off? Same for everyone.
This is the future of tablet devices too. Quite soon, actually. Rent it,
log in, use it, move on.
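The roaming pattern described here, sign in anywhere and get your own profile back, can be sketched in miniature. Everything below (the class, its fields, the token format) is a hypothetical illustration, not any provider's actual single-sign-on API:

```python
# Toy illustration of device-independent sign-on: the user's profile lives
# with a cloud identity provider, not on any one tablet. All names here
# are hypothetical, not a real vendor API.

class CloudIdentityProvider:
    def __init__(self):
        self._users = {}      # username -> (password, profile)
        self._sessions = {}   # token -> username

    def register(self, username, password, profile):
        self._users[username] = (password, profile)

    def sign_in(self, username, password):
        """Return a session token usable from any device."""
        stored_password, _ = self._users[username]
        if stored_password != password:
            raise PermissionError("bad credentials")
        token = f"token-{username}-{len(self._sessions)}"
        self._sessions[token] = username
        return token

    def fetch_profile(self, token):
        """Any device holding the token sees the same profile."""
        username = self._sessions[token]
        return self._users[username][1]

provider = CloudIdentityProvider()
provider.register("dana", "s3cret", {"apps": ["Netflix"], "bookmarks": 42})

# Sign in on one tablet, then present the same token from another:
token = provider.sign_in("dana", "s3cret")
profile_on_ipad = provider.fetch_profile(token)
profile_on_thinkpad = provider.fetch_profile(token)
assert profile_on_ipad == profile_on_thinkpad
```

The point of the sketch is that the hardware drops out of the equation entirely: the rented-bicycle tablet only needs to present credentials, and the cloud side does the rest.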
Perhaps enterprises should just lease these things?
Enterprises must still choose
Which tablets then will connect back best to the enterprise? Will the
business private cloud services be as easily assimilated as the public
cloud ones? What of containerization support, isolation and security
features, and/or apps receiver technology flavors? Apple’s iOS 7 goes a
long way to help enterprises run their own identity and access management (IAM)
and isolate apps and run a virtual private connection. Windows 8 has
done this all along. Google and Amazon are happy to deliver cloud
services just as well. Those are the three or four flavors.
After using the Lenovo ThinkPad Tablet 2 running Windows 8, it astounds me
that Microsoft lost this market and has to claw back from such low
penetration in the mobile market. This should have been theirs by any
reckoning. Years ago.
Now it’s too late for the device and client
platform alone to dictate the market direction. It’s now a function of
how the business cloud services can best co-exist with a personal
device instance. Because this coexistence will be a must-have
capability, it doesn’t really matter what the device is. Any of the top
three or four will do.
The ability of the device to best nurture
the business and the end-users -- both separate while equal in the
same hardware -- that’s the ticket. The rest is standard feature check-offs.
Posted By Dana L Gardner,
Wednesday, November 13, 2013
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
The next edition of the HP Discover
Podcast Series delivers an innovation case study interview that
highlights how data-intensive credit- and debit-card marketing services
provider, Cardlytics, delivers millions of highly tailored marketing offers to banking consumers across the United States.
Cardlytics, in adopting a new analytics platform, gained huge data analysis capacity, vastly reduced query times, and swiftly met customer demands at massive scale.
To learn how, we sat down with Craig Snodgrass,
Senior Vice President for Analytics and Product at Cardlytics Inc.,
based in Atlanta. The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: At some point, you must have had a data infrastructure or legacy
setup that wasn't meeting your requirements. Tell us a little bit
about the journey that you've been on gaining better analytic results
for your business.
Snodgrass: As with any other company, our data was growing and growing and growing.
Also growing at the same time was the number of advertisers that we
were working with. Since our advertisers spanned multiple categories --
they range from automotive, to retail, to restaurants, to quick-serve
-- the types of questions they were asking were different.
So we had this intersection of more data and
different questions happening at a vertical level. Using our existing
platform, we just couldn't answer those questions in a timely manner,
and we couldn't iterate around being able to give our advertisers even
more insights, because it was just taking too long.
We weren’t even able to get answers. Then, when there was the
back-and-forth of wanting to understand more or get more insight, it just ended up taking longer and longer. So at the end of the day, it
came down to multiple and unstructured questions, and we just couldn't
get our old systems to respond fast enough.
Gardner: Who are your customers, and what do you do for them?
Growing the business
Snodgrass: Our customers are essentially anybody who wants to grow their
business. That's probably a common answer, but they are advertisers.
They're folks who are used to traditional media where, when they do a TV or radio ad, they're hitting everybody: people who were going to come to their store anyway and people who probably weren’t going to come to their store.
We're able to target who they want to bring into their store through looking
at both debit-card and credit-card purchase data, all in an anonymized
manner. We’re able to look at past spending behavior, and say, based
on those spending behaviors, that these are the types of customers that
are most likely to come to your store and more importantly, most
likely to be a long-term customer for you.
We can target those,
we can deliver the advertising in the form of a reward, meaning the
customer actually gets something for the advertising experience. We
deliver that through their bank.
The bank is able to do this for
their customers as well. The reward comes from the bank, and the
advertiser gets a new channel to go bring in business. Then, we can
track for them over time what their return on ad-spend is. That’s not
an advantage they’ve had before with the traditional advertising they’ve done.
Gardner: So it sounds like a win, win, win. As a consumer, I'm going to get offers that are something more than a blanket. It's going to be something targeted to me. The bank that’s providing the credit card is going to get loyalty by having a rewards effort that works. Then, of course, those people selling goods and services have a new way of reaching and marketing those goods and services in a way they can measure.
Snodgrass: Yeah, and back
to this idea of the multiple verticals. It works inside of retail, just
as well as restaurants, subscriptions, and the other categories that
are out there as well. So it's not just a one-category type reward.
The customer will know quickly when something is not relevant. If you
bring in a customer for whom it may not be relevant or they weren’t the
right customer, they're not going to return.
Then the advertiser isn't going to get their return on ad-spend. So it's actually in both
our interests to make sure we choose the right customers, because we
want to get that return on ad-spend for the advertisers as well.
Gardner: Craig, what sort of volume of data are we talking about here?
Snodgrass: We're doing roughly 10 terabytes
a year. From a volume standpoint, it's a combination of not just the
number of transactions we're bringing in, but the number of requests,
queries, and answers that we’re having to go against it. That
intersection of growth in volume and growth in questions is happening at
the same time.
For us right now, our data is structured. I know
a lot of companies are working on the unstructured piece. We're in a
world where in the payment systems and banking systems, the data is
relatively structured and that's what we get, which is great. Our
questions are unstructured. They're everywhere from corporate real
estate types of questions, to loyalty, to just random questions that
they've never known before.
One key thing that we can do for
advertisers is, at a minimum, answer two large questions. What is my
market share in an area? Typically, advertisers only know when
customers come into their store with that transaction. They don't know
where that customer goes and, obviously, they don't know when people
don’t come into their store.
We have that full 360-degree view
of what happens at the customer level, so we can answer, for a
geographic area or whatever area that an advertiser wants, what is
their market share and how is their market share trending week-to-week.
The other piece is that when we do targeting, there could be somebody that
visits a location three times over a certain time period. You don't
know if they're somebody who shops the category 30 times or if they
only shop them three times. We can actually answer share-of-wallet for a
customer, and you can use that in targeting, designing your campaigns,
and more importantly, in analysis. What's going on with these customers?
Gardner: So the better job you do, the more queries will be generated.
Snodgrass: It's a self-fulfilling prophecy. For us, with Vertica,
one of the key components isn't just the speed, but how quick we can
scale if the number of queries goes up. It's relatively easy to predict
what our growth and data volume is going to be. It is not easy for me
to predict what the growth in queries is going to be. Again, as
advertisers understand what types of questions we can answer, it's
unfortunately a ratio of 10 to 1. Once they understand something, there
are 10 other questions that come out of it.
We can quickly add
nodes and scalability to manage the increase in volumes of queries, and
it's cheap. This is not expensive hardware that you have to put in.
That is one of the main decision points we had. Most people understand
HP Vertica on the speed piece, but that and the quick scalability of
the infrastructure were critical for us.
Gardner: Just as your marketing customers want to be able to predict their spend and the return on investment (ROI)
from it, do you sense that you can predict and appreciate, when you
scale with HP Vertica what your costs will be? Is there a big question
mark or do you have a sense of, I do this and I have to pay that?
Snodgrass: It is the "I do this and I'll have to pay that," the linearness. For those who understand Vertica,
that’s a bit of a pun, but the linear relationship is that if we need
to scale, all we need to do is this. It's very easy to forecast. I may
not know the date for when I need to add something, but I definitely
know what the cost will be when we need to add it.
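That linear relationship lends itself to back-of-the-envelope sizing: if capacity and cost both grow in proportion to node count, the forecast is simple arithmetic. The per-node figures below are invented for illustration; only the linear model itself comes from the discussion:

```python
import math

def nodes_needed(queries_per_day, queries_per_node_per_day):
    """Linear scale-out: capacity grows in proportion to node count."""
    return math.ceil(queries_per_day / queries_per_node_per_day)

def forecast_cost(queries_per_day, queries_per_node_per_day, cost_per_node):
    """'I do this and I'll have to pay that': cost is linear in nodes."""
    return nodes_needed(queries_per_day, queries_per_node_per_day) * cost_per_node

# Hypothetical figures: each node handles 5,000 queries/day at $10k per node.
assert nodes_needed(12_000, 5_000) == 3
assert forecast_cost(12_000, 5_000, 10_000) == 30_000

# The 10-to-1 effect: once advertisers see one answer, ten more questions
# follow -- query growth, not data growth, drives the sizing.
assert nodes_needed(120_000, 5_000) == 24
```

The date at which the next node is needed stays unpredictable, as Snodgrass says, but the cost of that node once it is needed does not.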
Compare and contrast
Gardner: How do you measure, in addition to that predictability of cost, your
benefits? Are there any speeds and feeds that you can share that compare
and contrast and might help us better understand how well this works?
Snodgrass: There are two numbers. During the POC
phase, we had a set of 10 to 15 different queries that we used as a
baseline. We saw anywhere from 500x to 1,000x or 1,500x speedup in getting that data back. So that’s the first bullet point.
The second is that there were queries that we just couldn't get to finish.
At some point, when you let it go long enough, you just don't know if it
is going to converge. With Vertica, we haven't hit that limit yet.
Vertica has also allowed us to have varying degrees of analysts’ capabilities when it comes to SQL
writing. Some are elegant and they write fantastic, very efficient
queries. Others are still learning the best way to go put the queries
together. They will still always return with Vertica. In the legacy
world prior to Vertica, those are the ones that just wouldn't return.
I don’t know the exact number for how much more productive they are, but
the fact that their queries are always returning, and returning in a
timely manner, obviously has dramatically increased their productivity.
So it's a hard one to measure but, forget how fast the queries have returned, the productivity of our analysts has gone up dramatically.
Gardner: What could an analytics platform do better for you? What would you like to see coming down the pipeline in terms of features and functions?
Snodgrass: If you could do something in SQL, Vertica is fantastic. We'd like more integration with R, more integration with SAS, more integration with these sophisticated tools. If you get all the data into their systems, maybe they can manipulate it in a certain way, but then, you are managing two systems.
Vertica is working on a little bit better integration with R through distributed R, but there's also SAS as well. In a SAS shop, there are a lot of things that you're going to do in SAS that you are not going to do in SQL. That next level of analytics integration is where we would love to see the product go.
Gardner: Do you
expect that there will be different types of data and information that
you could bring to bear on this? Perhaps some sort of camera or sensor, point-of-sale
information, or mobile and geospatial information that could be
brought to bear? How important is it for you to have a platform that
can accommodate seemingly almost any number of different information
types and formats?
Snodgrass: The best way to answer that one is that we don't ever want to tell business
development that the reason they can't pursue a path is because we
don't have a platform that can support that.
I don't know what the future holds along these different paths, but
there are so many different paths we can go down. It's not just the
Vertica component, but the HP HAVEn
components and the fact that they can integrate with a lot of the
unstructured, I think they call it "the human data versus the machine data." It's having the human data pathway open to us. We don't
want to be the limiting factor for why somebody would want to do
something. That's another bullet point for HP Vertica in our camp. If a
business model comes out, we can support it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
Posted By Dana L Gardner,
Wednesday, November 06, 2013
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
The next edition of the HP Discover Podcast Series delves into how a healthcare solutions provider leverages big-data capabilities. We’ll see how Cerner has deployed the HP Vertica Analytics platform to help their customers better understand healthcare trends, as well as to help them better run their own systems.
To learn more about how high-performing and cost-effective big data processing forms a foundational element to improving healthcare quality and efficiency, join Dan Woicke, Director of Enterprise Systems Management at Cerner Corp., based in Kansas City, Missouri.
The discussion, which took place at the recent HP Vertica Big Data Conference in Boston, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: We're going through some major transitions in how healthcare payments are going to be made -- and how good care is defined. We're moving from pay for procedures to more pay for outcomes. So tell me about Cerner, and why big data is such a big deal.
Woicke: The key element here is that the payment structure is changing to more of an outcome model.
In order for that to happen, we need to get all the sources of data
from many, many disparate systems, bring them in, and let our analysts
work on what the right trends are and predict quality outcomes, so that
you can repeat those and stay profitable in the new system.
My direct responsibility is to bring in massive amounts of performance data. This is how our Cerner Millennium systems are running.
We have hundreds of clients, both in the data center and those that manage their own systems with their own database administrators (DBAs). The challenge is just to have a huge system like that running with tens of thousands of clinicians on the system.
We need to make sure that we have the right data in place in order to
measure how systems are running and then be able to predict how those
systems will run in the future. If things are happening that might be
going negative, how can we take the massive amounts of data that are
coming into our new analytical platform, correlate those parameters,
predict what’s going to happen, and then take action before there is a problem.
We want to be able to predict what’s happening, so that we can effect change before there is a negative impact on the system.
Gardner: How do big data and the ability to manage big data get you closer to the real-time and then, ultimately, proactive results your clients need?
Woicke: Since January, we've
begun to bring in what we call Response Time Measurement System (RTMS)
records. For example, when a doctor or a nurse in our electronic medical record (EMR) system is signing an order, I can tell you how long it took to log into the system. I can tell you how long you were in the charting solution.
All those transactions produce 10 billion timers, per month, across all of our clients. We bring those all into our HP Vertica Data Warehouse. Right now, it’s about a two-hour response time, but my goal, within the next 12 months, is to get it down to 10 minutes.
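A timer pipeline like this one reduces billions of raw per-transaction measurements to per-transaction-type summaries that can be compared window over window. Here is a minimal, hypothetical sketch of that rollup step; the record shape, transaction names, and numbers are all invented, not Cerner's actual RTMS schema:

```python
from statistics import median

# Hypothetical RTMS-style timer records: (transaction_type, seconds).
timers = [
    ("login", 1.2), ("login", 1.4), ("login", 9.8),   # one slow outlier
    ("sign_order", 0.6), ("sign_order", 0.7), ("sign_order", 0.8),
]

def summarize(records):
    """Roll raw timers up into per-transaction medians and worst cases."""
    by_type = {}
    for name, seconds in records:
        by_type.setdefault(name, []).append(seconds)
    return {name: {"median": median(vals), "max": max(vals), "count": len(vals)}
            for name, vals in by_type.items()}

summary = summarize(timers)
# Medians resist single outliers, so a genuine slowdown shows up as a
# shifted median across windows rather than one bad timer.
assert summary["login"]["median"] == 1.4
assert summary["sign_order"]["count"] == 3
```

Comparing the current window's summary against the previous one is what turns 10 billion timers a month into a trend signal small enough to act on.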
We can see in real time when trends are happening, either positive or
negative, and be able to take action before there is an issue.
Gardner: Tell us more about Cerner -- what you do in IT.
Woicke: We run the largest EMR in the world. We have well over 400 domains to
manage -- we call them domains -- which allows us to hook up multiple
facilities to those domains. Once we have multiple facilities
connecting into those domains, at any given time, there are tens of
thousands of clinicians on the system at one time.
We have two data centers in Kansas City, Missouri, and we host more than half of our clients in those data centers. The trend is moving toward being remote-hosted and managed like that. We still have a couple of hundred
clients that are managing their own Millennium domains. As I said before, we need to make sure that we provide the same quality of service to both those sets of clients.
Millennium is a suite of products, or solutions. It's a platform where the EMR is placed into a single database. Then, we have
about 55 different solutions that go on top of that platform, starting
with ambulatory solutions. This year was really neat. We were able to
launch our first ambulatory iPad application.
There are about 55 different solutions, and it's growing all the time with
surgery and lab that fit into the Cerner Millennium system. So we do
have a cohesive set of data all within one database, which makes us unique.
Gardner: Where does the data come from primarily, and how much data are we talking about?
Woicke: We're talking about quite a bit of data, and that’s why we had to move away from a traditional OLTP database to an MPP-type database, given all the systems that are now sending data to Cerner.
We have claims data, and HL7
messages. We're going to get all our continuous care records from
Millennium. We have other EMRs. So that’s pretty much the first time that
we're bringing in other EMR records.
You’ll have that claim data
that comes in from multiple sources, multiple EMRs, but the whole goal
of population health is to get a population to manage their own
health. That means that we need to give them the tools in their hands.
And they need to be accurate, so that they can make the right decisions
in the future. What that's going to do is bring the total cost of your
healthcare down, which is really the goal.
We have health-plan enrollments, and then of course, within Millennium,
we're going to drill down into outcomes, re-admissions, diagnosis, and
allergies. That’s the data that we need to be able to predict what kind
of care we are going to have in the future.
Gardner: So it seems to me that we talk about "Internet of things."
We're also going to the "Internet of people." More information from
them about their health comes back and benefits you and benefits the
healthcare providers. But ultimately, they can also provide great
insights to the patients themselves.
Do you see, in the not too
distant future, applications where certain data -- well-protected and
governed of course -- is made into services and insights that allow for
a better proactive approach to health?
Woicke: Without a doubt. We're actually endorsing this internally within the
company by launching our own weight-loss challenges, where we're taking
our medical records and putting them on the web, so that we have access
to them from home.
I can go on the site right now and manage
my own health. I can track the number of steps I'm doing. Those are the
types of tools that we need to launch to the population, so that they
endorse that good behavior, which will ultimately change their quality of life.
Right now, we're in production with the operation side that we talked about a little bit earlier. Then, we are in production with what we call Health Facts, a huge set of blinded data.
We hire a team of analysts and scientists to go through this data and
look for trends.
It's something we haven’t been able to do until recently, until we got HP Vertica. I'm going to give you a good example. We had analysts run a SQL query to do an exploratory type of analysis on the data. They would submit it at 5 p.m. and hopefully, by the time they came back at 8 a.m. the next day, that query would be done.
With Vertica, we've timed those queries at between two and five seconds. So
you can see what that’s going to do for the speed of the amount of
analysis we could do on the same amount of data. It’s game changing.
There were a lot of competitors that would have worked out, but we had a set
of criteria that we drilled down on. We were trying to make it as
scientific as possible and very, very thorough. So we built a score
sheet, and each of us from the operation side and Health Facts side
graded and weighted each of those categories that we were going to judge
during the proof of concept (POC). We ended up doing six POCs.
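A weighted score sheet of this kind is easy to formalize: each judged category carries a weight, each vendor gets a grade per category, and the winner is the highest weighted sum. The categories, weights, and grades below are invented for illustration and do not reflect Cerner's actual criteria:

```python
# Hypothetical POC score sheet: weights sum to 1.0, grades run 0-10.
weights = {"throughput": 0.35, "concurrency": 0.25, "cost": 0.25, "ease_of_use": 0.15}

grades = {
    "vendor_a": {"throughput": 9, "concurrency": 8, "cost": 7, "ease_of_use": 6},
    "vendor_b": {"throughput": 6, "concurrency": 7, "cost": 9, "ease_of_use": 8},
}

def weighted_score(vendor_grades, weights):
    """Sum of grade * weight over every judged category."""
    return sum(vendor_grades[c] * w for c, w in weights.items())

scores = {v: weighted_score(g, weights) for v, g in grades.items()}
winner = max(scores, key=scores.get)
assert winner == "vendor_a"
```

Agreeing on the weights before running the POCs is what keeps the exercise "as scientific as possible": the grades can be argued vendor by vendor, but the relative importance of each category is fixed up front.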
We got down to two, and it was a hard choice. But with the throughput
that we got from Vertica, their performance, and the number of
simultaneous users on the system at a given period of time, it was the
right choice for us.
Gardner: And because we're talking about healthcare, costs are super important. Was there a return on investment (ROI) or cost benefit involved as well?
Woicke: Absolutely. You could imagine that this would be the one or two top
categories weighted on our score sheet, but certainly HP Vertica is
extremely competitive, compared to some of the others that we looked at.
Gardner: Dan, looking to the future, what do you expect your requirements to be, say, two years from now? Is there a trajectory that you need to take as an organization, and how does that compare to where you see Vertica going?
Woicke: With Vertica as a partner, we navigate that together. They invited me here
to Boston to sit on the user board. It was really neat to sit right
there with [HP Vertica General Manager] Colin Mahony
at the same table and be able to say, "This is what we need. These are
our needs coming around the corner," and have him listen and be able
to take action on that. That was pretty impressive.
To your question, though, it’s more and more data. I was describing the
operations side, where we bring in 10 billion RTMS records. There's
going to be another 10 billion records of other types coming in from other sources: CPU, memory, disk I/O. Everything can be measured.
I want to bring it into Vertica, because I'm going to be able to do some
correlation against something we were talking about. If I know that the
RTMS records show a negative performance that's going to happen within
the next 10-15 minutes, I can figure out which one of those operational
parameters is most affecting that outcome of that performance, and
then can send the analyst directly in to mitigate that problem.
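Ranking which operational parameter most tracks a performance signal is, in its simplest form, a correlation ranking. The sketch below uses plain Pearson correlation on made-up series; a production system would use far richer models, and nothing here reflects Cerner's actual implementation:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical series: response time alongside three operational parameters.
response_time = [1.0, 1.1, 1.3, 1.8, 2.4]
params = {
    "cpu_pct":   [40, 45, 55, 75, 95],    # rises with response time
    "disk_io":   [10, 12, 11, 10, 12],    # flat
    "memory_gb": [30, 30, 31, 30, 31],    # flat
}

# Rank parameters by |correlation| with the response-time series: the
# strongest mover is the first place to send an analyst.
ranked = sorted(params, key=lambda p: abs(pearson(params[p], response_time)),
                reverse=True)
assert ranked[0] == "cpu_pct"
```

Correlation only points at a suspect rather than proving a cause, but when the clock is 10 to 15 minutes before a predicted slowdown, a ranked shortlist is exactly what the analyst needs.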
On the EMR side, it’s more data as well.
On the operations side, we're going to apply this to other enterprises
to bring in more data to connect to the experts. So there is always
somebody out there. That’s the expert. What we're going to do is
connect the provider with the payers and the patient to complete that
triangle in population health. That’s where we're going in the next few years.
Gardner: I certainly think
that managing data effectively is a huge component of our healthcare
challenge here in the United States, and of course, you're operating in
about 19 countries. So this is something that will be a benefit to
almost any market where efficiency, productivity, and quality of care come into play.
Woicke: At Cerner Corp.,
we're really big on transparency. We have a system right now called the
Lights On Network, where we are taking these parameters and bringing
them into a website. We show everything to the client, how they're
performing and how the system is doing. By bringing in more and more
data and being able to correlate it, we're going to show all the
clients, as well as the providers, how their system is doing.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
Posted By Dana L Gardner,
Monday, November 04, 2013
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.
The next VMworld innovator panel discussion focuses on how two companies are using aggressive cloud-computing strategies to deliver applications better to their end users.
We'll hear how healthcare patient-experience improvement provider Press Ganey and project and portfolio management provider Planview
are both exploiting cloud efficiencies and agility. Their paths to the
efficiency of cloud have been different, but the outcomes speak
volumes for how cloud transforms businesses.
To understand how, we sat down with Greg Ericson, Senior Vice President and Chief Innovation Officer at Press Ganey Associates in South Bend, Indiana, and Patrick Tickle, Executive Vice President of Products at Planview Inc. in Austin, Texas.
The discussion, which took place at the recent 2013 VMworld Conference in San Francisco, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: We heard a lot about cloud computing at VMworld,
and you're both going at it a little differently. Greg, tell us a bit
about the type of cloud approach you’re taking at Press Ganey.
Ericson: Press Ganey is the leader in patient-experience analytics. We focus on providing deep insight into the patient experience in healthcare settings.
We have more than 10,000 customers within the healthcare environment
that look to us and partner with us around patient-experience
improvement within the healthcare setting.
We started this cloud journey in July of 2012 and we
set out to achieve multiple goals. Number one, we wanted to position
Press Ganey's next-generation software solution products and have a platform that was able to support them.
We went through a journey of consolidating multiple data centers. We consolidated 14 different storage arrays
in our process and, most importantly, we were able to position our
analytic solutions to be able to take on exponentially more data and
provide that to our clients.
Gardner: Patrick, how has cloud helped you at Planview? You were, at one time, fully a non-cloud organization. Tell us about your journey.
Tickle: Planview has been an enterprise software vendor, a classic best-of-breed
focused enterprise software vendor, in this project and portfolio and
resource management space for over 20 years.
We have a big global customer base of on-premise customers built up over the last 23 years. Obviously, in the world of software these days, there's a fairly seismic shift toward software as a service (SaaS), and how you get to the cloud, the business models, and all those kinds of things.
The conventional wisdom for a lot of people was that you can't get there unless you start from scratch. Obviously, because this is the only thing we do, it was pretty imperative that we figure out a way to get there.
Two or three years ago, we started trying to make the transition. There were a lot of things we had to go through, not just from an infrastructure standpoint, but from a business model and delivery standpoint.
The essence was this: We didn’t have time to
rewrite a code base in which we've invested 10-plus years and hundreds
of thousands of hours of customer experience to be a market-leading
product in our space. It could take five years to rewrite it. Compared
to where we were 10 years ago, when you and I first met, there are a
lot more tools in the bag for people to get to the cloud than there were.
So we really went after VMware
and did the research much more aggressively. We started out with
our own kind of infrastructure that we bolted together and moved to a FlexPod in our second generation.
We have vCloud Hybrid Services now, and leveraging our existing code base, and then the whole suite of VMware products and services, we have transformed the company into a cloud provider.
Today, 90 percent of all our new Planview customers are SaaS
customers. It's been a big transition for us, but the technology from
VMware has been right in the center of making it happen.
Gardner: Greg, tell us a little bit about some of the business challenges that
are driving your IT requirements that, in turn, make the cloud model
attractive. Is this a growth issue? Is this a complexity issue? What are the business imperatives that shape your IT requirements?
Ericson: That’s a great question. Press Ganey is a 25-year-old organization. We
pioneered the concept of patient experience and the analytics, and
insight into the patient experience, within the healthcare setting. We
have an organization that's steeped in history, and so there are
multiple things that we're looking at.
Number one, we have one of the largest protected health information (PHI)
databases in the United States. So we felt that we had to have a very
secure and robust solution to provide to our clients, because they
trust us with their data.
Number two, with the healthcare
reform, the focus on patient experience is somewhat mandatory, whereas
before, it was somewhat voluntary. Now, it's regulated or it's part of
the healthcare reform. When you look at organizations, some were
actually coming to us and saying, "We want to get however many patient
surveys out that we need to satisfy our threshold."
Our philosophy is: why would you want to do that? We believe that if you
can understand and leverage the different media to be able to fill that
out, you can survey your entire population of patients that are coming
into not only your institution but, in the accountable care
organization, the entire ecosystem that you’re serving. That gives you
tremendous insight into what's going on with those patients.
Our scientists are also finding a correlation between the patient experience results and clinical and quality outcomes. So, as we can tie those data sets together in those episodic events, we're finding very interesting kinds of new, leading thought out there for our clients to look at.
So for us, going from minimally surveying your population to doing a census survey, which covers your entire population, represents exponential growth. The last thing is that, for our future, in terms of going after
some of those new analytics, some of the new insight that we want to
provide our clients, we want to position the technology to be able to
take us there.
We believe that the VMware vCloud Suite represents a completeness of vision. It represents a complete, single pane of glass for managing the enterprise and, longer-term, as we
become more sophisticated in identifying our data and as the industry
matures, we think that a public cloud, a hybrid cloud, is in the future for us, and we're preparing for that.
Gardner: And this must be a challenge for you, not only in terms of supporting
the applications, but also those data sets. You're getting some larger
data sets and they could be distributed. So the cloud model suits your
data needs over time as well?
Ericson: Absolutely. It gives us the opportunity to be able to apply technology
in the most cost-value proposition for the solutions that we’re
serving up for our customers.
Our current environment is around 600 server instances. We have about 300 terabytes (TB)
running in 20 SaaS applications, and we're growing exponentially each
month, as we continue to provide that deeper insight for our customers.
Gardner: Patrick, for your organization what are some of the business drivers that then translate into IT requirements?
Tickle: From an IT perspective, it changed the culture of the company, moving from an on-premises, perpetual-license model, "ship the software and have a customer care organization that focuses on bug and break-fix," to a service-delivery model. A lot of things rippled through from that.
At the end of the day, we had to move from an IT culture to an operations culture and all the things that go along with that: performance and uptime. Our customer base is global, so we had to be able to provide that service around the globe. All those things were pretty significant shifts from an IT perspective.
We went from a company that had a corporate IT group to a company that has a hosting and DevOps and Ops team that has a little bit of spend in corporate IT.
Out of the gate, the first step at Planview was moving to colo. SunGard
has been a great partner for us over the last couple of years as our
ping, power, and pipe. Then, in our first generation, we bolted together some of our own storage and compute infrastructure, because it wasn't quite all the way there. In our most recent incarnation of the infrastructure, we're using FlexPods at SunGard in Austin, Texas, and we're always evaluating future footprints. But ultimately, like many
companies, we would like to convert that infrastructure investment
from a capital spend into an OPEX spend. And that’s what’s compelling with vCloud Hybrid Service.
What we've been excited about hearing from VMware
is not just providing the performance and the scalability, but the
compatibility and the economic model that says we’re building this for
people who want to just move virtual machines (VMs).
We understand how big the opportunity is, and that’s going to open up
more of a public cloud opportunity for us to evaluate for a wide
variety of use cases going forward.
Gardner: How big a deal is it when you can, with just a click of a mouse, move workloads to any supported environment you want?
Tickle: It's a huge deal. Whether it's a production environment or a disaster recovery (DR) environment, at the end of the day it's a big deal for both of us. For a SaaS company, the only thing that matters is renewals; it's happy customers that renew. That's the transition from perpetual-plus-maintenance to a renewal model, where you're on customer-service watch at another level, every minute of every day.
Everything that we can do to make the customer experience as compelling as possible, not just through our UI and our software but obviously through the delivery of the service, allows us to run our business. That can be a disaster scenario or just great performance across the geography where we have customers, and we have to do that in a cost-effective way that operates inside our business model, our profit and loss.
So our shareholders are equally pleased with the return. We can't afford to have half of the company's OPEX go into IT while we're trying to make customers as successful as they can possibly be. We continue to be encouraged that we're on a great path with the stack that we're seeing to get there.
Gardner: I think it's fair to say that cloud is not just repaving old cow paths; cloud is really transforming your entire business. Do you agree?
Ericson: I agree. It allows us, especially an organization that’s 25 years steeped in history, to be able to rejuvenate our legacy applications
and be able to deliver those with maximum speed, maximizing our
resources, and delivering them in a secure environment. But it also
allows us to be able to grow, to flex, and to be able to rejuvenate and
organically transform the organization. It's pretty exciting for us and
it adds a lot of value to our clients indirectly.
Gardner: Greg, what are some of the more measurable payoffs when you go to cloud? Are these soft payoffs in productivity and automation, or are there hard numbers about return on investment (ROI) or moving more to an operating cost versus a capital cost? What do you get when you do cloud right?
Ericson: We justify the investment based on consolidation of our data centers,
consolidation and retirement of our storage arrays, and so on. That’s
from a hard-savings perspective. From a soft-savings perspective,
clearly in an environment that was not virtualized, virtualizing the environment represented a significant cost avoidance.
Our focus is on a complete solution that allows us to really focus on what's important for us and what's important for our clients.
We're looking at how to position the organization with a robust,
virtual secured infrastructure that runs with a minimum amount of
technical resources, so that we can focus most of our efforts on
delivering innovative applications to our clients.
The opportunity for us is to focus there. As you look at the size of the
data set and the growth of those data sets, positioning infrastructure
to be able to stay with you is exciting for us and it’s a value
proposition for our clients.
With a minimum amount of staff, we were able, in nine months, to move and virtualize our entire environment. When you talk about 600 servers and
300 TB of data, that's a pretty sizable enterprise and we're fully
leveraging the vCloud Suite.
Our network is virtualized, our storage is virtualized, and our servers are virtualized. The release of vCloud Suite 5.5 and some of the additional network functionality and storage functionality that’s coming out with that is rather exciting. I think it's going to continue to add more value to our proposition.
Gardner: Some people say that a single point of management, when you have that comprehensive suite approach, comes in pretty handy, too.
Ericson: It does, because it gives you the capability of managing through a single pane of glass across your environments. I would also accentuate that we're about 50 percent complete in building out our catalog.
For our next steps, number one is that we’re looking at building upon the excellence of Press Ganey and building our next-generation enterprise data warehouse.
We’re looking at leveraging from a DevOps perspective the VMware
vCloud Suite, and we already have some pilots that are up and running.
We'll continue to build that out.
As we deploy, not only are we
maximizing our assets in delivering a secure environment for our
clients, but we're also really working toward what I call engineering to
zero. We're completely automating and virtualizing those deployments, and we're able to move those deployments as we go from dev to test, test to user acceptance testing, and then into a production environment.
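The "engineering to zero" idea described above, promoting the same automated deployment through dev, test, UAT, and production with no manual steps, can be sketched in a few lines. Everything here (the stage names, the `Deployment` class, the artifact label) is illustrative, not Press Ganey's actual tooling; a real pipeline would drive vSphere or vCloud APIs at each step.

```python
# Illustrative sketch of automated stage promotion ("engineering to zero").
# All names are hypothetical; a real pipeline would call vSphere/vCloud APIs.
STAGES = ["dev", "test", "uat", "prod"]

class Deployment:
    def __init__(self, artifact):
        self.artifact = artifact  # the same immutable artifact in every stage
        self.stage = "dev"

    def promote(self):
        """Move the deployment to the next stage without manual steps."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise RuntimeError("already in production")
        self.stage = STAGES[i + 1]
        return self.stage

d = Deployment("app-build-1042")
while d.stage != "prod":
    d.promote()
print(d.artifact, d.stage)  # the artifact is unchanged end to end
```

The point of the sketch is that the deployment itself never changes between stages; only its stage advances, which is what makes the promotion safe to automate.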
Tickle: As we all know, there are a lot of hypervisors out there. We can all get that technology from a wide variety of sources. But to your question about the value of the stack, that's what we look at. What's important now is not just the product stack, but the services stack.
We look at a company like VMware and say, "Site Recovery Manager in conjunction with vCloud Hybrid Service brings a DR solution to me as a SaaS vendor, one that fits with my architecture and brings that services stack along with it."
There's no comparing another hypervisor vendor's ability to build out that stack of services. We could probably name numerous examples, and that's what I see when I listen to the things that go on at the event and get to spend time with the people at VMware. That whole value stack that VMware is investing in looks so much more compelling than just picking pieces of technology.
Gardner: Looking to the future, Greg, based on what you've heard at VMworld about the general availability of vCloud Hybrid Services
and the upgrade to the suite of private cloud support, what has you
most excited? Was there something that surprised you? What is in the
future road map for you?
A step further
Ericson: A couple of different things. The next release of NSX
is exciting for us. It allows us to be able to take the virtualization
of our network a step further. Also to be able to connect hypervisors
into a hybrid-cloud situation is something that, as we evolve our
maturity in terms of managing our data, is going to be exciting for us.
One of the areas that we're still teasing out and want to explore is how to tie an accelerator for a big-data application into that.
Probably, in 2014, what we're looking at is how to take this
environment and really move from a DR kind of environment to a
high-availability environment. I believe that we're architected for that, and because of the virtualization we can do it with a minimal amount of investment.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.
You may also be interested in: