A discussion of how healthcare providers employ a new breed of intelligent digital workspace technologies to improve doctor and patient experiences, make technology easier to use, and bring actionable knowledge resources to the integrated healthcare environment.
The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha is improving the way it harnesses a spectrum of IT deployment environments.
Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?
Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease-of-adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.
We support hundreds of research programs with data and trend analysis. Our infrastructure work requires quick iteration to identify the approaches that are both cost-effective and the best fit for our users’ needs.
Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?
Mullican: Gaining a competitive edge requires adopting new approaches to hybrid IT. Even carefully contained shadow IT can be a great way to experiment and attain breakthroughs.
Gardner: You want to give people enough leash to roam and experiment, but perhaps not so much that you lose track of where they are and what they are doing.
Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.
Gardner: How do you strike a balance between complexity that is too hard to manage, with its potential for chaos, and the point where you can harness and optimize -- yet still allow for experimentation?
Mullican: IT is ultimately responsible for the security and the up-time of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.
Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.
Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.
Gardner: What about the IT requirements? How do you fulfill that mandate using technology?
Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers.
Gardner: So you have been doing multi-cloud before there was even a word for multi-cloud?
Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.
Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?
Agnostic arrangements of clouds
Mullican: It’s important for us to keep our local datacenters agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open-source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces.
There are a lot [of services] that you can combine with the right toolsets, not only in your local datacenter but also in the clouds. If you put in the effort to write the code with that in mind -- so you don’t lock into any one solution necessarily -- then you can optimize and put everything together.
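To make that API-first, lock-in-averse approach concrete in code, here is a minimal Python sketch of the pattern Mullican describes: application code written against a provider-neutral interface, with each datacenter or cloud implemented as a swappable adapter. The class and method names are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of a provider-neutral provisioning layer. Class and method
# names here are illustrative assumptions, not any vendor's actual API.
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    """Common interface that every datacenter or cloud adapter implements."""

    @abstractmethod
    def provision(self, name: str, cpus: int, memory_gb: int) -> str:
        """Create an instance and return its identifier."""


class OnPremProvider(ComputeProvider):
    def provision(self, name, cpus, memory_gb):
        # A real adapter would call the local datacenter's API here.
        return f"onprem-{name}-{cpus}cpu-{memory_gb}gb"


class PublicCloudProvider(ComputeProvider):
    def provision(self, name, cpus, memory_gb):
        # A real adapter would call a public cloud SDK here.
        return f"cloud-{name}-{cpus}cpu-{memory_gb}gb"


def deploy(provider: ComputeProvider, name: str) -> str:
    # Calling code never references a specific vendor, so swapping providers
    # requires no change here -- which is the whole no-lock-in point.
    return provider.provision(name, cpus=4, memory_gb=16)


if __name__ == "__main__":
    for p in (OnPremProvider(), PublicCloudProvider()):
        print(deploy(p, "genomics-pipeline"))
```

The same dispatch style extends naturally to storage and networking adapters.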
Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.
Cloud cost control, granted
Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud -- from storage to compute -- and that cost is covered by the grant.
So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.
Gardner: Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?
Variety is the spice of IT
Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our datacenter, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us.
The key is: How do we rapidly provision those resources in an automated fashion? I think the key there is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our datacenter. What HPE OneSphere does is enable them to access -- through a common API -- that infrastructure. So it’s very exciting.
Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?
Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully -- and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.
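To picture that self-service step, here is a hypothetical sketch of a researcher-facing provisioning call against a unified management API. The host, endpoint path, and payload fields are invented for illustration and are not the actual HPE OneSphere REST API.

```python
# Hypothetical sketch of a self-service provisioning request against a
# unified management API. The host, endpoint, payload fields, and token
# handling are assumptions for illustration -- not the actual HPE OneSphere
# REST API.
import requests

API_BASE = "https://onesphere.example.org/api"  # hypothetical host
TOKEN = "REPLACE_WITH_SESSION_TOKEN"

payload = {
    "name": "rnaseq-worker",
    "environment": "aws-us-east-1",  # could equally be "vmware-onprem"
    "template": "ubuntu-16.04-bioinformatics",
}

resp = requests.post(
    f"{API_BASE}/deployments",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Provisioned:", resp.json())
```

The point is that the same request shape works whether the target is the on-premises environment or a public cloud; the platform resolves the difference.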
Gardner: What might this capability bring to you and your organization?
Cross-training the clouds
Mullican: We want to do more with cross-cloud. Right now we are very adept at provisioning within our datacenters, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds -- AWS, Google, Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.
For example, you might have an application that runs as a series of microservices. So you can have one microservice take advantage of your on-premises datacenter, such as for local storage. And then another piece could take advantage of object storage in the cloud. And even another piece could be in another separate public cloud.
But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That level of expertise is not in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline -- and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.
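As an illustration of that decomposition, here is a minimal Python sketch in which one piece of an application persists locality-sensitive data on premises while another publishes shareable results to cloud object storage. The bucket name and local path are assumptions; boto3 is the standard AWS SDK for Python.

```python
# Sketch of the cross-cloud decomposition described above: one piece of the
# application writes to on-premises storage, another to cloud object storage.
# The local mount and bucket name are hypothetical; boto3 is the standard
# AWS SDK for Python.
import pathlib

import boto3

LOCAL_ROOT = pathlib.Path("/data/onprem")   # hypothetical local mount
BUCKET = "hudsonalpha-example-results"      # hypothetical bucket name


def store_locally(name: str, data: bytes) -> None:
    # Latency-sensitive or locality-constrained data stays on premises.
    LOCAL_ROOT.mkdir(parents=True, exist_ok=True)
    (LOCAL_ROOT / name).write_bytes(data)


def store_in_cloud(name: str, data: bytes) -> None:
    # Shareable results go to object storage for worldwide collaboration.
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=name, Body=data)


def persist(name: str, data: bytes, locality_required: bool) -> None:
    # The microservice picks a backend; callers never need to know which.
    (store_locally if locality_required else store_in_cloud)(name, data)
```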
Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.
Mullican: Right, and that will be an ongoing effort to always optimize cost -- but not at the risk of slowing the research. We want the research to happen, and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.
Gardner: There may be other organizations that are more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that you have the flexibility to choose an algorithmic approach to business, if you will.
Mullican: Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance that we be able to conduct that research at the fastest possible speed.
Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.
As an early tester, and someone who has been a long-time user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage -- in conjunction with OneSphere -- that’s given you a "whole greater than the sum of the parts" effect?
Mullican: HPE Synergy and composable infrastructure are very near and dear to me. I have a lot of hours invested in HPE Synergy Image Streamer, customizing open-source operating systems and applications on it.
The ability to utilize that in the mix that I have architected natively with OneSphere -- in addition to the public clouds -- is very powerful, and I am excited to see where that goes.
Gardner: Any words of wisdom for others who may not yet have gone down this road? What do you advise others to consider as they seek to better compose, automate, and optimize their infrastructure?
Get adept at DevOps
Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach.
As far as putting an emphasis on automation -- and being able to provision infrastructure in the datacenter and the cloud through automated APIs -- a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. But then, once your IT department is adept at DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation. So it needs to start with IT.
It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources -- that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools.
That’s the partnership that companies really need to foster -- and it’s between IT and developers. And something like HPE OneSphere is a good fit for that, because it provides a unified API.
On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.
Then when it all comes together with a service catalog of options, in the end it’s just a simple interface. That’s what we want: to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure; they are just choosing the proper workflow and pipeline for future projects.
Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction, and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?
HCI advances across the globe
Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinct department, but in some companies that’s not the case.
And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.
Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons.
Is there something about having that visibility into hybrid IT that extends into hybrid data environments?
Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration.
So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for example, in the cloud.
Gardner: Now we can think of HPE OneSphere as also elevating data scientists -- and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.
Hybrid hard work pays off
Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer.
HPE OneSphere is a way to showcase to the end user all of the efforts that have been, and are being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.
You may also be interested in:
- South African insurer King Price gives developers the royal treatment as HCI meets big data
- Containers, microservices, and HCI help governments in Norway provide safer public data sharing
- Big data and cloud combo spark momentous genomic medicine advances at HudsonAlpha
- Pay-as-you-go IT models provide cost and operations advantages for Northrop Grumman
- Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks
- A tale of two hospitals—How healthcare economics in Belgium hastens need for new IT buying schemes
- How VMware, HPE, and Telefonica together bring managed cloud services to a global audience
- Retail gets a makeover thanks to data-driven insights, edge computing, and revamped user experiences
- Inside story on HPC's role in the Bridges Research Project at Pittsburgh Supercomputing Center
- How UBC gained TCO advantage via flash for its EduCloud cloud storage service
The next BriefingsDirect digital transformation success story examines how local governments in Norway benefit from a common platform approach for safe and efficient public data distribution.
We’ll now learn how Norway’s 18 counties are gaining a common shared pool for data on young people’s health and other sensitive information thanks to streamlined benefits of hyperconverged infrastructure (HCI), containers, and microservices.
Here to help us discover the benefits of a modern platform for smarter government data sharing is Frode Sjovatsen, Head of Development for the FINT Project in Norway. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: What is driving interest in having a common platform for public information in your country?
Sjovatsen: We need interactions between the government and the community to be more efficient. So we needed to build the infrastructure that supports automatic solutions for citizens. That’s the main driver.
Gardner: What problems do you need to overcome in order to create a more common approach?
Common API at the core
Sjovatsen: One of the biggest issues is that [our users] buy business applications, such as human resources systems for school administrators, and everyone is happy. They have a nice user interface on the data. But when we need to use that data across all the other processes -- that’s where the problem is. And that’s what the FINT project is all about.
[Due to apps heterogeneity] we then need to have developers create application programming interfaces (APIs), and it costs a lot of money, and it is of variable quality. What we’re doing now is creating a common API that’s horizontal -- for all of those business applications. It gives us the ability to use our data much more efficiently.
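To illustrate the idea of a common, horizontal API, here is a minimal sketch that normalizes records from two hypothetical vendor systems into one shared model. The field names are invented for illustration and are not the actual FINT information model.

```python
# Sketch of the "common API" idea: normalize records from different vendor
# systems into one shared model. Field names are invented for illustration
# and are not the actual FINT information model.
from dataclasses import dataclass


@dataclass
class Employee:
    employee_id: str
    full_name: str
    school: str


def from_vendor_a(record: dict) -> Employee:
    # Vendor A exposes flat, lower-case keys.
    return Employee(record["id"], record["name"], record["school"])


def from_vendor_b(record: dict) -> Employee:
    # Vendor B nests the same data under different names.
    person = record["person"]
    return Employee(person["ref"],
                    f'{person["first"]} {person["last"]}',
                    record["unit"]["display"])


if __name__ == "__main__":
    a = from_vendor_a({"id": "42", "name": "Kari Nordmann",
                       "school": "Oslo VGS"})
    b = from_vendor_b({"person": {"ref": "42", "first": "Kari",
                                  "last": "Nordmann"},
                       "unit": {"display": "Oslo VGS"}})
    assert a == b  # both vendors now resolve to the same common model
```

Consumers then build against the common model once, instead of against every vendor API separately.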
Gardner: Please describe for us what the FINT project is and why this is so important for public health.
Sjovatsen: It’s all about taking back the power over the information we’ve handed to the vendors. There is an initiative in Norway where the government talks about getting control of all the information. And the thought behind the FINT project is that we need to get ahold of all the information, describe it, define it, and then make it available via APIs -- both for public use and also for internal use.
Gardner: What sort of information are we dealing with here? Why is it important for the general public health?
Sjovatsen: It’s all kinds of information. For example, it’s school information, such as how the everyday processes run, the schedules, the grades, and so on. All of that data is necessary to create good services for the teachers and students. We also want to make that data available so that we can build new innovations from businesses that want to create new and better solutions for us.
Gardner: When you were tasked with creating this platform, why did you seek an API-driven, microservices-based architecture? What did you look for to maintain simplicity and cost efficiency in the underlying architecture and systems?
Agility, scalability, and speed
Sjovatsen: We needed something that was agile so that we can roll out updates continuously. We also needed a way to roll back quickly, if something fails.
The reason we are running this on one of the county council’s datacenters is that we wanted to separate it from their other production environments. We need to be able to scale these services quickly. When we talked to Hewlett Packard Enterprise (HPE), the solution they suggested was using HCI.
Gardner: Where are you in the deployment and what have been some of the benefits of such a hyperconverged approach?
Sjovatsen: We are in the late stage of testing and we’re going into production in early 2018. At the moment, we’re looking into using HPE SimpliVity.
Gardner: Containers are an important part of moving toward automation and simplicity for many people these days. Is that another technology that you are comfortable with and, if so, why?
Sjovatsen: Yes, definitely. We are very comfortable with that. The biggest reason is that when we use containers, we isolate the application; the whole container is the application and we are able to test the code before it goes into production. That’s one of the main drivers.
The second reason is that it’s easy to roll out and it’s easy to roll back. We also have developers in and out of the project, and containers make it easy for them to quickly get into the environment they are working on. It’s not that much work if they need to install on another computer to get a working environment running.
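As a sketch of that roll-out/roll-back workflow, the following uses the Docker SDK for Python to replace a running container with a given image tag; rolling back is the same call with the previous known-good tag. The image and registry names are hypothetical.

```python
# Sketch of the roll-out/roll-back pattern using the Docker SDK for Python
# (the "docker" package). The registry and image names are hypothetical;
# the SDK calls shown (from_env, containers.get/run, stop, remove) are real.
import docker

client = docker.from_env()


def deploy(tag: str):
    """Replace the running API container with the given image tag."""
    try:
        old = client.containers.get("fint-api")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass  # first deployment: nothing to replace
    return client.containers.run(
        f"registry.example.org/fint-api:{tag}",  # hypothetical registry
        name="fint-api",
        detach=True,
    )


deploy("2018.1")    # roll out a new build
# deploy("2017.9")  # rollback is the same call with the known-good tag
```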
Gardner: A lot of IT organizations are trying to reduce the amount of money and time they spend on maintaining existing applications, so they can put more emphasis into creating new applications. How do containers, microservices, and API-driven services help you flip from an emphasis on maintenance to an emphasis on innovation?
Sjovatsen: The container approach is very close to the DevOps environment, so the time from code to production is very small compared to what we did before when we had some operations guys installing the stuff on servers. Now, we have a very rapid way to go from code to production.
Gardner: With the success of the FINT Project so far, would you consider extending this to other types of data and applications in other public sector activities or processes? Is this a model that has extensibility elsewhere in the public sector?
Unlocking the potential
Sjovatsen: Yes, definitely. At the moment, there are 18 county councils in this project. We are just beginning to introduce this to all of the 400 municipalities [in Norway]. So that’s the next step. Those are the same data sets that we want to share or extend. But there are also initiatives with central registers in Norway and we will add value to those using our approach in the next year or so.
Gardner: That could have some very beneficial impacts, very good payoffs.
Sjovatsen: Yes, it could. There are other uses. For example, in Oslo we have built an API that extends across the locks on many doors. So we can now use one API to open multiple locking systems. That’s another way to use this approach.
Gardner: It shows the wide applicability of this. Any advice, Frode, for other organizations that are examining more of a container, DevOps, and API-driven architecture approach? What might you tell them as they consider taking this journey?
Sjovatsen: I definitely recommend it -- it’s simple and agile. The main thing with containers is to separate the storage from the applications. That’s probably what we worked on the most to make it scalable. We wrote the application so it’s scalable, and we separated the data from the presentation layer.
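A minimal sketch of that separation, assuming hypothetical image and host paths: the container stays stateless and disposable while its data lives in a bind-mounted volume, so redeploying or scaling the application never touches the state.

```python
# Sketch of Sjovatsen's advice to keep storage separate from the
# application: the container is stateless and disposable, while the data
# lives in a bind-mounted host path. Paths and image name are hypothetical.
import docker

client = docker.from_env()
client.containers.run(
    "registry.example.org/fint-api:2018.1",  # hypothetical image
    name="fint-api",
    detach=True,
    volumes={"/srv/fint/data": {"bind": "/data", "mode": "rw"}},
)
# Replacing or rolling back the container never touches /srv/fint/data,
# so the application scales and redeploys independently of its state.
```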
You may also be interested in:
- How VMware, HPE, and Telefonica together bring managed cloud services to a global audience
- Ericsson and HPE accelerate digital transformation via customizable mobile business infrastructure stacks
- IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT
- How Nokia refactors the video delivery business with new time-managed IT financing models
- Retail gets a makeover thanks to data-driven insights, edge computing, and revamped user experiences
- As enterprises face mounting hybrid IT complexity, new management solutions beckon
- How a large Missouri medical center developed an agile healthcare infrastructure security strategy
- Get ready for the Post-Cloud World
- Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcome
- Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe
The next BriefingsDirect data center financing agility interview explores how two Belgian hospitals are adjusting to dynamic healthcare economics to better compete and cooperate.
We will now explore how a regional hospital seeking efficiency -- and a teaching hospital seeking performance -- are meeting their unique requirements thanks to modern IT architectures and innovative IT buying methods.
Here to help us understand the multilevel benefits of the new economics of composable infrastructure and software-defined data center (SDDC) in the fast-changing healthcare field are Filip Hens, Infrastructure Manager at UZA Hospital in Antwerp, and Kim Buts, Infrastructure Manager at Imelda Hospital in Bonheiden, both in Belgium. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
We'll now learn how a Philips Healthcare Informatics and Hewlett Packard Enterprise (HPE) partnership creates new solutions for the global healthcare market and provides better health outcomes for patients by managing data and intelligence better.
Joining us to explain how companies tackle the complexity of solutions delivery in healthcare by using advanced big data and analytics is Martijn Heemskerk, Healthcare Informatics Ecosystem Director for Philips, based in Eindhoven, the Netherlands. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: Why are partnerships so important in healthcare informatics? Is it because there are clinical considerations combined with big data technology? Why are these types of solutions particularly dependent upon an ecosystem approach?
Heemskerk: It’s exactly as you say, Dana. At Philips we are very strong at developing clinical solutions for our customers. But nowadays those solutions also require an IT infrastructure layer underneath to solve the total equation. As such, we are looking for partners in the ecosystem because we at Philips recognize that we cannot do everything alone. We need partners in the ecosystem that can help address the total solution -- or the total value proposition -- for our customers.
Gardner: I'm sure it varies from region to region, but is there a cultural barrier in some regard to bringing cutting-edge IT in particular into healthcare organizations? Or have things progressed to where technology and healthcare converge?
Heemskerk: Of course, there are some countries that are more mature than others. Therefore the level of healthcare and the type of solutions that you offer to different countries may vary. But in principle, many of the challenges that hospitals everywhere are going through are similar.
Some of the not-so-mature markets are also trying to leapfrog so that they can deliver different solutions that are up to par with the mature markets.
Gardner: Because we are hearing a lot about big data and edge computing these days, we are seeing the need for analytics at a distributed architecture scale. Please explain how big data changes healthcare.
Big data value add
Heemskerk: What is very interesting is what happens when you combine big data with value-based care. For example, nowadays a hospital is not reimbursed for every procedure that it does -- the value is based more on the total outcome of how a patient recovers.
This means that more analytics need to be gathered across different elements of the process chain before reimbursement takes place. In that sense, analytics become very important for hospitals to measure how efficiently things are being done, and to determine whether the costs are acceptable.
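As a toy illustration of that shift, the following computes a reimbursement figure from an outcome score aggregated across the whole process chain, rather than from a per-procedure count. All events, weights, and amounts are invented for the example.

```python
# Toy illustration of the value-based-care point: reimbursement depends on
# outcomes aggregated across the whole process chain, not on counting
# procedures. Events, weights, and amounts are invented for the example.
care_chain = [
    {"step": "admission", "on_time": True,  "cost": 1200},
    {"step": "surgery",   "on_time": True,  "cost": 9500},
    {"step": "recovery",  "on_time": False, "cost": 2100},  # readmitted
    {"step": "follow_up", "on_time": True,  "cost": 400},
]

total_cost = sum(e["cost"] for e in care_chain)
outcome_score = sum(e["on_time"] for e in care_chain) / len(care_chain)

# Reimbursement scales with the measured outcome instead of per procedure.
base_reimbursement = 14000
payment = base_reimbursement * outcome_score
print(f"cost={total_cost}, outcome={outcome_score:.2f}, payment={payment:.0f}")
```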
Gardner: The same data that can be used to improve efficiency can also be used for better healthcare outcomes, for understanding the path of a disease, or for gauging the efficacy of procedures, and so on. A great deal can be gained when data is gathered and used properly.
Heemskerk: That is correct. And you see, indeed, that there is much more data nowadays, and you can utilize it for all kind of different things.
Gardner: Please help us understand the relationship between your organization and HPE. Where does your part of the value begin and end, and how does HPE fill their role on the technology side?
Healthy hardware relationships
Heemskerk: HPE has been a highly valued supplier of Philips for quite a long time. We use their technologies for all kinds of different clinical solutions. For example, all of the hardware that we use for our back-end solutions or for advanced visualization is sourced from HPE. I am focusing very much on the commercial side of the game, so to speak, where we are really looking at how we can jointly go to market.
As I said, customers are really looking for one-stop shopping, a complete value proposition, for the challenges that they are facing. That’s why we partner with HPE on a holistic level.
Gardner: Does that involve bringing HPE into certain accounts and vice versa, and then going in to provide larger solutions together?
Heemskerk: Yes, that is exactly the case, indeed. We recognized that we should not focus only on problems related to the clinical implications, nor only on the problems that HPE addresses -- the IT infrastructure and connectivity side of the value chain. Instead, we are really looking at the problems that C-suite-level healthcare executives are facing.
You can think about healthcare industry consolidation, for example, as a big topic. Many hospitals are now moving into a cluster or into a network and that creates all kinds of challenges, both on the clinical application layer, but also on the IT infrastructure. How do you harmonize all of this? How do you standardize all of your different applications? How do you make sure that hospitals are going to be connected? How do you align all of your processes so that there is a more optimized process flow within the hospitals?
By addressing these kinds of questions and jointly going to our customers with HPE, we can improve user experiences, create better services, optimize solutions, and deliver a lot of time savings for the hospitals as well.
Gardner: We have certainly seen in other industries that if you try IT modernization without including the larger organization -- the people, the process, and the culture -- the results just aren’t as good. It is important to go at modernization and transformation, consolidation of data centers, for example, with that full range of inputs and getting full buy-in.
Who else makes up the ecosystem? It takes more than two players to make an ecosystem.
Heemskerk: Yes, that's very true, indeed. In this, system integrators also have a very important role. They can have an independent view on what would be the best solution to fit a specific hospital.
Of course, we think that the Philips healthcare solutions are quite often the best, jointly focused with the solutions from HPE, but from time to time you can be partnering with different vendors.
Besides that, we don't have all of the clinical applications ourselves. By partnering with other vendors in the ecosystem, you can sometimes enhance the solutions that we have -- think of 3D visualization and 3D printing solutions, for example.
Gardner: When you do this all correctly, when you leverage and exploit an ecosystem approach, when you cover the bases of technology, finance, culture, and clinical considerations, how much of an impressive improvement can we typically see?
Saving time, money, and people
Heemskerk: We try to look at it customer by customer, but generically what we see is that there are really a lot of savings.
First of all, addressing standardization across the clinical application layer means that a customer doesn't have to spend a lot of money on training all of its hospital employees on different kinds of solutions. So that's already a big savings.
Secondly, by harmonizing and making better effective use of the clinical applications, you can drive the total cost of ownership down.
Thirdly, it means that on the clinical applications layer, there are a lot of efficiency benefits possible. For example, advanced analytics make it possible to reduce the time that clinicians or radiologists are spending on analyzing different kinds of elements, which also creates time savings.
Gardner: Looking more to the future, as technologies improve, as costs go down, as they typically do, as hybrid IT models are utilized and understood better -- where do you see things going next for the healthcare sector when it comes to utilizing technology, utilizing informatics, and improving their overall process and outcomes?
Heemskerk: What would be very interesting for me to see is whether we can create some kind of patient-centric data file for each patient. Consumers are increasingly engaged in their own health, with all the different devices like Fitbit, Jawbone, and Apple Watch coming up. This is creating a massive amount of data. But there is much more data that you can put into such a patient-centric file, such as chronic disease information, now that people are being monitored much more, and much more often.
If you can have a chronological view of all of the different touch points that the patient has in the hospital, combined with the drugs that the patient is using etc., and you have that all in this patient-centric file -- it will be very interesting. And everything, of course, needs to be interconnected. Therefore, Internet of Things (IoT) technologies will become more important. And as the data is growing, you will have smarter algorithms that can also interpret that data – and so artificial intelligence (AI) will become much more important.
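To make the patient-centric file idea concrete, here is a minimal sketch that merges touch points from several sources into one chronological view. All fields and sources are illustrative assumptions.

```python
# Sketch of the "patient-centric data file" described above: touch points
# from hospitals, wearables, and pharmacies merged into one chronological
# view. All fields and sources are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class TouchPoint:
    when: datetime
    source: str   # e.g. "hospital", "wearable", "pharmacy"
    detail: str


@dataclass
class PatientRecord:
    patient_id: str
    touch_points: list = field(default_factory=list)

    def add(self, tp: TouchPoint) -> None:
        self.touch_points.append(tp)

    def timeline(self) -> list:
        # The chronological view across every source of data.
        return sorted(self.touch_points, key=lambda tp: tp.when)


record = PatientRecord("patient-001")
record.add(TouchPoint(datetime(2018, 3, 2, 9), "hospital", "ER visit"))
record.add(TouchPoint(datetime(2018, 3, 1, 8), "wearable", "resting HR 54"))
record.add(TouchPoint(datetime(2018, 3, 3, 12), "pharmacy", "prescription filled"))
for tp in record.timeline():
    print(tp.when.isoformat(), tp.source, tp.detail)
```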
You may also be interested in:
We’ll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we’ll examine the forum’s Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.
To learn more about improving the processes and IT that better supports healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us about why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.
Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of the gross domestic product (GDP) of the economy in the US, and approaching that level in other developed countries in Europe.
There is a general feeling that enterprise architecture is somewhat behind in this industry, relative to other industries. There are important gaps to fill that will help those stakeholders in healthcare -- whether they are in hospitals or healthcare delivery systems or innovation hubs in organizations of different sorts, such as consulting firms. They can better leverage IT to achieve business goals, through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.
Our mission, simply, is to help make healthcare information available when and where it’s needed and to accomplish that goal through architecting the healthcare enterprise. That’s what we hope to achieve.
Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of, how many organizations are involved?
Kipf: The HCF is made up of its members and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, so we are more than just the governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, we are vendor-neutral, and we are business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.
As we have many health standards out there, we really want to leverage [experience] from our 30 members to make standards work by providing the right type of tools, frameworks, and approaches. We partner a lot in the industry.
The healthcare industry is really a crowded place and there are many standard development organizations. There are many players. It’s quite vital as a forum that we reach out, collaborate, and engage with others to reach where we want to be.
Gardner: Gail, why is the role of the enterprise architecture function an important ingredient to help bring this together? What’s important about EA when we think about the healthcare industry?
Kalbfleisch: From an EA perspective, I don’t really think it matters whether you are talking about the healthcare industry, the finance industry, the personnel industry, or the gas and electric industry. In any of those, the organizations that tend to be highly functioning have not just architecture -- everyone has architecture for what they do -- but architecture that is documented and available for use by decision-makers and by developers across the system, so that each part can work well together.
We know that within the healthcare industry it is exceedingly complicated, and it’s a mixture of a lot of different things. It’s not just your body and your doctor, it’s also your insurance, your payers, research, academia -- and putting all of those together.
If we don’t have EA, people new to the system -- or people who were deeply embedded into their parts of the system -- can’t see how that system all works together usefully. For example, there are a lot of different standards organizations. If we don’t see how all of that works together -- where everybody else is working, and how to make it fit together – then we’re going to have a hard time getting to interoperability quickly and efficiently.
Kipf: If you think of the healthcare industry, we’ve been very good at developing individual solutions to specific problems. There’s a lot of innovation and a lot of technology that we use. But there is an inherent risk of producing silos among the many stakeholders who, ultimately, work for the good of the patient. It's important that we move from individual solution building blocks to a more integrated approach based on architecture building blocks, and on common frameworks, tools, and approaches.
Gardner: Healthcare is a very complex environment and IT is very fast-paced. Can you give us an update on what the Healthcare Forum has been doing, given the difficulty of managing such complexity?
Bird’s-eye view mapping
Lee: The Healthcare Forum began with a series of white papers, initially focusing on an information model that has a long history in the federal government. We used enterprise architecture to evaluate the Federal Health Information Model (FHIM). People began listening and we started to talk to people outside of The Open Group, and outside of the normal channels of The Open Group. We talked to different types of architects, such as information architects, solution architects, engineers, and initially settled on the problem that is essential to The Open Group -- and that is the problem of boundaryless information flow.
We need to get beyond the silos that Oliver mentioned and that Gail alluded to. As I mentioned in my opening comments, this is a huge industry, and Gail illustrated it by naming some of the stakeholders within the health, healthcare and wellness enterprises. If you think of your hospital, it can be difficult to achieve boundaryless information flow to enable your information to travel digitally, securely, quickly, and in a way that’s valid, reliable and understandable by those who send it and by those who receive it. But if that is possible, it’s all to the betterment of the patient.
Initially, in our focus on what healthcare folks call interoperability -- what we refer to as boundaryless information flow -- we came to realize through discussions with stakeholders in the public sector, as well as the private sector and globally, that understanding how the different pieces are linked together is critical. Anybody who works in an organization or belongs to a church, school or family understands that sometimes getting the right message communicated from point A to point B can be difficult.
To address that issue, the HCF members have decided to create a Health Enterprise Reference Architecture (HERA) that is essentially a framework and a map at the highest level. It helps people see that what they do relates to what others do, regardless of their position in their company. You want to deliver value to those people, to help them understand how their work is interconnected, and how IT can help them achieve their goals.
Gardner: Oliver, who should be aware of and explore engaging with the HCF?
Kipf: The members of The Open Group themselves, many of them are players in the field of healthcare, and so they are the natural candidates to really engage with. In that healthcare ecosystem we have providers, payers, governing bodies, pharmaceuticals, and IT companies.
Those who deeply need planning, management and architecting -- to make big thinking a reality out there -- those decision-makers are the prime candidates for engagement in the Healthcare Forum. They can benefit from the kinds of products we produce, the reference architecture, and the white papers that we offer. In a nutshell, it’s the members, and it’s the healthcare industry, and the healthcare ecosystem that we are targeting.
Gardner: Gail, perhaps you could address the reference architecture initiative? Why do you see that as important? Who do you think should be aware of it and contribute to it?
Shared reference points
Kalbfleisch: Reference architecture is one of those building block pieces that should be used. You can call it a template. You can have words that other people can relate to, maybe easier than the architecture-speak.
If you take that template, you can make it available to other people so that we can all be designing our processes and systems with a common understanding of our information exchange -- so that it crosses boundaries easily and securely. If we are all running on the same template, that’s going to enable us to identify how to start, what has to be included, and what standards we are going to use.
A reference architecture is one of those very important pieces that not only forms a list of how we want to do things, and what we agreed to, but it also makes it so that every organization doesn’t have to start from scratch. It can be reused and improved upon as we go through the work. If someone improves the architecture, that can come back into the reference architecture.
Who should know about it? Decision makers, developers, medical device innovators, people who are looking to improve the way information flows within any health sector -- whether it’s Oliver in Europe, whether it’s someone over in California, Australia, it really doesn't matter. Anyone who wants to make interoperability better should know about it.
My focus is on decision-makers, policymakers, process developers, and other people who look at it from a device-design perspective. One of the things that has been discussed within the HCF’s reference architecture work is the need to make sure that it’s all at a high-enough level, where we can agree on what it looks like. Yet it also must go down deeply enough so that people can apply it to what they are doing -- whether it’s designing a piece of software or designing a medical device.
Gardner: Jason, The Open Group has been involved with standards and reference architectures for decades, with such recent initiatives as the IT4IT approach, as well as the longstanding TOGAF reference architecture. How does the HERA relate to some of these other architectural initiatives?
Building on a strong foundation
Lee: The HERA starts by using the essential components and insights that are built into the TOGAF Architecture Development Method (ADM) and builds from there. It also uses the ArchiMate language, but we have never felt restricted to using only those existing Open Group models that have been around for some time and are currently being developed further.
We are a big organization in terms of our approach, our forum, and so we want to draw from the best there is in order to fill in the gaps. Over the last few decades, an incredible amount of talent has joined The Open Group to develop architectural models and standards that apply across multiple industries, including healthcare. We reuse and build from this important work.
In addition, as we have dug deeper into the healthcare industry, we have found other issues – gaps -- that need filling. There are related topics that would benefit. To do that, we have been working hard to establish relationships with other organizations in the healthcare space, to bring them in, and to collaborate. We have done this with the Health Level Seven Organization (HL7), which is one of the best-known standards organizations in the world.
We are also doing this now with an organization called Healthcare Services Platform Consortium (HSPC), which involves academic, government and hospital organizations, as well as people who are focused on developing standards around terminology.
IT’s getting better all the time
Kipf: If you think about reference architecture in a specific domain, such as in the healthcare industry, you look at your customers and the enterprises -- those really concerned with the delivery of health services. You need to ask yourself the question: What are their needs?
And the need in this industry is a focus on the person and on the service. It’s also highly regulatory, so being compliant is a big thing. Quality is a big thing. The idea of lifetime evolution -- that you become better and better all the time -- that is very important, very intrinsic to the healthcare industry.
When we look at the customers for whom we believe the HERA could be of value, you have to think of small- to mid-sized as well as large enterprises, really across the globe. That’s why we believe the HERA is something that is tuned to the needs of our industry.
And as Jason mentioned, we build on open standards and we leverage them where we can. ArchiMate is one of the big ones -- not only the business language, but also a lot of the concepts are based on ArchiMate. But we need to include other standards as well, obviously those from the healthcare industry, and we need to deviate from specific standards where this is of value to our industry.
Gardner: Oliver, in order to get this standard to be something that's used, that’s very practical, people look to results. So if you were to take advantage of such reference architectures as HERA, what should you expect to get back? If you do it right, what are the payoffs?
Capacity for change and collaboration
Kipf: It should enable you to do a better job, to become more efficient, and to make better use of technology. Those are the kinds of benefits that you see realized. It’s not only that you have a place where you can model all the elements of your enterprise, where you can put and manage your processes and your services, but it’s also in the way you are architecting your enterprise.
It gives you the ability to change. From a transformation management perspective, we know that many healthcare systems have great challenges and there is this need to change. The HERA gives you the tools to get where you want to be, to define where you want to be -- and also how to get there. This is where we believe it provides a lot of benefits.
Gardner: Gail, similar question, for those organizations, both public and private sector, that do this well, that embrace HERA, what should they hope to get in return?
Kalbfleisch: I completely agree with what Oliver said. To add, one of the benefits that you get from using EA is a chance to have a perspective from outside your own narrow silos. The HERA should be able to help a person see other areas that they have to take into consideration, that maybe they wouldn’t have before.
Another value is to engage with other people who are doing similar work, who may have either learned lessons, or are doing similar things at the same time. So that's one of the ways I see the effectiveness and of doing our jobs better, quicker, and faster.
Also, it can help us identify where we have gaps and where we need to focus our efforts. We can focus our limited resources in much better ways on specific issues -- where we can accomplish what we are looking to -- and to gain that boundaryless information flow.
Reaching your goals
Lee: Essentially, the HERA will provide a framework that enables companies to leverage IT to achieve their goals. The wonderful thing about it is that we are not telling organizations what their goals should be. We show them how they can follow a roadmap to accomplish their self-defined goals more effectively. Often this involves communicating the big picture, as Gail said, to those who are in siloed positions within their organizations.
There is an old saying: “What you see depends on where you sit.” The HERA helps stakeholders gain this perspective by helping key players understand the relationships, for example, between business processes and engineering. So whether a stakeholder’s interest is increasing patient satisfaction, reducing error, improving quality, achieving better patient outcomes, or gaining more reimbursement where reimbursement is tied to outcomes -- using the product and the architecture that we are developing helps with all of these goals.
Gardner: Jason, for those who are intrigued by what you are doing with the HERA, tell us about its trajectory, its evolution, and how that journey unfolds. Where can they learn more or get involved?
Lee: We have only been working on the HERA per se for the last year, although its underpinnings go back 20 years or more. Its trajectory is not to a single point, but to an evolutionary process. We will be producing white papers, as well as products that others can use in a modular fashion to leverage what they already use within their legacy systems.
We encourage anyone out there, particularly in the health system delivery space, to join us. That can be done by contacting me at firstname.lastname@example.org and at www.opengroup.org/healthcare.
It’s an incredible time, a very opportune time, for key players to be involved because we are making very important decisions that lay the foundation for the HERA. We collaborate with key players, and we lay down the tracks from which we will build increasing levels of complexity.
But we start at the top, using non-architectural language to be able to talk to decision-makers, whether they are in the public sector or private sector. So we invite any of these organizations to join us.
Learn from others’ mistakes
Kalbfleisch: My first foray into working with The Open Group was long before I was in the health IT sector. I was with the US Air Force and we were doing very non-health architectural work in conjunction with The Open Group.
The interesting part to me is in ensuring boundaryless information flow in a manner that is consistent with the information flowing where it needs to go and who has access to it. How does it get from place to place across distinct mission areas, or distinct business areas where the information is not used the same way or stored in the same way? Such dissonance between those business areas is not a problem that is isolated just to healthcare; it’s across all business areas.
That was exciting. I was able to take awareness of The Open Group from a previous life, so to speak, and engage with them to get involved in the Healthcare Forum from my current position.
A lot of the technical problems that we have in exchanging information, regardless of what industry you are in, have been addressed by other people, and have already been worked on. By leveraging the way organizations have already worked on it for 20 years, we can leverage that work within the healthcare industry. We don't have to make the same mistakes that were made before. We can take what people have learned and extend it much further. We can do that best by working together in areas like The Open Group HCF.
Kipf: On that evolutionary approach, I also see this as a long-term journey. Yes, there will be releases when we have a specification, and there will be guidelines. But it’s important that this remains an engagement, with ongoing collaboration with customers even after release. The coming together of a team is what really makes a great reference architecture -- a team that places the architecture at a high level.
We can also develop distinct flavors of the specification, with much more detail. Those implementation architectures then become spin-offs of reference architectures such as the HERA.
Lee: I can give some concrete examples, to bookend the kinds of problems that can be addressed using the HERA. At the micro end, a hospital can use the HERA structure to implement a patient check-in to the hospital for patients who would like to bypass the usual process and check themselves in. This has a number of positive value outcomes for the hospital in terms of staffing and in terms of patient satisfaction and cost savings.
At the other extreme, a large hospital system in Philadelphia or Stuttgart or Oslo or in India finds itself with patients appearing at the emergency room or in ambulatory settings unaffiliated with that particular hospital. Rather than have that patient come as a blank sheet of paper, and redo all the tests that had been done prior, the HERA will help these healthcare organizations figure out how to exchange data in a meaningful way. So the information can flow digitally and securely, and it means the same thing to those who send it as it does to those who receive it -- and everything is patient-focused, patient-centric.
Gardner: Oliver, we have seen with other Open Group standards and reference architectures, a certification process often comes to bear that helps people be recognized for being adept and properly trained. Do you expect to have a certification process with HERA at some point?
Certifiable enterprise expertise
Kipf: Yes. The more we mature with the HERA -- along with the defined guidelines, the specifications, and the HERA model -- the more need and demand there will be in the marketplace for health enterprise-focused architects, and for consulting services that can apply the HERA.
And that’s a perfect place for certification. It helps make sure that the quality of the workforce is strong, whether it’s internal or in the form of a professional services role, and that it complies with the HERA.
Gardner: Clearly, this has applicability to healthcare payer organizations, provider organizations, government agencies, and the vendors who supply pharmaceuticals or medical instruments. There is a great deal of process benefit when this is done properly, and enterprise architects could eventually become certified.
My question then is how do we take the HERA, with such a potential for being beneficial across the board, and make it well-known? Jason, how do we get the word out? How can people who are listening to this or reading this, help with that?
Spread the word, around the world
Lee: It's a question that has to be considered every time we meet. I think the answer is straightforward. First, we build a product [the HERA] that has clear value for stakeholders in the healthcare system. That’s the internal part.
Second -- and often simultaneously -- we develop a very important marketing/collaboration/socialization capability. That’s the external part. I've worked in healthcare for more than 30 years, and whether it's public or private sector decision-making, there are many stakeholders, and everybody's focused on the same few things: improving value, enhancing quality, expanding access, and providing security.
We will continue developing relationships with key players to assure them that what they’re doing is key to the HERA. At the broadest level, all companies must plan, build, operate, and improve.
There are immense opportunities for business development. There are innumerable ways to use the HERA to help health enterprise systems operate efficiently and effectively. There are opportunities to demonstrate to key movers and shakers in the healthcare system how what we're doing integrates with what they're doing. This will maximize the uptake of the HERA and minimize the chances it sits on a shelf after it's been developed.
Gardner: Oliver, there are also a variety of regional conferences and events around the world. Some of them are from The Open Group. How important is it for people to be aware of these events, maybe by taking part virtually online or in person? Tell us about the face-time opportunities, if you will, of these events, and how that can foster awareness and improvement of HERA uptake.
Kipf: That began with the last Open Group event in Berlin, where I presented the HERA. As we see more development and more maturity, we can show more, and the uptake will be there. We also need to include things like cybersecurity and risk compliance, so we can bring in a lot of what we have been doing in various other initiatives within The Open Group. We can show how it all fuses together into something that is really of value.
I am confident that through face-to-face events, such as The Open Group events, we can further spread the message.
Lee: And a real shout-out to Gail and Oliver who have been critical in making introductions and helping to share The Open Group Healthcare Forum’s work broadly. The most recent example is the 2016 HIMSS conference, a meeting that brings together more than 40,000 people every year. There is a federal interoperability showcase there, and we have been able to introduce and discuss our HERA work there.
We’ve collaborated with the Office of the National Coordinator where the Federal Health Architecture sits, with the US Veterans Administration, with the US Department of Defense, and with the Centers for Medicare and Medicaid Services (CMS). This is all US-centered, but there are lots of opportunities globally to not just spread the word in public domains and public venues, but also to go to those key players who are moving the industry forward, and in some cases convince them that enterprise architecture does provide that structure, that template, that can help them achieve their goals.
Gardner: I’m afraid we are almost out of time. Gail, perhaps a look into the crystal ball. What do you expect and hope to see in the next couple of years from the improvements that initiatives like the HERA at The Open Group Healthcare Forum can provide?
Kalbfleisch: What I would like to see happen in the next couple of years, as it relates to the HERA, is the ability to have a place where we can go from anywhere and get a glimpse of the landscape. Right now, it’s hard for someone in the US to see the great work that Oliver is doing, or what the people in Norway or Australia are doing.
It’s really important that we have opportunities to communicate as large groups, but also one-on-one. Yet when we are not able to communicate personally, I would like to see a resource or a tool where people can go and get the information they need on the HERA on their own time, or as they have a question. A reference architecture is great to have, but it has no power until it’s used.
My hope for the future is for the HERA to be used by decision-makers, developers, and even patients. So when an organization such as a hospital wants to develop a new electronic health record (EHR) system, it has a place to go to get started, without having to contact Jason or wait for a vendor to come along and tell it how to solve the problem. That would be my hope for the future.
Lee: You can think of the HERA as a soup with three key ingredients. First is the involvement and commitment of very bright people and top-notch organizations. Second, we leverage the deep experience and products of other forums of The Open Group. Third, we build on external relationships. Together, these three things will help make the HERA successful as a certifiable product that people can use to get their work done and do better.
Gardner: Jason, perhaps you could also tee up the next Open Group event in Amsterdam. Can you tell us more about that and how to get involved?
Lee: We are very excited about our next event in Amsterdam in October. You can go to www.opengroup.org and look under Events, read about the agendas, and sign up there. We will have involvement from experts from the US, UK, Germany, Australia, Norway, and this is just in the Healthcare Forum!
The Open Group membership will be giving papers, having discussions, moving the ball forward. It will be a very productive and fun time and we are looking forward to it. Again, anyone who has a question or is interested in joining the Healthcare Forum can please send me, Jason Lee, an email at email@example.com.
You may also be interested in:
- Panel explores how the IT4IT Reference Architecture acts as a digital business enabler
- The UNIX evolution: A history of innovation reaches an unprecedented 20-year milestone
- The Open Group president, Steve Nunn, on the inaugural TOGAF User Group and new role of EA in business transformation
- A Tale of Two IT Departments, or How Cloud Governance is Essential in the Bimodal IT Era
- Securing Business Operations and Critical Infrastructure: Trusted Technology, Procurement Paradigms, Cyber Insurance
- Enterprise Architecture Leader John Zachman on Understanding and Leveraging Synergies Among the Major EA Frameworks
- Cybersecurity standards: The Open Group explores security and safer supply chains
- Explore synergies among major Enterprise Architecture frameworks with The Open Group
- Health Data Deluge Requires Secure Information Flow Via Standards, Says the Open Group's New Healthcare Director
- The Open Group Amsterdam Conference Panel Delves into How to Best Gain Business Value from Open Platform 3.0
- Healthcare Among Thorniest and Yet Most Opportunistic Use Cases for Boundaryless Information Flow Improvement