Reality Check: Buy Versus Build for Laboratory Decision Support
Whitepaper | Ben Gold
Senior Director of Product, CareSelect®, Change Healthcare
James Colbenson, MBA/MHA
Director of Commercialization, Value-Based Medicine, Mayo Clinic Laboratories
This paper is part two of a three-part series developed in collaboration with Mayo Clinic. The series provides an industry perspective on provider organizations’ laboratory environment and the challenges they are facing. If your organization is committed to uncovering the full potential of the clinical laboratory, contact Change Healthcare to learn how CareSelect Lab decision support can help with your lab stewardship program.
Practicing medicine today is more complex than ever. The menu of diagnostics, medications and treatments available to providers is continually expanding. Standards of care are shifting as guidelines grow and multiply. There are more options and more decisions for everyone–from practitioners to laboratory technicians to IT staff.
Electronic health records (EHRs), originally intended to organize options and lighten information overload for both patient and provider, have developed into sophisticated multi-featured tools, supported by entire departments and strongly impacting—if not actually driving—clinician workflows.
At the intersection of the EHR and provider choice lies clinical decision support (CDS). The benefits of clinical decision support are particularly important for clinical laboratories, given the laboratory’s central role in most diagnoses and treatments. By harnessing evidence-based guidelines to optimize test utilization, laboratories can reduce costs and strengthen care.1 Labs using evidence-based CDS are also better positioned to manage financial risk in a value-based environment.
This white paper is the second of a three-part series developed in collaboration with Mayo Clinic Laboratories and Change Healthcare. It is intended to assist clinical laboratories in rationalizing the laboratory’s value and relevance in ways that support appropriate and high-quality patient care, fiscal strength, and program integrity for payers.
The white paper series provides industry perspective, commentary, and insight on the use and value of decision support in building an effective lab stewardship program. It will also highlight case-study proof points, developed in collaboration with Mayo Clinic, from early-adopter hospital laboratories that have successfully implemented third-party decision support to their advantage.
Organizations must think critically and move carefully before initiating any decision-support project, including those for laboratory systems. Laboratory stewardship, the ability to actively manage testing utilization to improve outcomes and control costs, is essential to success in a value-based environment.
With the increasing amount of information to be processed and options to be considered, adoption of evidence-based CDS has become both practical and necessary for today’s health systems. CDS has been shown to improve quality, standardize care delivery, and help control costs. A recent study found that diagnostic yield was 38% higher for CT pulmonary angiography (CTPA) in the evaluation of pulmonary embolism when the provider used a CDS tool.2
Although a growing number of decision-support solutions are available, few have been developed specifically for the laboratory environment, and many lack EHR integration with high-level clinical standards, guidelines, and analytics. As a result, many providers resort to the default option of developing their own solution.
Under optimal conditions, a homegrown approach can be effective in creating a workable mechanism to distribute clinical guidance via the EHR. But it’s not uncommon for an organization to be overwhelmed by the unanticipated scope of the task and the problems that can emerge.
These difficulties can result in substantial gaps between anticipated decision support benefits and real-world performance. Systemic shortcomings also contribute to “alert fatigue” and resulting burnout among clinicians.3 One study determined that physicians overrode more than 96% of alarms related to opioid prescriptions, and 99% of the alerts did not result in an actual or averted adverse drug event (ADE). In one instance, an EHR warning system fired off 123 unnecessary and clinically inconsequential alerts to prevent a single ADE.4
Inadequate or poorly conceived decision-support functionality can also generate mistrust, even enmity, between IT staff and clinicians. Most seriously, ongoing problems with a deployed solution may undermine future decision-support initiatives.
Effective laboratory decision support requires well-conceived testing guidelines, an effective conversion of those rules into EHR-enabled guidance, and ongoing, robust analytics. And like any evidence-based rule set, laboratory guidelines must be regularly reviewed to ensure continued applicability and relevance amid rapid advances in medical knowledge. Even for a small number of guidelines, this process typically requires the involvement of multiple stakeholder committees.
As such, unless an organization is willing to engage the considerable resources required to build and maintain a comprehensive decision-support platform, it is generally much better off partnering with a capable vendor to implement an existing, proven solution.
It is understandable that many hospitals seek to control costs by taking advantage of the build-your-own decision-support capabilities available within most EHRs. This functionality is typically touted by the EHR vendor, if not always described in great detail, and most hospitals have capable, tech-savvy clinicians interested in improving clinical care with custom solutions.
Yet the significant effort required to bring these projects to fruition should not be overlooked. In a recent survey by HFMA and Navigant, more than half of healthcare executives queried (56%) acknowledged that their organizations were unable to keep up with ongoing EHR upgrades.5 They also reported that they consistently underused their existing EHR functions.6
Homegrown decision support adds a new layer of complexity to EHR maintenance and operations, and it can stretch already overtaxed IT resources to the breaking point. Grappling with tasks such as synchronizing decision support with new EHR upgrades, extending the system to additional facilities, and changing or implementing new business intelligence rules across the enterprise can lead to breakdowns in both core EHR functionality and decision-support capabilities.
These operational problems, of course, are independent of the challenges associated with building a system from scratch. That process begins with guideline development. For this, most hospitals and health systems rely on a combination of internal policies and external, evidence-based protocols.
Difficulties occur when attempts are made to integrate these disparate sources into a comprehensive whole that all clinicians will support. Agreeing on the granular details of the guideline’s purpose and functionality generally requires extensive research and considerable give-and-take among those assigned to the task.
What’s more, it’s a process that must be replicated for every guideline. Given the number of laboratory tests performed in a hospital, it’s not hard to envision the time that could be required to work through a comprehensive set of guidelines.
Finally, major uncertainties may emerge about how tests themselves should be identified, since wide variation exists in testing nomenclature across healthcare. This lack of standardization is being tackled by a new industry coalition intent upon creating consensus and convention around laboratory test names.7 But the fact that such an effort is even necessary speaks to the unforeseen challenges that may arise when codifying evidence-based guidelines around specific testing procedures.
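To make the nomenclature problem concrete, the sketch below maps varying local test names to a single canonical identifier. All names and codes here are hypothetical, for illustration only; in practice, organizations typically map local names to a standard vocabulary such as LOINC.

```python
# Minimal sketch: normalizing hypothetical local laboratory test names
# to one canonical identifier. Every name/code below is illustrative.

# The same assay may appear under several local names across facilities.
LOCAL_TO_CANONICAL = {
    "TSH": "TSH-3RD-GEN",
    "Thyroid Stim Hormone": "TSH-3RD-GEN",
    "TSH, 3rd Generation": "TSH-3RD-GEN",
    "Vit D 25-OH": "VITD-25OH",
    "25-Hydroxyvitamin D": "VITD-25OH",
}

def canonical_test_id(local_name: str) -> str:
    """Return the canonical test identifier, or flag the name for review."""
    key = local_name.strip()
    return LOCAL_TO_CANONICAL.get(key, "UNMAPPED:" + key)

if __name__ == "__main__":
    print(canonical_test_id("Thyroid Stim Hormone"))  # TSH-3RD-GEN
    print(canonical_test_id("CBC w/ Diff"))           # UNMAPPED:CBC w/ Diff
```

Even this toy mapping shows why the industry effort matters: until such a table exists and is maintained, a guideline cannot reliably recognize that two differently named orders refer to the same test.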
It is true that most organizations begin their decision-support efforts by targeting only a small number of guidelines or alerts. Yet this approach raises a larger question about how effective a system of such limited scope can be.
There’s also a more fundamental and potentially critical problem surrounding guideline development that often doesn’t emerge until later in the process. At the project’s outset, a consensus presumably exists around the larger goals for the decision-support implementation, as well as the specific challenges the guidelines are meant to address.
However, if the project is conceived strictly on the basis of anecdotal observations, without quantification of existing clinical behavior and utilization patterns, the development team is effectively flying blind. In the absence of detailed baseline information and analytics, it’s difficult to be certain the project objectives are the right ones. Moving ahead before an initial analysis has been conducted in effect inverts W. Edwards Deming’s famous Plan-Do-Study-Act cycle for continuous process improvement.8
Consequently, teams may start down a path based on faulty assumptions, only to discover fundamental errors well into the effort. By that point, considerable resources have been expended and the opportunity to correct course has passed.
The inherent dilemma facing organizations that build their own decision-support systems is that, absent pre-existing utilization analytics, establishing baseline data is virtually impossible without first getting a system up and running, however ill-conceived the initial objectives may ultimately prove to be.
Even if a comprehensive set of clinical guidelines is successfully developed to address a proven need, a major disconnect can occur during the conversion of the guidelines into the code necessary to deliver the information through the EHR.
Whether the guidelines are based on existing, peer-reviewed clinical recommendations or internally developed policies, translating them into workable EHR-based rules is enormously difficult, for the simple reason that most guidelines were never written with the rigid requirements and constraints of informatics in mind. EHRs are evolving, however, to accommodate integration with high-level clinical standards, guidelines, and decision-support systems through application programming interfaces (APIs).
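To illustrate what even the simplest such translation involves, the sketch below encodes one hypothetical rule in plain Python: suppress a repeat order for a test that was resulted within a minimum retest interval. The test names and interval values are assumptions for illustration, not clinical guidance, and a production rule would live inside the EHR’s order-entry workflow rather than a standalone script.

```python
from datetime import datetime, timedelta

# Hypothetical minimum retest intervals. In a real deployment these
# values would come from the stewardship committee's approved
# guidelines, not from constants in code.
MIN_RETEST_INTERVAL = {
    "HBA1C": timedelta(days=90),  # illustrative only
    "TSH": timedelta(days=42),    # illustrative only
}

def duplicate_order_alert(test_id, last_resulted, now):
    """Return an alert message if the order repeats too soon, else None."""
    interval = MIN_RETEST_INTERVAL.get(test_id)
    if interval is None or last_resulted is None:
        return None  # no rule for this test, or no prior result on file
    elapsed = now - last_resulted
    if elapsed < interval:
        return (f"{test_id} was resulted {elapsed.days} days ago; "
                f"guideline suggests waiting {interval.days} days.")
    return None

if __name__ == "__main__":
    msg = duplicate_order_alert(
        "HBA1C", datetime(2020, 1, 1), datetime(2020, 2, 1))
    print(msg)
```

Even this toy rule forces decisions the written guideline rarely spells out: what counts as the prior result, how to behave when no rule exists for a test, and what the alert text should say. Multiplied across hundreds of guidelines, those decisions are where homegrown projects bog down.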
Moreover, because clinicians naturally are interested in creating the most comprehensive system possible, they may overreach in conceiving the application’s functionality. When confronted by IT staff about the EHR’s real-world limitations, the clinician team likely will be required to modify their specifications significantly.
This will take time. It may also lead the clinicians to unfairly blame IT staff for apparently lacking the skills required to transform their vision into reality, however unrealistic that vision may have been. They likewise may point the finger at the EHR and lose faith in its capabilities. In such instances, both time and goodwill are unnecessarily expended.
Assuming a decision-support system is successfully launched and begins delivering quality information at the point of care, it is understandable for those involved—both clinicians and technologists— to assume their work is largely done. This is particularly true if any of the aforementioned problems have cropped up and been addressed over the course of the project.
But the reality is that even with the system operational, an essential, ongoing task remains. Until and unless powerful analytics are developed to work in conjunction with the decision support system’s huge volume of performance data, the platform’s value will be limited. Any true laboratory stewardship program must be predicated on a detailed understanding of actual clinician behavior.
Without the ability to continually monitor and measure utilization across both test type and provider, to quantify guideline adherence, and to project cost information, decision support’s ability to sustain genuine utilization improvement is seriously compromised.
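A minimal sketch of the adherence measurement described above, written in plain Python over hypothetical order records (the record layout, provider identifiers, and data are assumptions for illustration):

```python
from collections import defaultdict

# Each hypothetical record: (provider_id, test_id, followed_guideline).
ORDERS = [
    ("dr_a", "TSH", True),
    ("dr_a", "TSH", False),
    ("dr_a", "HBA1C", True),
    ("dr_b", "TSH", True),
    ("dr_b", "TSH", True),
]

def adherence_by_provider(orders):
    """Return each provider's guideline adherence rate across all orders."""
    totals = defaultdict(int)
    followed = defaultdict(int)
    for provider, _test, ok in orders:
        totals[provider] += 1
        if ok:
            followed[provider] += 1
    return {p: followed[p] / totals[p] for p in totals}

if __name__ == "__main__":
    for provider, rate in sorted(adherence_by_provider(ORDERS).items()):
        print(f"{provider}: {rate:.0%}")  # dr_a: 67%, dr_b: 100%
```

The point is not the arithmetic, which is trivial, but the pipeline behind it: producing trustworthy per-provider rates requires clean order data, consistent test identifiers, and a reliable record of whether each order followed the guideline, all of which a homegrown system must build and maintain.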
Just as analytics are critical in helping establish the project’s direction at the outset, so too must they be present to maintain the proper course going forward. Empirical data represents the primary vehicle for engaging clinicians about their utilization habits and arguably is the most effective tool for changing them.
Beyond the need to support analytics and utilization monitoring after a decision-support system is deployed, organizations must also be prepared to review the underlying clinical guidelines regularly. This is vital to mitigate the risk of propagating guidance that has been found to be less than effective or is no longer relevant.
Updates are particularly important for genetic tests, given the rapid changes sweeping that realm of clinical testing. In 2018, researchers determined that approximately 75,000 genetic tests were already available, with about 10 new tests entering the market daily.9 Medical knowledge itself is exploding: in 2010 it was doubling every 3.5 years; by 2020, the doubling time was projected to shrink to just 73 days.10
Because dozens of guidelines may populate the system, conducting peer-review-style assessments will require considerable time and effort. Any rule changes also will mean the involvement of IT staff, which translates into additional time and resource consumption.
There’s no question that decision support is, or should be, the foundation of an effective laboratory stewardship program: it can greatly assist hospitals in controlling utilization and help them meet the challenges of value-based care. Given this pressing need, the only question is whether the hospital will build the system itself or look to an external vendor.
If the hospital or health system has the analytics required to make informed decisions up front about where best to focus finite resources; if it can sustain guideline development and system build-out over the long haul and at scale; if it has the horsepower to deliver ongoing utilization analysis; and if it has the expertise to update the guidelines as required, then it may make sense to move forward internally.
But even then, key questions arise: Are these really the kinds of tasks a healthcare organization should be focused on? Do they reflect core competencies? Can a provider truly expect to be successful on a project of this scope? If so, for how long? And at what cost?
Conversely, if a partner were to deliver a viable decision support infrastructure, how much more time could clinicians and staff spend on clinical and operational issues and the provision of care?
By turning to a proven decision-support solution, laboratory managers and hospital leaders can eliminate the uncertainty and risk that surround internal design-build projects for comprehensive decision support. Instead, they can focus on harnessing the benefits of proven laboratory decision-support solutions to create a stewardship program that will be effective and sustainable far into the future.
Part two of this three-part series has laid out the case for buying versus building laboratory decision support. Part three will provide a proof-point perspective, developed in collaboration with Mayo Clinic, from actual hospital laboratories that have successfully implemented third-party decision support.
1 Safiya Richardson, et al., “Higher Imaging Yield When Clinical Decision Support Is Used,” JACR, December 2019
2 Procop, G.W., et al., “Duplicate laboratory test reduction using a clinical decision support tool,” American Journal of Clinical Pathology, May 2014
3 “Clinical Decision Support Systems Could Be Modified to Reduce ‘Alert Fatigue’ While Still Minimizing the Risk of Litigation,” Health Affairs, December 2011
4 “The boy who cried wolf: Drug alerts in the ER,” press release, American College of Emergency Physicians, Nov. 9, 2015
5 “EHRs, Consumer Self-Pay Remain Providers’ Top Revenue Cycle Challenges,” 2019 Navigant/HFMA Revenue Cycle Trends Survey, Sept. 25, 2019
7 “What’s in the Name of a Clinical Laboratory Test?” Dark Daily, April 8, 2019
8 Ferhan Syed, “Deming Cycle: The Wheels of Continuous Improvement,” Total Quality Management, Feb. 25, 2009
9 Kathryn A. Phillips, et al., “Genetic Testing Availability and Spending: Where Are We Now? Where Are We Going?,” Health Affairs, May 2018
10 Peter Densen, “Challenges and Opportunities Facing Medical Education,” Transactions of the American Clinical and Climatological Association, 2011