AI offers healthcare supply chain wonder, wisdom, whimsy and weakness


But it’s important to separate fact from fiction … and fantasy

By R. Dana Barlow

October 2024 – The Journal of Healthcare Contracting


Artificial intelligence (AI) represents a technology that straddles two opposing realms.

On the right rest seemingly endless possibilities for accomplishing tasks more efficiently than before. On the left lingers deep-seated fear and trepidation that the technology will replace the need for human effort (physical) and ingenuity (mental).

The truth, however, may teeter on the sliver slicing through both realms.

AI signifies the most revolutionary and polarizing technology offering since the general public was granted access to the internet, which later sired social media – the three of them culturally serving as tentpoles under the big top of the digital circus.

Unless you’ve given up listening to, reading or watching media during the last few years, you know the storylines around AI, generative or otherwise: People have imagined accomplishing tasks more easily, quickly and efficiently; employers have imagined accomplishing tasks without people; and AI itself, short of sentience, has shown the peril of not having enough power and bandwidth to satisfy people’s expectations – all against the backdrop of concerns from environmentalists and sustainability advocates.

Outlandish expectations

Healthcare supply chain executives and leaders fully recognize the subjectivity and severity of outlandish expectations of AI. But humans likely will still be needed.

“One thought that comes to mind is that AI will provide flawless recommendations,” indicated Jason Moulding, Chief Supply Chain Officer and vice president, Performance Management, MultiCare Health System, and president, MultiCare’s Myriadd Supply. “I think AI has the ability to give you a ‘head start’ and potentially shorten the decision-making process, but the output of AI still needs to be validated and scrutinized by a human. Undoubtedly, AI will continue to improve in its accuracy, reliability and validation, but I see the need for continuous human monitoring to ‘fact check’ and have the discernment and decision-making call for the near future.”

Brendon Frazer, marketing director, Pandion Optimization Alliance, points to the human element, too.

“The most common [expectation] I see is that they believe AI is capable of replacing staff,” he noted. “As AI currently stands, there are very few jobs it can truly replace; however, it can evolve many jobs to higher levels quite easily.”

One key expectation is that AI will dramatically change the way we care for patients in the future, according to Gary Fennessy, vice president and Chief Supply Chain Executive, Northwestern Medicine.

“For me the human contact and relationship to a caregiver will always be the basis of how care is delivered and evaluated,” he said. “I also think that the cost of implementing AI will offset some of the benefits that AI provides. As AI becomes more integrated, infrastructure changes will be required. The computing requirements of AI are significant. As a result, the financial benefits and overall productivity may be far less than what are early expectations.”

Steve Downey, Chief Supply Chain & Patient Support Services Officer, Cleveland Clinic, and CEO & President, Excelerate, a supply chain-concentrated joint venture between Cleveland Clinic, Vizient and OhioHealth, homes in on three key areas that require additional forethought, insight and understanding.

First is data integration. “Ensuring that the source data is seamlessly accessible for AI processing is crucial,” Downey told The Journal of Healthcare Contracting. “The primary challenge often lies in making data available in a usable format.”

Second is ease of use. “Users need to be able to effectively interact with the tool,” he continued. “Crafting prompts can be tricky, and the quality of responses is heavily dependent on the clarity of the input questions.”

Third is vendor progress. “With our vendors, many are working on integrating AI into their operations and devices. This highlights the need for the healthcare sourcing team to review contracts regarding how vendors use your data for model training,” he noted.

Through extensive and ongoing research, David Dobrzykowski, Ph.D., professor and director, Walton College Healthcare Initiatives, and senior Ph.D. program coordinator, JB Hunt Transport Department of Supply Chain Management within the Sam M. Walton College of Business at the University of Arkansas, not only outlined the high points but highlighted the potential low points as well.

“Concerns that AI will greatly eliminate jobs and employment opportunities in supply chain are probably exaggerated,” he assured. “AI is another tool that supply chain pros, and even clinicians, will use to enhance their performance. Sure, some jobs will be eliminated, but there is so much low-hanging fruit to improve healthcare supply chain performance that I believe these jobs can be redesigned and people redeployed to bring greater value to their organizations.”

The notion of “redesigning jobs” and “redeploying staff” raises all kinds of questions – from the cynics decrying these phrases as diplomatic euphemisms for layoffs, to the skeptics concerned about the rigorous training needed for coding, programming and repair of hardware and software, to the realists and futurists who relish migrating manual and physical labor to automation while leaving creative and strategic planning and thinking to humans.


Dobrzykowski cites two statistics from Gartner’s “Top Strategic Predictions for 2024 and Beyond” that he finds relevant:

  • By 2028, there will be more smart robots than frontline workers in manufacturing, retail and logistics due to labor shortages.
  • By 2028, the rate of unionization among knowledge workers will increase by 1,000%, motivated by the adoption of GenAI.

Dobrzykowski thinks about AI in two broad categories – augmented AI and intelligent automation. “The former will be useful in supporting knowledge workers in activities like strategic planning, RFP evaluation and vendor selection, even simulating buyer-supplier negotiations. Augmented AI will help supply chain pros to be more effective and consequently efficient in what we do, but not necessarily replace these managers. Augmented AI also stands to significantly support Value Analysis teams that seek to better understand consumption patterns of their SKUs and seek to improve inventory performance across a multitude of provider sites,” he said.
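Purely as an illustration of the value-analysis use case Dobrzykowski describes – understanding SKU consumption across multiple provider sites and spotting inventory that may be out of line with demand – here is a minimal Python sketch. The column names, figures and the “periods of supply” calculation are assumptions made for the example, not a reference to any particular system or data set.

```python
import pandas as pd

# Hypothetical usage data across multiple provider sites (illustrative only).
usage = pd.DataFrame({
    "sku":      ["GLOVE-M", "GLOVE-M", "IV-SET", "IV-SET", "SUTURE-3"],
    "site":     ["Hospital A", "Hospital B", "Hospital A", "Hospital B", "Hospital A"],
    "qty_used": [1200, 900, 450, 520, 75],   # units consumed this period
    "on_hand":  [300, 450, 150, 90, 200],    # units currently in inventory
})

# Consumption pattern by SKU across sites: total demand and where it occurs.
consumption = usage.groupby("sku")["qty_used"].sum().sort_values(ascending=False)

# A crude "periods of supply" measure: how long current stock would last at the
# observed consumption rate (higher values suggest potential excess inventory).
usage["periods_of_supply"] = usage["on_hand"] / usage["qty_used"]

print(consumption)
print(usage.sort_values("periods_of_supply", ascending=False))
```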

“Intelligent automation AI, as the name implies, will automate tasks currently performed by humans like reconciling purchase orders (POs) and resolving exceptions. These individuals will need to reskill or upskill to positions that require higher-level judgment – positions that are supported by augmented AI. In either case, supply chain pros will need to think more strategically and understand end-to-end supply chain from a higher-level perspective. This makes master’s programs in Supply Chain Management a key developmental pursuit for supply chain pros,” he added.

The University of Arkansas’ Sam M. Walton College of Business has offered specific analytics courses in its Master’s in Supply Chain Management curricula since 2021, according to Dobrzykowski, as well as other related programs for healthcare managers, including a Master’s in Supply Chain Management with healthcare elective courses, a Healthcare EMBA and a Master’s in Healthcare Analytics. The school’s undergraduate program features analytics courses, too.

Some may wonder about the nature and level of the “reskilled” and “upskilled” positions to which Dobrzykowski refers.

“Many administrative jobs will be replaced by robotic process automation (RPA),” he explained. “For example, one distributor I am familiar with was able to replace nine positions in their contracting department by deploying RPA to update the estimated time of arrival (ETAs) on [purchase orders]. In 30 days, the bots updated 25,000 POs. This frees up human resources to focus on activities that create more value for the organization. These skills center on a more global view/understanding of the supply chain and stronger analytical thinking and skills. A more global view of the supply chain means understanding how decisions in one part of the supply chain may affect other functions. For example, a change in pricing tiers in contracting can significantly affect procurement and materials management.

“Many experts anticipate that AI will affect knowledge work (office work) much like automation and advanced robotics previously affected more physical work, like manufacturing and assembly work,” he added. “As such, positions like supply chain techs who are responsible for managing PAR locations on the floors may not feel as much of a direct impact from AI.”      
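To picture the kind of repetitive work Dobrzykowski’s RPA example automates – refreshing the estimated time of arrival on open purchase orders – here is a minimal, hypothetical sketch. The data structures, the `lookup_carrier_eta` stand-in and the update logic are illustrative assumptions, not a description of the distributor’s actual bot, ERP or carrier integration.

```python
from datetime import date, timedelta

# Hypothetical open purchase orders awaiting an ETA refresh.
open_pos = [
    {"po_number": "PO-1001", "carrier_ref": "TRK-88", "eta": None},
    {"po_number": "PO-1002", "carrier_ref": "TRK-91", "eta": date(2024, 10, 1)},
]

def lookup_carrier_eta(carrier_ref: str) -> date:
    """Stand-in for a call to a carrier tracking service."""
    return date.today() + timedelta(days=3)

def update_po_etas(pos: list[dict]) -> int:
    """Refresh the ETA on every open PO; return how many were changed."""
    updated = 0
    for po in pos:
        new_eta = lookup_carrier_eta(po["carrier_ref"])
        if po["eta"] != new_eta:
            po["eta"] = new_eta   # a real bot would write this back to the ERP
            updated += 1
    return updated

print(f"Updated {update_po_etas(open_pos)} purchase orders")
```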

Costly mistakes

Unrealistic expectations of AI capabilities and informational output can lead to mistakes by supply chain executives, leaders and pros using the technology. It’s something like the old refrain that emerged during the electronic data interchange years of the 1990s, “automating bad data just means you’re transmitting bad data faster.”

Cleveland Clinic’s Downey sees four potential mistakes that supply chain should be prepared to spot and solve.

The first is a “one-size-fits-all” mentality. “AI encompasses a range of technologies, including machine learning (ML), analytics, robotic process automation (RPA), large-language models (LLM) and much more,” Downey noted. “Each has its own application in healthcare supply chains, so a one-size-fits-all approach isn’t effective. For example, we’re working with RPA on our accounts payable processes, ML on data management, analytics on formulary compliance visualization and LLMs for our contract repository.”

A second red flag involves the pace of progress. “This is a fast-paced world that needs quick cycle contracting, strong partnerships and continuous vendor feedback on development,” he said. “It’s essential to differentiate between actual progress and mere hype.”

Legal and regulatory concerns linger as the elephant in the operation. “There’s a need to understand the legal and regulatory implications of using health system data to train LLMs,” he urged.

Finally, it’s important to balance speed with risk management. “We need to advance quickly enough to drive change while carefully addressing third-party risks, particularly in cybersecurity and data governance, to ensure that critical steps aren’t overlooked,” Downey noted.

Pandion’s Frazer issues some red alerts that concern him, starting with automated infallibility.

“A mistake I often see is the complete and total trust of information that AI gives,” he indicated. “AI makes mistakes, sometimes small ones, sometimes massive ones; but it is exceptional at sounding correct.”

Leadership can exacerbate the problem as well.

“Related to this issue, a very common mistake is leadership will often take unqualified personnel and task them with using AI to do a job they have no understanding of,” Frazer continued. “This allows the mistakes AI can make to go unnoticed, and decisions to be made based on bad information, since the user was not able to notice when a convincing mistake had been made.”

MultiCare’s Moulding also cites data accuracy and reliability as key problems.

“Probably data integrity would be high on my list,” he said. “If one is using AI in a closed environment, one would need to ensure that they have robust data governance, data definitions and analytics and insights to evaluate the AI outputs. Having bad data will most likely create bad outputs by the AI. Strong structures and talent are needed to harness the benefits of AI.”

Yet it’s important to place existing and future opportunities into the context of time, according to Northwestern’s Fennessy.

“In my opinion we are in the early stages of utilizing AI so it’s difficult for me to identify what mistakes we are making,” he indicated. “One concern I have is what process we will utilize to audit and understand the process by which AI makes decisions. I see new roles in organizations associated with managing AI processes and evaluating how decisions are made. I believe it is a mistake to think a physician is going to get a diagnosis from AI and not question the basis by which that diagnosis is made.” This extends well beyond the clinical decision support systems that doctors and surgeons have used for years that can direct and reframe diagnoses.

When it comes to AI, healthcare supply chain executives, leaders and pros must focus on the bigger picture, according to Dobrzykowski.

“The biggest mistake that providers make is approaching AI without an overarching strategy,” he said. “Our research suggests that 81% of healthcare executives do not have an analytics strategy. Without a strategy to guide your technology investments, it’s easy to fall into the trap of either paralysis (not engaging new tech) or gambling (trying one-off tech implementations that may or may not produce your desired results). These traps hinder an organization’s progress toward improvement, or even worse, failed implementations can leave a hospital with the perception that AI won’t help them achieve their performance goals.”

Embracing AI as some kind of magic potion or silver bullet isn’t a reliable strategy either.

“The reality is that our research suggests that the capabilities of AI are already far beyond those actually implemented by providers,” Dobrzykowski noted. “As researchers, we have uncovered predictive and prescriptive applications for AI in healthcare supply chain, yet most pros and executives are unaware of these capabilities and are still asking questions capable of being supported by descriptive statistics.”

What might some of those questions entail?

Dobrzykowski and his colleagues have been working with Randy Bradley, Ph.D., CPHIMS, FHIMSS, Associate Professor of Supply Chain Management and Information Systems, Department of Supply Chain Management within the Haslam College of Business at the University of Tennessee, Knoxville, and his colleagues on a large-scale research project investigating AI and other technology trends.

“We have engaged with over 1,300 healthcare executives, and so far, we find that nearly 60% of workshop participants report they primarily ask questions that fall in the category of descriptive/diagnostic analytics,” Dobrzykowski said. “Such questions include ‘what happened,’ ‘how often,’ ‘what exactly is the problem,’ and ‘what actions are needed.’ These questions are important because they help organizations get to a well-defined problem. When comparing the focus on descriptive analytics questions to predictive (e.g., ‘why is this happening?,’ ‘what will happen next?’) and prescriptive analytics (e.g., ‘what will be the impact if we try this,’ ‘what’s the best that can happen?’), whose percentages are 27% and 13%, respectively, it is clear organizations are not maturing in their use of analytics.”

Weakest links

Unrealistic expectations that can lead to overt and covert mistakes also may help uncover some of the weakest links that current and developing iterations of AI possess.

Pandion’s Frazer points to the “inability to truly complete complex tasks in a ‘hands off’ manner” as a weak spot, and that “even though this is rapidly becoming a possibility, there are some growing pains.”

True accuracy represents another issue, according to Frazer. “AI ‘hallucinating’ – serving incorrect data or even making data up almost like a human misremembering something – is an ongoing issue,” he warned. “Having professionals review all their AI-generated materials for inaccuracies is a necessity.”

Further, AI is only as good as its training data, according to Frazer. “The conclusions it draws, the knowledge it has, are all based on what data it was trained with. More niche or newly emerging subjects have less information to draw from, making AI less useful and more stale in those areas.”

Frazer also contends that AI is not “original.” “The advice it gives and recommendations it has are pulled from the data it was trained by, so while it is good for brainstorming and structured tasks and ‘by the book’ advice, truly original ideas still need to come from the person using AI,” he noted.

MultiCare’s Moulding emphasizes the glaring accuracy issue as well. “I think the primary ‘weakest’ link will be in an organization’s data integrity and governance, followed by having the skilled talent to interpret, validate and implement the AI outputs,” he said. “Uncovering these weaknesses would range from ensuring that you have a robust data and analytical governance council (if not a specific AI governance council) to conducting data audits, testing predictive models and monitoring performance.”
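As a rough illustration of what “testing predictive models and monitoring performance” can look like in practice, the sketch below compares a hypothetical demand forecast against actuals and flags drift. The figures, the choice of mean absolute percentage error (MAPE) and the 15% alert threshold are assumptions for the example, not MultiCare’s process.

```python
import statistics

# Toy example: compare a model's demand forecasts against actual usage on
# held-out periods and alert when the error drifts past an arbitrary threshold.
actual   = [100, 120, 90, 110, 130, 95]    # actual weekly usage of a SKU
forecast = [105, 115, 100, 108, 120, 140]  # what the model predicted

abs_pct_errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
mape = statistics.mean(abs_pct_errors)     # mean absolute percentage error

ALERT_THRESHOLD = 0.15   # assumption: alert if average error exceeds 15%
if mape > ALERT_THRESHOLD:
    print(f"Forecast drift detected: MAPE {mape:.1%} exceeds {ALERT_THRESHOLD:.0%}")
else:
    print(f"Forecast within tolerance: MAPE {mape:.1%}")
```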

AI raises five issues that should give healthcare executives, leaders and pros pause, according to Cleveland Clinic’s Downey.

One is training data selection. “All data included in the training set becomes part of the system’s learning. Ensure that this data aligns with your desired outcomes,” he advised.

Question clarity is another. “The responses from LLMs are influenced by the questions asked. It’s essential to frame your questions clearly and precisely to get accurate answers,” Downey recommended.

Separating hype from reality is yet another. “Quickly distinguishing between genuine opportunities and hype is crucial. This involves evaluating prospects efficiently, testing assumptions and making necessary adjustments.”

AI in supplier services raises concerns, too. “Be aware of when suppliers incorporate AI into their product development or service delivery. This usage should be explicitly detailed in contracts, as it’s not always immediately apparent.”

Finally, it’s important to have realistic expectations. “AI won’t automatically solve all problems. Effective implementation often requires addressing underlying processes and data requirements to fully leverage AI’s benefits,” he concluded.

Northwestern’s Fennessy pulls the viewfinder back even further.

“What we don’t know is the weakest link,” he said. “Will AI cover up problems that exist in core systems that give the impression that everything is working properly? The old saying garbage in, garbage out becomes even more acute. At some point, there will be a significant malpractice lawsuit associated with AI that creates all sorts of interesting scenarios. Who gets sued? The physician and hospital, the company that developed the AI process? And then counter lawsuits.”    

Dobrzykowski points to the treasure trove of hospital data on which AI feeds, which raises cybersecurity risks along with the age-old concerns that dogged EDI decades earlier.

“AI will further accelerate the massive volume of data that lives on hospital systems’ IT systems, making them increasingly attractive to hackers,” he said. “However, another threat rests in our human reliance on the output of AI. AI truly is a garbage in – garbage out system, so if we feed bad data or faulty business rules and assumptions into the system, it will return garbage that managers may or may not recognize as such.”

The old worry that loading bad data into EDI – or now AI – only means you’re transmitting it faster holds true to some degree, according to Dobrzykowski, which justifies continued human participation.

“But remember, at the end of the day, AI is rooted in statistical methods. These methods are based on identifying variance. For example, when ‘X’ goes up, ‘Y’ goes up. These systems can’t tell us why ‘X’ or ‘Y’ went up. That requires human observation, experience and judgment.”
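A toy example makes that point concrete. In the sketch below, the two series and their labels are invented for illustration; the code simply shows that a statistical method can report that X and Y move together without saying anything about why.

```python
import statistics

# Made-up data: the correlation is real arithmetic, the interpretation is not.
x = [10, 12, 15, 18, 22, 25]   # e.g., weekly surgical case volume
y = [40, 45, 58, 70, 85, 96]   # e.g., weekly supply spend ($ thousands)

r = statistics.correlation(x, y)   # Pearson correlation (Python 3.10+)
print(f"Correlation between X and Y: {r:.2f}")

# A high r only says the two series rise and fall together. Whether cases drive
# spend, spend drives cases, or a third factor drives both is exactly the
# question that still requires human observation, experience and judgment.
```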

Realistic outcomes

Healthcare supply chain executives, leaders and pros who can navigate through the fog of AI potential will find bright spots that bring value, experts insist. One key benefit involves a clock.

“I believe that AI will allow caregivers and support teams to optimize and utilize their time more effectively,” Fennessy indicated. “It will fundamentally give time back to everyone. What and how we utilize that time will be the key to execution and effective impact of AI.”

Frazer and Moulding concur.

“The summarization of large amounts of information in easy-to-understand terms, tailored for specific tasks, will raise the floor for all employees utilizing AI properly, causing them to focus less on low-skill, time-consuming tasks, and focus more on the specific tasks that require a more nuanced look than AI is currently capable of,” Frazer noted.

“I think speeding up cycle times as it relates to creating insights from complex and disparate datasets and decision making are the main benefits at this point,” Moulding said. “Distilling vast amounts of data to tell a story, whether it is operational improvements or improvements in supply utilization that impacts cost, quality and outcomes, is one of the main focus areas of my team.”

Downey acknowledges the learning curves AI requires. “Understanding new technology, mastering effective prompting, and identifying the right use cases requires time and experimentation,” he said. “For example, we have a Cleveland Clinic environment for ChatGPT, as well as a CoPilot evaluation underway. Knowing what to ask CoPilot and how it can best be used took time and experimentation by the team. Now our [Microsoft] Teams meetings are often transcribed and summarized by CoPilot, saving time for the group.”

But he warns about “garbage in, garbage out.” “This issue is evident when using LLMs with spreadsheets. Attempts to analyze spreadsheets often lead to incorrect results if the data isn’t properly loaded or formatted, resulting in unreliable answers,” he said.
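Below is a minimal sketch of the kind of up-front loading and format check that helps avoid the spreadsheet problem Downey describes. The required columns, file format and checks are assumptions for illustration, not a description of any specific Cleveland Clinic tool or workflow.

```python
import pandas as pd

# Sanity-check a spreadsheet *before* handing it to an LLM or any analysis.
# Column names, dtypes and checks here are illustrative assumptions only.
REQUIRED_COLUMNS = {"po_number", "item_description", "unit_price", "quantity"}

def load_and_check(path: str) -> pd.DataFrame:
    df = pd.read_excel(path)                      # or pd.read_csv(path)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Spreadsheet is missing columns: {missing}")
    # Coerce numerics and surface rows that silently failed to parse.
    df["unit_price"] = pd.to_numeric(df["unit_price"], errors="coerce")
    bad_rows = df[df["unit_price"].isna()]
    if not bad_rows.empty:
        print(f"Warning: {len(bad_rows)} rows have unreadable prices")
    return df
```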

Dobrzykowski predicts improvements in fill rates, reduction in inventory levels and even improved patient outcomes related to HCAHPS and other important metrics that drive hospital reimbursement. “These improvements will be realized as health systems gain better visibility and insights into their own data trends, and in the future, broader industry trends,” he said. “Many of these tools are already available through distributors and 3PLs.”
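For readers less familiar with the fill-rate metric Dobrzykowski mentions, one common formulation is line fill rate: order lines shipped complete divided by order lines requested. The numbers in the snippet below are purely illustrative.

```python
# Line fill rate: lines shipped complete divided by lines ordered.
lines_ordered = 2_400
lines_filled_complete = 2_232

fill_rate = lines_filled_complete / lines_ordered
print(f"Line fill rate: {fill_rate:.1%}")   # 93.0%
```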

Yet timing and speed don’t represent the advantage that suppliers enjoy, but access, according to Dobrzykowski. “It’s more that distributors and 3PLs have access to more data – from several health systems. They can use these data to analyze broad industry trends. They are able to act as data aggregators. A single health system will likely be in a position to analyze their data in isolation of others. Providers have historically found it very difficult to overcome IT, regulatory and cultural challenges in sharing data with other providers.”
