By Alex Tate
There is a lot of interest and excitement surrounding Artificial Intelligence (AI) and how it can bring sweeping changes to the healthcare ecosystem. The potential of AI is vast, and its avenues in healthcare remain largely unexplored. According to Accenture, successful and effective implementation of AI could save the US healthcare economy up to $150 billion annually by 2026. For a country that spends roughly $3.5 trillion each year on healthcare, savings of that size would ease considerable pressure. Accenture also predicts that the AI healthcare market will be worth $6.6 billion by 2021.
However, despite the encouraging statistics, a cloud of uncertainty still hangs over AI. There are various reasons why people are not comfortable with incorporating AI into healthcare, and the technology's limitations and intricacies do not help its cause. Even though experts are optimistic that AI will rejuvenate the healthcare system, several concerns cast doubt on its future.
Here are some reasons why the use of AI in healthcare proves challenging:
AI Is Still a Black Box
In IT, a black box refers to a system whose internal workings are not fully understood. For example, you may know how to turn on your laptop by pressing the power button while having no idea what internal mechanisms kick in as the device starts up; in that sense, the laptop is a black box to you.
For data scientists and IT experts, AI is very much a black box. They can put it to limited use but are often unsure how it actually reaches its conclusions. In principle, AI works through machine learning and neural networks; in practice, the exact reasoning behind any specific output remains poorly understood.
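The black-box point can be made concrete with a toy example. The sketch below (purely illustrative, with made-up weights standing in for a trained model) shows a tiny neural network in which every parameter is visible, yet none of the individual numbers reads as a human-interpretable rule explaining why the model produced its score:

```python
import math

# A toy 2-layer neural network with fixed, pretend-trained weights.
# Every number below is fully visible, yet no single weight maps to a
# human-readable rule like "high blood pressure -> high risk" -- which
# is the sense in which trained models are called black boxes.

W1 = [[0.9, -1.2], [0.4, 0.8]]   # hidden-layer weights (illustrative values)
b1 = [0.1, -0.3]                 # hidden-layer biases
W2 = [1.5, -0.7]                 # output-layer weights
b2 = 0.05                        # output-layer bias

def sigmoid(x):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict(features):
    """Map two input features to a single score in (0, 1)."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

score = predict([0.6, 0.2])
print(score)  # a score comes out, but no explanation of *why*
```

Real clinical models have millions of such parameters rather than seven, which is exactly why inspecting them individually tells us so little about the model's reasoning.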
Inadequate Technology to Support AI
Our current technology, especially on the hardware side, is not fully capable of running AI and its algorithms at scale. Only the most advanced computing infrastructure can handle demanding AI workloads, whereas the servers most hospitals currently operate may not be equipped to deal with the processing AI requires.
Lack of Social Acceptance
Even if we come to comprehend AI completely, we may not be able to incorporate it into the healthcare setup unless the public feels comfortable with it. People can be quite distrustful when it comes to the handling of their data. As long as AI lacks social acceptance, giving it access to PHI (Protected Health Information) would be unethical.
Data Protection and Privacy Issues
The American healthcare system takes patient privacy very seriously. Under HIPAA, breaches of patient information can draw civil penalties of up to $1.5 million per year for violations of a single provision. In light of such strict rules and hefty penalties for non-compliance, scientists cannot risk giving AI open access to patient data. Even if we come to understand the mechanics of AI, ensuring that data breaches do not happen would be an even greater challenge. The downside is that if this technology falls into the wrong hands, current data-safety protocols could be rendered useless.
Compliance and Regulations
Since the enactment of HIPAA, the US government has regularly imposed compliance requirements on the healthcare sector to ensure the safety of clinical information. Once AI features regularly in healthcare, the government may need to enact a new set of regulations to govern its applications. At this stage, scientists themselves are not sure how AI works; aligning it with federal and state regulations would be another ordeal entirely.
Threat to Job Security
Reduced job security due to increased automation and ‘smarter’ computers has been a growing dilemma over the past few years. Since machines are taking over much of the work previously performed manually, and businesses are pushing to cut costs in the ever-competitive healthcare arena, showing employees the door is becoming increasingly common. A decrease in healthcare jobs as a result of AI would not only increase unemployment but also have detrimental effects on the national economy.
Limited Decision Making for Providers
Richard Baldwin said, “No matter how advanced AI gets, it may never have the ability to be creative and think independently – something which natural intelligence is optimized to do.”
With complete access to patient data, AI might be able to prescribe treatments on its own. This could hinder a provider’s own judgment and decision-making. It can also make providers complacent, which is dangerous if the AI commits an error. Until AI becomes error-free, we cannot trust it to make the decisions a seasoned provider can.
Curtailed Provider-Patient Relationships
AI will completely change the dynamics of the provider-patient relationship. With AI doing much of the work, the provider may not have holistic control over the entire treatment process. AI will also allow patients to self-diagnose their ailments, leaving providers to validate those self-diagnoses instead of examining patients and diagnosing them appropriately.
Accountability for AI Errors
If AI fails to function properly and presents erroneous information, it is unclear who can be held accountable. In the case of a wrong prescription driven by AI, there is currently no mechanism for compensating the patient, and no enacted laws or regulations cover errors caused by AI. Considering that we do not yet fully understand AI, resolving such errors would be another challenge.
Hefty Training Costs
During the implementation of AI, providers and support staff would need specialized training. Making AI comprehensible to providers who are usually not fluent in IT jargon would be very difficult and could take weeks of training, if not months. The support teams assigned to AI-backed healthcare software would also need extensive training.
AI is arguably the most advanced technological innovation humanity has ever witnessed. In pilot testing it has shown promise, but the road ahead is full of bumps and turns. Like most other technological innovations, we may one day master AI and all its complexities, but for now that day seems distant. Until we reach it, we should rely on natural intelligence, not AI, for crucial decision-making in the healthcare sector.
Alex Tate has served in various positions at health IT organizations for the past thirteen years, most recently as Vice President at a leading EHR organization. He currently oversees EHR programs and revenue-cycle consulting for a number of organizations, and has previously supervised the development of many emerging products and held leadership roles in health-tech strategy, operations, service organization development, delivery, and optimization.
Disclaimer: The viewpoint expressed in this article is the opinion of the author and is not necessarily the viewpoint of the owners or employees at Healthcare Staffing Innovations, LLC.