Changing Healthcare Relationships in the Era of Artificial Intelligence

Enjoy this long-form essay – a first for SSVMS! Reader feedback requested at the end.
Disclaimer: Artificial intelligence was used to create images and find some of the references used for this article.
A childhood dream comes true… or so it seemed
I still remember the first time I had a conversation with ChatGPT. What a thrilling experience! As I asked the computer questions and posed problems, then got cogent answers in real time, my imagination soared. I could envision a future in which I had a tireless, all-knowing digital assistant to relieve me of the countless tedious tasks that took up my time. The computer, I thought, could take my life to places I could only dream of. Knowledge, ease, and super-human abilities—who wouldn’t want that future?
Then my doctor’s mind kicked in. I imagined how artificial intelligence (AI) would someday revolutionize the practice of medicine. Computers could answer patients’ routine questions so they wouldn’t have to email their doctor. Clinicians could get answers to life-and-death questions in real time and provide better care. Everyone, including patients, would have instant access to the best specialists in the world, right from their smartphone. The possibilities of a healthier world seemed endless.
My next reaction transported me back to a memory from childhood. I was 7 years old when my brother and I watched the original Star Trek series on television, seeing Captain Kirk and Mr. Spock talk to a computer. With its very electronic, non-human-sounding voice, the computer was able to analyze problems and give them all the information they needed to explore the galaxy.
The attraction of artificial intelligence, I realized, wasn’t just about greater convenience and noble goals related to patient care. It was also, like Star Trek itself, about the magic of technology. AI would be the awesome fulfillment of a fantasy of power and ease that would transport us to distant galaxies of achievement, helping us “boldly go” where no one has gone before.
Once I was hooked by ChatGPT, I started following emerging stories on its development, and the news of breakthroughs in healthcare came steadily. First, we learned that an AI chatbot could pass the medical board exams. More news came about how the diagnostic accuracy of AI bots was on par with that of practicing physicians. And then came the study showing that a computer could answer patient emails with greater empathy than a group of human physicians, and that the computer-generated emails were more satisfying to patients. Simply amazing! On the heels of these breakthroughs, healthcare leaders announced that we were on the threshold of a Golden Age when doctors would partner with AI technology to provide better medical care while reducing burnout and saving money.
Cracks in the computer code
After a short while, however, along with the news of AI success came a number of disturbing reports. First, the problem of AI “hallucinations,” or more accurately, confabulations. AI is currently based on Large Language Models (LLMs), which are extraordinarily powerful auto-complete programs. It’s similar to how your smartphone guesses what you might type next in a message to a friend. The difference is that the LLM has been trained on the content of the entire internet, so it has a super-human ability to recognize context. There is no actual thinking or understanding, however, just a very sophisticated guess at what words are supposed to come next. The problem is that LLMs sometimes guess wrong.
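For readers who want a concrete picture of what “auto-complete” means here, the toy sketch below builds a next-word predictor from simple word counts. It is only an illustration of the underlying idea and is not drawn from any actual AI product; real LLMs use neural networks trained on vast datasets rather than tables of counts.

```python
# Toy "autocomplete": predict the next word from simple word counts.
# Purely illustrative - real LLMs use neural networks trained on
# enormous datasets, not frequency tables like this one.
from collections import Counter, defaultdict

training_text = (
    "the patient reports chest pain . "
    "the patient reports shortness of breath . "
    "the patient denies chest pain ."
)

# Count which word tends to follow each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Guess the word that most often followed `word` during training."""
    if word not in follows:
        return "<no idea>"   # unlike this toy, a real chatbot rarely admits this
    return follows[word].most_common(1)[0][0]

print(predict_next("patient"))  # -> "reports" (seen most often after "patient")
print(predict_next("chest"))    # -> "pain"
print(predict_next("zebra"))    # -> "<no idea>" (never seen in training)
```

Scaled up enormously, this guess-the-next-word mechanism is what produces fluent, helpful answers, and it is also what produces confident-sounding wrong ones.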
Despite huge investments in greater computing power, more training data, and more sophisticated programming to try to replicate human reasoning, AI’s tendency to simply make stuff up has not gone away as predicted. AI is programmed to always have an answer to any question. And if it’s not capable of finding an answer, it will simply confabulate something rather than ever spit out the words, “I don’t know.” Repeated failures in the attempt to create the next breakthrough AI model now suggest that the problem of hallucinations may never go away. The New York Times recently reported that AI hallucinations might be getting worse over time, not better. And, growing numbers of AI scientists, such as Gary Marcus, now believe that hallucinations are inherent to the very nature of artificial intelligence based on LLMs.
There are other concerns about AI as well. The privacy of personal data, which has always been at risk with digital technology, is one. In addition, AI models are only as good as the information on which they are trained, so any bias or inaccuracies in the training data can be picked up by the bot and incorporated into recommendations for patient care. AI can also be used to spread misinformation and disinformation, with potentially terrible consequences.
But, AI corporate leaders and software developers continue to argue that most of these problems are temporary and solvable. All we need to do is put in privacy safeguards, refine the software, and then scale up the training to feed it enough high-quality data, and we will have true Artificial General Intelligence. Respected physician experts are still asserting that within a few years, artificial intelligence will completely transform the practice of medicine for the better. It’s only a matter of time.
And then it happened: the crack in my AI dreams.
In September 2024, I listened to a podcast featuring Esther Perel, a renowned psychotherapist with a large following. The podcast was on “Artificial Intimacy,” a subject I had never heard about before. Artificial intimacy, Perel explained, is a connection with technology that is meant to feel like an authentic relationship. This connection, however, tends to fall short, leaving people feeling lonely and disconnected.
Our relationship with social media has already created a kind of artificial intimacy. Think of the person who has a thousand “friends” on Facebook, but has no one to feed their cat when they are in the hospital. For some people, especially those who are already lonely and isolated, victims of cyberbullying, or people struggling with poor mental health, the hollow relationships created through social media can actually cause harm.
As bad as this sounds, Perel asserted that the risks of artificial intimacy are dramatically worse when AI-powered bots are involved, especially when the bots take on human characteristics like speech and appearance. This dangerous kind of artificial intimacy was featured in the now-prescient 2013 movie Her, in which Joaquin Phoenix’s character falls in love with an AI bot voiced by Scarlett Johansson.
Perel highlighted several specific risks related to artificial intimacy with bots. The first stems from the rise of AI companion apps, or commercial bots that are marketed as substitute friends and even romantic partners. Sites such as Candy AI, Replika, and Romantic AI offer digital friendship accessible 24/7 at a low price. Users can build a customized “dream friend,” “like building a toy from scratch.” Frighteningly, these apps—designed to maximize engagement with the user—can generate AI-bot boyfriends and girlfriends with hyper-sexualized good looks and flirty, provocative personalities, making them tremendously appealing to vulnerable people who may lack the social skills needed to have romantic relationships with flesh-and-blood people.

Perel went on to argue that these digital apps are dangerous for several reasons. The first is that interacting with bots lowers our relationship expectations. True friendship or romance involves a lot more than someone who listens, pays attention, and then flatters you. Vulnerability is the first casualty of artificial intimacy. Bots make it seductively easy to be vulnerable with them, but they have nothing to be vulnerable about in return. It’s a completely asymmetrical relationship.
Real friends also have the ability to give us difficult, even confrontational feedback. Friends in the real world force us to deal with people who can at times be moody, boring, or irritating, or who say unkind things. Relating to real people generates “social friction” that is a necessary and vital part of human connection. Bots, on the other hand, are designed to be perfectly pleasing and never create problems—no tension, no conflict, no social friction. AI scientists now have a word for this servile nature: AI “sycophancy.”
A steady diet of artificial connection has the potential to train humans to expect only the most superficial aspects of relationship—a new normal that is both dysfunctional and dissatisfying. It’s like learning to be satisfied with the flavorless, styrofoam-like tomatoes in a fast-food burger because you’ve forgotten what an heirloom tomato tastes like.
Even scarier is Perel’s second concern, that artificial intimacy is lowering our competency in creating and maintaining successful relationships. The truth is that relationships require work, sometimes difficult work, in terms of listening and empathy, being patient, and accommodating the needs of others. Yet AI technology is designed to do that work for us, removing the “friction” in a way that could actually enfeeble our relationship skills.
When I reflect on how bad my own sense of direction has become since I’ve come to rely on Google Maps, it’s easy to see how our reliance on technology could erode our skills and abilities in other areas of life.
The problems of unhealthy content, distorted expectations, and reduced competence in relationships are leading AI companions to cause psychological harm. One MIT study showed that using chatbots can initially reduce loneliness, but that high daily usage leads to increased emotional dependence on the bot and fewer real-world social interactions. And, the mental health of users who already have social anxiety has worsened after using AI companions. There have even been lawsuits against technology companies by families of people who died by suicide, claiming that conversations with AI chatbots contributed to their loved one’s death. Despite these risks, the AI companion industry is booming, reaching over 20 million users in the US, with no regulation of the industry at all. Frighteningly, Meta is now entering the AI companion business on a massive scale, stating that companions will bring back “the magic of friends.”
In summary, artificial intimacy is a new danger emerging in the digital age. Psychological harm, distorted expectations, and eroded competency—these dangers are not related just to the content of AI output. It’s not about what the bot says to us. Rather, the danger is inherent in our very relationship with artificial intelligence itself, in how the technology changes us, the users. This risk is increasingly being recognized and discussed by AI researchers, but the conversation is not yet taking place in the field of healthcare.

AI will change healthcare, too
It’s understandable for people who work in healthcare to dismiss or minimize concerns about artificial intimacy. After all, most professionals are not going to get an AI companion. The conversations about AI in medical journals do not address the risks of artificial intimacy. So what could this have to do with our profession?
We should remember that the same people who are harmed by artificial intimacy also happen to be our patients. As the use of AI companions spreads, the fallout of its harmful effects will inevitably flow downstream until it shows up in our hospitals and clinics. There is a strong chance that rates of anxiety and depression will increase along with the consequences of loneliness and social isolation. Eventually, these emotional and social issues will become wrapped up in physical ailments, and thus indirectly put greater pressure on the healthcare system.
Additionally, there is very little discussion about how our collective relationship with artificial intelligence will affect the public’s level of trust in healthcare. If well implemented, with appropriate regulation and lots of attention to detail, using AI might enhance our patients’ trust in our ability to diagnose and treat illness. But the field of digital technology has a very poor record when it comes to thoughtful implementation of new products.
Indeed, the goal of using artificial intelligence to enhance trust faces an uphill battle. First, our society’s trust in healthcare professionals is already trending down. Social media has contributed to this decline in trust through the spread of misinformation and the rise of influencers who advocate unproven or unhelpful treatments. For example, oncologists are seeing more and more patients who decline traditional cancer treatments, instead requesting Ivermectin, which is being actively promoted by social media influencers. Attributes of influencers like “authenticity” and relatability are starting to carry more weight than the expertise of professionals.
When it comes to trust specifically in AI, a recent Pew Research Center study showed that about 60% of US patients have little trust in how health systems will implement artificial intelligence. Another recent study found that patients rated physicians who use AI as less competent and less trustworthy than those who don’t. This climate of skepticism is the societal context into which AI is being introduced.
Given the tremendous influence that AI companions can have over users, companions that spout harmful content or misinformation can amplify mistrust of healthcare providers among the patients who depend on them. Just imagine a patient interaction in the near future when a clinician is delivering a diagnosis or recommending a treatment. The skeptical patient, having had their biases reinforced by a sycophantic AI companion, pulls out their smartphone and asks the bot to explain to the doctor why the diagnosis is wrong, or to describe a treatment the bot insists is better. How many of us would be prepared to handle a communication dilemma like that?
AI will change healthcare not just because of what it does for us, but also because of what it does to us
Artificial intelligence does have the potential to radically improve healthcare. The ability of AI to help make the right diagnosis faster, and come up with a cost-effective workup, is starting to become a reality. AI used to document care is already helping clinicians be more efficient and feel less burned out. And, at some point in the near future, AI will be reliable enough to fill out orders on Epic, set up referrals to specialists, monitor the health of patients with chronic conditions through the use of wearables, and coordinate care amongst different professionals.
However, when we factor in all the risks of AI, including the relational risks, we can see that the situation is more complex than the Star Trek fantasy that initially seduced me. As I have taken this deeper dive into the topic of artificial intelligence in healthcare, my dreams of practicing medicine with magical power and ease have been replaced by a more nuanced and humbling appreciation for what’s at stake as we adopt AI. One of the fundamental risks, I’ve realized, is how our relationship with AI will change medical professionals as well as patients. There is no better example of this than the possibility that relying on artificial intelligence could erode our critical thinking abilities.

Warning: Artificial intelligence can make you less intelligent
Over the past few months, there have been several studies published on the relationship between reliance on AI and a decline in critical thinking—one by Swiss social scientists at a business school in the UK, another by computer scientists at Microsoft. Both of these studies came to the same conclusion: The more that workers became reliant on AI, the greater their decline in critical thinking abilities.
With nearly 700 managers and administrators participating in its study, the UK business school looked at objective assessments of critical thinking as well as qualitative reports by study subjects. Starting from a definition of critical thinking as “the ability to analyse, evaluate, and synthesize information to make reasoned decisions,” they found that AI had variable effects on people. Those who used AI the most tended to be the younger, less experienced, less educated managers. On the positive side, AI helped these workers be more efficient by filtering out irrelevant information and highlighting important content. However, the more they relied on AI, the less they used their own critical thinking abilities to evaluate and apply the AI output. In practical terms, this means the heavy users simply accepted the AI output without much thought, reflection, or analysis. This was accompanied by greater “cognitive offloading,” meaning that AI removed some of the mental burden of their work. However, that relief came with a price—less critical thinking.
Reducing the cognitive demands of medical practice—isn’t that something that every clinician dreams of? It’s the very reason that AI is now being used extensively to help with clinical documentation, which can be incredibly draining for medical workers as they struggle to both listen to patients attentively and take detailed notes during each visit. Yet, these studies show that there is a potential cost to reducing the burden on workers in this way.
The Microsoft study, conducted on over 300 knowledge workers, involved detailed analysis of how AI use affected a wide range of business professionals. Based on in-depth surveys and interviews, it assessed over 900 specific, real-world examples of problems that the workers used AI to solve. The study defined critical thinking as having six components: knowledge (recall of ideas), comprehension (demonstrating understanding of ideas), application (putting ideas into practice), analysis (contrasting and relating ideas), synthesis (combining ideas), and evaluation (judging ideas through criteria).
Like the business school study, the Microsoft researchers found that AI improved the efficiency and quality of work for workers with lower skills and less education. Additionally, they found that younger, less educated workers used AI more, with older, more educated workers relying on AI less. They came to a number of conclusions regarding harms from AI use that were very similar to those uncovered in the business school study:
- The more that people used AI routinely, the more confidence they had in it and the more reliant they became on it.
- Workers relied on AI because it powerfully reduced the cognitive demands of doing their job. That is, it made their jobs easier.
- The greater the reliance on AI, the greater workers’ subjective experience that their critical thinking skills were being degraded.
- There were specific factors, moreover, that enhanced workers’ perception that they were working at a higher level of critical thinking:
  - Greater confidence in their own knowledge and skills
  - Believing that the task was both important and risky
  - Perceiving that the task was their responsibility
Admittedly, business managers and knowledge workers are different from healthcare providers, and those in healthcare may be more protected by our strong sense that our work requires highly specialized knowledge and skills. But the potential for loss of critical thinking skills harkens back to Esther Perel’s concerns about artificial intimacy. If becoming reliant on AI to fulfill our social needs reduces our social competency, then it’s possible that reliance on AI for intellectual tasks could reduce our cognitive and professional competency. Over time, as AI is used more and more for clinical tasks and decision-making, this threat will become more and more of a possibility… unless preventive steps are taken early on.
Could artificial intelligence pose a threat to patient and clinician autonomy?
Social scientists, philosophers, and ethicists who look at the impact of artificial intelligence on healthcare have raised another concern: how AI used for clinical decision support could affect the autonomy of both patients and clinicians.
When patients come to us for care, they need sound medical advice based on scientific research. If AI can provide us with more accurate and unbiased diagnosis and treatment recommendations, that would be a huge benefit. But patients come to us not just with their history, biology, and symptoms—they also bring their values, preferences, life goals, and their families with them. Clinical decisions that support the patient’s autonomy are best made when a clinician presents understandable information to the patient and incorporates all the contextual factors (values, preferences, goals, and relationships) into a solid treatment plan.
Now imagine a clinical case in the future. A patient presents for medical care. An AI-powered bot completes some of the work of taking a history. Simultaneously, an algorithm scrapes the medical record to produce a convenient summary. Then, it analyzes the patient’s available social media data, evaluates their speech patterns and how quickly they type responses to messages, and combines that information with the clinical data. On top of that, AI factors in risks based on the patient’s genetic profile to come up with an accurate diagnosis and evidence-based treatment recommendation. When this happens, it will be revolutionary for patient care. What could possibly go wrong?
This simple and optimistic scenario could be filled with pitfalls as well as opportunities. What if the patient doesn’t trust the AI system, and therefore does not give a full, accurate history? What if bias or misinformation has crept into the AI training data? What if the patient’s problem is uncommon, or the AI database doesn’t contain enough research to formulate a plan? Incorrect or limited data could lead to an inappropriate recommendation.
An even bigger issue is what to do if the AI bot can’t produce a rationale for its recommendation. One of the defining characteristics of AI technology is its “black box” nature, meaning we have little or no understanding of how AI comes up with its output. AI companies are now touting that their models have the ability to “reason,” which might make the rationale of their output understandable, but some experts are challenging this assertion. A growing number of computer scientists doubt that we will ever be able to understand the rationale of current AI models when it comes to complex problem solving. But without understanding the rationale for the treatment plan, should patients simply trust the computer? How many patients would trust a recommendation that neither they, their doctor, nor the computer could understand?
These are clearly situations that demand human intervention. Yet, at some point in the future, our computers could know vastly more than the human medical professionals. What happens when clinicians are no longer capable of checking the AI output for accuracy, or understanding the machines that are supplying the recommendations?
Psychologists, social scientists, and ethicists—mostly people who work outside the AI industry—are starting to take this possible scenario very seriously, writing about the danger of “double paternalism.” This is a kind of paternalism between AI and clinician, where the human providers are simply passive messengers. It’s also a paternalism between AI and patient, where scientific knowledge trumps everything else. This could lead to a phenomenon of “computer knows best.” A huge mismatch of knowledge combined with a lack of transparency could allow computers in the future to dominate medical decision-making at the expense of both clinicians and patients.
Certainly there might be times when the computer really does know best. But it’s not hard to think of many examples where this would be a terrible strategy. Imagine an end-of-life scenario when an AI algorithm, programmed to maximize life span, encounters an individual patient who would prefer quality of life over quantity. The algorithm could put great pressure on patients and families to agree to a treatment that they don’t understand or even really want. If people put more faith in technology than in their personal values and their own agency, it’s very possible that we could do a disservice to our patients. No one is saying that this dismal future is our destiny. But, it’s our responsibility to build systems to ensure that human values and autonomy remain the priority.
And, to add yet another level of complexity, legal experts are raising the issue of who would be responsible for the medical-legal consequences of an adverse outcome caused by AI output. Would the bot and its manufacturer be liable, or would the physician who approved the treatment? Given how much protection large digital technology companies have had, it’s hard to imagine that they would be held responsible—the burden may well fall on the shoulders of the human clinicians. This is a huge topic that could easily be the subject of a completely separate article.

Reimagining our relationships with patients could be a starting point for a safer future with artificial intelligence
The same authorities who have identified the relational risks of artificial intelligence in healthcare are also discussing strategies to minimize the dangers. One of the most common recommendations is to redraw the traditional relationship with the patient not as a dyad between clinician and patient, but as a triad of clinician, patient, and AI. Humans would take priority in the triad, and would also be responsible for managing it. Making this transition will require both a significant shift in our thinking and a willingness to deal with greater complexity.
This change in thinking will demand that we let go of part of our identity as clinicians. So much of our training and professional life has revolved around acquiring knowledge, solving problems, and then implementing solutions. But the amount of medical knowledge is skyrocketing so fast that it’s more and more difficult to know everything needed to practice medicine. For some of us, letting go of the feeling that we are capable of knowing everything will be a huge adjustment. In exchange, we will have to focus instead on learning what’s most important for the care of the patient. And, we will need to learn how to evaluate the genetic, psychological, and social data needed to care for the patient. We will need to hang on to the feeling of mastery in our profession and the confidence in our ability to think critically and to manage patients. And perhaps most importantly, clinicians must always have the confidence and authority to override the recommendations of the computer, even if the computer appears to know more than we do.
Over time, the role of clinicians will change dramatically. Instead of information gathering, humans will spend more time on information verification—that is, cross-checking AI output for accuracy and applicability. Instead of direct problem solving, clinicians will focus more on integrating AI output with knowledge of the patient’s medical conditions and life context—that is, their values, preferences, and social relationships. And instead of directly carrying out tasks, clinicians will spend more time and energy on what’s being called “stewardship,” or managing a system of care rather than just doing the work on their own.
For example, much of the routine work of a clinician will be taken over by machines: entering orders into the computer, calling a specialist for a referral, submitting a prior authorization request to an insurance company, or messaging a patient to check on their condition. Instead, the human leader of the clinician-patient-AI triad will need to monitor how the system is accomplishing tasks and fine tune and intervene as needed. We will always need clinicians to integrate all the data required to provide care and help the patient apply that in their lives.
Along with a change in mindset, clinicians will have to develop new kinds of competency and receive new kinds of training. Given the risk of AI hallucinations, clinicians will need to know how to evaluate the accuracy of AI output. Additionally, clinicians will need the ability to 1) understand the models and assumptions that AI uses to generate output, 2) apply AI decisions and recommendations, 3) communicate to patients and families the rationale behind AI recommendations, and 4) match AI output with patients’ preferences, values, risk tolerance, and other personal factors.
According to the Microsoft computer scientists, in order to prevent the decline in critical thinking that comes with reliance on AI, systems will need to introduce new kinds of “friction” into workflows. They assert that “interventions” need to be built into AI systems to enhance the motivation and capacity for critical thinking. These interventions might come in the form of alerts to human clinicians that a medical situation is high-risk, or reminders that the human is responsible for the care recommendation. Other interventions might include prompts to slow down and give extra thought before approving AI output; options for the human members of the care team to request assistance with critical thinking when needed; and ongoing training in critical thinking skills as part of professionals’ career development.
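As a purely hypothetical illustration of that idea, the sketch below shows what a software “friction” gate might look like: a high-risk AI recommendation cannot be accepted until the clinician pauses, is reminded of their responsibility, and records a brief justification. The Recommendation structure, risk levels, and function names are invented for this example; they are not taken from the Microsoft paper or from any real clinical system.

```python
# Hypothetical sketch of a "friction" gate for AI recommendations.
# All names, fields, and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str      # the AI-generated care recommendation
    risk_level: str   # "low", "moderate", or "high"
    rationale: str    # explanation supplied by the AI model, if any

def review_recommendation(rec: Recommendation) -> bool:
    """Return True only when a clinician explicitly accepts the output."""
    print(f"AI recommendation: {rec.summary}")
    print(f"Stated rationale: {rec.rationale or '(none provided)'}")

    if rec.risk_level == "high":
        # Friction: remind the clinician of responsibility and require
        # a written justification before the recommendation can proceed.
        print("HIGH-RISK situation: you remain responsible for this decision.")
        justification = input("Briefly explain why you agree or disagree: ")
        if not justification.strip():
            print("No justification recorded; recommendation held for review.")
            return False

    answer = input("Accept this recommendation? (yes/no): ")
    return answer.strip().lower() == "yes"

if __name__ == "__main__":
    rec = Recommendation(
        summary="Start anticoagulation at standard dosing",
        risk_level="high",
        rationale="Elevated thrombosis risk score",
    )
    accepted = review_recommendation(rec)
    print("Accepted" if accepted else "Sent back for human review")
```

The point of such a gate is not the specific code but the design choice: deliberately slowing down the easy path so that the human member of the triad keeps thinking.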
Scholars at the University of Chicago are also recommending that the medical teams be provided with two or more competing AI options for patient management, including the pros and cons of each recommendation. This would encourage clinicians to use their critical thinking skills to assess each option, and not just automatically do what the AI bot advises.
Time and training to learn new skills will be needed. One of the big challenges facing us in this time of transition will be figuring out how we can all develop these new competencies. Ironically, one of the main benefits of artificial intelligence, reducing workload and mental demands, will create new types of complexity and new demands on clinicians. Will individual clinicians, already struggling with their own near or actual burnout, be willing to go back to school to learn how to practice in new ways? Will computer engineers go to the trouble of building elements into AI systems that will facilitate critical thinking, patient-centered care, and healthy relationships with patients? These are vital and complex questions that need to be answered as we enter this new era of artificial intelligence in medicine.
A clear need to focus even more on skillful, empathic communication
With all this attention on technology, one final point deserves special emphasis in this discussion. There is widespread agreement among researchers studying artificial intelligence in healthcare that skillful, empathic communication with patients will become more important than ever in the AI era. As one thoughtful researcher noted, a clinician’s core values and abilities will not change. But, we will need to use our communication skills to manage a more complex relationship with the patient that now includes a third party.
At a fundamental level, we will always need humans to be the communication experts. There will always be situations where history-taking performed by human interviewers will be superior to that done by computers. If the patient is not being truthful or accurate, or their history is distorted by intense emotions, it’s likely that humans will discern that and find the “hidden data” first. There will also be times when humans will be more likely to “read between the lines” of the patient’s story and better assess their thoughts, feelings, hopes, fears, and expectations. And, communication skills will remain our most important tool to counter the growing misinformation that influences our patients’ beliefs.
Even if an AI bot can generate emails that feel empathic, we should never farm out the essential skills of listening, explaining, and showing compassion to machines. The human members of the team will still need to exercise these skills to help our patients know that we understand them and their problems. Patients need to understand the nature of their illness and the rationale for treatment, and they need to have their questions answered. They have the expectation and the right to have their values, wishes, and preferences understood and honored. These critical roles will always be carried out by us.
And finally, what happens when the computer has nothing more to offer the patient? What happens when the algorithm fails to come up with any more diagnoses and all treatment options have been exhausted? In other words, what do we do when the patient’s problem can’t be “fixed?” The answer is to return to the relationship with the patient. As Dr. Catherine Hwang wrote in a recent JAMA piece, “When modern technologies reach their limits and medical therapies no longer prevail, we still can nurture the emotional and spiritual aspects of healing through the therapeutic relationships we cultivate with our patients. These are moments when words, silence, and nonverbal gestures of caring and compassion become our most powerful medicine, often outweighing the potential of a surgeon’s knife or a chemist’s drug.” The only members of the team that can accomplish these essentials are the humans, and the only way we can do that is through excellence in communication. These are truths that technology will never change.

Will we go forward boldly, or blindly?
There is tremendous excitement about the potential for good that can be accomplished by utilizing artificial intelligence in healthcare. AI promises to make healthcare more efficient and effective, and it even holds out the hope of finding cures for a multitude of diseases. But before rushing carelessly into this new technology, we should remember a simple lesson from history. There has never been a major new technology that has not profoundly affected our species. Technology doesn’t just do things for us, it fundamentally changes us.
Domesticating horses didn’t just help our ancestors carry more, or travel further and faster—it resulted in a new class of elite warriors that led to the rise of large empires and a massive concentration of social and economic power. The development of the printing press didn’t just create written materials on a mass scale—it led to a transformation of society and a new way of thinking that we now refer to as the Enlightenment. Artificial intelligence will have no less of an impact on our species, and possibly more. We need to think not just about what AI will do for us, but also about how it will change human psychology, our ability to think, and our social relationships.
Having a realistic awareness of the pitfalls of AI as well as its promises will better prepare us to use it wisely. This awareness will help us navigate the challenging questions that will arise. Will AI companies incorporate appropriate safeguards into their software, or will they cut corners to maximize profit? Will regulators have the courage to set up helpful guardrails for AI and minimize the social costs, or will they give Big Tech free rein? Will healthcare organizations use the technology judiciously and after thorough vetting, or will they implement it willy-nilly in an effort to get the upper hand on the competition? And will healthcare providers be willing to engage in the education and self-development needed to use AI wisely, or will they take the path of least resistance and use AI just to make life simpler and easier?
Perhaps the worst-case scenario would occur if we all allowed ourselves to be naively seduced by this amazing, magical technology. If we get emotionally swept up in Star Trek-like fantasies of power, excitement, and ease, we will probably suffer many negative consequences. The key will be to move forward boldly, but not blindly, where no one has gone before.
Let us know what you think!
Is AI going to be a help or a hindrance to your practice? Why?
Email your answers to Brandon Craig, bcraig@ssvms.org, and we'll publish them in the next issue of SSVMS Magazine.