Hyper-Personalized AI in Healthcare: A Benefit or a Threat?


The art of persuasion permeates society. From companies enticing you to buy their product to political parties trying to win your vote, people have always been adept at convincing others to think and act in specific ways.

But we now live in an age where humans are not the only ones who can effectively influence decision-making. Advancements in AI have produced increasingly anthropomorphic and hyper-personalized LLMs capable of astonishing feats of persuasion.

Techopedia explored how the healthcare system, in particular, stands to benefit from AI that drives behavior change. However, as is always the case with AI, “whenever there is light, there are also shadows.”

Key Takeaways

  • Hyper-personalized AI health coaches could revolutionize healthcare by promoting healthier lifestyle choices.
  • AI therapy bots are tools that should complement but not replace professional human therapists.
  • The persuasive power of AI raises concerns about misinformation, bias, privacy, and security issues, highlighting the need for regulatory oversight and human supervision.

Hyper-Personalized Health Coaches Address America’s Health Crisis

The Centers for Disease Control and Prevention (CDC) estimates that 129 million people in the US have at least one major chronic disease, and “about 90% of the annual $4.1 trillion health care expenditure is attributed to managing and treating chronic diseases and mental health conditions.”

On top of that, recent Health Affairs projections estimate that healthcare spending grew 7.5% in 2023, outpacing the nominal gross domestic product (GDP) growth rate of 6.1%. You don’t need to be an economist to see that this trajectory is unsustainable.

In a recent article for Time, Sam Altman, CEO of OpenAI, and Arianna Huffington, CEO of Thrive Global, proposed that AI could help alleviate this healthcare crisis.


OpenAI and Thrive Global are co-funding a new startup, Thrive AI, which will deliver hyper-personalized health coaches designed to effect change across five foundational behaviors: sleep, food, movement, stress management, and social connection.

Thrive AI offers a mentor, a guide, and a therapist available 24/7. Source: Thrive AI

The AI will be trained on peer-reviewed science, lifestyle methodologies such as microsteps, and user health data to generate well-informed and highly persuasive output.

“With AI-driven personalized behavior change, we have the chance to finally reverse the trend lines on chronic diseases,” Altman and Huffington assert.

The pair also acknowledge the connection between physical and mental health and champion Thrive AI’s holistic approach to care. They wrote:

“With personalized nudges and real-time recommendations across all five behaviors—helping us improve our sleep, reduce sugar and ultra-processed foods, get more movement in our day, lower stress, and increase connection—AI could help us be in a stronger position to make better choices that nourish our mental health.”

While all this sounds promising, there are no guarantees that hyper-personalized, super-persuasive health bots will be free from the misinformation, bias, and privacy and security concerns already associated with AI in healthcare.

Problems With Persuasion & Personalization

Earlier this year, Anthropic published research investigating the extent of AI’s persuasiveness. The results showed that each new iteration of Claude was more persuasive than its predecessor.

Persuasiveness scores of model-written arguments compared with human-written arguments, with error bars of ±1 SEM. Persuasiveness increases across model generations for both compact and frontier models. Source: Anthropic
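At its core, a persuasiveness score of this kind is a before-and-after measure: how much does agreement with a claim shift after reading an argument? The sketch below is an illustration of that general idea only, not Anthropic’s actual methodology or code; it assumes participants rate agreement on a numeric scale before and after reading a model-written argument.

```python
# Illustrative sketch only -- not Anthropic's code. Assumes participants
# rate agreement with a claim (e.g., on a 1-7 scale) before and after
# reading an argument; the "persuasiveness score" is the mean shift,
# reported with +/- 1 standard error of the mean (SEM), as in the chart.
from statistics import mean, stdev
from math import sqrt

def persuasiveness_score(pre_ratings, post_ratings):
    """Return (mean shift in agreement, standard error of the mean)."""
    shifts = [post - pre for pre, post in zip(pre_ratings, post_ratings)]
    sem = stdev(shifts) / sqrt(len(shifts))
    return mean(shifts), sem

# Hypothetical ratings for one argument
pre = [3, 4, 2, 5, 3, 4]
post = [4, 5, 3, 5, 4, 6]
score, sem = persuasiveness_score(pre, post)
print(f"persuasiveness: {score:.2f} +/- {sem:.2f}")
```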

While this kind of scaling is impressive, it also raises concerns. According to Anthropic, persuasion “may ultimately be tied to certain kinds of misuse, such as using AI to generate disinformation, or persuading people to take actions against their own interests.”

Another study, published in Scientific Reports, demonstrated a link between personalized messaging and effective persuasion in domains such as consumer product marketing and political appeals on climate change.

Given the proper guidance and access to personal information, LLMs like ChatGPT have been more persuasive than humans in debate settings, highlighting AI’s ability to shape opinions or even alter a person’s beliefs.

This kind of functionality could be devastating in the medical arena, where accuracy and honesty are paramount.

Hannah Collinson, a Specialist Doctor in Pediatrics, told Techopedia that there’s real potential for AI health coaches to play an important role in lifestyle advice. However, false results or diagnoses and the manipulation of AI systems to deliver misinformation are genuine concerns. She said that “there would need to be some human oversight.”

Altman and Huffington seem aware of the potential problems, saying:

“Policymakers need to create a regulatory environment that fosters AI innovation while safeguarding privacy. Health care providers need to integrate AI into their practices while ensuring that these tools meet rigorous standards for safety and efficacy.”

Will AI Replace Health Professionals?

AI is already contributing to medical breakthroughs. A scientific statement by the American Heart Association outlines how AI can improve cardiovascular and stroke outcomes.

Another study found that a deep learning convolutional neural network (CNN) outperformed dermatologists at diagnosing skin cancer.

Similarly, MIT researchers developed a machine learning system that can look at chest X-rays and diagnose pneumonia.

While these results are extraordinary, the medical world is not ready to dispense with humans. In fact, the MIT study showed that a hybrid human-AI model produced the best results. Seemingly, teamwork makes the dream work when it comes to achieving greater efficiency and accuracy in disease diagnosis.
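To make the “hybrid” idea concrete, one common pattern is to let the model handle cases it is confident about and route the rest to a clinician. The snippet below is a minimal sketch of that deferral pattern under our own assumptions (the classifier, threshold, and labels are hypothetical); it is not the MIT system itself.

```python
# Minimal sketch of a human-AI hybrid workflow. Assumes a classifier that
# outputs a probability per diagnosis; low-confidence cases are deferred
# to a human reviewer. Illustrative only -- not the MIT system.
from typing import Dict

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off for automatic triage

def triage(probabilities: Dict[str, float]) -> str:
    """Return the model's call if it is confident, otherwise defer."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AI suggestion: {label} ({confidence:.0%} confidence)"
    return "Low confidence -- refer to a clinician for review"

print(triage({"pneumonia": 0.93, "normal": 0.07}))  # confident case
print(triage({"pneumonia": 0.55, "normal": 0.45}))  # deferred to a human
```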

Collinson said AI could improve clinical efficiency by “interpreting X-rays or scans, blood results, and histology samples.” However, she doubts AI’s capabilities in delivering “the personal touch.” The doctor said:

“Medicine is not just about the hard cold diagnostics; it’s a relationship with people who are potentially having a really difficult time, and I’m not sure how well AI would be able to replicate true empathy and the communication skills involved in a consultation.”

In the mental health field, therapy bots are helping people access 24/7 support that would otherwise be inaccessible because of barriers such as cost and convenience.

Character.ai’s “psychologist” was created by Sam Zaia, a 30-year-old medical student in New Zealand, and has received 148.8 million messages to date, making it the most popular character on the platform. One user posted their experience on Medium and gave a pretty balanced review of the bot:

“I know some who have greatly benefited and would say this is even better than their therapists, while some regard this AI as a joke. To me, I think it’s a great source of support, but it’s far away from being salvation.”

In a similar vein, Dr. Kate Darling, a Research Scientist at the MIT Media Lab and author of The New Breed, wrote in the BBC Science Focus Magazine:

“It’s possible that therapy bots can be a huge help to people. But we should be wary of any products rushing to market with insufficient research, and especially AI-powered apps that may incorporate all manner of known and unknown harms.”

While the Thrive AI website states that its product is not a replacement for human support, it touts, in the same breath, its advantages over a human counselor.

“Think of Thrive AI as a mental health specialist, coach, advisor, or counselor. What makes it better than a human one is that advice and insights are available whenever needed without making an appointment or booking a session,” it says.

Therapy bots can be helpful tools that produce immediate results for individuals in crisis, but they are far from certified therapists.

What is perhaps most worrying is the number of people turning to these bots. It points to widespread mental health need and underinvestment in public health services.

AI can effectively alleviate the strain, but it is important to remember that it is not a replacement, no matter how personalized or persuasive it becomes.

The Bottom Line

Language is the primary vehicle of persuasion; as LLMs continue their rapid evolution, they will inevitably become experts in the field.

Personalized and persuasive AI will likely transform healthcare, but robust safeguards must be implemented to maximize the benefits and mitigate the risks.


References

  1. Quote by Masashi Kishimoto: “In this world, whenever there is light, there a…” (Goodreads)
  2. Chronic Disease Prevalence in the US: Sociodemographic and Geographic Variations by Zip Code Tabulation Area (CDC)
  3. National Health Expenditure Projections, 2023–32: Payer Trends Diverge As Pandemic-Related Policies Fade (Health Affairs)
  4. AI-Driven Behavior Change Could Transform Health Care (Time)
  5. Imagine Having a Mentor, a Guide, and a Therapist Available 24/7 (Thrive Labs)
  6. How Small Habits Can Lead to Big Changes (The New York Times)
  7. OpenAI Launching the Ultimate AI Health Coach, Backed by BILLIONAIRES! (Longer Life) (YouTube)
  8. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs) (npj Digital Medicine)
  9. Measuring the Persuasiveness of Language Models (Anthropic)
  10. The potential of generative AI for personalized persuasion at scale (Scientific Reports)
  11. On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial (arXiv)
  12. Use of Artificial Intelligence in Improving Outcomes in Heart Disease: A Scientific Statement From the American Heart Association (AHA Journals)
  13. Artificial Intelligence Better than Dermatologists in Diagnosing Skin Cancer (Oncozine)
  14. An automated health care system that understands when to step in (Harvard-MIT Health Sciences and Technology)
  15. Personalized AI for every moment of your day (Character.ai)
  16. Trying Out character.ai’s Psychologist (Medium)
  17. Rise of the therapy chatbots: Should you trust an AI with your mental health? (BBC Science Focus)
  18. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots (NCBI)
  19. Language Use and Persuasion: Multiple Roles for Linguistic Styles (ResearchGate)

John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He holds a degree in Creative Writing and a PhD in English Literature. His interests lie in AI, and he writes fun and authoritative articles on the latest trends and technological advancements. When he's not thinking about LLMs, he enjoys running, reading, and writing songs.