Expert Panel: The Medical Industry at War Against AI Cybersecurity Attacks

Artificial intelligence (AI) is a double-edged sword for the healthcare industry.

While the medical field embraces innovation, discovering AI solutions that have the potential to enhance diagnosis, administration, and drug development, there are consistently new threats — a primary target for ransomware gangs and cybercriminals.

For example, the recent BlackCat (ALPHV) attack against Change Healthcare in February triggered a US-wide pharmacy outage, which caused serious issues for untold numbers of Americans unable to fill their prescriptions. The company handles 15 billion healthcare transactions across 67,000 pharmacies across a year.

It is alleged — but unconfirmed — that the company paid millions of dollars in Bitcoin to stop the attack.

And this is far from being the first attack on the sector with serious implications for the public.

During the Thanksgiving holidays of 2023, a cyberattack on Ardent Health Services impacted 30 hospitals in six states, diverting ambulances and affecting emergency services.


Unfortunately, these two attacks are just the tip of the iceberg.

Techopedia talked to leading experts in cybersecurity about the intersection of AI and security in healthcare, what best practices providers and companies should adopt and the future of the industry.

Key Takeaways

  • While AI can be a massive weapon for the healthcare industry, escalating ransomware attacks driven by AI are a real danger.
  • Attacks this year include a massive, 6-day freeze on a large swathe of pharmacies across the US.
  • This has allegedly led to multimillion Bitcoin payments in order to resume patient care.
  • The healthcare sector is now faced with the challenge of ensuring their AI tech is safe.
  • Experts weigh in on the risks, best practices, and technology solutions that can strengthen healthcare security postures as they embrace innovation.

AI Security Risks Healthcare Leaders Should Consider

Like in any other industry, in healthcare, AI is praised for its potential. Management consultants McKinsey described generative AI as a technological innovation that can “lead to powerful new advancements in public health and healthcare.”

In research and development alone, McKinsey says it can increase productivity gains in pharmaceutical and medical R&D by up to 20% and generate savings of up to 200 million worldwide in the R&D of tuberculosis alone.

However, McKinsey warns that the entire healthcare sector and all its supply chain and partners — including government public health institutions, policymakers, federal and state organizations, care providers, and more — must implement risk management and build skills to reap the benefits of AI.

Healthcare AI Risk Management: Injection Attacks, Data Poisoning, and AI Abuse

Ariel Parnes, former Head of the Israeli Intelligence Service Cyber Department, winner of the Israel Defense Prize for tech innovations in the cyber field, and COO and co-founder at Mitiga spoke to Techopedia about the potential of AI and the risks.

He said:

“As the healthcare industry embraces generative AI for advancements in care, diagnosis, and treatment, it’s crucial to recognize that while AI has been in use for over a decade, GenAI represents a new, powerful generation with unique capabilities.


“However, GenAI introduces novel cybersecurity risks too, including new vulnerabilities to ‘native GenAI attacks’ such as prompt injection, where malicious inputs are designed to manipulate AI responses, and data poisoning, aiming to corrupt the AI model by feeding it misleading information.”

AI´s Bright Healthcare Future Threatened by Lack of Policies

Reports such as BRG’s AI and the Future of Healthcare report, released on February 27,  signal a wide adoption of AI tech in the sector, with 75% of healthcare professionals surveyed believing AI will be widespread within the next three years.

Other studies, such as the recent Center for Connected Medicine report, reveal significant deficits, with 65% of U.S. health system leaders saying they had no policies for AI at all. Only 16% reported having systemwide policies for AI usage and data access.

Parnes highlighted the importance of risk management policies when deploying AI.

“To navigate these challenges (AI security and privacy threats), healthcare leaders must establish policies that balance the innovative potential of GenAI with risk management.”

AI Performance and Scrutiny

Rob Hughes, CISO of RSA Security — one of the largest cybersecurity and risk management organizations in the world — also spoke to Techopedia about some specific risks healthcare organizations face when deploying AI.

“When using AI, organizations likewise need to consider who needs access to the technology and what data the AI will process.”

Hughes explained that losing control of data is one of the biggest concerns regarding AI in healthcare.

“If data is uploaded or trained on a public LLM then control of that data may be lost—so AI technology should be treated like any other system and the boundaries and use case well understood.”

AI Systems Must Meet Strict Healthcare Compliance

Additionally, those working in the healthcare industry must prioritize compliance and make sure their AI systems meet the demands of laws such as the Health Insurance Portability and Accountability Act (HIPAA).

“That’s why healthcare officials need to be particularly concerned about any AI model that can access or query patient records and ensure that the model meets the data security needs,” Hughes said.

Taimur Aslam, Chief Technology Officer at Cytex — a company that specializes in full spectrum defense, added that the data AI systems work within healthcare environments needs to be restricted.

“If AI is used to transcribe or generate medical documentation (SOAP note, discharge summary, treatment plan), then there should be a restriction on what Patient Health Information (PHI) is used by AI and is the AI allowed to process and store this PHI.”

Aslam added that, unlike other sectors where AI is being used, in healthcare, the technology must be peer-reviewed by independent experts.

“Furthermore, there should be built-in checks to enable physicians to spot changes in the AI output if its data set has been tampered with or taken over by a malicious actor.”

Bias, Adversarial Accuracy Attacks, and Shadow IT

Shawn Loveland, with 35 years of experience in cybersecurity, previously focused on dark web intelligence for Microsoft and is now Chief Operating Officer (COO) of Resecurity spoke to us about AI bias, attacks, and the risk of shadow IT — when workers engage with apps or software that has not been vetted or approved by the company.

“Another risk is the reliability of AI systems, as these systems can be prone to manipulation or attacks, such as adversarial attacks, where small changes in input data can result in incorrect AI conclusions. This is particularly concerning in healthcare, where wrong decisions can have critical consequences.”

Loveland added that shadow IT also presents risks and includes using AI tools (e.g., ChatGPT, Copilot, etc.) and software with built-in AI functionality (e.g., Microsoft Office, Windows, Dropbox, etc.).

AI Healthcare Security Best Practices and Solutions

From personalized medicine, predictive analytics, medical imaging, drug discovery, and robotic surgery, AI in healthcare is experiencing significant advancements. But all of these new technologies taking the sector by storm must incorporate solid security solutions, Loveland says.

“To address security concerns, these technologies must incorporate robust data encryption, strict access controls, regular security audits, and compliance with healthcare regulations like HIPAA.”

Like any other tech that adds up to the digital attack surface of an organization, AI apps in healthcare must be continuously monitored by security teams and automated technologies.

“AI-driven threat detection systems are crucial to safeguarding sensitive patient data against cyber threats. Ongoing auditing and verification of the efficacy of the output of the AI models.”

Aslam highlighted cybersecurity solutions for AI tech, such as the use of advanced AI for the generation of medical electronic documentation, AI that supports physicians in decision-making, and back-office automation software.

“Most of the cybersecurity practices for these AI use cases are based on the traditional preventative techniques used for general information security.”

Aslam listed some best practices:

  • Ensuring that privileged PHI data is only sent to covered entities and business associates as required by HIPAA.
  • Invoking the principle of least privilege to ensure that stored PHI data is only accessed by those who need it.
  • Encrypting the data in transit and data at rest.

How AI Can Drive IT-OT Security

A Healthcare organization’s security depends on IT-Operation Technology (OT) infrastructures. By breaching the digital space (IT), attackers can affect the real-world technology that is used to provide care (OT). Parnes talked about the issue.

“AI has been a part of cybersecurity for years, but the advent of GenAI introduces new potentials for enhancing security measures, especially in the complex IT-OT infrastructures critical to healthcare organizations”.

Parnes said that GenAI’s powerful learning capabilities can significantly accelerate the automation of cybersecurity processes, including the development and adaptation of advanced and sophisticated threat detection models based on large-volume data sets.

“This allows for more efficient and effective identification of potential risks, ensuring that IT and OT systems are safeguarded against breaches that could disrupt operations”.


“GenAI can also be used by healthcare providers to simplify cybersecurity management by enabling interfaces that understand and respond to human language, making it easier for healthcare organizations to manage their security protocols and respond to threats swiftly and effectively”.

Managing Legacy Hardware Risks

Numerous hospitals, emergency providers, and care organizations still use legacy equipment — often unpatched and ripe with vulnerabilities and exploits. Parnes walked us through the steps healthcare providers should take to mitigate legacy hardware risks.

“To remediate weaknesses in networks using legacy equipment, organizations should first prioritize replacing these systems wherever feasible, as they are challenging to support and protect”.

Parnes continued by saying that if replacement isn’t possible, the risks can be mitigated by restricting legacy systems’ access to the organization’s most critical assets.

“Additionally, adopt an ‘assume breach’ stance for these environments, vigilantly monitoring for anomalies, suspicious behavior, and indications of compromise.”

Healthcare: Engineering Security in a Criminally Loaded Digital Attack Surface

The healthcare digital attack surface is one of the most targeted environments worldwide. Cybercriminal gangs such as BlackCat, LockBit (recently dismantled by law enforcement), BlackBasta, and many other infamous ransomware gangs are constantly launching new attacks on healthcare and public health supply chains.

Healthcare and public health topped the list of FBI’s Internet Crime Complaint Center (IC3) 2023 annual report for most attacked critical infrastructure sectors, with 249 reported attacks. According to experts, the number of attacks could be much bigger as many security incidents go unreported.

Experts also believe that the number of attacks is expected to be higher in 2024 unless drastic law enforcement actions take place. Parnes spoke about the role of the government and law enforcement.

“In the fight against cybercrime, the state holds a critical position, employing national capabilities like intelligence, law enforcement, and international collaboration to shield against digital threats”.

Parnes highlighted new offensive cyber tactics can be used to fight and deter criminal activities.

“This method was highlighted in the disruption of the BlackCat ransomware by the FBI, which unfortunately led to the group intensifying their operations, as shown in their recent attack on UnitedHealth’s tech unit.”

Parnes added that despite the challenges ransomware groups present, nations should not be dissuaded from utilizing their defensive capabilities and called for a multidimensional, international collaboration campaign that integrates offensive cyber countermeasures with traditional tools of national power.

The Future of AI Cybersecurity in Healthcare

There will be significant advancements in the use of AI in healthcare cybersecurity and we share Loveland’s visions for the future.

“There will be deeper integration of AI into healthcare cybersecurity systems, but also new threats against the use of AI, as threat actors will use AI to conduct their attacks.”

Cybercriminals and researchers are constantly finding new vulnerabilities in AI technologies. For example, a recent paper from Cornell Tech students warns that in the next few years, AI worms could be a massive threat to GenAI environments and abundant. These AI worms can self-replicate, avoid detection, spread, and even launch malware and exfiltrate (steal) data. However, Loveland, like many experts in the field believes AI can also be put to use by the healthcare industry.

“We expect to see an increase in AI-driven threat detection systems that can proactively predict and neutralize common cyber threats.”

The Bottom Line

While AI adoption rates in healthcare are much slower than in other sectors, cybersecurity strategies could accelerate deployment by solving the ethical, moral, compliance, cybersecurity, and privacy dilemmas unique to the sector, which prevent the industry from modernizing and innovating.

In the end, AI security may be the only way to strengthen healthcare postures, as ransomware gangs show no sign of slowing down their attacks.


Related Reading

Related Terms

Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.