Privacy Concerns Arise as AI Chatbots Enter Healthcare

Artificial Intelligence (AI) chatbots have long been used in healthcare for basic tasks such as answering questions on an insurer’s website. With the rising visibility of ChatGPT, however, expansion into new use cases has accelerated dramatically. The global healthcare chatbots market is expected to top $543 million by 2027, growing at a 19.5 percent Compound Annual Growth Rate (CAGR).

The shape of AI regulation in the U.S. is still being debated, but there is a growing consensus that some form of regulation is necessary to ensure AI is used safely and responsibly.

While there are exciting ways this technology can improve patient experiences and drive efficiency, the use of AI chatbots raises real concerns about patient data privacy and security. These concerns are palpable to consumers, compliance experts, and, notably, legislators.

One example is Italy’s temporary ban of ChatGPT over possible privacy violations and a failure to verify that users were at least 13 years old.

In response, OpenAI issued a statement acknowledging the Italian Data Protection Authority’s concerns and explaining that it had acted immediately to suspend ChatGPT’s service in Italy while addressing the compliance issues.

Additionally, OpenAI said it is committed to complying with all applicable data protection laws and regulations and will work to ensure its AI models and services meet the highest standards of privacy and data protection.

In recent testimony to Congress, OpenAI CEO Sam Altman argued that the U.S. government should regulate AI systems like ChatGPT to mitigate their risks.

Altman proposed the government create a new agency to oversee the development and use of AI systems. This agency would be responsible for setting safety standards, licensing AI systems, and investigating potential misuse of AI. Altman also argued the government should invest in research to develop new AI safety technologies. 

Altman’s testimony has sparked a debate about the need for AI regulation in the U.S. Some experts argue regulation is necessary to protect the public from the risks of AI, while others argue regulation would stifle innovation. This debate is likely to continue in the coming years as AI systems become more powerful and widespread. 

Looking ahead, existing data protection regulations remain critical to understand. The GDPR (General Data Protection Regulation), for example, requires organizations to obtain consent from individuals (or establish another lawful basis) before collecting and processing their personal data, and failure to comply can result in large fines and other penalties.

Although it is still too early to know for certain, future regulation will likely focus on areas such as transparency, fairness, safety, and accountability. These are areas that demand attention from organizations today if they are to be ready for tomorrow.

AI Bots’ Implications for Healthcare 

In the meantime, the adoption of AI bots in healthcare has raised concerns about their safety and privacy, particularly the privacy of patient data.

In a March 2023 research report, Gartner® said, “ChatGPT has created significant interest in the potential of large language models to improve healthcare efficiency, experience and outcomes.”

“CIOs must separate the hype from the reality by learning the potential use cases, limitations and risks associated with deployment of this technology,” Gartner® explained. 

“Broad-scale adoption will be limited until appropriate privacy and security standards and controls have been implemented and successful integration into clinical workflows is achieved,” Gartner® stated. 

Several broad concerns stand out, including privacy, bias, accuracy, and explainability. Healthcare organizations must implement appropriate technical and organizational measures to protect their data from unauthorized access or disclosure. It is especially important that personally identifiable information (PII) and protected health information (PHI) be protected to ensure patient privacy.

It is also critical that when AI bots collect patient data from various sources, such as electronic health records (EHRs), wearable devices, and patient portals, they do so in compliance with privacy laws and regulations such as HIPAA (the Health Insurance Portability and Accountability Act) in the U.S.

Compliance Risks Associated with Healthcare AI Bots  

Although AI bots have the potential to transform the healthcare industry by improving patient outcomes and reducing costs, they also bring a host of compliance risks that must be carefully managed.

Here are a few key compliance risks associated with AI bots in healthcare: 

  • Data privacy and security risks: AI bots often collect and process large amounts of patient data, which can include sensitive health information. If not properly secured, data can be vulnerable to hacking and other cybersecurity threats, leading to potential breaches of patient privacy. 
  • Legal concerns: The use of AI bots in healthcare raises legal concerns, including questions about liability in the event of an error or adverse event. 
  • Patient safety concerns: If AI bots are not properly designed, tested, and regulated, they can pose risks to patient safety. For example, if a chatbot provides a patient with incorrect medical advice, the patient may take action that could lead to harm or worsen their condition. 

How AI Chatbots Handle PHI 

As the use of AI bots in healthcare continues to expand, there is growing concern about how these bots handle PHI.

For example, privacy breaches can occur when AI bots are not properly configured or trained to recognize all forms of PII.

Some areas of concern include: 

  • Unauthorized data access: If an AI bot does not properly authenticate users or restrict access to sensitive data, unauthorized individuals may be able to view or access patient information (a minimal access check is sketched after this list).
  • Data leakage: AI bots that are not properly configured to recognize and protect sensitive data may inadvertently expose this information to unauthorized parties, potentially leading to identity theft or other forms of fraud.
  • Data aggregation and correlation: AI bots that collect and aggregate large amounts of patient data may be able to identify sensitive information about an individual based on correlations between seemingly innocuous data points.
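
To make the first of these concerns concrete, here is a minimal sketch of a role-based access check placed in front of a chatbot’s data layer. Everything here (the User type, the ROLE_PERMISSIONS table, and the fetch_patient_record helper) is hypothetical and for illustration only, not drawn from any particular product:

```python
# Hypothetical role-based access check in front of a chatbot's data layer.
from dataclasses import dataclass

# Which permissions each role holds; illustrative, not exhaustive.
ROLE_PERMISSIONS = {
    "clinician": {"read_phi"},
    "billing": {"read_demographics"},
    "patient": {"read_own_record"},
}

@dataclass
class User:
    user_id: str
    role: str

class AccessDenied(Exception):
    pass

def authorize(user: User, permission: str) -> None:
    """Raise AccessDenied unless the user's role grants the permission."""
    if permission not in ROLE_PERMISSIONS.get(user.role, set()):
        raise AccessDenied(f"role '{user.role}' lacks '{permission}'")

def fetch_patient_record(user: User, patient_id: str) -> dict:
    """Authorize before the bot ever touches PHI."""
    if user.role == "patient":
        if user.user_id != patient_id:
            raise AccessDenied("patients may only read their own record")
        authorize(user, "read_own_record")
    else:
        authorize(user, "read_phi")
    return {"patient_id": patient_id, "note": "..."}  # placeholder record

clinician = User(user_id="dr-1", role="clinician")
print(fetch_patient_record(clinician, "patient-42"))   # permitted

billing = User(user_id="b-7", role="billing")
# fetch_patient_record(billing, "patient-42")          # would raise AccessDenied
```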

It is critical to properly configure and train AI bots to recognize and protect all forms of PII. This includes implementing strong access controls and encryption protocols, limiting the collection and aggregation of sensitive data, and conducting regular audits to identify and address potential vulnerabilities in the system.  
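
As a concrete illustration of what “configuring bots to recognize PII” can look like, the following is a minimal, regex-based sketch that scrubs obvious identifiers from user input before it is sent to a chatbot. The patterns are illustrative and deliberately incomplete; a production deployment would pair this with a dedicated PII/PHI detection service:

```python
# Minimal sketch: redact recognizable PII before text leaves the organization.
import re

# Illustrative patterns only; real systems need far broader coverage.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "My email is jane.doe@example.com and my MRN: 00123456."
print(redact_pii(prompt))
# -> "My email is [EMAIL REDACTED] and my [MRN REDACTED]."
```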

Additionally, organizations should have policies and procedures in place to respond to potential data breaches and mitigate the impact on affected individuals. 

HIPAA Concerns 

It is also worth noting that compliance with HIPAA regulations is required for AI bots that handle PHI. Yet ensuring that AI bots are fully compliant with HIPAA can be challenging.

This is especially true given technology continues to evolve and new use cases emerge. Additionally, there may be competing priorities or resource constraints that make it difficult for organizations to fully invest in ensuring HIPAA compliance when implementing AI bots.  

However, failure to comply with HIPAA can result in significant fines, other penalties and sanctions, and reputational damage. It is therefore in the best interest of healthcare organizations to prioritize compliance efforts.

How to Protect Your Organization’s Healthcare Privacy 

Technical measures to ensure privacy include encrypting and securely storing PHI, and restricting access to authorized personnel while regularly monitoring access logs. Policies and procedures should be developed for AI bot usage in healthcare, specifying how AI bots should be used and what data they can access. Healthcare professionals should be properly trained on AI bot usage and privacy risks to ensure compliance with regulations and protect patient privacy.  
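
To ground these measures, here is a minimal sketch of encrypting a PHI record at rest and writing an auditable log entry on each read. It uses the open-source Python cryptography package (a common choice, though by no means the only one); the in-memory key handling is simplified for illustration, and a real deployment would keep keys in a key management service and ship access logs to a monitoring system:

```python
# Minimal sketch: encrypt PHI at rest and log every access.
import json
import logging
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # pip install cryptography

logging.basicConfig(level=logging.INFO)
access_log = logging.getLogger("phi_access")

# Simplified for illustration: production keys belong in a KMS/HSM.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_phi(record: dict) -> bytes:
    """Serialize and encrypt a PHI record before it is written anywhere."""
    return fernet.encrypt(json.dumps(record).encode("utf-8"))

def read_phi(ciphertext: bytes, user_id: str, purpose: str) -> dict:
    """Decrypt a record and write an auditable access-log entry."""
    access_log.info("user=%s purpose=%s time=%s", user_id, purpose,
                    datetime.now(timezone.utc).isoformat())
    return json.loads(fernet.decrypt(ciphertext))

token = store_phi({"patient_id": "12345", "diagnosis": "..."})
print(read_phi(token, user_id="dr_smith", purpose="treatment"))
```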

In addition to technical measures, policies, and procedures, there are several other steps healthcare organizations can take to protect patient privacy when using AI bots. These include regularly conducting risk assessments to identify potential vulnerabilities and implementing appropriate safeguards, such as firewalls and intrusion detection systems. Organizations should also establish incident response plans to address data breaches and other security incidents and provide ongoing training and education to employees on privacy policies and best practices.  

By taking a comprehensive approach to healthcare privacy, organizations can help ensure patient data remains secure and confidential while improving patient outcomes. 

How Can SAI360 Help Protect Healthcare’s Data? 

SAI360 provides a comprehensive solution for healthcare organizations to manage compliance risks associated with their data. The platform offers a range of features designed to protect healthcare data, including:

  • Data privacy and security: Helps healthcare organizations protect sensitive data by providing robust data encryption, access controls, and monitoring capabilities. The module enables organizations to identify and address potential security vulnerabilities before they can be exploited, ensuring that patient data remains confidential and secure. 
  • Compliance management: Streamlines compliance management by providing a centralized platform for managing regulatory requirements, policies, and procedures. This helps healthcare organizations stay up to date with evolving compliance requirements, reducing the risk of non-compliance penalties. 
  • Risk management: Enables healthcare organizations to identify and assess risks associated with their data, including potential privacy breaches and cybersecurity threats. The module provides automated risk assessments and reporting, helping organizations proactively manage their compliance risks. 
  • Incident management: Provides incident management tools to help organizations respond quickly and effectively in the event of a data breach or other security incident. The module enables organizations to track and document incident response activities, ensuring compliance with regulatory reporting requirements.

SAI360 offers a powerful solution for healthcare organizations looking to protect their data and manage compliance risks. By leveraging the solution’s recognized best practices and advanced features, healthcare organizations can proactively identify and address potential compliance risks, ensuring that patient data remains secure and confidential.

For more information on how SAI360’s modular SaaS solutions can drive efficiency, efficacy, and agility in your workplace, visit https://www.sai360.com/industries/healthcare-health-insurance. 

Wondering how AI may affect your organization? Click here for instant access to a complimentary copy of this Gartner® Research Report courtesy of SAI360, available for a limited time.
