AI Health Data Privacy Risks and How to Protect Yourself


Source: YouTube video by Stanford Graduate School of Business (video ID: -k5BxnTGsvQ)


People often turn to large language models to interpret lab results because traditional healthcare can feel slow or confusing. The speaker, despite authoring a book on health‑data privacy, admitted to uploading personal lab results to an AI portal, saying, “I had just put my own protected health information into a completely unprotected portal.” Roughly 40 million users per day ask ChatGPT health‑related questions, illustrating the scale of this impulse.

The Scope of the Problem

Health‑data hacking has become commonplace. In 2024, three in four Americans experienced some form of health‑data breach, and about 50% of the population is expected to have their data hacked in any given year. Medical identity theft imposes a heavy burden: on average it takes 200 hours and $14,000 to resolve. Data brokers compile and sell lists based on conditions such as HIV/AIDS, depression, or weight‑loss goals, and companies like GoodRx, BetterHelp, and Hims & Hers have been cited for selling user data to advertisers.

The AI Factor

HIPAA, passed 30 years ago, was designed to protect information within formal healthcare systems and never applied to digital health apps or AI tools. Today, AI companies are launching features that integrate directly with personal health records, while privacy policies remain fluid. Companies may alter terms of service or even sell data during bankruptcy, as seen with 23andMe. Some AI firms, such as Anthropic, are reportedly pulling back on internal privacy safeguards to stay competitive in the “AI arms race.” The speaker warns, “The cat is out of the bag when it comes to our healthcare data privacy.”

Navigating the Future

To mitigate risk, users should exercise rights under state privacy laws—California, for example, allows individuals to access or delete collected data. Before linking medical records to an AI, consider potential consequences for employment, security clearances, and legal exposure, including subpoenas for reproductive‑health information. Safer practices include redacting identifying details or asking questions in general terms rather than providing exact lab values. The speaker advises, “If you do end up connecting to your medical record, slow down.” Ultimately, a new societal paradigm is needed that supports victims of data breaches rather than attempting to lock all data down, because “the kicker is most of your data is already out there. So if you use these tools, at least you get some upside.”
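The redaction advice above can be automated in a small way. The sketch below, a minimal illustration rather than a complete de‑identification tool, strips a few common identifiers (dates, phone numbers, medical record numbers, email addresses) from lab text before it is pasted into a chatbot. The `scrub` helper and its patterns are assumptions for illustration, not something described in the video.

```python
import re

# Illustrative patterns only; real de-identification (e.g., HIPAA Safe
# Harbor's 18 identifier categories) covers far more than this.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "MRN: 483920, drawn 03/14/2025, call 555-867-5309"
print(scrub(note))  # [MRN], drawn [DATE], call [PHONE]
```

Even with a scrubber like this, the speaker's broader advice still applies: prefer asking questions in general terms ("what does a high ALT mean?") over pasting documents at all.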

  Takeaways

  • About 40 million people per day use AI chatbots like ChatGPT for health queries, often uploading sensitive personal information.
  • Three‑quarters of Americans faced health‑data hacking in 2024, and medical identity theft typically costs $14,000 and 200 hours to resolve.
  • HIPAA does not cover digital health apps or AI tools, allowing companies to sell or change data use policies without patient consent.
  • Users can reduce exposure by invoking state privacy rights, redacting details, and avoiding direct connections between AI and electronic medical records.
  • A broader societal shift is required to support victims of data breaches rather than trying to keep all health data locked away.

Frequently Asked Questions

Why does HIPAA not protect data shared with AI health tools?

HIPAA was created 30 years ago to safeguard information within formal healthcare settings and does not extend to digital health apps or AI platforms, leaving user data vulnerable to collection, sale, or policy changes by those companies.

How can individuals reduce privacy risks when using AI for health queries?

Individuals can invoke state privacy statutes to request deletion of their data, avoid uploading exact lab results, use generic phrasing, and refrain from linking AI directly to electronic medical records, thereby limiting exposure to employment, security‑clearance, or legal repercussions.

