© 2025 NEWSLINKER - Powered by LK SOFTWARE
Cybersecurity

Tech Giants Launch AI Health Apps, Face Privacy Scrutiny

Highlights

  • AI health apps often lack the legal privacy protections hospitals must follow.

  • Security features are frequently company promises, not enforceable obligations.

  • Users face real risks when sharing health data outside regulated settings.

Samantha Reed
Last updated: 11 February, 2026 - 11:20 pm

Artificial intelligence applications are emerging rapidly in the healthcare sector, with products from major technology firms such as OpenAI, Anthropic, and Google gaining significant traction among users. These AI tools promise to streamline medical consultations, analyze records, and deliver wellness advice instantly. As consumers increasingly turn to digital health services, especially amid rising healthcare costs and access challenges, new concerns are surfacing about the privacy of sensitive health data entrusted to these technologies. While companies promote strong security protocols and make compliance promises, questions about regulatory oversight and actual accountability persist. Many users see these platforms as a quick alternative to traditional healthcare, but understanding the boundaries of their privacy protections is crucial for informed use.

Contents

  • Are Tech Companies Subject to Health Data Privacy Laws?
  • How Are Security Promises Framed in AI Healthcare Apps?
  • What Are the Real-World Risks for Users?

Past reporting on the rollout of AI health apps from OpenAI, Anthropic, and Google often highlighted their diagnostic capabilities and potential to reduce administrative burdens in hospitals. Earlier articles largely framed these products as innovative tools supplementing care, with less attention given to the legal ambiguity and privacy risks now under discussion. Reports over previous months focused more on user adoption rates and model accuracy, while the conversation has since shifted toward questioning whether industry-standard protections truly apply to these new entrants.

Are Tech Companies Subject to Health Data Privacy Laws?

OpenAI, Anthropic, and Google’s healthcare-related products—including ChatGPT Health and Claude for Healthcare—operate in a regulatory gray area. Legal experts point out that these AI-driven platforms are not generally classified as “covered entities” under the Health Insurance Portability and Accountability Act (HIPAA), which primarily applies to hospitals, clinics, and specific business partners that handle electronic protected health information. Instead, these companies set their own standards for collecting, storing, and sharing user data—with no required adherence to federal health data safeguards. As Andrew Crawford, a senior counsel at the Center for Democracy and Technology, observed, the result is “that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using peoples’ health data.”

How Are Security Promises Framed in AI Healthcare Apps?

Artificial intelligence health apps from leading tech firms highlight robust encryption, data isolation, and user control features in their presentations. OpenAI claims its suite of products secures user interactions with features such as chat deletion, multifactor authentication, and commitments not to use personal health data for AI training purposes. Their partnership with b.well for handling medical records emphasizes adherence to privacy-friendly frameworks and voluntary compliance with standards like the CARIN Alliance Trust Framework. However, legal observers note that these assurances are often company policies, not legal obligations.

“Generally speaking, a lot of companies say they’re HIPAA compliant, but what they mean is that they’re not a HIPAA regulated entity, therefore they have no obligation,”

said Sara Geoghegan, senior counsel at the Electronic Privacy Information Center.

What Are the Real-World Risks for Users?

Despite technical safeguards, significant risks remain for consumers who share health information with AI platforms. The potential for data breaches, unauthorized sharing, and AI “hallucinations”—instances when models confidently generate incorrect information—complicates the issue. Healthcare, a frequent target for cyberattacks even in regulated settings, faces increased exposure when data moves outside the conventional system. When protection of health information is governed primarily by company-drafted terms of service, users may have limited remedies if their data is mishandled or sold.

“They’re not mandated by HIPAA,”

said Carter Groome, CEO of First Health Advisory, who also described the companies’ security commitments as often “hyperbolic” in an effort to attract user trust.

Current trends show these AI-powered health apps continue to draw widespread use. The convenience, immediacy, and cost-effectiveness of tools like ChatGPT Health appeal to many consumers who feel underserved or priced out of traditional care. Nevertheless, the lack of universally enforceable privacy restrictions on these new platforms heightens the risk of misuse or exploitation of sensitive data. The experience of genetic testing businesses such as 23andMe, which raised privacy alarms during company transitions, underlines the vulnerability of health data when handled outside tightly regulated environments.

Awareness of the limits to legal protection is essential for anyone considering AI health applications for their medical questions or records. While tech firms may voluntarily mirror industry standards or pursue certifications, their obligations are not equivalent to those imposed on traditional healthcare professionals. Before sharing detailed medical information, users should carefully review privacy policies, seek out independent assessments of security practices, and recognize the potential consequences of entrusting sensitive data to unregulated technology platforms. Ultimately, the decision to interact with AI health apps must balance accessibility and convenience against the realities of privacy risk—especially in a landscape where regulation lags innovation.


By Samantha Reed
Samantha Reed is a 40-year-old, New York-based technology and popular science editor with a degree in journalism. After beginning her career at various media outlets, her passion and area of expertise led her to a significant position at Newslinker. Specializing in tracking the latest developments in the world of technology and science, Samantha excels at presenting complex subjects in a clear and understandable manner to her readers. Through her work at Newslinker, she enlightens a knowledge-thirsty audience, highlighting the role of technology and science in our lives.