Artificial intelligence applications are emerging rapidly in the healthcare sector, with products from major technology firms such as OpenAI, Anthropic, and Google gaining significant traction among users. These AI tools promise to streamline medical consultations, analyze records, and deliver wellness advice instantly. As consumers increasingly turn to digital health services, especially amid rising healthcare costs and access challenges, new concerns are surfacing about the privacy of the sensitive health data entrusted to these technologies. While the companies tout airtight security protocols and promise compliance, questions about regulatory oversight and actual accountability persist. Many users see these platforms as a quick alternative to traditional healthcare, but understanding the boundaries of their privacy protections is crucial for informed use.
Past reporting on the rollout of AI health apps from OpenAI, Anthropic, and Google often highlighted their diagnostic capabilities and potential to reduce administrative burdens in hospitals. Earlier articles largely framed these products as innovative tools supplementing care, with less attention given to the legal ambiguity and privacy risks now under discussion. Coverage in recent months focused more on user adoption rates and model accuracy; the conversation has since shifted toward whether industry-standard protections truly apply to these new entrants.
Are Tech Companies Subject to Health Data Privacy Laws?
OpenAI, Anthropic, and Google’s healthcare-related products—including ChatGPT Health and Claude for Healthcare—operate in a regulatory gray area. Legal experts point out that these AI-driven platforms are not generally classified as “covered entities” under the Health Insurance Portability and Accountability Act (HIPAA), which primarily applies to health plans, hospitals, clinics, and the business associates that handle protected health information on their behalf. Instead, these companies set their own standards for collecting, storing, and sharing user data, with no required adherence to federal health data safeguards. As Andrew Crawford, senior counsel at the Center for Democracy and Technology, observed, the result is “that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using people’s health data.”
How Are Security Promises Framed in AI Healthcare Apps?
Artificial intelligence health apps from leading tech firms highlight robust encryption, data isolation, and user controls in their marketing. OpenAI claims its suite of products secures user interactions with features such as chat deletion, multifactor authentication, and commitments not to use personal health data for AI training. Its partnership with b.well for handling medical records emphasizes adherence to privacy-friendly frameworks and voluntary compliance with standards like the CARIN Alliance Trust Framework. However, legal observers note that these assurances are often company policies, not legal obligations.
“Generally speaking, a lot of companies say they’re HIPAA compliant, but what they mean is that they’re not a HIPAA regulated entity, therefore they have no obligation,”
said Sara Geoghegan, senior counsel at the Electronic Privacy Information Center.
What Are the Real-World Risks for Users?
Despite technical safeguards, significant risks remain for consumers who share health information with AI platforms. The potential for data breaches, unauthorized sharing, and AI “hallucinations” (instances in which models confidently generate incorrect information) complicates the issue. Health data, a frequent target for cyberattacks even in regulated settings, faces greater exposure when it moves outside the conventional system. When protection of health information is governed primarily by company-drafted terms of service, users may have limited remedies if their data is mishandled or sold.
“They’re not mandated by HIPAA,”
said Carter Groome, CEO of First Health Advisory, who also described the companies’ security commitments as often “hyperbolic” in an effort to attract user trust.
Current trends show these AI-powered health apps continue to draw widespread use. The convenience, immediacy, and cost-effectiveness of tools like ChatGPT Health appeal to many consumers who feel underserved or priced out of traditional care. Nevertheless, the lack of universally enforceable privacy restrictions on these new platforms heightens the risk of misuse or exploitation of sensitive data. The experience of genetic testing businesses such as 23andMe, whose corporate upheaval raised alarms about the fate of customers’ genetic data, underlines the vulnerability of health data when handled outside tightly regulated environments.
Awareness of the limits to legal protection is essential for anyone considering AI health applications for their medical questions or records. While tech firms may voluntarily mirror industry standards or pursue certifications, their obligations are not equivalent to those imposed on traditional healthcare professionals. Before sharing detailed medical information, users should carefully review privacy policies, seek out independent assessments of security practices, and recognize the potential consequences of entrusting sensitive data to unregulated technology platforms. Ultimately, the decision to interact with AI health apps must balance accessibility and convenience against the realities of privacy risk—especially in a landscape where regulation lags innovation.
