According to Silicon Republic, OpenAI is launching a new dedicated service called ChatGPT Health for health-related queries. The company says over 230 million people globally ask ChatGPT health and wellness questions every week. The new service is marketed as a secure “one-stop shop” where users can connect medical records and apps like Apple Health and MyFitnessPal for analysis. Critically, OpenAI states that conversations within ChatGPT Health will not be used to train its foundation models, a direct response to privacy concerns. The service was developed over two years with input from hundreds of physicians and is being evaluated against a clinical framework. Interested users outside the EEA, Switzerland, and the UK can join a waitlist, with a broader web and iOS rollout planned in the coming weeks.
Privacy Promise vs. Reality
Here’s the thing: this is a massive, necessary concession from OpenAI. CEO Sam Altman himself admitted last year that ChatGPT doesn’t offer doctor-patient confidentiality, even as people share their most personal details with it. And let’s be real: the default of using chats for training was a non-starter for anything truly sensitive. So this walled-off “Health” garden is their attempt to fix that. But the promise that data flows in and not back out is only as good as their security and our trust. A recent report of a VPN service capturing and selling AI chat data shows just how valuable this information is. OpenAI isn’t just being nice; it’s trying to build the trust required to become a central hub for our most private data. The question is, will it work?
The Competitive Landscape Just Shifted
This move isn’t happening in a vacuum. It directly pressures every other consumer-facing AI assistant, from Anthropic’s Claude to Google’s Gemini, to establish similar privacy-forward, dedicated health features. It also positions ChatGPT as a potential aggregator and interpreter for the fragmented health app ecosystem. Think about it: if it can pull data from Apple Health, MyFitnessPal, and your medical records, it starts to look less like a chatbot and more like a health data platform. That’s a powerful position. The likely losers are the standalone symptom-checker apps and maybe even basic Google searches for health info. Why scroll through sketchy forums when an AI can analyze your specific data? Of course, that’s *if* you’re willing to give it that data.
The Big Picture and The Catch
OpenAI is clearly trying to professionalize its most common, and most risky, use case. Working with hundreds of doctors and a clinical evaluation framework is a step towards legitimacy. But let’s not mistake this for a medical device or a replacement for a doctor. It’s a sophisticated, and possibly very helpful, source of information. The big catch? Availability. Launching initially only *outside* the EEA, Switzerland, and the UK is telling. Those regions have strict data privacy laws (GDPR, for one), and navigating that regulatory maze is probably too complex for a “small cohort” test. So OpenAI is starting where the rules might be more flexible. It’s a smart rollout strategy, but it highlights that even with these new privacy promises, the real-world rules are still being written. Basically, they’re building the plane while flying it, over a field of legal landmines.
