OpenAI’s New Health Tab Is a Privacy Minefield

According to CNET, OpenAI announced on Wednesday that it’s launching a new ChatGPT Health tab, currently in beta, dedicated to medical and wellness inquiries. The company says “hundreds of millions” of people already use ChatGPT weekly for health questions, and this new feature aims to centralize records and create a private area for those chats. It will encourage connecting third-party apps like Apple Health and will have a separate chat history and memory feature. OpenAI states health conversations won’t be used to train its models and promises encryption and multi-factor authentication. However, the service explicitly states it is not for diagnosis or treatment, and privacy experts like Andrew Crawford of the Center for Democracy and Technology warn that companies like OpenAI are not bound by HIPAA protections. You can currently join a waitlist for access.

Privacy is the big problem

Here’s the thing: this is a massive data grab happening in a legal vacuum. As Andrew Crawford from the Center for Democracy and Technology points out, HIPAA doesn’t apply here; it only binds covered entities like doctors, hospitals, and insurers, not consumer tech companies. We’re talking about a company that will now be collecting incredibly sensitive health data (your symptoms, your worries, your fitness app logs) with no comprehensive federal privacy law governing how it’s used, shared, or sold. OpenAI says it won’t use the chats for training and will encrypt the data. But that’s their policy, not the law, and policies can change. And in a world of constant data breaches, trusting a tech company with your deepest health anxieties is a monumental leap of faith. They’re basically asking us to take their word for it.

The danger of bad advice is real

Even if you trust them with the data, can you trust the answers? The disclaimer that it’s “not for diagnosis or treatment” is a legal shield, but we all know how people will use it. They already do. Remember the guy who ended up hospitalized after ChatGPT told him to replace salt with sodium bromide? That’s not a one-off bug; it’s a core risk of LLMs. They hallucinate. They confabulate. And when the topic is your health, a confident-sounding error isn’t just inconvenient—it can be deadly. The potential for harm in mental health conversations is especially terrifying. So what’s the plan when, not if, the bot gives dangerously wrong advice?

Where this fits in the market

This is a clear land grab in the “wellness tech” space. OpenAI sees all those health queries and wants to own that vertical, pulling data from your other apps to create a sticky, all-in-one health hub. It puts pressure on everyone from WebMD to telehealth apps and even wearable companies. But the competitive advantage comes with a huge asterisk: liability and trust. Other health tech companies operate under stricter scrutiny or within actual care frameworks. OpenAI is walking in from the consumer internet side, where, well, there basically aren’t any rules. It’s a bold move, but one that could backfire spectacularly with a single high-profile data incident or medical mishap.

So should you use it?

Look, I get the appeal. The healthcare system is frustrating, and getting quick, free answers is tempting. But this feels like classic tech overreach: taking on a problem it’s uniquely unqualified to solve safely. For now, it’s a privacy minefield wrapped in a liability disclaimer. If you’re curious, read the fine print on that OpenAI blog post very, very carefully. But maybe treat it as a fancy journal or symptom tracker, not a medical resource. Because when your health is on the line, you probably want more than a chatbot’s best guess and a promise that your data is safe.
