According to Mashable, researchers on Microsoft’s Detection and Response Team warned on Monday that cybercriminals are exploiting OpenAI’s Assistants API as a backdoor for malware. The researchers discovered the novel threat in July while investigating what they called a “sophisticated security incident” and named the backdoor SesameOp. The attackers use the OpenAI API as a command-and-control channel to stealthily communicate with and orchestrate malicious activity inside compromised environments. Microsoft assessed that the backdoor supports long-term espionage operations, with threat actors harvesting data from the networks they compromise. The researchers emphasized this isn’t a vulnerability but rather a misuse of built-in capabilities in the Assistants API, which is scheduled to be replaced by OpenAI’s Responses API next year.
A clever but concerning misuse
Here’s what makes this approach so sneaky. Instead of setting up their own command servers that could be tracked and blocked, these hackers are basically using OpenAI’s infrastructure as their communication channel. The malware fetches commands through the legitimate Assistants API, then executes them on compromised devices. It’s like using a public bulletin board system to leave coded messages that only your malware can understand.
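To make the “bulletin board” idea concrete, here’s a deliberately benign, hypothetical sketch of the communication pattern (this is not SesameOp’s actual code, and the thread ID is a placeholder). The point is that reading messages from an Assistants API thread is just an ordinary authenticated HTTPS call to api.openai.com:

```python
# Hypothetical illustration of the communication pattern, NOT SesameOp's
# actual code. To a network monitor, this is an ordinary HTTPS request to
# api.openai.com, indistinguishable from legitimate assistant usage.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# An operator could write "tasking" into a shared thread out of band;
# a client on another machine simply reads the thread back.
THREAD_ID = "thread_abc123"  # placeholder ID, shared out of band

messages = client.beta.threads.messages.list(thread_id=THREAD_ID)
for message in messages.data:
    for block in message.content:
        if block.type == "text":
            # In a real backdoor, this payload would be decoded and acted
            # on; here we only print it to show the data flow.
            print(block.text.value)
```

Nothing in that exchange is anomalous from the API’s perspective, which is exactly the problem.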
And honestly? It’s brilliant in a terrifying way. These criminals are piggybacking on OpenAI’s reputation and infrastructure, making their traffic look like normal API calls. Who’s going to block connections to OpenAI? That would break legitimate business operations. So the malicious traffic blends right in with the legitimate stuff.
The bigger AI security problem
This isn’t just about one API. It points to a much larger issue we’re going to see more of as AI becomes embedded everywhere. When you build powerful tools that developers can integrate into their applications, you’re also creating potential attack vectors. The very features that make these APIs useful—their flexibility, their cloud-based nature, their ability to process and store data—also make them attractive to attackers.
Remember when everyone was worried about AI safety meaning robots taking over? Well, the real immediate threat seems to be criminals using AI infrastructure to hide their tracks. Microsoft’s discovery suggests we’re already in that future.
What happens now?
Microsoft has published its technical analysis and recommendations, including auditing firewalls and frequently reviewing web server logs. But here’s the interesting twist: OpenAI was already planning to deprecate the Assistants API next year anyway, replacing it with the Responses API.
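In practice, that log review can be as simple as flagging hosts that talk to api.openai.com but have no business doing so. A minimal sketch, assuming a simple “client destination” proxy log format (the log path, format, and allowlist below are illustrative assumptions, not part of Microsoft’s guidance):

```python
# Sketch of the kind of outbound-log review Microsoft recommends, assuming
# a proxy log with "client_host destination_host" per line. The path,
# format, and allowlist are illustrative assumptions.
ALLOWED_OPENAI_CALLERS = {"app-server-01", "ml-worker-02"}  # hosts expected to call OpenAI

with open("/var/log/proxy/outbound.log") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 2:
            continue
        client_host, destination = parts[0], parts[1]
        if "api.openai.com" in destination and client_host not in ALLOWED_OPENAI_CALLERS:
            # Unexpected hosts reaching the OpenAI API are worth a closer look.
            print(f"review: {client_host} -> {destination}")
```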
So should developers rush to migrate? Probably. OpenAI has a migration guide available, and given this security concern plus the planned deprecation, there’s not much reason to stick with the older API. But here’s the real question: Will the new Responses API be any more secure against this kind of creative misuse? Or are we just playing whack-a-mole with determined attackers who will always find new ways to abuse legitimate tools?
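For developers weighing the move, the shape of the change is that the Responses API collapses the Assistants API’s assistant/thread/run objects into a single call. A simplified sketch of what the newer call looks like, based on OpenAI’s published SDK (model name and prompt are placeholders; consult the migration guide for the authoritative mapping):

```python
# Simplified sketch of a Responses API call; model and input are
# placeholders. See OpenAI's migration guide for the full mapping from
# assistants/threads/runs to this single-call shape.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.responses.create(
    model="gpt-4o",
    input="Summarize yesterday's error logs.",
)
print(response.output_text)
```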
The pattern here is familiar. We saw it with cloud storage services being used for data exfiltration, with legitimate remote access tools being used for unauthorized access, and now with AI APIs being used for command-and-control. The cat-and-mouse game continues, just with fancier tools.
