OpenAI’s API Gets Weaponized in ‘SesameOp’ Backdoor


According to Infosecurity Magazine, Microsoft’s Detection and Response Team discovered in July 2025 that threat actors had weaponized OpenAI’s Assistants API to deploy a backdoor called SesameOp that manages compromised devices remotely. The attackers maintained a presence for several months using a complex arrangement of internal web shells and compromised Microsoft Visual Studio utilities. SesameOp consists of a heavily obfuscated DLL loader called Netapi64.dll and a .NET-based backdoor named OpenAIAgent.Netapi64 that leverages OpenAI as its command-and-control channel. Microsoft published its findings about this sophisticated threat on November 3, revealing that instead of using traditional C2 methods, the backdoor exploits legitimate OpenAI infrastructure. The Assistants API that SesameOp abuses is scheduled for deprecation in August 2026, when OpenAI plans to replace it with the Responses API.


How This Sneaky Backdoor Operates

Here’s the thing that makes SesameOp particularly clever: it’s not actually using OpenAI’s AI capabilities. The malware doesn’t execute models or use agent SDKs. Instead, it treats the Assistants API like a sophisticated messaging service. The backdoor fetches commands through OpenAI’s infrastructure, decrypts and executes them locally, then sends the results back as messages. Basically, the attackers are using legitimate AI infrastructure as their personal encrypted chat room.
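To make that pattern concrete, here’s a rough Python sketch of the fetch–decrypt–execute–respond loop. The helper names and the transport are placeholders, not SesameOp’s actual code; the point is simply that the “AI” service is reduced to a mailbox the operator and the implant both read and write.

```python
# Hypothetical sketch of the relay pattern Microsoft describes: the hosted API
# is treated purely as a message store, never as a model. All helper functions
# here (fetch_pending_messages, post_result, decrypt_command, run_command) are
# illustrative placeholders, not the malware's real implementation.
import time

def fetch_pending_messages(thread_id: str) -> list[str]:
    """Poll the messaging service for new operator 'messages' (placeholder)."""
    return []  # in the real attack, this would be a listing of API messages

def post_result(thread_id: str, payload: str) -> None:
    """Write the command output back as another 'message' (placeholder)."""
    pass

def decrypt_command(blob: str) -> str:
    """Decompress and decrypt an operator command (placeholder)."""
    return blob

def run_command(cmd: str) -> str:
    """Pretend to execute locally and capture output (intentionally inert here)."""
    return f"ran: {cmd}"

def c2_loop(thread_id: str, interval: int = 60) -> None:
    # The whole backdoor boils down to this loop: fetch, decrypt, execute, reply.
    while True:
        for blob in fetch_pending_messages(thread_id):
            result = run_command(decrypt_command(blob))
            post_result(thread_id, result)
        time.sleep(interval)
```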

The technical execution is pretty sophisticated. The DLL gets loaded at runtime using .NET AppDomainManager injection, which is a defense evasion technique that makes detection harder. And they’re not taking any chances with their communications—everything gets compressed to minimize size and encrypted using both symmetric and asymmetric methods. So even if someone’s monitoring network traffic, they’d just see what looks like normal API calls to OpenAI.
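If you want a concrete picture of that compress-then-encrypt step, here’s a minimal Python sketch using zlib plus a per-message AES-GCM key wrapped with RSA-OAEP from the cryptography package. It’s a generic illustration of hybrid encryption with compression, not SesameOp’s actual scheme, whose exact algorithms and key handling aren’t spelled out in the coverage.

```python
# Generic compress-then-hybrid-encrypt sketch: zlib for size, AES-GCM for the
# payload, RSA-OAEP to wrap the symmetric key. Illustrative only.
import os
import zlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

def wrap_payload(plaintext: bytes, recipient_public_key) -> dict:
    compressed = zlib.compress(plaintext)            # shrink before encrypting
    aes_key = AESGCM.generate_key(bit_length=256)    # fresh symmetric key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, compressed, None)
    wrapped_key = recipient_public_key.encrypt(aes_key, OAEP)  # asymmetric key wrap
    return {"key": wrapped_key, "nonce": nonce, "data": ciphertext}

# Round-trip example with a locally generated keypair.
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = wrap_payload(b"hello from the operator", priv.public_key())
aes_key = priv.decrypt(blob["key"], OAEP)
print(zlib.decompress(AESGCM(aes_key).decrypt(blob["nonce"], blob["data"], None)))
```

The design point is the same one the attackers exploit: once the payload is compressed and wrapped this way, an observer on the wire sees only small, opaque blobs riding inside otherwise ordinary HTTPS calls.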

Why This Changes the Game

This is significant because it represents a shift in how attackers think about infrastructure. Instead of setting up their own suspicious servers in random data centers, they’re leveraging trusted, legitimate services that most security tools wouldn’t flag. I mean, who’s going to block traffic to OpenAI? Most organizations are actively encouraging their employees to use AI tools.

And the timing is interesting too. OpenAI is already planning to deprecate the Assistants API next year. But that doesn’t really solve the immediate problem: attackers could just pivot to whatever replaces it. The fundamental issue is that legitimate cloud services are becoming the new attack vector. We’ve seen this pattern before with services like Google Docs or Dropbox being used for C2, but AI infrastructure adds another layer of complexity.

What Comes Next

Looking ahead, this probably won’t be the last we see of AI infrastructure being weaponized. The cat’s out of the bag now. Other threat actors will likely study this technique and adapt it for their own purposes. Microsoft’s recommendations focus on standard security hygiene, but the real challenge will be detecting malicious activity within legitimate services.

So what can organizations do? Well, the old rules still apply—monitor for unusual patterns, even in trusted services. But we’re entering an era where security teams need to understand that “legitimate” doesn’t always mean “safe.” The boundaries are blurring, and attackers are getting creative about hiding in plain sight. This SesameOp discovery is basically a wake-up call that our threat models need to evolve alongside the services we depend on.
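As a starting point, even a check as simple as the sketch below can surface oddities: flag traffic to api.openai.com coming from hosts or processes you wouldn’t expect to call it. The log schema and allowlist here are made up for illustration; swap in whatever proxy or EDR telemetry you actually collect.

```python
# Rough "trusted destination, unusual source" check. The CSV columns
# (host, process, destination) and the allowlist are hypothetical examples.
import csv
import sys

EXPECTED_PROCESSES = {"chrome.exe", "msedge.exe", "python.exe"}  # example allowlist

def flag_suspicious(log_path: str) -> None:
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: host, process, destination
            if "api.openai.com" in row["destination"] and row["process"].lower() not in EXPECTED_PROCESSES:
                print(f"[!] {row['host']}: {row['process']} -> {row['destination']}")

if __name__ == "__main__":
    flag_suspicious(sys.argv[1])
```

It won’t catch everything, but it captures the mindset: the destination being reputable is no longer a reason to stop looking at who is talking to it.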
