What happened?
In November 2025, Microsoft reported a new malware campaign called SesameOp, which uses a legitimate artificial intelligence (AI) service to secretly communicate with attackers.
The malware was hidden inside modified developer software which, once installed, connected to the OpenAI platform using valid credentials controlled by the attacker.
Microsoft's investigation found that the malicious code was embedded in otherwise legitimate developer tools: Visual Studio utilities had been modified before distribution. In other words, the activity did not stem from suspicious or pirated software, but from trusted programs that had been quietly altered to include hidden components. The campaign is understood to have been active for several months before being detected. Importantly, no vulnerability in OpenAI or Microsoft services was exploited; the attackers simply used the platform in an unintended way for covert communication.1
This connection enabled the malware to receive instructions and send information back through the same trusted service that many organisations already use for AI‑related tasks.
Because the malware used encrypted communication over standard cloud channels, the activity could easily blend in with legitimate traffic and go unnoticed. Microsoft confirmed that this activity was limited and that there is no evidence of widespread compromise.2
For context, the affected service, "OpenAI Assistants API", is a tool that allows software developers to integrate AI into their applications. Companies don't need to build their own AI models; instead, their software communicates with OpenAI's models through the API.
For example, a company may have a customer support chatbot on its website. When a customer asks a question about an order or product, the chatbot sends the question to OpenAI's API, receives an AI-generated answer, and displays it to the customer. The AI model itself remains on OpenAI's servers; the company's chatbot simply interacts with it. This means normal AI usage, including ChatGPT, is not affected by this incident.
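To make this concrete, the sketch below shows in simplified form how a support chatbot's backend might forward a customer question to OpenAI's API. It is illustrative only: the OpenAI Python SDK, the environment-variable API key, and the model name are assumptions for the example, not details drawn from the incident report.

```python
# A minimal sketch of how a support chatbot might call OpenAI's API.
# Assumptions (not from the incident report): the OpenAI Python SDK is
# installed, an API key is set in the OPENAI_API_KEY environment variable,
# and the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_customer_question(question: str) -> str:
    """Send the customer's question to OpenAI and return the generated reply.

    The model itself runs on OpenAI's servers; this application only sends
    the prompt and receives the response over the API.
    """
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(answer_customer_question("Where can I check the status of my order?"))
```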
The affected API is scheduled for retirement by August 2026. In the meantime, organisations using it should review usage and plan migration or decommissioning in the usual governance cycle. Microsoft and OpenAI have provided detailed migration guidance for the Assistants API.
So what?
This incident demonstrates that attackers are increasingly finding ways to hide their activities within normal business operations. By using trusted services, such as cloud and AI platforms, they can bypass traditional monitoring and evade detection.3
For organisations, this highlights the importance of visibility and oversight over all AI and cloud services. To address these risks, organisations should:
- Ensure that all AI or API usage within the organisation is approved and monitored.
- Log activity and review it for unusual or unexpected connections (a simple log-review sketch follows this list).
- Include AI and cloud services in security policies, audits, and incident response planning.
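As a simple illustration of the log-review step above, the sketch below flags connections to well-known AI API domains from hosts that are not on an approved list. The log format, column names, host names, and domain list are all hypothetical; in practice this review would be driven by the organisation's own proxy or firewall tooling.

```python
# A minimal sketch of reviewing exported proxy logs for AI/API traffic.
# Assumptions (hypothetical, not from the incident report): logs have been
# exported to CSV with "source_host" and "destination" columns, and the
# organisation keeps a list of hosts approved to call AI services.
import csv

# Domains of AI APIs the organisation wants visibility over (examples).
AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}

# Hosts approved to call those services (hypothetical inventory).
APPROVED_HOSTS = {"chatbot-backend-01", "data-science-vm-02"}


def flag_unapproved_ai_traffic(log_path: str) -> list[dict]:
    """Return log rows where an unapproved host contacted an AI API domain."""
    findings = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            destination = row.get("destination", "").lower()
            source = row.get("source_host", "")
            if destination in AI_API_DOMAINS and source not in APPROVED_HOSTS:
                findings.append(row)
    return findings


if __name__ == "__main__":
    for finding in flag_unapproved_ai_traffic("proxy_logs.csv"):
        print("Review:", finding["source_host"], "->", finding["destination"])
```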
Even organisations not using the affected service should treat this as a reminder that all trusted platforms can be misused if not properly monitored. While this incident does not pose an immediate threat to business operations, it reinforces for leadership the importance of strong governance over technology use.
Recommended actions
- Review and monitor AI usage: Identify all AI or cloud services in use and ensure activity is logged.
- Check for unusual connections: Investigate any unexpected traffic to AI services or external APIs.
- Maintain updated security tools: Confirm endpoint protection and network monitoring solutions are current and active.
- Raise awareness among IT/security teams: Ensure staff understand that attackers may attempt to misuse any trusted service for covert communications.
- Plan for service decommissioning: If using the affected API, prepare to migrate or decommission it ahead of its retirement in August 2026.
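For the final point, the sketch below shows one way an organisation could inventory the assistants defined under its API key so that owners can plan migration or decommissioning. It assumes the OpenAI Python SDK and an API key in the environment; it is a rough illustration, not the official migration guidance referenced above, and object fields may differ across SDK versions.

```python
# A minimal sketch of inventorying Assistants API usage ahead of its retirement.
# Assumptions (not from the advisory): the OpenAI Python SDK is installed and
# the organisation's API key is available in the OPENAI_API_KEY environment
# variable.
from openai import OpenAI

client = OpenAI()


def list_assistants_for_migration() -> None:
    """Print the assistants defined under this API key so their owners can
    plan migration or decommissioning before the service is retired."""
    page = client.beta.assistants.list(limit=100)
    for assistant in page.data:
        print(f"id={assistant.id} name={assistant.name!r} model={assistant.model}")


if __name__ == "__main__":
    list_assistants_for_migration()
```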
Future threat landscape
SesameOp illustrates a growing trend: attackers are taking advantage of trusted platforms, including AI services, to conduct covert operations. As AI becomes increasingly integrated into business processes, organisations will need to treat these platforms as part of their security perimeter.
AI and cloud services are not inherently risky; the risk comes from lack of oversight. Organisations that maintain clear governance, monitor usage, and enforce access controls will be better protected against emerging techniques such as this.