On 3 July 2025, the US Department of State issued an internal cable to its embassies and consulates describing a campaign of "sophisticated manipulation" that used AI-generated voice and text to impersonate US Secretary of State and National Security Advisor Marco Rubio.
The deepfake was used to contact at least five high-level government officials via the encrypted messaging service Signal: three foreign ministers, a US state governor, and a member of Congress. The impersonator also adopted a fraudulent email address, Marco.Rubio[@]state.gov, in an attempt to gain access to private communication channels.
The US Federal Bureau of Investigation (FBI) and the State Department launched coordinated investigations and issued directives to guard against voice-phishing ("vishing") attacks, stating that the likely aim of the campaign was not financial gain but illicit access to official communication channels.
From a corporate security standpoint, this campaign should set off alarm bells for executives. If AI can convincingly emulate the voice of a US cabinet-level official and breach that circle of trust, the same capability can be turned on CEOs, board members, or external stakeholders to extract insider data.
This AI voice-impersonation campaign follows a string of recent incidents, including an AI voice clone of White House Chief of Staff Susie Wiles in May 2025 and the "Signalgate" affair, in which a journalist was inadvertently added to a Signal group chat used by senior US national-security officials.
As voice-based AI becomes cheaper and more accessible, any figure of influence may be targeted. Business leaders must upgrade their defences, rethink verification norms, and prepare for a reality in which the voice on the line may not belong to the person they trust.
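What "rethinking verification norms" can mean in practice is illustrated by the minimal sketch below: a sensitive request that arrives over one channel (say, a voice call) is acted on only after a one-time code is confirmed over a separate, pre-registered channel that the caller does not control. This is an illustrative Python sketch, not any organisation's actual procedure; the `DIRECTORY` contents and the `issue_challenge`/`confirm` helpers are hypothetical names invented for this example.

```python
import hmac
import secrets

# Hypothetical directory of pre-registered out-of-band contacts,
# established in advance (e.g. in person or via HR) -- never taken
# from the incoming message itself, which an impersonator controls.
DIRECTORY = {
    "cfo": {"verified_channel": "sms:+1-555-0100"},
}

def issue_challenge(requester_id: str) -> str:
    """Generate a one-time code and, in a real system, deliver it over
    the requester's pre-registered channel, not the channel the
    request arrived on."""
    contact = DIRECTORY[requester_id]["verified_channel"]
    code = secrets.token_hex(4)
    print(f"[out-of-band] sending one-time code to {contact}")  # placeholder delivery
    return code

def confirm(expected_code: str, supplied_code: str) -> bool:
    """Constant-time comparison, so the check itself leaks nothing."""
    return hmac.compare_digest(expected_code, supplied_code)

# Usage: a caller claiming to be the CFO requests a wire transfer.
challenge = issue_challenge("cfo")
# The real CFO reads the code back from their registered device;
# a deepfake caller who controls only the voice channel cannot.
assert confirm(challenge, challenge)
```

The design point is channel independence: because the confirmation path is fixed in advance, a convincing voice alone is never sufficient to authorise anything.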