Microsoft Threat Intelligence blocks a credential phishing campaign using AI-generated code to conceal its malicious payload.
Microsoft Threat Intelligence recently blocked a credential phishing campaign that likely used AI-generated code to hide its payload. The attackers appear to have leveraged a large language model (LLM) to generate the malicious SVG file, which disguised its intent with business terminology and complex, unnatural code that Microsoft Security Copilot noted “would not typically be written by a human.”
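As a rough illustration of how a defender might triage this kind of attachment, the Python sketch below flags traits commonly seen in SVG-based phishing: embedded script, inline event handlers, and decoding routines. This is a minimal example under those generic assumptions, not Microsoft's actual detection logic, and the pattern names are hypothetical.

```python
# A minimal, defender-side sketch (not Microsoft's detection logic):
# static triage of an SVG attachment for traits common to SVG-based
# phishing payloads, such as embedded script and decoding routines.
import re

SUSPICIOUS_PATTERNS = {
    "embedded script": re.compile(r"<script", re.IGNORECASE),
    "event handler": re.compile(r"\bon\w+\s*=", re.IGNORECASE),  # e.g., onload=
    "decoding routine": re.compile(r"atob\s*\(|fromCharCode", re.IGNORECASE),
}

def triage_svg(svg_text: str) -> list[str]:
    """Return the reasons this SVG deserves closer inspection."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(svg_text)]

if __name__ == "__main__":
    sample = '<svg xmlns="http://www.w3.org/2000/svg"><script>/* redirect */</script></svg>'
    print(triage_svg(sample))  # ['embedded script']
```

Note that a campaign like the one described here is specifically designed to evade such surface checks, which is why the behavioral and infrastructure signals discussed below matter more than any single static rule.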
Both defenders and attackers now use AI in cybersecurity. Defenders leverage it to detect and respond to threats, while cybercriminals use it to craft convincing lures, automate obfuscation, and generate realistic malicious code.
Although this campaign primarily targeted US organizations, it reflects a broader trend of AI-enhanced attacks and underscores the need for defenders to anticipate such threats.
Despite the advanced obfuscation, Microsoft Defender for Office 365 successfully detected and blocked the campaign using AI-powered analysis of infrastructure, behavior, and message context.
Sharing this analysis helps the security community recognize similar tactics and demonstrates that AI-enhanced threats, while evolving, remain detectable. By applying these insights and best practices, organizations can better defend against emerging AI-driven phishing attacks.
Although AI can make phishing payloads more complex, it doesn’t change the core behaviors and infrastructure that security systems use to detect threats. AI-generated code remains bound by the same patterns as human-crafted attacks.
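A minimal sketch of that principle, assuming a hypothetical blocklist of known attacker domains: however elaborately the wrapper code is obfuscated, a credential phishing payload must eventually reference attacker infrastructure, so extracting the domains a decoded payload contacts remains a durable signal. The domain and scoring logic here are illustrative placeholders.

```python
# Sketch: regardless of how the wrapper code is obfuscated, the decoded
# payload must reference attacker infrastructure. Extracting contacted
# domains and matching them against known-bad infrastructure is therefore
# a signal that survives AI-driven obfuscation. Blocklist is hypothetical.
import re

URL_RE = re.compile(r"https?://([\w.-]+)", re.IGNORECASE)

def extract_domains(payload: str) -> set[str]:
    """Pull every contacted domain out of a (decoded) payload."""
    return {m.group(1).lower() for m in URL_RE.finditer(payload)}

def score(payload: str, known_bad: set[str]) -> int:
    """Count how many extracted domains match known-bad infrastructure."""
    return len(extract_domains(payload) & known_bad)

decoded = 'window.location = "https://login-portal.example.net/verify";'
print(score(decoded, {"login-portal.example.net"}))  # 1
```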