Hackers Using AI For Everything From the First Email to Covering Their Tracks
Somewhere right now, a North Korean worker is on a video call with a Western company’s hiring manager, interviewing for an IT job. The resume looks real. The headshot looks real. The identity documents look real. All of it was fabricated using AI, and if the hire goes through, that company just handed a foreign intelligence operative long-term, trusted access to its systems.
That’s one example from a Microsoft Threat Intelligence report published March 6. But it’s representative of a much larger shift: threat actors are now using AI across every stage of how attacks are built and run. The report is based on real observed activity, and it covers everything from how attacks are researched and targeted to how stolen data gets processed after a breach.
How Attackers Are Using AI to Develop Campaigns
Microsoft’s core finding is that AI has made attacks cheaper, faster, and available to a wider pool of attackers. Threat actors are using generative AI to write phishing emails that are harder to spot, translate attacks into other languages for broader targeting, debug malware they couldn’t otherwise build, generate fake domains that look legitimate, and sort through stolen data to find what’s actually worth using.
Microsoft observed this across reconnaissance, initial access, malware development, post-compromise activity, and infrastructure setup. In each phase, AI is acting as an accelerant. The more capable the attacker, the more they can squeeze out of it. But critically, less capable attackers are now running operations that previously required real technical skill.
Microsoft specifically identified two North Korean groups, Jasper Sleet and Coral Sleet. Jasper Sleet uses AI face-swapping tools to fabricate identity documents and generate polished headshots, allowing North Korean workers to pass hiring screenings at Western companies.
Attack Vectors Most Teams Aren’t Watching
Beyond phishing and malware, Microsoft flagged some of the less obvious ways AI is being used. Attackers are using it to automate the creation of convincing look-alike domains at scale, something that previously required manual effort.
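Defenders can apply the same scaling logic in reverse. As a minimal sketch (the brand domain, candidate feed, and threshold here are all illustrative, not from the report), a short script can flag newly registered domains that sit within a small edit distance of a legitimate one:

```python
from difflib import SequenceMatcher

LEGIT_DOMAIN = "contoso.com"  # hypothetical brand domain


def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()


def flag_lookalikes(candidates, legit=LEGIT_DOMAIN, threshold=0.8):
    """Return candidate domains that closely resemble the legitimate
    domain without being an exact match."""
    return [d for d in candidates
            if d != legit and similarity(d, legit) >= threshold]


# Illustrative feed of newly registered domains
new_domains = ["contoso.com", "c0ntoso.com", "contoso-support.com",
               "example.org", "contosso.com"]
print(flag_lookalikes(new_domains))
```

Note the limits of this approach: character-swap look-alikes ("c0ntoso.com", "contosso.com") score high, but affix-style domains like "contoso-support.com" fall below a pure edit-distance threshold and need separate keyword rules.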
They’re also using it to configure and troubleshoot their own attack infrastructure. And after a breach, they’re running stolen data through AI to quickly identify credentials, financial information, and anything worth monetizing or using in follow-on attacks.
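The same pattern-based triage attackers run against stolen data can be run by defenders against their own repositories and file shares first, so exposed credentials get found and rotated before a breach. As a simplified sketch (these three patterns are illustrative; real secret scanners ship hundreds of rules):

```python
import re

# Illustrative patterns only; production secret scanners use many more.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "password_field": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}


def scan_text(text):
    """Return (label, match) pairs for anything that looks like a
    credential or identifier worth triaging."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits


dump = "user=alice@example.com password: hunter2 key=AKIAABCDEFGHIJKLMNOP"
for label, match in scan_text(dump):
    print(label, match)
```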
Additionally, Microsoft noted early signs of threat actors testing agentic AI: systems that run tasks with minimal human direction and adapt as they go. This isn’t widespread yet, but attacks that currently require a human hand at every step could eventually run more autonomously.
Your Controls Were Built for a Different Threat
The challenge isn’t just that attacks are more sophisticated; it’s that they’re more convincing and coming from more directions. An AI-generated phishing email reads better than one written in broken English. A fabricated identity with a polished headshot and a clean resume looks like a real hire. The signals defenders have relied on are getting harder to read.
MFA and perimeter controls are still worth having, but they’re not the whole answer. Session token theft bypasses MFA, and a legitimate employee account bypasses most perimeter controls entirely. The organizations that are ahead of this are the ones that actively test whether their defenses work against the way attacks are actually being run, not just whether the controls are switched on.
What to Do Next
The Microsoft report includes specific detection guidance, but the practical starting point is knowing whether your current controls would actually catch any of this. Here are three concrete things to act on:
- Test your email security against AI-generated phishing. The quality gap between a human-written phishing email and an AI-generated one is closing fast. If your email controls were tuned to catch typos and awkward phrasing, they need to be re-evaluated against modern samples.
- Audit your hiring and identity verification process. Jasper Sleet isn’t breaking in through a vulnerability; they’re walking through the front door with a fake resume and an AI-generated headshot. If your onboarding process doesn’t verify identity documents and cross-check candidates against known fraud patterns, you have a gap worth closing.
- Validate your MFA against session token theft. Adversary-in-the-middle attacks intercept session cookies after authentication, which means MFA doesn’t help once the token is stolen. If you haven’t tested this specific attack path, you don’t know whether your accounts are protected or just appear to be.
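One mitigation for stolen session tokens, shown here as a simplified illustration (the session store and fingerprint scheme are hypothetical, not any specific product's mechanism), is to bind each token to the client context it was issued to and reject reuse from elsewhere:

```python
import hashlib
import secrets

# Hypothetical in-memory session store: token -> client fingerprint hash
SESSIONS = {}


def fingerprint(ip: str, user_agent: str) -> str:
    """Hash of client attributes captured at login time."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()


def issue_token(ip, user_agent):
    """Create a session token bound to the issuing client's fingerprint."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = fingerprint(ip, user_agent)
    return token


def validate(token, ip, user_agent):
    """Reject tokens replayed from a different client context."""
    expected = SESSIONS.get(token)
    return expected is not None and expected == fingerprint(ip, user_agent)


# Token issued to the real user...
t = issue_token("203.0.113.5", "Mozilla/5.0")
# ...works from the same context, fails when replayed elsewhere.
print(validate(t, "203.0.113.5", "Mozilla/5.0"))   # True
print(validate(t, "198.51.100.9", "curl/8.0"))     # False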
Most organizations can’t confidently answer all three, and that’s not unusual; it’s a visibility problem. The threats described in this report don’t expose themselves through a single scan or a one-time audit. They show up in the gaps between tests, in the access that looked legitimate, in the credentials nobody flagged.
That’s exactly what continuous exposure management is designed to address: not a point-in-time picture of your risk, but an ongoing framework that keeps testing as the threat keeps changing.