Victim calls latest Gmail threat ‘the most sophisticated phishing attack I’ve ever seen’
Cybercriminals are now using AI-driven phone calls that mimic human voices to deceive users into revealing their credentials, News.Az reports, citing Forbes.
Imagine getting a call from a number showing a Google caller ID, with an American support technician warning you that someone has compromised your Google account, which has now been temporarily blocked.
Imagine that support person then sending an email to your Gmail account to confirm this, at your request, from a genuine Google domain.
Imagine querying the phone number and asking if you could call them back on it to be sure it was genuine.
Imagine them agreeing, after explaining that the number was listed on google.com and warning that there might be a wait on hold. You checked, and it was indeed listed, so you didn’t make that call.
Imagine being sent a code from Google to reset your account and take back control, and almost clicking on it.
Luckily, by this stage Zach Latta, founder of Hack Club and the person who nearly fell victim, had sussed it was an AI-driven attack, albeit a very clever one indeed.
If this sounds familiar, that’s because it is: I first warned about such AI-powered attacks against Gmail users on Oct. 11 in a story that went viral.
The methodology is almost identical, and the warning to all 2.5 billion Gmail users remains the same: be aware of the threat and don’t let your guard down for even a minute.
“Cybercriminals are constantly developing new tactics, techniques, and procedures to exploit vulnerabilities and bypass security controls, and companies must be able to quickly adapt and respond to these threats,” Spencer Starkey, a vice-president at SonicWall, said.
“This requires a proactive and flexible approach to cybersecurity, which includes regular security assessments, threat intelligence, vulnerability management, and incident response planning.”
All the usual phishing mitigation advice goes out the window — well, a lot of it, at least — when talking about these super-sophisticated AI attacks.
“She sounded like a real engineer, the connection was super clear, and she had an American accent,” Latta said.
This echoes my story back in October, in which the attacker was described as “super realistic,” although in that case there was a pre-attack phase: notifications of compromise were sent seven days earlier to prime the target for the call.
The original target is a security consultant, which likely saved them from falling prey to the AI attack, and the latest would-be victim is the founder of a hacking club. You may not have quite the same level of technical experience as these two, who both very nearly succumbed, so how can you stay safe?
“We’ve suspended the account behind this scam,” a Google spokesperson said. “We have not seen evidence that this is a wide-scale tactic, but we are hardening our defenses against abusers leveraging g.co references at sign-up to further protect users.”