Email hacking warning for 2025: it just got serious
Forget everything you thought you knew about staying safe online. No more telltale signs, no more laughable pretenses, no more ridiculous promises. Imagine that the next email apparently from your friend, family member or colleague is actually fake, but so good that you simply can't tell.
This is the stuff of security nightmares, and it is already happening: AI will shape the new threat landscape. "AI gives cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they're from trusted sources," McAfee warned ahead of 2025. "These types of attacks are expected to grow in sophistication and frequency." And for Gmail, Outlook, Apple Mail and other leading platforms, the defenses are not yet in place to stop this.
And so, with 2025 just a few days old, here is the first news of the year reporting exactly that. According to the Financial Times, "an influx of hyper-personalized phishing scams generated by artificial intelligence bots" is on the rise. These attacks are already a security nightmare and will only get worse. The paper reports that major corporations, including eBay, now warn of "the rise of fraudulent emails containing personal information probably obtained through AI analysis of online profiles."
Check Point warned this would happen in 2025: "Cybercriminals are expected to leverage artificial intelligence to develop highly targeted phishing campaigns and to adapt malware in real time to evade traditional detection mechanisms. Security teams will rely on AI-powered tools... But adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns."
"AI bots can quickly ingest large amounts of data about the tone and style of a company or individual and replicate those features to craft a convincing scam," the Financial Times explains of these latest attacks. "They can also scrape a victim's online presence and social media activity to determine which topics they may be most likely to respond to, helping hackers generate tailor-made phishing scams."
McAfee's warning highlights improved phishing, with lures whose presentation has improved just as much. So when you "receive an email that looks identical to one from your bank, asking you to verify your account details," make sure you have the usual security hygiene factors in place: 2FA, strong and unique passwords (or better, passkeys), and never clicking on links.
But new phishing lures, especially in the business world, could simply seek information, or trusted access elsewhere within a company, or a foothold for larger and more complex frauds to divert funds or push an executive into giving their finance team the nod to OK a transaction. Check Point says rapid AI progress now gives attackers "the ability to write a perfect phishing email."
eBay cybercrime security researcher Nadezda Demidova told the Financial Times that "the availability of generative AI tools lowers the entry threshold for advanced cybercrime... We have witnessed growth in the volume of all kinds of cyberattacks," describing the latest scams as "polished and closely targeted."
ESET's Jake Moore agrees. "Social engineering," he says, "has an impressive hold over people due to human interaction, but now, as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people start thinking about reducing what they post online."
It is the fear of such attacks that led the FBI to issue a specific advisory last month: "Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud... Synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes, such as fraud and extortion."
"Ultimately," Moore told me, "whether or not AI has enhanced an attack, we must remind people about these increasingly sophisticated attacks and how to think twice before transferring money or divulging personal information when asked, however believable the request may seem."
"Phishing scams generated using AI may also be more likely to bypass companies' email filters and cybersecurity training," the Financial Times said. And with human error still at the heart of almost every compromise, a lure this convincing at the first step is a security nightmare. Follow-up emails are then likely to be taken as genuine, and it is unlikely anyone will go back to verify the original source. The circle of trust has been broken.
"AI is transforming how the Gmail team protects billions of inboxes," Google said, with new "ground-breaking AI models that have significantly strengthened Gmail's cyber-defenses (to) spot patterns and respond quickly." But AI can break those very patterns, making each email unique and specifically avoiding the step-and-repeat telltales of the past, at least for the most sophisticated campaigns.
And this will only get worse. "AI has increased the power and simplicity with which cybercriminals can scale up their attacks," Moore warns. "Current phishing emails are fed into algorithms and analyzed, but when such emails look and feel authentic, they slip under both human and technological radars."