At a March 2024 national association gathering of government and industry IT leaders, an old security problem resurfaced as a very current threat: cybersecurity awareness training for end users is back at the top of government cybersecurity priorities, and we have seen this movie before. So where do we stand?
A new generation of AI-generated phishing attacks, arriving via email, text, voice message and even video, is targeting government organizations in unprecedented ways. These smarter cyber attacks pose new challenges for defenders because they arrive without the typos, formatting mistakes and other errors seen in traditional phishing and spear-phishing campaigns.
Even more frightening are AI-generated deepfakes that can imitate a person's voice, face and gestures. These new cyber attacks can deliver disinformation and fraudulent messages at a scale and sophistication not seen before.
In simple terms, AI-generated fraud is harder than ever to detect and stop. Recent 2024 examples include fake messages impersonating President Biden, Florida Gov. Ron DeSantis and private-sector CEOs. Beyond elections and their political impacts, a deepfake video of a multinational company's CFO recently tricked staff into making wire transfers, resulting in a $26 million loss.
So how can organizations address these new threats?
In recent years, there has been an industry push to move beyond traditional security awareness training for end users toward a more holistic set of measures to combat cyber attacks that target people.
In simple terms: effective security awareness training genuinely changes the security culture. People become engaged and start asking questions, they understand and report risks, and they realize that security is not just a work issue; it is also about their personal and family safety.
The term that many are now adopting is "human risk management" (HRM). The research and advisory firm Forrester describes HRM as "solutions that manage and reduce cybersecurity risk posed by and to humans through: detecting and measuring human security behaviors and quantifying human risk; initiating policy and training interventions based on human risk; educating and enabling the workforce to protect themselves and their organization against cyber attacks; building a positive security culture."
So what does this mean for tackling AI-generated deepfakes right now?
First, we must (re)train employees to detect this new generation of sophisticated phishing attacks. They should know how to authenticate the source and content of what they receive. That includes knowing what to look for, such as:
- Inconsistencies in audio or video quality
- Lip movements or voice synchronization that don't match
- Unnatural facial movements
- Unusual behavior or speech patterns
- Source verification
- Improved detection skills
- Use of watermarks for images and videos
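The checklist above can be turned into a simple triage aid. The sketch below is a hypothetical helper, not a real product: the field names and escalation thresholds are illustrative assumptions about how an organization might structure a deepfake review.

```python
# Minimal sketch: encode the deepfake red-flag checklist as a structured
# form and count flags to suggest an escalation path. Field names and
# thresholds are illustrative assumptions, not an established standard.
from dataclasses import dataclass, fields

@dataclass
class DeepfakeChecklist:
    audio_video_inconsistencies: bool = False
    lip_sync_mismatch: bool = False
    unnatural_facial_movements: bool = False
    unusual_behavior_or_speech: bool = False
    source_unverified: bool = False
    watermark_missing_or_invalid: bool = False

def triage(check: DeepfakeChecklist) -> str:
    """Count red flags and return a simple escalation recommendation."""
    flags = sum(bool(getattr(check, f.name)) for f in fields(check))
    if flags == 0:
        return "no obvious red flags"
    if flags <= 2:
        return "verify out of band before acting"
    return "escalate to security team"

# Example: a video with lip-sync issues and no verified source.
result = triage(DeepfakeChecklist(lip_sync_mismatch=True, source_unverified=True))
```

Even a crude count like this gives employees a concrete next step instead of a vague instruction to "be suspicious."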
Second, provide tools, processes and techniques to verify the authenticity of messages. If and when such tools are not available, establish a process that ensures employees feel empowered to question the legitimacy of messages, through a verification process endorsed by management. Also report deepfake content: if you encounter a deepfake involving you or someone you know, report it to the platform hosting the content.
Third, consider new enterprise technology tools that use AI to detect fraudulent messages. That's right: you may have to fight fire with fire, using the next generation of cyber tools to stop these AI-generated messages, much as email security tools detect and disable traditional phishing links and quarantine spam messages. Some new tools let staff check messages and images for fraud when this cannot be done automatically for all incoming email.
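To make the "fight fire with fire" idea concrete, here is a toy heuristic scanner that flags common fraud signals in an incoming message. The keyword lists, domain check and function name are illustrative assumptions; real email security products use far richer models.

```python
# Toy sketch of automated fraud-signal detection for an incoming message.
# Keyword lists and the trusted-domain check are illustrative assumptions,
# not the behavior of any real email security product.
URGENCY = ("urgent", "immediately", "right away", "confidential")
PAYMENT = ("wire transfer", "bank transfer", "gift card", "payment")

def fraud_signals(sender_domain: str, trusted_domains: set, body: str) -> list:
    """Return a list of human-readable red flags found in the message."""
    text = body.lower()
    signals = []
    if sender_domain.lower() not in trusted_domains:
        signals.append("sender domain not on trusted list")
    if any(word in text for word in URGENCY):
        signals.append("urgent or pressuring language")
    if any(word in text for word in PAYMENT):
        signals.append("payment or transfer request")
    return signals

msg = "URGENT: please process this wire transfer immediately. Keep it confidential."
flags = fraud_signals("cfo-mail.example", {"agency.example"}, msg)
# All three signals fire: untrusted domain, urgency, payment request.
```

A scanner like this would quarantine or annotate the message rather than block it outright, leaving the final judgment, and the out-of-band verification step, to a human.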
This new generation of cyber attacks, using deepfakes to deceive humans, essentially undermines trust in everything digital. Indeed, digital trust is becoming harder and harder for governments to earn, and the trends are not encouraging, which demands immediate action.
As Albert Einstein once said: "Whoever is careless with the truth in small matters cannot be trusted with important matters."
This story originally appeared in the May/June 2024 issue of Government Technology magazine. Click here to read the full digital edition online.