Hundreds of cybersecurity professionals, analysts and decision-makers came together earlier this month for ESET World 2024, a conference that showcased the company's vision and technological advances and featured a number of insightful talks about the latest trends in cybersecurity and beyond.
The topics ran the gamut, but it is safe to say that the subjects that resonated the most included ESET's threat research and perspectives on artificial intelligence (AI). Let us now briefly look at some sessions that covered the topic that is on everyone's lips these days – AI.
Back to basics
First off, ESET Chief Technology Officer (CTO) Juraj Malcho set the stage, offering his view of the main challenges and opportunities afforded by AI. He would not stop there, however, and went on to seek answers to some of the fundamental questions surrounding AI, including "Is it as revolutionary as it is claimed to be?".

The current iterations of AI technology come mostly in the form of large language models (LLMs) and various digital assistants that make the technology feel very real. However, they are still rather limited, and we must thoroughly define how we want to use the technology in order to empower our own processes, including its uses in cybersecurity.
For example, AI can simplify cyber defense by deconstructing complex attacks and reducing resource demands. That way, it enhances the security capabilities of short-staffed business IT operations.
Demystifying AI
Juraj Jánošík, Director of Artificial Intelligence at ESET, and Filip Mazán, Senior Manager of Advanced Threat Detection and AI at ESET, then presented a comprehensive view into the world of AI and machine learning, exploring their roots and distinguishing characteristics.

Mr. Mazán demonstrated how they are fundamentally based on human biology, wherein AI networks imitate some aspects of how biological neurons work in order to create artificial neural networks with varying parameters. The more complex the network, the greater its predictive power, leading to the advances seen in digital assistants such as Alexa and LLMs such as ChatGPT or Claude.
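To make the "varying parameters" idea concrete, here is a minimal sketch in Python of a single artificial neuron – a handful of adjustable weights and a bias feeding an activation function. This is purely illustrative and does not represent any ESET model; stacking many such units into layers is what gives larger networks their predictive power.

```python
import numpy as np

# A single artificial "neuron": weighted inputs passed through an activation
# function, loosely mimicking how a biological neuron fires once its
# stimulation crosses a threshold.
def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # Weighted sum of the inputs plus a bias – these are the neuron's
    # adjustable parameters, tuned during training in real models
    z = np.dot(inputs, weights) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Toy example with three input signals and arbitrary illustrative parameters
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.4, -0.2, 0.7])
print(neuron(x, w, bias=0.1))  # prints a value between 0 and 1
```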
Later, Mr. Mazán pointed out that as AI models become more complex, their usefulness can diminish. As we approach recreating the human brain, the increasing number of parameters necessitates thorough refinement. This process requires human oversight to constantly monitor and fine-tune the model's operations.

Indeed, leaner models are sometimes better. Mr. Mazán described how ESET's strict use of internal AI capabilities results in faster and more precise threat detection, meeting the need for rapid and accurate responses to all manner of threats.
He also echoed Mr. Malcho and highlighted some of the limitations besetting large language models (LLMs). These models work on the basis of prediction and involve connecting meanings, which can easily get muddled and result in hallucinations. In other words, the usefulness of these models only goes so far.
Other limitations of current AI technology
Additionally, Mr. Jánošík went on to address other limitations of contemporary AI:
- Explainability: Current models consist of complex parameters, making their decision-making processes difficult to understand. Unlike the human brain, which operates on causal explanations, these models function through statistical correlations, which are not intuitive to humans.
- Transparency: Top models are proprietary (walled gardens), with no visibility into their inner workings. This lack of transparency means there is no accountability for how these models are configured or for the results they produce.
- Hallucinations: Generative AI chatbots often generate plausible but incorrect information. These models can exude great confidence while serving up false information, leading to mishaps and even legal issues, such as after Air Canada's chatbot presented false information about a discount to a passenger.
Fortunately, the limitations also apply to the misuse of AI technology for malicious activities. While chatbots can easily formulate plausible-sounding messages to aid spearphishing or business email compromise attacks, they are not that well-equipped to create dangerous malware. This limitation is due to their propensity for "hallucinations" – producing plausible but incorrect or illogical outputs – and their underlying weaknesses in generating logically connected and functional code. As a result, creating new and effective malware typically requires the intervention of an actual expert to correct and refine the code, making the process more difficult than some might assume.
Finally, as Mr. Jánošík pointed out, AI is just another tool that we must understand and use in a responsible manner.
The rise of the clones
In the next session, Jake Moore, Global Cybersecurity Advisor at ESET, gave a taste of what is currently possible with the right tools, from cloning RFID cards and hacking CCTV footage to creating convincing deepfakes – and how this can put corporate data and finances at risk.
Among other things, he showed how easy it is to compromise a company's premises by using a well-known hacking gadget to copy employee entry cards, or to hack (with permission!) a social media account belonging to the company's CEO. He then used a tool to clone the CEO's likeness, both face and voice, to create a convincing deepfake video that he then posted on one of the CEO's social media accounts.

The video – which had the would-be CEO announce a "challenge" to bicycle from the UK to Australia, and which racked up more than 5,000 views – was so convincing that people began to offer sponsorships. Indeed, even the company's CFO was fooled by the video, asking the CEO about his plans. Only a single person wasn't fooled – the CEO's 14-year-old daughter.
In just a few steps, Mr. Moore demonstrated the danger posed by the rapid spread of deepfakes. Indeed, seeing is no longer believing – businesses and people alike need to scrutinize everything they come across online. And with the arrival of AI tools such as Sora that can create video based on a few lines of input, dangerous times could be close at hand.
Final takeaways
The final session dedicated to the nature of AI was a panel featuring Mr. Jánošík, Mr. Mazán, and Mr. Moore, moderated by Ms. Pavlova. It kicked off with a question about the current state of AI, where the panelists agreed that the latest models are awash with many parameters and need further refinement.

The discussion then shifted to the immediate dangers and concerns for businesses. Mr. Moore stressed that a significant number of people are unaware of AI's capabilities, which bad actors can exploit. While the panelists agreed that sophisticated AI-generated malware is not currently an imminent threat, other dangers, such as improved phishing email generation and deepfakes created using public models, are very real.
Additionally, as highlighted by Mr. Jánošík, the greatest danger lies in the data privacy aspect of AI, given the amount of data these models receive from users. In the EU, for example, the GDPR and the AI Act have set out certain frameworks for data protection, but that is not enough, since these are not global acts.

Mr. Moore added that companies should make sure their data stays in-house. Enterprise versions of generative models can fit the bill, avoiding the "need" to rely on (free) versions that store data on external servers, potentially putting sensitive corporate data at risk.
To address data privacy concerns, Mr. Mazán suggested that companies should start from the bottom up, tapping into open-source models that can work for simpler use cases, such as generating summaries; see the sketch below for what this might look like. Only if these prove inadequate should companies move on to cloud-powered solutions from third parties.
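As a minimal sketch of that bottom-up approach – assuming the open-source Hugging Face transformers library and an illustrative model choice, neither of which was named in the session – a locally run summarization pipeline might look like this:

```python
# Minimal sketch: run a small open-source summarization model locally so
# documents never leave company infrastructure. Assumes the Hugging Face
# `transformers` library; the model shown is just one illustrative choice.
from transformers import pipeline

# Model weights are downloaded once, then inference runs on local hardware
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

internal_report = (
    "The quarterly security review found a rise in phishing attempts targeting "
    "finance staff. Multi-factor authentication blocked most account-takeover "
    "attempts, and awareness training is scheduled for next month."
)

result = summarizer(internal_report, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Because inference happens entirely on local hardware after a one-time model download, no internal documents are sent to an external provider.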
Mr. Jánošík concluded by saying that companies often overlook the pitfalls of using AI – guidelines for the secure use of AI are indeed needed, but even common sense goes a long way toward keeping their data safe. As encapsulated by Mr. Moore in an answer concerning how AI should be regulated, there is a pressing need to raise awareness about AI's potential, including its potential for harm. Encouraging critical thinking is crucial for ensuring safety in our increasingly AI-driven world.