Most industry analysts expect organizations to accelerate efforts to harness generative artificial intelligence (GenAI) and large language models (LLMs) in a variety of use cases over the next year.
Typical examples include customer support, fraud detection, content creation, data analysis, knowledge management and, increasingly, software development. In a recent survey of 1,700 IT professionals conducted by Cenient on behalf of OutSystems, 81% of respondents described their organizations as currently using GenAI to assist with coding and software development. Almost three-quarters (74%) plan to build 10 or more applications over the next 12 months using AI-powered development approaches.
While these use cases promise to deliver significant efficiency and productivity gains for organizations, they also introduce new privacy, governance, and security risks. Here are six AI-related security issues that industry experts say IT and security leaders should pay attention to over the next 12 months.
AI coding assistants will go mainstream, and so will the risks
The use of AI-based coding assistants, such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex, will move from experimental and early-adopter status into the mainstream, particularly among startup organizations. The touted advantages of these tools include improved developer productivity, automation of repetitive tasks, fewer errors, and faster development times. However, as with all emerging technologies, there are drawbacks as well. From a security standpoint, these include auto-generated coding responses that contain vulnerable code, data exposure, and the propagation of insecure coding practices.
“While AI-based code assistants undoubtedly offer strong benefits around auto-completion, code generation, reuse, and making coding more accessible to a non-engineering audience, it is not without risks,” explains Derek Holt, CEO of Digital.ai. Chief among them is the fact that AI models are only as good as the code they are trained on, says Holt. Organizations adopting coding assistants for development will need to ensure that the negative impacts are limited and that productivity gains deliver the expected benefits, he says.
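To make Holt's point concrete, consider a small, hypothetical example. The sketch below contrasts the kind of string-built SQL query an assistant trained on older, insecure code might suggest with a parameterized alternative; the function and table names are illustrative only and not drawn from any specific tool.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern often seen in older training code: string formatting builds the
    # query, so input such as "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # Safer equivalent: the driver binds the value, so it is treated as data,
    # not as SQL. This is the pattern security scanning should steer code toward.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```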
AI will accelerate adoption of XOps practices
As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps (the practice of managing and monitoring AI models in production) converge into a broader, all-encompassing XOps management approach, says Holt. The push toward AI-enabled software is increasingly blurring the lines between traditional declarative applications, which follow predefined rules to achieve specific outcomes, and LLM and GenAI applications, which dynamically generate responses based on patterns learned from training data sets, Holt explains. The trend will put new pressure on operations, support, and QA teams and will drive adoption of XOps, he notes.
“XOps is an emerging term that describes the DevOps requirements for creating applications that use internal or open source models trained on proprietary business data,” he says. “This new approach recognizes that when delivering mobile or web applications that consume AI models, traditional DevSecOps processes must be integrated and synchronized with those of DataOps, MLOps, and ModelOps in an integrated, end-to-end life cycle.” Holt sees this emerging set of best practices becoming hyper-critical for companies looking to ensure quality, secure, and supportable applications.
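What an integrated end-to-end life cycle might look like in practice is easiest to picture as a gated pipeline. The following Python sketch is a loose illustration under assumed stage names (run_sast_scan, validate_training_data, and evaluate_model are stubs, not real tools): DevSecOps, DataOps, and ModelOps checks run as one sequence, and any failing stage blocks the release.

```python
def run_sast_scan(artifact: str) -> bool:
    # Stub: a real pipeline would invoke a static analysis / SAST tool here.
    return True

def validate_training_data(dataset: str) -> bool:
    # Stub: a real pipeline would run schema, drift, and provenance checks here.
    return True

def evaluate_model(model: str, baseline_accuracy: float = 0.9) -> bool:
    # Stub: a real pipeline would score the model against a held-out set here.
    return True

def xops_pipeline(artifact: str, dataset: str, model: str) -> str:
    """Run DevSecOps, DataOps, and ModelOps checks as one gated sequence;
    any failing stage stops the release."""
    stages = [
        ("security scan", lambda: run_sast_scan(artifact)),
        ("data validation", lambda: validate_training_data(dataset)),
        ("model evaluation", lambda: evaluate_model(model)),
    ]
    for name, check in stages:
        if not check():
            return f"blocked at {name}"
    return "released"

print(xops_pipeline("app.jar", "train.csv", "model-v2"))  # -> released
```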
Shadow AI: A bigger security headache
The ready availability of a large and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies in many organizations and created a new set of challenges for already overstretched security teams. One example is the rapid, and often unmanaged, proliferation of AI chatbot use among workers for various purposes. The trend has heightened concerns about the accidental exposure of sensitive data at many organizations.
Security teams can expect to see a spike in the unsanctioned use of such tools in the coming year, predicts Nicole Carignan, vice president of cyber AI at Darktrace. “We will see an explosion of tools that use AI and generative AI within enterprises and on devices used by employees,” leading to a rise in shadow AI, Carignan says. “If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations, such as the EU AI Act, start to take effect,” she says. Carignan expects chief information officers (CIOs) and chief information security officers (CISOs) to come under increasing pressure to implement capabilities for detecting, tracking, and rooting out the unsanctioned use of AI tools in their environments.
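One practical starting point for the kind of detection and tracking Carignan describes is the egress telemetry most organizations already collect. The sketch below is a minimal illustration rather than a production control: it counts requests to a small, hypothetical watchlist of GenAI domains in a CSV proxy log whose 'user' and 'destination_host' columns are assumed.

```python
import csv
from collections import Counter

# Hypothetical watchlist of GenAI endpoints; a real deployment would maintain
# a much larger, regularly updated list.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_shadow_ai(proxy_log_path: str) -> Counter:
    """Count GenAI requests per user from a CSV proxy log with
    'user' and 'destination_host' columns (assumed layout)."""
    hits = Counter()
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("destination_host", "").lower() in GENAI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in summarize_shadow_ai("proxy.csv").most_common(10):
        print(f"{user}: {count} GenAI requests")
```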
AI will augment, not replace, human skills
AI excels at processing massive volumes of threat data and identifying patterns in that data. But for at least some time to come, it remains at best an augmentation tool, capable of handling repetitive tasks and enabling the automation of basic threat detection functions. The most successful security programs over the next year will continue to be those that combine AI's processing power with human creativity, according to Stephen Kowski, field CTO at SlashNext Email Security+.
Many organizations will continue to require human expertise to identify and respond to real-world attacks that evolve beyond the historical patterns AI systems are trained on. Effective threat hunting will continue to depend on human intuition and skill to spot subtle anomalies and connect seemingly unrelated indicators, he says. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.”
AI's ability to rapidly analyze large data sets will heighten the need for cybersecurity workers to sharpen their data analytics skills, adds Julian Davies, vice president of advanced services at Bugcrowd. “The ability to interpret AI-generated insights will be essential for detecting anomalies, predicting threats, and enhancing overall security measures.” Prompt engineering skills will also be increasingly useful for organizations seeking to derive maximum value from their AI investments, he adds.
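The data analysis skills Davies refers to often come down to straightforward statistical reasoning over security telemetry. As a toy illustration (not a Bugcrowd recommendation), the sketch below flags hours whose login counts deviate sharply from the mean using a z-score; the threshold and sample data are invented.

```python
from statistics import mean, stdev

def flag_anomalous_hours(hourly_logins: list[int], threshold: float = 2.0) -> list[int]:
    """Return indexes of hours whose login volume deviates from the mean
    by more than `threshold` standard deviations."""
    mu, sigma = mean(hourly_logins), stdev(hourly_logins)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(hourly_logins)
            if abs(count - mu) / sigma > threshold]

# Example: an overnight spike stands out against otherwise steady traffic.
counts = [120, 130, 125, 118, 122, 900, 127, 121]
print(flag_anomalous_hours(counts))  # -> [5]
```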
Attackers will leverage AI to exploit open source vulns
Venky Raju, field CTO at ColorTokens, expects threat actors to leverage AI tools to exploit vulnerabilities in open source software and automatically generate exploit code. “Even closed-source software is not immune, as AI-based fuzzing tools can identify vulnerabilities without access to the original source code. Such zero-day attacks are a significant concern for the cybersecurity community,” Raju says.
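Raju's point about fuzzing is easier to picture with a toy harness. The sketch below randomly mutates a seed input and feeds it to a deliberately fragile parser until a crash surfaces, all without inspecting the target's internals; it is a bare-bones illustration and nothing like the AI-guided fuzzers he describes.

```python
import random

def fragile_parser(data: bytes) -> int:
    """Stand-in target with a planted bug: crashes on an oversized length field."""
    if len(data) < 2:
        raise ValueError("too short")
    declared_len = data[0]
    payload = data[1:]
    return payload[declared_len - 1]  # IndexError when declared_len > len(payload)

def mutate(seed: bytes) -> bytes:
    # Flip a few random bytes in the seed to create a new test case.
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> None:
    for i in range(iterations):
        candidate = mutate(seed)
        try:
            fragile_parser(candidate)
        except IndexError:
            print(f"crash after {i} iterations with input {candidate!r}")
            return

if __name__ == "__main__":
    fuzz(b"\x04ABCD")
```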
In a report earlier this year, CrowdStrike pointed to AI-enabled ransomware as an example of how attackers are leveraging AI to hone their malicious capabilities. Attackers could also use AI to research targets, identify system vulnerabilities, encrypt data, and easily adapt and modify ransomware to evade endpoint detection and remediation mechanisms.
Verification, human oversight will be critical
Organizations will continue to find it hard to fully and implicitly trust AI to do the right thing. A recent Qlik survey of 4,200 C-suite executives and AI decision-makers showed that most respondents overwhelmingly favor the use of AI for a variety of purposes. At the same time, 37% described their senior executives as lacking trust in AI, with 42% of mid-level managers expressing the same sentiment. Some 21% said their customers are also wary of AI.
“Trust in AI will remain a complex balance of benefits versus risks, as current research shows that eliminating bias and hallucinations may be counterproductive and impossible,” says SlashNext's Kowski. “While industry agreements provide some ethical frameworks, the subjective nature of ethics means that different organizations and cultures will continue to interpret and implement AI guidelines differently. The practical approach is to implement robust verification systems and maintain human oversight rather than seeking perfect trustworthiness,” he says.
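The "robust verification plus human oversight" approach Kowski describes can be sketched as simple routing logic: automated checks pass high-confidence, unflagged AI outputs through, and everything else lands in a human review queue. The threshold and flag names below are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass, field

@dataclass
class AIDecision:
    summary: str
    confidence: float                       # model-reported confidence, 0.0 to 1.0
    policy_flags: list[str] = field(default_factory=list)

def route_decision(decision: AIDecision, review_queue: list[AIDecision],
                   min_confidence: float = 0.85) -> str:
    """Auto-approve only high-confidence, unflagged outputs; everything else
    goes to a human reviewer rather than straight into production."""
    if decision.policy_flags or decision.confidence < min_confidence:
        review_queue.append(decision)
        return "needs_human_review"
    return "auto_approved"

queue: list[AIDecision] = []
print(route_decision(AIDecision("benign log summary", 0.97), queue))      # auto_approved
print(route_decision(AIDecision("block customer account", 0.55), queue))  # needs_human_review
```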
Bugcrowd's Davies says there is already a growing need for professionals who can handle the ethical implications of AI. Their role is to ensure privacy, prevent bias, and maintain transparency in AI-driven decisions. “The ability to test for AI's unique security and safety use cases is becoming critical,” he says.