Developers have embraced vibe coding and AI-assisted programming, with some trusting AI coding tools to handle everything while others rely on them only partially. Surprisingly, a quarter of YC founders admit that 95% of their codebase is AI-generated.
However, coding with AI has major drawbacks. Vibe debugging is part of the problem, but it does not end there: AI-generated code also introduces security issues.
AI coding can be cool, but you need an understanding of security
Recently, an X user used Cursor to build a SaaS application and stressed that AI was not just an assistant but the builder. A few days later, he shared that someone was trying to find security vulnerabilities in his application. The next day, he took to X and said he was under attack.
He also acknowledged that resolving the issue took a long time because he lacked the necessary technical knowledge.


Several developers stepped in to help, suggesting what might have gone wrong and offering potential fixes. The hackers targeting the application took interesting approaches to send a message to its maker. For example, the creator shared a screenshot displaying a domain that read "please_dont_vibe_code.ai".


Admittedly, the hackers made their views known while trying to exploit the application's vulnerabilities. Computer scientist Santiago Valdarrama took to X and said: "Vibe coding is great, but the code these models generate is full of security holes and can be hacked easily."
AI-generated code adds security risks
Amlan Panigrahi, a GenAI engineer at Deloitte, told AIM: "This can be a security problem for organisations working on production environments. However, for a prototype with generic/open-source data-set exposure, this is not a problem."
He also indicated that developers should weigh the security implications for their organisation's business if they intend to use copilot-style assistants. As an alternative, they could customise and expose API endpoints to LLMs hosted on trusted or self-hosted infrastructure to power these coding assistants.
Gulati, a senior DevSecOps engineer at Fraud.net, spoke to AIM on the subject. "AI coding has significant security challenges. Generative AI, at its core, is an advanced sentence-completion system, which makes it susceptible to injection attacks that could introduce sensitive details or vulnerable code into a system," he said.
He also said AI models often rely on outdated third-party libraries, because they are trained on historical data rather than continuously adapting to the latest security fixes and best practices. "This can lead to the inadvertent use of outdated or insecure code, further amplifying risks."
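To illustrate the kind of flaw described here (a hypothetical example, not taken from Gulati's comments), AI assistants frequently emit string-interpolated SQL, which is trivially injectable, whereas a parameterized query treats user input strictly as data:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # UNSAFE: the pattern assistants often generate. String interpolation
    # lets attacker-controlled input rewrite the query itself.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query; the driver escapes the input as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Toy database for the demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"  # classic injection payload
print(len(find_user_vulnerable(conn, payload)))  # dumps every row: 2
print(len(find_user_safe(conn, payload)))        # matches nothing: 0
```

The vulnerable version returns the entire table for this payload; the parameterized version returns no rows, since no user is literally named `' OR '1'='1`.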
Raising concerns about the whole "vibe coding" trend, Gulati noted that relying on AI-generated code without understanding how it works can lead to security vulnerabilities, misconfigurations or compliance problems, because developers may be unable to assess or secure the generated code before deployment.
The same was corroborated by a report from application security platform Apiiro. The report said AI assistants have become popular and code production has surged over the past two years. However, that growth came with risks, such as APIs exposing sensitive data.
It also said repositories containing personally identifiable information (PII) and payment data have grown 3x since Q2 2023. In addition, there has been a 10x increase in APIs with missing authorization and input validation in the past year.
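The two API flaws the report counts, missing authorization and missing input validation, can be sketched in a few lines. This is a minimal, hypothetical example (the names `get_invoice_*`, the token store and the invoice data are illustrative, not from the Apiiro report):

```python
SESSION_TOKENS = {"tok-alice": "alice"}  # toy session store
INVOICES = {
    1: {"owner": "alice", "amount": 120},
    2: {"owner": "bob", "amount": 75},
}

def get_invoice_unsafe(invoice_id):
    # No input validation, no auth check: any caller can read any invoice.
    return INVOICES[invoice_id]

def get_invoice_safe(token, invoice_id):
    # 1. Validate input before touching data.
    if not isinstance(invoice_id, int) or invoice_id not in INVOICES:
        return {"error": "not found"}
    # 2. Authenticate the caller.
    user = SESSION_TOKENS.get(token)
    if user is None:
        return {"error": "unauthenticated"}
    # 3. Authorize: callers may only read their own invoices.
    if INVOICES[invoice_id]["owner"] != user:
        return {"error": "forbidden"}
    return INVOICES[invoice_id]

print(get_invoice_unsafe(2))             # leaks bob's data to any caller
print(get_invoice_safe("tok-alice", 2))  # {'error': 'forbidden'}
print(get_invoice_safe("tok-alice", 1))  # alice's own invoice
```

The unsafe handler is the shape that shows up when an endpoint is generated from a one-line prompt; the safe version shows the validate-authenticate-authorize sequence the report found missing.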
A recent research report compared human-written and LLM-generated code and said: "It is essential to focus on creating evaluation and mitigation methods for vulnerabilities, because LLMs have the power to propagate dangerous coding practices if they are trained on data with coding vulnerabilities." It also said LLMs can inadvertently introduce security flaws.
The research concluded that security vulnerabilities exist in both human-written and LLM-generated code, although the flaws in AI-generated code proved to be more severe.
Another research report, by the Center for Security and Emerging Technology (CSET), found that AI-generated code from five LLMs contained bugs that are often impactful and could potentially lead to malicious exploitation.
A user on X mentioned that his friend's application was hacked while being built with Cursor and Bolt.
AI coding assistants also need work
Several security developers and researchers have pointed out that certain features of AI coding assistants, such as Cursor, could pose a security risk. A developer mentioned on the Cursor forum that internal business secrets may have been leaked to external servers, including Cursor's and Claude's, while using the assistant.
Features such as autocomplete and agent interactions access and use the contents of .env files, even when those files are explicitly excluded in .gitignore and .cursorignore. Some users were able to reproduce the problem described on the forum and confirmed the complaint.
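For context, the exclusions the forum users expected to be honored look something like this (a minimal sketch; the filenames are the usual convention, and the reported issue is precisely that the assistant read .env despite these entries):

```
# .gitignore -- keep secrets out of version control
.env
.env.*

# .cursorignore -- ask the assistant to skip these files as well
.env
.env.*
```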
A user on X mentioned that, if one is not careful, Cursor AI can delete folders anywhere, modify operating system settings, steal crypto wallets and overwrite important configuration files.
Therefore, before diving into AI code generation, it seems necessary to have an understanding of security, whether you are vibe coding with a relaxed approach or just using a coding assistant for help.