

On Wednesday, the New York State Department of Financial Services, or NYDFS, issued guidance laying out four cybersecurity risks related to artificial intelligence and six measures financial services companies can take to mitigate them.
The four risks highlighted by the department cover both the ways threat actors can use AI against companies (AI-enabled social engineering and AI-enhanced cybersecurity attacks) and the threats posed by companies' own use of and reliance on AI (exposure or theft of nonpublic information and increased vulnerabilities stemming from supply chain dependencies).
The six examples of mitigations the department highlighted in its industry letter will be familiar to many cybersecurity and risk professionals, including risk-based programs and policies, vendor management, access controls and cybersecurity training. Their mention in the guidance is notable, however, because the department directly tied these practices to requirements set out in its regulations.
Adrienne A. Harris, superintendent of the department, acknowledged that even as the guidance focuses on AI risks, the technology also presents opportunities for financial institutions to improve their cybersecurity.
"AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed," Harris said.
How threat actors use AI against banks
Social engineering, which relies on manipulating people to gain entry to a system rather than exploiting more technical vulnerabilities, has long been a concern in the cybersecurity space. Many companies, including KnowBe4, Fortinet, SANS Institute and others, offer security awareness training programs that focus on mitigating the threat of social engineering by teaching employees the signs that they may be targeted by such a campaign.
One of the factors that sets the more dangerous social engineering campaigns apart from the pack is how realistic a campaign is, and interactivity is key to that. AI has strengthened threat actors' ability to present a convincing front through deepfakes, according to the NYDFS guidance.
One example cited in the guidance occurred in February, when a clerk working for the Hong Kong branch of a multinational company transferred $25 million to fraudsters after being lured into joining a video conference in which every other participant was AI-generated, including a deepfake impersonating the company's chief financial officer. The clerk made 15 transfers to five local bank accounts as a result.
AI can also improve threat actors' technical capabilities, according to the NYDFS guidance, allowing less technically skilled actors to launch attacks on their own and making more technically adept actors more efficient, for example by accelerating malware development. In other words, AI can aid threat actors at nearly every stage of an attack, including in the middle of an intrusion.
"Once inside an organization's information systems, AI can be used to conduct reconnaissance to determine, among other things, how best to deploy malware and access and exfiltrate nonpublic information," the guidance said.
How banks' reliance on AI can pose threats
A threat actor does not need to infiltrate a bank's computer systems to steal its data; they can also steal it from the third parties the bank has entrusted with data. Indeed, this has been a growing tactic among threat actors hoping to steal consumer data, even apart from the rise of AI.
So-called third-party risks and supply chain vulnerabilities are a common concern among banks and regulators, and AI amplifies those concerns.
"AI-enabled tools and applications depend on the collection and maintenance of vast amounts of data," the NYDFS guidance reads. "The process of gathering this data frequently involves working with vendors and third-party service providers. Each link in this supply chain introduces potential security vulnerabilities that can be exploited by threat actors."
Because of the large amounts of data that banks and third parties must collect to power and improve their AI models, NYDFS also highlighted the exposure or theft of those vast troves as a risk of relying on AI.
"Maintaining nonpublic information in large quantities poses additional risks for covered entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract nonpublic information for financial gain or other malicious purposes," the guidance said.
Six risk mitigation strategies
The NYDFS guidance underlined the need for financial services companies to practice defense in depth: maintaining multiple layers of security controls with overlapping protections, so that if one control fails, others are in place to mitigate the impact of an attack.
From a compliance standpoint, the first and most important measure that banks operating in New York can implement is cybersecurity risk assessments. These are one of the most critical aspects of the NYDFS cybersecurity regulation, also known as Part 500, which the department amended in November 2023.
The cybersecurity regulation requires banks to maintain programs, policies and procedures based on these risk assessments, which, according to the guidance, "must take into account the cybersecurity risks faced by the covered entity, including deepfakes and other threats posed by AI, to determine which defensive measures they should implement."
The cybersecurity regulation also requires banks operating in the state to "establish, maintain and test plans that contain proactive measures to investigate and mitigate cybersecurity events," such as data breaches or ransomware attacks. Here again, the NYDFS guidance said AI-related risks should be accounted for in those plans.
Second, NYDFS "strongly recommends" that each bank consider, among other factors, the AI-related threats its third-party service providers face and how those threats could be exploited against the bank itself. Efforts to mitigate these threats could include requiring third parties to take advantage of available privacy, security and confidentiality options, according to the guidance.
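As a rough, hypothetical illustration of how a bank might track which of a vendor's available security and privacy options it has actually enabled, the Python sketch below uses made-up vendor names and option fields; it is not drawn from the guidance itself.

```python
# Hypothetical sketch: track whether each third-party AI vendor's available
# security and privacy options have been turned on. Vendor names, fields and
# option labels are illustrative assumptions, not part of the NYDFS guidance.
from dataclasses import dataclass, field

@dataclass
class VendorAIProfile:
    name: str
    handles_npi: bool  # whether the vendor handles nonpublic information
    # Security/privacy options the vendor offers and whether the bank enabled them
    options_enabled: dict[str, bool] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Options the vendor offers that have not yet been enabled."""
        return [opt for opt, enabled in self.options_enabled.items() if not enabled]

vendors = [
    VendorAIProfile("example-ai-vendor", handles_npi=True,
                    options_enabled={"data_encryption_at_rest": True,
                                     "no_training_on_customer_data": False,
                                     "sso_enforced": True}),
]

for vendor in vendors:
    for gap in vendor.gaps():
        print(f"Follow up with {vendor.name}: enable '{gap}'")
```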
Third, banks should implement multifactor authentication, which the cybersecurity regulation requires all banks to use by November 2025. The department cautioned that not all authentication factors hold up equally well against AI, and it pointed banks toward forms of authentication that deepfakes cannot easily defeat, such as digital certificates and physical security keys.
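Multifactor authentication can be implemented in many ways. The sketch below is a minimal, hypothetical example of verifying one common second factor, a time-based one-time password, using the open-source pyotp library; it is illustrative only and not a method prescribed by the guidance.

```python
# Minimal sketch of server-side verification of a time-based one-time password
# (TOTP) as a second authentication factor, using the open-source pyotp library.
# Secrets would normally live in a vault or HSM, not in code. Illustrative only.
import pyotp

def provision_totp_secret() -> str:
    """Generate a per-user secret to enroll in an authenticator app."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted one-time code is currently valid."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = provision_totp_secret()
    uri = pyotp.TOTP(secret).provisioning_uri(name="employee@example.bank",
                                              issuer_name="ExampleBank")
    print("Enrollment URI:", uri)
    # After the password check, a login flow would also call verify_second_factor().
```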
Fourth, the department reminded banks of the need to provide "cybersecurity awareness training" to all personnel at least annually, and that this training must cover social engineering, another requirement set out by the cybersecurity regulation. This helps ensure that bank staff know how threat actors can use AI to improve their campaigns.
"For example, training should address the need to verify the identity of the requester and the legitimacy of the request if an employee receives an unexpected money transfer request by phone, video or email," the guidance reads.
Fifth, covered entities "must have a monitoring process in place" that can promptly identify new security vulnerabilities so they can be remediated quickly. The guidance reminded banks that the cybersecurity regulation requires them to monitor user activity (chiefly that of employees), including email and web traffic, to block malicious content and protect against the installation of malicious code.
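As a simplified, hypothetical illustration of the first part of that requirement, promptly identifying new security vulnerabilities, the sketch below compares pinned dependency versions against a made-up advisory feed; in practice the feed would come from a vulnerability scanner or vendor advisories, and the package names and versions here are invented.

```python
# Hypothetical sketch of a vulnerability-monitoring check: compare pinned
# dependency versions against a (made-up) advisory feed so newly disclosed
# issues surface quickly. Package names and versions are illustrative only.
from packaging.version import Version

# Example advisories: package -> first fixed version (invented values)
ADVISORIES = {"examplelib": "2.4.1", "otherlib": "1.9.0"}

def outdated(pinned: dict[str, str]) -> list[str]:
    """Return packages pinned below the first fixed version in the advisory feed."""
    findings = []
    for name, version in pinned.items():
        fixed = ADVISORIES.get(name)
        if fixed and Version(version) < Version(fixed):
            findings.append(f"{name} {version} is below fixed version {fixed}")
    return findings

if __name__ == "__main__":
    for finding in outdated({"examplelib": "2.3.0", "otherlib": "1.9.2"}):
        print("PATCH NEEDED:", finding)
```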
"Covered entities that use AI-enabled products or services, or allow personnel to use AI applications such as ChatGPT, should also consider monitoring for unusual query behavior that might indicate an attempt to extract NPI and blocking queries from personnel that could expose NPI to a public AI product or system," the guidance said.
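As a rough sketch of what that kind of monitoring and blocking could look like, the hypothetical example below screens outbound prompts for obvious nonpublic-information patterns, such as Social Security numbers, and flags unusually heavy query volume before requests reach a public AI service. The patterns, thresholds and function names are illustrative assumptions, not part of the NYDFS guidance.

```python
# Hypothetical sketch: screen prompts bound for a public AI service for obvious
# nonpublic-information (NPI) patterns and flag unusually heavy query volume.
# The regexes, thresholds and names here are illustrative assumptions only.
import re
from collections import defaultdict

NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "routing_number": re.compile(r"\b\d{9}\b"),
}

QUERY_RATE_LIMIT = 50  # prompts per user per hour; arbitrary example value
_query_counts: dict[str, int] = defaultdict(int)

def screen_prompt(user: str, prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Block prompts containing NPI-like patterns
    or coming from a user whose query volume looks anomalous."""
    reasons = [name for name, pattern in NPI_PATTERNS.items() if pattern.search(prompt)]
    _query_counts[user] += 1
    if _query_counts[user] > QUERY_RATE_LIMIT:
        reasons.append("unusual_query_volume")
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, why = screen_prompt("jdoe", "Summarize account 123456789 for SSN 123-45-6789")
    print(ok, why)  # False ['ssn', 'routing_number'] -- would be blocked and logged
```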
Sixth and finally, the guidance recommended effective data management practices. One important example is disposing of data once it is no longer needed for business operations. This practice is required by the department's regulation, and starting in November 2025, banks will also have to maintain and update data inventories. These "should" include identifying all information systems that rely on or use AI, according to the guidance.
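As a minimal, hypothetical sketch of the bookkeeping such an inventory and retention check might involve, the example below records which systems hold nonpublic information, whether an AI tool relies on them, and when data falls outside its retention window. Field names and retention periods are assumptions for illustration, not requirements drawn from the regulation.

```python
# Hypothetical sketch of a data-inventory record and a retention check.
# Field names, systems and retention periods are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataAsset:
    system: str            # information system holding the data
    description: str       # what the data is
    contains_npi: bool     # whether it includes nonpublic information
    feeds_ai_model: bool   # whether an AI tool relies on or uses this system
    last_business_use: date
    retention_days: int    # how long the data is needed for business operations

def flag_for_disposal(asset: DataAsset, today: date) -> bool:
    """True if the data is past its retention period and should be reviewed for deletion."""
    return today > asset.last_business_use + timedelta(days=asset.retention_days)

inventory = [
    DataAsset("loan-origination-db", "Applicant financial records", True, True,
              date(2023, 6, 1), retention_days=365),
    DataAsset("marketing-archive", "Stale campaign exports", True, False,
              date(2022, 1, 15), retention_days=180),
]

for asset in inventory:
    if flag_for_disposal(asset, date.today()):
        print(f"Review for disposal: {asset.system} ({asset.description})")
```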