Why is data privacy important?
As we mark Data Privacy Week 2025, the growing complexity of data-driven technologies and the rising threat from malicious cyber actors underline the need for companies to rethink their privacy strategies.
Our online activity creates a treasure trove of data, from interests and hobbies to online purchases and browsing behavior. It may even include information about our physical selves, such as health data – think of how an app on your phone can track your steps.
This data is valuable to companies looking to capitalize on it and personalize every interaction – but is it kept secure?
With insights from industry leaders, this article explores how companies can tackle data privacy challenges while turning privacy into a competitive advantage.
Data privacy as a market differentiator
The financial and reputational repercussions of data breaches are increasing. According to IBM's 2024 Cost of a Data Breach report, the average global cost of a data breach reached $4.88 million, marking a 10% increase on the previous year and the highest figure ever recorded.
Meanwhile, the 2024 Cisco Consumer Privacy Survey found that 75% of consumers will not buy from organizations they do not trust with their data. These figures underline that data privacy is no longer just a compliance requirement but is essential to customer trust and business longevity.
Akhil Mittal, senior director of security consulting at Black Duck, highlights the growing importance of data privacy, not only as a compliance requirement but as a fundamental element of trust and market differentiation.
“High-profile breaches and stricter regulations such as GDPR, CCPA and emerging AI-related privacy laws are pushing companies to make data privacy a fundamental element of their operations,” explains Mittal.
He adds that adopting a “privacy by design” approach – integrating privacy measures from the very start of development – can mitigate risks, especially in cloud-native and distributed systems.
Mittal also highlights the role of privacy-enhancing technologies (PETs), such as data anonymization and AI-based data protection, in addressing modern challenges. These tools reduce risk and strengthen customer trust.
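To make the idea of a PET concrete, here is a minimal Python sketch of pseudonymization – replacing a direct identifier with a keyed, non-reversible token before data is analyzed or shared. The field names and key handling are illustrative assumptions, not a description of any vendor's product.

```python
# Minimal pseudonymization sketch: a keyed hash replaces a direct identifier
# so analytics can proceed without exposing who the record belongs to.
# SECRET_KEY and the record fields are hypothetical, for illustration only.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-key-vault-and-rotate-me"

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for a personal identifier."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "steps_today": 8432}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # the health metric survives; the raw identifier does not
```

Full anonymization goes further than this, but even simple pseudonymization shrinks the impact of a breach, since stolen records no longer point directly at a person.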
Building security by design: the shift to zero trust and encryption
The rapid adoption of zero trust architecture (ZTA) is transforming how companies approach security. Carlos Aguilar Melchor, chief cybersecurity scientist at SandboxAQ, champions ZTA as a cornerstone of modern data privacy strategies.
“Zero trust emphasizes the principle of ‘never trust, always verify’, improving resilience against cyber threats,” he explains.
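To illustrate that principle, the hypothetical Python sketch below verifies and authorizes every single request rather than trusting anything inside the network perimeter; the token claims, signing key and use of the PyJWT library are assumptions made purely for illustration.

```python
# Illustrative zero trust check: every request must present a verifiable
# identity AND an explicit entitlement, even if it comes from "inside".
# Claim names and the signing key are hypothetical.
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"

def handle_request(token: str, resource: str) -> str:
    try:
        claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return "403: identity could not be verified"
    # Verification alone is not enough: check authorization on every call.
    if resource not in claims.get("allowed_resources", []):
        return "403: verified identity, but no entitlement for this resource"
    return f"200: {claims.get('sub', 'unknown')} granted access to {resource}"
```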
Aguilar also emphasizes the importance of post-quantum cryptography (PQC) for future-proofing encryption against emerging quantum computing threats.
Read more from Data Privacy Week 2025: Most companies unprepared for the post-quantum world, study reveals
Similarly, Boris Cipot, senior security engineer at Black Duck, adds that security must be integrated into every stage of the software development life cycle (SDLC).
“Implementing technologies such as static application security testing (SAST) and software composition analysis (SCA) is a must. SAST tools will help uncover and mitigate vulnerabilities in your own code. SCA tools, on the other hand, will help organizations identify the open source components used in their development and mitigate their vulnerabilities and license compliance risks.
“In addition, dynamic application security testing (DAST) and interactive application security testing (IAST) help organizations uncover vulnerabilities in code, configurations and unsafe application behavior,” he explains.
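As a rough sketch of how such checks can be wired into a build, the snippet below runs an open source SAST scanner (Bandit) and an SCA audit (pip-audit) and fails the pipeline if either reports findings. The tools and paths here are assumptions chosen for illustration, not the specific products Cipot refers to.

```python
# Hypothetical pre-merge gate: run a SAST scan and an SCA audit, fail on findings.
# Assumes `bandit` and `pip-audit` are installed; the source path is illustrative.
import subprocess
import sys

checks = [
    ["bandit", "-r", "src/"],  # SAST: scan first-party code for insecure patterns
    ["pip-audit"],             # SCA: flag known-vulnerable open source dependencies
]

failed = False
for cmd in checks:
    print("running:", " ".join(cmd))
    failed = subprocess.run(cmd).returncode != 0 or failed

sys.exit(1 if failed else 0)  # a non-zero exit blocks the merge
```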
AI in data privacy: threats and solutions
AI is reshaping both data privacy risks and protection strategies. On one hand, AI-powered cyber threats – such as deepfake phishing scams and automated hacking tools – pose significant risks. On the other, AI-driven PETs, including data anonymization and federated learning, help companies strengthen security.
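The toy sketch below shows the privacy idea behind federated learning: each participant updates a model on data that never leaves its own environment, and only the updates are aggregated centrally. The local "training" step is a deliberately simplified placeholder.

```python
# Toy federated averaging loop: raw data stays with each client; only model
# updates travel. The local update rule is a placeholder for real training.
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One simplified training step on data that never leaves the client."""
    return weights + lr * (local_data.mean(axis=0) - weights)

global_weights = np.zeros(3)
client_datasets = [np.random.rand(20, 3) for _ in range(5)]  # stays on-device

for _ in range(10):  # communication rounds
    client_weights = [local_update(global_weights, data) for data in client_datasets]
    global_weights = np.mean(client_weights, axis=0)  # only updates are shared

print(global_weights)
```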
As Paul Bischoff, consumer privacy advocate at Comparitech, notes: “AI programs scrape as much data as possible from public sources to train their algorithms. As a result, personal information can be included in an AI's response to a prompt, whether intentionally or not.”
“AI significantly lowers the barriers to searching for and collecting personal data, making exploitation easier for criminals. I recommend opting out of scraping by search engines and social media where possible, and using data removal services such as Incogni or PrivacyBee to get your data out of the hands of data brokers.”
Sunil Agrawal, CISO at Glean, underlines the need for robust governance in AI systems. “Data privacy is not a checkbox – it is a zero trust imperative,” he says. Agrawal recommends integrating real-time detection and remediation mechanisms to address access anomalies as they arise, ensuring responsible AI practices.
“A strong commitment to data privacy and responsible AI use is not just ethical; it is essential to protecting sensitive information, safeguarding innovation and ensuring sustainable growth in the workplace,” he adds.
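As a loose illustration of the kind of real-time access-anomaly detection Agrawal describes, the sketch below flags a user whose request volume deviates sharply from their own recent baseline; the threshold, window and metric are assumptions for illustration only.

```python
# Hypothetical access-anomaly check: compare current activity to a per-user
# baseline and flag large deviations for review or automated remediation.
from statistics import mean, stdev

def is_anomalous(baseline: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Return True if `current` deviates strongly from the user's own history."""
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

requests_per_hour = [12, 9, 15, 11, 13, 10, 14]  # the user's recent baseline
print(is_anomalous(requests_per_hour, 120))      # True: worth investigating
```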
Transparency as a competitive advantage
Dr. Andrew Bolster, senior director of R&D at Black Duck, notes that open source AI models, such as DeepSeek, underline the importance of transparency in advancing privacy and innovation. However, he warns against neglecting security measures when implementing open source platforms.
“DeepSeek's use of OpenAI's chain-of-thought data for its initial training underlines the importance of transparency and shared resources in advancing AI. In the context of ‘open source AI’, it is crucial that the underlying training and evaluation data are open, as well as the initial architecture and the resulting model weights,” he said.
He added: “Open source AI, with its transparency and collaborative development, often surpasses closed source alternatives in terms of adaptability and trust. As more organizations recognize these advantages, we could see a significant shift towards open source AI, driving a new era of technological progress.”
Read more from Data Privacy Week 2025: DeepSeek R1: Five key takeaways from GenAI's ‘Sputnik moment’
High-profile breaches: lessons learned
Recent high-profile breaches demonstrate the urgent need for stronger data privacy measures.
The MOVEit breach in 2023, in which nearly 100 million individuals were affected due to a third-party vulnerability, the T-Mobile API leak that exposed the data of 37 million customers, and the Samsung AI data leak, also in 2023, in which employees inadvertently shared sensitive source code by entering it into ChatGPT, all highlight the importance of securing the software supply chain.
Besnik Vrellaku, CEO and founder of Salesflow.io, a go-to-market (GTM) software platform, explains: “One of the main challenges we face is ensuring data privacy while using generative AI tools. Data leakage concerns are significant, especially since large companies have already suffered breaches.”
“To mitigate this risk, we carry out thorough due diligence on the legal front and consistently enforce our terms and conditions, something many companies tend to neglect.”
Vrellaku adds that despite these challenges, Salesflow.io has been able to build trust with customers by implementing end-to-end encryption and maintaining transparency with third-party suppliers.
He stresses the importance of staying “up to date with AI legal developments and industry best practices” to navigate the complexities of AI integration and data privacy effectively.
What's next?
Governments around the world are tightening data privacy regulations. AI privacy laws, such as the EU AI Act and various US AI regulations, have introduced new transparency requirements for AI data processing.
Data localization laws in countries such as India and China enforce stricter data sovereignty, forcing companies to store data within national borders.
On post-quantum security standards, the US National Institute of Standards and Technology (NIST) is finalizing post-quantum encryption requirements to safeguard against future quantum computing risks.
Chris Linnell, principal data privacy consultant at Bridewell, notes that loss of consumer trust is now a greater concern than regulatory fines. He stresses that transparency and compliance with privacy laws are essential to maintaining customer relationships.
“We often hear regulatory fines cited as the main reason for achieving compliance, but what we see is that loss of consumer trust is one of the biggest impacts of poor data privacy practice, and in turn one of the biggest drivers for our customers to be able to demonstrate proactive compliance,” he says.
According to experts, the next five years will bring major changes to data privacy, including stronger AI governance. We can expect stricter global regulations on AI-driven data processing as AI adoption grows.
Companies will need to adopt post-quantum cryptography to protect sensitive data. As consumers become more privacy-conscious, businesses will have to prioritize transparency and ethical data practices.
Mittal concludes: “Data Privacy Week is a reminder that protecting consumer data is not just a regulatory requirement – it is a moral duty and a business advantage.” Organizations that lead on data security will build trust today, reduce risk, and stay ahead of regulatory scrutiny in the future.