As an AI platform from the Chinese developer DeepSeek rises in popularity, many users are concerned about its privacy policies and how their data is being handled.
Dr. Ali Dehghantanha, professor in the University of Guelph’s School of Computer Science, says, “DeepSeek AI introduces some innovative search and data retrieval capabilities, but these come at the cost of critical privacy challenges that differ from models like ChatGPT.”
Dehghantanha is the director of the Cyber Science Lab, a U of G lab focused on cybersecurity research and training. According to him, ChatGPT is designed to generate responses from pre-trained data while prioritizing user privacy, whereas DeepSeek AI relies on real-time data processing and live search features.
When ChatGPT does use live search, he says, it’s designed to limit the amount of real-time data sent externally.
“The design philosophy matters, too,” he says. “ChatGPT focuses on generating responses from a fixed dataset, making live search an additional, controlled feature.”
In contrast, he says, DeepSeek AI incorporates live search as a core functionality, meaning it continuously sends queries and information to external sources.
“This increased data flow can potentially expose sensitive information if robust data security protocols aren’t in place,” he says. “DeepSeek’s innovative methods create risks related to how user data is accessed, processed and stored, underscoring the need for transparency and strong privacy safeguards.”
As the boundaries of AI technology are pushed, Dehghantanha says it’s essential to strike a balance between AI innovation and security.
He recommends that the company communicate clearly about its data handling practices and that proactive regulatory measures be put in place.
“This will be key to ensuring public trust as these tools evolve,” he says.
Dehghantanha is the Canada Research Chair in Cybersecurity and Threat Intelligence.
He is available for interviews.
Contact:
Dr. Ali Dehghantanha
adehghan@uoguelph.ca