DeepSeek, a Chinese AI chatbot akin to OpenAI's ChatGPT, has skyrocketed in popularity, becoming the most downloaded free app in the U.S. However, its rapid ascent has triggered serious privacy concerns, especially as the U.S. moves to ban TikTok over that app's ties to the Chinese government.
Like most apps, DeepSeek requires users to accept its privacy policy upon sign-up, though few take the time to read it. According to Adrianus Warmenhoven, a cybersecurity expert at NordVPN, DeepSeek’s policy—available in English—makes it clear that user data, including conversations and generated responses, is stored on servers in China. This raises alarm, as data collected under Chinese jurisdiction is subject to different security and privacy regulations.
What Data Does DeepSeek Collect?
DeepSeek’s privacy policy outlines the vast amount of information it gathers, categorized as:
1. Information You Provide
- Personal details such as date of birth, username, email, phone number, and password.
- Conversations, prompts, feedback, chat history, uploaded files, and other user-provided content.
- Customer support interactions, including proof of identity and inquiries.
2. Automatically Collected Information
- Internet and network activity, including IP address, device identifier, and cookies.
- Technical data such as device model, operating system, keystroke patterns, system language, and performance metrics.
- Usage data like features accessed and app interactions.
- Payment details.
3. Information from Other Sources
- Linked services, such as Google or Apple accounts used for login.
- Advertising and analytics partners that share user information.
The Keystroke Monitoring Debate
A controversial aspect of DeepSeek's data collection is its monitoring of "keystroke patterns or rhythms." While this might seem alarming, it is not unique to DeepSeek: TikTok, for example, gathers the same data, though Instagram does not. The concern arises because keystroke dynamics can serve as a biometric identifier, making users uniquely trackable.
DeepSeek has not clarified how it uses this data, but similar technologies have been used for fraud detection, identity verification, and even behavioral tracking. Critics argue that biometric data, unlike passwords, cannot be changed if compromised, increasing security risks.
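To make the idea concrete, here is a minimal Python sketch of how keystroke dynamics work in general: the time each key is held down (dwell time) and the gap between one key release and the next key press (flight time) form a typing rhythm that tends to stay stable for a given person. The event format, feature set, and comparison below are purely illustrative assumptions and say nothing about how DeepSeek actually processes this data.

```python
# Illustrative sketch only: how keystroke timings can act as a behavioral fingerprint.
# The event format and features are hypothetical, not DeepSeek's actual method.
from statistics import mean
import math

def keystroke_features(events):
    """Build a tiny feature vector from (key, press_time, release_time) tuples.

    Dwell time  = how long each key is held down.
    Flight time = gap between releasing one key and pressing the next.
    """
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return [mean(dwells), mean(flights)]

def distance(a, b):
    """Euclidean distance between two feature vectors (lower = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two typing samples of the same word, timestamps in seconds.
session_1 = [("h", 0.00, 0.09), ("e", 0.15, 0.23), ("y", 0.31, 0.40)]
session_2 = [("h", 0.00, 0.10), ("e", 0.16, 0.24), ("y", 0.33, 0.41)]

print(distance(keystroke_features(session_1), keystroke_features(session_2)))
# A consistently small distance across sessions suggests the same typist, which is
# why keystroke rhythms can be treated as a biometric identifier: unlike a password,
# a person's typing rhythm cannot simply be changed if the data is compromised.
```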
What Does DeepSeek Do With Your Data?
DeepSeek states that user data is used to:
- Deliver personalized ads.
- Notify users about service updates.
- Comply with legal obligations.
- Share data with law enforcement when required.
- Provide access to its “corporate group,” which can use and process user information.
According to privacy expert Nicky Watson, “DeepSeek’s policy explicitly states that all collected data is stored on Chinese servers, raising significant concerns about its potential use beyond app functionality.” This is particularly worrying given China’s cybersecurity laws, which compel tech companies to share data with government agencies.
Why Should Users Be Concerned?
Many users overlook data security, often skimming past privacy agreements. However, DeepSeek’s compliance with Chinese cybersecurity laws means that user data can be accessed by the Chinese government upon request. Additionally, the app reportedly sends data to major Chinese tech firms, including Baidu and Volces, raising fears over state influence and censorship.
DeepSeek has already demonstrated content restrictions—users cannot ask about sensitive topics such as the 1989 Tiananmen Square massacre. This level of control, combined with broad data collection, raises the possibility of manipulation and surveillance.
The Risks of Data Exposure
Even beyond state interference, data security remains a pressing concern. Inadequate safeguards can lead to:
- Identity theft – Sensitive data leaks could put users at financial risk.
- Unauthorized surveillance – Biometric keystroke patterns could be used to track users across services.
- Cyberattacks – AI platforms are prime targets for hackers, and DeepSeek recently faced “large-scale malicious attacks.”
As Warmenhoven warns, “With AI platforms becoming more sophisticated, they also become prime targets for cybercriminals.”
How Can Users Protect Themselves?
While users can take steps to safeguard their data—such as carefully reviewing privacy terms and limiting the personal information they share—experts emphasize that systemic protections should be in place.
John Scott-Railton, a researcher at the University of Toronto’s Citizen Lab, states, “The reality is that companies dictate how they use your data, and users are often at their mercy. Stronger data privacy regulations are needed to ensure that personal information is not misused.”
F. Mario Trujillo of the Electronic Frontier Foundation adds, “Typing intimate thoughts and questions into a chatbot should not mean giving up privacy rights. The best solution is enacting robust data protection laws that apply universally—whether the app is from China, OpenAI, or Meta.”
Final Thoughts
The debate over AI privacy is far from over. While DeepSeek’s privacy policy is concerning, it is not an isolated case—many tech giants engage in similar practices. However, the difference lies in the jurisdiction where data is stored and the potential risks associated with government intervention.
In the end, the responsibility for protecting user privacy should not fall solely on individuals. Governments must enforce stricter data protection laws to prevent misuse, ensuring that personal information remains secure, regardless of the company handling it.