
February 19, 2025

DeepSeek or Deep Privacy Concern? How the Chinese AI Platform’s Privacy Policy Compares with ChatGPT’s

Introduction

General

The protection of private information is fundamentally a matter of human dignity, safety and self-determination. Its importance has grown in recent years as digital businesses have demanded more personal information from users in exchange for their services. As global AI systems become increasingly integrated into people’s daily lives, ensuring that private data is protected becomes an even more important (and difficult) challenge from a consumer protection perspective.

In Australia, personal information is protected by the Privacy Act 1988 (Cth) (Privacy Act) and its Australian Privacy Principles (APPs). Over time, Australian laws have evolved to align more closely with the European General Data Protection Regulation (GDPR), the global benchmark for data protection policy.

The Privacy Act defines Personal Information (PI) as information or an opinion about an identified individual, or an individual who is “reasonably identifiable”. The information or opinion does not have to be true to be PI under Australian law.

At a qualitative level, certain types of PI are more sensitive than others. For example, political opinions and health information are typically more sensitive than phone numbers and IP addresses. At a quantitative level, the more PI an individual shares, the more vulnerable they become to the potentially serious consequences of a data breach, such as identity theft and financial fraud.

DeepSeek and the Growth of the Generative AI Chatbot Market

The launch of Chinese AI startup DeepSeek in January significantly disrupted the tech market, initially wiping nearly A$1 trillion off the US stock market.

Quickly becoming one of the most downloaded apps on the Apple App Store and Google Play Store, DeepSeek has been banned by various countries and organisations amid growing privacy concerns. In Australia, DeepSeek has been prohibited from federal government computers and mobile devices after it was found to pose “an unacceptable risk” to national security. Similar sentiments led to a US ban of the popular Chinese-owned social media company TikTok last month.

As of January this year, it is estimated that both ChatGPT and DeepSeek had tens of millions of daily active users, indicating that the two AI platforms have quickly become rivals vying for market share. However, the market is far from stable, with OpenAI, the owner of ChatGPT, last week rebuffing a $97.4 billion bid, led by tech billionaire Elon Musk, for certain assets of the company.

Amid the hype around these platforms, their relative approaches to privacy require closer scrutiny. Our previous articles on the 2022 Optus Breach, the 2018 Cambridge Analytica Controversy and the 2015 Ashley Madison Breach shed light on how such data breach scandals have grown in frequency and magnitude.

Accordingly, we have done a deep dive into DeepSeek and OpenAI’s respective privacy policies, to deliver 10 key insights around their approaches to personal data.

DeepSeek vs ChatGPT – 10 Privacy Insights

Below we reveal our 10 high level insights on the respective approaches of the Generative AI giants to the protection of PI, from the point of view of Australian users.

Insight #1 Generative AI Platforms require serious regulation and user education (especially as they grow exponentially)

With respect to information sharing, ChatGPT and DeepSeek each operate in a similar way from the user’s perspective. To simplify:

  1. A user registers an account (free or, in the case of ChatGPT only at this stage, a paid subscription), at which point certain PI is shared.
  2. The user is then able to interact with the chatbot interface by inputting data prompts about subjects of interest to the user – this may contain PI about the user and/or people connected to the user.
  3. The platform then accesses its large language model database to formulate responses or outputs. If the prompt includes PI, the outputs will contain (and build on) the PI shared by the user, providing a more sophisticated view of the user and/or connected people. The degree to which the personal profile is ‘built up’ will depend on the other data sets the platform has been trained on or can access.
  4. The outputs are then ‘fed back into the machine’, including the deeper and more sophisticated personal profile about the user (and/or other people), to further train the platform, potentially being available to be disclosed to other users in outputs or for other purposes.

The ability for serious harm to occur to users’ privacy rights through Generative AI platforms is potentially immense, and growing exponentially, unless properly regulated. Users also need to be educated about these privacy risks, so they are more careful about what they share.

Insight #2 Generative AI platforms are able to build deep user profiles

In some ways, these online AI platforms can be compared to social media services like Facebook. Both enable users to share private information that can then be assembled to form deep insights about individuals. Yet social media sites are built around users sharing information with the platform, and in turn with other users, and their innate purpose is understood to be fundamentally public and social.

With Generative AI platforms like ChatGPT and now DeepSeek, users will usually perceive their information as being posted in a private service. However, especially with DeepSeek, confidentiality is far from guaranteed, and in fact may be contrary to the business model.

As the “relationship” with the AI tool grows, and the chatbot increasingly positions itself as the user’s ‘trusted advisor’, the user may be inclined to disclose quantitatively and qualitatively more PI. This in turn builds the sophistication of the user’s profile, as does other content able to be accessed by the platform about “reasonably identifiable” individuals.

Insight #3 DeepSeek collects quantitatively more data types from users than OpenAI

Unlike OpenAI, DeepSeek requires registering users to provide their date of birth, and may require users to upload proof of identity documents to verify this.

DeepSeek also collects users’ data from third party platforms, including from their linked Apple or Google accounts. Users’ audio chat inputs are also collected.

Overall, DeepSeek evidently collects more kinds of PI than OpenAI; and the more PI that is collected, the greater the exposure to security and privacy issues.

Insight #4 Unlike OpenAI, DeepSeek does not practice data anonymisation techniques

Data anonymisation (which makes the data subject unidentifiable) is an important practice used by businesses to protect user privacy. It is particularly valuable in the event of a data breach, as it prevents data being traced back to individual users. Whilst Australian law does not require organisations to anonymise PI absent a request from the data subject, it is considered good practice where practical.

DeepSeek’s privacy policy outlines no procedures followed by the platform to anonymise user data. In fact, DeepSeek even combines data between users’ various devices to paint a clearer picture of how the individual user is using the service.

This differs from OpenAI’s policy, which suggests that data ‘may’ be de-identified and aggregated for the same purpose. However, this is not expressed as a strict obligation, so it is uncertain to what extent it is followed in practice.
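To illustrate what one such technique can look like in practice (a hypothetical sketch, not drawn from either company’s actual systems), a simple pseudonymisation step replaces a direct identifier with a keyed hash before data is stored or analysed:

```python
import hashlib
import hmac

# Illustrative sketch only -- not either platform's actual practice.
# Pseudonymisation replaces a direct identifier with a keyed (salted)
# hash, so records about the same user can still be linked together
# without exposing the underlying value. The secret key must be stored
# separately from the data set, or the tokens can be regenerated.
SECRET_KEY = b"keep-this-key-out-of-the-dataset"


def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hmac.new(
        SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256
    ).hexdigest()


record = {"email": "user@example.com", "prompt": "What does APP 8 require?"}
safe_record = {**record, "email": pseudonymise(record["email"])}
```

Note that pseudonymised data of this kind may still relate to a “reasonably identifiable” individual once combined with other data sets, which is why true anonymisation is a higher bar than simply hashing identifiers.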

Insight #5 Unlike OpenAI, DeepSeek doesn’t allow users to opt out of having their content used to train AI

Both OpenAI and DeepSeek train their AI algorithms using large language models and deep learning, which require large data sets to improve their platforms. Both acknowledge that they utilise user input and output to refine their service (even though users own the IP in this content in both cases). This raises privacy concerns for users who may feed their PI into the chatbot interface.

OpenAI allows users to opt out of having their content train its algorithms by filling out an online request on its privacy portal. DeepSeek does not currently provide its users with this option. That said, one wonders how many users utilise this process.

Insight #6 DeepSeek shares your data with more third parties than OpenAI

Under Australian law, businesses may disclose PI for the purposes for which they collected it, as outlined in their privacy policy. This ties back to the general rule that companies must obtain the individual’s express or implied consent to use or disclose PI.

OpenAI imposes a strict obligation on itself not to sell PI for cross-contextual behavioural advertising. This differs from DeepSeek, which shares data with a wide range of third parties, including for advertising and analytics purposes. Unlike OpenAI, DeepSeek is also silent as to any data anonymisation techniques used to safeguard privacy during this process.

This poses a greater risk to users, as their PI is passed on to other companies, magnifying the risk of a data breach.

Insight #7 Personal data is not stored in Australia

OpenAI and DeepSeek both store user data in their home countries, being the United States and China respectively. Under APP 8, there are strict conditions on sending PI overseas, making organisations responsible for ensuring adequate privacy protections for data which is transferred outside Australia. Although both platforms comply with APP 8 by informing users that data is sent and stored overseas, it is still a consideration which users should be aware of when using the services.

There are also legal mechanisms that protect Australians’ PI stored overseas, including the Office of the Australian Information Commissioner’s (OAIC) networks with the US and China that facilitate regulation of Australians’ PI under the Privacy Act.

Insight #8 Both AI platforms provide limited protection for children using the service

Both OpenAI and DeepSeek allow individuals under 15 to use their platforms: OpenAI sets the minimum age at 13, whereas DeepSeek sets it slightly higher at 14.

Whilst the Privacy Act does not currently contain any special protection for children (watch this space, as change is imminent), we note that:

  • The OAIC considers that individuals below the age of 18 may lack the maturity or digital literacy to provide adequate consent to the use and disclosure of their PI, and that this should be evaluated on a case-by-case basis; and
  • As a general rule, the OAIC presumes that an individual under 15 does not have the capacity to consent.[1]

Further:

  • OpenAI requires all users below the age of 18 to obtain parental permission to use their service;
  • DeepSeek on the other hand only requires users under 18 to read the privacy policy with a parent or guardian; and
  • Neither platform provides a mechanism to ensure that these respective rules are complied with.

While OpenAI’s permission policy theoretically provides stronger protection for children, in reality one wonders how many children are actually obtaining parental consent to use ChatGPT, or reading the privacy policy of DeepSeek together with their parent.

Insight #9 Both platforms allow users to share photos and other complex file formats, increasing the risk of serious harm

Both platforms support the upload of photos (but not videos at this stage), as well as web links and many common office file formats, including Word, PDF and PowerPoint. However, at this stage only ChatGPT is able to fully read the contents of photos and provide insights based on them. For its part, DeepSeek says it is limited to analysing text-based files; whilst photos can be uploaded, DeepSeek can currently only extract and analyse text within them (using optical character recognition technology). Whether this means that the content of photos is not being analysed by DeepSeek, on its own side and for its own purposes, is a point to reflect on, particularly since DeepSeek’s privacy policy states that it collects ‘uploaded files’, which it may use for various purposes, including analysing how people are using the platform. That aside, the ChatGPT platform is more powerful at extracting deeper levels of PI at this stage of consumer release.

The ability to share photos and more complex files beyond text inputs in the app interface increases the sophistication and value of what can be shared to the platforms, increasing the risk of serious privacy harm.

Insight #10 DeepSeek and OpenAI must both comply with the Australian Privacy Act

Both DeepSeek and OpenAI are foreign companies that service global users, raising the question as to how enforceable local Australian privacy laws are against these platforms.

Section 5B of the Privacy Act facilitates the regulation of international organisations’ privacy activity. In particular, it gives the Act extra-territorial operation where the organisation ‘carries on business in Australia’. In 2023, the Privacy Act was amended to modify the test for an ‘Australian link’. Previously, the organisation needed to collect or hold personal information in Australia. However, following the amendment, the test was broadened to whether the organisation carries on business in Australia.

In Facebook Inc v Australian Information Commissioner,[2] the Full Federal Court of Australia addressed whether Facebook Inc, a US-based company, was subject to the Privacy Act. As the case pre-dated the amendment above, the Court was required to consider both whether Australian data was being collected or held by Facebook and whether Facebook was carrying on business in Australia, in the context of the unauthorised disclosure of Australian users’ PI during the Cambridge Analytica scandal. The Court defined ‘carrying on business’ broadly, finding prima facie that Facebook Inc was carrying on business in Australia by deploying cookies on Australian users’ devices to facilitate targeted advertising, and by offering Australian developers access to the Graph API, which enabled integration with the Facebook platform.

The Facebook case emphasised that a physical presence in Australia is not necessary for a company to be deemed to be carrying on business for privacy regulation purposes. Based on this precedent, it is likely that OpenAI and DeepSeek must comply with the Privacy Act and the APPs, including because they also use cookies to enable Australian users to stay logged in to their services, actively make available and commercialise their generative AI apps in Australia, and collect personal information in Australia. This means that were one of these AI platforms to breach the Privacy Act, the OAIC would likely be able to take action, notwithstanding their overseas locations, to investigate complaints and enforce determinations.

Concluding Remarks

The race for the superior Generative AI platform is far from over, and will continue to play out on the global stage. However, having analysed the privacy policies of DeepSeek and OpenAI through the lens of Australian users, it is clear that (assuming it complies with its policy) OpenAI is the safer choice for Australians’ data.

Generative AI platforms like ChatGPT and DeepSeek encourage users to share their PI to get better, more usable and more tailored outputs from these interactive services. Whilst a user will usually perceive their information as being posted in a private service, especially with DeepSeek, this may not be the case. As the “relationship” with the AI platform grows, and the trusting user shares more PI, this builds the sophistication of the user’s profile, as does other content the platform is able to access and stitch together. Users of these platforms should be aware of this, and act consciously and carefully.

Are you a business looking to ride the AI wave without sacrificing privacy? For advice on how to use AI platforms and protect your data, please contact us below. Edwards + Co Legal provide corporate and commercial legal advice to modern Australian businesses.

[1] Australian Privacy Principles Guidelines, Privacy Act 1988, Chapter B.60.

[2] Facebook Inc v Australian Information Commissioner [2022] FCAFC 9.
