The protection of private information is fundamentally a matter of human dignity, safety and self-determination. Its importance has grown in recent years as digital businesses have demanded more personal information from users in exchange for their services. As AI systems become increasingly integrated into people's daily lives, ensuring from a consumer protection perspective that private data is protected becomes an even more important (and difficult) challenge.
In Australia, personal information is protected by the Privacy Act 1988 (Cth) (Privacy Act) and its Australian Privacy Principles (APPs). Over time, Australian laws have evolved to align more closely with the European General Data Protection Regulation (GDPR), the global benchmark for data protection policy.
The Privacy Act defines Personal Information (PI) as information or an opinion about an identified individual, or an individual who is “reasonably identifiable”. The information or opinion does not have to be true to be PI under Australian law.
At a qualitative level, some kinds of PI are more sensitive than others. For example, political opinions and health information are typically more sensitive than phone numbers and IP addresses. At a quantitative level, the more PI an individual shares, the greater their exposure to potentially serious consequences, such as identity theft and financial fraud resulting from data breaches.
The launch of Chinese AI startup DeepSeek in January significantly disrupted the tech market, initially wiping nearly A$1 trillion off the US stock market.
Quickly becoming one of the most downloaded apps on the Apple App Store and Google Play Store, DeepSeek has been banned by various countries and organisations amid growing privacy concerns. In Australia, DeepSeek has been prohibited from federal government computers and mobile devices after it was found to pose "an unacceptable risk" to national security. Similar sentiments led to a US ban of the popular Chinese-owned social media company TikTok last month.
As of January this year, it is estimated that both ChatGPT and DeepSeek had tens of millions of daily active users, indicating that the two AI platforms have quickly become rivals vying for market share. However, the market is far from stable, with OpenAI, the owner of ChatGPT, last week rebuffing a $97.4 billion bid for certain of the company's assets led by Elon Musk.
Amid the hype around these platforms, their relative approaches to privacy require closer scrutiny. Our previous articles on the 2022 Optus Breach, the 2018 Cambridge Analytica Controversy and the 2015 Ashley Madison Breach shed light on how such data breach scandals have grown in frequency and magnitude.
Accordingly, we have done a deep dive into DeepSeek and OpenAI's respective privacy policies. Below we reveal our 10 high-level insights into the two Generative AI giants' approaches to the protection of PI, from the point of view of Australian users.
With respect to information sharing, ChatGPT and DeepSeek operate in similar ways from the user's perspective.
The potential for serious harm to users' privacy through Generative AI platforms is immense, and growing exponentially, unless these platforms are properly regulated. Users also need to be educated about these privacy risks, so that they are more careful about what they share.
In some ways, these online AI platforms can be compared to social media services like Facebook. Both enable users to share private information that can then be assembled to form deep insights about individuals. Yet social media sites are built around users sharing information with the platform, and in turn with other users, and their innate purpose is understood to be fundamentally public and social.
With Generative AI platforms like ChatGPT and now DeepSeek, users will usually perceive their information as being posted to a private service. However, especially with DeepSeek, confidentiality is far from guaranteed, and may in fact be contrary to the business model.
As the “relationship” with the AI tool grows, and the chatbot increasingly positions itself as the user’s ‘trusted advisor’, the user may be inclined to disclose quantitatively and qualitatively more PI. This in turn builds the sophistication of the user’s profile, as does other content able to be accessed by the platform about “reasonably identifiable” individuals.
Unlike OpenAI, DeepSeek requires registering users to provide their date of birth, and may require users to upload proof of identity documents to verify this.
DeepSeek also collects users’ data from third party platforms, including from their linked Apple or Google accounts. Users’ audio chat inputs are also collected.
Overall, DeepSeek evidently collects more kinds of PI than OpenAI; and the more PI that is collected, the greater the exposure to security and privacy issues.
Data anonymisation (which makes the data subject unidentifiable) is an important practice used by businesses to protect user privacy. It is particularly important in the case of a data breach, as it prevents data being traced back to individual users. Whilst Australian law does not require organisations to anonymise PI absent a request from the data subject, it is considered good practice where practicable.
DeepSeek's privacy policy outlines no procedures followed by the platform to anonymise user data. In fact, DeepSeek combines data across a user's various devices to paint a clearer picture of how that individual is using the service.
This differs from OpenAI's policy, which suggests that data 'may' be de-identified and aggregated for the same purpose. It should be noted, however, that OpenAI does not frame this as a strict obligation, so it is uncertain to what extent it is followed in practice.
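For readers curious what de-identification can look like in practice, the sketch below is a purely illustrative example (the record fields, salt and helper function are hypothetical, and are not drawn from either platform's actual systems). It replaces a direct identifier with a salted one-way hash and discards the message content before the record is aggregated for analytics:

```python
import hashlib

# Hypothetical illustration of pseudonymising a chat-log record before
# aggregation. Replacing the user ID with a salted one-way hash means the
# stored record cannot be traced back to the individual without the salt,
# which would be held separately from the analytics data set.

SALT = b"rotate-this-secret-regularly"  # stored apart from the data

def pseudonymise(record: dict) -> dict:
    digest = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()
    return {
        "user_ref": digest[:16],                   # stable pseudonym, not the real ID
        "country": record["country"],              # coarse attribute kept for analytics
        "message_length": len(record["message"]),  # a metric, not the content itself
    }

raw = {
    "user_id": "alice@example.com",
    "country": "AU",
    "message": "My Medicare number is 1234 56789 0",
}
safe = pseudonymise(raw)
print(safe)  # no email address or message text survives
```

Note that pseudonymisation of this kind is weaker than full anonymisation: whoever holds the salt can still re-link records to individuals, which is one reason vague policy wording like 'may be de-identified' leaves real uncertainty.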
Both OpenAI and DeepSeek build their services on large language models trained through deep learning, which requires large data sets to improve the platforms. Both acknowledge that they utilise user inputs and outputs to refine their services (even though, in both cases, users own the IP in this content). This raises privacy concerns for users who may feed their PI into the chatbot interface.
OpenAI allows users to opt out of having their content train its algorithms by filling out an online request on its privacy portal. DeepSeek does not currently provide its users with this option. That said, one wonders how many users utilise this process.
Under Australian law, businesses may disclose PI for the purposes for which they collected it, as outlined in their privacy policy. This ties back to the general rule that companies must obtain the individual's express or implied consent to use or disclose PI.
OpenAI imposes a strict obligation on itself not to sell PI for cross-contextual behavioural advertising. This differs from DeepSeek, which shares data with a wide range of third parties, including for advertising and analytics purposes. Unlike OpenAI, DeepSeek is also silent as to any data anonymisation techniques used to safeguard privacy during this process.
This poses a greater risk to users, as their PI is passed on to other companies, magnifying their exposure to data breaches.
OpenAI and DeepSeek both store user data in their home countries, being the United States and China respectively. Under APP 8, there are strict conditions on sending PI overseas, making organisations responsible for ensuring adequate privacy protections for data which is transferred outside Australia. Although both platforms comply with APP 8 by informing users that data is sent and stored overseas, it is still a consideration which users should be aware of when using the services.
There are also legal mechanisms that protect Australians’ PI stored overseas, including the Office of the Australian Information Commissioner’s (OAIC) networks with the US and China that facilitate regulation of Australians’ PI under the Privacy Act.
Both OpenAI and DeepSeek allow individuals under 15 to use their platforms: OpenAI sets the minimum age at 13, whereas DeepSeek sets it slightly higher at 14.
Whilst the Privacy Act does not currently contain any special protection for children (watch this space, as change is imminent), we note that:
Further:
While OpenAI’s permission policy theoretically provides stronger protection for children, in reality one wonders how many children are actually obtaining parental consent to use ChatGPT, or reading the privacy policy of DeepSeek together with their parent.
Both platforms support the upload of photos (but not videos at this stage), as well as web links and many typical office file formats, including Word, PDF and PowerPoint. However, at this stage only ChatGPT is able to fully read the contents of photos, and to identify products and provide other insights. For its part, DeepSeek says it is limited to analysing text-based files. Photos can be uploaded to DeepSeek, but it can currently only extract and analyse the text within them (using optical character recognition technology). Whether this means that the content of the photos is not being analysed by DeepSeek, on its own side and for its own purposes, is a point to reflect on; this is particularly so given that DeepSeek's privacy policy states that it collects 'uploaded files', which it can use for various purposes, including analysing how people are using the platform. That aside, the ChatGPT platform is more powerful at extracting deeper levels of PI at this stage of consumer release.
The ability to share photos and more complex files beyond text inputs in the app interface increases the sophistication and value of what can be shared to the platforms, increasing the risk of serious privacy harm.
Both DeepSeek and OpenAI are foreign companies that service global users, raising the question as to how enforceable local Australian privacy laws are against these platforms.
Section 5B of the Privacy Act facilitates the regulation of international organisations' privacy activity. In particular, it gives the Act extra-territorial operation where the organisation 'carries on business in Australia'. In 2023, the Privacy Act was amended to modify the test for an 'Australian link'. Previously, the organisation needed to collect or hold personal information in Australia. Following the amendment, however, the test was broadened to whether the organisation carries on business in Australia.
In Facebook Inc v Australian Information Commissioner,[2] the Full Federal Court of Australia addressed whether Facebook Inc, a US-based company, was subject to the Privacy Act. As the case pre-dated the amendment above, the Court was required to consider both whether Australian data was being collected or held by Facebook, and whether Facebook was carrying on business in Australia. This arose in the context of the unauthorised disclosure of Australian users' PI during the Cambridge Analytica scandal. The term 'carrying on business' was defined broadly. The Court found, prima facie, that Facebook Inc was carrying on business in Australia by deploying cookies on Australian users' devices to facilitate targeted advertising, and by offering Australian developers access to the Graph API, which enabled integration with the Facebook platform.
The Facebook case emphasised that a physical presence in Australia is not necessary for a company to be deemed to be carrying on business for privacy regulation purposes. On this precedent, OpenAI and DeepSeek are likely required to comply with the Privacy Act and the APPs, given their use of cookies to keep Australian users logged in to their services, the active availability and commercialisation of their generative AI apps in Australia, and their collection of personal information from Australian users. This means that if one of these AI platforms were to breach the Privacy Act, the OAIC would likely be able to take action against the overseas entity to investigate complaints and enforce determinations.
The race for the superior Generative AI is far from over, and will continue to play out on the global stage. However, having analysed the privacy policies of DeepSeek and OpenAI through the lens of Australian users, it is clear from a privacy perspective that (assuming it complies with its stated policy) OpenAI is the safer of the two for Australians' data.
Generative AI platforms like ChatGPT and DeepSeek encourage users to share their PI to get better, more usable and more tailored outputs from these interactive services. Whilst a user will usually perceive their information as being posted to a private service, this may not be the case, especially with DeepSeek. As the "relationship" with the AI platform grows, and the trusting user shares more PI, this builds the sophistication of the user's profile, as does other content the platform is able to access and stitch together. Users of these platforms should be aware of this, and act consciously and carefully.
Are you a business looking to ride the AI wave without sacrificing privacy? For advice on how to use AI platforms and protect your data, please contact us below. Edwards + Co Legal provide corporate and commercial legal advice to modern Australian businesses.
[1] Australian Privacy Principles Guidelines, Privacy Act 1988 (Cth), Chapter B.60.
[2] Facebook Inc v Australian Information Commissioner [2022] FCAFC 9.