
March 28, 2023

The Rise of the AI Deities: As Thorny Issues Cluster – Part 1

Around the world, people are in equal parts marvelling at the capabilities of AI platforms like ChatGPT to transform their lives, and fearing for their livelihoods. It’s a watershed moment that some regard as a bigger inflection point for humanity than the mainstreaming of the web or the invention of the printing press. Below, we take a look at some developing clusters of legal issues that accompany the rise of these AI deities. Whether these thorny clusters prove a bouquet of roses or a crown of thorns remains to be seen.

Introduction

Bill Gates has called artificial intelligence the most important technological advance since the graphical user interface in the 1980s.[1] This is perhaps not surprising given Microsoft has invested US$13Bn in development house OpenAI LLC, creator of fan-favourites ChatGPT and DALL-E 2.

Cynicism parked, it is inescapable that massively data-hungry, machine-learning tools like ChatGPT, DALL-E 2 and, to a lesser extent, Lensa AI have begun to revolutionise the way we work, learn and play. But what are the legal issues that accompany having our lives assisted and controlled by these intelligent, God-like (some would say) machines?

Below we look at three thorny clusters of issues raised by these AI deities.

1. The Data Access Cluster

Machine learning models like ChatGPT use mind-boggling amounts of data to develop their systems and improve their accuracy. Without a large data set, their outputs may be skewed, misleading, shallow or just plain wrong. For example, when our law firm asked ChatGPT about copyright law in a small country, it produced complete judicial citations for cases that, on further investigation, turned out not to exist.

At the same time, the surprising ease with which hackers have recently generated large-scale data breaches – headlined in Australia by the Optus Data Breach – has fanned growing consumer concerns about information privacy.

To create the enormous data troves on which successful AI models rely, huge amounts of information are gorged down without the knowledge or consent of IP owners or data subjects. This is particularly problematic when sensitive personal information is being accessed. In the wrong hands, personal information can be used for identity theft and other types of fraud, as well as profiling, discrimination, and other privacy violations.

Privacy experts rightly argue that the accuracy and integrity of AI programs should not come at the expense of individual privacy.[2]

Managing AI’s access to enough data (and it’s a lot) is a challenging issue for lawmakers because it involves a balancing act between two things that many agree are “good for society”: the potentially life-changing benefits of machine-learning programs, pitted against the widely acknowledged need to protect individual privacy.

Without regulatory interference, the onus will be on software developers and owners (often seeking to launch dominant commercial AI platforms in an increasingly competitive market) to ensure their training models do not infringe personal information rights. It is not a huge leap of logic to conclude that this may not be in their best commercial interests.

It seems beyond sensible reproach that accessed data must be “clean”, in the sense that individuals are not personally identifiable: information is quarantined, anonymised or pseudonymised, and otherwise kept de-identified. Then again, good luck policing that, chief.
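For the technically minded, de-identification is as simple in concept as it is hard to police in practice. Below is a minimal sketch of what pseudonymising records before training might look like; the field names and the salted-hash approach are illustrative assumptions only, and it is worth noting that pseudonymised data may still count as personal information under some privacy regimes.

```python
import hashlib
import os

# Illustrative sketch only: the field names and the salted-hash approach
# are assumptions for this example, not legal or technical compliance advice.
SALT = os.urandom(16)  # kept separate from the training pipeline

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with salted hashes and drop free-text
    fields that could re-identify an individual."""
    cleaned = dict(record)
    for field in ("name", "email", "phone"):  # hypothetical identifier fields
        if field in cleaned:
            digest = hashlib.sha256(SALT + cleaned[field].encode()).hexdigest()
            cleaned[field] = digest[:12]  # stable pseudonym; not reversible without the salt
    cleaned.pop("free_text_notes", None)  # hypothetical high-risk free-text field
    return cleaned

# The pseudonym is consistent across records, so the data remains useful
# for training, but the individual is no longer directly identifiable.
print(pseudonymise({"name": "Jane Citizen", "email": "jane@example.com"}))
```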

2. The IP Cluster

Generative AI platforms such as ChatGPT and DALL-E 2 enable users to instantly produce wholly new literary and artistic works by drawing on the hundreds of billions of pieces of information on which the program has been trained. Yes, it’s a ton of data.

The ownership of outputs spawned by AI challenges traditional copyright principles, which rest upon the well-established doctrinal bedrock of “human authorship” and “creative spark”.

Our article on AI art discussed artists’ complaints when AI outputs closely resembled their original work. As yet, there has been limited legal recourse for those who feel their copyrighted work has been infringed.

Copyright law has been criticised for being slow to respond, and it has fallen to industry players, such as Adobe and Nvidia[3], to self-regulate.

 

DO NOT TRAIN!

 

Adobe, a US$170Bn company known for its ground-breaking and ubiquitous software products, especially within the creative industries, is no featherweight. Adobe has responded to artists’ outcry by agreeing to train its Firefly model only on public domain and copyright-expired content[4].

Adobe has also suggested that global adoption of a “do not train” content tag would deter AI from training on “called out” copyrighted works. This is a powerful message from an important company about protecting artists’ rights, though again it is a solution that relies on the goodwill of commercial entities to self-regulate.
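As an illustration of how such a tag could operate in practice, the sketch below filters tagged works out of a training corpus before a model ever sees them. The `do_not_train` key and the metadata layout are assumptions invented for this example; Adobe’s actual proposal reportedly attaches the signal via Content Credentials metadata embedded in the asset itself.

```python
# Illustrative sketch only: the "do_not_train" key and the metadata layout
# are assumptions for this example, not Adobe's actual specification.

def respect_do_not_train(assets: list[dict]) -> list[dict]:
    """Exclude any asset whose metadata carries a do-not-train flag
    before it reaches the training set."""
    return [a for a in assets if not a.get("metadata", {}).get("do_not_train", False)]

catalogue = [
    {"id": "artwork-001", "metadata": {"do_not_train": True}},  # artist opted out
    {"id": "artwork-002", "metadata": {}},                      # no preference recorded
]

training_set = respect_do_not_train(catalogue)
print([a["id"] for a in training_set])  # -> ['artwork-002']
```

The filter itself is trivial; the hard part, as the paragraph above notes, is that nothing compels a commercial training pipeline to run it.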

Given the huge disadvantage that regulators face against the world’s most well-resourced companies who play in this space, industry self-regulation may well be the best way forward.

3. The Liability Cluster

By design, AI models develop and become more autonomous as they interact with users. If their outputs cause professional damage, personal injury, property damage, reputational harm or other financial or economic loss, the question arises as to who or what (if anyone or anything) is to be held to account.

As mentioned above, we asked ChatGPT about copyright law in another country and it completely concocted names of parties, dates and legal issues. Program owner OpenAI LLC calls these outputs “hallucinations”, yet many users may blithely rely on them, with potentially disastrous social or professional consequences.

Autonomous cars are a notable example of this challenge. The first recorded pedestrian death involving a driverless car, in Arizona in 2018, attracted significant criticism of Uber, which was testing its self-driving software at the time.[5]

Arguments arise as to which of the following parties should be made to bear the loss:

  • The owner of the program (such as OpenAI LLC)
  • The developer who wrote the software code
  • The user who provided the text input
  • The AI model itself (well, not yet anyway!)

The Uber case was settled, leaving the liability question unanswered. However, several jurisdictions, including the US and the UK, have since introduced legislation that attempts to build a legal framework around the issue.

Ensuring that those who have been harmed by AI have access to legal recourse and damages is of fundamental importance. The baseline principle of the “rule of law” dates back to ancient civilisations such as Greece and Rome, and provides that there must be an established and predictable legal framework that applies equally to all individuals and organisations.

As the capabilities of AI develop, there is a need for clear and consistent legal frameworks that regulate the responsible use and deployment of these programs, whilst also protecting the rights and interests of individuals and society as a whole.

And with that we say…

Whether these thorny clusters of legal issues will be a bouquet of roses or a crown of thorns for the era of AI deities remains to be seen. One thing we do know is that AI is here and it will change life as we know it.

As we move forward in “Twitter time” (i.e. fast), lawmakers need to balance “getting the hell out of the way” against creating adequate regulatory frameworks that ensure the ethical use of AI and preserve the fundamental rule of law. In this exhilarating time, it falls to all of us to help shape this responsibly.

E+Co are experts in the protection and commercialisation of IP assets and digital industries. For advice on developing laws surrounding AI and the digital landscape generally, please contact us below.

 

[1] Bill Gates, ‘The Age of AI has begun’ Gates Notes: The Blog of Bill Gates (Blog Post, 21 March 2023) <https://www.gatesnotes.com/The-Age-of-AI-Has-Begun>.

[2] Cameron F. Kerry, ‘Protecting privacy in an AI-driven world’ Brookings (Report, 10 February 2020) <https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/>.

[3] Dawn Chmielewski and Stephen Nellis, ‘Adobe, Nvidia AI imagery systems aim to resolve copyright questions’ Reuters (Article, 22 March 2023) <https://www.reuters.com/technology/adobe-nvidia-ai-imagery-systems-aim-resolve-copyright-questions-2023-03-21/>.

[4] In Australia and the USA, the life of the author plus 70 years.

[5] Daisuke Wakabayashi, ‘Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam’ The New York Times (Article, 19 March 2018) <https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html>.

