A recent ripple spreading rapidly through social media is an AI art app that creates stylised avatars based on selfies. The Lensa AI app was downloaded 5.8 million times in the first week of December 2022 alone. However, despite its popularity, the AI tool raises legal concerns in relation to intellectual property ownership and privacy, amongst other matters.
If you are on any kind of social media you will have seen them by now. The younger, digitally air-brushed, artistically rendered, somewhat magical visions of many of our friends have been floating through stories for some weeks. Where do these come from?
The main platform responsible for the images is ‘Lensa AI’, which was developed in 2018 as a photo-editing app utilising artificial intelligence (AI). It is owned by the US entity, Prisma Labs, Inc. In late November 2022, the launch of the ‘Magic Avatars’ feature sent the app viral and to the top of the App Store and Google Play.
The Magic Avatars feature of the Lensa AI app has also spawned a number of copycat applications which work in a similar way, and the kinds of issues discussed below are not confined to Prisma Labs alone.
The “selfie-isation” of AI art is part of a trend which we explored in our article on the DALL-E 2 AI platform, and touches again on the idea of the machine as author and inventor, which we wrote about in our article on the AI machine as a patent owner. In this article, we look primarily at some copyright and privacy concerns raised by the Lensa AI app.
The Lensa AI tool allows users to upload 10 or more “selfie” images of themselves, with no other people in the shot. A few minutes later the AI produces a series of 50 illustrations based on the images in a variety of artistic styles.
Lensa AI uses Stable Diffusion, an open-source text-to-image model that learns patterns from a vast collection of online images. That collection, the LAION-5B dataset, contains 5.85 billion image-text pairs. The pairs “train” the AI, which then generates the personalised “Magic Avatars” from the patterns it finds in the pixel arrangements of the images.
When a user uploads images of themselves, the AI is able to recognise patterns contained in the images and repeat the process to produce a range of new images. The more users that upload their images, the “smarter” the machine intelligence becomes.
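The pattern-learning described above can be illustrated with a toy sketch of the forward “noising” step at the heart of diffusion models such as Stable Diffusion. This is a simplified illustration only (real models operate on learned latent representations of images and train a neural network to reverse the noising); the schedule values below are hypothetical.

```python
import numpy as np

def noise_image(x0, t, alpha_bar):
    """Add Gaussian noise to a clean image x0 at diffusion step t.

    Standard forward-diffusion formula:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    """
    eps = np.random.default_rng(0).normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps, eps

# A simple linear noise schedule over 1,000 steps (illustrative values).
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.ones((8, 8))  # stand-in for a normalised "selfie" image
x_t, eps = noise_image(x0, t=999, alpha_bar=alpha_bar)

# By the final step almost all of the original signal is gone. Training
# teaches the model to predict eps from x_t; generating a new image runs
# that learned process in reverse, starting from pure noise.
```

During training, the model sees millions of such noised images (paired with text captions) and learns to undo the noise, which is how stylistic patterns from the training set end up reproducible at generation time.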
The Magic Avatars tool is priced at US$7.99 for 50 images, and the company reported that users spent US$9.25 million on the app in the first week of December. According to Statista, the app was downloaded 5.8 million times in that week alone.
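A quick back-of-envelope calculation from the reported figures (assuming, purely for illustration, that every purchase was the US$7.99, 50-image pack):

```python
# Reported figures, first week of December 2022.
price_per_pack = 7.99        # USD for one 50-avatar pack
images_per_pack = 50
reported_spend = 9_250_000   # USD spent by users in the week

# Implied per-image price and minimum number of packs sold.
price_per_image = price_per_pack / images_per_pack   # ~US$0.16 per avatar
implied_packs = reported_spend / price_per_pack      # ~1.16 million packs
```

On these assumptions, roughly one in five of the 5.8 million downloaders that week would have made a purchase.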
The process deployed by the app has raised a number of copyright and privacy issues, particularly from those within the art community. This has become quite contentious, with artists and commentators alike debating whether Lensa AI has crossed the line from both a legal and an ethical point of view. These are touched on below.
The bulk of the criticism Lensa AI currently faces stems from the creation of the artwork itself, and whether the artworks used to train the AI have been ‘copied’ by the app. Artists have found their artistic works in the LAION-5B dataset without their knowledge or permission, some bearing their ‘mangled’ signatures. It seems clear that Prisma Labs has used these works as a free training source for the tool, without rewarding the artists or recognising their moral rights.
Many avatars follow brush strokes, colours and creative compositions similar to those of human artists who have spent years refining their craft. However, under Australian law these stylistic elements and ideas are not protected by copyright. The Full Federal Court affirmed in Cummins v Vella [2002] FCAFC 218 that copyright protects only the particular form of expression in an individual artwork, not the concept behind it. This reflects the fundamental principle that one can examine another’s work and use its elements and techniques to create one’s own original piece, without breaching copyright.
Artists contend that Prisma Labs’ use of their stylised techniques amounts to taking a ‘substantial part’ of their works and should attract copyright protection. However, current Australian copyright law does not assist them.
Whether techniques, processes and style could ever attract copyright protection is questionable. A finished artwork comprises several elements, including style, composition, technique and colour, and it is the totality of these that can attract copyright protection. Attaching copyright to each individual element seems difficult. Some argue this is entirely appropriate: there is no such thing as an entirely ‘original’ work, as all art owes a debt to the work that inspired it, or, as the saying goes, “there is nothing new under the sun”.
Prisma Labs responded to the backlash on Twitter, recently stating that:
“To sum up, AI produces unique images based on the principles derived from data, but it can’t ideate and imagine things on its own. As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool.”
Additionally, users should be aware that the Lensa AI terms state that Prisma Labs owns any art that is created from the user’s selfies.
There have been instances where artists have found copies of their own faces and, concerningly, medical records in the LAION-5B dataset. In Australia, medical records are considered ‘sensitive information’ under the Privacy Act 1988 (Cth), and the unlawful disclosure of these records can cause serious harm to the individual.
There have been calls for copyright law to catch up and regulate the production of this form of art, to ensure that artists are not “ripped off” and that their physical work is not devalued by machines. The law has been slow to keep pace with AI development and Web 3.0 generally, and whether the established principles of copyright law can be adapted to protect artists in this way remains to be seen.
Beyond the purely legal analysis, the mass production of artworks by a machine that draws on underlying human effort and originality raises very real ethical and social anthropological questions.
Looking ahead, one can envisage databases and processes similar to those behind Lensa AI being used to create animated “digital people” with the capacity to talk and impersonate humans. This is especially so because, in simple terms, the more images the AI is fed, the more intelligent and sophisticated it becomes. As the line between the real and meta worlds increasingly blurs, one can imagine this creating chaos across social media and in corporate environments such as Zoom calls. Beyond the potential for harmless goofing around, this will raise real issues of personal liability and accountability for the conduct of these AI clones, as well as the potential for serious identity theft.
Given the amount of social interest and money being generated, and the advancing state of the tech, this space will undoubtedly develop rapidly in 2023. Stay tuned (and face-tuned).