This month, the Lensa AI App (the App) has topped lists of most-downloaded photo and video apps and flooded social media feeds with AI-generated Magic Avatars (otherwise known as "hot AI selfies"). In the first week of the Magic Avatars feature's launch, the App attracted 5.8 million users. In the first five days, the company that owns the App, Prisma Labs, generated $8.2 million of revenue from the App, more than it generated in the whole of the previous year.
How does the Magic Avatar feature work?
In return for a small fee, a user can upload 10 to 20 selfies. The App then uses AI technology to generate a host of colourful, highly stylised avatars. These avatars are based on the submitted selfies and informed by an open-source neural network model called Stable Diffusion, which has been trained on a sizeable amount of unfiltered content scraped from the internet.
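Prisma Labs has not published the technical details of its pipeline. As a purely illustrative sketch, the snippet below uses the open-source diffusers library to restyle a single selfie with Stable Diffusion's image-to-image mode; the checkpoint name, prompt and parameters here are assumptions for the example, not Lensa's actual configuration. (The multi-selfie upload suggests the App goes further, likely fine-tuning the model per user so that the person's likeness can be rendered in arbitrary styles, though this is unconfirmed.)

```python
# Minimal sketch, NOT Lensa's actual pipeline: illustrates how an
# open-source Stable Diffusion image-to-image pipeline can restyle a selfie.
# Model ID, prompt and parameters are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # public SD checkpoint (assumption)
    torch_dtype=torch.float16,
).to("cuda")

# One of the user's uploaded selfies, resized to the model's resolution.
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

avatar = pipe(
    prompt="stylised fantasy portrait, digital art",  # hypothetical style prompt
    image=selfie,
    strength=0.6,        # how far the output may drift from the original photo
    guidance_scale=7.5,  # how closely the output follows the prompt
).images[0]
avatar.save("magic_avatar.png")
```

Raising the strength parameter lets the model drift further from the original photo, which is one reason generated avatars can diverge so sharply from the submitted selfies.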
Criticism of the App
The App has drawn wide criticism from privacy experts, digital artists and users alike. In particular, users have commented on its tendency to perpetuate problematic gender roles and misogynistic, racist and outdated notions of beauty.
Some users commented that men appeared disproportionately likely to be depicted as astronauts, explorers and inventors, while women were more likely to be turned into fairies and princesses. Others noted that the App made them thinner than they were, while some commented that their skin had been whitened and their features altered to align with Eurocentric beauty standards.
A significant number of users complained that women's avatars appeared highly sexualised. Although some users seemed pleased with their results, many were more critical. For example, The Guardian uploaded several images of three different famous feminists: Betty Friedan, Shirley Chisholm and Amelia Earhart. It found that Betty Friedan, the author of The Feminine Mystique, became "nymph-like" and "full-chested", while Shirley Chisholm, the first black woman elected to the US Congress, "had a wasp waist" and Amelia Earhart, the aviation pioneer, was "rendered naked – leaning on to what appeared to be a bed". In addition, the tech expert and advocate Brandee Barker uploaded only headshots but found that her results contained "several sexualized, half-clothed, large-breasted, small-waisted 'fairy princess', 'cosmic' and 'fantasy images'".
Perhaps most concerningly, experiments carried out by users found that the App could be misused to generate nude images from photos of children. One user uploaded a mixture of childhood photos and selfies and found that the results included fully nude images pairing an adolescent or childlike face with a distinctly adult body.
Further, users found the App could be used to generate non-consensual deepfake soft pornography by photoshopping people's faces onto topless models. For example, TechCrunch uploaded a set of 15 photos of a well-known actor plus an additional five photos of that actor's face photoshopped onto a topless model. It explained that the results were "a lot spicier" than expected: more than ten per cent of the output images were topless photos with greater stylistic consistency than the poorly photoshopped images provided as input.
Prisma Labs' response
Such has been the strength of criticism that Prisma Labs has explicitly dealt with the question "Why do female users tend to get results featuring an overly sexualised look?" in an FAQ document. The document explains that Stable Diffusion was trained on a sizeable amount of "unfiltered internet content" and that, as such, "it reflects the biases humans incorporate into the images they produce. Creators acknowledge the possibility of societal biases. So do we".
In addition, the FAQ document explains that changes have been introduced to make it harder for users to generate "not safe for work" content.
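The FAQ does not explain how this filtering works. For illustration only: the open-source Stable Diffusion release ships with a post-generation safety checker that flags suspected NSFW outputs, and the diffusers pipeline blanks any flagged image; a hosted service could layer a similar check (alongside prompt filtering) before returning results to users. The model name and prompt below are assumptions, not Prisma Labs' actual safeguards.

```python
# Illustration only: the open-source Stable Diffusion weights ship with a
# post-generation "safety checker" that flags suspected NSFW outputs, and
# the diffusers pipeline replaces flagged images with a black placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # the safety checker is enabled by default for this checkpoint

out = pipe(prompt="stylised fantasy portrait, digital art")
if out.nsfw_content_detected and out.nsfw_content_detected[0]:
    # The flagged image has already been blanked by the pipeline;
    # a service might instead reject the job outright.
    print("NSFW content detected; avatar withheld.")
else:
    out.images[0].save("avatar.png")
```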
Comment
AI technologies clearly have major transformative potential, and the App's viral uptake evidences a real curiosity to explore and benefit from them. That said, while embracing the advances made possible by AI, we must also proceed with caution.
We must ensure that appropriate safeguards are in place to stop AI being misused. Whilst there are established criminal laws and civil remedies to protect and compensate those who are subjected to revenge porn, individuals targeted by deepfake pornography are far more exposed. It is hoped that the new offences proposed in the Online Safety Bill, criminalising the creation of deepfake pornographic images, will have a positive impact.
In addition, we must properly interrogate which data sets are being used to train our AI and what consequences they are likely to have for the output. Failure to do so risks further perpetuating societal biases and unthinkingly accepting a warped, dated and disturbing depiction of women and femininity.
This week's headlines will do nothing to encourage and empower women and minority groups to embrace AI and the metaverse, which is a crying shame. If these technologies continue to be designed by and for privileged white men, these issues will never be addressed.
To hear from experts trying to reverse this trend, join us for the third event of our Bloody Difficult Women series. The series aims to shine a light on the misogynistic way that women and minority groups are portrayed in the media, and the abuse they suffer online. It is aimed at inspiring attendees to use their voices confidently and to raise their profiles and aspirations.