Is AI generating an ‘averaged’, one-sided view of art history?

The use of artificial intelligence (AI) programmes, or models, has been plagued by issues of bias and misrepresentation. A recent study published by Bloomberg demonstrated that the text-to-image AI app Stable Diffusion generates images that replicate racial and gender biases, with disparities even starker than those in the real world. This comes after Google’s image-recognition system shockingly mislabelled an African-American couple as “gorillas” in 2015, and after Amazon dropped a recruitment algorithm in 2018 on finding that it favoured male candidates.

These biases often derive from the imperfect and skewed data used to “train” an AI system, or reflect existing social inequities, allowing unconscious biases to carry over into new technologies. Now, with the rising use of AI in image generation, art making and research, these flawed systems may have wider implications for visual cultures and for future narratives of art history.

Most providers of AI image generators do not disclose the data used to train their apps, but they typically rely on massive volumes of images scraped from the internet. While it is difficult to determine whether the internet offers a holistic picture of global visual cultures, its content is more likely to reflect the interests of the majority of its users.

The lack of online resources

English is the most used language on the World Wide Web, at 27%, while 42% of internet users are based in Asia, with Europe and North America following at 38% combined. The Senegalese artist Linda Dounia Rebeiz says the “Western canon of art has long been the focus of most academic and critical attention, and therefore is the most widely documented”, and so forms the basis of the art and images included in such datasets. Birde Tang, a curator based in Abu Dhabi, says that in her experience of researching histories of art related to the Global South, and specifically the WANASA region (West Africa, North Africa, South Asia), she has faced challenges in accessing materials “due to the lack of online resources”, likely because far less information on these geographies is available online.

LAION-5b is the largest public image-text dataset used to train text-to-image models, including Midjourney and Stable Diffusion. The artists Mathew Dryhurst and Holly Herndon co-founded Spawning, an organisation that creates tools to allow artists to opt in or out of the datasets that generative art models use to train and create compositions, and which shares its analysis of LAION-5b through Have I Been Trained?, a web-based tool the pair created in 2022 to show whether particular images have been used for AI training and to request that images be opted out of future models. The tool has demonstrated, Dryhurst says, that “most data scraped for training [AI models comes] from Western sources, with a vast majority of images coming from Pinterest, stock photo repositories and WordPress [server networks]”. While models used to guide the image-generation process, such as CLIP, may have some knowledge of artists and visual styles of non-Western origin, Dryhurst says, “If you were to simply prompt ‘art’ or ‘a painting’, I think it is fair to conclude you are most likely to be summoning Western archetypes”.
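That dynamic can be illustrated in a few lines of code. The sketch below is a minimal, illustrative example, not Spawning’s or Dryhurst’s method: it uses the openly available “openai/clip-vit-base-patch32” checkpoint from the transformers library to score how well candidate images match a text prompt. The image file names are placeholders; the point is that if a model’s training data over-represents Western imagery, a generic prompt such as “a painting” will tend to align most strongly with Western archetypes.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (an assumption for illustration; the models
# behind commercial image generators are not disclosed).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image files: one canonical Western painting, one work from an
# under-represented tradition.
images = [Image.open("dutch_still_life.jpg"),
          Image.open("senegalese_glass_painting.jpg")]
prompts = ["a painting", "art"]

inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_text  # one row per prompt, one column per image

# For each prompt, a probability over the candidate images; skewed training
# data tends to push the weight towards the Western image.
print(scores.softmax(dim=1))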

Datasets such as LAION-5b are not created with the intention of furthering specific art-historical narratives or visual cultures; they are simply composites of what is available on the wider web. In assembling them, images are treated as data points rather than as visual objects with historical or social contexts, producing what the film-maker and artist Hito Steyerl refers to as the “mean image” or, in Dryhurst’s words, a “poorly curated average representation of a concept scraped carelessly from the internet”. When the process tends towards a single average, it erases the margins: art histories and visual cultures outside the popularised canon may be further sidelined as AI-generated images and narratives become more widely used for art making and research.
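Steyerl’s “mean image” can be shown with toy arithmetic. In the invented example below, 90 works from a dominant visual tradition and ten from an under-represented one are reduced to points in a small feature space; the average of the whole collection lands almost entirely on the majority cluster, and the smaller group vanishes from the “mean”.

import numpy as np

# Invented feature vectors: 90 works cluster around 0.8 (the dominant canon),
# ten cluster around 0.2 (an under-represented tradition).
rng = np.random.default_rng(0)
dominant = rng.normal(loc=0.8, scale=0.05, size=(90, 3))
marginal = rng.normal(loc=0.2, scale=0.05, size=(10, 3))
collection = np.vstack([dominant, marginal])

mean_image = collection.mean(axis=0)
print(mean_image)  # roughly [0.74, 0.74, 0.74]: the average sits with the
                   # majority and the margins all but disappear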

A work from Adaeze Okaro’s series Planet Hibiscus, made with an AI programme, for the artist Linda Dounia Rebeiz’s digital show In/Visible, which was devised to feed future AI training with more diverse demographics

Courtesy the artist

Artists such as Rebeiz are incorporating AI into their practice to generate images and to feed future training with deliberate representation of diverse demographics and perspectives. Rebeiz says, “I understand that [AI] has the tendency to be an echo chamber of our world order, which makes my relationship with it complicated, but also makes my participation critical”. From training custom generative adversarial network (GAN) models, in which one network generates images and a second distinguishes real from fake, to working with generative AI models, Rebeiz sees it as critical that she, as a Black woman artist, is actively present in the emerging field. “I was afraid that if I did not [participate],” says Rebeiz, “I and everything I represent would be erased from the digital memory of the world”. To continue this work, Rebeiz has curated In/Visible on the digital platform Feral File, bringing together ten Black artists working with AI. She sees their work as “brute-forcing AI tools to tell their stories” by “endlessly reprompting, correcting distortions and editing out stereotypes”, making the contributions of Black artists to AI more visible.
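For readers unfamiliar with the architecture Rebeiz mentions, the sketch below is a minimal, generic GAN in PyTorch rather than her actual setup: a generator that turns random noise into an image and a discriminator that scores how likely an image is to be real. The layer sizes are illustrative assumptions.

import torch
from torch import nn

class Generator(nn.Module):
    """Turns a random noise vector into a flat image."""
    def __init__(self, latent_dim=100, image_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Estimates the probability that an image is real rather than generated."""
    def __init__(self, image_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# One adversarial step: the generator proposes images, the discriminator scores them.
g, d = Generator(), Discriminator()
fake_images = g(torch.randn(16, 100))
real_probability = d(fake_images)
print(real_probability.shape)  # torch.Size([16, 1])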

How AI can make connections

Cultural institutions and collections also have a part to play. Kevin Lim, the director of innovation and technology at the National Gallery Singapore, sees the museum’s collections as valuable resources for those working with Southeast Asian art, which can contribute to AI datasets that reflect the role the region has played in global art history. Lim says that working with large datasets opens new opportunities for identifying patterns and relationships “that are beyond human or a single author’s perception”, which could unveil interesting connections to add to the discourse.

AI models do not yet understand images in their social and cultural contexts, and at present they are unfiltered reflections of what Dryhurst describes as the “commercial internet”. As the technology evolves, Dryhurst expects that we will be interacting with more specific models, ones that target images in a more directed fashion. “I think that curators, historians, journalists and artists alike will likely have to take very seriously the task of building their own models and curating their own data in order to stimulate a wide range of narratives and pathways for the public,” he says. This sentiment is echoed by Rebeiz, who believes we first have to “grapple with the fraught legacy of the art world and technology” and recognise the biases in our systems in order to build new tools that can be truly expansive and polyphonic.

  • Clara Che Wei Peh is a Singapore-based curator and art writer specialising in emerging technologies. Her curated projects include Art Dubai Digital 2023.
