In recent weeks, an image has been stirring debate on social media. Once again, artificial intelligence (A.I.), that great technological provocateur of the past 12 months, is to blame.
At first glance, it’s innocuous. A line of seven people in sharp evening attire strolls down a brightly lit street—think Broadway minus the cars. Squint or shrink the image, however, and the word “OBEY” emerges.
Similar images spat out by A.I. generators include a Studio Ghibli-styled forest infused with the “M” of McDonald’s and a litter of cats whose markings have been manipulated to spell out “Gay Sex.” The jocular corner of the internet, it seems, continues to hold some sway.
This latest viral phenomenon grew out of attempts to create artful QR codes from images. The technique involves pairing the text-to-image model Stable Diffusion with ControlNet, a guidance tool that lets users closely control the shapes and poses of subjects in a generated image—producing, in this case, hypnotic optical illusions.
Woah. AI art. Can you see the subliminal obey pic.twitter.com/B8TSXJ5VRF
— Stephen Davies (@stedavies) September 19, 2023
The latest A.I.-centric handwringing concerns the potential to generate images with hidden, subliminal messages. The supposed danger is that brands will start producing carefully doctored images that subtly embed their logos. Previously, the existential threat was A.I.’s looming global takeover; now the concern is that it will be used to control and manipulate the masses.
“Many talk about the dangers of ‘AGI’ [artificial general intelligence] taking over humans but you should worry more about humans using A.I. to control other humans,” a prominent user called Cocktail Peanut wrote on X, formerly known as Twitter.
The fearmongering is reminiscent of the 1950s in the U.S., when the public grew panicked that companies were placing subliminal messages in advertising. The practice was labeled “merchandising hypnosis,” and in 1958 the country’s National Association of Broadcasters prohibited it. Subsequent research has generally shown the fears to have been overblown, though subliminal messaging appears somewhat more effective when the message conveyed is negative.
“A.I., like other emerging technologies, is frequently caught in the crossfire of enthusiasm and dread. Some fears are exaggerated, fueled by sensationalism or ignorance,” Irina Raicu, director of the Internet Ethics Program at Santa Clara University, told Artnet News.
“Subliminal communications, by definition, are intended to influence behavior or thoughts without the recipient’s conscious understanding. If A.I. models were consistently and purposefully generating such messages, there would be legitimate cause for alarm.”