Would you let a robot wash you? This question was posed by artist group Blast Theory while investigating the ethics of artificial intelligence in care systems. The Brighton-based collective is showing in “A.I.: Who’s Looking After Me” at Science Gallery London (until January 20), which brings doctors, patients, artists and scientists together to explore key issues surrounding A.I. and care.
“In a care setting, being washed by a robot is a realistic possibility,” Blast Theory’s Matt Adams told Artnet News. “There’s this tension where you might not want a robot to do something so intimate; you want human contact. But the flip argument is, it’s better for a robot to wash you so you’re not dealing with the embarrassment of another person; you have some privacy. There are these tensions between what impersonal means versus private.”
Fear and suspicion of A.I. are escalating, raising questions of privacy, artistic authenticity, and human redundancy. The exhibition avoids easy resolutions, exploring the entangled benefits and risks of artificial intelligence in contemporary life. “A.I. is here,” Siddharth Khajuria, director of Science Gallery London, told Artnet News. “It’s not dystopian or future hopeful. It’s present and messy.”
The gallery, connected with King’s College London, combines diverse knowledge bases. “We need to bring different perspectives together to grapple with increasingly knotty societal problems,” said Khajuria. “The projects that feel messy in the best sense are collaborations between patient groups, medical engineers, and artists. When you encounter them, it will be tricky to know whose imagination has led or shaped it.”
Projects include sound artist Wesley Goatley’s immersive installation about defunct voice assistants and Fast Familiar’s exploration of the romance potential of a machine that has learned everything about love on the internet. For one exhibit, Dr Oya Celiktutan, Head of the Social A.I. & Robotics Lab at King’s Department of Engineering, collaborated with soft robotics studio Air Giants and King’s students Jeffrey Chong, Theodore Lamarche and Bowen Liu. The result is a “huggable” robot, which interacts emotively with visitors.
“I’m interested in non-verbal communications between people,” Celiktutan told Artnet News. “I’m interested in how we can imitate that with robots so they can be clear and build trust with humans. This robot really doesn’t have any resemblance to a human, but with this basic shape it can communicate and connect using nonverbal movements.”
In stark contrast with the violent robot stereotypes of the movies, the project invites trust and touch. “One of the big questions is ‘What can we do to make a robot seem more approachable?’” said Chong. “Also, what can a robot do for you to be able to trust and want to interact with it? What buttons can it press on the human brain or what behaviours can it display to make you think of it as a conversational partner?”
The robot’s cuddly appearance raises the question of aesthetics in robotics. “Soft robotics are interesting because they look cute,” said Lamarche. “I think a lot of the time people are scared of A.I. because of job replacement, but soft robotics see a lot of interest in the health sector where there are not enough people. One example is the PARO robot, which is a little seal. It can be used for dementia patients and has a gentle soothing light to keep people physically and mentally interacting.”
Artist Mimi Ọnụọha delves behind the scenes of A.I., focusing on the human workforce that enables it to run. While the end user may see A.I. as independent from humans, many systems require vast amounts of manual tagging. Ọnụọha’s work investigates the working spaces of this crowdsourced labour force, which largely operates remotely from bedrooms, front rooms and cafes in the Global South.
“It’s so tedious and intense,” Ọnụọha told Artnet News. “It’s important work but they won’t be paid the same as A.I. specialists or researchers. A.I. saves time, but whose time?” She points out the similarities between this labour distribution and the injustices of longer-running industries, such as fast fashion. “They are old patterns of labour architecture, but the aims are for this new technology.”
Ọnụọha does not call for an about-turn in our relationship with these technologies, but a considered approach to their use. “We need to insert a little friction into how people approach these tools,” she said. “What is this ecosystem and how do we want it to be? What types of power differentials are we considering? If folks can consider this while at the same time holding the potential of A.I., I think that’s great. We’re past the point of being able to throw it out. The question becomes how to think strategically.”
While most of the projects focus on human relationships with A.I., Blast Theory invites a third species into the conversation: house cats. For the piece, the group, its collaborators, animal behavioural experts and welfare officers set up a controlled experiment. For 72 hours, over three-hour stints, cats were observed interacting with a robotic arm that offered a “game” every six minutes, such as dragging a feather or throwing a ball. The system gradually learned each cat’s responses, calculating the happiness level produced by each game and adapting its offerings.
Cats, with their standoffish nature, added an interesting dimension. “Cats are famously imperious, opinionated and not biddable,” Adams said. “There was something interesting about a cat out of all animals that we have a close relationship with. They aren’t going to just be gulled into accepting something.”
The resulting video raises questions about the role of humans. Naturally, this kind of care system in the home could supplant the owner. “There were moments where the robots were playing a game with a cat and it almost felt like the cat was enjoying it more than if it was playing with a human,” said Adams. “The human is kind of an interrupting, disrupting factor. The cat wants to do prey behaviour, but if a human is there, they are making noises and have emotional weight. They might be a power figure, potentially the owner of the pet. Of course, that is threatening to us humans who want to be special.”
The exhibition is a timely reminder of the extent to which A.I. is entangled with humans, reflecting the good and evil that already exist within our structures. “Ultimately robots are what we make of them,” said Chong. “I think the reason scary robots are so popular in the media is because it reflects a fear that we have of other humans. It’s a reflection of the danger inherent in humanity.”
Khajuria agrees with this take, highlighting the importance of questioning the prejudices that underpin A.I. systems. “There’s so much emerging technology that is deliberately presented to feel magical and sleek,” he said. “But ultimately, all A.I. is the result of humans in a room making decisions, and there is usually a certain kind of person in those meetings and a certain power dynamic. Those conversations embed value systems and prejudices into the products they churn out. I hope the show will remind people just how human this stuff is.”