A Danish Artist Has Debuted an A.I. ‘Camera’ That Generates Images Using Your Geolocation Data


Bjørn Karmann’s camera, named the Paragraphica, uses sensors and geolocation data—including weather conditions—to create a stream of text that is then converted into a “photo,” according to his website.

The camera looks like a typical point-and-shoot, but replaces the lens with a red device, described by the photography website Digital Camera World as looking like a “TV aerial stuffed where the lens should be.” Karmann said that the bizarre element is simply a sculpture inspired by the star-nosed mole, an animal that is blind but visualizes its environment using its snout.

“The viewfinder displays a real-time description of your current location, and by pressing the trigger, the camera will create a scintigraphic representation of the description,” Karmann wrote on his website.
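Karmann has not published the exact logic behind those descriptions, but the general idea—turning location data into a text prompt for an image model—can be sketched as follows. Every field name and the sentence template here are hypothetical illustrations, not Karmann's actual code:

```python
def describe_location(place, weather, time_of_day, nearby):
    """Compose a text prompt from geolocation data.

    A rough, hypothetical sketch of the concept: the real camera
    pulls live data from APIs; here the inputs are supplied by hand.
    """
    nearby_text = ", ".join(nearby)
    return (
        f"A photo taken at {place} in the {time_of_day}, "
        f"{weather} weather, with {nearby_text} nearby."
    )

prompt = describe_location(
    place="a quiet street in Copenhagen",
    weather="overcast",
    time_of_day="early morning",
    nearby=["a bakery", "parked bicycles"],
)
print(prompt)
```

A string like this is what an image-generation model would then turn into the final “photo.”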

Introducing – Paragraphica!
A camera that takes photos using location data. It describes the place you are at and then converts it into an AI-generated “photo”.

See more here: https://t.co/Oh2BZuhRcf
or try to take your own photo here: https://t.co/w9UFjckiF2 pic.twitter.com/23kR2QGzpa

— Bjørn Karmann (@BjoernKarmann) May 30, 2023

Photographers who use the device can control the outcome of the image with three physical dials on top of the camera body, in place of the knobs that would normally adjust settings such as shutter speed and film speed.

The first knob, Karmann wrote, operates similarly to focal length in a traditional camera lens but is used to limit the radius in which the camera searches for data. A diagram of the dial shows that the distance appears to range from nearly 10 feet to an infinite distance.
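One way to picture that dial is as a mapping from a knob position to a search radius, with the top of the range treated as unlimited. The endpoints and the exponential curve below are assumptions for illustration only:

```python
import math

def dial_to_radius(position, min_m=3.0, max_m=1000.0):
    """Map a dial position in [0, 1] to a data-search radius in meters.

    Hypothetical sketch: min_m roughly matches the ~10-foot lower end
    of Karmann's diagram, and position 1.0 is treated as infinite range.
    """
    if not 0.0 <= position <= 1.0:
        raise ValueError("dial position must be in [0, 1]")
    if position == 1.0:
        return math.inf
    # Exponential sweep, so small turns near the bottom stay precise
    # while the upper end covers large distances quickly.
    return min_m * (max_m / min_m) ** position

print(dial_to_radius(0.0))  # minimum radius in meters
print(dial_to_radius(1.0))  # "infinite" setting
```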

The second knob controls the noise seed for the A.I. image diffusion. In the A.I. image generation process, models add Gaussian noise through which the image emerges. Karmann defined noise as “comparable to film grain.”
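The practical effect of a noise seed is reproducibility: the same seed produces the same starting noise, and therefore the same image for a given prompt. The standard-library sketch below only demonstrates that determinism, not a real diffusion model:

```python
import random

def seeded_noise(seed, n=8):
    """Return a reproducible list of Gaussian noise samples.

    Diffusion models begin from Gaussian noise; fixing the seed fixes
    that noise, so prompt + seed fully determines the output image.
    """
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = seeded_noise(42)
b = seeded_noise(42)
c = seeded_noise(7)
print(a == b)  # True: same seed, identical noise
print(a == c)  # False: a new seed gives a different image
```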

Courtesy of Bjørn Karmann.

Karmann described the third knob as a “guidance scale” which provides an input for how closely the A.I. model follows the generated text prompt.
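In diffusion models such as Stable Diffusion, a guidance scale typically works via classifier-free guidance: the model's unconditional prediction is pushed toward its prompt-conditioned prediction. A minimal numeric sketch of that blend, using toy vectors rather than real model outputs:

```python
def apply_guidance(uncond, cond, scale):
    """Classifier-free guidance blend of two predictions.

    scale=1.0 reproduces the prompt-conditioned prediction exactly;
    larger values push the result further toward the prompt. The
    vectors here are toy stand-ins for real model outputs.
    """
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 1.0]   # prediction with an empty prompt
cond = [2.0, 1.0]     # prediction conditioned on the text prompt
print(apply_guidance(uncond, cond, 1.0))  # follows the prompt prediction
print(apply_guidance(uncond, cond, 7.5))  # strongly prompt-driven
```

Low settings on such a dial would yield looser, more surprising images; high settings would hew closely to the generated description.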

The hardware Karmann used to create the camera included a Raspberry Pi 4, a single-board computer about the size of a credit card, along with 3D-printed housing and custom electronics. The software is built with Noodl and Python, using the Stable Diffusion API.

“Quite frankly it’s the strangest and stupidest thing I have ever seen, yet I am in awe by its engineering,” Sebastian Oakley wrote in his review for Digital Camera World. “But this is photography—or could it be?”

