Descriptive Camera Doesn’t Take Pictures, But Has a Masterful Command of the English Language
Matt Richardson’s “Descriptive Camera” doesn’t take pictures, not even blurry ones. But it works just fine. Rather than capturing an image on film, Richardson’s invention (a prototype built for a Computational Cameras class at New York University’s Interactive Telecommunications Program) prints out a descriptive bit of text about the photo.
How It Works
After a photograph is taken, the image is sent to Amazon’s Mechanical Turk, a service that pays humans to do tasks that computers cannot solve, like translating into words what is captured in a photograph. The camera runs on a BeagleBone, a tiny Linux computer used in hardware prototypes, and once the transcription has been returned, a thermal printer spits out the resulting text. Turnaround time, from shutter click to ink on paper, is between 3 and 6 minutes.
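To make the workflow concrete, here is a minimal sketch in Python of how a photo might be posted to Mechanical Turk as a human-description task. This is illustrative only, not Richardson’s actual code: the function name, reward amount, and the hosted image URL are all assumptions, though the request fields match what Mechanical Turk’s HIT-creation API expects.

```python
def build_hit_request(image_url, reward_usd="0.50"):
    """Assemble the parameters for a Mechanical Turk HIT asking a worker
    to describe the photo at image_url. (Hypothetical helper; with the
    boto3 MTurk client, the request could then be submitted via
    client.create_hit(**build_hit_request(url)).)"""
    # MTurk expects the task itself as an XML "Question" document; this is
    # a minimal HTMLQuestion form embedding the image and a free-text field.
    question_xml = f"""<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <img src="{image_url}" alt="photo to describe"/>
      <form>
        <textarea name="description"
                  placeholder="Describe this photo in a sentence or two"></textarea>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""
    return {
        "Title": "Describe this photograph",
        "Description": "Write a short plain-English description of the image.",
        "Reward": reward_usd,                # paid per assignment, in USD
        "MaxAssignments": 1,                 # one human description per photo
        "LifetimeInSeconds": 600,            # fits the minutes-long turnaround
        "AssignmentDurationInSeconds": 300,
        "Question": question_xml,
    }

request = build_hit_request("https://example.com/photo.jpg")
```

Once a worker submits the form, the camera would poll for the completed assignment, pull out the `description` text, and hand it to the thermal printer.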
Why We Want It
Richardson, a photographer and programmer, has had people suggest to him that the Descriptive Camera could be developed into a good product for the visually impaired. He also sees potential in the idea of a device that can catalog all the images we snap on our digital cameras and smartphones.
For the time being, Richardson is focused on perfecting the camera’s design, making it smaller and more practical.
Snap a picture outside a building in New York City and you’ll get this printout from the camera: “This is a faded picture of a dilapidated building. It seems to be run down and in the need of repairs.”