Photography and Art

Sunday, January 16, 2011

Future Cameras - or What is a Camera Anyway?

The Consumer Electronics Show has just wrapped up, and lots of new devices with built-in cameras, as well as dedicated consumer-oriented cameras, were introduced. At the same time, there has been a lot of chatter about a professional photographer (Lee Morris) who did a shoot of a model, complete with studio lighting, using an iPhone (see http://fstoppers.com/iphone/). The upshot is that an iPhone is capable of high-quality work given perfect lighting and a gorgeous model.

Are cameras as stand-alone devices about to disappear? Will devices like the iPhone and the new Samsung Galaxy (which has not one but two built-in cameras) replace them? Will the market bifurcate into a consumer market with combo devices that include cameras and a prosumer market with more conventional digital SLR cameras?

Let's take a step back and examine what a camera does and then think about the best machine for the job. Here are the things that a camera needs to do:


  • Capture light through some sort of opening connected to some sort of sensor. This process should allow magnification of the image in case the subject of the photo is distant.
  • Process the signal from the sensor to make sense of the light that comes into the opening - preferably turning it into an image.
  • Provide a user interface (UI) that allows a human being to hold the device, point it, compose an image (i.e. preview it) and choose things like the brightness of the image. This UI might also allow the human to post-process the image and either upload it to the Internet or print it out.
  • Capture data about the image (metadata), like the time the image was taken and ideally where the image was taken (a peek at what this metadata looks like follows this list).
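
To make that last point concrete, here is a minimal sketch of reading the metadata (EXIF) that digital cameras already embed in every JPEG. It uses the Pillow imaging library for Python; the example values in the final comment are invented for illustration, not taken from a real file.

    from PIL import Image, ExifTags

    def read_capture_metadata(path):
        """Dump the EXIF metadata a camera records at capture time.
        Which fields appear varies from camera to camera."""
        exif = Image.open(path).getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}

    # Typical result (values invented for illustration):
    # {'Model': 'Canon EOS 50D', 'DateTime': '2011:01:16 14:02:33', ...}
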
If we look at today's digital SLR, we have a device that still looks very much like the film cameras of old. The light-capturing device is a lens; in fact, most people own several lenses to cover the full range of magnification that photographers need. The user interface is a confusing mix of analog devices (e.g. a viewfinder that uses a through-the-lens mirror, knobs on the top of the camera) and digital devices like menus on multiple screens. The camera gives you the option of processing the image in-camera (i.e. creating a JPEG) or processing the image later with a computer (i.e. raw processing). The size and form factor of the DSLR are very similar to those of film SLRs. There is a computer inside the camera, but it is crude, like an old DOS PC. Communication with the human is through a menu-driven interface that suffers in comparison to an iPad or Android device.

There are notable advances being made, like Sony's Alpha 55 translucent-mirror camera that avoids the mechanical pop-up mirror. Panasonic and others also do away with the conventional viewfinder and provide an LCD screen that reflects what the sensor is capturing. My Canon camera has Live View, which lets me compose the shot by looking at the screen on the back and zoom in on the subject to make sure it's in focus.

However, these are baby steps. I think it is possible to completely re-engineer the camera using technology that is available today. Let's tackle each part of the problem:

  • Light Capture: lenses are heavy and expensive. Why do we need multiple lenses anyway? My newest version of Photoshop Lightroom has a lens correction module that is quite remarkable. Why not team up a cheap lens capable of zooming from wide angle to extreme telephoto with software that can correct the flaws in the lens (a rough sketch of the idea follows this list)? If it's good enough for the Hubble telescope, why not a camera?
  • Image Processing: Silicon is relatively cheap and getting cheaper all the time. Why not throw some serious compute power at this problem and build all the power of Lightroom into the camera? Think of how much fun it would be to have a lovely UI (e.g. like an iPad) that allowed total control over every aspect of the shot, including exposure, focus, magnification, etc. Imagine being able to see the photo BEFORE the shot on a 7-inch or 9-inch high-resolution screen and being able to fool around with parameters like focal point and depth of field using touch-screen gestures.
  • Form Factor: Why does the camera have to be a single device? Pointing an iPad at a subject looks goofy - why not an iPad-like control pod connected via Bluetooth to a small device that holds just the sensor and lens? The image capture device could be mounted on a tripod and controlled remotely from the UI of the pod. Or, it could clip to the top of the control pod if you wanted to carry it around. For action fans, the light capture device could be mounted to all sorts of things, like the helmet of a cyclist or the front of a canoe.
  • Data Capture: The control pod could capture location via its GPS and provide a keypad to capture things like the name of an image, as well as tags that will facilitate image retrieval in your database.
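To give a feel for the first two bullets, here is a minimal sketch of the kind of in-camera processing I have in mind: correcting a cheap lens's barrel distortion with the standard radial-distortion model (the same idea behind Lightroom's lens profiles), plus exposure compensation in software. The coefficients k1 and k2 are invented for illustration; a real camera would load them from a profile for the lens at its current zoom setting.

    import numpy as np

    def correct_radial_distortion(image, k1=-0.12, k2=0.03):
        """Undo barrel/pincushion distortion with the classic radial
        model: for each output pixel, sample the source at
        r_src = r * (1 + k1*r^2 + k2*r^4). k1 and k2 are invented
        here; a real profile would supply them per lens and zoom."""
        h, w = image.shape[:2]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        y, x = np.indices((h, w), dtype=np.float64)
        xn, yn = (x - cx) / cx, (y - cy) / cy       # normalized coordinates
        r2 = xn ** 2 + yn ** 2
        scale = 1 + k1 * r2 + k2 * r2 ** 2
        src_x = np.clip(xn * scale * cx + cx, 0, w - 1).astype(int)
        src_y = np.clip(yn * scale * cy + cy, 0, h - 1).astype(int)
        return image[src_y, src_x]                  # nearest-neighbor remap

    def apply_exposure(image, ev=0.0):
        """Exposure compensation in software: +1 EV doubles the light
        (assumes an 8-bit image)."""
        out = image.astype(np.float64) * (2.0 ** ev)
        return np.clip(out, 0, 255).astype(np.uint8)

A production pipeline would interpolate rather than grab the nearest pixel and would work on raw sensor data, but the principle is the same.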
It wouldn't take much doing to pull this off. The lens/sensor module could be made using parts from a superzoom digital camera. An iPad or a Samsung Galaxy could be used for the control pod. We'd need someone to package a Lightroom "lite" edition for iPad or Android. The Bluetooth tether would be the final step (a sketch of what that link might carry follows below).
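
Here is a rough sketch of what the tether might carry, from the control pod's side. A plain TCP socket stands in for the Bluetooth link, and the port number, JSON command format, and trigger_shot function are all invented for this sketch:

    import json
    import socket

    CAMERA_PORT = 9000  # invented; a real tether would use a Bluetooth RFCOMM channel

    def trigger_shot(camera_host, exposure_ev=0.0, tags=None, gps=None):
        """Control-pod side: send one capture command to the remote
        lens/sensor module, bundling the metadata the pod knows about."""
        command = {
            "action": "capture",
            "exposure_ev": exposure_ev,
            "tags": tags or [],   # e.g. ["cycling", "helmet-cam"]
            "gps": gps,           # e.g. {"lat": 43.65, "lon": -79.38} from the pod's GPS
        }
        with socket.create_connection((camera_host, CAMERA_PORT)) as sock:
            sock.sendall(json.dumps(command).encode() + b"\n")
            # The module replies with the filename of the captured image.
            return sock.recv(1024).decode().strip()

    # From the pod's touch UI:
    # trigger_shot("192.168.1.50", exposure_ev=-0.3, tags=["canoe", "action"])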

I gather that the new BlackBerry PlayBook tablet will tether to the BlackBerry phone wirelessly -- and the BlackBerry phone has a camera in it. All that's missing is the software!

