
The Camera Has Many Eyes

Len Calderone

Aperture size and depth of field shape the decisions a photographer makes before pressing the shutter button, and these two elements are fundamental to a good photograph. The sharpness of an image depends on both. Depth of field determines how much of the subject is in focus; a narrow aperture extends the depth of field and reduces the blur of objects away from the focal plane, but it also requires a longer exposure, increasing the blur from the natural shake of our hands and from any movement in the scene.

More than a decade of work has produced a plenoptic camera that lets the user adjust the focus of any picture after the shot has been taken. Using a light field sensor, the camera captures not only the color and intensity of every light ray but also its direction, making the recorded image a 4D light field. By extracting appropriate 2D slices from the 4D light field of a scene, image-based modeling and rendering methods can generate a three-dimensional model and render novel views of the scene. Such a light field is generated from large arrays of both rendered and digitized images.

The light field is a function that describes the radiometric properties (the intensity of radiant energy) of light flowing in every direction through three-dimensional space. Light here is interpreted as a field, much like a magnetic field.

Lytro Camera

A typical digital camera places a lens in front of an image sensor, which captures the picture. A new camera, the Lytro, adds an intermediate step: an array of micro-lenses between the primary lens and the image sensor. That array fractures the light passing through the lens into thousands of discrete light paths, which the sensor and internal processor save as a single .lfp (light field picture) file. A standard digital image is composed of pixel data such as color and brightness, but the pixels in a light-field picture add directional information to that mix. When the user decides where in the picture the focus should be, the image is created pixel by pixel, either by the camera's internal processor and software or by a desktop app.

A single light-field snapshot can provide photos where focus, exposure, and even depth of field are adjustable after the picture is taken. In the future, light-field cameras promise ultra-accurate facial-recognition systems, personalized 3-D televisions, and cameras that provide views of the world that are indistinguishable from what you’d see out a window.


Raytrix plenoptic camera

The simplest conception of a plenoptic camera ("plenoptic" referring to light traveling in every direction through a given space) is essentially an array of mini-cameras, one micro-lens plus a micro-pixel array for each logical pixel, that separately capture light from all directions at each point in the image.
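
As a rough sketch of that mini-camera arrangement (plain Python with NumPy; the 8-pixel micro-image size and the synthetic raw frame are illustrative assumptions, not any vendor's actual format), a raw plenoptic frame can be reorganized into a 4D array and sliced into sub-aperture views:

    import numpy as np

    # Assumed layout: the sensor is tiled with square micro-images,
    # one per micro-lens, each N x N pixels. Pixel (u, v) inside a
    # micro-image records light arriving from one direction.
    N = 8                                 # micro-image size (assumption)
    raw = np.random.rand(480, 640)        # stand-in for a raw plenoptic frame

    lenses_y, lenses_x = raw.shape[0] // N, raw.shape[1] // N
    raw = raw[:lenses_y * N, :lenses_x * N]

    # Reorganize into a 4D light field lf[u, v, s, t]:
    # (u, v) = direction (position under each micro-lens),
    # (s, t) = spatial position (which micro-lens).
    lf = raw.reshape(lenses_y, N, lenses_x, N).transpose(1, 3, 0, 2)

    # A 2D slice at fixed (u, v) is one sub-aperture view of the scene,
    # as if photographed through a pinhole at that point on the lens.
    center_view = lf[N // 2, N // 2]      # shape: (lenses_y, lenses_x)

Extracting such fixed-direction slices is exactly the "2D slices from the 4D light field" operation described earlier.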

A conventional camera lets a photographer capture the light that hits the digital sensor (or the film, for those still shooting film). The photographer has to make sure the image is perfectly focused before pressing the shutter release. A light field camera captures all the light that enters the body of the camera, and its processing software lets you move back and forth along those captured light rays, changing the focus and depth of field.

A light-field camera captures far more light data, from many angles, than a conventional camera can. It accomplishes this with a micro-lens array, an optical element mounted in front of the image sensor that packs the equivalent of many lenses into a small space.

A key to light-field technology is exploiting the ever-increasing resolution of the image sensors in conventional digital cameras. A special array of lenses in front of the image sensor breaks the image apart into individual rays, and software reassembles and manipulates them.

There are other benefits as well, such as focusing images after the fact: the user doesn't have to spend time focusing before shooting, or worry about focusing on the wrong target.

The radiance along all light rays in a region of three-dimensional space illuminated by a fixed arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position, at any viewing angle, at any point in time.
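
In the standard notation of the light-field literature (the symbols below are conventional, not taken from this article), the full plenoptic function records radiance P at every position, in every direction, per wavelength and time:

    P = P(x, y, z, \theta, \phi, \lambda, t)

Fixing wavelength and time, and using the fact that radiance is constant along a ray in empty space, reduces this to the 4D two-plane light field that plenoptic cameras actually capture,

    L = L(u, v, s, t),

where (u, v) is the point at which a ray crosses the lens plane and (s, t) the point at which it crosses the sensor plane.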

Plenoptic cameras create new imaging functionalities that would be difficult, if not impossible, to achieve with traditional camera technology: images with enhanced field of view, spectral resolution, dynamic range, and temporal resolution. They also add the flexibility to manipulate the optical settings of an image (focus, depth of field, viewpoint, resolution, lighting) after it has been captured.

One example of this new imaging is omni-directional imaging using catadioptrics, an approach that combines refraction and reflection in a single optical system, usually via lenses (dioptrics) and curved mirrors (catoptrics).

Another example is high dynamic range imaging using assorted pixels, which produces images with a far greater range of light and color than conventional imaging. The effect is stunning, as great as the difference between black-and-white and color television.
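
A minimal sketch of the assorted-pixels idea, under the simplifying assumption of just two exposure levels laid out in a checkerboard (real assorted-pixel sensors use more levels and more sophisticated reconstruction):

    import numpy as np

    # Assumption: pixels alternate between a short and a long exposure
    # in a checkerboard pattern across the sensor.
    scene = np.random.rand(240, 320) * 4.0   # stand-in scene radiance
    short_g, long_g = 0.2, 1.0               # per-pixel exposure gains

    is_short = (np.indices(scene.shape).sum(axis=0) % 2).astype(bool)
    gain = np.where(is_short, short_g, long_g)
    raw = np.clip(scene * gain, 0.0, 1.0)    # the sensor clips at full well

    # Merge: invert each pixel's gain; where a long-exposure pixel
    # clipped, fall back to an unclipped short-exposure neighbor.
    est = raw / gain
    clipped = raw >= 1.0
    est[clipped] = np.roll(est, 1, axis=1)[clipped]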

A further illustration is refocusing using integral imaging, where refocusing is conceptually just a summation of shifted versions of the images that form through pinholes across the entire aperture. This amounts to shifting and adding the sub-aperture images, giving post-capture control of spatial, temporal, angular, and spectral resolution, along with plenoptic imaging for recovering scene structure.
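
Continuing the sketch above (the same assumed lf[u, v, s, t] layout; the parameter alpha is a free choice, not something defined in the article), refocusing really is just shift-and-add over the sub-aperture views:

    import numpy as np

    def refocus(lf, alpha):
        """Shift-and-add refocus of a 4D light field lf[u, v, s, t].

        alpha picks the synthetic focal plane: 0 averages the views
        unshifted, while other values shift each sub-aperture view in
        proportion to its (u, v) offset from the lens center first.
        """
        n_u, n_v = lf.shape[:2]
        cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
        out = np.zeros(lf.shape[2:])
        for u in range(n_u):
            for v in range(n_v):
                dy = int(round(alpha * (u - cu)))
                dx = int(round(alpha * (v - cv)))
                out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
        return out / (n_u * n_v)

    # Sweep the focal plane after capture, as the article describes.
    lf = np.random.rand(8, 8, 60, 80)        # stand-in light field
    near, middle, far = refocus(lf, -1.0), refocus(lf, 0.0), refocus(lf, 1.0)

Averaging over only a subset of the (u, v) views simulates a smaller aperture, which is how depth of field, and not just focus, can be adjusted after capture.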

With a plenoptic lens, light rays pass through several lenses before reaching the sensor, so they are recorded from several different perspectives. Because of all the tiny lenses in front of the sensor, the raw image comes out not as a normal photograph but as a mosaic of thousands of small, repeated sub-images.

By using some computation, a user can resolve those little fragments into a normal image. It's up to the user to decide what should be in focus and what shouldn't, simply by moving a slider in the menu. Imagine never having an out-of-focus picture again.

This technology is still being refined and has not yet found its way into robotics, but it is only a matter of time before the industry adopts plenoptic cameras. Robots with cameras are already being used in the depths of the ocean, but because of the lack of light and the sediment in the water, sharp images are hard to come by. By gathering more light from different directions, and with the ability to adjust focus after the fact, scientists will be better able to capture images of sea life and seascapes.

The images from medical robots will be much sharper, giving doctors the advantage of clearly seeing the area of an operation. Autonomous robots use cameras to track objects and identify movement characteristics. Unmanned vehicles use cameras in conjunction with LIDAR (light detection and ranging) to identify objects in the environment so they can maneuver safely. Cameras identify curbs and lane lines to keep unmanned vehicles within their lanes as they move through an urban environment. Being able to maintain a sharper focus, or a 3D rendering, will make these vehicles safer.

Most space-based astronomy has been conducted with telerobotic telescopes (robots controlled from a distance), which would be more accurate if space images were clearer.

Coupled with computers and software, plenoptic cameras can enable a robot to navigate its environment with less confusion and work autonomously. Robotic sensors using light field technology approach the human sense of sight, serving as a robot's eyes and allowing the robot to get around in its surroundings. Technology such as light fields will improve all aspects of robotics.

Additional Information

www.lytro.com

http://www.lytro.com/renng-thesis.pdf

http://www.raytrix.de/index.php/Cameras.html

http://www.popsci.com/gadgets/article/2011-05/cameras-40000-lenses-help-salvage-blurry-images

http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

About Len

Len started in the audio visual industry in 1975 and has contributed articles to several publications. He also writes opinion editorials for a local newspaper. He is now retired.

This article contains statements of personal opinion and comments made in good faith in the interest of the public. You should confirm all statements with the manufacturer to verify the correctness of the statements.
