
Time of Flight Forges Ahead: Design Tips to Boost 3D Performance and Cut Integration Time & Cost

White Paper from LUCID Vision Labs

Have you wondered how to boost the performance of your 3D Time of Flight imaging application while also cutting integration time and cost? Our newest hands-on guidebook describes how to get the most out of Time of Flight by accounting for factors such as the application's environment and the properties of the target objects in the scene.

What’s Inside:

• Outdoors vs Controlled Environments
• Target & Camera Motion
• Specular, Diffuse & Transparent Targets
• Scene Complexity Simplified
• Working Distance Considerations
• Put It All Together


Sample Chapter 3: "Specular and Diffuse Targets"

ToF works best on targets whose surfaces are neither highly specular nor highly absorbent; both extremes distort the point cloud. Targets that are black or very dark absorb the emitted light and return very little of it, creating voids or holes in the point cloud. Targets with strong specular reflections can saturate pixels, creating spikes in the point cloud that appear closer than their true position. Black piping, for example, combines low reflectivity (its black color) with highly specular surfaces, producing both voids and spikes (Image Set 4, Example c). Targets with these properties may not be suitable for ToF.


Image Set 4
Various point cloud examples of different objects.
a. Onions
b. Cardboard boxes
c. Black PVC pipes
d. White PVC pipes


Targets with diffuse surfaces and high reflectivity work best for ToF (Image Set 4, Examples a, b, and d). These targets return enough light to the ToF sensor without specular reflections. Some objects, however, exhibit properties that are less than ideal yet are still discernible within the scene. In these situations, target detail can be improved through changes in exposure time and gain, image accumulation, and filtering, as described below.

EXPOSURE TIME AND GAIN

Choosing the best exposure time maximizes usable depth data for both high- and low-reflectivity targets. The Helios camera offers two discrete exposure time settings, 1,000 µs and 250 µs, along with high and normal gain settings. The 1,000 µs setting is both the default and the maximum exposure time. Use the longer exposure time and high gain for scenes farther from the camera or for objects with low reflectivity; use the shorter exposure time and normal gain for scenes closer to the camera or for objects that appear oversaturated.

Exposure Time | Gain   | Best For…
1,000 µs      | High   | Dark objects, farther distances
250 µs        | Normal | Highly reflective objects, closer distances
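
As an illustration, the snippet below selects the long-exposure, high-gain combination through the camera's GenICam nodemap using LUCID's Arena SDK Python bindings (arena_api). This is a minimal sketch: the node and enumeration names are assumptions based on common Helios conventions, so verify them against your camera's node documentation.

```python
from arena_api.system import system

# Connect to the first camera found (assumes a Helios is on the network).
devices = system.create_device()
nodemap = devices[0].nodemap

# Node and enumeration names below are assumptions; check the Helios
# nodemap documentation for the exact spelling on your camera.
# Long exposure + high gain: dark objects, farther distances.
nodemap.get_node('ExposureTimeSelector').value = 'Exp1000Us'
nodemap.get_node('ConversionGain').value = 'High'

# For highly reflective or close targets, switch to the short exposure:
# nodemap.get_node('ExposureTimeSelector').value = 'Exp250Us'
# nodemap.get_node('ConversionGain').value = 'Normal'

system.destroy_device()
```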

IMAGE ACCUMULATION

The Helios processing pipeline can accumulate multiple frames for improved depth calculations, which helps with targets that produce noisy data. With image accumulation, depth values are averaged over a set number of frames, improving imaging results. Note that the more frames are accumulated, the slower the depth data is generated, since more images must be captured to calculate each output.
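
Under the hood, accumulation amounts to averaging co-located depth values across frames. The sketch below illustrates the idea offline in Python with NumPy; it assumes zero marks an invalid pixel and uses a hypothetical grab_depth_frame() capture helper.

```python
import numpy as np

def accumulate_depth(frames):
    """Average a stack of depth frames, ignoring pixels with no return.

    `frames` is a sequence of HxW depth arrays of the same static scene.
    Treating 0 as "no return" is an assumption; adapt to your sensor's
    invalid-pixel marker.
    """
    stack = np.stack(frames).astype(np.float64)
    valid = stack > 0                       # per-frame mask of usable pixels
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    counts = valid.sum(axis=0)              # how many frames saw each pixel
    return sums / np.maximum(counts, 1)     # averaged depth; 0 where never seen

# Usage: averaged = accumulate_depth([grab_depth_frame() for _ in range(8)])
```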

Examples of different image accumulation settings on the Helios ToF camera


CONFIDENCE THRESHOLD

Depth data confidence is based on the return intensity of each point. If a returning signal is too weak, there is low confidence in the resulting depth data. Thresholding removes depth data with low intensity, and therefore low confidence, improving scene clarity.
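
Conceptually, the threshold is a per-pixel mask applied to the depth map using the co-registered intensity image. A minimal NumPy sketch follows; the threshold value shown is illustrative, not a camera default.

```python
import numpy as np

def confidence_threshold(depth, intensity, min_intensity=200):
    """Invalidate depth points whose return intensity is too weak.

    `depth` and `intensity` are co-registered HxW arrays from one frame;
    the default threshold of 200 is purely illustrative -- tune per scene.
    """
    filtered = depth.copy()
    filtered[intensity < min_intensity] = 0.0  # low confidence -> drop point
    return filtered
```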


SPATIAL FILTERING

Spatial filtering reduces noise by averaging out depth differences between neighboring pixels, smoothing surfaces. The Helios camera's spatial filtering is also edge-preserving: it reduces noise within surfaces while keeping object edges sharp.
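
The Helios applies this filtering on-camera, but the effect can be approximated offline with a bilateral filter, a standard edge-preserving smoother. The sketch below uses OpenCV; the parameter values are illustrative only.

```python
import numpy as np
import cv2  # OpenCV; stands in for the camera's on-board filter

def smooth_depth(depth, d=5, sigma_depth=30.0, sigma_space=5.0):
    """Edge-preserving smoothing of a depth map via a bilateral filter.

    Each pixel is averaged only with neighbors of similar depth, so flat
    surfaces smooth out while depth discontinuities (object edges) stay
    sharp. Parameter values are illustrative, not the camera's defaults.
    """
    return cv2.bilateralFilter(depth.astype(np.float32), d,
                               sigma_depth, sigma_space)
```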


Click here to download the full Time of Flight Guidebook PDF


The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
Lucid Vision Labs

LUCID Vision Labs, Inc. designs and manufactures innovative machine vision cameras and components that utilize the latest technologies to deliver exceptional value to customers. Our compact, high-performance GigE Vision cameras are suited for a wide range of industries and applications such as factory automation, medical, life sciences and logistics. We innovate dynamically to create products that meet the demands of machine vision for Industry 4.0. Our expertise combines deep industry experience with a passion for product quality, technology innovation and customer service excellence. LUCID Vision Labs, Inc. was founded in January 2017 and is located in Richmond, BC, Canada with local offices in Germany, Japan, South Korea, China and Taiwan. For more information, please visit www.thinklucid.com.


