LiDAR, which stands for Light Detection and Ranging, is a time-of-flight sensing technology that pulses low-power, eye-safe lasers and measures the time it takes for the laser to complete a round trip between the sensor and a target. The resulting aggregate data are used to generate a 3D point cloud image, providing both spatial location and depth information to identify, classify, and track moving objects.
“Time of flight” between transmission and reception of emitted light allows distance to be measured, enabling a point cloud of an object or environment to be created continuously in real time.
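The time-of-flight relationship above can be sketched as a short calculation: the pulse travels to the target and back, so the one-way distance is half the round trip at the speed of light. The example timing value below is illustrative, not a product specification.

```python
# Time-of-flight ranging: a pulse covers the sensor-to-target distance twice,
# so the one-way distance is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light in m/s


def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time of one pulse."""
    return C * round_trip_time_s / 2.0


# A pulse returning after roughly 667 nanoseconds corresponds to about 100 m.
print(round(tof_distance_m(667e-9), 1))
```

Because light travels about 30 cm per nanosecond, ranging to centimeter precision requires timing the return to within a fraction of a nanosecond.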
Point clouds are large data sets composed of 3D point data. They contain raw data from the scanned surroundings: moving objects such as vehicles and humans as well as stationary objects such as buildings, trees, and other permanent structures. The point cloud can then be transformed by a software system to create LiDAR-based 3D imagery of a given area.
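Each LiDAR return is a range measured along a known beam direction, and converting those polar measurements to Cartesian coordinates is what builds the point cloud. A minimal sketch of that conversion (the function name and sample values are illustrative):

```python
import math


def to_cartesian(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (range plus beam angles) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)


# A full scan becomes a point cloud: a list of (x, y, z) samples.
returns = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 30.0)]
cloud = [to_cartesian(r, az, el) for r, az, el in returns]
```

Real sensors emit such returns at rates of hundreds of thousands to millions of points per second, which is why point clouds grow large quickly.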
Field-of-view is defined as the angle in degrees covered by a sensor. Typically LiDAR sensor performance is measured in horizontal and vertical field of view.
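Field of view combines with angular resolution to determine how many points a sensor produces per frame. A rough sketch of that relationship, using illustrative numbers rather than any specific sensor's specification:

```python
def points_per_scan(h_fov_deg: float, v_fov_deg: float,
                    h_res_deg: float, v_res_deg: float) -> int:
    """Approximate points per frame from field of view and angular resolution."""
    return round(h_fov_deg / h_res_deg) * round(v_fov_deg / v_res_deg)


# Example: 360-degree horizontal FOV at 0.1-degree resolution,
# with beams spread over a 32-degree vertical FOV at 2-degree spacing.
print(points_per_scan(360.0, 32.0, 0.1, 2.0))
```

Widening the field of view or tightening the angular resolution both increase point density, at the cost of more data to process per frame.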
LiDAR operates by detecting and measuring the return of light to the sensor’s receiver. Some targets reflect light better than others, making them easier to reliably detect and measure up to the sensor’s maximum range. For example, a white surface returns a greater amount of light compared to a black surface, which absorbs more of the light. This makes a white target easier to reliably detect or measure at longer distances compared to a very dark target.
Mirror-like targets are also more challenging to detect and measure because, unlike diffuse targets which disperse light in many directions, mirror-like objects reflect only a small, focused beam of light that may not reflect directly into the sensor’s receiver.
Meanwhile, retro-reflective targets—like road signs and license plates—return a high percentage of light back to the receiver and are good targets for LiDAR sensors. Because of these differences, the real-world performance and maximum effective range of a LiDAR sensor may vary depending on the target’s surface reflectivity. Contact the Quanergy sales team to discuss your specific application.
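The effect of reflectivity on range can be sketched with a simple scaling argument: for a diffuse target, received power falls off roughly with the square of range and scales with surface reflectivity, so the range at which the detection threshold is reached scales with the square root of reflectivity. The reference values below are illustrative assumptions, not a product specification.

```python
import math


def relative_max_range(reflectivity: float,
                       reference_reflectivity: float = 0.8,
                       reference_range_m: float = 200.0) -> float:
    """Rough scaling of detection range with diffuse-target reflectivity.

    Assumes received power ~ reflectivity / range**2, so detection range
    scales with sqrt(reflectivity). Reference values are illustrative.
    """
    return reference_range_m * math.sqrt(reflectivity / reference_reflectivity)


# A 10%-reflective dark target is detectable at a much shorter range
# than an 80%-reflective white target.
print(round(relative_max_range(0.10)))
```

This is why sensor data sheets often quote maximum range at a stated reflectivity (for example, at 10% or 80%), and why dark or mirror-like targets may be detectable only at shorter distances.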
Quanergy provides high performance 3D LiDAR sensors and AI-powered perception software that improve safety, efficiency, and performance while reducing costs in a wide variety of markets and applications.
Our patented M Series of LiDAR sensors features a high-resolution, 360-degree field of view to generate rich 3D point clouds in real time at long range. These cost-effective, high-definition LiDAR sensors are a rugged and reliable solution for the most challenging real-world applications that require the widest field of view and the longest range.
Built upon Optical Phased Array (OPA) technology, our S Series are true 100% CMOS solid-state LiDAR sensors that provide the highest level of reliability, longevity, and cost-effectiveness in an ultra-compact device—small enough to fit in the palm of your hand. Thanks to OPA technology, the Quanergy S Series has no moving parts at either the macro or the micro scale. This ensures high resistance to vibration and provides a Mean Time Between Failures (MTBF) of more than 100,000 hours. The affordable, scalable CMOS silicon process enables mass production and industry-leading cost savings.
LiDAR and radar are both used to determine the velocity, range, and angle of moving objects. Radar uses radio waves rather than light, while cameras rely on processing millions of pixels to form a 2D image.
Unlike radar, LiDAR can provide a full real-time 3D image of the world around it. Moreover, unlike cameras, LiDAR poses no PII (Personally Identifiable Information) risk and has a lower false-alarm rate. LiDAR creates an image of the target at the same time as it determines the object’s distance, thus providing a 3D view of the object and a precise calculation of the direction in which it is moving—something neither cameras nor radar can provide.
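Because each frame is a metric 3D measurement, an object's direction of motion can be estimated directly by comparing its position across consecutive frames. A minimal sketch of the idea, using bare centroids of an object's points (real perception software uses full tracking pipelines; the function and sample data here are illustrative):

```python
def motion_vector(cloud_t0, cloud_t1, dt_s):
    """Estimate an object's velocity (vx, vy, vz) in m/s from the centroids
    of its points in two consecutive LiDAR frames taken dt_s seconds apart.
    """
    def centroid(cloud):
        n = len(cloud)
        return tuple(sum(p[i] for p in cloud) / n for i in range(3))

    c0, c1 = centroid(cloud_t0), centroid(cloud_t1)
    return tuple((b - a) / dt_s for a, b in zip(c0, c1))


# An object whose points shift +1 m in x over 0.1 s is moving at 10 m/s.
frame0 = [(10.0, 0.0, 0.0), (10.5, 0.5, 0.0)]
frame1 = [(11.0, 0.0, 0.0), (11.5, 0.5, 0.0)]
print(motion_vector(frame0, frame1, 0.1))
```

A camera would need depth inference to recover the same vector, and radar would report range rate but with far coarser angular resolution.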
Furthermore, neither radar nor cameras can see accurately in the dark or through weather conditions such as rain or snow, which substantially limit their “sight” capabilities. LiDAR can also provide surface measurements and a precise resolution of objects within a certain range.
|Sensor Comparison Criteria|
|---|
|Field of View|
|Object Detection – Shape / Orientation|
|Object Detection – Static / Lateral Motion|
|Resolution with Range|
|Rain, Snow, Smog, Dust, Sand Storm|
|Ambient Light – Pitch Darkness / Bright Sunlight|
|Read Sign / Color|
|Intensity / Reflectivity|
For autonomous vehicles, the most robust, responsive, and safe sensor system is the full suite of LiDAR, radar, and video cameras, with LiDAR as the primary sensor.
LiDAR, unlike cameras and radar, can operate in any light condition, day or night, making it an essential technology for autonomous vehicles. Cameras, radar, and other technologies can help vehicles “see” their surroundings, but only to a certain extent. Once it is dark or raining, camera technology cannot offer the high-resolution images needed for cars to see accurately and to distinguish between humans and other objects. LiDAR remains the only sensor type that combines long-range accuracy with fine angular resolution, making it crucial for ensuring the safety of passengers and pedestrians.
LiDAR technology has broad use across countless applications in the following industries: mapping, smart cities, smart spaces, security, industrial automation, and transportation.