How LiDAR Works
LiDAR (Light Detection and Ranging) emits laser pulses and measures the round-trip time to compute distance to objects. A rotating LiDAR generates tens of thousands to hundreds of thousands of distance measurements per second, forming a point cloud: a precise 3D representation of the surrounding geometry.
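The range computation itself is one line: half the round-trip time multiplied by the speed of light. A minimal sketch (function name is illustrative):

```python
# Hypothetical example: converting a measured round-trip time to range.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_s):
    """Range = c * t / 2 (the pulse travels out and back)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~66.7 ns corresponds to roughly 10 m.
print(tof_to_range(66.7e-9))
```

The factor of two is the usual source of off-by-2x bugs when reading raw timing data from a sensor.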
Compared to cameras, LiDAR:
- Works in darkness and across lighting conditions
- Measures distance directly, with cm-level accuracy
- Needs no visual texture (unlike stereo depth matching)
Common LiDAR Types
2D LiDAR
Scans in a single plane, ideal for indoor mobile robots:
- SICK S300: Industrial safety scanner, $300-500
- RPLiDAR A2: Hobby grade, $60
- Neato XV: Vacuum robot sensor, cheap
2D LiDAR produces a 2D scan that maps walls and obstacles. Perfect for SLAM (Simultaneous Localization and Mapping) and obstacle avoidance.
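A 2D scan arrives as (angle, range) pairs; converting them to Cartesian coordinates is the first step of almost any mapping pipeline. A minimal sketch (function name is illustrative):

```python
import numpy as np

def scan_to_points(angles_rad, ranges_m):
    """Convert a 2D LiDAR scan from polar (angle, range) to Cartesian (x, y)."""
    angles = np.asarray(angles_rad)
    ranges = np.asarray(ranges_m)
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    return np.stack([x, y], axis=1)  # N x 2 array of points

# A beam straight ahead at 1 m and one to the left at 2 m:
pts = scan_to_points([0.0, np.pi / 2], [1.0, 2.0])
print(pts)  # approximately [[1, 0], [0, 2]]
```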
3D LiDAR
Multiple laser channels stacked vertically and spun through 360° horizontally, producing more complete maps:
- Velodyne Puck: $4K, 16 channels
- Livox Mid-360: $200, affordable 3D
- RealSense L515: $400, indoor solid-state LiDAR depth camera
3D LiDAR is crucial for humanoid and manipulation robots, where knowing the heights of ceilings, shelves, and clutter matters.
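Each 3D return is naturally described by azimuth, elevation (fixed per channel), and range. A hedged sketch of the spherical-to-Cartesian conversion (function name is illustrative):

```python
import numpy as np

def spherical_to_xyz(azimuth, elevation, r):
    """One 3D LiDAR return: azimuth/elevation in radians, range in meters."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.array([x, y, z])

# A return at 0 deg azimuth, +15 deg elevation (top channel of a
# typical 16-channel unit), 10 m range:
print(spherical_to_xyz(0.0, np.deg2rad(15.0), 10.0))
```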
Point Cloud Representation
```python
import numpy as np
import open3d as o3d

# A point cloud is just an N x 3 matrix of [x, y, z] coordinates
points = np.random.randn(100000, 3)  # 100K synthetic points

# Load from a LiDAR scan file
pcd = o3d.io.read_point_cloud("lidar_scan.pcd")

# Visualize
o3d.visualization.draw_geometries([pcd])

# Downsample (reduce point count with a voxel grid)
pcd_down = pcd.voxel_down_sample(voxel_size=0.05)

# Normal estimation (for surface understanding)
pcd_down.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
```
SLAM: Localization + Mapping
SLAM fuses sequential LiDAR scans to simultaneously:
- Estimate robot position
- Build a consistent map
Classic Approach: ICP (Iterative Closest Point)
```python
import numpy as np
import open3d as o3d

# Load two sequential scans
source = o3d.io.read_point_cloud("scan_t0.pcd")
target = o3d.io.read_point_cloud("scan_t1.pcd")

# ICP: find the rigid transformation aligning source to target
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.1,
    init=np.eye(4),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(
        max_iteration=50))

# Estimated transformation (odometry update)
T = reg_p2p.transformation
print(f"Rotation:\n{T[:3, :3]}")
print(f"Translation: {T[:3, 3]}")
```
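Each ICP result is a relative transform between consecutive scans; composing them by matrix multiplication yields the robot's trajectory. A minimal sketch (helper name is illustrative). Note that errors compound at each step, which is exactly the map drift that loop closure corrects:

```python
import numpy as np

def accumulate_poses(relative_transforms):
    """Chain per-scan ICP results (4x4 matrices) into absolute poses."""
    pose = np.eye(4)
    trajectory = [pose]
    for T_rel in relative_transforms:
        pose = pose @ T_rel  # world pose = previous pose composed with step
        trajectory.append(pose)
    return trajectory

# Two identical 1 m forward steps end 2 m from the origin.
step = np.eye(4)
step[0, 3] = 1.0
traj = accumulate_poses([step, step])
print(traj[-1][:3, 3])  # final position
```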
Modern Approach: Semantic SLAM
Add semantic understanding to make maps persistent:
```python
import numpy as np

class SemanticSLAM:
    def __init__(self):
        self.global_map = None     # Accumulated point cloud (N x 3)
        self.semantic_labels = {}  # Object category per 3D point

    def add_scan(self, lidar_points, semantic_segmentation, T_odometry):
        """Add a new labelled LiDAR scan to the global map."""
        # Transform points to the world frame
        points_world = (T_odometry[:3, :3] @ lidar_points.T +
                        T_odometry[:3, 3:4]).T

        # Assign a semantic label to each point (keying on raw coordinates
        # is illustrative; a real system would use a voxel hash or
        # per-object instances)
        for point, label in zip(points_world, semantic_segmentation):
            self.semantic_labels[tuple(point)] = label

        # Accumulate into the global map
        if self.global_map is None:
            self.global_map = points_world
        else:
            self.global_map = np.vstack([self.global_map, points_world])

    def get_objects(self, category):
        """Query all points with a given label."""
        return [p for p, label in self.semantic_labels.items()
                if label == category]
```
Applications
1. Obstacle Detection for Mobile Robots
```python
def detect_obstacles(pcd, ground_threshold=0.05):
    """Segment the ground plane with RANSAC; return obstacle points."""
    plane_model, inliers = pcd.segment_plane(
        distance_threshold=ground_threshold, ransac_n=3, num_iterations=100)
    # Inliers = ground, everything else = obstacles
    obstacle_cloud = pcd.select_by_index(inliers, invert=True)
    return obstacle_cloud
```
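When the robot's pose is known and the floor is flat, a simple height threshold is a cheaper alternative to RANSAC. A plain-NumPy sketch (function name and defaults are illustrative):

```python
import numpy as np

def remove_ground_by_height(points, ground_z=0.0, threshold=0.1):
    """Cheap ground removal for a level sensor: drop every point
    within `threshold` meters of the known ground height."""
    mask = points[:, 2] > ground_z + threshold
    return points[mask]

# Three points: one on the floor, one obstacle, one slightly below grade.
pts = np.array([[1.0, 0.0, 0.02], [2.0, 0.0, 0.5], [3.0, 0.0, -0.01]])
print(remove_ground_by_height(pts))  # only the 0.5 m obstacle survives
```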
2. 3D Shape Reconstruction
```python
def reconstruct_mesh(pcd):
    """Create a mesh from a point cloud (requires estimated normals)."""
    # Poisson surface reconstruction
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```
LiDAR + IMU Fusion
Combine LiDAR SLAM with IMU for better odometry:
```python
class LiDAR_IMU_Fusion:
    """Conceptual sketch: LiDAR_SLAM, KalmanFilter, and fuse() are
    placeholders, not real library classes."""

    def __init__(self):
        self.lidar_slam = LiDAR_SLAM()    # scan-matching odometry
        self.imu_filter = KalmanFilter()  # high-rate inertial filter

    def step(self, lidar_points, imu_accel, imu_gyro):
        # Accurate but low-rate pose from LiDAR scan matching
        T_lidar = self.lidar_slam.update(lidar_points)
        # High-frequency motion estimate from the IMU
        v_imu = self.imu_filter.predict(imu_accel, imu_gyro)
        # Fuse: LiDAR for accuracy, IMU for update rate
        T_fused = fuse(T_lidar, v_imu)
        return T_fused
```
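The fusion idea reduces to one dimension: trust the high-rate IMU estimate between scans, and pull it toward the accurate LiDAR estimate whenever a scan arrives. A toy complementary-filter sketch (the blend weight `alpha` is an assumed value):

```python
def complementary_fuse(x_imu, x_lidar, alpha=0.98):
    """Blend a drifting high-rate estimate with an accurate low-rate one.
    Higher alpha trusts the IMU more between corrections."""
    return alpha * x_imu + (1 - alpha) * x_lidar

# IMU integration has drifted to 10.5 m; the LiDAR scan says 10.0 m.
print(complementary_fuse(10.5, 10.0))
```

Real systems fuse full 6-DOF poses with a Kalman or factor-graph backend, but the weighting intuition is the same.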
Common Issues and Solutions
| Problem | Cause | Solution |
|---|---|---|
| Map drift over long distance | Odometry accumulates error | Loop closure detection |
| Failure in featureless corridors | No distinctive landmarks | Semantic + topological maps |
| Slow on resource-limited hardware | Full point cloud processing | Downsampling + voxel grids |
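The downsampling fix in the last row does not require Open3D; a plain-NumPy sketch of voxel-grid downsampling (function name is illustrative, mirroring what `voxel_down_sample` does):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one centroid per occupied voxel of an N x 3 point cloud."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # guard against NumPy version differences
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # sum points per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]      # centroid per voxel

# Two points fall in the same 5 cm voxel and merge into their centroid.
pts = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(voxel_downsample(pts, 0.05).shape)  # (2, 3)
```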
Libraries
- Open3D: Easy 3D processing in Python
- PCL (Point Cloud Library): C++ for production code
- ROS: `ros-humble-perception-pcl` for integration
Takeaway
LiDAR + SLAM is the foundation for autonomous mobile robots. Master point cloud processing and loop closure, and you can build production navigation systems.