
LiDAR and 3D Mapping: Building Environment Maps for Robots

Understanding LiDAR and 3D Mapping — using LiDAR sensors and PCL to create accurate 3D maps for autonomous robots.

Nguyen Anh Tuan · November 15, 2025 · 4 min read

How LiDAR Works

LiDAR (Light Detection and Ranging) emits laser pulses and measures round-trip time to calculate distance to objects. A rotating LiDAR generates thousands of distance measurements per second, forming a point cloud — a precise 3D representation of surrounding geometry.
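The distance computation itself is simple: given the speed of light c and a measured round-trip time t, the range is c·t/2. A minimal sketch (the function name is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_to_distance(round_trip_time_s: float) -> float:
    """Convert a laser pulse's round-trip time to range in meters."""
    # The pulse travels to the target and back, so divide by 2
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A target ~10 m away returns the pulse after roughly 66.7 nanoseconds
print(tof_to_distance(66.7e-9))
```

At these timescales, picosecond-level timing error translates to millimeter-level range error, which is why LiDAR timing electronics matter so much.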

Compared to cameras:

  - LiDAR measures depth directly and accurately, and works in darkness or harsh lighting
  - Cameras provide dense color and texture at lower cost, but depth must be inferred
  - LiDAR point clouds are sparser and the sensors are typically more expensive

Common LiDAR Types

2D LiDAR

A 2D LiDAR scans in a single plane, making it ideal for indoor mobile robots. Each sweep produces a planar scan that maps walls and obstacles, which is enough for SLAM (Simultaneous Localization and Mapping) and obstacle avoidance.
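A 2D scan arrives as ranges at known beam angles; converting it to Cartesian points is one line of trigonometry. A sketch with NumPy (array and parameter names are illustrative, loosely following ROS LaserScan conventions):

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a planar LiDAR scan (polar form) to 2D Cartesian points."""
    angles = angle_min + angle_increment * np.arange(len(ranges))
    # Beam i hits a point at (r_i * cos(theta_i), r_i * sin(theta_i))
    return np.column_stack([ranges * np.cos(angles),
                            ranges * np.sin(angles)])

# Example: 4 beams spaced 30 degrees apart, all returning 2 m
points = scan_to_points(np.full(4, 2.0), 0.0, np.pi / 6)
print(points.shape)  # (4, 2)
```

In practice you would also drop invalid returns (infinite or zero ranges) before the conversion.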

3D LiDAR

Multiple laser channels sweep 360 degrees horizontally with additional vertical coverage, producing far more complete maps.

3D LiDAR is crucial for humanoid and manipulation robots, where knowing the height of ceilings and clutter matters.
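With full 3D points, queries like "what is above the robot's head" become simple height filters. A sketch with NumPy (the thresholds are illustrative):

```python
import numpy as np

def split_by_height(points, z_min=0.1, z_max=1.8):
    """Partition an N x 3 cloud into ground, workspace, and overhead points."""
    z = points[:, 2]
    ground = points[z < z_min]                        # near-floor returns
    workspace = points[(z >= z_min) & (z <= z_max)]   # manipulable region
    overhead = points[z > z_max]                      # ceiling and hanging clutter
    return ground, workspace, overhead

cloud = np.array([[0.0, 0.0, 0.05], [1.0, 0.0, 1.0], [0.0, 1.0, 2.5]])
ground, workspace, overhead = split_by_height(cloud)
print(len(ground), len(workspace), len(overhead))  # 1 1 1
```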

Point Cloud Representation

import numpy as np
import open3d as o3d

# Point cloud: N x 3 matrix
points = np.random.randn(100000, 3)  # 100K points, [x, y, z]

# Load from LiDAR
pcd = o3d.io.read_point_cloud("lidar_scan.pcd")

# Visualize
o3d.visualization.draw_geometries([pcd])

# Downsample (reduce points)
pcd_down = pcd.voxel_down_sample(voxel_size=0.05)

# Normal estimation (for surface understanding)
pcd_down.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(
    radius=0.1, max_nn=30))

SLAM: Localization + Mapping

SLAM (Simultaneous Localization and Mapping) fuses LiDAR measurements to:

  1. Estimate robot position
  2. Build a consistent map

Classic Approach: ICP (Iterative Closest Point)

import numpy as np
import open3d as o3d

# Load two sequential scans
source = o3d.io.read_point_cloud("scan_t0.pcd")
target = o3d.io.read_point_cloud("scan_t1.pcd")

# ICP: find transformation aligning source to target
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.1,
    init=np.eye(4),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(
        max_iteration=50)
)

# Estimated transformation (odometry update)
T = reg_p2p.transformation
print(f"Rotation:\n{T[:3,:3]}")
print(f"Translation: {T[:3,3]}")
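Each ICP result is a relative transform between consecutive scans; composing them yields odometry, i.e., the robot's pose in the world frame. A pure-NumPy sketch (the transform values are illustrative):

```python
import numpy as np

def chain_poses(relative_transforms):
    """Compose per-scan 4x4 transforms into a trajectory of world-frame poses."""
    T_world = np.eye(4)
    trajectory = [T_world.copy()]
    for T_step in relative_transforms:
        # New pose = previous pose composed with the relative motion
        T_world = T_world @ T_step
        trajectory.append(T_world.copy())
    return trajectory

# Two steps of 1 m forward along x
step = np.eye(4)
step[0, 3] = 1.0
traj = chain_poses([step, step])
print(traj[-1][:3, 3])  # final position: 2 m along x
```

This composition is exactly where drift comes from: each ICP transform carries a small error, and the errors multiply down the chain.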

Modern Approach: Semantic SLAM

Add semantic understanding to make maps persistent:

import numpy as np

class SemanticSLAM:
    def __init__(self):
        self.global_map = None  # Accumulated point cloud
        self.semantic_labels = {}  # Object categories at 3D locations
    
    def add_scan(self, lidar_points, semantic_segmentation, T_odometry):
        """Add new LiDAR scan to global map."""
        # Transform to world frame
        points_world = (T_odometry[:3, :3] @ lidar_points.T + 
                        T_odometry[:3, 3:4]).T
        
        # Assign semantic labels
        for point, label in zip(points_world, semantic_segmentation):
            self.semantic_labels[tuple(point)] = label
        
        # Accumulate
        if self.global_map is None:
            self.global_map = points_world
        else:
            self.global_map = np.vstack([self.global_map, points_world])
    
    def get_objects(self, category):
        """Query all points of a specific type."""
        return [p for p, label in self.semantic_labels.items() 
                if label == category]

Applications

1. Obstacle Detection for Mobile Robots

def detect_obstacles(pcd, ground_threshold=0.05):
    """Remove the ground plane, return obstacle points."""
    # RANSAC plane fitting: the dominant plane is assumed to be the ground
    plane_model, inliers = pcd.segment_plane(distance_threshold=ground_threshold,
                                             ransac_n=3, num_iterations=100)
    
    # Inliers lie on the ground plane; everything else is an obstacle
    obstacle_cloud = pcd.select_by_index(inliers, invert=True)
    
    return obstacle_cloud
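Once ground points are removed, the remaining points can be grouped into individual obstacles. Open3D offers `cluster_dbscan` for this; the same idea can be sketched in pure NumPy as a naive Euclidean flood fill (the radius is illustrative, and the O(N^2) loop is for clarity, not speed):

```python
import numpy as np

def euclidean_cluster(points, radius=0.3):
    """Group points into clusters: two points share a cluster if a chain
    of neighbors closer than `radius` connects them."""
    n = len(points)
    labels = np.full(n, -1)  # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            # Expand the cluster to all unassigned neighbors of point i
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((dists < radius) & (labels == -1)):
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated pairs of points form two clusters
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0], [5.1, 0, 0]])
print(euclidean_cluster(pts))  # [0 0 1 1]
```

Each resulting cluster can then be fitted with a bounding box and handed to the planner as a discrete obstacle.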

2. 3D Shape Reconstruction

def reconstruct_mesh(pcd):
    """Create mesh from point cloud."""
    # Poisson reconstruction
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    
    return mesh

LiDAR + IMU Fusion

Combine LiDAR SLAM with IMU for better odometry:

class LiDAR_IMU_Fusion:
    def __init__(self):
        # LiDAR_SLAM and KalmanFilter are placeholders for your
        # scan-matching front-end and IMU filter of choice
        self.lidar_slam = LiDAR_SLAM()
        self.imu_filter = KalmanFilter()
    
    def step(self, lidar_points, imu_accel, imu_gyro):
        # Low-rate but accurate pose from LiDAR scan matching
        T_lidar = self.lidar_slam.update(lidar_points)
        
        # High-rate motion estimate from IMU integration
        v_imu = self.imu_filter.predict(imu_accel, imu_gyro)
        
        # Fuse: LiDAR anchors accuracy, IMU fills in between scans
        T_fused = fuse(T_lidar, v_imu)
        
        return T_fused
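The `fuse` step above is a placeholder; a minimal version is a complementary filter that trusts the slow-but-accurate LiDAR fix while letting the IMU prediction carry the estimate between scans. A sketch on positions only (the blend weight `alpha` is illustrative; a production system would fuse full poses in an EKF):

```python
import numpy as np

def complementary_fuse(p_lidar, p_imu, alpha=0.9):
    """Blend a LiDAR position fix with an IMU-predicted position.
    alpha near 1 trusts LiDAR; between scans only the IMU term updates."""
    return alpha * np.asarray(p_lidar) + (1.0 - alpha) * np.asarray(p_imu)

# LiDAR says x = 1.0 m, IMU integration drifted to x = 1.2 m
fused = complementary_fuse([1.0, 0.0, 0.0], [1.2, 0.0, 0.0])
print(fused)
```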

Common Issues and Solutions

Problem: Map drift over long distances. Cause: odometry error accumulates. Solution: loop closure detection.

Problem: Failure in featureless corridors. Cause: no distinctive landmarks. Solution: semantic and topological maps.

Problem: Slow on resource-limited hardware. Cause: processing the full point cloud. Solution: downsampling and voxel grids.
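Loop closure begins with detecting revisits: keyframes whose estimated positions are close in space but far apart in time are candidates, which are then verified with scan matching (e.g., checking ICP fitness). A sketch of the candidate search (the radius and index-gap thresholds are illustrative):

```python
import numpy as np

def loop_closure_candidates(keyframe_positions, radius=2.0, min_index_gap=30):
    """Find pairs (i, j) of keyframes that are spatially close but
    temporally distant, i.e., likely revisits to verify with ICP."""
    P = np.asarray(keyframe_positions)
    candidates = []
    for i in range(len(P)):
        for j in range(i + min_index_gap, len(P)):
            if np.linalg.norm(P[i] - P[j]) < radius:
                candidates.append((i, j))
    return candidates

# Robot drives 40 m along x, then returns near its starting point
path = [[float(i), 0.0] for i in range(40)] + [[0.5, 0.0]]
print(loop_closure_candidates(path))  # [(0, 40), (1, 40), (2, 40)]
```

A verified candidate becomes a constraint in a pose graph, and re-optimizing the graph pulls the drifted trajectory back into consistency.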

Libraries

  - Open3D: Python-friendly point cloud processing and registration (used in the examples above)
  - PCL (Point Cloud Library): the standard C++ library for filtering, segmentation, and registration

Takeaway

LiDAR + SLAM is the foundation for autonomous mobile robots. Master point cloud processing and loop closure, and you can build production navigation systems.

