
ROS 2 A to Z (Part 3): Nav2 — Your First Autonomous Robot

Configure the Nav2 stack for autonomous SLAM mapping and automatic movement — from simulation to reality.

Nguyen Anh Tuan · February 11, 2026 · 10 min read

Nav2 — Navigation Stack for Autonomous Robots

If you are building an AMR (Autonomous Mobile Robot), Nav2 is the most critical stack to master. It provides the complete pipeline: SLAM mapping, AMCL localization, path planning, obstacle avoidance, and wheel speed control. In this article, we will configure Nav2 from scratch, run a simulation in Gazebo, and send navigation goals so the robot moves autonomously.

This article requires you to understand Part 1: Setup and First Node and Part 2: Topics, Services and Actions. If you want to understand SLAM theory first, read SLAM and Navigation for Autonomous Robots.

Autonomous Mobile Robot moving in warehouse

Install Nav2 and Dependencies

# Nav2 full stack
sudo apt install -y \
  ros-humble-navigation2 \
  ros-humble-nav2-bringup \
  ros-humble-slam-toolbox \
  ros-humble-turtlebot3-gazebo \
  ros-humble-turtlebot3-navigation2

# Gazebo simulation
sudo apt install -y ros-humble-gazebo-ros-pkgs

# Set TurtleBot3 model (Waffle Pi — has LiDAR + camera)
echo "export TURTLEBOT3_MODEL=waffle_pi" >> ~/.bashrc
echo "export GAZEBO_MODEL_PATH=\$GAZEBO_MODEL_PATH:/opt/ros/humble/share/turtlebot3_gazebo/models" >> ~/.bashrc
source ~/.bashrc

Nav2 Architecture

Nav2 is a modular system with multiple nodes running simultaneously. Understanding the architecture helps you know what to configure:

                    ┌─────────────────┐
                    │   BT Navigator   │  ← Behavior Tree orchestrates
                    └────────┬────────┘
                             │
              ┌──────────────┼──────────────┐
              │              │              │
    ┌─────────▼──────┐ ┌────▼────┐ ┌───────▼───────┐
    │    Planner     │ │Controller│ │   Recovery    │
    │  (NavFn/Smac)  │ │ (DWB)   │ │  (Spin/Wait)  │
    └─────────┬──────┘ └────┬────┘ └───────────────┘
              │              │
    ┌─────────▼──────────────▼──────┐
    │        Costmap 2D             │
    │  (Global + Local costmaps)    │
    └─────────┬─────────────────────┘
              │
    ┌─────────▼──────────────────────┐
    │   Sensor input (LiDAR, camera) │
    │   + TF transforms + Map        │
    └────────────────────────────────┘

Main Components:

Component     | Role                                    | Default Plugin
------------- | --------------------------------------- | ---------------------
BT Navigator  | Orchestrate behavior with Behavior Tree | NavigateToPose
Planner       | Calculate global path from A to B       | NavfnPlanner
Controller    | Track path, avoid local obstacles       | DWBLocalPlanner
Costmap       | Cost map for planner + controller       | Costmap2D
Recovery      | Handle when stuck (spin, backup, wait)  | Spin, BackUp, Wait
AMCL          | Localize robot on an existing map       | AMCL particle filter

Step 1: Run Gazebo Simulation

Always test in simulation before running on real hardware:

# Terminal 1: Launch Gazebo world
ros2 launch turtlebot3_gazebo turtlebot3_world.launch.py

Gazebo opens with TurtleBot3 in a world with walls and obstacles. Check the topics:

# Terminal 2: List topics
ros2 topic list | grep -E "(scan|odom|cmd_vel)"
# /scan           ← LiDAR data
# /odom           ← Odometry
# /cmd_vel        ← Velocity commands

# Test LiDAR
ros2 topic echo /scan --once
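Beyond the CLI, you will often post-process /scan in code. The filtering below mirrors what a subscriber callback typically does with a sensor_msgs/LaserScan message (drop inf/NaN and out-of-range readings); it is shown as a plain Python sketch on hand-made data so it runs without a robot:

```python
import math

def closest_obstacle(ranges, range_min, range_max):
    """Closest valid LiDAR reading: drop inf/NaN and out-of-range values,
    the same filtering you would apply to LaserScan.ranges."""
    valid = [r for r in ranges
             if not math.isinf(r) and not math.isnan(r)
             and range_min <= r <= range_max]
    return min(valid) if valid else None

# Fake scan: a wall at 0.8 m plus some invalid readings
fake_ranges = [float('inf'), 2.4, 0.8, float('nan'), 12.5, 1.1]
print(closest_obstacle(fake_ranges, range_min=0.1, range_max=12.0))  # → 0.8
```

The same function works unchanged on a real message: pass `msg.ranges`, `msg.range_min`, `msg.range_max` from the callback.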

Step 2: Map with SLAM Toolbox

SLAM (Simultaneous Localization and Mapping) lets the robot build a map of an unknown environment while tracking its own pose within it.

# Terminal 2: Launch SLAM Toolbox
ros2 launch slam_toolbox online_async_launch.py \
  slam_params_file:=/opt/ros/humble/share/slam_toolbox/config/mapper_params_online_async.yaml \
  use_sim_time:=true

Configure SLAM Toolbox

Create a custom configuration:

# ~/ros2_ws/src/my_first_robot/config/slam_params.yaml
slam_toolbox:
  ros__parameters:
    # Solver params
    solver_plugin: solver_plugins::CeresSolver
    ceres_linear_solver: SPARSE_NORMAL_CHOLESKY
    ceres_preconditioner: SCHUR_JACOBI
    ceres_trust_strategy: LEVENBERG_MARQUARDT
    ceres_dogleg_type: TRADITIONAL_DOGLEG
    ceres_loss_function: None

    # Matcher params (critical for accuracy)
    resolution: 0.05                # 5cm/pixel — balance detail and memory
    max_laser_range: 12.0           # LiDAR max range
    minimum_travel_distance: 0.3    # Move at least 30cm before map update
    minimum_travel_heading: 0.3     # Rotate at least 0.3 rad before update

    # Map params
    map_update_interval: 3.0        # Update map every 3 seconds
    min_laser_range: 0.1

    # Loop closure
    do_loop_closing: true
    loop_match_minimum_chain_size: 10

    # TF frames
    odom_frame: odom
    map_frame: map
    base_frame: base_footprint
    scan_topic: /scan

    use_sim_time: true
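To avoid retyping the long slam_toolbox command, you can wrap it in a launch file that points at this config. A sketch, assuming your package is named my_first_robot and installs config/slam_params.yaml into its share directory:

```python
# ~/ros2_ws/src/my_first_robot/launch/slam.launch.py
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Resolve the installed path of the custom SLAM config
    params = os.path.join(
        get_package_share_directory('my_first_robot'),
        'config', 'slam_params.yaml')

    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            parameters=[params],
            output='screen',
        ),
    ])
```

Then `ros2 launch my_first_robot slam.launch.py` replaces the manual command.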

Drive Robot to Scan Map

# Terminal 3: Control robot with keyboard
ros2 run turtlebot3_teleop teleop_keyboard

Drive the robot around until the whole area is covered. Tips for good maps:

  - Move slowly (under ~0.3 m/s) and rotate gently; fast spins degrade scan matching
  - Cover every area and return to where you started so loop closure can correct drift
  - Prefer routes with distinctive geometry; long featureless corridors are hard to match

Visualize Map in RViz

# Terminal 4: Open RViz
ros2 run rviz2 rviz2

In RViz, add displays:

  1. Map — topic /map
  2. LaserScan — topic /scan
  3. TF — see frame tree
  4. RobotModel — see robot position

Save Map

When the map looks good:

# Save map (creates 2 files: .pgm and .yaml)
ros2 run nav2_map_server map_saver_cli -f ~/ros2_ws/maps/my_warehouse

# Result:
# ~/ros2_ws/maps/my_warehouse.pgm    ← Map image
# ~/ros2_ws/maps/my_warehouse.yaml   ← Metadata

Check the yaml file:

# ~/ros2_ws/maps/my_warehouse.yaml
image: my_warehouse.pgm
mode: trinary
resolution: 0.05          # 5cm/pixel
origin: [-5.0, -5.0, 0.0] # Lower-left corner of map
negate: 0
occupied_thresh: 0.65
free_thresh: 0.25
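The thresholds decide how each grayscale pixel in the .pgm is classified. With negate: 0, the occupancy probability is p = (255 - pixel) / 255; p above occupied_thresh is an obstacle, p below free_thresh is free space, and anything in between is unknown. A quick sanity check (classify_pixel is just an illustration, not a map_server API):

```python
def classify_pixel(pixel, occupied_thresh=0.65, free_thresh=0.25, negate=0):
    """Map a 0-255 grayscale pixel to occupancy, the way trinary mode
    interprets it (negate: 0 means darker = more occupied)."""
    p = (255 - pixel) / 255 if negate == 0 else pixel / 255
    if p > occupied_thresh:
        return 'occupied'
    if p < free_thresh:
        return 'free'
    return 'unknown'

print(classify_pixel(0))    # black pixel → occupied
print(classify_pixel(254))  # white pixel → free
print(classify_pixel(128))  # gray pixel  → unknown
```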

SLAM map created by robot in Gazebo

Step 3: Configure Nav2

Nav2 Main Config File

This is the most important file — it configures the entire navigation stack:

# ~/ros2_ws/src/my_first_robot/config/nav2_params.yaml
amcl:
  ros__parameters:
    use_sim_time: true
    alpha1: 0.2           # Rotation noise from rotation
    alpha2: 0.2           # Rotation noise from translation
    alpha3: 0.2           # Translation noise from translation
    alpha4: 0.2           # Translation noise from rotation
    base_frame_id: "base_footprint"
    global_frame_id: "map"
    odom_frame_id: "odom"
    robot_model_type: "nav2_amcl::DifferentialMotionModel"
    max_particles: 2000
    min_particles: 500
    update_min_a: 0.2     # Min rotation to trigger update
    update_min_d: 0.25    # Min translation to trigger update
    laser_model_type: "likelihood_field"
    laser_max_range: 12.0
    scan_topic: /scan

bt_navigator:
  ros__parameters:
    use_sim_time: true
    global_frame: map
    robot_base_frame: base_footprint
    odom_topic: /odom
    # Default Behavior Tree: if unset, Nav2 falls back to its built-in
    # navigate_to_pose_w_replanning_and_recovery.xml. Note that Humble renamed
    # the parameter from default_bt_xml_filename to default_nav_to_pose_bt_xml.
    plugin_lib_names:
      - nav2_compute_path_to_pose_action_bt_node
      - nav2_follow_path_action_bt_node
      - nav2_back_up_action_bt_node
      - nav2_spin_action_bt_node
      - nav2_wait_action_bt_node
      - nav2_clear_costmap_service_bt_node
      - nav2_is_stuck_condition_bt_node
      - nav2_goal_reached_condition_bt_node
      - nav2_rate_controller_bt_node
      - nav2_recovery_node_bt_node
      - nav2_pipeline_sequence_bt_node

controller_server:
  ros__parameters:
    use_sim_time: true
    controller_frequency: 20.0
    min_x_velocity_threshold: 0.001
    min_theta_velocity_threshold: 0.001
    controller_plugins: ["FollowPath"]
    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
      debug_trajectory_details: true
      min_vel_x: 0.0
      min_vel_y: 0.0
      max_vel_x: 0.26       # Max velocity (m/s) — reduce if robot slow
      max_vel_y: 0.0
      max_vel_theta: 1.0    # Max turning velocity (rad/s)
      min_speed_xy: 0.0
      max_speed_xy: 0.26
      min_speed_theta: 0.0
      acc_lim_x: 2.5
      acc_lim_y: 0.0
      acc_lim_theta: 3.2
      decel_lim_x: -2.5
      decel_lim_theta: -3.2
      vx_samples: 20
      vy_samples: 5
      vtheta_samples: 20
      sim_time: 1.7
      linear_granularity: 0.05
      angular_granularity: 0.025
      xy_goal_tolerance: 0.25
      trans_stopped_velocity: 0.25
      critics: ["RotateToGoal", "Oscillation", "BaseObstacle", "GoalAlign", "PathAlign", "PathDist", "GoalDist"]

planner_server:
  ros__parameters:
    use_sim_time: true
    planner_plugins: ["GridBased"]
    GridBased:
      plugin: "nav2_navfn_planner/NavfnPlanner"
      tolerance: 0.5
      use_astar: false       # false = Dijkstra, true = A*
      allow_unknown: true

local_costmap:
  local_costmap:
    ros__parameters:
      use_sim_time: true
      update_frequency: 5.0
      publish_frequency: 2.0
      global_frame: odom
      robot_base_frame: base_footprint
      rolling_window: true
      width: 3               # 3m x 3m local window
      height: 3
      resolution: 0.05
      robot_radius: 0.22     # Robot radius (m) — measure accurately!
      plugins: ["voxel_layer", "inflation_layer"]
      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: true
        observation_sources: scan
        scan:
          topic: /scan
          max_obstacle_height: 2.0
          clearing: true
          marking: true
          raytrace_max_range: 3.0
          obstacle_max_range: 2.5
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 3.0
        inflation_radius: 0.55   # Safe distance buffer

global_costmap:
  global_costmap:
    ros__parameters:
      use_sim_time: true
      update_frequency: 1.0
      publish_frequency: 1.0
      global_frame: map
      robot_base_frame: base_footprint
      robot_radius: 0.22
      resolution: 0.05
      track_unknown_space: true
      plugins: ["static_layer", "obstacle_layer", "inflation_layer"]
      static_layer:
        plugin: "nav2_costmap_2d::StaticLayer"
        map_subscribe_transient_local: true
      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        enabled: true
        observation_sources: scan
        scan:
          topic: /scan
          max_obstacle_height: 2.0
          clearing: true
          marking: true
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 3.0
        inflation_radius: 0.55

behavior_server:  # named recoveries_server before Humble
  ros__parameters:
    use_sim_time: true
    costmap_topic: local_costmap/costmap_raw
    footprint_topic: local_costmap/published_footprint
    cycle_frequency: 10.0
    behavior_plugins: ["spin", "backup", "wait"]
    spin:
      plugin: "nav2_behaviors/Spin"
    backup:
      plugin: "nav2_behaviors/BackUp"
    wait:
      plugin: "nav2_behaviors/Wait"

Explaining Key Parameters

  - robot_radius: used to inflate obstacles around the footprint; measure the real robot, since an underestimate causes collisions
  - inflation_radius / cost_scaling_factor: how far and how steeply cost decays around obstacles; a larger radius keeps the robot further from walls
  - xy_goal_tolerance: how close (in metres) counts as arrived; too small and the robot oscillates at the goal
  - sim_time: how far ahead (in seconds) DWB forward-simulates each candidate velocity
  - controller_frequency: how often (Hz) new velocity commands are computed
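The interaction of cost_scaling_factor and inflation_radius is worth seeing numerically. Inside the inflation band, the inflation layer decays cost exponentially with distance from the nearest obstacle, roughly cost = 253 * exp(-cost_scaling_factor * (distance - inscribed_radius)). A small sketch with the values from the config above:

```python
import math

def inflation_cost(distance, inscribed_radius=0.22, cost_scaling_factor=3.0,
                   inflation_radius=0.55):
    """Approximate cost (0-253) the inflation layer assigns at `distance`
    metres from the nearest obstacle cell, per its exponential decay model."""
    if distance <= inscribed_radius:
        return 253          # inside the robot footprint: guaranteed collision
    if distance > inflation_radius:
        return 0            # beyond the inflation band: free
    return int(253 * math.exp(-cost_scaling_factor * (distance - inscribed_radius)))

for d in (0.20, 0.30, 0.45, 0.60):
    print(f'{d:.2f} m -> cost {inflation_cost(d)}')
```

Raising cost_scaling_factor makes cost drop off faster (paths hug walls more); raising inflation_radius widens the band the planner tries to avoid.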

Step 4: Launch Navigation

# Terminal 1: Gazebo
ros2 launch turtlebot3_gazebo turtlebot3_world.launch.py

# Terminal 2: Nav2 with saved map
ros2 launch nav2_bringup bringup_launch.py \
  use_sim_time:=true \
  map:=$HOME/ros2_ws/maps/my_warehouse.yaml \
  params_file:=$HOME/ros2_ws/src/my_first_robot/config/nav2_params.yaml

# Terminal 3: RViz with Nav2 config
ros2 launch nav2_bringup rviz_launch.py

Set Initial Pose in RViz

When first launched, AMCL doesn't know where the robot is. In RViz:

  1. Click "2D Pose Estimate" on the toolbar
  2. Click the robot's position on the map and drag to set its orientation
  3. Wait for the particle cloud to converge (a few seconds)

Send Navigation Goals

Method 1: RViz GUI

  1. Click "Nav2 Goal" on the toolbar
  2. Click a destination on the map
  3. The robot plans a path and moves automatically

Method 2: Python Script

# ~/ros2_ws/src/my_first_robot/my_first_robot/send_nav_goal.py
import rclpy
from rclpy.duration import Duration
from rclpy.time import Time
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def main():
    rclpy.init()
    navigator = BasicNavigator()

    # Set initial pose
    initial_pose = PoseStamped()
    initial_pose.header.frame_id = 'map'
    initial_pose.header.stamp = navigator.get_clock().now().to_msg()
    initial_pose.pose.position.x = 0.0
    initial_pose.pose.position.y = 0.0
    initial_pose.pose.orientation.w = 1.0
    navigator.setInitialPose(initial_pose)

    # Wait for Nav2 to become active
    navigator.waitUntilNav2Active()

    # Create goal
    goal_pose = PoseStamped()
    goal_pose.header.frame_id = 'map'
    goal_pose.header.stamp = navigator.get_clock().now().to_msg()
    goal_pose.pose.position.x = 2.0
    goal_pose.pose.position.y = -1.5
    goal_pose.pose.orientation.w = 1.0

    # Send goal
    navigator.goToPose(goal_pose)

    # Monitor progress
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()
        if feedback:
            print(f'Distance remaining: {feedback.distance_remaining:.2f}m')

        # Cancel if navigation takes longer than 60 seconds
        # (convert the message stamp to rclpy Time before subtracting)
        elapsed = navigator.get_clock().now() - Time.from_msg(goal_pose.header.stamp)
        if elapsed > Duration(seconds=60):
            navigator.cancelTask()
            print('Navigation timeout!')
            break

    result = navigator.getResult()
    print(f'Navigation result: {result}')

    rclpy.shutdown()


if __name__ == '__main__':
    main()

Waypoint Following — Navigate Through Multiple Points

# Send multiple waypoints sequentially
# (continues the script above: `navigator` is the BasicNavigator instance)
waypoints = []

for pt in [(1.0, 0.0), (2.0, -1.0), (3.0, 0.5), (0.0, 0.0)]:
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = pt[0]
    pose.pose.position.y = pt[1]
    pose.pose.orientation.w = 1.0
    waypoints.append(pose)

# Robot navigates through each point
navigator.followWaypoints(waypoints)

while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()
    if feedback:
        current_wp = feedback.current_waypoint
        print(f'Heading to waypoint {current_wp + 1}/{len(waypoints)}')
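Every pose above uses orientation.w = 1.0, which means a heading of 0 rad. To give a goal or waypoint a real heading, convert a yaw angle to a quaternion; for a planar robot only the z and w components are non-zero:

```python
import math

def yaw_to_quaternion(yaw):
    """Planar yaw (rad) -> (x, y, z, w) for PoseStamped.pose.orientation."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Face +90 degrees (along +y in the map frame)
x, y, z, w = yaw_to_quaternion(math.pi / 2)
print(round(z, 3), round(w, 3))  # → 0.707 0.707
```

Assign the result to `pose.pose.orientation.z` and `pose.pose.orientation.w` before appending the waypoint.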

RViz showing path planning and Nav2 costmap

Behavior Trees — Nav2's Brain

Nav2 uses Behavior Trees (BTs) instead of state machines to orchestrate behavior. BTs are more flexible than state machines and easier to debug and modify:

<!-- Simplified Nav2 default BT -->
<root main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <RecoveryNode number_of_retries="6">
      <PipelineSequence>
        <!-- 1. Calculate global path -->
        <RateController hz="1.0">
          <ComputePathToPose goal="{goal}" path="{path}"
                             planner_id="GridBased"/>
        </RateController>
        <!-- 2. Track path -->
        <FollowPath path="{path}" controller_id="FollowPath"/>
      </PipelineSequence>

      <!-- 3. Recovery if stuck -->
      <SequenceStar>
        <ClearEntireCostmap name="ClearGlobalCostmap"
                            service_name="/global_costmap/clear_entirely_global_costmap"/>
        <ClearEntireCostmap name="ClearLocalCostmap"
                            service_name="/local_costmap/clear_entirely_local_costmap"/>
        <Spin spin_dist="1.57"/>
        <Wait wait_duration="5"/>
        <BackUp backup_dist="0.3" backup_speed="0.05"/>
      </SequenceStar>
    </RecoveryNode>
  </BehaviorTree>
</root>

Execution Flow:

  1. Calculate global path (replan every second)
  2. Controller tracks path
  3. If stuck -> clear costmap -> spin -> wait -> backup
  4. Retry up to 6 times
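For intuition, the RecoveryNode semantics reduce to a retry loop. This is a toy Python model of the control flow, not Nav2 code:

```python
def navigate_with_recovery(try_navigate, recover, max_retries=6):
    """Toy model of Nav2's RecoveryNode: run the navigation pipeline,
    and on failure run the recovery sequence before trying again."""
    for attempt in range(max_retries + 1):
        if try_navigate():
            return True            # pipeline succeeded
        if attempt < max_retries:
            recover()              # clear costmaps, spin, wait, back up
    return False                   # all retries exhausted

# Example: navigation succeeds on the third attempt
attempts = iter([False, False, True])
print(navigate_with_recovery(lambda: next(attempts), lambda: None))  # → True
```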

Common Nav2 Troubleshooting

Robot Won't Move

# Check TF tree
ros2 run tf2_tools view_frames
# Must have: map → odom → base_footprint → base_link
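What "must have" means here is that the frames form one connected chain of child-to-parent links. A toy checker over such a mapping (the real tool is view_frames above; this only illustrates the idea):

```python
def tf_chain_ok(parents, frame, root='map'):
    """Walk child -> parent links from `frame`; True if we reach `root`."""
    seen = set()
    while frame != root:
        if frame in seen or frame not in parents:
            return False   # missing transform or a cycle: broken TF tree
        seen.add(frame)
        frame = parents[frame]
    return True

# Healthy chain: map → odom → base_footprint → base_link
tree = {'base_link': 'base_footprint', 'base_footprint': 'odom', 'odom': 'map'}
print(tf_chain_ok(tree, 'base_link'))                        # → True
# AMCL down: nobody publishes the map → odom transform
print(tf_chain_ok({'base_link': 'base_footprint',
                   'base_footprint': 'odom'}, 'base_link'))  # → False
```

A missing map → odom link usually means AMCL isn't running; a missing odom → base_footprint link points at the robot driver or simulator.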

# Check costmap
ros2 topic echo /local_costmap/costmap --once
# If empty → LiDAR topic wrong or QoS mismatch

Robot Hits Obstacles

  - Increase inflation_radius (e.g. 0.55 → 0.7) so planned paths keep more clearance from walls
  - Verify robot_radius matches the real robot; measure it rather than guessing
  - Lower max_vel_x so the controller has time to react to newly sensed obstacles

AMCL Won't Converge

  - Make sure the initial pose from "2D Pose Estimate" is roughly correct; AMCL refines a pose, it does not relocalize globally by default
  - Increase max_particles (e.g. 2000 → 5000) in large or symmetric environments
  - Drive the robot a short distance: AMCL only updates after update_min_d / update_min_a of motion

Summary

In this article, you have:

  - Installed Nav2, SLAM Toolbox and the TurtleBot3 simulation packages
  - Built a map with SLAM Toolbox and saved it with map_saver_cli
  - Configured AMCL, the planner, controller, costmaps and recovery behaviors in nav2_params.yaml
  - Sent navigation goals from RViz and from Python with nav2_simple_commander
  - Seen how Behavior Trees orchestrate planning, control and recovery

This is the foundation for any AMR — from warehouse delivery robots to cleaning robots. In Part 4, we will dive into the lowest layer: connecting ROS 2 to real hardware via ros2_control and hardware interface.

