In this beginner-friendly tutorial, you’ll guide your robot through its very first autonomous mapping session using ROS 2 SLAM. This post is part of our hands-on learning series, where we break down core robotics concepts into real, buildable workflows.
If you’ve already built a basic Raspberry Pi mapping robot and have your sensors set up, you’re now ready to give your robot the ability to perceive space and create a digital map of its surroundings: an essential milestone in robotics. This is your ROS 2 SLAM beginner guide, tailored for practical implementation using tools like RViz, SLAM Toolbox, and the Nav2 stack.
Whether you’re an engineering student, hobbyist, or startup prototyper, mastering ROS 2 SLAM is foundational to advanced robot autonomy.
1. Headless SSH Setup — “Knock-Knock, RPi?”
Before your robot can start mapping with ROS 2 SLAM, you need access to the Raspberry Pi running your ROS 2 nodes. In a production setup, robots rarely have screens or keyboards; they run “headless.” That means remote access via SSH is essential.
First, identify your Raspberry Pi’s IP address by logging into your Wi-Fi router or using a mobile network scanner. Once located, use the following command from your laptop or development workstation:
```bash
ssh ubuntu@<raspberry-pi-ip>
```
Replace <raspberry-pi-ip> with the actual IP address. On your first login, Ubuntu will prompt you to change the default password, which secures every future remote session.
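If you’d rather find the Pi from the command line than from the router page, a ping scan works too. A minimal sketch, assuming your network uses the common 192.168.1.0/24 subnet (adjust to match yours):

```bash
# Ping-scan the local subnet and list live hosts; running with sudo lets
# nmap report MAC vendors, which makes the Raspberry Pi easy to spot
sudo nmap -sn 192.168.1.0/24
```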
Remote access is not only convenient; it also mirrors professional robot deployment practices in labs and field settings, especially when testing ROS 2 SLAM in real-world environments.
2. Connecting Your USB LiDAR — “Pocket Flashlight for Your Robot”
Imagine giving your robot a handheld flashlight that spins rapidly, scanning everything around it. That’s exactly what a 2D USB LiDAR sensor does—it provides 360-degree spatial awareness using light pulses.
Connect your LiDAR to a USB 3.0 port on the Raspberry Pi. Then confirm that it’s detected by the operating system:
```bash
ls /dev/tty*
```
Look for a new /dev/ttyUSB0 or similar device. If your LiDAR draws significant power (more than 500 mA), consider using an external 5V buck converter to prevent system instability. Power-related interruptions can compromise scan accuracy and even crash your SLAM session.
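If the device node appears but the driver later fails to open it, serial-port permissions are the usual culprit. A quick check and fix, assuming the LiDAR enumerated as a standard USB-serial device:

```bash
# Confirm the kernel registered the USB-serial adapter
sudo dmesg | grep -i tty

# Serial devices belong to the dialout group on Ubuntu;
# add your user to it, then log out and back in for the change to apply
sudo usermod -aG dialout $USER
```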
This sensor is your robot’s primary tool for environmental understanding. Without it, SLAM is impossible.
3. Visualize LaserScan Data in RViz
Once the LiDAR is active, it’s time to visualize the raw scan data using RViz, the standard visualization tool for ROS.
Start the LiDAR driver:
```bash
ros2 launch <lidar_driver_package> <launch_file>.py
```
Now open RViz on your desktop (or forward X11 if running remotely) and add a LaserScan display. Select the appropriate topic, typically /scan.
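If the display stays empty, confirm that data is actually flowing before debugging RViz itself. The standard ROS 2 CLI tools can check the topic directly:

```bash
# List active topics; /scan should appear once the driver is running
ros2 topic list

# Measure the publish rate (most 2D LiDARs publish at roughly 5–15 Hz)
ros2 topic hz /scan

# Stream messages to inspect the ranges and frame_id (Ctrl+C to stop)
ros2 topic echo /scan
```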
As you move the LiDAR by hand (or eventually by robot motion), you’ll see a live sweep of dots forming a 360-degree scan. Each dot is a raw range measurement, tracing walls, furniture, and obstacles.
If you’re seeing dots update in real-time, congratulations—your robot has just taken its first look around using ROS 2 SLAM.
4. Understanding the Occupancy Grid — “Pixel-Art Mapping”
The occupancy grid is a 2D map constructed from the LaserScan data. It represents space as a grid of pixels or “cells,” each storing probability values:
- White: Free space the robot can move through
- Black: Detected obstacles or walls
- Grey: Unknown regions
These grids are the basis for navigation, obstacle avoidance, and path planning. They’re dynamically updated as your robot explores, and are one of the most common outputs of a ROS 2 SLAM system.
The occupancy grid is the digital floorplan your robot draws based on its laser perception—an essential output for autonomous movement.
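You can see this encoding in the message itself: in ROS 2, the map is published as a nav_msgs/msg/OccupancyGrid, where each cell is an int8 holding 0 (free), 100 (occupied), or -1 (unknown). To inspect the definition from the command line:

```bash
# Print the OccupancyGrid message definition;
# the data field stores one int8 per cell (0 = free, 100 = occupied, -1 = unknown)
ros2 interface show nav_msgs/msg/OccupancyGrid
```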
5. Launching SLAM Toolbox (Online Async Mode)
Now it’s time to transform raw LaserScan data into a real map using SLAM Toolbox, one of the most robust and actively maintained SLAM packages for ROS 2.
To begin live mapping:
```bash
ros2 launch slam_toolbox online_async_launch.py
```
This command starts the SLAM algorithm in asynchronous mode: rather than queuing every scan, it always processes the most recent one, so mapping keeps pace in real time as your robot moves. Return to RViz and add the Map display to see the occupancy grid grow with each scan.
Walls, corners, and room boundaries begin appearing on your screen, turning laser pulses into meaningful architecture.
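The defaults are fine for a first run, but real robots usually need a few parameters tuned (scan topic, base frame, resolution). The launch file accepts a parameter file; the argument name below matches recent SLAM Toolbox releases, but verify it against your installed version, and the path shown is only illustrative:

```bash
# Launch SLAM Toolbox with a custom parameter file (path is an example)
ros2 launch slam_toolbox online_async_launch.py \
  slam_params_file:=/path/to/mapper_params_online_async.yaml
```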
For more about how SLAM Toolbox works, check out its GitHub repository.
6. Saving the Map — “Click Save in Your Robot’s Memory”
Once your robot has completed scanning the environment, you’ll want to save that map for future navigation or localization tasks.
Use the following command to save the current occupancy grid:
```bash
ros2 run nav2_map_server map_saver_cli -f my_first_map
```
This generates two files:
- my_first_map.pgm: the grayscale image representation
- my_first_map.yaml: the associated metadata
You can reload these maps later for autonomous navigation using ROS 2’s Nav2 stack.
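Reloading goes through Nav2’s map_server, which is a lifecycle node, so it must be configured and activated after launch. A minimal sketch, assuming the default /map_server node name:

```bash
# Serve the saved map (runs as a managed lifecycle node)
ros2 run nav2_map_server map_server --ros-args -p yaml_filename:=my_first_map.yaml

# In a second terminal, step the node through its lifecycle
ros2 lifecycle set /map_server configure
ros2 lifecycle set /map_server activate
```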
Saving the map ensures your robot doesn’t have to rediscover the world every time it boots up.
7. Drive and Loop Closure — Teleoperate Your Robot
To test SLAM performance, drive your robot manually using a keyboard-based teleoperation node:
```bash
ros2 run teleop_twist_keyboard teleop_twist_keyboard
```
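By default this node publishes geometry_msgs/Twist messages on /cmd_vel. If your motor driver listens on a different topic, remap it at launch; the topic name below is only an example:

```bash
# Remap the output topic to match your robot's velocity interface
ros2 run teleop_twist_keyboard teleop_twist_keyboard \
  --ros-args -r cmd_vel:=/my_robot/cmd_vel
```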
Attempt to move the robot in a square or circular pattern. As you loop back to the starting point, a well-functioning SLAM system should “close the loop,” aligning the map edges seamlessly.
If the map is distorted or the loop doesn’t close properly, don’t worry. This is common in beginner SLAM setups and usually indicates pose drift.
8. Why the Map Has Gaps — The Role of Sensor Fusion
At this point, your robot is mapping using only LiDAR data. While LiDAR is powerful, it lacks internal awareness of the robot’s movement. That’s why the robot’s pose estimate may drift over time, especially during turns or long paths.
Without sensor fusion (the combination of IMU, wheel encoder, and LiDAR data), your robot is effectively guessing where it is based only on external scans. This introduces error.
SLAM gets significantly more accurate when you integrate multiple sensor sources. That’s the next big step in our ROS 2 development journey.
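As a preview, the most common ROS 2 approach is the robot_localization package, whose EKF node fuses wheel odometry and IMU data into one pose estimate. A sketch, assuming the package is installed and you’ve written an EKF config file (the path is illustrative):

```bash
# Install robot_localization (replace <distro> with your ROS 2 distribution)
sudo apt install ros-<distro>-robot-localization

# Run the EKF node with your fusion configuration
ros2 run robot_localization ekf_node --ros-args --params-file /path/to/ekf.yaml
```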
9. Share Your Map With the Community
Your robot has now drawn its first digital map—a major achievement.
Take a screenshot of your occupancy grid in RViz and post it in our Robotisim Discord server. Seeing how others are solving similar SLAM challenges can inspire and guide your next moves.
Mapping your own space is a rite of passage for robotics developers. Celebrate it, document it, and use it as your baseline for future optimization.
10. What’s Next — Sensor Fusion and Navigation
Now that your robot can generate a basic map using ROS 2 SLAM, it’s time to improve that map’s fidelity. In our next post, Sensor Fusion Made Easy, we’ll show how to:
- Fuse wheel encoder data with IMU measurements
- Reduce pose drift during movement
- Create smoother, more accurate maps
- Prepare for autonomous path planning using the Nav2 stack
These enhancements will take your robot from basic mapping to robust navigation.
Final Thoughts on Mapping with ROS 2 SLAM
This ROS 2 SLAM beginner guide introduced you to the core process of real-world mapping with a Raspberry Pi robot. We covered remote access, LiDAR setup, live scan visualization, occupancy grids, and saving your first digital map.
This is a critical foundational skill in robotics. Autonomous mapping empowers everything from delivery bots to exploration rovers. By using standard robot protocols, the ROS 2 tutorial ecosystem, and affordable hardware like the Raspberry Pi mapping robot, you’re joining the global robotics community in building smarter, more capable machines.
For more in-depth walkthroughs, visit our ROS 2 learning hub at Robotisim.