TU Many Bots: Robotic search and rescue

Python C++ ROS PyTorch CV

Imagine a scenario where two capable robots are dropped into an unknown environment. Their mission? To locate the exit while also finding and guiding a blind robot, one that lacks visual sensors, to safety. This is the challenge tackled by the TU Many Bots @TUM project, an exploration and guidance system developed with ROS Noetic on Ubuntu 20.04.

The Mission

The core idea behind the project is both simple and compelling:

Two capable robots are spawned into an unknown world.
They must locate the exit, as well as a blind robot, i.e., one without visual sensors.
They must guide the blind robot to the exit.

This multi-robot system not only navigates complex environments but also cooperates to ensure that even a sensor-impaired robot can reach safety. The project beautifully combines elements of perception, SLAM, path planning, and coordinated control to achieve this goal.

Diving into the System

1. Perception: Seeing Beyond Sight

At the heart of the system is a perception module that fuses data from cameras, laser scanners, and odometry. The capable robots use this data to detect the blind robot: a trained deep-learning object detector (run through darknet_ros) produces bounding boxes from the camera feeds, which are then combined with the laser data. This allows the system to estimate the blind robot’s position and bearing accurately, even when direct visual input is unavailable.

Visual Insight:

  • Sample detection image: the perception pipeline processes the sensor data, estimates the blind robot’s location, and predicts its orientation.
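
To make the fusion step concrete, here is a minimal sketch of how a detection bounding box and a laser scan could be combined into a rough relative position. The function, the linear pixel-to-bearing model, and the assumption that the camera and laser share the robot’s forward-facing frame are illustrative simplifications, not the project’s exact code:

import math

def estimate_relative_position(bbox_center_x, image_width, horizontal_fov, scan):
    """Rough relative (x, y) of a detected robot in the observer's frame.

    bbox_center_x: pixel column of the bounding-box centre.
    horizontal_fov: camera horizontal field of view in radians.
    scan: a sensor_msgs/LaserScan-like object (angle_min, angle_increment, ranges).
    """
    # Approximate bearing of the detection (left of image centre => positive angle).
    bearing = ((image_width / 2.0 - bbox_center_x) / image_width) * horizontal_fov
    # Pick the laser ray closest to that bearing to obtain a range measurement.
    index = int(round((bearing - scan.angle_min) / scan.angle_increment))
    index = max(0, min(index, len(scan.ranges) - 1))
    distance = scan.ranges[index]
    # Convert the (range, bearing) pair into a position in the observer's frame.
    return distance * math.cos(bearing), distance * math.sin(bearing)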

2. Mapping the Unknown: SLAM and Map Merging

For simultaneous localization and mapping (SLAM), the project leverages the well-established gmapping package. With parameters tuned to update the map at 1 Hz and to leave space beyond the laser’s usable range marked as unknown, the system reliably generates maps even in cluttered environments.
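
For reference, the behaviour described above corresponds to a handful of slam_gmapping parameters. The names below are real gmapping options, but the values are assumptions (the project sets its own in its launch files):

# Illustrative gmapping tuning; values are placeholders, not the project's.
gmapping_params = {
    "map_update_interval": 1.0,  # seconds between map updates, i.e. roughly a 1 Hz map
    "maxUrange": 8.0,            # beams beyond this range are ignored, leaving distant cells unknown
    "delta": 0.05,               # map resolution in metres per cell
}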

Once local maps are created by the exploring robots, they are merged using the multirobot_map_merge package. This merged map is crucial for planning routes and ensuring that every robot has a shared understanding of the environment.
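
Conceptually, once the relative transform between two maps is known, merging reduces to overlaying occupancy grids. The toy function below shows only that overlay step and is not how multirobot_map_merge is actually implemented (the package also estimates and applies the transform between maps):

import numpy as np

def merge_aligned(grid_a, grid_b):
    """Toy merge of two already-aligned grids (-1 unknown, 0 free, 100 occupied)."""
    merged = grid_a.copy()
    unknown_a = (grid_a == -1)
    merged[unknown_a] = grid_b[unknown_a]  # fill A's unknown cells from B
    both_known = (grid_a != -1) & (grid_b != -1)
    # Where both robots have information, keep the more conservative (occupied) value.
    merged[both_known] = np.maximum(grid_a[both_known], grid_b[both_known])
    return merged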

3. Exploration and Navigation: Finding the Exit

To explore the environment, the project uses the explore_lite package. This module helps the robots identify frontiers—the boundaries between explored and unexplored space. By publishing goals towards the largest frontiers, the robots ensure that they cover the area efficiently.
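
As a toy illustration of the frontier concept (not explore_lite’s actual implementation), a frontier cell can be identified as a free cell that borders unknown space in the occupancy grid:

import numpy as np

def frontier_cells(grid):
    """Return (row, col) indices of frontier cells in an occupancy grid.

    grid: 2D array with -1 = unknown, 0 = free, 100 = occupied.
    """
    free = (grid == 0)
    unknown = (grid == -1)
    # Mark cells that have at least one 4-connected unknown neighbour.
    neighbour_unknown = np.zeros_like(unknown)
    neighbour_unknown[1:, :] |= unknown[:-1, :]
    neighbour_unknown[:-1, :] |= unknown[1:, :]
    neighbour_unknown[:, 1:] |= unknown[:, :-1]
    neighbour_unknown[:, :-1] |= unknown[:, 1:]
    # A frontier cell is free space adjacent to unknown space.
    return np.argwhere(free & neighbour_unknown)

explore_lite then groups such cells into frontiers and publishes goals toward the largest ones, as described above.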

Navigation is handled by the move_base node, which creates both global and local costmaps. This setup allows the robots to navigate dynamically, avoiding obstacles in real-time while pursuing the exit and the blind robot.
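
Goals are handed to move_base through its action interface. Below is a minimal, self-contained example of sending a single navigation goal; the namespaced action name and the target pose are placeholders rather than values taken from the project:

import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_goal_example")
client = actionlib.SimpleActionClient("robot_1/move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"      # plan in the shared (merged) map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0        # e.g. a frontier centroid
goal.target_pose.pose.position.y = 1.5
goal.target_pose.pose.orientation.w = 1.0     # identity orientation

client.send_goal(goal)
client.wait_for_result()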

4. The Guiding Routine: A Chain of Trust

The most critical part of the system is the guiding routine. Once the exit and the blind robot have been located, the task shifts from exploration to guidance. The guiding module publishes the estimated poses of all robots and calculates control commands using transformations from TF. This results in a coordinated, single-file chain where the capable robots lead the blind robot safely to the exit.
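
A minimal sketch of what one iteration of such a TF-based follow loop could look like; the frame names, topic names, and gains are placeholders, not the project’s actual configuration:

import math
import rospy
import tf
from geometry_msgs.msg import Twist

rospy.init_node("follow_example")
listener = tf.TransformListener()
cmd_pub = rospy.Publisher("robot_blind/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)

while not rospy.is_shutdown():
    try:
        # Position of the leading robot expressed in the blind robot's frame.
        (trans, _) = listener.lookupTransform("robot_blind/base_link",
                                              "robot_1/base_link",
                                              rospy.Time(0))
    except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        rate.sleep()
        continue
    cmd = Twist()
    cmd.angular.z = 1.5 * math.atan2(trans[1], trans[0])                     # turn toward the leader
    cmd.linear.x = 0.5 * max(0.0, math.hypot(trans[0], trans[1]) - 0.6)      # keep roughly a 0.6 m gap
    cmd_pub.publish(cmd)
    rate.sleep()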

Experience the Process:

  • Simulation video: the capable robots coordinating to guide the blind robot to the exit.

5. Robust Software Architecture

The project’s architecture is modular:

  • Robot State Publisher: Customized to manage multiple robots by ensuring unique coordinate frames.
  • Perception, SLAM, and Navigation Modules: Seamlessly integrated to work together, even when computational demands are high.
  • Guiding and Following Routines: Ensure smooth handoffs between exploration and guidance phases.

Each module communicates through well-defined APIs, making the system both robust and extensible.

Setting Up and Running the System

Getting the system up and running is straightforward if you have ROS Noetic and the necessary hardware. Detailed installation scripts (install_script.bash) and configuration scripts (config_script.bash) streamline the setup process. For users with NVIDIA GPUs, the full computer vision pipeline can be enabled, although a mock pipeline is available for systems without the required hardware.

To build and launch the simulation:

cd <your catkin workspace>
catkin config --blacklist darknet_ros         # exclude darknet_ros from the build
catkin build                                  # build the remaining packages
source devel/setup.bash                       # overlay the freshly built workspace
source src/TU_Many_Bots/config_script.bash    # apply the project's configuration script
roslaunch tmb_startup complete_launch.launch  # bring up the full simulation

The vision module can also be run independently:

roslaunch darknet_ros darknet_ros.launch

Looking Ahead

The TU Many Bots @TUM project exemplifies the potential of collaborative robotics in solving real-world navigation challenges. Future developments might include:

  • More distributed exploration algorithms.
  • Dynamic map updates post-localization.
  • Enhanced error handling and control precision.

By continually refining these systems, researchers are paving the way for robots that can navigate and collaborate in increasingly complex and dynamic environments.

Join us on this exciting journey as we push the boundaries of robotics and explore what’s possible when technology meets real-world challenges!