Overview

The goal of this project was to retrieve an object, in this case a multimeter, and return it to a specified location. This was accomplished using MoveIt!, Grasp Pose Detection, ROS Navigation, and rtabmap_ros. Using MoveIt!'s MoveItCpp API, I created a pick-and-place pipeline.
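
At its core the pipeline plans and executes arm motions through MoveItCpp. Below is a minimal sketch of a single planned motion; the planning group `right_arm`, end-effector link `right_hand`, and frame `base` are Sawyer-style names assumed for illustration, not taken from the project's code.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit/moveit_cpp/moveit_cpp.h>
#include <moveit/moveit_cpp/planning_component.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pick_place_sketch");
  ros::NodeHandle nh;
  ros::AsyncSpinner spinner(1);
  spinner.start();

  // MoveItCpp accesses the planning scene directly instead of going
  // through the move_group node's action interfaces.
  auto moveit_cpp = std::make_shared<moveit_cpp::MoveItCpp>(nh);
  moveit_cpp->getPlanningSceneMonitor()->providePlanningSceneService();

  moveit_cpp::PlanningComponent arm("right_arm", moveit_cpp);

  // Target pose, e.g. the top-ranked grasp from Grasp Pose Detection.
  geometry_msgs::PoseStamped target;
  target.header.frame_id = "base";
  target.pose.orientation.w = 1.0;
  target.pose.position.x = 0.6;
  target.pose.position.z = 0.2;

  arm.setStartStateToCurrentState();
  arm.setGoal(target, "right_hand");

  // Plan to the goal and execute the trajectory if planning succeeded.
  if (arm.plan())
  {
    arm.execute();
  }

  ros::shutdown();
  return 0;
}
```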

See the relevant packages nu_ridgeback and sawback on GitHub.

Hardware

This robot, referred to as Sawback, is a Sawyer arm from Rethink Robotics mounted on a Ridgeback base from Clearpath Robotics. The Sawback is equipped with a Velodyne lidar, a Bumblebee stereo camera, and two Hokuyo UST-10LX lidars.

Bring Up

A large part of this project was getting the robot up and running. See the network configuration along with the bring up procedure on GitHub. The challenging component was setting up the network so that the sensors, the Ridgeback's computer, Sawyer's computer, and the user's computer could all communicate with each other.
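
As an illustration of the moving parts (all addresses and hostnames here are hypothetical, not the project's actual configuration), every machine must point at the same ROS master and advertise an address the other machines can reach:

```bash
# Hypothetical addresses for illustration only.
# Every machine points at the single ROS master (e.g. on the Ridgeback)...
export ROS_MASTER_URI=http://192.168.131.1:11311
# ...and advertises the address other machines should use to reach it.
export ROS_IP=192.168.131.50   # this machine's address on the robot's LAN

# If hostnames are used instead of ROS_IP, each machine also needs matching
# /etc/hosts entries so every name resolves consistently, e.g.:
#   192.168.131.1    ridgeback
#   192.168.131.40   sawyer
```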

Process

MoveIt, along with an IKFast plugin, was used for Sawyer's motion planning. The IKFast plugin greatly improved planning performance when using Cartesian path planning for the pre-grasp and post-grasp components of a pick or place.

Grasp Pose Detection was used to sample the point cloud from the Bumblebee camera for grasp candidates. Before sampling, the object's point cloud was segmented from the ground plane; this is required to prevent Grasp Pose Detection from generating grasp candidates on the ground. Grasp Pose Detection returns a list of grasp poses ranked by the predicted success of each grasp, and the highest-ranked grasp was used. If the robot failed to pick up the multimeter, the point cloud was sampled again.
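
The pre-grasp approach and post-grasp retreat are straight-line end-effector motions, which is where Cartesian planning (and fast IK) matters: the path is interpolated into many closely spaced waypoints, and each one needs its own IK solution. Here is a minimal sketch of a pre-grasp approach using the common MoveGroupInterface API for brevity rather than the project's MoveItCpp pipeline; the group name and the 10 cm offset are assumptions.

```cpp
#include <vector>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>
#include <moveit_msgs/RobotTrajectory.h>

// Plan a straight-line descent from a pre-grasp pose down to the grasp pose.
bool planApproach(moveit::planning_interface::MoveGroupInterface& arm,
                  const geometry_msgs::Pose& grasp,
                  moveit_msgs::RobotTrajectory& trajectory)
{
  geometry_msgs::Pose pre_grasp = grasp;
  pre_grasp.position.z += 0.10;  // hover 10 cm above the grasp pose (assumed)

  const std::vector<geometry_msgs::Pose> waypoints = { pre_grasp, grasp };

  // eef_step: interpolate a waypoint every 1 cm; jump_threshold of 0 disables
  // joint-space jump checking. Every interpolated waypoint requires an IK
  // solution, which is where IKFast pays off.
  const double fraction = arm.computeCartesianPath(waypoints, 0.01, 0.0, trajectory);
  return fraction > 0.99;  // succeed only if nearly the full path was achieved
}
```

The ground segmentation can be done with a RANSAC plane fit; the project's exact method isn't shown here, but a typical PCL sketch looks like this (the 1 cm inlier threshold is illustrative):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

// Remove the dominant plane (the ground) so only the object remains.
pcl::PointCloud<pcl::PointXYZ>::Ptr
removeGround(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
  // Fit a plane to the cloud with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm inlier threshold (illustrative)
  seg.setInputCloud(cloud);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coefficients);

  // Keep everything that is NOT part of the ground plane.
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);

  pcl::PointCloud<pcl::PointXYZ>::Ptr object(new pcl::PointCloud<pcl::PointXYZ>);
  extract.filter(*object);
  return object;
}
```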

The following GIFs show the robot picking up the multimeter.

The Bumblebee camera was aimed towards the ground for grasping, which made its point cloud of little use for 3D SLAM. Instead, 3D SLAM was performed with rtabmap_ros using the point cloud from the Velodyne lidar. The robot's large mass helped prevent its mecanum wheels from slipping, so odometry was performed using only the Ridgeback's wheel encoders.
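
A sketch of what this can look like in a launch file, assuming standard topic names (`/velodyne_points` for the lidar and the Ridgeback controller's wheel odometry topic; both are assumptions, not the project's actual remappings):

```xml
<launch>
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
    <!-- Map from the Velodyne point cloud rather than a depth camera. -->
    <param name="subscribe_depth"      value="false"/>
    <param name="subscribe_scan_cloud" value="true"/>
    <param name="frame_id"             value="base_link"/>

    <remap from="scan_cloud" to="/velodyne_points"/>
    <!-- Odometry comes from the Ridgeback's wheel encoders only. -->
    <remap from="odom" to="/ridgeback_velocity_controller/odom"/>
  </node>
</launch>
```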

The ROS Navigation stack was used for path planning and control. The Velodyne lidar's 3D scan was projected into the 2D plane for navigation, which captures each obstacle's 3D geometry as occupied space in the occupancy grid. The 2D laser scans from the two Hokuyo lidars, along with a second 2D scan from the Velodyne (which provides both a 3D point cloud and a 2D scan), were used to update the navigation stack's costmaps.
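
One stock way to perform this projection is the pointcloud_to_laserscan node; whether the project used this node or another mechanism isn't stated, so the topics and height band below are assumptions:

```xml
<launch>
  <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node"
        name="velodyne_to_scan">
    <remap from="cloud_in" to="/velodyne_points"/>
    <remap from="scan"     to="/velodyne_2d_scan"/>
    <!-- Any point within this height band marks its cell as occupied,
         so obstacles well above the floor still appear in the costmaps. -->
    <param name="min_height" value="0.0"/>
    <param name="max_height" value="1.5"/>
  </node>
</launch>
```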

Below, the ROS Navigation stack is running in a narrow hallway. The local costmap is shown in blue and magenta. The occupancy grid is produced by rtabmap_ros, and the 3D scan from the Velodyne lidar is also shown.