
Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision

This ROS node provides a driver for object-independent human-to-robot handovers using robotic vision. The approach requires only one RGBD camera and can therefore be used in a variety of use cases without the need for artificial setups such as markers or external cameras. The object-independent grasp selection approach (GGCNN) ensures general applicability even in cluttered environments. To ensure safe handovers, the approach uses two neural networks to segment body parts and hands.

The robot's movements are based on the input of three packages: Bodyparts, Egohands, and GGCNN.
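The exact interfaces are defined by those three packages. As a rough illustration of how their outputs might be fused, the sketch below subscribes to a depth image and two segmentation masks and removes human pixels from the depth data before grasp selection. All topic names, the mask encoding, and the node structure are assumptions made for this example and are not taken from the actual implementation.

```python
# Illustrative sketch only: topic names and mask encoding are assumed,
# not taken from the actual h2r_handovers implementation.
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
latest = {"depth": None, "bodyparts": None, "egohands": None}

def make_callback(key):
    def callback(msg):
        # Store the most recent frame of each input stream.
        latest[key] = bridge.imgmsg_to_cv2(msg)
    return callback

def safe_grasp_region():
    """Mask out pixels belonging to body parts or hands (mask > 0 assumed)."""
    if any(v is None for v in latest.values()):
        return None
    # Assumes all three images share the same resolution.
    human = (latest["bodyparts"] > 0) | (latest["egohands"] > 0)
    depth = latest["depth"].copy()
    depth[human] = 0  # ignore depth readings on the human
    return depth

rospy.init_node("handover_masking_sketch")
rospy.Subscriber("/camera/aligned_depth_to_color/image_raw", Image, make_callback("depth"))  # assumed topic
rospy.Subscriber("/bodyparts/mask", Image, make_callback("bodyparts"))                       # assumed topic
rospy.Subscriber("/egohands/mask", Image, make_callback("egohands"))                         # assumed topic
rospy.spin()
```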

Input

Output

Examples

Example Video

This YouTube video shows the handover of 20 household objects from a frontal and a lateral perspective.

YouTube H2R Handovers

Example RVIZ

This image shows the handover of a banana as seen by the robot. The visualization was done in RVIZ.

Getting Started

Dependencies

The models have been tested with Python 2.7.

Hardware Requirements

  • Depth camera (an Intel RealSense D435 was used for this project)
  • GPU with at least 4 GB of memory

Software Requirements

ATTENTION: This package requires the Robot Operating System (ROS)!

rospy
actionlib
std_msgs
geometry_msgs
tf
tf2_ros
tf2_geometry_msgs
rv_msgs
rv_manipulation_driver

ROS 3rd party packages

Note: To enable real-time processing, it might be necessary to distribute these packages across several computers. We recommend using sensor_msgs/CompressedImage to keep network usage at a reasonable level.
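As a minimal sketch of the compressed-image approach, the node below subscribes to a sensor_msgs/CompressedImage topic and decodes each frame with OpenCV. The topic name is an assumption and must be adapted to your camera setup.

```python
# Sketch of consuming a compressed color stream to reduce network load;
# the topic name is an assumption, not part of this package.
import rospy
import numpy as np
import cv2
from sensor_msgs.msg import CompressedImage

def callback(msg):
    # Decode the JPEG/PNG payload into an OpenCV image.
    buf = np.frombuffer(msg.data, dtype=np.uint8)
    image = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    rospy.loginfo("received %dx%d frame", image.shape[1], image.shape[0])

rospy.init_node("compressed_image_listener")
rospy.Subscriber("/camera/color/image_raw/compressed", CompressedImage, callback, queue_size=1)  # assumed topic
rospy.spin()
```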

Launch

Before launching the package, make sure that the camera and the 3rd party ROS packages are up and running.

The ROS package contains a launch file:

Input

Output

Configuration

The initial setup can be changed by adapting the handover.yaml file (a sketch of reading these parameters follows the list below):

Camera:

  • depth: Rostopic the node is subscribing to (depth image).

Subscription:

  • bodyparts: Rostopic the node is subscribing to (bodyparts).
  • egohands: Rostopic the node is subscribing to (egohands).

GGCNN:

  • topic: Rostopic the node is subscribing to (GGCNN).
  • window: Number of GGCNN estimates combined into a window to make the picking-point estimation more robust.
  • deviation_position: Maximal deviation in x, y, and z from the window's mean. Estimations with a larger deviation in any of these directions are dropped and not considered when calculating the grasp point.
  • deviation_orientation: Maximal deviation in x, y, z, and w (orientation) from the window's mean. Estimations with a larger deviation in any of these components are dropped and not considered when calculating the grasp point (see the sketch below).
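The node's own filtering code is not reproduced here; the stand-alone sketch below merely illustrates the idea behind window, deviation_position, and deviation_orientation: estimates that deviate too far from the window mean are dropped, and the remaining ones are averaged. The function name and array layout are made up for this example.

```python
# Stand-alone sketch of the windowed outlier rejection described above;
# the function name and pose layout are illustrative, not the node's API.
import numpy as np

def filter_grasp_window(poses, dev_position, dev_orientation):
    """poses: (N, 7) array of [x, y, z, qx, qy, qz, qw] grasp estimates.

    Drops estimates whose position (x, y, z) or orientation (x, y, z, w)
    deviates from the window mean by more than the given thresholds,
    then returns the mean of the remaining estimates.
    """
    poses = np.asarray(poses, dtype=float)
    mean = poses.mean(axis=0)
    pos_ok = np.all(np.abs(poses[:, :3] - mean[:3]) <= dev_position, axis=1)
    ori_ok = np.all(np.abs(poses[:, 3:] - mean[3:]) <= dev_orientation, axis=1)
    kept = poses[pos_ok & ori_ok]
    if len(kept) == 0:
        return None  # no consistent estimate in this window
    return kept.mean(axis=0)
```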

Robot:

  • arm_state: Rostopic the node is subscribing to (arm state).
  • arm_servo_pose: Rostopic the node is publishing to (servo pose, see rv_manipulation_driver).
  • arm_named_pose: Rostopic the node is publishing to (named pose, see rv_manipulation_driver).
  • arm_gripper: Rostopic the node is publishing to (gripper, see rv_manipulation_driver).

Movement:

  • dist_ggcnn: Distance to the object down to which GGCNN keeps updating the picking point (future feature, not yet implemented).
  • dist_final: Distance to the object down to which the system monitors the object for deviations or unforeseen events; such events cause the system to abort the handover.
  • speed_approach: Speed during setup and object transfer to dropping location (see rv_manipulation_driver).
  • scaling_handover: Speed during the handover (see rv_manipulation_driver).

Gripper:

  • gripper_open: Width of the opened gripper.
  • gripper_closed: Width of the closed gripper.

Visualization: The visualization node publishes the picking point to be displayed in RVIZ.

  • topic: Rostopic the node is publishing to (visualization).
  • activated: Turns the visualization on or off; use the keywords "True" or "False" only.
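As a sketch of how such parameters are typically read in a rospy node, the snippet below loads a few of the values listed above from the parameter server. The parameter namespace, key paths, and default values are assumptions, so check handover.yaml for the authoritative names.

```python
# Sketch of reading the configuration at node start-up. The parameter
# namespace, key paths, and default values below are assumptions based on
# the list above; handover.yaml remains the authoritative source.
import rospy

rospy.init_node("handover_config_sketch")

config = {
    "depth_topic":     rospy.get_param("~camera/depth", "/camera/depth/image_raw"),
    "bodyparts_topic": rospy.get_param("~subscription/bodyparts", "/bodyparts/mask"),
    "egohands_topic":  rospy.get_param("~subscription/egohands", "/egohands/mask"),
    "ggcnn_topic":     rospy.get_param("~ggcnn/topic", "/ggcnn/grasp"),
    "ggcnn_window":    rospy.get_param("~ggcnn/window", 10),
    "dist_final":      rospy.get_param("~movement/dist_final", 0.1),
    "gripper_open":    rospy.get_param("~gripper/gripper_open", 0.08),
    "gripper_closed":  rospy.get_param("~gripper/gripper_closed", 0.0),
    "visualization":   rospy.get_param("~visualization/activated", False),
}
rospy.loginfo("handover configuration: %s", config)
```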

Acknowledgments

The ROS node interacts with the Franka Emika robot arm using the high-level API rv_manipulation_driver provided by the Australian Centre for Robotic Vision (ACRV).

License

The project is licensed under the BSD 4-Clause License.

Disclaimer

Please keep in mind that no system is 100% fault tolerant and that this demonstrator is focused on pushing the boundaries of innovation. Careless interaction with robots can lead to serious injuries, always use appropriate caution!

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
