Depth-aware Object Segmentation and Grasp Detection for Robotic Picking Tasks

Stefan Ainetter, Christoph Böhm, Rohit Dhakate, Stephan Weiss, Friedrich Fraundorfer

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

In this paper, we present a novel deep neural network architecture for joint class-agnostic object segmentation and grasp detection for robotic picking tasks using a parallel-plate gripper. We introduce depth-aware Coordinate Convolution (CoordConv), a method to increase accuracy for point-proposal-based object instance segmentation in complex scenes without adding any additional network parameters or computational complexity.
Depth-aware CoordConv uses depth data to extract prior information about the location of an object to achieve highly accurate object instance segmentation. These resulting segmentation masks, combined with predicted grasp candidates, lead to a complete scene description for grasping using a parallel-plate gripper. We evaluate the accuracy of grasp detection and instance segmentation on challenging robotic picking datasets, namely Siléane and OCID_grasp, and show the benefit of joint grasp detection and segmentation on a real-world robotic picking task.
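The core idea of depth-aware CoordConv, as described in the abstract, can be sketched as follows. Standard CoordConv appends normalized x/y coordinate channels to a feature map before convolution; the depth-aware variant additionally appends a normalized depth channel as a location prior, adding no learnable parameters. The function name and normalization choices below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depth_aware_coordconv_input(features, depth):
    """Augment a feature map with CoordConv channels plus a depth prior.

    Hypothetical sketch of the idea from the abstract: append two normalized
    coordinate channels (as in standard CoordConv) and one normalized depth
    channel to the input, so subsequent convolutions can exploit object
    location priors without any extra parameters.

    features: (C, H, W) array; depth: (H, W) array.
    Returns a (C + 3, H, W) array.
    """
    c, h, w = features.shape
    # Normalized y and x coordinate channels in [-1, 1], as in CoordConv.
    ys = np.tile(np.linspace(-1.0, 1.0, h).reshape(h, 1), (1, w))
    xs = np.tile(np.linspace(-1.0, 1.0, w).reshape(1, w), (h, 1))
    # Min-max normalize the depth map to [0, 1] (assumed normalization).
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.concatenate([features, ys[None], xs[None], d[None]], axis=0)
```

A downstream convolutional layer would then consume the augmented tensor directly; since the extra channels are computed, not learned, the parameter count of the network is unchanged.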
Original language: English
Title of host publication: British Machine Vision Conference (BMVC) 2021
Number of pages: 16
DOIs
Publication status: Published - 2021
Event: 32nd British Machine Vision Conference: BMVC 2021 - Virtual, United Kingdom
Duration: 22 Nov 2021 - 25 Nov 2021

Conference

Conference: 32nd British Machine Vision Conference
Abbreviated title: BMVC 2021
Country/Territory: United Kingdom
City: Virtual
Period: 22/11/21 - 25/11/21

