Publications

Detailed Information

DSQNet: A Deformable Model-Based Supervised Learning Algorithm for Grasping Unknown Occluded Objects

Cited 2 times in Web of Science; cited 1 time in Scopus
Authors

Kim, Seungyeon; Ahn, Taegyun; Lee, Yonghyeon; Kim, Jihwan; Wang, Michael Yu; Park, Frank C.

Issue Date
Publisher
Institute of Electrical and Electronics Engineers
Citation
IEEE Transactions on Automation Science and Engineering
Abstract
Grasping previously unseen objects for the first time, in which only partially occluded views of the object are available, remains a difficult challenge. Despite their recent successes, deep learning-based end-to-end methods remain impractical when training data and resources are limited and multiple grippers are used. Two-step methods that first identify the object shape and structure using deformable shape templates, then plan and execute the grasp, are free from those limitations, but also have difficulty with partially occluded objects. In this paper, we propose a two-step method that merges a richer set of shape primitives, the deformable superquadrics, with a deep learning network, DSQNet, that is trained to identify complete object shapes from partial point cloud data. Grasps are then generated that take into account the kinematic and structural properties of the gripper while exploiting the closed-form equations available for deformable superquadrics. A seven-dof robotic arm equipped with a parallel jaw gripper is used to conduct experiments involving a collection of household objects, achieving average grasp success rates of 93% (compared to 86% for existing methods), with object recognition times that are ten times faster. Code is available at https://github.com/seungyeon-k/DSQNet-public.

Note to Practitioners: This paper provides a comprehensive two-step method for grasping previously unseen objects, in which only partially occluded views of the object may be available. End-to-end deep learning-based methods typically require large amounts of training data, in the form of images of the objects taken from different angles and with different levels of occlusion, and grasping experiments that record the success and failure of each attempt; if a new gripper is used, more often than not the training data must be recollected and a new set of experiments performed.
Two-step methods that first identify the object structure and shape using deformable shape templates, then plan the grasp based on knowledge of the object shape, are currently a more practical solution, but also have difficulty when only occluded views of the object are available. Our newly proposed two-step method takes advantage of a more flexible set of shape primitives, and also uses a supervised deep learning network to identify the object from occluded views. Our experimental results indicate improved grasp success rates against the state of the art, with recognition times that are up to ten times faster. Our method achieves high recognition and grasping performance and is applicable to most common household objects, but it cannot be directly applied to more diverse public 3D datasets, since it requires human-annotated segmentation labels. In future research, we will extend our deep learning network to learn segmentation automatically, without human-annotated labels, allowing it to recognize more complex and diverse object shapes.
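The closed-form equations referred to above are those of the superquadric family. As a minimal sketch of the general idea (using the standard Barr inside-outside function and an illustrative linear taper; this is not the paper's exact parameterization, and the helper names here are hypothetical):

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function (standard Barr formulation).
    F < 1 inside the shape, F == 1 on the surface, F > 1 outside.
    a1, a2, a3 are semi-axis lengths; e1, e2 control roundness/squareness."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)


def untaper(x, y, z, kx, ky, a3):
    """Inverse of a linear taper along z (one common superquadric deformation):
    maps a point on the deformed shape back to the canonical superquadric,
    where superquadric_F can be evaluated in closed form."""
    fx = kx * z / a3 + 1.0
    fy = ky * z / a3 + 1.0
    return x / fx, y / fy, z


# Example: for a unit sphere (a1=a2=a3=1, e1=e2=1), the point (1, 0, 0)
# lies exactly on the surface, so F evaluates to 1.
print(superquadric_F(1.0, 0.0, 0.0, 1, 1, 1, 1, 1))  # -> 1.0
```

Because membership tests like this are closed-form, a grasp planner can check collision and containment against the recovered shape analytically, rather than against raw point clouds.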
ISSN
1545-5955
URI
https://hdl.handle.net/10371/186685
DOI
https://doi.org/10.1109/TASE.2022.3184873
Files in This Item:
There are no files associated with this item.
Appears in Collections:

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
