Hand-Object Contact Consistency Reasoning
for Human Grasps Generation

Hanwen Jiang*,  Shaowei Liu*,  Jiashun Wang,  Xiaolong Wang

ICCV 2021 (Oral)
Paper Code Slides

While predicting robot grasps with parallel-jaw grippers has been well studied and widely applied in robot manipulation tasks, natural human grasp generation with a multi-finger hand remains a very challenging problem. In this paper, we propose to generate human grasps given a 3D object in the world. Our key observation is that it is crucial to model the consistency between the hand contact points and the object contact regions: we encourage the prior hand contact points to be close to the object surface and, at the same time, the common contact regions on the object to be touched by the hand. Based on this hand-object contact consistency, we design novel objectives for training the human grasp generation model, as well as a new self-supervised task that allows the grasp generation network to be adjusted even at test time. Our experiments show that our method improves human grasp generation over state-of-the-art approaches by a large margin. More interestingly, optimizing the model at test time with the self-supervised task yields even larger gains on unseen and out-of-domain objects.
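The contact-consistency idea above can be illustrated with a minimal sketch. The two terms below are assumptions for illustration, not the paper's exact losses: one pulls a set of hand contact points toward the object surface, and the other pulls object points that are likely contact regions toward the hand, using nearest-neighbor distances.

```python
import numpy as np

def contact_consistency_loss(hand_pts, obj_pts, obj_contact_prob):
    """Hypothetical sketch of a contact-consistency objective.

    hand_pts:         (H, 3) prior hand contact points
    obj_pts:          (O, 3) points sampled on the object surface
    obj_contact_prob: (O,)   per-point likelihood of being a contact region
    """
    # Pairwise distances between hand and object points, shape (H, O).
    d = np.linalg.norm(hand_pts[:, None, :] - obj_pts[None, :, :], axis=-1)
    # Term 1: each hand contact point should lie near the object surface.
    hand_to_obj = d.min(axis=1).mean()
    # Term 2: likely object contact regions should be touched by the hand
    # (distances weighted by the contact probability).
    obj_to_hand = (obj_contact_prob * d.min(axis=0)).sum() / (obj_contact_prob.sum() + 1e-8)
    return hand_to_obj + obj_to_hand
```

The loss is zero exactly when every hand contact point lies on the object and every high-probability object region coincides with a hand point, matching the intuition in the abstract.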

Input and Output

Test-Time Adaptation



Results on Obman (in-domain objects)

Note that grasp generation is multi-modal, so generated grasps can differ from the ground truth. Ideal grasps should be natural and physically plausible.





Results on HO-3D (out-of-domain objects)

Diverse Grasps

Object 1

Object 2

Object 3

Object 4

Grasp Displacement in Simulation


@inproceedings{jiang2021graspTTA,
  title     = {Hand-Object Contact Consistency Reasoning for Human Grasps Generation},
  author    = {Jiang, Hanwen and Liu, Shaowei and Wang, Jiashun and Wang, Xiaolong},
  booktitle = {Proceedings of the International Conference on Computer Vision},
  year      = {2021}
}