Sim-and-Human Co-training for Data-Efficient and Generalizable Robotic Manipulation
Abstract
A co-training framework called SimHum leverages the complementary strengths of simulation and human data to improve robotic manipulation performance through shared kinematic and visual priors.
Synthetic simulation data and real-world human data provide scalable alternatives that circumvent the prohibitive costs of robot data collection. However, these sources suffer from the sim-to-real visual gap and the human-to-robot embodiment gap, respectively, which limits the policy's generalization to real-world scenarios. In this work, we identify a natural yet underexplored complementarity between these sources: simulation offers the robot actions that human data lacks, while human data provides the real-world observations that simulation struggles to render. Motivated by this insight, we present SimHum, a co-training framework that simultaneously extracts a kinematic prior from simulated robot actions and a visual prior from real-world human observations. Building on these two complementary priors, we achieve data-efficient and generalizable robotic manipulation in real-world tasks. Empirically, SimHum outperforms the baseline by up to 40% under the same data collection budget, and achieves a 62.5% out-of-distribution (OOD) success rate with only 80 real-world demonstrations, a 7.1× improvement over the real-only baseline. Videos and additional information can be found on the project website: https://kaipengfang.github.io/sim-and-human.
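To make the co-training idea concrete, below is a minimal sketch of how a single policy network could be optimized jointly on the two data sources: a behavior-cloning loss on simulated robot actions (kinematic prior) and an auxiliary loss on real-world human observations (visual prior). The abstract does not specify the paper's actual architecture or objectives, so the class names, heads, auxiliary target, and loss weighting here are illustrative assumptions, not SimHum's implementation.

```python
import torch
from torch import nn

class Policy(nn.Module):
    """Hypothetical policy with a shared visual encoder and two heads."""
    def __init__(self, action_dim=7):
        super().__init__()
        # Shared visual encoder: trained by both data sources (visual prior).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Action head: supervised only by simulated robot actions (kinematic prior).
        self.action_head = nn.Linear(64, action_dim)
        # Auxiliary head: a stand-in objective for human frames (assumption).
        self.aux_head = nn.Linear(64, 64)

def cotrain_step(policy, sim_batch, human_batch, optimizer, lam=1.0):
    """One co-training step over a simulation batch and a human-data batch."""
    sim_img, robot_action = sim_batch          # (image, action) from simulation
    human_img, human_target = human_batch      # real-world frame + auxiliary target

    # Kinematic prior: behavior cloning on simulated robot actions.
    bc_loss = nn.functional.mse_loss(
        policy.action_head(policy.encoder(sim_img)), robot_action)

    # Visual prior: auxiliary objective on real-world human observations.
    vis_loss = nn.functional.mse_loss(
        policy.aux_head(policy.encoder(human_img)), human_target)

    loss = bc_loss + lam * vis_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the shared encoder is what lets the real-world visual prior from human data transfer to the action head trained on simulated robot actions; how SimHum actually couples the two priors is described in the paper, not here.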