A curated list of awesome papers and resources on humanoid manipulation, dexterous manipulation, bimanual dexterous manipulation, and humanlike manipulation. This repo covers upper-body humanoid robot learning, multi-fingered hand manipulation, in-hand object reorientation, and related topics. Inspired by awesome-humanoid-learning.
Keywords: awesome humanoid manipulation, awesome dexterous manipulation, awesome bimanual manipulation, awesome dexterous hand, awesome humanoid robot, awesome robot hand manipulation, in-hand manipulation, dexterous grasping
Scope: This list covers the following topics:
- Humanoid robot manipulation and loco-manipulation
- Dexterous hand manipulation (single-hand, in-hand reorientation, dexterous grasping)
- Bimanual dexterous multi-fingered manipulation
- Dual-arm manipulation with other end effectors
- Teleoperation and human-to-robot retargeting for dexterous tasks
- Physically simulated humanoid animations and digital human-object interaction
Contributions are welcome! Please feel free to submit a pull request or open an issue.
- Awesome Humanoid & Dexterous Manipulation
| Name | Maker | Formats | License | Meshes | Inertias | Collisions |
|---|---|---|---|---|---|---|
| H1 | Unitree | URDF, MJCF, USD | BSD-3-Clause | ✔️ | ✔️ | ✔️ |
| H1-2 (preview) | Unitree | URDF, MJCF, simplified URDF | BSD-3-Clause | ✔️ | ✔️ | ✔️ |
| G1 | Unitree | URDF, MJCF | BSD-3-Clause | ✔️ | ✔️ | ✔️ |
| GR-1 | FFTAI (Fourier) | URDF, MJCF | GPL-3.0 | ✔️ | ✔️ | ✔️ |
| GR-2 | FFTAI (Fourier) | URDF | GPL-3.0 | ✔️ | ✔️ | ✔️ |
| AgiBot X1 | AgiBot | URDF, MJCF | Apache-2.0 | ✔️ | ✔️ | ✔️ |
| Atlas v4 | Boston Dynamics | URDF | MIT | ✔️ | ✔️ | ✔️ |
| Digit | Agility Robotics | URDF | ✖️ | ✔️ | ✔️ | ✔️ |
| Magicbot Z1 | Magiclab | URDF | ✖️ | ✔️ | ✔️ | ✔️ |
| Deep Robotics | Deep Robotics | URDF | ✖️ | ✔️ | ✔️ | ✔️ |
| Berkeley Humanoid Lite | UC Berkeley | URDF, MJCF, USD | Open Source | ✔️ | ✔️ | ✔️ |
| Berkeley Humanoid | UC Berkeley | URDF | Open Source | ✔️ | ✔️ | ✔️ |
Also see: MuJoCo Menagerie for high-quality MJCF models, awesome-robot-descriptions for a comprehensive list.
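For readers new to these formats, a robot description is declarative XML. Below is a minimal hand-written MJCF scene (illustrative only, not taken from any listed repo) with a ground plane and a free-floating cube; the models linked above add meshes, actuators, sensors, and contact parameters on top of the same structure.

```xml
<mujoco model="minimal-scene">
  <option timestep="0.002"/>
  <worldbody>
    <light pos="0 0 3"/>
    <geom name="floor" type="plane" size="1 1 0.1"/>
    <!-- A free-floating 6 cm cube weighing 100 g -->
    <body name="cube" pos="0 0 0.1">
      <freejoint/>
      <geom type="box" size="0.03 0.03 0.03" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
```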
| Name | Maker | Formats | License | Meshes | Inertias | Collisions |
|---|---|---|---|---|---|---|
| Ability Hand | PSYONIC, Inc. | MJCF, URDF | ✖️ | ✔️ | ✔️ | ✖️ |
| Allegro Hand | Wonik Robotics | URDF, MJCF | BSD | ✔️ | ✔️ | ✔️ |
| Shadow Hand | Shadow Robot | URDF, MJCF | BSD | ✔️ | ✔️ | ✔️ |
| LEAP Hand | Carnegie Mellon | URDF | MIT | ✔️ | ✔️ | ✔️ |
| Inspire Hand | Inspire-Robots | URDF | ✖️ | ✔️ | ✔️ | ✔️ |
| ORCA Hand | ORCA Robotics | URDF, MJCF | ✖️ | ✔️ | ✔️ | ✔️ |
| Wuji Hand | Wuji Technology | URDF, MJCF | ✖️ | ✔️ | ✔️ | ✔️ |
| Faive Hand | ETH Zurich (SRL) | MJCF | Open Source | ✔️ | ✔️ | ✔️ |
| SVH Hand | SCHUNK | URDF | ✖️ | ✔️ | ✔️ | ✔️ |
Also see: dex-urdf for a collection of dexterous hand URDFs.
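Since URDF files are plain XML, joint metadata can be inspected without a simulator. The sketch below uses only the Python standard library on a hypothetical two-joint finger; the names (`mcp`, `pip`) and limits are illustrative, not taken from any listed model.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal URDF for a two-joint finger; real hand URDFs
# (e.g. from dex-urdf) add meshes, inertials, and many more joints.
URDF = """
<robot name="toy_finger">
  <link name="base"/>
  <link name="proximal"/>
  <link name="distal"/>
  <joint name="mcp" type="revolute">
    <parent link="base"/>
    <child link="proximal"/>
    <limit lower="0.0" upper="1.57" effort="1.0" velocity="2.0"/>
  </joint>
  <joint name="pip" type="revolute">
    <parent link="proximal"/>
    <child link="distal"/>
    <limit lower="0.0" upper="1.75" effort="1.0" velocity="2.0"/>
  </joint>
</robot>
"""

def joint_limits(urdf_xml: str) -> dict[str, tuple[float, float]]:
    """Map each non-fixed joint name to its (lower, upper) limit in radians."""
    root = ET.fromstring(urdf_xml)
    limits = {}
    for joint in root.iter("joint"):
        if joint.get("type") == "fixed":
            continue
        lim = joint.find("limit")
        if lim is not None:
            limits[joint.get("name")] = (float(lim.get("lower")),
                                         float(lim.get("upper")))
    return limits

print(joint_limits(URDF))  # {'mcp': (0.0, 1.57), 'pip': (0.0, 1.75)}
```

The same traversal works on any of the hand or humanoid URDFs above, which is handy for sanity-checking limits before retargeting or RL training.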
| Name | Maker | Formats | License | Meshes | Inertias | Collisions |
|---|---|---|---|---|---|---|
| YuMi | ABB | URDF | BSD-2-Clause | ✔️ | ✔️ | ✔️ |
| Dual iiwa 14 | KUKA | URDF, Xacro | BSD-3-Clause | ✔️ | ✔️ | ✔️ |
| ALOHA 2 | Google DeepMind | URDF | Apache-2.0 | ✔️ | ✔️ | ✔️ |
| Name | Description | Link |
|---|---|---|
| ManiSkill | GPU-parallelized robotics simulator with dexterous manipulation tasks | [github] [doc] |
| Isaac Lab | NVIDIA Isaac Sim-based robot learning framework with dexterous manipulation support | [github] |
| DexGraspNet | Large-scale robotic dexterous grasp dataset for general objects | [github] [paper] |
| HumanoidBench | Simulated humanoid benchmark for whole-body locomotion and manipulation | [github] [paper] |
| BiGym | Demo-driven mobile bi-manual manipulation benchmark | [github] [paper] |
| GRUtopia | City-scale simulated environments for general-purpose robots | [github] [paper] |
| Humanoid-Gym | RL for humanoid robot with zero-shot sim2real transfer | [github] [paper] |
| DexterousHands | Bimanual dexterous manipulation benchmark with multi-agent RL baselines | [github] |
| MuJoCo Playground | Google DeepMind sim-to-real platform for humanoids, hands, quadrupeds | [website] |
| RoboCasa | Large-scale household task simulation with 120+ kitchen scenes | [website] |
| DexGraspNet 2.0 | Dexterous grasping in cluttered scenes | [github] |
CVPR 2025 [Humanoid Agents Workshop]
RSS 2025 [3rd Workshop on Dexterous Manipulation: Learning and Control with Diverse Data]
CoRL 2025 [2nd Workshop on Dexterous Manipulation: Learning and Control with Diverse Modalities]
CoRL 2025 [Workshop on Generalizable Priors for Robot Manipulation]
Humanoids 2025 [Dexterous Humanoid Manipulation Workshop]
ICLR 2025 [7th Robot Learning Workshop: Towards Robots with Human-Level Abilities]
CoRL 2024 [Workshop on Whole-body Control and Bimanual Manipulation: Applications in Humanoids and Beyond]
CoRL 2024 [Learning Robot Fine and Dexterous Manipulation: Perception and Control]
RSS 2024 [2nd Workshop on Dexterous Manipulation: Design, Perception and Control]
AgiBot-World [AgiBot World: A Large-scale Manipulation Platform] [github]
LeRobot [LeRobot: State-of-the-art AI for real-world robotics]
LEAP Hand [A Low-Cost Dexterous Hand for Robot Learning] [github]
DOGlove [Low-Cost Haptic Force Feedback Glove]
ACE Teleop [Cross-Platform Visual-Exoskeletons for Low-Cost Dexterous Teleoperation] [github]
GR00T N1 [Open Foundation Model for Generalist Humanoid Robots] [github]
BEHAVIOR Robot Suite [Streamlining Real-World Whole-Body Manipulation] [github]
MuJoCo Playground [Sim-to-Real Platform for Diverse Robots]
HOMIE [Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit] [github]
Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation [pkg] [doc]
dex-urdf: A collection of high-quality URDF models for dexterous hands and objects [pkg]
robot_descriptions: Python package to load robot description files (URDF, MJCF) [pkg]
- Awesome-Dexterous-Manipulation - Resources on dexterous manipulation
- Awesome-Loco-Manipulation - Locomotion and manipulation
- Awesome-Robotics-Manipulation - Comprehensive robot manipulation papers
- Awesome-What-Bimanual-Can-Do - Bimanual manipulation
- awesome-humanoid-learning - Humanoid robot learning
- awesome-humanoid-robot-learning - Humanoid robot learning papers
- Awesome-Touch - Tactile sensing and manipulation
- awesome-robot-descriptions - URDF/MJCF robot models
YYYY.MM denotes the date the paper first appeared on arXiv (where available).
- [2025.04] Dexterous Manipulation through Imitation Learning: A Survey [paper]
- [2025.01] Humanoid Locomotion and Manipulation: Current Progress and Challenges in Control, Planning, and Learning [paper]
- [2026.03] HumDex: Humanoid Dexterous Manipulation Made Easy [teleop] [paper]
- [2026.03] Omni-Manip: Beyond-FOV Large-Workspace Humanoid Manipulation with Omnidirectional 3D Perception [3D perception] [paper]
- [2025.11] VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation [RL] [sim2real] [paper]
- [2025.10] DemoHLM: From One Demonstration to Generalizable Humanoid Loco-Manipulation [IL] [paper]
- [2025.09] VisualMimic: Visual Humanoid Loco-Manipulation via Motion Tracking and Generation [IL] [paper] [project] [code]
- [2025.09] DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion [diffusion] [paper]
- [2025.05] FALCON: Learning Force-Adaptive Humanoid Loco-Manipulation [RL] [paper]
- [2025.05] Humanoid Loco-manipulation Planning based on Graph Search and Reachability Maps [planning] [paper]
- [2025.05] MaskedManipulator: Versatile Whole-Body Manipulation [RL] [paper]
- [2025.03] GR00T N1: An Open Foundation Model for Generalist Humanoid Robots [VLA] [diffusion] [paper] [code]
- [2025.03] BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household Activities [IL] [paper] [project] [code]
- [2025.03] Humanoids in Hospitals: A Technical Study of Humanoid Robot Surrogates for Dexterous Medical Interventions [paper] [project]
- [2025.03] FLAM: Foundation Model-Based Body Stabilization for Humanoid Locomotion and Manipulation [paper] [project]
- [2025.03] KINESIS: Motion Imitation for Human Musculoskeletal Locomotion [RL] [paper] [code]
- [2025.02] Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids [RL] [sim2real] [paper] [openreview]
- [2025.02] InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions [RL] [paper]
- [2025.02] HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit [teleop] [paper] [project] [code]
- [2025.02] A Unified and General Humanoid Whole-Body Controller for Versatile Locomotion [RL] [paper] [project]
- [2025.02] DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning [IL] [paper] [project] [code]
- [2025.02] Dexterous Safe Control for Humanoids in Cluttered Environments via Projected Safe Set Algorithm [Control] [project] [paper]
- [2025.01] RoboPanoptes: The All-seeing Robot with Whole-body Dexterity [IL] [paper] [code]
- [2025.01] Motion Tracks: A Unified Representation for Human-Robot Transfer in Few-Shot Imitation Learning [IL] [paper] [project]
- [2024.12] Mimicking-Bench: A Benchmark for Generalizable Humanoid-Scene Interaction Learning via Human Mimicking [benchmark] [project] [paper]
- [2024.12] ARMOR: Egocentric Perception for Humanoid Robot Collision Avoidance and Motion Planning [IL] [MP] [paper]
- [2024.12] Mobile-TeleVision: Predictive Motion Priors for Humanoid Whole-Body Control [RL] [project] [paper] [code]
- [2024.10] EgoMimic: Scaling Imitation Learning via Egocentric Video [IL] [project] [paper] [code]
- [2024.10] DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning [IL] [project] [paper]
- [2024.10] OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation [IL] [project] [paper] [openreview]
- [2024.10] Generalizable Humanoid Manipulation with 3D Diffusion Policies [IL] [diffusion] [project] [paper] [code]
- [2024.08] RP1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands [Dataset] [project] [paper] [code]
- [2024.07] Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [teleop] [project] [paper] [code]
- [2024.07] GRUtopia: Dream General Robots in a City at Scale [benchmark] [doc] [paper] [code]
- [2024.07] BiGym: A Demo-Driven Mobile Bi-Manual Manipulation Benchmark [benchmark] [project] [paper]
- [2024.06] HumanPlus: Humanoid Shadowing and Imitation from Humans [IL] [project] [paper] [code]
- [2024.06] HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [VLM] [project] [paper]
- [2024.06] OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning [benchmark] [project] [paper] [code]
- [2024.04] Large Language Models for Orchestrating Bimanual Robots [LLM] [paper] [project] [code]
- [2024.04] Humanoid-Gym: Reinforcement Learning for Humanoid Robot with Zero-Shot Sim2Real Transfer [RL] [benchmark] [paper] [project] [code]
- [2024.03] HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [benchmark] [project] [paper] [code]
- [2024.03] Bi-KVIL: Keypoints-based Visual Imitation Learning of Bimanual Manipulation Tasks [IL] [project] [paper] [code]
- [2026.02] DexRepNet++: Learning Dexterous Robotic Manipulation with Geometric and Spatial Hand-Object Representations [RL] [paper]
- [2026.02] UniMorphGrasp: Diffusion Model with Morphology-Awareness for Cross-Embodiment Dexterous Grasp Generation [diffusion] [paper]
- [2026.02] DexEvolve: Evolutionary Optimization for Robust and Diverse Dexterous Grasp Synthesis [optimization] [paper]
- [2026.02] SimToolReal: An Object-Centric Policy for Zero-Shot Dexterous Tool Manipulation [RL] [sim2real] [paper]
- [2026.01] Closing the Reality Gap: Zero-Shot Sim-to-Real Deployment for Dexterous Force-Based Grasping and Manipulation [RL] [sim2real] [paper]
- [2025.12] World Models for Learning Dexterous Hand-Object Interactions from Human Videos [world model] [paper] [project]
- [2025.10] SaTA: Spatially-anchored Tactile Awareness for Robust Dexterous Manipulation [touch] [IL] [paper]
- [2025.10] DexCanvas: Bridging Human Demonstrations and Robot Learning for Dexterous Manipulation [Dataset] [paper] [code]
- [2025.09] DexFlyWheel: A Scalable and Self-improving Data Generation Framework for Dexterous Manipulation [Dataset] [paper] [openreview]
- [2025.09] OpenEgo: A Large-Scale Multimodal Egocentric Dataset for Dexterous Manipulation [Dataset] [paper]
- [2025.09] Dexplore: Scalable Neural Control for Dexterous Manipulation from Reference-Scoped Exploration [RL] [paper]
- [2025.09] The Role of Touch: Towards Optimal Tactile Sensing Distribution in Anthropomorphic Hands for Dexterous In-Hand Manipulation [touch] [RL] [paper]
- [2025.09] In-Hand Manipulation of Articulated Tools with Dexterous Robot Hands with Sim-to-Real Transfer [RL] [sim2real] [paper]
- [2025.09] Learning Dexterous Manipulation with Quantized Hand State [RL] [paper]
- [2025.08] DexReMoE: In-hand Reorientation of General Object via Mixtures of Experts [RL] [paper]
- [2025.07] DexVLG: Dexterous Vision-Language-Grasp Model at Scale [VLM] [grasping] [paper] [project] [code]
- [2025.06] mimic-one: a Scalable Model Recipe for General Purpose Robot Dexterity [diffusion] [paper] [openreview]
- [2025.06] Scaffolding Dexterous Manipulation with Vision-Language Models [VLM] [RL] [paper]
- [2025.06] ClutterDexGrasp: A Sim-to-Real System for General Dexterous Grasping in Cluttered Scenes [RL] [sim2real] [paper]
- [2025.05] DexCtrl: Towards Sim-to-Real Dexterity with Adaptive Controller Learning [RL] [sim2real] [paper]
- [2025.05] Dexterous Contact-Rich Manipulation via the Contact Trust Region [MPC] [contact-rich] [paper]
- [2025.05] EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video [Dataset] [paper] [code]
- [2025.04] Dexonomy: Synthesizing All Dexterous Grasp Types in a Grasp Taxonomy [optimization] [grasping] [paper] [project] [code]
- [2025.04] RobustDexGrasp: Robust Dexterous Grasping of General Objects [RL] [sim2real] [paper] [project]
- [2025.04] Multi-Goal Dexterous Hand Manipulation using Probabilistic Model-based Reinforcement Learning [RL] [paper]
- [2025.03] DexGrasp Anything: Towards Universal Robotic Dexterous Grasping with Physics Awareness [diffusion] [paper] [project] [code]
- [2025.03] Grasping a Handful: Sequential Multi-Object Dexterous Grasp Generation [optimization] [grasping] [paper]
- [2025.03] RoboDexVLM: Visual Language Model-Enabled Task Planning and Motion Control for Dexterous Robot Manipulation [VLM] [paper] [project]
- [2025.03] Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation [diffusion] [touch] [paper]
- [2025.03] GAGrasp: Geometric Algebra Diffusion for Dexterous Grasping [diffusion] [paper] [project]
- [2025.03] Learning Adaptive Dexterous Grasping from Single Demonstrations [RL] [VLM] [paper] [project]
- [2025.03] Dexterous Grasping with Real-World Robotic Reinforcement Learning [RL] [paper]
- [2025.03] Learning Dexterous In-Hand Manipulation with Multifingered Hands via Visuomotor Diffusion [diffusion] [IL] [paper]
- [2025.03] Training Tactile Sensors to Learn Force Sensing from Each Other [touch] [paper]
- [2025.02] DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping [VLA] [grasping] [paper] [project] [code]
- [2025.02] AnyDexGrasp: General Dexterous Grasping for Different Hands with Human-level Learning Efficiency [RL] [sim2real] [paper] [project] [code]
- [2025.02] DexterityGen: Foundation Controller for Unprecedented Dexterity [RL] [paper]
- [2025.02] CordViP: Correspondence-based Visuomotor Policy for Dexterous Manipulation in Real-World [IL] [paper] [project]
- [2025.02] FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning [IL] [paper] [project]
- [2025.01] From Simple to Complex Skills: The Case of In-Hand Object Reorientation [RL] [sim2real] [paper]
- [2025.01] Learning to Transfer Human Hand Skills for Robot Manipulations [IL] [mocap] [paper] [project]
- [2024.12] BODex: Scalable and Efficient Robotic Dexterous Grasp Synthesis Using Bilevel Optimization [optimization] [paper]
- [2024.12] Dexterous Manipulation Based on Prior Dexterous Grasp Pose Knowledge [RL] [paper]
- [2024.11] Object-Centric Dexterous Manipulation from Human Motion Data [RL] [project] [paper]
- [2024.11] DexH2R: Task-oriented Dexterous Manipulation from Human to Robots [RL] [paper]
- [2024.10] DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes [diffusion] [grasping] [paper] [project] [code]
- [2024.10] Cross-Embodiment Dexterous Grasping with Reinforcement Learning [RL] [grasping] [paper] [code]
- [2024.09] DexSim2Real2: Building Explicit World Model for Precise Articulated Object Dexterous Manipulation [world model] [MPC] [paper] [code]
- [2024.08] Complementarity-Free Multi-Contact Modeling and Optimization for Dexterous Manipulation [MPC] [contact-rich] [paper] [code]
- [2024.07] DexGANGrasp: Dexterous Generative Adversarial Grasping Synthesis for Task-Oriented Manipulation [GAN] [grasping] [paper] [project] [code]
- [2024.03] Dexterous Functional Pre-Grasp Manipulation with Diffusion Policy [diffusion] [RL] [paper] [project]
- [2024.01] DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [touch] [RL] [sim2real] [paper]
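Many of the grasp-synthesis papers above score candidate grasps with force-closure-style metrics. As a toy illustration of the underlying idea, here is a planar two-finger antipodal test: a grasp achieves force closure with friction when the line joining the contacts lies inside both friction cones. Real pipelines solve the analogous condition in 6-D wrench space with many contacts; all values below are illustrative.

```python
import math

def antipodal_2d(p1, n1, p2, n2, mu=0.5):
    """Toy two-finger force-closure test in the plane.

    p1, p2: contact points; n1, n2: inward unit contact normals.
    The grasp is antipodal when the angle between each inward normal and
    the line joining the contacts is at most atan(mu), the friction
    half-cone angle.
    """
    half_angle = math.atan(mu)
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    d = (dx / norm, dy / norm)  # unit vector from contact 1 to contact 2
    # Clamp dot products for acos to guard against rounding error.
    ang1 = math.acos(max(-1.0, min(1.0, n1[0] * d[0] + n1[1] * d[1])))
    ang2 = math.acos(max(-1.0, min(1.0, -(n2[0] * d[0] + n2[1] * d[1]))))
    return ang1 <= half_angle and ang2 <= half_angle

# Pinch grasp on opposite faces of a box: force closure.
print(antipodal_2d((0, 0), (1, 0), (1, 0), (-1, 0)))  # True
# Both contacts pushing from the same side: no closure.
print(antipodal_2d((0, 0), (1, 0), (0, 1), (1, 0)))   # False
```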
- [2026.01] DemoBot: Efficient Learning of Bimanual Manipulation with Dexterous Hands From Third-Person Human Videos [RL] [IL] [paper]
- [2025.10] DexMan: Learning Bimanual Dexterous Manipulation from Human and Generated Videos [IL] [paper] [openreview]
- [2025.08] HERMES: Human-to-Robot Embodied Learning from Multi-Source Motion Data for Mobile Dexterous Manipulation [RL] [sim2real] [paper] [openreview]
- [2025.07] HumanoidGen: Data Generation for Bimanual Dexterous Manipulation via LLM Reasoning [Dataset] [LLM] [paper] [project] [openreview]
- [2025.05] DexMachina: Functional Retargeting for Bimanual Dexterous Manipulation [RL] [paper]
- [2025.03] ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning [IL] [paper]
- [2024.12] GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities [Dataset] [project] [paper]
- [2024.11] Bimanual Dexterity for Complex Tasks [IL] [project] [paper]
- [2024.10] Learning Diverse Bimanual Dexterous Manipulation Skills from Human Demonstrations [IL] [project] [paper] [openreview]
- [2024.07] Bunny-VisionPro: Real-Time Bimanual Dexterous Teleoperation for Imitation Learning [IL] [project] [paper] [code]
- [2024.04] Learning Visuotactile Skills with Two Multifingered Hands [IL] [touch] [project] [paper] [code]
- [2024.03] DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation [IL] [project] [paper] [code]
- [2025.12] GR-Dexter Technical Report [teleop] [paper]
- [2025.07] Dexterous Teleoperation of 20-DoF ByteDexter Hand via Human Motion Retargeting [teleop] [paper]
- [2025.07] TypeTele: Releasing Dexterity in Teleoperation by Dexterous Manipulation Types [teleop] [paper]
- [2025.07] Human-Exoskeleton Kinematic Calibration to Improve Hand Tracking for Dexterous Teleoperation [teleop] [paper]
- [2025.06] GEX: Democratizing Dexterity with Fully-Actuated Dexterous Hand and Exoskeleton Glove [teleop] [paper]
- [2025.05] TeleOpBench: A Simulator-Centric Benchmark for Dual-Arm Dexterous Teleoperation [benchmark] [teleop] [paper]
- [2025.05] DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation [teleop] [paper]
- [2025.03] Exo-ViHa: A Cross-Platform Exoskeleton System with Visual and Haptic Feedback for Efficient Dexterous Skill Learning [teleop] [paper]
- [2025.02] DOGlove: Dexterous Manipulation with a Low-Cost Open-Source Haptic Force Feedback Glove [teleop] [paper]
- [2024.08] ACE: A Cross-Platform Visual-Exoskeletons System for Low-Cost Dexterous Teleoperation [teleop] [project] [paper] [code]
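Most teleoperation systems above include a retargeting step from tracked human hand poses to robot hand targets. The sketch below shows the simplest vector-style variant: scale each wrist-to-fingertip vector by a fixed ratio and hand the resulting Cartesian targets to a downstream IK or optimization solver. All names and numbers are illustrative, not from any listed system.

```python
def retarget_fingertips(human_wrist, human_tips, scale=0.8):
    """Toy vector retargeting: reproduce scaled wrist-to-fingertip vectors.

    Real systems solve an optimization over robot joint angles; this
    sketch only computes the Cartesian fingertip targets (in the robot
    wrist frame) that such an optimizer would track.
    """
    targets = {}
    for finger, tip in human_tips.items():
        vec = tuple(t - w for t, w in zip(tip, human_wrist))
        targets[finger] = tuple(scale * v for v in vec)
    return targets

# Illustrative human fingertip positions (meters, wrist frame).
human_tips = {"thumb": (0.05, 0.03, 0.02), "index": (0.02, 0.10, 0.01)}
print(retarget_fingertips((0.0, 0.0, 0.0), human_tips, scale=0.5))
```

The scale factor compensates for the size difference between a human hand and hands like the Allegro or Shadow; per-finger scales and offsets are common refinements.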
- [2025.10] ManiDP: Manipulability-Aware Diffusion Policy for Posture-Dependent Bimanual Manipulation [diffusion] [paper]
- [2025.05] Towards a Generalizable Bimanual Foundation Policy via Flow-based Video Prediction [IL] [foundation] [paper]
- [2025.03] Learning Bimanual Manipulation via Action Chunking and Inter-Arm Coordination with Transformers [IL] [paper]
- [2025.03] LLM+MAP: Bimanual Robot Task Planning using Large Language Models and Planning Domain Definition Language [LLM] [paper] [code]
- [2025.03] Rethinking Bimanual Robotic Manipulation: Learning with Decoupled Interaction Framework [IL] [paper]
- [2025.01] YOTO: You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations [IL] [paper]
- [2024.12] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation [RL] [paper] [project] [code]
- [2024.11] AsymDex: Asymmetry and Relative Coordinates for RL-based Bimanual Dexterity [RL] [paper] [project]
- [2024.10] RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation [IL] [foundation] [github] [paper] [project]
- [2024.10] ALOHA Unleashed: A Simple Recipe for Robot Dexterity [IL] [project] [paper]
- [2024.09] ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation [VLM] [project] [paper] [code]
- [2024.07] VoxAct-B: Voxel-Based Acting and Stabilizing Policy for Bimanual Manipulation [IL] [VLM] [project] [paper] [code]
- [2024.07] PerAct2: Benchmarking and Learning for Robotic Bimanual Manipulation Tasks [IL] [project] [paper] [code]
- [2024.01] Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation [IL] [project] [paper] [code(learning)] [code(hardware)]
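ALOHA-family policies predict chunks of future actions and smooth the overlapping predictions with temporal ensembling: at each execution step, every chunk's prediction for that step is averaged with exponentially decaying weights on older chunks. A minimal sketch of that ensembling step, with scalar actions for brevity (real policies emit joint-space vectors):

```python
import math

def temporal_ensemble(chunks, t, m=0.1):
    """Toy ACT-style temporal ensembling.

    `chunks` maps the step a chunk was predicted at -> list of actions
    for that step onward. At execution step t, average every chunk's
    prediction for step t, weighting a chunk of age (t - start) by
    exp(-m * age), so fresher predictions dominate.
    """
    num, den = 0.0, 0.0
    for start, actions in chunks.items():
        idx = t - start
        if 0 <= idx < len(actions):
            w = math.exp(-m * (t - start))
            num += w * actions[idx]
            den += w
    return num / den

chunks = {0: [1.0, 2.0, 3.0], 1: [2.0, 3.0], 2: [4.0]}
print(round(temporal_ensemble(chunks, 2), 3))  # 3.367
```

Smaller `m` weights all chunks nearly equally (smoother but laggier); larger `m` trusts only the newest prediction.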
- [2025.03] SceneMI: Motion In-betweening for Modeling Human-Scene Interactions [diffusion] [paper] [project] [code]
- [2025.02] InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions [RL] [paper]
- [2025.02] Generating Physically Realistic and Directable Human Motions from Multi-Modal Inputs [RL] [mocap] [paper]
- [2024.11] SIMS: Simulating Stylized Human-Scene Interactions with Retrieval-Augmented Script Generation [RL] [paper]
- [2024.10] Autonomous Character-Scene Interaction Synthesis from Text Instruction [mocap] [project] [paper]
- [2024.07] Omnigrasp: Grasping Diverse Objects with Simulated Humanoids [RL] [project] [paper]
- [2024.06] Human-Object Interaction from Human-Level Instructions [LLM] [project] [paper]
- [2024.06] CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics [RL] [project] [paper]
- [2024.06] CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement [mocap] [project] [paper] [code]
- [2024.04] HOI-M3: Capture Multiple Humans and Objects Interaction within Contextual Environment [mocap] [project] [paper] [code]
- [2024.03] AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents [RL] [project] [paper] [code]