
Dexterous Manipulation: Teaching Robot Hands

In-hand rotation, tool use, DexGraspNet and tactile sensing — complete guide to dexterous manipulation with multi-finger robot hands.

Nguyen Anh Tuan · March 18, 2026 · 7 min read

Beyond Parallel-Jaw Gripper

In previous posts in this series, I focused mainly on parallel-jaw grippers — two fingers that open and close. They're the most common because they're simple, cheap, and sufficient for many tasks (pick-and-place, bin picking).

But humans have 5 fingers with 20+ DOF, enabling delicate manipulation: rotating a ball, using tools, opening a bottle with one hand. Dexterous manipulation with multi-finger robot hands is the next frontier — and one of the hardest problems in robotics.

This post covers: hardware (Allegro, LEAP, Shadow Hand), core problems (in-hand manipulation, tool use), datasets (DexGraspNet), tactile sensing, and state-of-the-art methods.

See Tactile Sensing for Manipulation for a deep dive on tactile sensors.

Multi-finger robot hand — frontier of dexterous manipulation

Hardware: Popular Robot Hands

Allegro Hand

The Allegro Hand from Wonik Robotics is the most popular in research:

LEAP Hand

The LEAP Hand from Carnegie Mellon is a low-cost alternative:

Shadow Dexterous Hand

The Shadow Hand is the gold standard — the most human-like:

Comparison

Criterion       | Allegro          | LEAP              | Shadow
----------------|------------------|-------------------|------------------
DOF             | 16               | 16                | 24
Price           | ~15K USD         | ~2K USD           | ~100K+ USD
Torque sensing  | Yes              | No                | Yes
Tactile         | No (add-on)      | No                | BioTac (optional)
Open-source     | No               | Yes               | No
Sim models      | MuJoCo, Isaac    | MuJoCo            | MuJoCo, Isaac
Best for        | Research balance | Education, budget | Top-tier research

Core Problem: In-Hand Manipulation

In-Hand Object Rotation

In-hand rotation is the classic benchmark: the robot hand holds an object (a cube or a sphere) and rotates it to a target orientation without dropping it. It sounds simple but is extremely hard:

OpenAI's Rubik's Cube (2019) was a milestone: a Shadow Hand solved a Rubik's Cube after RL training in simulation with Automatic Domain Randomization (ADR). The policy accumulated 13,000+ years of simulated experience and transferred zero-shot to the real robot.
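The core idea of ADR can be sketched in a few lines: each randomized simulation parameter has a sampling range that widens whenever the policy succeeds at the current difficulty. A minimal sketch, with illustrative step sizes and thresholds (not OpenAI's actual values):

```python
import random

class ADRParameter:
    """One randomized sim parameter whose range grows with success."""
    def __init__(self, low, high, step=0.05):
        self.low, self.high, self.step = low, high, step

    def sample(self):
        # Sample a value for the next training episode
        return random.uniform(self.low, self.high)

    def expand(self, success_rate, threshold=0.8):
        # Widen the randomization range only once the policy
        # already succeeds at the current level of difficulty
        if success_rate >= threshold:
            self.low -= self.step
            self.high += self.step

friction = ADRParameter(low=0.9, high=1.1)
friction.expand(success_rate=0.95)
print(friction.low, friction.high)  # range grew to ~[0.85, 1.15]
```

Over training, ranges like friction, object mass, and hand size keep growing, forcing the policy to become robust enough to survive the reality gap.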

Modern approach (2024-2026): RL in Isaac Lab with 4,096+ parallel environments. Unitree uses this for their dexterous hands, and Google DeepMind uses it for in-hand manipulation research.

Tool Use

The ability to use tools is what distinguishes human hands. Dexterous robot manipulation aims for:

Current methods use keypoint-based representations: define important points on the tool and the hand, then learn a policy to align them. Stanford and UC Berkeley lead here.
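To make the keypoint idea concrete, here is a hypothetical scoring function (the keypoints and coordinates are made up): it measures how far the hand's grasp targets are from the corresponding points on the tool.

```python
import numpy as np

def keypoint_alignment_cost(tool_keypoints, hand_keypoints):
    """Mean Euclidean distance between paired 3D keypoints.
    Lower cost means the hand is better aligned with the tool."""
    diffs = np.asarray(tool_keypoints) - np.asarray(hand_keypoints)
    return float(np.linalg.norm(diffs, axis=1).mean())

# Two points along a hammer handle vs. the hand's grasp targets,
# offset by 2 cm in z
tool_kps = [[0.0, 0.0, 0.00], [0.1, 0.0, 0.00]]
hand_kps = [[0.0, 0.0, 0.02], [0.1, 0.0, 0.02]]
cost = keypoint_alignment_cost(tool_kps, hand_kps)
print(f"alignment cost: {cost:.3f} m")
```

A policy can then be trained (e.g. with RL or imitation) to drive this cost toward zero as part of its objective.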

DexGraspNet: Large-Scale Dexterous Grasp Dataset

DexGraspNet (Wang et al., 2023) is the largest dexterous grasping dataset:

Using DexGraspNet

# Load DexGraspNet data (simplified)
import numpy as np

# Each object has ~200+ diverse grasps
grasp_data = np.load("dexgraspnet/core-bottle-1a7ba1f4c892e2da30711cdbdbc73e71.npy",
                      allow_pickle=True).item()

# Each grasp contains:
# - hand_pose: (22,) — wrist 6D + 16 joint angles
# - hand_qpos: (16,) — joint positions
# - target_qpos: (16,) — target joint positions for grasping
# - score: float — grasp quality (force closure based)

print(f"Number of grasps: {len(grasp_data['grasps'])}")
print(f"Object category: {grasp_data['category']}")
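The per-grasp quality score makes it easy to filter low-quality grasps before training. A sketch on synthetic records, since the exact file schema varies between DexGraspNet releases:

```python
# Keep only grasps above a force-closure quality threshold
# (synthetic grasp records for illustration)
grasps = [
    {"hand_qpos": [0.0] * 16, "score": 0.92},
    {"hand_qpos": [0.1] * 16, "score": 0.40},
    {"hand_qpos": [0.2] * 16, "score": 0.75},
]

MIN_SCORE = 0.7
good_grasps = [g for g in grasps if g["score"] >= MIN_SCORE]
print(f"kept {len(good_grasps)}/{len(grasps)} grasps")  # kept 2/3 grasps
```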

DexGraspNet 2.0

DexGraspNet 2.0 (2024) expands to:

Tactile Sensing: "Touch" for Robots

Why Does Tactile Matter for Dexterous Manipulation?

Vision shows where the object is, but tactile shows what it feels like: force, slip detection, surface texture. For dexterous manipulation, tactile is essential:
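As a toy illustration of slip detection (not a production method): slip shows up as large frame-to-frame change in the tactile image, so even simple frame differencing gives a usable signal.

```python
import numpy as np

def slip_score(prev_frame, curr_frame):
    """Mean absolute per-pixel change between consecutive tactile frames.
    Large values suggest the contact patch is moving, i.e. slipping."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    return float(np.abs(diff).mean())

# Synthetic frames: stable contact vs. a contact patch that shifted
stable = np.zeros((240, 320), dtype=np.uint8)
slipping = stable.copy()
slipping[100:140, 100:140] = 50  # patch moved into a new region

print(slip_score(stable, stable))    # 0.0 (no change, no slip)
print(slip_score(stable, slipping))  # > 0 (patch moved)
```

Real controllers use learned slip classifiers, but the underlying signal is the same temporal change in the contact image.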

Types of Tactile Sensors

Vision-based (GelSight, DIGIT):

Capacitive/resistive (BioTac):

Piezoelectric:

DIGIT Sensor for Research

DIGIT from Meta AI is the most popular in research:

# Read DIGIT sensor data (digit-interface package from Meta)
from digit_interface import Digit

digit = Digit("D00123")  # sensor serial number
digit.connect()
digit.set_resolution(Digit.STREAMS["QVGA"])  # 320x240 stream

# Get a frame from the sensor (RGB image of the gel deformation)
frame = digit.get_frame()  # (240, 320, 3) array

# Contact detection and force estimation are not built in;
# detect_contact and estimate_force stand for your own models,
# e.g. trained against a no-contact baseline frame
is_contact = detect_contact(frame, baseline_frame)
force_estimate = estimate_force(frame, calibration_model)

Tactile sensing and sim-to-real transfer for dexterous manipulation

State-of-the-Art Methods

RL + Sim-to-Real (Dominant Approach)

The most common method for dexterous manipulation:

  1. Setup in Isaac Lab: robot hand + object, reward function
  2. Train PPO/SAC with 4,096+ parallel environments
  3. Domain randomization: friction, mass, sensor noise, hand dimensions
  4. Deploy zero-shot to real robot
# Reward function for in-hand rotation (simplified)
def reward_function(env):
    # Orientation error between current and target
    rot_error = quaternion_distance(
        env.object_quat, env.target_quat
    )

    # Bonus when achieving target
    success_bonus = 10.0 if rot_error < 0.1 else 0.0

    # Penalty for dropping
    drop_penalty = -5.0 if env.object_pos[2] < 0.3 else 0.0

    # Encourage contact between finger and object
    fingertip_reward = -0.1 * fingertip_to_object_distance(env)

    return -rot_error + success_bonus + drop_penalty + fingertip_reward
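The reward above assumes a quaternion_distance helper. One standard implementation is the geodesic angle between unit quaternions (taking abs() of the dot product because q and -q encode the same rotation):

```python
import numpy as np

def quaternion_distance(q1, q2):
    """Geodesic rotation angle (radians) between two unit quaternions."""
    dot = abs(float(np.dot(q1, q2)))  # q and -q are the same rotation
    return 2.0 * float(np.arccos(np.clip(dot, -1.0, 1.0)))

identity = np.array([1.0, 0.0, 0.0, 0.0])   # (w, x, y, z)
rot_180_z = np.array([0.0, 0.0, 0.0, 1.0])  # 180 degrees about z

print(quaternion_distance(identity, identity))   # 0.0
print(quaternion_distance(identity, rot_180_z))  # ~3.1416 (pi)
```

The np.clip guards against floating-point dot products slightly outside [-1, 1], which would otherwise make arccos return NaN.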

Diffusion Policy for Dexterous

The 2025-2026 trend is to use Diffusion Policy instead of RL. The advantages: data can be collected via human teleoperation (more natural), and no reward engineering is needed.

Stanford IRIS showed that Diffusion Policy can learn in-hand reorientation from 100 human demos, achieving performance comparable to RL after thousands of hours of sim training.

Teacher-Student Framework

An effective approach: train a teacher policy in sim with privileged information (ground-truth object pose, contact forces), then distill it into a student policy that uses only information available on the real robot (images, joint positions, tactile).

Teacher (sim, privileged):
  Input: object pose + contact forces + joint pos
  Output: action
  Method: RL (PPO) with full state

Student (real-world compatible):
  Input: images + joint pos + tactile
  Output: action
  Method: BC from teacher demonstrations
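In code, the distillation step is plain behavior cloning: regress the student's actions onto the teacher's actions for the same states. A minimal numpy sketch with a made-up linear "teacher" and "student" (real systems use image/tactile encoders and deep policies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher demonstrations: (state, action) pairs gathered in sim.
# Here the teacher is a synthetic linear policy over an 8-D state.
W_teacher = rng.normal(size=(8, 4))
states = rng.normal(size=(1024, 8))
actions = states @ W_teacher  # privileged teacher actions

# Student: linear policy trained by behavior cloning (MSE regression)
W_student = np.zeros((8, 4))
lr = 0.01
for _ in range(1000):
    pred = states @ W_student
    grad = states.T @ (pred - actions) / len(states)
    W_student -= lr * grad

mse = float(np.mean((states @ W_student - actions) ** 2))
print(f"BC loss after training: {mse:.6f}")  # near zero
```

In the real pipeline the student's inputs differ from the teacher's (images and tactile instead of privileged state), so the student must learn to infer from observations what the teacher read off directly.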

Challenges and Future Directions

Sim-to-Real Is Still the Biggest Problem

Contact physics in simulation still isn't 100% accurate. MuJoCo 3.x, with its convex-optimization contact solver, is much better, but deformable objects (fabric, rope) remain an open challenge.

Hardware is Still a Bottleneck

Current robot hands are:

The LEAP Hand ($2K, open-source) is lowering the barrier, but durability and torque still need to improve.

Bi-manual Dexterous

Combining two robot hands (32 DOF total) dramatically increases the complexity. This is the frontier of the frontier — see Part 6 for more.

Resources

Next in Series

