research · conference · robotics

RSS 2026: Must-Read Papers on Sim-to-Real and Manipulation

Analysis of notable papers to be presented at RSS 2026 Sydney — sim-to-real transfer and dexterous manipulation breakthroughs.

Nguyen Anh Tuan · March 7, 2026 · 6 min read

RSS 2026 Sydney — Sim-to-Real and Manipulation Rise to the Top

Robotics: Science and Systems (RSS) 2026 will take place at the University of Technology Sydney, Australia, from July 13–17, 2026. Sim-to-real transfer and dexterous manipulation remain the hottest topics, with numerous papers on closing the gap between simulation and reality. Below is an analysis of notable accepted papers that robotics engineers should read before the conference.

International robotics conference with many robot demos

1. RoboSplat: Gaussian Splatting for Data Augmentation

Paper: "Novel Demonstration Generation with Gaussian Splatting Enables Robust One-Shot Manipulation"

Problem: One of the biggest barriers to robot learning is the lack of diverse training data. Collecting demonstrations in the real world is expensive, while synthetic data from simulators often has large domain gaps due to inaccurate geometry reconstruction.

Solution: RoboSplat uses 3D Gaussian Splatting to generate new demonstrations from a single demo. Rather than augmenting images in RGB space (which creates artifacts) or using complex Real-to-Sim-to-Real pipelines, this method directly manipulates 3D Gaussians to create realistic visual variations while preserving the original trajectory.

Practical Takeaway: If you're building a manipulation system with limited demo data, Gaussian Splatting is a worthwhile direction. With just 1 demonstration, you can generate hundreds of variations to train more robust policies than traditional augmentation.
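To make the idea concrete, here is a minimal sketch (not RoboSplat's actual code) of the core augmentation step: rigidly perturbing the 3D Gaussian centres of an object to create visual variations while leaving the demonstration trajectory untouched. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation_z(max_deg):
    """Random rotation about the vertical axis."""
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def augment_scene(object_gaussian_means, n_variants=100, max_deg=15.0, max_shift=0.05):
    """Create visually varied scenes by rigidly perturbing the object's
    Gaussian centres; the demo trajectory itself is left untouched."""
    variants = []
    for _ in range(n_variants):
        R = random_rotation_z(max_deg)
        t = rng.uniform(-max_shift, max_shift, size=3)
        variants.append(object_gaussian_means @ R.T + t)
    return variants

# Toy "object": 500 Gaussian centres near the origin.
means = rng.normal(scale=0.1, size=(500, 3))
scenes = augment_scene(means)
print(len(scenes), scenes[0].shape)  # 100 (500, 3)
```

In the real method the perturbed Gaussians would be re-rendered to produce photorealistic training images; the point of the sketch is only that the edit happens in 3D, not in RGB pixel space.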

2. RoboMIND: Large-Scale Multi-Embodiment Benchmark

Paper: "RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation"

Problem: Current manipulation datasets typically use only 1-2 robot types, making it hard for trained models to generalize to other platforms. There is also no standardized benchmark for cross-embodiment comparison.

Solution: RoboMIND provides 107,000 demonstration trajectories on 479 diverse tasks with 96 object types. The key feature is that the dataset spans multiple robot embodiments, along with digital twins in Isaac Sim for low-cost collection of additional synthetic data and efficient evaluation.

Practical Takeaway: This is a great reference dataset if you want to pre-train a manipulation policy before fine-tuning for your specific robot. The digital twin environment lets you test quickly without hardware.
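The pretrain-then-finetune recipe can be sketched as a simple split over trajectories: pre-train on every embodiment except your target, then fine-tune on the target. The record schema below is hypothetical; the actual RoboMIND format may differ.

```python
from collections import defaultdict

# Hypothetical trajectory records; the real RoboMIND schema may differ.
trajectories = [
    {"embodiment": "franka", "task": "pick_mug", "steps": 120},
    {"embodiment": "ur5", "task": "pick_mug", "steps": 98},
    {"embodiment": "franka", "task": "open_drawer", "steps": 210},
]

def split_for_transfer(trajs, target_embodiment):
    """Pre-train on every other embodiment, fine-tune on the target one."""
    buckets = defaultdict(list)
    for t in trajs:
        key = "finetune" if t["embodiment"] == target_embodiment else "pretrain"
        buckets[key].append(t)
    return buckets["pretrain"], buckets["finetune"]

pretrain, finetune = split_for_transfer(trajectories, "ur5")
print(len(pretrain), len(finetune))  # 2 1
```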

3. Sim-to-Real Transfer for Legged Robots

Paper: "Towards Bridging the Gap: Systematic Sim-to-Real Transfer for Diverse Legged Robots" (arXiv:2509.06342)

Problem: Sim-to-real for locomotion still requires significant manual tuning for each robot type. Every time you switch platforms, engineers must adjust domain randomization, reward shaping, and system identification.

Solution: The paper proposes a framework integrating sim-to-real RL with physics-grounded energy models for PMSM (Permanent Magnet Synchronous Motors). Accurate motor models reduce the reality gap while requiring minimal parameters — you don't need to randomize all physics, just model the most critical part (actuator dynamics) correctly.

Practical Takeaway: Instead of domain randomization for everything, focus on accurately modeling actuator dynamics. This is the primary source of reality gap in locomotion.
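As a rough illustration of what "model the actuator, not everything" means, here is a toy torque model with a motor constant, friction terms, and first-order lag between commanded and delivered torque. All constants are made-up placeholders, not values from the paper.

```python
import numpy as np

def pmsm_torque(i_q, omega, k_t=0.09, b_visc=1e-3, tau_coulomb=0.02):
    """Simplified PMSM output torque: motor constant times q-axis current,
    minus viscous and Coulomb friction. Parameters are illustrative."""
    return k_t * i_q - b_visc * omega - tau_coulomb * np.sign(omega)

def first_order_actuator(tau_cmd, tau_prev, dt=0.002, t_rise=0.015):
    """Commanded torque reaches the joint through a first-order lag,
    a common minimal model of actuator bandwidth."""
    alpha = dt / (t_rise + dt)
    return tau_prev + alpha * (tau_cmd - tau_prev)

# Step response: command 1.0 Nm and watch delivered torque converge.
tau = 0.0
for _ in range(50):
    tau = first_order_actuator(1.0, tau)
print(round(tau, 3))
```

Fitting the handful of parameters in a model like this from real motor data (system identification) is often far cheaper than randomizing every physical quantity in simulation.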

Robot legs navigating complex terrain

4. Dexterous Grasping with Real-World RL

Paper: "Dexterous Grasping with Real-World Robotic Reinforcement Learning" (arXiv:2503.04014)

Problem: Most RL for dexterous manipulation is trained in simulation then transferred to reality. But the sim-to-real gap for robot hands is huge due to complex contact dynamics.

Solution: DexGraspRL trains directly on real robot hardware instead of in simulation. The framework uses sample-efficient RL with curriculum learning to teach the robot dexterous grasping in real environments, achieving a 92% success rate on diverse grasping tasks — significantly higher than sim-to-real baselines.

Practical Takeaway: With advances in sample-efficient RL (SAC, RLPD), training directly on real hardware is becoming increasingly feasible for manipulation. If your robot has contact-rich tasks, real-world RL may yield better results than sim-to-real.
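The key trick that makes real-world RL feasible is a high update-to-data (UTD) ratio: many cheap gradient updates per expensive robot interaction, as in RLPD. Here is a schematic loop where `env_step` and `agent_update` are stand-ins for your robot interface and learner, not any real library API.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer."""
    def __init__(self, capacity=100_000):
        self.data = deque(maxlen=capacity)

    def add(self, transition):
        self.data.append(transition)

    def sample(self, batch_size):
        pool = list(self.data)
        return random.sample(pool, min(batch_size, len(pool)))

def train_real_world(env_step, agent_update, n_env_steps=1000, utd_ratio=8):
    """Sample-efficient loop in the spirit of RLPD/SAC: many gradient
    updates per (expensive) real-robot step."""
    buffer = ReplayBuffer()
    n_updates = 0
    for _ in range(n_env_steps):
        buffer.add(env_step())          # one real interaction
        for _ in range(utd_ratio):      # many cheap updates
            agent_update(buffer.sample(256))
            n_updates += 1
    return n_updates

# Dummy environment and learner just to exercise the loop.
updates = train_real_world(lambda: (0, 0, 0.0, 0), lambda batch: None)
print(updates)  # 8000
```

In practice the curriculum would also schedule task difficulty, and the buffer would be seeded with demonstrations, but the update-per-step asymmetry is the core idea.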

5. ManipTrans: Transfer Bimanual Skills from Humans to Robots

Paper: "ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning" (arXiv:2503.21860)

Problem: Teaching robots bimanual manipulation is extremely difficult because the action space is huge. Human demonstrations are the best data source, but retargeting from human hands to robot hands is non-trivial.

Solution: ManipTrans uses two stages — (1) pre-train a general trajectory imitator from many types of human demonstrations, (2) fine-tune a residual module for specific tasks. The residual learning approach allows quick adaptation to new tasks without retraining from scratch.

Practical Takeaway: If you're doing bimanual manipulation, invest in a data pipeline for collecting human demonstrations (teleoperation or motion capture) then use residual learning to adapt. This pipeline scales far better than training from scratch.
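The residual structure itself is compact enough to sketch: a frozen pre-trained base policy plus a small, trainable correction whose output is bounded so it can only nudge the base action. Both policies below are stubs I made up for illustration.

```python
import numpy as np

def base_policy(obs):
    """Frozen trajectory imitator pre-trained on human demos (stub)."""
    return np.tanh(obs[:7])  # 7-DoF arm action, illustrative

def residual_policy(obs, w):
    """Small task-specific correction; only `w` is trained during
    fine-tuning. The 0.1 scale bounds how far it can push the action."""
    return 0.1 * np.tanh(w @ obs)

def act(obs, w):
    # Final action = frozen base action + learned residual correction.
    return base_policy(obs) + residual_policy(obs, w)

obs = np.zeros(16)
w = np.zeros((7, 16))
print(act(obs, w).shape)  # (7,)
```

Because only the residual weights change per task, adapting to a new task touches a tiny fraction of the parameters, which is where the "without retraining from scratch" claim comes from.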

6. The Reality Gap: Survey and Best Practices

Paper: "The Reality Gap in Robotics: Challenges, Solutions, and Best Practices" (arXiv:2510.20808)

This is not an experimental paper but a comprehensive survey on sim-to-real transfer — excellent for anyone wanting a holistic view. The paper categorizes sources of reality gap (visual, dynamics, sensor), compares methods (domain randomization, domain adaptation, system identification), and provides best practices for different task types.

Practical Takeaway: Read this survey before starting a sim-to-real project. It helps you choose the right method for your specific use case instead of trying everything.

Notable Trends from RSS 2026

Looking at the bigger picture, RSS 2026 shows several clear trends:

Foundation Models for Manipulation

VLA (Vision-Language-Action) models are changing how robots learn manipulation. Instead of training a policy for each task, models like GR00T N1 and RT-X use large-scale pre-training then fine-tune. RSS 2026 has many papers expanding this approach across diverse embodiments.

Real-World RL Replacing Sim-to-Real

With more sample-efficient algorithms (RLPD, DreamerV3) and faster hardware, training directly on real robots is becoming feasible. Especially for contact-rich tasks where sim-to-real gap is too large.

3D Representations Replacing 2D

Gaussian Splatting, Neural Radiance Fields, and point cloud representations are replacing 2D image features for manipulation. 3D understanding helps policies generalize better with changing viewpoints and object poses.

Robotics research in university lab

Summary

RSS 2026 continues to affirm its position as the leading conference for robotics research. This year, the boundary between sim and real is increasingly blurred — not just because of better simulation, but because we're closing the gap from both sides. For robotics engineers in Vietnam, this is a perfect time to apply these techniques to real products.

If you want to dive deeper into sim-to-real and robotics research trends, read more comprehensive articles from VnRobo.
