Publications
Representative papers are highlighted.
ImplicitRDP: An End-to-End Visual-Force Diffusion Policy with Structural Slow-Fast Learning
Wendi Chen,
Han Xue,
Yi Wang,
Fangyuan Zhou,
Jun Lv,
Yang Jin,
Shirun Tang,
Chuan Wen†,
Cewu Lu†
(†equal advising)
project page
ImplicitRDP is a unified end-to-end visual-force diffusion policy that integrates visual planning and reactive force control.
By leveraging Structural Slow-Fast Learning, it performs closed-loop adjustments at high frequency while maintaining temporal coherence.
Additionally, Virtual-target-based Representation Regularization prevents modality collapse, enabling adaptive attention to visual and force modalities.
SOE: Sample-Efficient Robot Policy Self-Improvement via On-Manifold Exploration
Yang Jin,
Jun Lv,
Han Xue,
Wendi Chen,
Chuan Wen†,
Cewu Lu†
(†equal advising)
arXiv preprint, 2025
project page
/
paper
/
arXiv
/
bibtex
We propose a plug-and-play module that constrains exploration to the manifold of valid actions for robotic policies.
This allows the policy to generate diverse yet consistent actions, supporting sample-efficient policy self-improvement.
Right-Side-Out: Learning Zero-Shot Sim-to-Real Garment Reversal
Chang Yu*,
Siyu Ma*,
Wenxin Du,
Zeshun Zong,
Han Xue,
Wendi Chen,
Cewu Lu,
Yin Yang,
Xuchen Han,
Joseph Masterjohn,
Alejandro Castro,
Chenfanfu Jiang
(*equal contributions)
arXiv preprint, 2025
project page
/
paper
/
arXiv
/
bibtex
Right-Side-Out is a zero-shot sim-to-real framework that turns garments right-side out by
decomposing the task into keypoint-parameterized primitives and scaling training via high-fidelity GPU-parallel MPM simulation.
Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation
Han Xue*,
Jieji Ren*,
Wendi Chen*,
Gu Zhang†,
Yuan Fang†,
Guoying Gu,
Huazhe Xu‡,
Cewu Lu‡
(*equal contributions, †equal contributions, ‡equal advising)
Robotics: Science and Systems (RSS), 2025
🔥Best Student Paper Finalist
🔥Best Paper @ Beyond P&P Workshop at ICRA 2025
project page
/
paper
/
arXiv
/
tweet
/
code
/
bibtex
We propose TactAR and Reactive Diffusion Policy (RDP).
TactAR is a teleoperation system that uses AR to provide tactile / force feedback.
RDP is a slow-fast policy learning method that enables closed-loop tactile / force control via the fast policy while
retaining the ability to model complex action distributions via the slow policy.
DeformPAM: Data-Efficient Learning for Long-horizon Deformable Object Manipulation via Preference-based Action Alignment
Wendi Chen*,
Han Xue*,
Fangyuan Zhou,
Yuan Fang,
Cewu Lu
(*equal contributions)
International Conference on Robotics and Automation (ICRA), 2025
🔥Best Paper Finalist @ RMDO Workshop at ICRA 2025
project page
/
paper
/
arXiv
/
tweet
/
code
/
video
/
bibtex
Inspired by RLHF, DeformPAM enhances learning efficiency and mitigates distribution shift in deformable object manipulation
by selecting actions through a preference-based implicit reward model.
Selected Awards and Honors