How can a robot learn to use its different parts for interactions?
How can we go beyond proprioception for robust mobile manipulation?
What abstractions are necessary to describe multiple tasks?
In search of answers to these questions, I am currently a PhD student at ETH Zurich advised by Marco Hutter, and a Research Scientist at NVIDIA Research.
Over the course of my career, I have had the opportunity to work with some amazing robotics groups
on many different robotic platforms.
I have been a visiting student researcher at Vector Institute,
a research intern at NNAISENSE, and a part-time research engineer
at ETH Zurich. During my undergrad at IIT Kanpur, I was a visiting student at the
University of Freiburg, Germany, working closely with Abhinav Valada and Wolfram Burgard.
I also founded the AUV-IITK team, where I worked on different
hardware and software aspects of building an autonomous underwater vehicle.
If you have questions or would like to discuss ideas, feel free to reach out through
email!
news
Jun 15, 2024: I was honored to be invited as a speaker at the RSS Workshop on Data Generation for Robotics, where I presented our work on simulation frameworks and how simulation-based scaling aids in learning robust legged mobile manipulation.
Jun 3, 2024: Our work on Orbit has evolved into Isaac Lab, which is now officially supported by NVIDIA. A huge thanks to the team and collaborators for making this possible!
Feb 1, 2024: Four papers (task symmetry in RL, pedipulation, semantic navigation, and a surgical benchmark) accepted to ICRA 2024.
Apr 20, 2023: Our paper ‘Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments’ is accepted to IEEE RA-L and will be presented at IROS 2023.
Jul 1, 2022: Our papers on articulated object and in-hand manipulation are accepted to IROS 2022.
some publications
-
Symmetry Considerations for Learning Task Symmetric Robot Policies
Mayank Mittal*,
Nikita Rudin*,
Victor Klemm,
Arthur Allshire,
and Marco Hutter
ICRA
2024
[Abs]
[arXiv]
Symmetry is a fundamental aspect of many real-world robotic tasks. However, current deep reinforcement learning (DRL) approaches can seldom harness and exploit symmetry effectively. Often, the learned behaviors fail to achieve the desired transformation invariances and suffer from motion artifacts. For instance, a quadruped may exhibit different gaits when commanded to move forward or backward, even though it is symmetrical about its torso. This issue becomes further pronounced in high-dimensional or complex environments, where DRL methods are prone to local optima and fail to explore regions of the state space equally. Prior methods for encouraging symmetry in robotic tasks have studied this topic mainly in a single-task setting, where symmetry usually refers to symmetry in the motion, such as the gait patterns. In this paper, we revisit this topic for goal-conditioned tasks in robotics, where symmetry lies mainly in task execution and not necessarily in the learned motions themselves. In particular, we investigate two approaches to incorporate symmetry invariance into DRL: data augmentation and a mirror loss function. We provide a theoretical foundation for using augmented samples in an on-policy setting. Based on this, we show that the corresponding approach achieves faster convergence and improves the learned behaviors in various challenging robotic tasks, from climbing boxes with a quadruped to dexterous manipulation.
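As a rough illustration of the data-augmentation idea (a minimal sketch, not the paper's implementation; all names and the toy symmetry below are hypothetical), an on-policy batch can be doubled by appending each transition's mirrored counterpart under the task's symmetry transformation:

```python
import numpy as np

def augment_with_symmetry(obs, act, mirror_obs, mirror_act):
    """Double an on-policy batch by appending mirrored transitions.

    obs:  (N, obs_dim) array of sampled observations
    act:  (N, act_dim) array of sampled actions
    mirror_obs / mirror_act: functions applying the task's symmetry
    transformation (e.g., reflecting joint states about the sagittal plane).
    """
    obs_aug = np.concatenate([obs, mirror_obs(obs)], axis=0)
    act_aug = np.concatenate([act, mirror_act(act)], axis=0)
    return obs_aug, act_aug

# Toy example: a 1-D task that is symmetric under negation.
obs = np.array([[0.5], [-0.2]])
act = np.array([[1.0], [-0.3]])
obs_aug, act_aug = augment_with_symmetry(obs, act, lambda o: -o, lambda a: -a)
```

The augmented batch is then used in the policy-gradient update as if the mirrored samples had been collected on-policy, which is the setting the paper's theoretical analysis addresses.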
-
Pedipulate: Enabling Manipulation Skills using a Quadruped Robot’s Leg
Philip Arm,
Mayank Mittal,
Hendrik Kolvenbach,
and Marco Hutter
ICRA
2024
[Abs]
[arXiv]
[Video]
[Website]
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation - using the legs of a legged robot for manipulation. By training a reinforcement learning policy that tracks position targets for one foot, we enable a dedicated pedipulation controller that is robust to disturbances, has a large workspace through whole-body behaviors, and can reach far-away targets with gait emergence, enabling loco-pedipulation. By deploying our controller on a quadrupedal robot using teleoperation, we demonstrate various real-world tasks such as door opening, sample collection, and pushing obstacles. We demonstrate load carrying of more than 2.0 kg at the foot. Additionally, the controller is robust to interaction forces at the foot, disturbances at the base, and slippery contact surfaces.
-
ORBIT: A Unified Simulation Framework for Interactive Robot Learning Environments
Mayank Mittal,
Calvin Yu,
Qinxi Yu,
Jingzhou Liu,
Nikita Rudin,
David Hoeller,
and others
IEEE RA-L
2023
[Abs]
[arXiv]
[Website]
[Code]
We present ORBIT, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently create robotic environments with photo-realistic scenes and fast and accurate rigid and deformable body simulation. With ORBIT, we provide a suite of benchmark tasks of varying difficulty, from single-stage cabinet opening and cloth folding to multi-stage tasks such as room reorganization. To support working with diverse observations and action spaces, we include fixed-arm and mobile manipulators with different physically-based sensors and motion generators. ORBIT allows training reinforcement learning policies and collecting large demonstration datasets from hand-crafted or expert solutions in a matter of minutes by leveraging GPU-based parallelization. In summary, we offer an open-source framework that readily comes with 16 robotic platforms, 4 sensor modalities, 10 motion generators, more than 20 benchmark tasks, and wrappers to 4 learning libraries. With this framework, we aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and task and motion planning. We hope it helps establish interdisciplinary collaborations in these communities, and its modularity makes it easily extensible for more tasks and applications in the future.
-
A Collision-Free MPC for Whole-Body Dynamic Locomotion and Manipulation
Jia-Ruei Chiu,
Jean-Pierre Sleiman,
Mayank Mittal,
Farbod Farshidian,
and Marco Hutter
ICRA
2022
[Abs]
[arXiv]
[Video]
In this paper, we present a real-time whole-body planner for collision-free legged mobile manipulation. We enforce both self-collision and environment-collision avoidance as soft constraints within a Model Predictive Control (MPC) scheme that solves a multi-contact optimal control problem. By penalizing the signed distances among a set of representative primitive collision bodies, the robot is able to safely execute a variety of dynamic maneuvers while preventing any self-collisions. Moreover, collision-free navigation and manipulation in both static and dynamic environments are made viable through efficient queries of distances and their gradients via a Euclidean signed distance field. We demonstrate through a comparative study that our approach only slightly increases the computational complexity of the MPC planning. Finally, we validate the effectiveness of our framework through a set of hardware experiments involving dynamic mobile manipulation tasks with potential collisions, such as locomotion balancing with the swinging arm, weight throwing, and autonomous door opening.
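As a loose sketch of the soft-constraint idea (hypothetical names and numbers, not the paper's code), penalizing the signed distance between a pair of primitive collision bodies, here spheres, might look like:

```python
import numpy as np

def sphere_signed_distance(c1, r1, c2, r2):
    """Signed distance between two sphere collision primitives.
    Negative values indicate penetration."""
    return float(np.linalg.norm(np.asarray(c1) - np.asarray(c2))) - (r1 + r2)

def collision_penalty(d, margin=0.05, weight=100.0):
    """Soft-constraint penalty added to the MPC cost: zero when the
    signed distance exceeds the safety margin, and growing quadratically
    as the bodies approach each other or penetrate."""
    violation = max(0.0, margin - d)
    return weight * violation ** 2

# Two spheres 0.5 m apart with radii of 0.2 m each: 0.1 m of clearance,
# which is beyond the 0.05 m margin, so no penalty is incurred.
d = sphere_signed_distance([0.0, 0.0, 0.0], 0.2, [0.5, 0.0, 0.0], 0.2)
```

In the paper's setting, distances to the environment come from a precomputed Euclidean signed distance field rather than an analytic sphere-to-sphere formula, but the penalty structure is analogous.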
-
Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation
Mayank Mittal,
David Hoeller,
Farbod Farshidian,
Marco Hutter,
and Animesh Garg
IROS
2022
[Abs]
[arXiv]
[Website]
A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles. Autonomous interactions in such real-world environments require integrating dexterous manipulation and fluid mobility. While mobile manipulators in different form-factors provide an extended workspace, their real-world adoption has been limited. This limitation is in part due to two main reasons: 1) inability to interact with unknown human-scale objects such as cabinets and ovens, and 2) inefficient coordination between the arm and the mobile base. Executing a high-level task for general objects requires a perceptual understanding of the object as well as adaptive whole-body control among dynamic obstacles. In this paper, we propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments. The first stage uses a learned model to estimate the articulated model of a target object from an RGB-D input and predicts an action-conditional sequence of states for interaction. The second stage comprises a whole-body motion controller to manipulate the object along the generated kinematic plan. We show that our proposed pipeline can handle complicated static and dynamic kitchen settings. Moreover, we demonstrate that the proposed approach achieves better performance than commonly used control methods in mobile manipulation.