How can a robot learn from its own interactions?
What abstractions are necessary to describe a task?
When does a robot even know that the task is now completed?

In a quest to find answers to the above questions, I am currently a PhD student at ETH Zurich advised by Marco Hutter, and a Research Scientist at NVIDIA Research. I also closely collaborate with Animesh Garg at the University of Toronto.

Over the past few years, I have had the opportunity to work with some amazing robotics groups. I have been a visiting student researcher at the Vector Institute, a research intern at NNAISENSE, and a part-time research engineer at ETH Zurich. During my undergraduate studies at IIT Kanpur, I was a visiting student at the University of Freiburg, Germany, working closely with Abhinav Valada and Wolfram Burgard.

I am incredibly thankful to my collaborators and mentors, and I enjoy exploring new domains through collaborations. If you have questions or would like to work together, feel free to reach out via email!

news

Jul 1, 2022 Our papers on articulated object and in-hand manipulation are accepted to IROS 2022 :robot:
Jan 31, 2022 Our paper on ‘A Collision-Free MPC for Whole-Body Dynamic Locomotion and Manipulation’ is accepted to ICRA 2022
Oct 7, 2021 Joined Marco Hutter’s group at ETH Zurich as a PhD student
Jun 28, 2021 Excited to start as a Deep Learning R&D Engineer at NVIDIA!
May 18, 2020 Excited to start my master thesis with Animesh Garg at PAIR Lab, University of Toronto!

research interests

I am primarily interested in the decision-making and control of robots in human environments. These days, my efforts are focused on designing perception-based systems for contact-rich manipulation tasks, such as articulated object interaction with mobile manipulators and in-hand manipulation. Other areas of interest include hierarchical reinforcement learning, optimal control, and 3D vision.


publications

  1. A Collision-Free MPC for Whole-Body Dynamic Locomotion and Manipulation Jia-Ruei Chiu, Jean-Pierre Sleiman, Mayank Mittal, Farbod Farshidian, and Marco Hutter ICRA 2022 [Abs] [arXiv] [Video]
  2. Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, and Animesh Garg IROS 2022 [Abs] [arXiv] [Website] [Code]
  3. Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, and Animesh Garg IROS 2022 [Abs] [arXiv] [Website]
  4. Neural Lyapunov Model Predictive Control Mayank Mittal, Marco Gallieri, Alessio Quaglino, Seyed Sina Mirrazavi Salehian, and Jan Koutnik (Under Review) [Abs] [arXiv]
  5. Learning Camera Miscalibration Detection Andrei Cramariuc, Aleksandar Petrov, Rohit Suri, Mayank Mittal, Roland Siegwart, and Cesar Cadena ICRA 2020 [Abs] [arXiv] [Code]
  6. Vision-based Autonomous UAV Navigation and Landing for Urban Search and Rescue Mayank Mittal, Rohit Mohan, Wolfram Burgard, and Abhinav Valada ISRR 2019 [Abs] [arXiv] [Website]
  7. Vision-based Autonomous Landing in Catastrophe-Struck Environments Mayank Mittal, Abhinav Valada, and Wolfram Burgard Workshop on Vision-based Drones: What's Next?, IROS 2018 [Abs] [arXiv] [Video] [PDF]