How can a robot learn from its own interactions?
What abstractions are necessary to describe a task?
When does a robot even know that the task is now completed?

In a quest to answer these questions, I am currently a PhD student at ETH Zurich, advised by Marco Hutter, and a Deep Learning R&D Engineer at NVIDIA Research. I also collaborate closely with Animesh Garg at the University of Toronto.

Over the past few years, I have had the opportunity to work with some amazing robotics groups. I have been a visiting student researcher at the Vector Institute, a research intern at NNAISENSE, and a part-time research engineer at ETH Zurich. During my undergrad at IIT Kanpur, I was a visiting student at the University of Freiburg, Germany, working closely with Abhinav Valada and Wolfram Burgard.

I am incredibly thankful to my collaborators and mentors, and enjoy exploring new domains through collaborations. If you have questions or would like to work together, feel free to reach out through email!

news

Oct 7, 2021 Joined Marco Hutter’s group at ETH Zurich as a PhD student
Jun 28, 2021 Excited to start as a Deep Learning R&D Engineer at NVIDIA!
May 18, 2020 Excited to start my master thesis with Animesh Garg at PAIR Lab, University of Toronto!
Jan 22, 2020 Our paper ‘Learning Camera Miscalibration Detection’, from my work at the Autonomous Systems Lab, ETH Zurich, is accepted to ICRA 2020
Sep 1, 2019 Started my internship with the Intelligent Automation team at NNAISENSE, Lugano!

research interests

I am primarily interested in decision-making and control for the operation of robots in human environments. These days, my efforts are focused on designing perception-based systems for contact-rich manipulation tasks, such as articulated object interaction with mobile manipulators and in-hand manipulation. Other areas of interest include hierarchical reinforcement learning, optimal control, and 3D vision.


publications

  1. Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger
     Arthur Allshire, Mayank Mittal, Varun Lodaya, Viktor Makoviychuk, Denys Makoviichuk, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Ankur Handa, and Animesh Garg
     (Under Review) [Abs] [arXiv] [Website] [Code]
  2. Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation
     Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, and Animesh Garg
     (Under Review) [Abs] [arXiv] [Website]
  3. Neural Lyapunov Model Predictive Control
     Mayank Mittal, Marco Gallieri, Alessio Quaglino, Seyed Sina Mirrazavi Salehian, and Jan Koutnik
     (Under Review) [Abs] [arXiv]
  4. Learning Camera Miscalibration Detection
     Andrei Cramariuc, Aleksandar Petrov, Rohit Suri, Mayank Mittal, Roland Siegwart, and Cesar Cadena
     ICRA 2020 [Abs] [arXiv] [Code]
  5. Vision-based Autonomous UAV Navigation and Landing for Urban Search and Rescue
     Mayank Mittal, Rohit Mohan, Wolfram Burgard, and Abhinav Valada
     ISRR 2019 [Abs] [arXiv] [Website]
  6. Vision-based Autonomous Landing in Catastrophe-Struck Environments
     Mayank Mittal, Abhinav Valada, and Wolfram Burgard
     Workshop on Vision-based Drones: What’s Next?, IROS 2018 [Abs] [arXiv] [Video] [PDF]