I am an Associate Professor of Computer Science and Engineering at the University of Michigan. I am the leader of the Laboratory for Progress (Perception, Robotics, and Grounded Reasoning Systems). My work and collaborations aim to discover methods for computational reasoning and perception that will enable robots to effectively assist people in common human environments. Essentially, we explore how to make the real world programmable through the control of autonomous robots. Critical challenges towards this goal are enabling robots to perceive our world (i.e., "robots can't work if they can't see"), reason under uncertainty, and learn from human users. This research pertains primarily to interactive robotics, with contributions to the technology of robot perception and mobile manipulation as well as the usability of this technology by people in real situations. Because science is, at its core, independently verifiable knowledge, open-source contributions and reproducibility are critical features of my work.
I am currently serving as the Editor-in-Chief for the Journal of Human-Robot Interaction.
My active projects include perceptual reasoning for goal-directed robotic manipulation, interactive systems for assisted robot teleoperation, and independent living technologies for aging populations. Our latest result for manipulation and perception in cluttered scenes is available as an arXiv preprint and highlighted in the following research video:
My past projects include robotic person following and gesture recognition, robot learning from demonstration, physics-based tracking of human motion from video, protocols and libraries for web/cloud robotics, markerless model and motion capture, inertial motion capture, balance control of simulated humanoids, and humanoid imitation learning.
A more-or-less complete listing of my publications is available from my profile on Google Scholar. This profile also includes approximate citation counts and statistics for the "impact" of my research. A long list of papers shows that I have been productive and busy. However, I suspect my real research impact comes from a small selection of our best papers. To this end, I have chosen my favorite papers from my publication list and provided them below for (relatively) easy reading:
- A. Fod, M. Mataric, O. Jenkins, Automated Derivation of Primitives for Movement Classification, Autonomous Robots, 12(1): 39-54, 2002.
- O. Jenkins, M. Mataric, A Spatio-temporal Extension to Isomap Nonlinear Dimension Reduction, International Conference on Machine Learning (ICML), 2004.
- M. Vondrak, L. Sigal, O. Jenkins, Dynamical Simulation Priors for Human Motion Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1): 52-65, 2013.
- D. Grollman, O. Jenkins, Incremental Learning of Subtasks from Unsegmented Demonstration, IEEE Conference on Intelligent Robots and Systems (IROS), 2010.
- C. Crick, S. Osentoski, G. Jay, O. Jenkins, Human and robot perception in large-scale learning from demonstration, IEEE/ACM Conference on Human-Robot Interaction (HRI), 2011.
and one more representative of our current work:
- Z. Sui, O. Jenkins, K. Desingh, Axiomatic Particle Filtering for Goal-directed Robotic Manipulation, International Journal of Robotics Research, 36(1): 86-104, 2017.
I aim to realize both equal opportunity and technological excellence in robotics, computing, and all areas of automated technologies. Robotics and computing technologies are having an incredible impact on society. However, the wisdom needed for this impact to be positive and beneficial requires broader participation across society. Towards this end, I actively engage in activities to broaden participation in robotics and computing along many dimensions, including improving engagement with students from underrepresented groups. I have co-authored a high-level primer and opinion article on the role of robotics and automation in society:
- O. Jenkins, A. Peseri, Automation, Not Domination: How Robots Will Take Over Our World, Footnote1, 2013.
An overview of my work and thoughts on web robotics (as a convergence of human-robot interaction and cloud robotics) was presented at National Geographic in 2013:
A popular spinout from this talk was my TED presentation with Henry Evans that goes into more detail about web robotics and remote presence for people with disabilities (which I guess is everyone):
- Winter 2017: U-M EECS 467 Autonomous Robotics Laboratory
- Fall 2016: U-M ROB 550 Robotics Systems Laboratory
- Fall 2016: U-M EECS 398-004 Introduction to Autonomous Robotics
- Fall 2016: U-M EECS 598-009 Robot Modeling and Control
- Winter 2016: U-M EECS 398-002 Introduction to Autonomous Robotics
- Winter 2016: U-M EECS 598-010 Robot Modeling and Control
- Fall 2015: U-M EECS 598-010 Interactive Robot Manipulators
- Spring 2014/5: Brown CS 148 Introduction to Autonomous Robotics
- Fall 2013/4: Brown CS 195E/2951P Human-Robot Interaction Seminar
Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science and Engineering at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins was named a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations.