Past Seminars

Please join the robotics-announce mailing list to receive email notification of upcoming seminars.
Seminar Name / Date / Time / Location / Description
Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation Dr. Anne Jordt GEOMAR Helmholtz Centre for Ocean Research Kiel Friday Sep 25 12:00-1:00pm Location: GFL107 (Gorguze Family Laboratory) Compared to aerial photogrammetry (image-based measurements), precise and reliable measurements with underwater cameras are complicated by several phenomena. First, while traveling through water, light is scattered and attenuated, depending on the actual composition of the water, the wavelength, etc. This distorts the “true” object colors or intensities depending on the object distance, and often leads to blueish or greenish images. Second, light rays can be refracted at the interfaces between water, glass, and air when entering the camera housing. In particular with flat-port cameras, this affects the geometric image formation process and needs to be considered when reasoning in 3D. In this talk I will discuss novel automated machine vision approaches for reconstructing the geometry and colors of a 3D scene from an underwater video or photo sequence. These approaches are based on physical models for refraction and light propagation and will be demonstrated on underwater footage from ROVs as well as from archaeology divers.
From Global Properties to Local Rules in Multi-Agent Systems Prof. Magnus Egerstedt, Schlumberger Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology April 24 10:00 – 11:00 a.m., Room 1200 EECS The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and, in this talk, we discuss some of these developments. In particular, we will show how one can go from global, geometric, team-level specifications to local coordination rules for achieving and maintaining formations, area coverage, and swarming behaviors. One aspect of this concerns how users can interact with networks of mobile robots in order to inject new, global information and objectives. We will also investigate what global objectives are fundamentally implementable in a distributed manner on a collection of spatially distributed and locally interacting agents.
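As one illustration of going from a global, team-level specification to local coordination rules, a standard consensus-style formation controller (a generic textbook construction, not necessarily the method discussed in the talk) lets each agent reach its place in a prescribed formation using only neighbor-relative information:

```python
# Global spec: agents evenly spaced on a line, 1.0 apart (up to translation).
targets = [0.0, 1.0, 2.0, 3.0]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path-graph topology

# Compile the global spec into local desired offsets between neighbors.
offsets = {(i, j): targets[j] - targets[i]
           for i, nbrs in neighbors.items() for j in nbrs}

positions = [3.2, 0.5, 2.9, 0.1]   # arbitrary initial positions
gain, dt = 1.0, 0.1
for _ in range(400):
    updates = []
    for i, nbrs in neighbors.items():
        # Purely local rule: agent i only sees its graph neighbors.
        u = sum((positions[j] - positions[i]) - offsets[(i, j)] for j in nbrs)
        updates.append(gain * u)
    positions = [p + dt * u for p, u in zip(positions, updates)]

spacings = [positions[i + 1] - positions[i] for i in range(3)]
# spacings converge to the specified 1.0 separation.
```

The absolute positions are not controlled, only the relative geometry, which is exactly the sense in which a global shape is "implementable in a distributed manner."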
Compiling Global Behaviors into Local Controllers for Mobile Sensor Networks Prof. Kevin Lynch, Professor and Chair, Mechanical Engineering at Northwestern University April 27 10:00am, Johnson Rooms, 3rd Floor, Lurie Bldg. Mobile sensor networks can be deployed for tasks such as environmental monitoring or searching a collapsed building for survivors. Decentralized control algorithms allow the mobile sensors to adapt to a changing environment and to failure of individual sensors, without the need for a centralized controller. I will describe the control theory we are developing to support “swarms” of mobile sensors. This work is based on the concept of “information diffusion” in ad hoc communication networks and motion control laws that drive the sensors to optimally acquire information.
A Long-term View of SLAM Friday, April 10th, 2015 3:30 – 4:30 p.m. Room 1500 EECS Building Prof. John J. Leonard Professor, MIT Department of Mechanical and Ocean Engineering This talk will provide a long-term view of the Simultaneous Localization and Mapping (SLAM) problem in robotics. The first part of the talk will review the history of SLAM research and define some of the major challenges in SLAM, including choosing a map representation, developing algorithms for efficient state estimation, and solving for data association and loop closure. Next, we will give a snapshot of recent MIT research in SLAM based on joint work with the National University of Ireland, Maynooth. A major new trend in SLAM is the development of real-time dense mapping systems using RGB-D cameras. We will describe Kintinuous, a new SLAM system capable of producing high-quality, globally consistent surface reconstructions over hundreds of meters in real-time with only a cheap commodity RGB-D sensor. The approach is based on three key innovations in volumetric fusion-based SLAM: (1) using a GPU-based 3D cyclical buffer trick to extend dense volumetric fusion of depth maps to an unbounded spatial region; (2) combining both dense geometric and photometric camera pose constraints; and (3) efficiently applying loop closure constraints by the use of an as-rigid-as-possible space deformation. Experimental results will be presented for a wide variety of data sets to demonstrate the system’s performance. We will conclude the talk with a discussion of current and future research topics, including object-based and semantic mapping, lifelong learning, and advanced physical interaction with the world. We will also discuss potential implications of SLAM research on the development of self-driving cars.
Robust and Efficient Real-time Mapping for Autonomous Robots Friday, March 27, 2015 3:30pm – 4:30pm 1500 EECS Prof. Michael Kaess Assistant Research Professor Carnegie Mellon University We are starting to see the emergence of autonomous robots that operate outside of controlled factory environments in various applications ranging from driverless cars, space and underwater exploration to service robots for businesses and homes. One of the very first challenges encountered on the way to autonomy is perception: obtaining information about the environment that allows the robot to efficiently navigate through, interact with and manipulate it. Moreover, in many such applications, models of the environment are either unavailable or outdated, thus necessitating real-time robotic mapping using onboard sensors. In this talk I will present my recent research on robust and efficient optimization techniques for real-time robotic mapping. I will focus on our recently developed incremental nonlinear least-squares solver, termed incremental smoothing and mapping (iSAM2). Based on our new probabilistic model called the Bayes tree, iSAM2 efficiently updates an existing solution to a nonlinear least-squares problem after new measurements are added. I will describe some of the key aspects of my work and also address robustness in optimization. Lastly, I will present applications enabled by iSAM2 including long-term visual mapping and Kintinuous — our recent work on dense mapping with RGB-D cameras.
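To give a flavor of why incremental updates are possible in least-squares mapping, consider the information form of a linear problem: each new measurement only adds a small term to the system, so the estimate can be refreshed without rebuilding the problem from scratch. This toy is not the actual iSAM2 algorithm (which incrementally updates a factorized Bayes tree rather than re-solving), and all the measurement values are made up for illustration:

```python
def solve2(L, e):
    """Solve a symmetric 2x2 system L x = e by Cramer's rule."""
    det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
    return [(e[0] * L[1][1] - e[1] * L[0][1]) / det,
            (e[1] * L[0][0] - e[0] * L[1][0]) / det]

# State: two scalar landmark positions x0, x1, with very weak priors.
# Information form of min ||A x - b||^2: Lam = A^T A, eta = A^T b.
Lam = [[1e-6, 0.0], [0.0, 1e-6]]
eta = [0.0, 0.0]

def add_measurement(a, b):
    """Fold one scalar measurement a . x = b into the information form."""
    for i in range(2):
        eta[i] += a[i] * b
        for j in range(2):
            Lam[i][j] += a[i] * a[j]

add_measurement([1.0, 0.0], 2.0)    # x0 observed directly at 2
add_measurement([-1.0, 1.0], 3.0)   # relative measurement: x1 - x0 = 3
x = solve2(Lam, eta)                # approximately [2, 5]
add_measurement([0.0, 1.0], 5.2)    # later: a direct observation of x1
x = solve2(Lam, eta)                # updated estimate, no problem rebuild
```

In the nonlinear SLAM setting the extra work iSAM2 addresses is relinearization; the Bayes tree lets it confine that work to the variables a new measurement actually touches.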
From Running Cockroaches to Foam Robots: why trying to learn from nature is hard, and we should do it anyway Prof. Shai Revzen Wednesday, Jan 21 4-5pm, Little Building, Rm. 2548 Biologically inspired robotics is in vogue these days, with new videos of animal-like robots appearing almost daily. Yet despite superficial similarity, actually copying from nature’s playbook is a difficult task. In this talk I will discuss some of these difficulties, and show vignettes from biology, robotics and mathematics that illustrate one approach toward learning about the control of rapid legged locomotion in animals and how to instantiate similar controls in robots.
Motion Planning and Control for Robot and Human Manipulation Lecture Dr. Kevin M. Lynch Thursday, December 18, 2014 3:00 – 4:00PM 1200 EECS The talk briefly describes progress on motion planning and control for two very different manipulation problems: (1) nonprehensile manipulation by robots and (2) control of neuroprosthetics for humans with spinal cord injuries. The first part focuses on graspless manipulation modes commonly used by humans and animals but mostly avoided by robots, such as rolling, pushing, pivoting, tapping, and throwing and catching. The second part will describe a recent project on control of a functional electrical stimulation neuroprosthetic for the human arm.
Control Seminar / Robotics Seminar – From Motion to Actor and Action Inference in Unconstrained Video Speaker: Professor Jason Corso Friday, November 7, 2014 3:30 – 4:30 p.m. 1500 EECS Can a human fly? Can a baby run? Can a car walk? Emphatically, the answer is no to all of these questions. Yet the primary agenda in the action recognition literature has focused strictly on the action, ignoring who or what is doing the acting. Concurrently, the object recognition community has focused strictly on identifying the humans, the babies, the cars, etc., without considering what these “actors” are doing in the video. Yet the articulated motion induced from, say, a bird eating versus an adult human eating is significantly different. In this talk, I will describe our recent work to unify these two problems into joint actor-action recognition with structured inference in unconstrained video. I will present a sequence of increasingly sophisticated models, from an independent naive Bayes model through a hierarchical graphical model for jointly capturing these two inferential goals. We have developed a new dataset with seven actor classes and nine action classes (including the null action), and will present and discuss results of all models on this challenging dataset.
Control Seminar / Robotics Seminar – Advances in Underwater Robotic Vehicles for Oceanographic Exploration in Extreme Environments Speaker: Louis L. Whitcomb Friday, November 14, 2014 3:30 – 4:30 p.m. 1500 EECS This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with focus on two recent novel vehicles developed by a team composed of the speaker’s group and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on an NSF-sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely-controlled underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. The goal of NUI is to enable exploration and detailed examination of biological and physical environments including the ice-ocean interface in marginal ice zones, in the water column of ice-covered seas, at glacial ice-tongues, and at ice-shelf margins, delivering real-time high-definition video in addition to survey data from onboard acoustic, optical, chemical, and biological sensors. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83° N, 6° W in the Arctic Ocean – approximately 200 km NE of Greenland – in which we conducted four dives under the moving polar ice-pack to evaluate and develop NUI’s overall functioning and its individual engineered subsystems and under-ice scientific survey capabilities for biological oceanography and sea-ice physics.
AI-Seminar – Human-Centered Principles and Methods for Designing Robotic Technologies Speaker: Prof. Bilge Mutlu Tuesday, October 21, 2014 4:00 – 5:30 p.m. 3725 BBB The emergence of robotic products that serve as automated tools, assistants, and collaborators promises tremendous benefits across a range of everyday settings, from the home to manufacturing facilities. While these products promise interactions that can be far more complex than those with conventional products, their successful integration into the human environment requires these interactions also to be natural and intuitive. To achieve complex but intuitive interactions, designers and developers must simultaneously understand and address computational and human challenges. In this talk, I will present my group’s work on building human-centered guidelines, methods, and tools to address these challenges in order to facilitate the design of robotic technologies that are more effective, intuitive, acceptable, and even enjoyable. In particular, I will present a series of projects that demonstrate how marrying knowledge about people with computational methods can enable effective user interactions with social, assistive, and telepresence robots, as well as the development of novel tools and methods that support complex design tasks across the key stages of analysis, synthesis, and evaluation in the design process. I will additionally present ongoing work that applies these guidelines to the development of real-world applications of robotic technology.
Control Seminar / Robotics Seminar – Unobtrusive Monitoring for Individuals Using Inertial Systems Speaker: Lauro Ojeda Friday, October 10, 2014 3:30 – 4:30 p.m. 1500 EECS The use of inertial measurement units for tracking and monitoring applications is gaining in importance as the size, power consumption, and cost of these sensors have all decreased through their widespread use in smartphones and other consumer electronics. As such, inertial sensors are becoming the preferred sensing modality for a growing number of applications, including human and animal movement. In this presentation, I will first present an overview of how data from inertial sensors, in conjunction with the requisite algorithm development, enabled personal dead reckoning for 3D localization. Next, I will describe current work that uses these sensors and algorithms to help understand “how the person walks” instead of simply “where the person is”. Further, analysis of human movement is typically confined to specialized labs instrumented with expensive motion capture systems. To address this, we have demonstrated that inertial systems can be used not only to obtain comparable measurements at a much lower cost, but also to effectively monitor subjects in daily-life conditions outside of the lab. We have collected week-long, real-world sensor data on subjects, and for the first time, we have been able to capture and reproduce losses of balance in older adults who self-report balance difficulty. Finally, work extending inertial systems to free-swimming marine mammals will be presented. Our preliminary results provide accurate estimates of mechanical work on swimming animals for the first time. The work presented here demonstrates the advantages of inertial systems for monitoring and quantifying the health and well-being of both people and animals.
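The core of inertial dead reckoning is double integration of acceleration, and the reason it needs careful algorithm development is drift: even a tiny sensor bias grows quadratically in position. A minimal 1D sketch (all numbers are made-up assumptions, not values from the talk):

```python
dt = 0.01                     # 100 Hz sample rate (assumed)
true_accel = [0.0] * 1000     # the sensor is actually at rest for 10 s
bias = 0.02                   # small accelerometer bias in m/s^2 (assumed)

vel = pos = 0.0
for a in true_accel:
    measured = a + bias
    vel += measured * dt      # integrate once: velocity
    pos += vel * dt           # integrate twice: position
# After just 10 s, the 0.02 m/s^2 bias alone has displaced the position
# estimate by roughly 1 m, even though the sensor never moved.
```

This quadratic error growth is why practical pedestrian systems anchor the integration with corrections such as zero-velocity updates at each footfall rather than integrating freely.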
Making Atlas Move: Dynamic Planning and Control for a Hydraulic Humanoid Robot Speaker: Dr. Scott Kuindersma Friday, September 26, 2014 12:00 – 1:00 p.m. Room: 133 Chrysler What does it take to make a 155kg, 1.8m tall humanoid perform useful work? In this talk, I will describe our approach to achieving reliable dynamic balancing, walking, and manipulation with Atlas. Starting with an efficient optimization-based controller for whole-body locomotion, I will share our process of moving from simulation to the real robot, including system identification, state estimation, and joint-level control. I will also describe our approach to collision-free manipulation planning and highlight our current efforts toward achieving highly dynamic motions with Atlas. For more information visit the event page on the Michigan Engineering Calendar.
Minimally Invasive Robotic Catheters: Addressing Challenges through Modeling, Control, and Design Speaker: Dr. Michael Zinn Tuesday, September 23, 2014 4:00 – 5:00 p.m. Room: 1200 EECS In recent years, minimally invasive surgical systems based on flexible robotic manipulators have met with success. One major advantage of the flexible manipulator approach is its superior safety characteristics as compared to rigid manipulators – a critical characteristic in sensitive applications including cardiovascular and neurosurgical procedures. However, their soft, compliant structure, in combination with internal friction, results in poor position and force regulation, which has limited their use to simpler surgical procedures. To understand the underlying reasons for their performance limitations and potentially overcome them, we have undertaken a coordinated effort to develop improved modeling, control, and device manipulation approaches. The modeling investigation has focused on developing improved models by which this behavior is explained and predicted. In this work, we explicitly incorporate internal device friction which, in combination with the flexible device structure and drive train, predicts behavior including motion hysteresis, whipping, and control-tendon aligned lobbing – behaviors which are not captured using standard linear-elastic descriptions. However, the underlying history-dependent behavior limits the ability to apply these modeling results toward improved device performance. As such, we have investigated the use of closed-loop control to mitigate the effect of both nonlinear disturbances, such as internal friction, and dynamic flexible-body motions. In particular, the use of a hybrid tracking controller, where motions are decomposed and controlled in both modal and flexible-segment joint-space coordinates, has shown significant improvement in both the steady-state and dynamic response.
While these improvements are notable, the interaction of the controller and uncontrolled flexible-body modes limits the extent of the possible performance improvements. Finally, in an attempt to address the limitations of flexible manipulators directly, we discuss a new approach to continuum robotic manipulation, referred to as interleaved continuum-rigid manipulation, which combines flexible, actively actuated continuum segments with small rigid-link actuators. In this approach the small rigid-link joints are interleaved between successive continuum segments and provide a redundant motion and error-correction capability.
Swarm robotics: Minimalism, learning and evolution Speaker: Dr. Roderich Gross Friday, September 19, 2014 12:00 – 1:00 p.m. Room: 133 Chrysler Swarm intelligence is the study of systems of spatially distributed individuals that coordinate their actions in a self-organised manner and thereby exhibit complex collective behaviour. In the first part of the talk, we present our recent advances in controlling groups of robots. It is shown that some tasks can be solved by swarms of robots with severely limited abilities. For example, in order for a group of robots to spatially segregate into distinct subgroups or transport a tall object, the robots do not fundamentally need to communicate with each other in an explicit way. Rather, it is sufficient if they can discriminate between the object, the goal, and the remainder of the environment. In order for a group of robots to gather in a single place, or cluster objects that are initially dispersed, it was found that the robots do not fundamentally require arithmetic computation. Such tasks can be solved by robots that use a binary sensor to trigger one of two possible actions, without the need to store information during run-time. In the second part of the talk, we present a method that is able to identify models (parameters) of individuals, for example, when part of a swarm, through observation. This method does not require any pre-defined metric to gauge the resemblance of models to observed individuals. In the third part of the talk, we outline our recent efforts towards an evolution of energy-autonomous robotic organisms.
Control Seminar / Robotics Seminar – Multi-objective control problems for multi-robot systems in constrained environments: A set-theoretic formulation Speaker: Dr. Dimitra Panagou Friday, September 12, 2014 3:30 – 4:30 p.m. 1500 EECS The development of planning, coordination, and control algorithms for complex systems, such as robotic networks and multi-vehicle systems under multiple tasks, is motivated by numerous applications. Networked autonomous vehicles, such as aerial, space, marine, and ground robots, typically have non-trivial dynamics, are often deployed in uncertain environments, and have restricted sensing and communication capabilities. Requirements such as safety of the individual components and reliability of the overall networked system give rise to control objectives which include inter-agent collision avoidance, connectivity maintenance among agents, and effective coverage of the environment. In this talk we will present control design methodologies for systems subject to motion and state constraints, which encode sensing and communication restrictions. The methodology relies on a novel class of Lyapunov-like barrier functions, which give rise to a set-theoretic formulation amenable to distributed control design with Lyapunov-like techniques. Our recent results address safe coordination and dynamic coverage for multi-robot systems in obstacle environments under sensing and communication constraints.
Online Bayesian Changepoint Detection for Articulated Motion Models Speaker: Dr. Scott Niekum Thursday, September 11, 2014 12:00 – 1:00 p.m. Room: 2150 Dow Many manipulation problems in robotics are characterized by articulation properties of objects such as rotating doors, sliding drawers, and rigidly connected parts. Several recent methods have been proposed to identify articulation relationships from visual observations, but have assumed that these relationships remain static over time. However, many articulation relationships are of a changing nature, requiring analysis from a time-series perspective. We introduce CHAMP, an algorithm for online Bayesian changepoint detection in settings where it is impractical to integrate over the parameters of candidate models. CHAMP is then used in combination with several articulation models to detect changes in the articulated motion of objects in the world. We focus on three settings where a changepoint model is appropriate: objects with intrinsic articulation relationships that can change over time, quasi-static articulated motion of grasped objects, and assembly tasks in which each step changes articulation relationships. Experiments show how this system can be used to infer task-relevant information from demonstration data, including causal manipulation models, human-robot correspondences, and skill verification tests.
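For readers unfamiliar with the underlying machinery, generic online Bayesian changepoint detection maintains a posterior over the "run length" since the last change, growing it with each consistent observation and resetting it when the data stop fitting. The sketch below follows the standard Adams-MacKay-style recursion for Gaussian data with known variance; it is not CHAMP itself, which specifically targets settings where integrating over model parameters is impractical, and all parameter values here are illustrative:

```python
import math

def bocpd(data, hazard=1 / 50, mu0=0.0, var0=1.0, varx=1.0):
    """Return the MAP run length (time since last changepoint) after each
    observation, for i.i.d. Gaussian data with known variance `varx`."""
    # One (weight, posterior mean, posterior variance) triple per run length.
    runs = [(1.0, mu0, var0)]
    map_run_lengths = []
    for x in data:
        grown, cp_mass = [], 0.0
        for w, mu, var in runs:
            pv = var + varx                      # predictive variance
            pred = math.exp(-0.5 * (x - mu) ** 2 / pv) / math.sqrt(2 * math.pi * pv)
            cp_mass += w * pred * hazard         # mass flowing to run length 0
            k = var / pv                         # conjugate Gaussian update
            grown.append((w * pred * (1 - hazard),
                          mu + k * (x - mu),
                          var * varx / pv))
        runs = [(cp_mass, mu0, var0)] + grown    # reset component + grown runs
        z = sum(w for w, _, _ in runs)
        runs = [(w / z, mu, var) for w, mu, var in runs]
        map_run_lengths.append(max(range(len(runs)), key=lambda i: runs[i][0]))
    return map_run_lengths

# A mean shift halfway through pulls the MAP run length back toward zero.
series = [0.0] * 30 + [5.0] * 30
rl = bocpd(series)
```

An articulation-model version replaces the Gaussian predictive with the likelihood of each candidate articulation model (rigid, prismatic, revolute), which is where integrating over model parameters becomes the bottleneck CHAMP addresses.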
Feedback Control of Bipedal Locomotion: Theory and Experiments Speaker: Dr. Jessy W. Grizzle Friday, September 5, 2014 12:30 – 1:30pm 1311 EECS The fields of control and robotics are working hand-in-hand to develop bipedal machines that can realize walking motions with the stability and agility of a human. Dynamic models for bipeds are hybrid nonlinear systems, meaning they contain both continuous and discrete elements, with switching events that are spatially driven by changes in ground contact. This talk will show how model-based feedback control and optimization methods are enhancing the ability to achieve highly dynamic locomotion in bipedal machines. The theory used in the talk will be amply illustrated with graphics and videos of our experiments to make the material accessible to a wide audience. Biography: Jessy W. Grizzle received the Ph.D. in electrical engineering from The University of Texas at Austin in 1983 and in 1984 held an NSF-NATO Postdoctoral Fellowship in Science in Paris, France. Since September 1987, he has been with The University of Michigan, Ann Arbor, where he is the Jerry and Carol Levin Professor of Engineering. He jointly holds sixteen patents dealing with emissions reduction in passenger vehicles through improved control system design. Professor Grizzle is a Fellow of the IEEE and of IFAC. He received the Paper of the Year Award from the IEEE Vehicular Technology Society in 1993, the George S. Axelby Award in 2002, the Control Systems Technology Award in 2003, and the Bode Lecture Prize in 2012. His work on bipedal locomotion has been the object of numerous plenary lectures and has been featured in The Economist, Wired Magazine, Discover Magazine, Scientific American, Popular Mechanics, and several television programs.
Aerial Robot Swarms Seminar Speaker: Dr. Vijay Kumar July 10, 2014 11:00 a.m. – 12:00 p.m. 1303 EECS Autonomous micro aerial robots can operate in three-dimensional, indoor and outdoor environments, with applications to search and rescue, first response and precision farming. I will provide an overview of our work, and describe the challenges in developing small, agile robots and our recent work in the areas of (a) control and planning, (b) state estimation and mapping, and (c) coordinating large teams of robots.