Computer Science ETDs

Publication Date

Fall 12-11-2019

Abstract

Robot motion planning in dynamic environments is critical for many robotic applications, such as self-driving cars, UAVs, and service robots operating in changing environments. However, motion planning in dynamic environments is very challenging, as this problem has been shown to be NP-hard and in PSPACE, even in the simplest case. As a result, the lack of safe, efficient planning solutions for real-world robots is one of the biggest obstacles to the ubiquitous adoption of robots in everyday life. Specifically, there are four main challenges facing motion planning in dynamic environments: obstacle motion uncertainty, obstacle interaction, complex robot dynamics and noise, and planner efficiency. To bring robots out of controlled lab environments, this research addresses these challenges by developing eight novel algorithms and a benchmark comparing state-of-the-art motion planners for dynamic environments. We demonstrate that these challenges can be overcome, or significantly alleviated, by techniques borrowed from the fields of artificial intelligence, robotics, computational geometry, and machine learning. Specifically, we improve navigation in the presence of obstacle motion uncertainty through the use of Monte Carlo simulations and planners that take risks in an adaptive fashion. We also develop planners for environments with strong obstacle interactions through novel ways of simulating robot-obstacle interactions. Next, we employ and improve reinforcement learning methods to find motion plans for robots with complex dynamics and noise. Lastly, we utilize deep learning to improve planner efficiency and present a fast motion planner for robots with limited computation resources. Our extensive evaluation and benchmark problems found that the methods developed in this work achieve higher performance than, or the highest performance among, existing methods.
The development and evaluation of these methods also established new facts that lead to the following conclusions: 1) Search-based motion planners must take risks in order to identify paths in crowded stochastic dynamic environments. 2) Reinforcement learning algorithms should not be limited to optimizing the cumulative reward, as reward functions are merely proxies for agent performance. 3) Complex path integrals can often be estimated accurately and rapidly by deep neural networks. 4) Integration of local, reactive-based methods with global, search-based methods is a promising direction for robot motion planning.
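To illustrate the kind of Monte Carlo risk estimation the abstract refers to, the sketch below estimates the probability that a candidate robot path collides with an obstacle whose velocity is only known up to Gaussian noise. This is a generic, simplified illustration of the idea, not the dissertation's algorithm; the function name, the 2D point-robot model, and the Gaussian velocity noise are all assumptions made here for the example.

```python
import random

def collision_probability(robot_path, obstacle_pos, obstacle_vel,
                          vel_sigma, radius, n_samples=1000):
    """Monte Carlo estimate of the collision probability of a path.

    robot_path:   list of (x, y) robot positions, one per time step.
    obstacle_pos: nominal obstacle (x, y) position at time 0.
    obstacle_vel: nominal obstacle (x, y) velocity per time step.
    vel_sigma:    std. dev. of the obstacle's velocity noise per axis.
    radius:       combined robot + obstacle collision radius.
    """
    hits = 0
    for _ in range(n_samples):
        # Sample one possible obstacle trajectory by perturbing
        # the nominal velocity with Gaussian noise.
        vx = random.gauss(obstacle_vel[0], vel_sigma)
        vy = random.gauss(obstacle_vel[1], vel_sigma)
        ox, oy = obstacle_pos
        for rx, ry in robot_path:
            ox, oy = ox + vx, oy + vy  # advance the obstacle one step
            if (rx - ox) ** 2 + (ry - oy) ** 2 <= radius ** 2:
                hits += 1
                break  # one collision is enough for this sample
    return hits / n_samples
```

A risk-adaptive planner could then accept or reject candidate paths by comparing this estimate against a risk threshold that tightens or relaxes with the density of the environment.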

Language

English

Keywords

Robotics, Motion Planning, Machine Learning, Reinforcement Learning

Document Type

Dissertation

Degree Name

Computer Science

Level of Degree

Doctoral

Department Name

Department of Computer Science

First Committee Member (Chair)

Lydia Tapia

Second Committee Member

Aleksandra Faust

Third Committee Member

Melanie Moses

Fourth Committee Member

Meeko Oishi

Fifth Committee Member

Jared Saia

Included in

Robotics Commons
