Practical Resolution Methods for MDPs in Robotics Exemplified With Disassembly Planning
In this letter, we focus on finding practical resolution methods for Markov decision processes (MDPs) in robotics. Two of the main difficulties in applying MDPs to real-world robotics problems are: first, dealing with huge state spaces; and second, designing a method that is robust in the presence of dead ends. These complications restrict, or at least complicate, the application of methods such as value iteration, policy iteration, or labeled real-time dynamic programming (LRTDP). We see determinization and heuristic search as a way to work around these problems. In addition, we believe that many practical use cases offer the opportunity to identify hierarchies of subtasks and to solve smaller, simplified problems. We propose a decision-making unit that operates in a probabilistic planning setting through stochastic shortest path problems (SSPs), which generalize the most common types of MDPs. Our decision-making unit combines: first, automatic hierarchical organization of subtasks; and second, on-line resolution via determinization. We argue that several applications of planning benefit from these two strategies. We exemplify our approach with a robotized disassembly application. The disassembly problem is modeled in the Probabilistic Planning Domain Definition Language (PPDDL) and serves to define our experiments. Our results show several advantages of our method over LRTDP, such as a greater capacity to handle problems with large state spaces and state definitions that change when new fluents are discovered.
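To make the determinization idea concrete, the following is a minimal sketch, not the letter's actual algorithm: an all-outcomes determinization of a tiny, invented SSP (states, actions, probabilities, and costs are all illustrative), followed by a uniform-cost search on the relaxed deterministic model, as an on-line replanner might do between executions.

```python
import heapq

# Toy probabilistic model (illustrative only):
# transitions[state][action] = list of (probability, next_state, cost)
transitions = {
    "s0":    {"unscrew": [(0.8, "s1", 1.0), (0.2, "s0", 1.0)]},
    "s1":    {"pry":     [(0.6, "goal", 2.0), (0.4, "stuck", 2.0)]},
    "stuck": {"retry":   [(1.0, "s1", 3.0)]},
}

def determinize(transitions):
    """All-outcomes determinization: every probabilistic outcome of an
    action becomes its own deterministic action with the same cost."""
    det = {}
    for s, acts in transitions.items():
        det[s] = []
        for a, outcomes in acts.items():
            for i, (_p, ns, c) in enumerate(outcomes):
                det[s].append((f"{a}#{i}", ns, c))
    return det

def shortest_path(det, start, goal):
    """Uniform-cost search on the determinized model; the returned plan
    is a sketch that on-line replanning repairs when outcomes diverge."""
    frontier = [(0.0, start, [])]
    best = {}
    while frontier:
        cost, s, plan = heapq.heappop(frontier)
        if s == goal:
            return cost, plan
        if s in best and best[s] <= cost:
            continue
        best[s] = cost
        for a, ns, c in det.get(s, []):
            heapq.heappush(frontier, (cost + c, ns, plan + [a]))
    return float("inf"), None

cost, plan = shortest_path(determinize(transitions), "s0", "goal")
print(cost, plan)  # 3.0 ['unscrew#0', 'pry#0']
```

The search explores a deterministic relaxation of the stochastic problem, which is what makes determinization attractive for large state spaces: the relaxed plan is cheap to compute and is only revised when execution deviates from the assumed outcome.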