Combined Task and Motion Planning (CTAMP)
Tasks via Regular Languages (DFA)
Tasks via Linear Temporal Logic (LTL)
Tasks via Planning-Domain Definition Languages (PDDL)
Robotics today places a strong emphasis on increasing the autonomy of robots. A fundamental component of autonomy is a robot's ability to plan its motions in order to carry out assigned tasks. Whether the task is to search, inspect, navigate, or explore, it is generally abstracted into discrete, logical actions, each of which often requires substantial continuous motion planning to carry out. These settings require robots to reason and plan at multiple levels of discrete and continuous abstraction. The coupling of the discrete and the continuous, however, poses significant challenges, as discrete and continuous planning have traditionally been treated separately.
Drawing from AI, our research can use not only LTL but also PDDL, e.g., STRIPS, to specify high-level tasks. While LTL combines propositions via Boolean and temporal connectives, PDDL combines predicates via action schemas, which describe actions that change the discrete state of the world. LTL and PDDL are neither mutually exclusive nor equivalent; a further advantage of the proposed framework is that it can handle both.
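To make the STRIPS-style action schemas concrete, here is a minimal sketch of how a discrete state and an action with preconditions and add/delete effects can be modeled. The predicate and action names (`hand-empty`, `pickup`, etc.) are illustrative stand-ins, not taken from the papers or from any particular PDDL domain.

```python
# Sketch of a STRIPS-style action schema: a discrete state is a set of
# ground predicates; an action is applicable when its preconditions hold,
# and applying it rewrites the state via add/delete effects.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

    def applicable(self, state: frozenset) -> bool:
        # All preconditions must hold in the current state.
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        # Remove delete effects, then add the add effects.
        return (state - self.delete_effects) | self.add_effects


pickup = Action(
    name="pickup(block)",
    preconditions=frozenset({"hand-empty", "on-table(block)"}),
    add_effects=frozenset({"holding(block)"}),
    delete_effects=frozenset({"hand-empty", "on-table(block)"}),
)

state = frozenset({"hand-empty", "on-table(block)"})
next_state = pickup.apply(state) if pickup.applicable(state) else state
```

A discrete planner searches over sequences of such actions; in the framework above, each action in the resulting plan must then be realized by continuous, collision-free motion.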
To bridge the gap between the discrete and the continuous, the proposed framework couples the ability of motion planning to handle the complexity arising from high-dimensional robotic systems, nonlinear dynamics, and collision avoidance with the ability of discrete planning to take into account discrete specifications. The framework makes it possible to specify tasks via LTL or PDDL and automatically computes collision-free and dynamically-feasible motions that enable the robot to carry out the assigned tasks.
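As a hedged illustration of the kind of LTL task specification the framework accepts (this example is not taken from the papers), a "visit region 1, then region 2, while always avoiding obstacles" task can be written with the eventually and always operators over workspace propositions:

```latex
% Illustrative coverage-with-safety task; \pi_1, \pi_2, \pi_{\mathrm{obs}}
% are propositions that hold when the robot is in the corresponding region.
\varphi \;=\; \Diamond\,\bigl(\pi_1 \wedge \Diamond\,\pi_2\bigr) \;\wedge\; \Box\,\neg \pi_{\mathrm{obs}}
```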
The framework borrows from sampling-based motion planning the underlying idea of searching for a solution trajectory by selectively sampling and exploring the continuous space of feasible motions. In particular, sampling-based motion planning extends a tree in the continuous state space by adding new trajectories as tree branches, obtained by sampling input controls and propagating the robot's motion dynamics forward. Discrete planning guides the sampling-based motion planner by searching over both the task representation and a workspace decomposition to provide discrete plans as intermediate sequences of propositional assignments and discrete actions to be satisfied. The proposed approach samples the discrete space to shorten the discrete plans and to expand the search toward new propositions and discrete actions that enable the sampling-based motion planner to explore the continuous state space. Experiments with high-dimensional dynamical robot models performing various tasks given by LTL or PDDL show significant computational speedups over related work.
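The tree-expansion step described above can be sketched as follows. This is a simplified, illustrative version: the unicycle dynamics, uniform node selection, and control bounds are stand-ins for the high-dimensional models and guided selection used in the actual framework, and collision checking is omitted.

```python
# Sketch of sampling-based tree expansion: sample a control, propagate
# the dynamics forward, and add the trajectory endpoint as a new branch.
# Unicycle model and parameters are illustrative assumptions.

import math
import random


def propagate(state, control, dt=0.1, steps=10):
    """Forward-integrate simple unicycle dynamics (x, y, theta)."""
    x, y, theta = state
    v, omega = control
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return (x, y, theta)


def expand_tree(tree, n_iters=100, seed=0):
    """Grow the tree; each entry maps a state to its parent state."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        parent = rng.choice(list(tree))        # node selection (uniform here;
                                               # the framework uses discrete guidance)
        control = (rng.uniform(0.0, 1.0),      # sampled forward speed
                   rng.uniform(-1.0, 1.0))     # sampled turning rate
        endpoint = propagate(parent, control)  # new trajectory as a branch
        tree[endpoint] = parent
    return tree


tree = {(0.0, 0.0, 0.0): None}  # root the tree at the initial state
tree = expand_tree(tree)
```

In the full framework, node selection is not uniform: the discrete plan biases which tree nodes and which workspace regions get expanded, which is where the speedups over unguided search come from.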
- Plaku E and Le D (2016): "Interactive Search for Action and Motion Planning with Dynamics." Journal of Experimental and Theoretical Artificial Intelligence, vol. 28, pp. 849–869 [publisher] [preprint]
- McMahon J and Plaku E (2015): "Robot Motion Planning with Task Specifications via Regular Languages." Robotica, in press [publisher] [preprint]
- McMahon J and Plaku E (2014): "Sampling-Based Tree Search with Discrete Abstractions for Motion Planning with Dynamics and Temporal Logic." IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3726–3733 [publisher] [preprint]
- Plaku E (2012): "Planning in Discrete and Continuous Spaces: From LTL Tasks to Robot Motions." Springer LNCS Towards Autonomous Robotic Systems, vol. 7429, pp. 331–342 [publisher] [preprint]
- Plaku E (2012): "Path Planning with Probabilistic Roadmaps and Linear Temporal Logic." IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2269–2275 [publisher] [preprint]
- Plaku E and Hager GD (2010): "Sampling-based Motion and Symbolic Action Planning with Geometric and Differential Constraints." IEEE International Conference on Robotics and Automation, pp. 5002–5008 [publisher] [preprint]