
Path Planning in Configuration Spaces:

Distributed Sampling-based Roadmap of Trees (DSRT)

High-dimensional problems arising from complex robotic systems test the limits of current motion planners and call for efficient distributed planners that take full advantage of the available computational resources.

DSRT is a distributed implementation of the Sampling-based Roadmap of Trees (SRT) path planner. DSRT's design is based on a multiple-masters, multiple-clients distribution. The clients are responsible for tree expansions and connections, while the masters ensure that the load is distributed as evenly as possible among the clients. The masters arbitrate tree ownership and edge selection, maintain the connected-component data structure, and coordinate the activities of all the processors.

The distribution remains highly efficient, obtaining near-linear speedup on message-passing clusters even when the computation is distributed over hundreds of processors. DSRT makes it possible to solve high-dimensional problems that cannot be efficiently addressed with other path planners.

Algorithm Details

Tree Computations
// Executed by master M1
Q <-- empty set
for i = 1 to nrTrees do
    Wait for some tree representative configuration q to arrive
    Add q to Q
end for
Broadcast finish
GC = (VT, EC) <-- compute graph of candidate edges
Send GC to all other masters

// Executed by all clients C1,...,Cn and masters M2,...,Mk
trees <-- empty set
while finish has not been received do
    T <-- generate a tree
    Add T to trees
    Send the representative configuration q of T to master M1
end while

// Executed by masters M2,...,Mk
Send trees to clients to balance the load
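
As an illustration only, the Python sketch below (using mpi4py) mirrors the tree-computation stage above: rank 0 plays master M1 and collects tree representatives, while every other rank generates trees until the finish message arrives. The helpers generate_tree and representative, the tag values, and the termination handling are our assumptions, not DSRT's code.

import random
from mpi4py import MPI

NR_TREES = 100                 # assumed total number of trees
TAG_REPR, TAG_FINISH = 1, 2

def generate_tree(dim=6, size=20):
    # Stand-in for DSRT's tree expansion: a random root plus nearby samples.
    root = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    return [root] + [[x + random.gauss(0.0, 0.1) for x in root] for _ in range(size - 1)]

def representative(tree):
    # The planner keeps one representative configuration per tree; here, its root.
    return tree[0]

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:                       # master M1
    Q = []
    while len(Q) < NR_TREES:
        status = MPI.Status()
        q = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REPR, status=status)
        Q.append((q, status.Get_source()))     # remember which rank owns the tree
    for dest in range(1, comm.Get_size()):
        comm.send(None, dest=dest, tag=TAG_FINISH)
    while comm.Iprobe(source=MPI.ANY_SOURCE, tag=TAG_REPR):
        comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REPR)   # drain late representatives
else:                                          # clients C1,...,Cn and masters M2,...,Mk
    trees = []
    while not comm.Iprobe(source=0, tag=TAG_FINISH):
        T = generate_tree()
        trees.append(T)
        comm.send(representative(T), dest=0, tag=TAG_REPR)
    comm.recv(source=0, tag=TAG_FINISH)        # consume the finish message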
Tree Connections
// Executed by each master M1,...,Mk
CMi <-- clients managed by master Mi
W <-- CMi
while unprocessed edges remain in GC do
    for each client Cj in W do
        if there is an edge e = (T', T'') such that Cj owns both trees then
            Send e to Cj and remove Cj from W
        else
            Find trees that would create edges for Cj and notify
            the owners of those trees to send copies to Cj
    end for
    if a computed edge arrives from client Ch then
        Update the connected components and GC
        if Ch is in CMi then add Ch to W
end while
Broadcast finish
// Executed by all clients C1,...,Cn
while finish has not been received do
    if send(T, Ci) is received then
        Send a copy of tree T to Ci
    if recv(T, Ci) is received then
        Add T to the list of received trees
        if capacity is exceeded then
            T' <-- first tree in the list
            Send to the owner of T' the configurations and edges added to T'
            Delete T'
    if update(T, Ci) is received then
        Add the new configurations and the new edges to T
    if e = (T', T'') is received then
        Connect trees T' and T''
        Send the result to all the masters
end while
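
As a rough illustration of the master-side decision above (not the actual DSRT code), the sketch below assigns at most one candidate edge to each idle client that already owns both of its trees and defers the rest; owner and waiting are assumed bookkeeping structures for tree ownership and the set W.

def schedule(unprocessed_edges, owner, waiting):
    # owner maps each tree to the client that currently holds it;
    # waiting is the set W of idle clients managed by this master.
    ready, remaining = [], []
    for t1, t2 in unprocessed_edges:
        client = owner[t1]
        if client == owner[t2] and client in waiting:
            ready.append(((t1, t2), client))   # send e = (T', T'') to that client
            waiting.discard(client)
        else:
            remaining.append((t1, t2))         # later: request tree copies or retry
    return ready, remaining

# Example: edge (0, 1) can go straight to client 3; edge (1, 2) must wait.
ready, remaining = schedule([(0, 1), (1, 2)], owner={0: 3, 1: 3, 2: 4}, waiting={3})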

Data dependencies: Tree computations and random edge selections can be trivially parallelized since they have no dependencies. The computation of nearest neighbors for each tree is more challenging since it requires all tree representatives. Edge computations are not independent of each other: trees can change after an edge computation as new configurations are added to them, and computing an edge requires both of its trees, so edge computations cannot be efficiently distributed without some effort and may require trees to be sent from one client to another. Furthermore, computation pruning due to connected-component analysis introduces control-flow dependencies during edge computations, as illustrated in the sketch below. Our experiments with the sequential SRT implementation revealed that the bulk of the run time is spent in tree and edge computations.
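
A minimal union-find sketch (ours, not DSRT's code) shows why the pruning creates control-flow dependencies: an edge (Ta, Tb) only needs to be computed while the two trees lie in different components, so each successful connection changes which of the remaining candidate edges still need work.

class UnionFind:
    def __init__(self, num_trees):
        self.parent = list(range(num_trees))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already connected; the edge was redundant
        self.parent[rb] = ra
        return True

def prune(candidate_edges, components):
    # Drop edges whose endpoint trees already share a connected component.
    return [(a, b) for a, b in candidate_edges if components.find(a) != components.find(b)]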

Masters-clients architecture: A given set of processors P = {P1, P2, ..., Pp} is partitioned into a set of master processors M = {M1, M2, ..., Mk} and a set of client processors C = {C1, C2, ..., Cn}. Each client Ci is managed by some master Mj, so each master manages a set of clients. Since all the useful computation is done by the clients, each master is responsible for only a fraction of the clients in order to ensure a timely response to their needs. This design was chosen because we expect the number of clients to grow significantly. In our implementation, each master manages n/k clients.
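
A minimal sketch of one possible partition is shown below; treating the first k ranks as masters and assigning clients round-robin are our assumptions, since the design only requires that each master manage roughly n/k clients.

def partition(num_procs, num_masters):
    # First num_masters ranks act as masters, the rest as clients.
    masters = list(range(num_masters))
    clients = list(range(num_masters, num_procs))
    # Round-robin assignment so every master manages about n/k clients.
    managed = {m: clients[i::num_masters] for i, m in enumerate(masters)}
    return masters, clients, managed

masters, clients, managed = partition(num_procs=16, num_masters=2)
# managed[0] -> clients handled by the first master, managed[1] -> by the second.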

Tree computations: During this stage, all processors except master M1 compute trees and send the tree representatives to master M1 until a predefined total number of trees has been computed. Each tree computed by client Ci is stored locally on Ci, while the set of representatives is stored on M1. The trees computed by masters M2 through Mk are sent to the clients that have computed the fewest trees in order to balance the load as much as possible. During the tree-computation stage, the communication is limited and non-blocking in order to distribute the computational load efficiently with little overhead.
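
The balancing step can be sketched as follows (our illustration, assuming a simple count-based heuristic): each tree computed by a master goes to the client that currently holds the fewest trees.

import heapq

def balance(master_trees, trees_per_client):
    # trees_per_client: dict mapping client id -> number of trees it computed locally.
    heap = [(count, client) for client, count in trees_per_client.items()]
    heapq.heapify(heap)
    assignment = []                            # (tree, destination client) pairs
    for tree in master_trees:
        count, client = heapq.heappop(heap)    # least-loaded client so far
        assignment.append((tree, client))
        heapq.heappush(heap, (count + 1, client))
    return assignment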

Candidate edges: Master M1 computes the graph of candidate edges GC = (VT, EC) as in SRT and sends it to all the other masters. This stage is not distributed since it accounts for only a small fraction of the total computation time and relies on complex search data structures.
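
A brute-force sketch of how such a candidate-edge graph can be formed is given below; pairing each representative with its closest and a few random neighbors follows the spirit of SRT, while the parameter values and the plain Euclidean metric are assumptions for illustration, and the actual implementation relies on more sophisticated nearest-neighbor data structures.

import math
import random

def candidate_edges(representatives, k_close=5, k_random=2):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    edges = set()
    n = len(representatives)
    for i, q in enumerate(representatives):
        # Sort the other trees by distance of their representatives to q.
        others = sorted((j for j in range(n) if j != i),
                        key=lambda j: dist(q, representatives[j]))
        for j in others[:k_close]:             # closest neighbors
            edges.add((min(i, j), max(i, j)))
        for j in random.sample(others, min(k_random, len(others))):
            edges.add((min(i, j), max(i, j)))  # a few random neighbors
    return edges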

Tree connections: The connections between trees are computed by the clients, while the masters decide which candidate edges should be computed. Let e = (T', T'') be the edge that master Mi sent to client Cj. If both trees are currently owned by Cj, then Cj runs the tree-connection algorithm and, if the connection is successful, sends the result to all the masters. Otherwise, client Cj has to wait until it receives copies of the trees it does not own from their respective owners. During this time, Cj completes send operations that help other clients compute their assigned edges.
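
A simplified stand-in for the client-side connection primitive is sketched below: it tries straight-line paths between the closest pairs of configurations from the two trees, using a hypothetical collision_free validity checker; the actual planner may also grow the trees toward each other when direct connections fail.

import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def connect_trees(tree_a, tree_b, collision_free, attempts=10, resolution=0.05):
    # Consider only the closest configuration pairs between the two trees.
    pairs = sorted(((qa, qb) for qa in tree_a for qb in tree_b),
                   key=lambda p: dist(*p))[:attempts]
    for qa, qb in pairs:
        steps = max(1, int(dist(qa, qb) / resolution))
        path = [[qa[i] + (qb[i] - qa[i]) * t / steps for i in range(len(qa))]
                for t in range(steps + 1)]
        if all(collision_free(q) for q in path):
            return (qa, qb)        # successful edge, to be reported to the masters
    return None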

Related Publications

  • Plaku E, Bekris KE, Chen BY, Ladd AM, and Kavraki LE (2005): "Sampling-Based Roadmap of Trees for Parallel Motion Planning." IEEE Transactions on Robotics, vol. 21(4), pp. 587--608
  • Plaku E and Kavraki LE (2005): "Distributed Sampling-Based Roadmap of Trees for Large-Scale Motion Planning." IEEE International Conference on Robotics and Automation, pp. 3879--3884
  • Akinc M, Bekris KE, Chen BY, Ladd AM, Plaku E, and Kavraki LE (2003): "Probabilistic Roadmaps of Trees for Parallel Computation of Multiple Query Roadmaps." Springer Tracts in Advanced Robotics, vol. 15, pp. 80--89
  • Bekris KE, Chen BY, Ladd AM, Plaku E, and Kavraki LE (2003): "Multiple Query Motion Planning using Single Query Primitives." IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 656--661