Opened 8 years ago

Last modified 6 years ago

#3104 new enhancement

Parallelize tools like r.cost

Reported by: belg4mit
Owned by: grass-dev@…
Priority: normal
Milestone: 7.6.2
Component: Raster
Version: unspecified
Keywords: r.cost
Cc:
CPU: Unspecified
Platform: Unspecified

Description

Currently QGIS will peg one of my cores while running r.cost. This leaves the system usable, but it also greatly slows down the execution time. Traversal of the cost raster should be readily parallelizable, and could divide the run time by the number of cores allocated.

Change History (14)

comment:1 by martinl, 8 years ago

Component: Default → Raster
Keywords: r.cost added
Milestone: 7.0.5 → 7.3.0

in reply to:  description comment:2 by mmetz, 8 years ago

Replying to belg4mit:

Currently QGIS will peg one of my cores while running r.cost. This leaves the system usable, but it also greatly slows down the execution time. Traversal of the cost raster should be readily parallelizable, and could divide the run time by the number of cores allocated.

Can you provide hints about how r.cost could be parallelized? That would help a lot.

comment:3 by belg4mit, 8 years ago

Hmm, the algorithm used is different from what I imagined. However, one possibility, although it would not necessarily scale easily beyond two cores, would be to have a pair of threads iterating over the raster cells: one beginning at the top left and the other working backwards from the bottom right.

in reply to:  3 comment:4 by mmetz, 8 years ago

Replying to belg4mit:

Hmm, the algorithm used is different from what I imagined. However, one possibility, although it would not necessarily scale easily beyond two cores, would be to have a pair of threads iterating over the raster cells: one beginning at the top left and the other working backwards from the bottom right.

r.cost, like least-cost path search methods in general, does not iterate over items. Instead, r.cost starts with the start cells, calculates costs to their neighbors, and continues with the cell with the least cost, calculating the costs to its neighbors, provided those neighbors have not yet been processed. It stops when all cells have been processed or when all stop points have been reached. That means random access to cells: the order in which cells are processed is determined by the cost to reach a cell.
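For readers unfamiliar with the approach, the search described above is essentially a Dijkstra-style accumulation over the grid. A minimal sketch in Python (a toy 4-neighbor grid; the move cost is taken here as the average of the two cell costs, which is an assumption of this sketch — the real r.cost is written in C and additionally handles diagonal and knight's-move neighbors, null cells, and segment-based memory):

```python
import heapq

def least_cost_surface(cost, starts):
    """Dijkstra-style accumulation: always expand the cell that can
    currently be reached at the lowest total cost."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * cols for _ in range(rows)]
    pq = []  # priority queue of (accumulated cost, row, col)
    for r, c in starts:
        acc[r][c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > acc[r][c]:
            continue  # stale queue entry; cell was settled more cheaply
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # move cost: average of the two cell costs (assumption)
                nd = d + (cost[r][c] + cost[nr][nc]) / 2.0
                if nd < acc[nr][nc]:
                    acc[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return acc
```

The key point for parallelization is the `heappop` line: only the single globally cheapest unsettled cell may be expanded next, which is why the processing order cannot be predicted or partitioned up front.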

comment:5 by belg4mit, 8 years ago

I understand your point, but that's still a form of iteration, simply over a continuously updated list rather than pixel indices (which does occur as well). Regardless, why couldn't the start cells be divided up among multiple threads? E.g., those on the top of the image for core 0, and those on the bottom for core 1.

in reply to:  5 comment:6 by mmetz, 8 years ago

Replying to belg4mit:

I understand your point, but that's still a form of iteration, simply over a continuously updated list rather than pixel indices (which does occur as well). Regardless, why couldn't the start cells be divided up among multiple threads? E.g., those on the top of the image for core 0, and those on the bottom for core 1.

The list needs to be updated using the cell with the current lowest cost. There is always only one such cell; processing, say, the four cells with the lowest costs in separate threads would corrupt the result (after one cell is processed, the set of the four cheapest cells might change). In other words, the order of iteration is not predictable.

You could start a separate thread for each start point, but then you would need to create an individual temporary output for each start cell and process all cells once per start cell. At the end, the different outputs would need to be merged, assigning to each cell the cost and the path to the closest start cell. In the current single-thread version, all start cells are loaded at the beginning and processed together, so each cell needs to be processed only once, regardless of the number of start cells. Thus with your suggestion, more cores could be used, but total processing time would be longer.
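The merge step described above is a per-cell minimum over the independently computed cost surfaces. A short sketch (hypothetical helper name; assumes each thread has already produced a full single-source cost surface as a list of rows):

```python
def merge_cost_surfaces(surfaces):
    """Combine per-start-cell cost surfaces into the multi-source result:
    each cell gets the cost of reaching its closest start cell."""
    rows, cols = len(surfaces[0]), len(surfaces[0][0])
    return [[min(s[r][c] for s in surfaces) for c in range(cols)]
            for r in range(rows)]
```

The merge itself is cheap and embarrassingly parallel; the cost of this scheme lies in the redundant searches, since every cell is visited once per start point instead of once in total.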

comment:7 by belg4mit, 8 years ago

Apparently others have done work on this; however, C is not my forte, so I'm not sure about adapting these:

http://calhoun.nps.edu/bitstream/handle/10945/27274/solvingweightedr00garc.pdf
http://www.sciencedirect.com/science/article/pii/092523129400018N

in reply to:  7 comment:8 by mmetz, 8 years ago

Replying to belg4mit:

Apparently others have done work on this; however, C is not my forte, so I'm not sure about adapting these:

http://calhoun.nps.edu/bitstream/handle/10945/27274/solvingweightedr00garc.pdf
http://www.sciencedirect.com/science/article/pii/092523129400018N

Thanks for the references! There are more helpful references in the links you provided. Parallelizing r.cost according to these references would be a nice Google Summer of Code project.

comment:9 by martinl, 8 years ago

Milestone: 7.3.0 → 7.4.0

Milestone renamed

comment:10 by neteler, 7 years ago

Milestone: 7.4.0 → 7.4.1

Ticket retargeted after milestone closed

comment:11 by neteler, 7 years ago

Milestone: 7.4.1 → 7.4.2

comment:12 by martinl, 6 years ago

Milestone: 7.4.2 → 7.6.0

All enhancement tickets should be assigned to 7.6 milestone.

comment:13 by martinl, 6 years ago

Milestone: 7.6.0 → 7.6.1

Ticket retargeted after milestone closed

comment:14 by martinl, 6 years ago

Milestone: 7.6.1 → 7.6.2

Ticket retargeted after milestone closed
