Pathfinding Routine & Libraries for Slower Areas

  • Thread starter: Hepth
  • Tags: Libraries

Discussion Overview

The discussion revolves around pathfinding algorithms and libraries suitable for navigating grids with varying traversal costs, particularly in the context of a hobby project involving a 200x200 grid. Participants explore different approaches to optimize pathfinding in areas with slow traversal, such as mud, and consider the potential for parallelization.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant modified the A* algorithm to incorporate weights for different grid squares, seeking to identify if this approach has an official name.
  • Another participant suggests using a genetic algorithm to iteratively find the best path by scoring potential solutions across generations.
  • A participant describes their experience with light mapping in games, proposing that it relates to the problem at hand by allowing for variable weights on grid squares.
  • Discussion includes the idea of treating the grid as a variable density grid, where some squares have different traversal costs based on density.
  • One participant proposes a method for parallelizing the pathfinding process by dividing the grid into patches and optimizing paths within those patches, while acknowledging the challenges of merging these paths efficiently.
  • Another participant questions whether the problem is simply a shortest route problem, suggesting that traffic flow analysis might provide insights into optimizing routes based on time rather than distance.
  • It is noted that all such problems can be framed as shortest path problems with appropriate metrics, with Dijkstra's method mentioned as a classic algorithm that does not easily parallelize.

Areas of Agreement / Disagreement

Participants express a range of views on the best approach to pathfinding, with no consensus on a single method or algorithm. There is acknowledgment of the complexity involved in optimizing paths in grids with variable costs, and differing opinions on the applicability of various algorithms and techniques.

Contextual Notes

Participants highlight limitations related to the parallelization of algorithms and the challenges of merging paths from different sections of the grid. The discussion also touches on the need for a suitable metric for evaluating paths, which remains unresolved.

Hepth
I'm looking for the name of a pathfinding routine, or some libraries, that can handle slow areas.

I know what A* is for binary (passable/blocked) paths, but what about paths with regions you might want to avoid, though only up to a certain cost? For example, if there is mud that slows you to half speed, there are times when it's more efficient to cross that "mud" anyway.

I took an A* implementation and modified it so I could put weights at each point, and it works, but I'm wondering whether this has an official name. Maybe a programmer can help me.

I'm doing a hobby project where I need 2D pathfinding over a 200x200 grid in which some squares "cost" more than others, and if it can be parallelized, even better. It's going to feed a neural net, so I have to think about optimization too.
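For reference, a minimal sketch of A* over a grid with per-cell traversal costs (the "mud" idea), assuming 4-directional movement and that a cell's cost is paid on entering it; the heuristic is Manhattan distance scaled by the cheapest cell so it stays admissible. Function and variable names are illustrative, not from any particular library:

```python
import heapq

def astar_weighted(cost, start, goal):
    """A* over a grid; cost[r][c] is the price of entering cell (r, c)."""
    rows, cols = len(cost), len(cost[0])
    cheapest = min(min(row) for row in cost)

    def h(cell):
        # Manhattan distance scaled by the cheapest cell keeps the heuristic admissible.
        return cheapest * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

    g = {start: 0}          # best known cost to reach each cell
    came_from = {}
    frontier = [(h(start), start)]
    while frontier:
        f, current = heapq.heappop(frontier)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if f > g[current] + h(current):
            continue  # stale queue entry; a cheaper route was found later
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g[current] + cost[nxt[0]][nxt[1]]
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = current
                    heapq.heappush(frontier, (ng + h(nxt), nxt))
    return None  # goal unreachable
```

With a cheap ring around an expensive "mud" cell, the returned path detours around the mud, exactly the behaviour described above.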

Thanks!
 
What about using a genetic algorithm to find the best path?

Each candidate solution would be a potential path that you score, keeping those above some threshold for the next generation. You iterate through several generations and finally choose the path with the best score.
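A toy sketch of that idea, under heavy simplifying assumptions: paths are fixed-length strings of down/right moves on a small grid, fitness rewards cheap cells and heavily penalises missing the goal, and selection keeps the top half with point mutation. The name `evolve_paths`, the fitness weights, and all parameters are made up for illustration:

```python
import random

def evolve_paths(cost, start, goal, pop_size=30, genome_len=8,
                 generations=40, seed=0):
    """Toy genetic algorithm over down/right move sequences on a grid."""
    rng = random.Random(seed)
    rows, cols = len(cost), len(cost[0])

    def walk(genome):
        # Apply the moves, clipping at the grid border.
        r, c = start
        cells = [(r, c)]
        for m in genome:
            if m == 'D' and r + 1 < rows: r += 1
            elif m == 'R' and c + 1 < cols: c += 1
            cells.append((r, c))
        return cells

    def fitness(genome):
        cells = walk(genome)
        travel = sum(cost[r][c] for r, c in cells[1:])
        end = cells[-1]
        miss = abs(end[0] - goal[0]) + abs(end[1] - goal[1])
        return -travel - 100 * miss   # big penalty for not reaching the goal

    pop = [[rng.choice('DR') for _ in range(genome_len)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        survivors = pop[:pop_size // 2]        # elitism: keep the top half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] = rng.choice('DR')  # point mutation
            children.append(child)
        pop = survivors + children
    pop.sort(key=fitness, reverse=True)
    return walk(pop[0]), history
```

Because the best genome always survives unchanged, the best score per generation never decreases.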
 
Hepth said:
I took an A* and modified it so I could put weights at each point, and it works, but I'm wondering if this has an official name? Maybe a programmer can help me.
Lightmapping, bitmapping...
I've never heard it given a name, but I know it's very common in games. I wrote a series of articles many years ago about using maps for cheap pseudo-intelligence. I always called it light mapping because that's what I originally worked it out for: figuring out how to light a character as it moved through space.

You can have your blocks actually have multiple variables and a function that returns their weight.
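As a sketch of "blocks with multiple variables and a function that returns their weight" (the field names and terrain table here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One grid square with several properties feeding a single weight."""
    terrain: str
    wetness: float = 0.0  # 0.0 = dry, 1.0 = soaked

    def weight(self):
        base = {'grass': 1.0, 'mud': 2.0, 'rock': 5.0}[self.terrain]
        return base * (1.0 + self.wetness)  # wet ground is slower still
```

A pathfinder then calls `cell.weight()` instead of reading a fixed cost, so the map's variables can change at runtime without touching the search code.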
 
Yeah, it's sort of like a greyscale bumpmap.
In a mathematical sense it's like having a variable-density grid, where some squares cost double to cross because they have a denser spatial grid.
If it were in a video game (which it isn't), it'd be like having a bot cross water or jump a wall to find the minimum-time path.

I could see how it's like lightmapping; it's basically ray tracing with different refraction indices. I'll look into that.

In the end I could just program it myself, as I have done. But I know A* isn't the fastest when there's symmetry, and I thought surely someone else has done this really well: give it a greyscale bitmap and it finds the path of least resistance.

I'd rather not do it in a random way, but a path-finding way that builds from the diagonal out.
 
About parallelising, it seems like it should be possible.
You're looking at a very symmetric graph. You assign a weight to each vertex (the cost to traverse the corresponding square).

How to parallelize this? I'd say divide your square map into patches (n x n or n x m, it doesn't really matter).
In each patch look for the optimal path.

Now comes the hard part: stitching these paths together in a way that optimizes your metric (whatever it is: time, fuel consumption, ...).
This is hard for several reasons; some sections can be ignored, but you'll almost always lose some efficiency.
[Sketch: the map divided into smaller grids, each containing a locally optimal (black) path, plus a globally better grey path.]


I drew a simple sketch here. Suppose each of the smaller grids containing a black path is such a section.
As you can see they don't align.

Suppose the best path possible is the grey one (there's a big swamp at the bottom :) ).
In this case you'll want to go from the upper-left patch to its right neighbour.
However, this isn't that easy, since the data may well live on different CPUs/cores and in different caches.
A communication step is necessary.

That's the very naive, and quite possibly much slower, algorithm I just came up with.
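The patch-decomposition step described above might be sketched like this (a hypothetical helper; each patch is a half-open `(r0, c0, r1, c1)` rectangle, with edge patches allowed to be smaller):

```python
def patches(rows, cols, n):
    """Split a rows x cols grid into n x n patches (the last ones may be smaller)."""
    for r0 in range(0, rows, n):
        for c0 in range(0, cols, n):
            yield (r0, c0, min(r0 + n, rows), min(c0 + n, cols))
```

Each patch could then be handed to a separate worker for its local search; the hard merging step discussed above is deliberately left out here.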

Another thing you can do is create some sort of distance function for comparing paths.
Say you have a path A that's terrible. You'll know that a path B that's "close to A" has a high probability of being bad as well. (Close/far depends on the norm you define, so to speak.)
If another path C is very good (or merely good), you can restrict your search to its surroundings. That way you always have a good solution (C) and might be able to improve on it by perturbing the path.
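One cheap way to build such a distance function, assuming paths are lists of grid cells, is the Jaccard distance between the sets of cells the paths visit; this is just one possible choice of "norm", picked here for illustration:

```python
def path_distance(a, b):
    """Jaccard distance: fraction of visited cells the two paths do NOT share."""
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / len(sa | sb)
```

Identical paths get distance 0, fully disjoint ones get 1, so "restrict the search to C's surroundings" becomes "only keep candidates with `path_distance(c, candidate)` below some radius".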

The latter is easier to parallelize with OpenMP, which is, as far as I know, the quickest way to try parallelizing some code.
If you want to go further into the computer-science side, estimating the cost of your algorithm and benchmarking the code, I'm more inclined towards BSP, because it has a very simple cost model and can serve as a gentle introduction to parallel computing.
 
Isn't this just a shortest route problem?
 
All such problems are essentially shortest-path problems with an appropriate metric applied to each route section, e.g. transit time -> fastest route, transit cost -> cheapest route, length -> shortest route.

Dijkstra's method is the classic algorithm, but it does not parallelise easily.
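For reference, a minimal Dijkstra sketch over such a weighted grid (same assumptions as before: 4-directional moves, cost paid on entering a cell; names are illustrative):

```python
import heapq

def dijkstra_grid(cost, start):
    """Cheapest cost from start to every cell, where entering (r, c) costs cost[r][c]."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if d > dist[(r, c)]:
            continue  # stale entry; this cell was already settled more cheaply
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return dist
```

It is the same relaxation loop as the weighted A* discussed earlier, just without the heuristic, which is why A* with an admissible heuristic returns the same optimal costs while usually expanding fewer cells.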
 
