Proof: All Blind-Memoryless search strategies are equivalent

AI Thread Summary
The discussion centers on whether blind-memoryless search strategies are equivalent in performance, measured by time to goal and domain coverage. It is suggested that no blind-memoryless search strategy can outperform another, a notion that aligns with the No Free Lunch theorems: all algorithms searching for an extremum of a cost function perform equally when averaged over all possible cost functions, so any advantage on one problem domain is offset by a disadvantage on another. The proof idea is that, for any algorithm, the landscapes it searches well are counterbalanced by perverse landscapes that mislead it. The conversation points to Wolpert and Macready's theorems as the supporting result.
Eidos
Hi All

Is there a proof that all blind-memoryless search strategies are equivalent?

By equivalent I mean that no blind-memoryless search strategy can outperform any other in terms of time to goal and domain coverage.

It seems to me that this is intuitively true.

How would I go about proving it?

Thanks :-)
 
You might want to look at the No Free Lunch theorems, which prove something stronger:

All algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. So, for any search/optimization algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class.
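To make that concrete, here is a minimal brute-force sketch of the averaging claim. The setup is my own toy example, not from Wolpert and Macready's paper: it enumerates every cost function on a four-point domain with three possible values and compares two fixed blind search orders by the number of evaluations each needs to first hit the global minimum.

```python
from itertools import product

# Toy No Free Lunch check (hypothetical setup): enumerate every cost
# function f: {0,1,2,3} -> {0,1,2} and compare two fixed blind search
# orders by how many evaluations each needs to first hit f's minimum.
DOMAIN = range(4)
VALUES = range(3)

def steps_to_minimum(order, f):
    """Evaluations until `order` first visits a global minimum of f."""
    best = min(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step

order_a = [0, 1, 2, 3]  # left-to-right scan
order_b = [2, 0, 3, 1]  # an arbitrary fixed permutation

total_a = total_b = 0
for f in product(VALUES, repeat=len(DOMAIN)):  # all 3^4 = 81 landscapes
    total_a += steps_to_minimum(order_a, f)
    total_b += steps_to_minimum(order_b, f)

print(total_a / 81.0, total_b / 81.0)  # identical averages
```

The particular permutation chosen for order_b doesn't matter; any fixed order produces the same average, which is the NFL claim in miniature.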

The proof works roughly by showing that, for any algorithm you choose, each search landscape the algorithm does well on is balanced by a perverse landscape (somewhere out in the set of all possible landscapes) that locally looks like the favourable one but is actually tricking the algorithm into moving away from the good points.
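For blind (non-adaptive) searchers specifically, that pairing can be exhibited directly: relabel the domain by the permutation that carries one search order onto the other, and every landscape gets a "twin" on which the second order performs exactly as the first did on the original. Here's a self-contained check under the same hypothetical toy setup as above:

```python
from itertools import product

# Pairing argument for blind (non-adaptive) search: the permutation
# sending order_a's k-th pick to order_b's k-th pick turns each
# landscape f into a twin g on which order_b sees exactly the value
# sequence that order_a saw on f.
DOMAIN = range(4)
VALUES = range(3)

def steps_to_minimum(order, f):
    best = min(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step

order_a = [0, 1, 2, 3]
order_b = [2, 0, 3, 1]
perm = {a: b for a, b in zip(order_a, order_b)}  # step-k pick of A -> step-k pick of B

def twin(f):
    """Relabeled landscape g with g[perm[x]] = f[x]."""
    g = [None] * len(DOMAIN)
    for x, v in enumerate(f):
        g[perm[x]] = v
    return tuple(g)

# twin() is a bijection on landscapes, so matching per-landscape
# performance implies matching performance histograms overall.
for f in product(VALUES, repeat=len(DOMAIN)):
    assert steps_to_minimum(order_a, f) == steps_to_minimum(order_b, twin(f))
print("pairing verified on all 81 landscapes")
```

Note that this relabeling trick only covers the blind case the original question asks about; Wolpert and Macready's theorems also handle adaptive algorithms, which is where the perverse-landscape construction is really needed.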

I think what you want to prove may actually follow from Wolpert and Macready's existing theorems here?
 
Coin said:
You might want to look at the No Free Lunch theorems, which prove something stronger... I think what you want to prove may actually follow from Wolpert and Macready's existing theorems here?

Thanks very much ^^
That's exactly what I was looking for.
 