Is the Subset Sum Problem Truly Solved by Generating Large Random Sets?

  • Thread starter: LolipopDragon
  • Start date
  • Tags: P vs NP, Sum
LolipopDragon
Hello,

My hobby is designing algorithms, especially data compression algorithms, but when I can't find a solution to a problem I usually go find myself a different problem to solve, because it helps me think differently or maybe lights a bulb about the original one. Today I stumbled on the P vs NP problem for the first time. Five minutes into reading Wikipedia I found the Subset Sum problem https://en.wikipedia.org/wiki/Subset_sum_problem which Wikipedia mentions is an important problem and is NP-complete. So I decided to try to write an algorithm that quickly finds the elements of any set of random integers that sum to any random number S. First I generated an array of one million random numbers between one and one million, then told the computer to generate a random target S, also between one and one million, then applied my algorithm, which finds elements in the array that sum to S in seconds. The bigger the set of RANDOM numbers, the easier it is to find a solution!
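For reference, here is a minimal sketch of the kind of experiment described above, in Python. The original algorithm isn't shown in the thread, so the search step below is only an illustrative stand-in: at this density a one- or two-element solution almost always exists, which is part of why such instances feel easy.

```python
import random

N = 1_000_000          # how many random numbers to generate
MAX_VALUE = 1_000_000  # upper bound on each number and on the target S

numbers = [random.randint(1, MAX_VALUE) for _ in range(N)]
target = random.randint(1, MAX_VALUE)

values = set(numbers)  # a draw this dense covers roughly 63% of 1..MAX_VALUE

# Illustrative stand-in for a solver: look for a one- or two-element
# solution, which almost always exists in an instance this dense.
if target in values:
    print("solution:", [target])
else:
    for a in values:
        b = target - a
        if 0 < b < target and b != a and b in values:
            print("solution:", [a, b])
            break
    else:
        print("no 1- or 2-element solution found (unlikely at this density)")
```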

So the answer seemed too simple and easy. My question is: what am I missing or doing wrong here?
 
So, where's your proof that your algorithm runs in polynomial time?
Secondly, a million is a very small number. Efficient algorithms are known for small N. Try your algorithm with numbers around, say, 2 raised to the power of a million. I doubt that your algorithm will terminate.
Thirdly, you said you generated a million numbers, but with an upper bound of a million on each number. Well, there's a very high probability that you generated most of the numbers from 1 to 1,000,000 at least once. So yeah, obviously any algorithm will be fast on such an instance. Try again, but remove the million upper limit.

Edit: This
The bigger the set of RANDOM numbers, the easier it is to find a solution!
indicates that it's the third point that's making it easy for you.
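To put numbers on the third point: with a million draws between 1 and 1,000,000, roughly a 1 - 1/e ≈ 63% fraction of all possible values shows up at least once, so the set nearly covers the whole range. A quick check, sketched in Python:

```python
import random

N = 1_000_000          # how many numbers are drawn
MAX_VALUE = 1_000_000  # upper bound on each number

drawn = {random.randint(1, MAX_VALUE) for _ in range(N)}

# The probability a given value is never drawn is (1 - 1/MAX_VALUE)**N ~ 1/e.
expected_coverage = 1 - (1 - 1 / MAX_VALUE) ** N
print(f"expected fraction of 1..{MAX_VALUE} covered: {expected_coverage:.3f}")
print(f"observed fraction covered: {len(drawn) / MAX_VALUE:.3f}")
```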
 
From the Wikipedia article:

"For the case that each xi is positive and bounded by a fixed constant C, Pisinger found a linear time algorithm having time complexity O(NC)."
 
Samy_A said:
From the Wikipedia article:

"For the case that each xi is positive and bounded by a fixed constant C, Pisinger found a linear time algorithm having time complexity O(NC)."
Does this mean I should not pursue this any longer?
pwsnafu said:
So, where's your proof that your algorithm runs in polynomial time?
Secondly, a million is a very small number. Efficient algorithms are known for small N. Try your algorithm with numbers around, say, 2 raised to the power of a million. I doubt that your algorithm will terminate.
Thirdly, you said you generated a million numbers, but with an upper bound of a million on each number. Well, there's a very high probability that you generated most of the numbers from 1 to 1,000,000 at least once. So yeah, obviously any algorithm will be fast on such an instance. Try again, but remove the million upper limit.

Edit: This

The bigger the set of RANDOM numbers, the easier it is to find a solution!

indicates that it's the third point that's making it easy for you.

Thanks for your input, I will remove the upper bounds and test again. One thing, though:

I don't know where or how you came up with that number (2 to the power of one million, a number with around 300,000 digits to store!), but I don't think any PC or any amount of RAM could handle even storing a number of that size to perform operations on it. The maximum array size in my programming language is 16 million; I could use multiple arrays and cycle through them, but even then the number wouldn't come close to 2 to the power of one million. Anyway, I think I've cleared my head and it's time to head back to data compression...

Edit: Yes, you are correct; after removing the upper limits the algorithm did not yield the desired results...
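As a quick sanity check on the size estimate above (a small Python calculation, just to put numbers on it): 2 to the power of one million is a single integer with about 301,030 decimal digits, i.e. roughly 125 kilobytes in binary.

```python
import math

n = 2 ** 1_000_000                 # arbitrary-precision integer, fits easily in RAM
print(n.bit_length())              # 1000001 bits, about 125 KB
print(math.floor(1_000_000 * math.log10(2)) + 1)  # 301030 decimal digits
```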
 
LolipopDragon said:
Does this mean I should not pursue this any longer?
I don't know your algorithm. Just to mention: you are not allowed to use random guesses.
 
LolipopDragon said:
Does this mean I should not pursue this any longer?
Pisinger's algorithm, published in 1995, does seem to address the case you treated. It's an interesting paper, by the way; if you like this subject you may enjoy reading it.
 
Namaste & G'day Postulate: A strongly-knit team wins on average over a less knit one Fundamentals: - Two teams face off with 4 players each - A polo team consists of players that each have assigned to them a measure of their ability (called a "Handicap" - 10 is highest, -2 lowest) I attempted to measure close-knitness of a team in terms of standard deviation (SD) of handicaps of the players. Failure: It turns out that, more often than, a team with a higher SD wins. In my language, that...
Hi all, I've been a roulette player for more than 10 years (although I took time off here and there) and it's only now that I'm trying to understand the physics of the game. Basically my strategy in roulette is to divide the wheel roughly into two halves (let's call them A and B). My theory is that in roulette there will invariably be variance. In other words, if A comes up 5 times in a row, B will be due to come up soon. However I have been proven wrong many times, and I have seen some...