About a year ago I came up with an idea for a compression algorithm which could potentially compress any type of data (pictures, movies, text, etc.) with equal ability, and could potentially compress it to 1/1000 or less of its original size. It is such a simple idea that I am sure that if it were practical it would already have been done.

It is based on a very simple idea; here is an example. Let's say a certain picture file looks like this in binary:

1111111000100100001100101101011111110000000001010100100100001011010110000101101011111110100110010110001100011000000001100010010111110110100000111000001111010111001000000011100101010000000111110010101010010100110011111000
1000111011101

That is 233 bits long (I know a real picture file would be far longer; this is just for illustration purposes). Now let's imagine that you wanted to compress this to a much smaller size. My idea would do it like this: it would treat the file as if it were one huge number, and then it would try to find a mathematical equivalent of it that is much shorter. For example, the 233-bit number above could be written as (3^147)+2, which is only 72 bits even when allowing 8 bits per digit or math symbol, and that could be optimized further with some sort of special encoding that covers only digits and mathematical symbols. You can try it yourself at http://world.std.com/~reinhold/BigNumCalc.html
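The arithmetic in the example is easy to check directly. Here is a small sanity check in Python (the variable names are mine, just for illustration):

```python
# Sanity-check the example: 3^147 + 2 really is a 233-bit number,
# while the textual expression "(3^147)+2" needs far fewer bits.
n = 3**147 + 2
print(n.bit_length())        # bits in the raw number -> 233

expression = "(3^147)+2"
print(len(expression) * 8)   # bits at 8 bits per character -> 72
```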

And the algorithm could use all kinds of mathematical operations to reduce the size. I am pretty sure it should work in theory, because any number can probably be written in some mathematical form that is much shorter. The only question is whether it is practical, and my guess is no: the search resembles cracking a 128-bit encryption key, which would take billions of years by brute force. The difference here is that, with a proper algorithm, it should be possible to narrow down the candidate expressions quickly, because after each guess you know exactly how far off you were.
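To make the search concrete, here is a brute-force sketch of one restricted version of the idea: treat the file as an integer n and, for each exponent, look at the two nearest integer bases, keeping whichever expression of the form base^exp+delta comes out shortest as text. The function names (`iroot`, `shortest_power_form`) are mine, and this is only an illustration of the search, not a practical compressor:

```python
def iroot(n, k):
    """Largest integer r with r**k <= n, found by bisection."""
    lo, hi = 0, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def shortest_power_form(n):
    """Search for an expression base^exp+delta equal to n whose
    textual form is shorter than the decimal digits of n itself."""
    best = str(n)                       # fallback: the raw number
    for exp in range(2, n.bit_length() + 1):
        root = iroot(n, exp)
        for base in (root, root + 1):   # nearest bases from below and above
            if base < 2:
                continue
            delta = n - base ** exp
            if delta == 0:
                form = f"{base}^{exp}"
            elif delta > 0:
                form = f"{base}^{exp}+{delta}"
            else:
                form = f"{base}^{exp}{delta}"  # delta carries its own minus sign
            if len(form) < len(best):
                best = form
    return best
```

Run on the 233-bit number from the example, this finds a seven-character form (it happens to return 27^49+2, since 27^49 = 3^147) instead of the 71 decimal digits of the raw number. Of course, numbers that land close to a power are the lucky cases; this sketch says nothing about how often that happens.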

Has anyone ever heard of a compression algorithm similar to this, or at least an idea for one? Or do you know of a reason that this would be completely impossible or impractical from a number theory or computing point of view?

**Physics Forums - The Fusion of Science and Community**


# Ultra-lossless compression algorithm
