Hi, Dean, welcome to the forums.
Your compression method would only pay off if you could choose chunks of data large enough that the size of the key plus the start index (an end index is not needed for fixed-size chunks) is small compared to the size of the chunk itself.
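Just to make that break-even condition concrete, here is a toy sketch (the sizes are placeholders I made up, not numbers from your scheme):

```python
# Toy break-even check (all sizes in bits; the values below are made-up placeholders).

def pays_off(chunk_bits: int, key_bits: int, index_bits: int) -> bool:
    """True only if storing (key, start index) is cheaper than storing the raw chunk."""
    return key_bits + index_bits < chunk_bits

# Everything hinges on how many bits the start index itself needs:
print(pays_off(chunk_bits=512, key_bits=1, index_bits=64))    # True, if such a short index existed
print(pays_off(chunk_bits=512, key_bits=1, index_bits=1701))  # False, and this is the realistic case (see below)
```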
Put yourself for a moment in the following situation: your data chunk size is, say, 512 bits, and imagine that, instead of using Pi, you use the full collection of the 10^{512} possible strings of 512 decimal digits,
0000 ... 0000
0000 ... 0001
...
9999 ... 9998
9999 ... 9999
where you are guaranteed to find a match to your input data (a 512-bit chunk is just a string of 512 digits that all happen to be 0 or 1).
Here is the catch: given your 512-bit input chunk, the start index would be a number between 0 and 10^{512}-1, which needs about 1700 bits to store, more than the 512-bit chunk itself.
If your table were made of all 2^{512} combinations of 512 binary digits instead of decimal (and your key were either 01 or 10), the size of the index would again be 512 bits. In either case, you can see that the choice of key will not make the index smaller than the data.
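If it helps, the sizes of those two indices can be checked with a couple of lines of integer arithmetic (nothing assumed here beyond the two tables described above):

```python
# Bits needed to store the largest possible start index into each table.
decimal_index_bits = (10 ** 512 - 1).bit_length()  # 1701 bits: more than the 512-bit chunk
binary_index_bits = (2 ** 512 - 1).bit_length()    # exactly 512 bits: same size as the chunk
print(decimal_index_bits, binary_index_bits)
```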
At the bottom of this discussion is the question of how much information the data you want to compress actually contains. Compression algorithms usually exploit some known regularity in the input data (for example, that the data is always ASCII text, or a sound wave, or a picture): the more regular the input data is, the less information it actually contains, so it is possible to convey that information in fewer bits.
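To see that effect concretely, here is a quick experiment with a general-purpose compressor (I'm using Python's zlib only because it is handy; any similar tool shows the same thing):

```python
import os
import zlib

regular = b"abcabcabc" * 1000   # 9000 bytes with an obvious repeating pattern
random_ = os.urandom(9000)      # 9000 bytes with no regularity at all

print(len(zlib.compress(regular)))  # a few dozen bytes
print(len(zlib.compress(random_)))  # slightly larger than the original 9000 bytes
```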
If there is something about your idea that I'm missing, please let me know.