FallenApple
Is there a relation between these concepts? We know that as we move further along the number line, primes become less common. This makes intuitive sense: as there are more and more numbers, each new number is more likely to be built out of the many primes that came before it, and less likely to be a new prime itself.
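To put a rough number on that thinning (this is just my own illustration, using the well-known heuristic that the density of primes near n is about 1/ln(n)), here is a quick Python sketch comparing the observed proportion of primes up to n with 1/ln(n):

```python
import math

def prime_sieve(limit):
    """Sieve of Eratosthenes: is_prime[k] is True iff k is prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return is_prime

is_prime = prime_sieve(10 ** 6)
for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    pi_n = sum(is_prime[: n + 1])          # number of primes up to n
    print(n, pi_n / n, 1 / math.log(n))    # observed density vs. the 1/ln(n) heuristic
```

Both columns shrink as n grows, which is the "primes become less common" observation in numerical form.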
The curse of dimensionality, though, is more of an issue related to basis vectors. The curse is this: as more and more unrelated (i.e. independent) vectors are added, the dimension of the spanned space goes up while each increment of data (i.e. each column vector) carries the same amount of information, so the same amount of data has to occupy more hypervolume. Hence the data becomes sparser with each basis vector added.
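Here is a minimal sketch of what I mean by sparsity (the point count and radius are arbitrary illustrative values): keep the number of points fixed, increase the dimension, and the fraction of points falling inside a fixed-size neighborhood of the center collapses toward zero.

```python
import random

def fraction_within_radius(dim, n_points=2000, radius=0.25, seed=0):
    """Fraction of uniform points in [0,1]^dim whose Euclidean distance
    to the cube's center is at most `radius`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dist_sq = sum((x - 0.5) ** 2 for x in point)
        if dist_sq <= radius ** 2:
            hits += 1
    return hits / n_points

for dim in (1, 2, 3, 5, 10, 20):
    # the same ball captures a vanishing share of the data as dim grows
    print(dim, fraction_within_radius(dim))
```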
This is analogous because each new number either can be written as a multiplicative combination of earlier primes, or it is a new prime, with some unknown chance. If it is a new prime, that is analogous to adding a dimension (a new building block), which in principle has a nonzero probability of being used to build other numbers further down the line.
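To make the "new building block" picture concrete (my own sketch; the `prime_exponent_vector` helper and the sample inputs are made up for illustration): write each integer as a vector of prime exponents over the primes seen so far; a composite reuses existing coordinates, while a new prime adds a coordinate that later numbers can use.

```python
def prime_exponent_vector(n, primes):
    """Exponent of each prime in `primes` in the factorization of n,
    plus whatever factor is left over."""
    exponents = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exponents.append(e)
    return exponents, n  # leftover > 1 means a factor outside the current "basis"

primes_so_far = [2, 3, 5]
for m in (12, 45, 7, 14):
    vec, leftover = prime_exponent_vector(m, primes_so_far)
    if leftover > 1:
        # for these inputs the leftover is itself prime: a new "dimension" appears
        primes_so_far.append(leftover)
        vec.append(1)
    print(m, vec, primes_so_far)
```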
But I don't know whether we can think of a multiplicative building block in terms of a basis, unless there is some way to encode each prime as a vector of 0s and 1s such that the prime vectors are orthogonal and also add up to build the composite numbers (again in 0s-and-1s vector form). Perhaps there's a way to use a log transformation to map from multiplication to addition? But if we did that, how would we incorporate the randomness (the chance of being prime or not) of the next prime into this picture, since there isn't really a fixed probability of the nth number being prime? Maybe we could use Bayesian statistics to assign some weight to a prime based on past occurrences of primes?
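On the log idea, here is a rough sketch of what I have in mind (the exponent vectors are hard-coded for illustration, and 1/ln(n) appears only as a heuristic stand-in for the "chance of being prime", not a true fixed probability): taking logs turns each number into the dot product of its prime-exponent vector with the vector of prime logarithms, so multiplying numbers becomes adding vectors.

```python
import math

# prime "axes" and the exponent vectors of two numbers over those axes
primes = [2, 3, 5, 7]
log_primes = [math.log(p) for p in primes]

a_exp = [2, 1, 0, 0]   # 12 = 2^2 * 3
b_exp = [0, 1, 1, 0]   # 15 = 3 * 5

def to_log(exponents):
    """log of the encoded number: a plain dot product, so
    multiplication of numbers becomes addition of vectors."""
    return sum(e * lp for e, lp in zip(exponents, log_primes))

product_exp = [ea + eb for ea, eb in zip(a_exp, b_exp)]            # 12 * 15 = 180
print(math.isclose(to_log(a_exp) + to_log(b_exp), to_log(product_exp)))  # True
print(round(math.exp(to_log(product_exp))))                        # 180

# heuristic density of primes near n (from the prime number theorem),
# not an actual probability that a given n is prime
for n in (100, 10_000, 1_000_000):
    print(n, 1 / math.log(n))
```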
In short, are the ideas analogous?