I've been reading up on the mathematics of quantum theory, and it's all pretty interesting. I have a background in information theory, so when I read about the uncertainty principle I had an idea.

Here it goes: suppose the possible observable values for momentum are p1, p2, ..., pn and the possible observable values for position are x1, x2, ..., xn. We can then form a probability vector for observing, say, a specific position. For example, if the probability that the particle is found at x1 is 0.5 and the probability that it is found at x2 is 0.5, then our vector is (0.5, 0.5, 0, 0, ..., 0), with the vector being n-dimensional.

Now, due to the uncertainty principle, zero Shannon entropy (H = -Σᵢ qᵢ log qᵢ, where qᵢ is the probability of the i-th outcome) for, say, the momentum vector should correspond to high entropy for the position vector. So far, everything is obvious. But can we take this further and say that H(observable1) + H(observable2) = k, where k is a constant? If so, what would determine the constant? I'm thinking log(n), but I could be wrong.
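To make the setup concrete, here is a minimal sketch of the entropy calculation I have in mind, using base-2 logarithms and an illustrative n = 8 (both choices are just assumptions for the example, not part of the physics):

```python
import math

def shannon_entropy(q):
    """Shannon entropy H = -sum_i q_i * log2(q_i), with the 0*log 0 = 0 convention."""
    return -sum(qi * math.log2(qi) for qi in q if qi > 0)

# The position vector from the question: probability 0.5 at x1 and x2, zero elsewhere (n = 8).
q_position = [0.5, 0.5, 0, 0, 0, 0, 0, 0]
print(shannon_entropy(q_position))      # 1 bit of uncertainty

# A perfectly localized particle: zero entropy.
print(shannon_entropy([1.0, 0, 0, 0, 0, 0, 0, 0]))

# A uniform distribution over all n outcomes: the maximum, log2(n) = 3 bits.
print(shannon_entropy([1 / 8] * 8))
```

The last case shows why log(n) is the natural ceiling for any single observable's entropy, which is what motivates my guess for the constant.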