Hi Andy, it's nice to hear from you. You seem to think about the things I say without getting your ego involved. You are a very rare bird indeed and I appreciate the opportunity to communicate with you. (Note, I have been having a very bad time with the LaTeX interpreter; I think it has some bugs in it. I have been trying various workarounds.)
saviourmachine said:
Aha, it's that simple!
Aha, that makes some things clear.
That's straightforward.
Yes, that makes sense. Providing the items in reversed order is still in order.
Looking forward,
I always tell people it's simple but they always want to complicate things. The math is not difficult at all. With regard to the issue of mathematics and simplicity, do you have any knowledge of matrix mechanics or matrix multiplication? I am wondering if I will have to teach you the subject as it comes up pretty quickly from where we are at the moment.
Meanwhile, there are three significant steps yet to be undertaken. Again, they are not really difficult but they are rather askew of the typical perspective. The first one has to do with the representation of probability. Probability, when viewed as the output of a mathematical function, constrains that function to have some very specific properties. These constraints come directly from the definition of probability. (Just as an aside, there is an individual out there who has some major difficulties with probability theory and is getting a reception roughly equivalent to the one I manage to generate with authorities. I have a strong suspicion his complaints are very rational.) But that is beside the point as I use none of the sophisticated aspects of probability theory he is referring to.
The first fundamental property of probability is that it cannot be negative, and the second is that the sum (or the integral, if the number of possibilities becomes infinite) over all possibilities cannot exceed unity. If you have been following the details of my approach you should have at least an inkling of the central motivation behind that approach. I have made every effort possible to ensure that my representation imposes no constraints whatsoever on the possibilities which can be represented. I want my conclusions to be absolutely general, without any presumptions as to where and how success (that explanation we are seeking) is to be found. Since we have established that our solution to any problem can be seen as finding the proper algorithm to apply to the set of numbers representing our knowledge, it is in our interest to remove constraints imposed by issues outside the information itself without placing any constraint on the range of algorithms available to our analysis. The fact that probability must be a number between zero and one is just such a constraint. The need to satisfy this superfluous constraint may be removed from consideration via a very simple procedure.
A function can be seen as consisting of two components: the "argument" of the function (the input) and the "value" of the function (the output). Both of these components can be represented by a set of numbers (I think we have already discussed that issue). It follows directly that absolutely any function can be represented by the following shorthand notation.
\vec{G}(\vec{x},t) \equiv \left\{ G_1(x_1, x_2, \cdots, x_n,t),\; G_2(x_1, x_2, \cdots, x_n,t),\; \cdots,\; G_k(x_1, x_2, \cdots, x_n,t) \right\}
(Without this shorthand, the equations which will soon appear would be far too complex to write out in full.) In the interest of obtaining a very specific representation, I will constrain the arguments, x_i, to be taken from the set of real numbers and the results of the algorithm, G_j, to be taken from the set of complex numbers. Note that the common meaning of such an expression, that \vec{G} rotates like a vector in the space of \vec{x}, is specifically not the intended interpretation. Note further that there is no implied relationship between n and k: that is, the number of elements in the two sets is held to be a completely open issue.
Given this totally general representation of an arbitrary functional relationship, we can define (for any specific function) what is called its adjoint function, written \vec{G}^\dagger (\vec{x},t). The adjoint is defined to be exactly the same as the original function except that each and every G_i (the specific complex numbers defining the function) is replaced with its complex conjugate (G_i = (a+ib) goes directly to G_i ^\dagger = (a-ib)). The central issue is of course the fact that G_i ^\dagger * G_i = a^2 + b^2, a non-negative real number. (If b = 0 then the adjoint is identical to the original, which of course means that "self-adjoint" means real; which I suspect everyone here knows.)
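If it helps to see this with actual numbers, here is a small Python sketch of the conjugation step (the three component values are invented purely for illustration):

```python
import numpy as np

# Hypothetical values of a three-component G at one point (x, t).
G = np.array([1 + 2j, 0.5 - 1j, -3 + 0j])

# The adjoint replaces each component with its complex conjugate.
G_dagger = np.conj(G)

# Each product G_i^dagger * G_i = a^2 + b^2 is real and non-negative.
terms = G_dagger * G
assert np.allclose(terms.imag, 0.0)
assert np.all(terms.real >= 0.0)
```

The third component is real, so its conjugate is itself, matching the remark that "self-adjoint" means real.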
Now add to the above the standard definition of a "dot" product of vectors (seen as a definition of a procedure) and the notation \vec{G}^\dagger \cdot \vec{G} results in a sum over a collection of non-negative real numbers, which must itself be non-negative. Lastly, the sum over all possibilities (or the integral, if the number of possibilities is infinite) can be no less than the sum (or integral) over any subset of possibilities. It follows that
1 \geq \frac{ \int \int \cdots \int_{\text{subset}} \vec{G}^\dagger \cdot \vec{G} \, d^n x }{ \int \int \cdots \int \vec{G}^\dagger \cdot \vec{G} \, d^n x } \geq 0
so long as the denominator is summed (or integrated) over all possibilities and the numerator over any subset of them. That also brings up another shorthand notation I would like to use.
\oint f(\vec{x}) dv \equiv \int \int \cdots \int f(\vec{x}) d^n x
Ordinarily \oint would denote a line integral but, since I have no need for line integrals in my work, there should be no confusion. If I knew how to do it, I might very well put a capital "V" in the circle to denote that I want a volume integral over the entire represented abstract volume. Meanwhile, I will just hope that anyone who reads this has the attention span to remember that identification.
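The inequality above can be checked numerically. In this sketch I discretize a single argument x on a grid so that plain sums stand in for the integrals; the two-component G used here is entirely my own invention for illustration:

```python
import numpy as np

# Grid over one argument x; sums times dx stand in for the integrals.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

# An arbitrary two-component complex G, purely illustrative.
G = np.stack([np.exp(-x**2) * (1 + 1j),
              np.cos(x) + 0.3j * x])

# G†·G at each grid point: a sum of non-negative real terms.
density = np.sum(np.conj(G) * G, axis=0).real

full = density.sum() * dx            # "integral" over all possibilities
subset = density[x > 0].sum() * dx   # same integrand over a subset

ratio = subset / full
assert 0.0 <= ratio <= 1.0
```

Whatever subset of the grid is chosen, the ratio lands between zero and one, which is exactly the property a probability needs.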
It follows that, if one defines the function \vec{\Psi} via
\vec{\Psi}(\vec{x},t) \equiv \frac{ \vec{G}(\vec{x},t) }{ \sqrt{ \oint \vec{G}^\dagger \cdot \vec{G} \, dv } }
we can "define" the probability of the B_j to be given by
P(\vec{x},t) = \vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\,dv
where dv \equiv d^n x.
The really important issue here is that \vec{\Psi} is an absolutely unconstrained functional relationship; absolutely any possible function can serve the role of \vec{\Psi}, as it is identical to \vec{G} except for the numerical factor \sqrt{\oint \vec{G}^\dagger \cdot \vec{G}\,dv}. There is to be no constraint on \vec{\Psi} other than the fact that the probability generated by the definition given above be a correct representation of our expectations. If our expectations can be generated at all, \vec{G} must be a member of the set of "all possible algorithms".
Two possible problems might exist. Both involve extreme values of that numerical factor \sqrt{\oint \vec{G}^\dagger \cdot \vec{G}\,dv}. The case where the factor is zero (and division would be undefined) is trivial: in that case, \vec{G} itself will serve the purpose of \vec{\Psi} and the division is unnecessary. The second case, where the factor is infinite, is a little more problematic. In that case, the defined probability becomes zero. This obviously occurs when the number of possibilities becomes infinite and the probability of any specific B becomes zero. That is a very real possibility, as we will soon be dealing with the limit as n approaches infinity; however, in this case also, the division becomes immaterial. Here our interest will be in comparing probabilities of various collections of B's, and the ratios of those probabilities are the important factor (the denominator being the same in all cases, the division is immaterial).
The only factor of interest is that the output obtained from the definition can be interpreted as a probability.
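As a numerical sanity check of the definition (the fifty-element G below is a randomly generated stand-in; with a discrete set of possibilities the integral in the definition of \vec{\Psi} becomes a plain sum):

```python
import numpy as np

# A hypothetical discrete set of fifty possibilities.
rng = np.random.default_rng(0)
G = rng.normal(size=50) + 1j * rng.normal(size=50)

# The normalizing factor: the square root of the sum of G†·G
# over all possibilities (the discrete analogue of the integral).
norm = np.sqrt(np.sum(np.conj(G) * G).real)
Psi = G / norm

# The defined probabilities are non-negative and sum to unity.
P = (np.conj(Psi) * Psi).real
assert np.all(P >= 0.0)
assert np.isclose(P.sum(), 1.0)
```

No matter what G one starts from (the zero and infinite cases discussed above aside), the resulting numbers satisfy both defining properties of a probability.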
The net effect of all this is that, in order to keep the representation totally open, we want to work with \vec{\Psi} instead of working directly with the probabilities defined by \vec{\Psi}. Let me make it clear one more time: the setup I have arranged places utterly no restrictions on the form or character of the method of arriving at expectations. The only constraint being put on the method is that it must yield satisfactory results; an issue not to be discussed until the notation is fully prescribed.
Finally, since we want to work with \vec{\Psi}, we need to re-express the relationships developed earlier in terms of \vec{\Psi} rather than the probability. The relationships already written may be rewritten as
\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\Psi} = i \kappa \vec{\Psi} \quad\text{and}\quad \frac{\partial}{\partial t}\vec{\Psi} = i m \vec{\Psi}
This can be proved quite simply. The complex conjugates of the above expressions are,
\sum_{i=1}^n \frac{\partial}{\partial x_i}\vec{\Psi}^\dagger = -i \kappa \vec{\Psi}^\dagger \quad\text{and}\quad \frac{\partial}{\partial t}\vec{\Psi}^\dagger = -i m \vec{\Psi}^\dagger .
This, together with the product rule of calculus, guarantees that any \vec{\Psi} which satisfies the above relations also satisfies the relation on the probability stated earlier. In the interest of saving space, I will show the result explicitly for the time derivative (the derivatives with respect to the arguments x_i go through in exactly the same way).
\frac{\partial}{\partial t}P(\vec{x},t) = \left( \frac{\partial}{\partial t}\vec{\Psi}^\dagger \right) \cdot \vec{\Psi} + \vec{\Psi}^\dagger \cdot \left( \frac{\partial}{\partial t} \vec{\Psi} \right) = -im\, \vec{\Psi}^\dagger \cdot \vec{\Psi} + im\, \vec{\Psi}^\dagger \cdot \vec{\Psi} = 0.
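The cancellation can also be watched happen numerically. Any \vec{\Psi} obeying the time relation has the form \vec{\Psi}_0 e^{imt}; the value of m and the two components of \vec{\Psi}_0 below are arbitrary stand-ins chosen only to illustrate the point:

```python
import numpy as np

# Hypothetical constant m and initial two-component Psi.
m = 0.7
Psi0 = np.array([0.3 + 0.4j, 0.5 - 0.2j])

def probability(t):
    # dPsi/dt = i*m*Psi  implies  Psi(t) = Psi0 * exp(i*m*t).
    Psi = Psi0 * np.exp(1j * m * t)
    return np.sum(np.conj(Psi) * Psi).real  # Psi†·Psi

# P is the same at every t, exactly as the derivation requires.
assert np.isclose(probability(0.0), probability(1.3))
assert np.isclose(probability(0.0), probability(10.0))
```

The phase factor e^{imt} and its conjugate multiply to one, which is the numerical face of the -im and +im terms cancelling above.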
If you have any questions about anything I have put down, please let me know. If all this makes sense to you, I will establish the final two steps and then state the ultimate conclusion.
I hope I have not run you off – Dick